OpenAI and Anthropic reached an agreement to subject their models to safety testing before releasing them to the public.
NIST has taken further action as well, establishing an AI safety advisory panel earlier this year, composed of AI developers, users, and scholars, to impose safety measures on the development and deployment of AI.
The panel, known as the Artificial Intelligence Safety Institute Consortium (AISIC), has been tasked with formulating protocols for red-teaming AI systems, assessing AI capabilities, managing risks, ensuring safety and security, and watermarking AI-generated content. Numerous prominent tech companies, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, have joined the consortium to help ensure the safe advancement of AI.
