
OpenAI, Anthropic agree to get their models tested for safety before making them public

NIST has taken further action as well, including establishing an AI safety advisory panel earlier this year, composed of AI developers, users, and academics, to promote safety measures in the development and deployment of AI.

The panel, known as the Artificial Intelligence Safety Institute Consortium (AISIC), is responsible for developing guidelines for testing AI systems, evaluating AI capabilities, managing risks, ensuring safety and security, and watermarking AI-generated content. Numerous prominent tech companies, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, have joined the consortium to support the secure advancement of AI.
