How the new AI executive order stacks up: B-

The White House’s new executive order, “Safe, Secure, and Trustworthy Artificial Intelligence,” is poised to usher in a new era of national AI regulation, focusing on safety and responsibility across the sector. But will it? 

The executive order represents the U.S. government’s opening salvo in creating a comprehensive regulatory framework for AI, applicable to both the federal government and the private sector. While it addresses a broad spectrum of objectives for the AI ecosystem and builds upon previous AI-related directives, it is not without its challenges, notably a lack of accountability and specific timelines, alongside potentially overreaching reporting requirements.

Instead of setting up a few guardrails and letting the AI industry guide itself, the executive order holds fast to an outdated approach to regulation in which the government will somehow steer AI’s future on its own. As other recent technology waves have taught us, developments will simply come too fast for such an approach and will be driven by the speed of private industry. Here are my thoughts on its potential implications and effectiveness.

AI regulations

The executive order calls for creating new safety and security standards for AI, most notably by requiring the largest model developers to share safety test results with the federal government. However, and very importantly, the reporting requirements for the many companies and builders that fine-tune these regulated large models for their particular use cases remain unclear.

AI must be regulated. It is a very powerful technology, and while it is not inherently good or bad, its sheer power demands that guardrails be put in place. While the executive order takes a focused approach to applying these standards to the largest model developers, reporting requirements should continue to mirror the progressive structure of other regulated industries, so that the largest underlying infrastructure providers, whose systems affect every American, carry the regulatory burden. In contrast, U.S. regulators must take a light touch with startups to maintain the country’s leadership position in innovation.

AI security

While it is refreshing to see the specificity in a handful of elements, such as the Department of Commerce’s development of guidance for content authentication and watermarking to label AI-generated content clearly, many security goals remain open to interpretation. 
