White House to issue AI rules for federal employees

After earlier efforts to rein in generative artificial intelligence (genAI) were criticized as too vague and ineffective, the Biden administration is now expected to announce new, more restrictive rules governing federal employees' use of the technology.

The executive order, expected to be unveiled Monday, would also change immigration standards to allow a greater influx of technology workers to help accelerate US development efforts.

On Tuesday night, the White House sent invitations for a “Safe, Secure, and Trustworthy Artificial Intelligence” event Monday hosted by President Joseph R. Biden Jr., according to The Washington Post.

Generative AI, which has been advancing at breakneck speed and setting off alarm bells among industry experts, spurred Biden to issue “guidance” last May. Vice President Kamala Harris also met with the CEOs of Google, Microsoft, and OpenAI — the creator of the popular ChatGPT chatbot — to discuss potential issues with genAI, which include security, privacy, and control problems.

Even before the launch of ChatGPT in November 2022, the administration had unveiled a blueprint for a so-called “AI Bill of Rights” as well as an AI Risk Management Framework; it also pushed a roadmap for standing up a National AI Research Resource.

The new executive order is expected to elevate national cybersecurity defenses by requiring large language models (LLMs) — the foundation of generative AI — to undergo assessments before they can be used by US government agencies. Those agencies include the US Defense Department, Energy Department, and intelligence agencies, according to the Post.

The new rules would bolster what had been a voluntary commitment by 15 AI development companies to evaluate their genAI systems in a manner consistent with responsible use.

“I’m afraid we don’t have a very good track record there; I mean, see Facebook for details,” Tom Siebel, CEO of enterprise AI application vendor C3 AI and founder of Siebel Systems, told an audience at MIT’s EmTech Conference last May. “I’d like to believe self-regulation would work, but power corrupts, and absolute power corrupts absolutely.”


While genAI offers extensive benefits through its ability to automate tasks and create sophisticated text responses, images, video, and even software code, the technology has also been known to go rogue, producing confident but false output — anomalies known as hallucinations.

“Hallucinations happen because LLMs, in their most vanilla form, don’t have an internal state representation of the world,” said Jonathan Siddharth, CEO of Turing, a Palo Alto, Calif., company that uses AI to find, hire, and onboard software engineers remotely. “There’s no concept of fact. They’re predicting the next word based on what they’ve seen so far — it’s a statistical estimate.”
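To illustrate Siddharth's point, here is a minimal, purely illustrative Python sketch of next-word prediction: a toy bigram model that chooses continuations by frequency counts alone. The training corpus is invented for this example, and real LLMs use neural networks over tokens rather than raw word counts, but the sketch shows why "there's no concept of fact" — the model only knows which words tend to follow which.

```python
from collections import Counter, defaultdict

# Toy training corpus -- invented for illustration only.
corpus = (
    "the president signed the order "
    "the president issued the order "
    "the president vetoed the bill"
).split()

# Count which word follows which (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 5) -> str:
    """Greedily emit the statistically most likely next word.

    There is no notion of truth here -- only frequency counts --
    which is why the output can be fluent yet ungrounded.
    """
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints "the president signed the president signed" -- each step
# is locally plausible, but nothing anchors the whole to a fact.
print(generate("the"))
```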

GenAI can also unexpectedly expose sensitive or personally identifiable data. At its most basic level, the tools gather and analyze massive quantities of data from the internet, corporations, and even government sources in order to tailor content to users more accurately. The drawback is that the information gathered by AI isn't necessarily stored securely; AI applications and networks can leave that sensitive information vulnerable to exploitation by third parties.

Smartphones and self-driving cars, for example, track users’ locations and driving habits. While that tracking software is meant to help the technology better understand habits to more efficiently serve users, it also gathers personal information as part of big data sets used for training AI models.

For companies developing AI, the executive order might necessitate an overhaul of how they approach their practices, according to Adnan Masood, chief AI architect at digital transformation services company UST. The new rules may also drive up operational costs initially.

“However, aligning with national standards could also streamline federal procurement processes for their products and foster trust among private consumers,” Masood said. “Ultimately, while regulation is necessary to mitigate AI’s risks, it must be delicately balanced with maintaining an environment conducive to innovation.

“If we tip the scales too far towards restrictive oversight, particularly in research, development, and open-source initiatives, we risk stifling innovation and conceding ground to more lenient jurisdictions globally,” Masood continued. “The key lies in making regulations that safeguard public and national interests while still fueling the engines of creativity and advancement in the AI sector.”

Masood said the upcoming regulations from the White House have been “a long time coming, and it’s a good step [at] a critical juncture in the US government’s approach to harnessing and containing AI technology.

“I hold reservations about extending regulatory reach into the realms of research and development,” Masood said. “The nature of AI research requires a level of openness and collective scrutiny that can be stifled by excessive regulation. Particularly, I oppose any constraints that could hamper open-source AI initiatives, which have been a driving force behind most innovations in the field. These collaborative platforms allow for rapid identification and remediation of flaws in AI models, fortifying their reliability and security.”

GenAI is also vulnerable to baked-in biases, such as AI-assisted hiring applications that tend to favor men over women, or white candidates over minorities. And as genAI tools get better at mimicking natural language, images, and video, it will soon be impossible to distinguish fake results from real ones; that's prompting companies to set up “guardrails” against the worst outcomes, whether accidental or the intentional work of bad actors.

US efforts to rein in AI follow similar moves by European countries to ensure the technology isn't generating content that violates EU laws; that could include child pornography or, in some EU countries, denial of the Holocaust. Italy went so far as to temporarily ban ChatGPT over privacy concerns after the natural language processing app experienced a data breach involving user conversations and payment information.

The European Union’s proposed “Artificial Intelligence Act” (AI Act), first put forward by the European Commission in April 2021, was the first of its kind from a Western bloc of nations. The legislation builds heavily on existing rules such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act.

States and municipalities are eyeing restrictions of their own on the use of AI-based bots to find, screen, interview, and hire job candidates because of privacy and bias issues. Some states have already put laws on the books.

The White House is also expected to lean on the National Institute of Standards and Technology to tighten industry guidelines on testing and evaluating AI systems — provisions that would build on the voluntary commitments on safety, security, and trust that the Biden administration extracted from 15 major tech companies this year.

Biden’s move is especially critical as genAI experiences an ongoing boom, leading to unprecedented capabilities in creating content, deepfakes, and potentially new forms of cyber threats, Masood said.

“This landscape makes it evident that the government’s role isn’t just [that of] a regulator, but [also of] a facilitator and consumer of AI technology,” he added. “By mandating federal assessments of AI and emphasizing its role in cybersecurity, the US government acknowledges the dual nature of AI as both a strategic asset and a potential risk.”

Masood said he’s a staunch advocate for a nuanced approach to AI regulation, as overseeing the deployment of AI products is essential to ensure they meet safety and ethical standards.

“For instance, advanced AI models used in healthcare or autonomous vehicles must undergo rigorous testing and compliance checks to protect public well-being,” he said.
