REFERENCE: o1 tally published on the OpenAI website on September 12, 2024. Available at: https://openai.com/index/openai-o1-system-card/.
AI’s progression towards conflict
Majority of AI guidelines revolve around averting AI systems from causing harm. In instances of conflict, the calculation becomes more complex: how to guarantee AI-driven weaponry causes only the appropriate type of impact. A fresh New York Times commentary contended that the world isn’t prepared for the consequences of AI-infused weapon systems, illustrating how Ukrainian troops had to relinquish tanks because of kamikaze drone assaults—a sign of “the conclusion of a period of piloted mechanized warfare as we recognize it.” (Presently, the Ukrainians send unmanned tanks.)
These concerns were at the forefront of thoughts for military and diplomatic figures that participated in the second REAIM Summit this September in South Korea. (REAIM signifies Responsible AI in the Military Domain.) The summit produced a Blueprint for Action delineating 20 principles for military AI usage, including the assertion that “Humans remain accountable and answerable for [AI] utilization and [the] consequences of Al applications in the military domain, and liability and answerability can never be transferred to machines.”
Not every nation supported the blueprint, prompting a provocative headline in the Times of India: “China refuses to sign agreement to prohibit AI from controlling nuclear weapons”. The reality is more nuanced, but REAIM highlights the crucial necessity of global powers harmonizing on how AI weapons will be utilized.
Collaborating for AI safety
The OASIS Open standards organization initiated the Coalition for Secure AI (CoSAI) this previous summer as a venue for technology sector members to collaborate on enhancing AI safety. Specific objectives comprise ensuring trust in AI and promoting responsible development by engineering systems that are inherently secure.
Other organizations are also bringing attention to best methodologies that businesses and AI consumers can depend on for AI safety regardless of the existence of legislation. A notable illustration is the Top 10 Checklist issued by the Open Worldwide Application Security Project (OWASP) earlier this year, which highlights principal risks associated with extensive language models (LLMs) and ways to mitigate them.
A considerable concern for many observers currently is the misleading manipulation of AI in elections, particularly with the U.S. Presidential campaigns rapidly approaching the conclusion. Back in March, almost two dozen corporations signed a pact to combat the deceitful use of AI in 2024 elections including Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Truepic, X, and Trend Micro—another demonstration of the significance of collective action on AI safety.
[AI Menace Trends]
AI Menace Trends
DOJ apprehends Russian Lookalike domains
On September 4, the U.S. Department of Justice disclosed its takeover of 32 internet domains used to “secretly disseminate Russian government propaganda with the goal of diminishing international backing for Ukraine, boosting pro-Russian regulations and interests, and influencing voters in U.S. and foreign elections….” These operations were all part of an influence scheme labeled ‘Lookalike’ that infringed U.S. fund laundering and legal trademark statutes.
U.S. authorities are vigilant against misinformation, manipulation, and the deceptive exploitation of AI to skew the results of the upcoming November Presidential election. As per Fox News, U.S. Attorney General Merrick Garland is also targeting Russia’s state-run Russia Today (RT) media outlet, which Meta stated it was prohibiting from Facebook and Instagram on September 17 due to purported foreign interference.
“Let me refer to my risk encyclopedia…”
This August, MIT introduced a public AI Risk Repository to chart and categorize the continuously expanding AI risk landscape in an accessible and organized manner. The present version details over 700 risks based on more than 40 different frameworks and encompasses citations alongside two risk taxonomies: one causal (indicating when, how, and why risks transpire) and the other grounded on seven primary realms encompassing privacy and security, nefarious actors, misinformation, and more.
MIT mentions that the repository will be frequently updated to support research, curricula, audits, and policy formulation and offer all stakeholders a “universal frame of reference” for conversing about AI-linked risks.
Grok AI delves into X user data for clever ‘anti-woke’ results
X’s Grok AI was conceived to be an AI search aide with fewer limitations and fewer ‘woke’ sensitivities than other conversational agents. While distinctly ironic, it has proven to be more open-minded than some had anticipated—and contentious for an entirely different rationale. This summer, it emerged that X was automatically enrolling users to have their data educate Grok. This stirred dissatisfaction among European regulators and censure from individuals like NordVPN CTO Marijus Briedis, who informed WIRED that the action carries “significant privacy repercussions,” including “[the] ability to access and scrutinize potentially confidential or sensitive info… [and the] capability to generate imagery and content with minimal supervision.”
[AI Forecasts]
What’s on the Horizon for AI Model Development
AI is approaching a substantial data scarcity
