AI Beat: The Top AI Trends of 2024, Revisited
Putting people first: New steps in the EU
In March, the EU approved the Artificial Intelligence Act, a landmark law aimed at ensuring safety and fundamental human rights while fostering AI innovation. The Act bans outright certain applications that threaten human rights, such as biometric systems that categorize or "tag" people, untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, social scoring, predictive policing, and AI that manipulates human behavior.
In December, the EU followed with the Cyber Resilience Act, which requires makers of digital products, software developers, importers, distributors, and retailers to build in cybersecurity capabilities such as incident management, data protection, and support for updates and patches. Developers must remediate vulnerabilities promptly as they are discovered; non-compliance can bring substantial fines and sanctions.
Also in December, the EU updated its Product Liability Directive (PLD) to cover software, unlike legal systems such as the U.S. that do not treat software as a "product." This makes software companies, including makers of AI models, liable for damages when a defect in their product causes harm.
Born in the USA: Oversight of AI on American soil
Federal activity in the U.S. picked up late in the year, with the White House releasing its first National Security Memorandum on AI in October. The memorandum called for concrete, impactful steps to:
- Maintain U.S. dominance in the advancement of reliable, trustworthy AI
- Enhance U.S. national security through AI
- Spearhead international agreements on the use and governance of AI
In November, the National Institute of Standards and Technology (NIST) established a task force called Testing Risks of AI for National Security (TRAINS) to address the national security and public safety implications of AI. TRAINS comprises representatives from the Departments of Defense, Energy, and Homeland Security, as well as the National Institutes of Health. It will coordinate the evaluation and testing of AI models in areas of national security concern, including radiological, nuclear, chemical, and biological security, cybersecurity, and beyond.
Also in November, the Departments of Commerce and State jointly convened the first meeting of the International Network of AI Safety Institutes, which focuses on the risks of synthetic content, testing of foundation models, and evaluation of risks from advanced AI systems.
Across the equator: AI regulation in Latin America
Most Latin American nations have taken steps to address AI risks while embracing its opportunities. According to White & Case, Brazil and Chile have among the most comprehensive proposals, while others such as Argentina and Mexico have approached the matter more generally. Some focus on mitigating risk through restrictions or regulatory guardrails; others see opportunity in a lighter-touch approach that encourages innovation and international investment.
Know your enemy: AI and cyber risk
To govern AI, one must first understand its actual risks. In 2024, OWASP, MIT, and other organizations took up the task of identifying and cataloguing AI vulnerabilities.
OWASP’s LLM hit-list
The Open Worldwide Application Security Project (OWASP) released its 2025 Top 10 Risk List for LLMs. Familiar risks reappear on the list, including prompt injection, supply chain vulnerabilities, and improper output handling. New entries include vector and embedding weaknesses, misinformation, and unbounded consumption (an expansion of the earlier denial-of-service category).
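As a minimal sketch of why improper output handling matters, consider a hypothetical web app that echoes model replies into an HTML page. If a prompt-injected reply contains markup, rendering it raw invites script injection; escaping the reply first neutralizes it (function and variable names here are illustrative, not from any specific framework):

```python
import html

def render_model_reply(reply: str) -> str:
    """Escape an LLM reply before embedding it in an HTML page.

    Rendering raw model output is a textbook case of improper output
    handling: a prompt-injected reply containing <script> tags would
    execute in the victim's browser.
    """
    return html.escape(reply)

# A reply poisoned via prompt injection:
poisoned = '<script>steal(document.cookie)</script>'
safe = render_model_reply(poisoned)
print(safe)  # the markup is rendered as inert text, not executed
```

The same principle applies to any downstream sink: shell commands, SQL queries, or file paths built from model output should be parameterized or validated, never interpolated verbatim.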
OWASP expanded its treatment of "excessive agency," largely due to the rise of semi-autonomous agentic architectures. According to OWASP, "With LLMs operating as agents or in plug-in configurations, unchecked permissions can result in unintended or risky actions, making this entry more critical than ever."
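A common mitigation for excessive agency is least privilege: the agent may only invoke tools the deployment has explicitly granted. The sketch below assumes a hypothetical tool-dispatch layer (all names are illustrative, not from any specific agent framework):

```python
# Hypothetical least-privilege tool registry for an LLM agent.
# Only read-only actions are granted to this agent.
ALLOWED_TOOLS = {"search_docs", "summarize"}

def dispatch_tool(name: str, args: dict) -> str:
    """Refuse any tool call outside the explicit allowlist.

    Unchecked permissions are what OWASP calls 'excessive agency':
    the model should never be able to trigger destructive actions
    (delete_file, send_email, ...) it was never granted.
    """
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not permitted for this agent")
    # ... invoke the real tool implementation here ...
    return f"ran {name}"

print(dispatch_tool("search_docs", {"query": "EU AI Act"}))
```

A model-generated call to an ungranted tool (say, `delete_file`) then fails with `PermissionError` at the dispatch layer, regardless of what the model was tricked into requesting.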
