AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI

Deceiving encounters at the cash register
Fraud remains prominent among the risk patterns we are monitoring for AI Pulse. Even without technologies like autonomous AI, scammers are steadily enhancing their capabilities. It is hard to pinpoint where AI is being used without directly observing an attacker's actions, but it is clearly playing a part: phishing and business email compromise (BEC) attacks have grown in both frequency and sophistication, suggesting some form of enhanced capability. Trend Micro's sensor network is detecting artifacts that appear to have been produced by generative AI, and counterfeit domains and websites also seem to be leveraging the linguistic and multimodal content-creation abilities of LLMs.

Just a few weeks ago, ConsumerAffairs published an article, complete with a "spot the fake Amazon page" quiz, on how many shoppers are falling for fraudulent transactions on legitimate-looking websites. The article highlights inexpensive "phish kits" that criminals can use to spin up ready-made scam sites quickly. It also cites a Memcyco study that found four fraudulent Amazon sites in a single scan, and notes that Amazon spent over $1.2 billion in 2023 and dedicated a team of 15,000 people to fighting fraud.

Over 10,000 athletes. Representing 200+ countries. Hit by over 140 cyberattacks.
As forecast in our June edition of AI Pulse, the recent Olympic Games in Paris saw a wave of cyberattacks, more than 140 in total. According to ANSSI, the French cybersecurity agency, the Games themselves were unaffected. Government entities and infrastructure tied to sports, transportation, and telecommunications were the primary targets; a third of the attacks caused disruptions, and half of those were denial-of-service incidents.

One significant incident was a ransomware attack on the Grand Palais, which hosted some Olympic events, and on numerous French museums. The agency said, however, that Olympic-related systems were not affected.

Immersed in a world of AI garbage and filth
We covered deepfakes extensively in the inaugural edition of AI Pulse, but they are not the only type of potentially harmful AI-generated material. In mid-August, The Washington Post published a piece on the growing number of images created on X with its AI image generator, Grok. While Grok is not inherently malicious, the article raised concerns about its lack of guardrails, citing instances of Nazi content.

AI is also responsible for the rising tide of annoying content colloquially known as "slop": material crafted to look human-made that blurs the line between legitimate, valuable information and misleading, time-wasting junk. More disturbing is outright disinformation, which a spring 2024 study found AI chatbots repeat frequently and effortlessly. According to NewsGuard, 10 leading chatbots repeated false Russian narratives about a third of the time when tested, raising questions about the credibility of sources for people seeking accurate information in a critical election year.

What lies ahead with autonomous AI

Enough with another #$%@&*!! chatbot!
Claims that GenAI is falling short of its potential are shortsighted at best, resting on the misconception that LLMs are the ultimate form of artificial intelligence. They are not. If interest has plateaued, as recent results from major players suggest, it simply means the market does not need yet another chatbot. What is really in demand is what comes next: adaptable problem-solving capability.

Achieving that capability takes more than scaling up LLMs. It requires a holistic solution-engineering approach and the construction of compound or composite systems.

Segmenting and conquering
Composite systems, as the name implies, combine multiple components that work together to carry out tasks more complex than any single statistical model could handle on its own. The hierarchical form of autonomous AI embodies this concept by engaging and coordinating multiple agents, each performing a distinct function toward a shared goal.

According to the Berkeley Artificial Intelligence Research (BAIR) group, composite systems offer greater flexibility and dynamism than singular LLMs, and better alignment with user-specific performance (and cost) benchmarks. They are also arguably more manageable and trustworthy: instead of relying on a single LLM as the sole "source of truth", a composite system's outputs can be filtered and verified by other components.

The ultimate manifestation of a composite system would be an AI mesh comprising agents that interact internally and with other agents across organizational boundaries.
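The generate-then-verify pattern behind such composite systems can be sketched in a few lines. The sketch below is purely illustrative, under the assumption that each "agent" is a plain function: the stubs stand in for real model calls, with a generator drafting answers, an independent verifier re-checking them, and an orchestrator retrying until a draft passes.

```python
# Minimal sketch of a composite (multi-agent) system: rather than trusting
# a single model as the sole "source of truth", a verifier component
# filters the generator's output. The agent functions are hypothetical
# stubs; a real system would swap in actual model API calls.
from dataclasses import dataclass


@dataclass
class Result:
    answer: str
    verified: bool


def generator_agent(task: str, attempt: int) -> str:
    # Stand-in for an LLM call that drafts an answer.
    # The first draft is deliberately wrong to exercise the retry loop.
    drafts = ["2 + 2 = 5", "2 + 2 = 4"]
    return drafts[min(attempt, len(drafts) - 1)]


def verifier_agent(answer: str) -> bool:
    # Independent checker: re-derives the arithmetic instead of
    # trusting the generator's claim.
    lhs, rhs = answer.split("=")
    a, _, b = lhs.split()
    return int(a) + int(b) == int(rhs)


def orchestrate(task: str, max_attempts: int = 3) -> Result:
    # Coordinator loop: generate, verify, retry until a draft passes.
    candidate = ""
    for attempt in range(max_attempts):
        candidate = generator_agent(task, attempt)
        if verifier_agent(candidate):
            return Result(candidate, True)
    return Result(candidate, False)


print(orchestrate("What is 2 + 2?").answer)  # -> 2 + 2 = 4
```

The design point is the one BAIR makes: the verifier gives the system a second, independent check on every output, which is what makes the composite arrangement more manageable and trustworthy than a lone model.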

Shifting the cloud dynamic
InfoWorld notes that forms of autonomous AI are already at work in mobile personal assistants, automotive systems, and home environmental controls. As organizations adopt these technologies, many are rethinking their infrastructure strategy, blending on-premises, on-device, and cloud-based AI for greater flexibility and performance. Expect agents to show up everywhere, from wearables and laptops to data centers. Carving out secure, trusted domains within this interconnected fabric will demand care as the AI mesh grows.

Taming the wild beast
Moving to a composite framework in which LLMs and agents communicate and collaborate will produce better AI outcomes. But securing this kind of AI calls for a "bigger boat" or, to borrow AI expert Yoshua Bengio's phrase, a "better cage for the AI bear". At its core, this is an alignment problem: is the AI system achieving its intended objectives, or pursuing unwanted outcomes? (Compounded by the questions of whose objectives, and which outcomes are most desirable?)

Currently, there seems to be a greater emphasis on advancing AI reasoning capabilities than on integrating security measures into AI. This narrative must evolve, or we risk winding up with rogue AI models seeking to fulfill objectives we did not authorize—and that are not in our best interest.

Additional insight on autonomous AI from Trend Micro

