Radware Discloses ZombieAgent Technique to Compromise AI Agents
Radware this week announced it has discovered a zero-click indirect prompt injection (IPI) vulnerability targeting the Deep Research agent developed by OpenAI.
Why Senior Software Engineers Will Matter More (In 2026) in an AI-First World
Radware this week announced it has discovered a zero-click indirect prompt injection (IPI) vulnerability targeting the Deep Research agent developed by OpenAI.Dubbed ZombieAgent, Radware researchers have discovered that it is possible to implant malicious rules directly into the long-term memory or working notes of an AI agent. That technique enables a malicious actor to establish persistence in a way that enables hidden executions of actions every time the agent is used.Pascal Geenens, vice president of threat intelligence for Radware, said cybercriminals can, for example, silently collect sensitive information over time or initiate actions across any set of tools or applications that an AI agent has been given access to without having to re-engage with the target account after the initial compromise.Radware has yet to see this particular type of attack in the wild, but like most prompt injection attacks it’s relatively trivial for cybercriminals to insert via an AI account that they have managed to gain access to via a stolen set of credentials, noted Geenens.The defining characteristic of a ZombieAgent is that all malicious actions occur within OpenAI’s cloud infrastructure, he added. As a result, no endpoint logs record the activity. Nor is there network traffic that passes through, for example, a firewall or gateway, which means no alerts are generated, noted Geenens.The ZombieAgent research builds on previous Radware disclosure of a ShadowLeak vulnerability that showed how compromised AI agents could be used to read emails, interact with corporate systems, initiate workflows, and make decisions autonomously. Radware disclosed the vulnerability to OpenAI, which subsequently put guardrails in place to thwart this type of attack.However, the ZombieAgent technique shows just how easy it is to end run those guardrails, said Geenens. More troubling still, there are no tools to continuously monitor the activities of an AI agent, he added.Of course, adoption of AI agents is already outpacing the ability of cybersecurity teams to put the proper controls and policies in place. It’s now more a question of how often and to what degree these AI agents will be compromised before cybersecurity teams have the tools needed to secure them.The level of risk, unfortunately, is also rising as more AI agents are deployed. In addition to increasing the overall size of the attack surface that needs to be defended, AI agents also provide a mechanism that could enable cybercriminals to compromise an entire business process.Ideally, organizations should carefully consider the cybersecurity implications of deploying AI agents, especially when it comes to monitoring privileges assigned to AI agents and the sensitivity of the data they are allowed to access. Organizations should also review the licensing agreement that providers of these tools ask end users to sign to ensure that no data is being retained for training purposes, noted Geenens.In the meantime, however, cybersecurity teams should be preparing now to respond as quickly as possible to the AI agent security incident that is now all but inevitable.
