AppOmni Surfaces BodySnatcher AI Agent Security Flaw Affecting ServiceNow Apps
AppOmni, a provider of a platform for securing software-as-a-service (SaaS) applications, this week disclosed it has discovered a flaw in the ServiceNow platform that could be used to create a malicious artificial intelligence (AI) agent.
Dubbed BodySnatcher (CVE-2025-12420), the flaw made it possible for an unauthenticated intruder to impersonate any ServiceNow user using only an email address, bypassing the multifactor authentication (MFA) and single sign-on (SSO) frameworks that ServiceNow has adopted.
Once that access was gained, AppOmni researchers found they could create an AI agent with escalated privileges that enabled it to reach external environments via the Virtual Agent application programming interface (API) that ServiceNow developed.
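The underlying pattern is worth illustrating. The sketch below is a hypothetical Python illustration of the flaw class, assuming an identity-resolution step that trusts a caller-supplied email address instead of a verified session; it is not ServiceNow's actual code, and every name in it is invented for the example.

```python
# Hypothetical illustration of the flaw class (not ServiceNow's actual code):
# the insecure path derives identity from an unauthenticated, client-supplied
# email, so any caller who knows an email address can "become" that user.

USERS = {"admin@example.com": {"role": "admin"}, "staff@example.com": {"role": "staff"}}

def resolve_identity_insecure(request_headers: dict) -> dict:
    # BUG: trusts the caller-supplied email with no MFA/SSO-backed session check
    email = request_headers.get("X-User-Email", "")
    return USERS.get(email, {"role": "guest"})

def resolve_identity_hardened(request_headers: dict, verified_sessions: dict) -> dict:
    # Safer pattern: identity comes only from a server-side verified session token
    token = request_headers.get("Authorization", "")
    session = verified_sessions.get(token)
    if session is None:
        raise PermissionError("no authenticated session")
    return USERS[session["email"]]

if __name__ == "__main__":
    # An attacker supplying only an email address gets admin in the insecure path
    print(resolve_identity_insecure({"X-User-Email": "admin@example.com"}))
```

The point of the contrast is that authentication evidence must be checked server-side on every request; any identity claim that originates with the caller is attacker-controlled.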
Since that discovery, ServiceNow has created a patch for customers that remediates the issue, and there are no known instances of the exploit being used.
Aaron Costello, chief of security research for AppOmni, said that as providers of SaaS applications deploy AI agents, the BodySnatcher exploit should serve as an object lesson in the potential risks. It’s still relatively trivial for cybercriminals to gain access to SaaS applications using stolen credentials or by bypassing MFA, and once access is gained, they can compromise an AI agent to potentially take over an entire workflow, he noted.
The issue that organizations will ultimately need to come to terms with is that the level of risk associated with deploying AI agents is significantly higher than that of previous generations of emerging technologies.
Unfortunately, the pace at which AI agents are being adopted already exceeds the ability of many cybersecurity teams to keep up, added Costello. As such, it’s likely only a matter of time before a major cybersecurity incident involving AI agents is discovered and disclosed, he said.
Cybersecurity teams, meanwhile, would be well-advised to review the guardrails that SaaS application providers are putting in place to secure AI agents. Many of those efforts provide only a minimum level of security that can be easily circumvented, noted Costello.
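As one concrete point of reference when reviewing such guardrails, the sketch below shows a deny-by-default authorization check enforced server-side before an AI agent action executes. The roles, actions, and function names are hypothetical, not any vendor's actual API; the assumption is simply that agent permissions are checked in the service rather than in the model's prompt.

```python
# Hypothetical sketch of a stricter guardrail: deny-by-default tool access for
# an AI agent, enforced in the service itself rather than via prompt instructions,
# which are easy to circumvent.

ALLOWED_ACTIONS = {
    "support_agent": {"read_ticket", "post_comment"},  # deliberately narrow grant
}

def authorize_agent_action(agent_role: str, action: str, target_scope: str) -> bool:
    # Anything not explicitly granted is denied, and actions are confined to
    # tenant-local scope so a compromised agent cannot reach external targets.
    allowed = ALLOWED_ACTIONS.get(agent_role, set())
    return action in allowed and target_scope == "tenant-local"

assert authorize_agent_action("support_agent", "read_ticket", "tenant-local")
assert not authorize_agent_action("support_agent", "create_admin_user", "tenant-local")
assert not authorize_agent_action("support_agent", "read_ticket", "external")
```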
It is, of course, challenging these days for any cybersecurity team to prevent a technology from being adopted, but they nevertheless need to find a way to at least make employees aware of the potential hazards. Cybersecurity professionals are generally reluctant to come across as “party poopers” as AI agents gain momentum, but there needs to be more focus on end-user education, noted Costello.
At the same time, cybersecurity teams should be preparing now to respond to a breach involving AI agents, which has the potential to expand rapidly, especially if the AI agent involved has access to massive amounts of sensitive data. The potential blast radius of a breach involving an AI agent is huge, said Costello.
The degree to which providers of AI agents and platforms are aware of these issues is less clear. However, the more time cybersecurity researchers spend reviewing the guardrails currently in place, the greater the appreciation of the actual state of AI security will be. The hope is to find a way to resolve these issues before cybercriminals are able to exploit them.
