We’re Moving Too Fast: Why AI’s Race to Market Is a Security Disaster
The recently disclosed ServiceNow vulnerability should terrify every CISO in America. CVE-2025-12420, dubbed “BodySnatcher,” represents everything wrong with how we’re deploying AI in the enterprise today.
AppGuard Critiques AI Hyped Defenses; Expands its Insider Release for its Next-Generation Platform
The recently disclosed ServiceNow vulnerability should terrify every CISO in America. CVE-2025-12420, dubbed “BodySnatcher,” represents everything wrong with how we’re deploying AI in the enterprise today. An unauthenticated attacker—someone who has never logged into your system, sitting anywhere in the world—can impersonate your administrators using nothing more than an email address. They bypass your multi-factor authentication, sidestep your single sign-on, and weaponize your own AI agents to create backdoor accounts with full privileges.With a CVSS score of 9.3 out of 10, this vulnerability affected ServiceNow’s Now Assist AI platform—software used by nearly half of Fortune 100 companies. The attack vector? A hardcoded, platform-wide secret combined with account-linking logic that trusted nothing more than an email address. An attacker sitting halfway across the globe with zero credentials could impersonate an administrator and execute AI agents to override security controls.The AI Gold Rush Is Breaking SecurityCompanies are shipping AI features because AI is the current marketing imperative, not because customers are demanding these capabilities or because the security implications have been properly considered. Executives see competitors announcing AI initiatives and panic. Product managers get mandates to “add AI” to roadmaps. Developers face crushing pressure to deploy agentic systems on impossibly short timelines.The result is predictable. Security becomes an afterthought. Authentication mechanisms get implemented hastily. Authorization boundaries blur. And suddenly, your enterprise AI agent—designed to “simplify workflows”—becomes a remote-controlled attack vector that bypasses every security control you’ve spent years building.Default configurations in AI platforms routinely enable second-order prompt injection attacks, where attackers embed malicious instructions in data fields that higher-privileged AI agents later process. Agent discovery features create attack vectors when agents are improperly configured. These are fundamental design failures born from rushing AI to market.The uncomfortable truth is that most organizations deploying AI don’t understand what they’re deploying. Executives don’t grasp that agentic AI systems require fundamentally different security models. Developers don’t realize that the “helpful” AI agent they just connected to their database is now a privileged execution path that attackers can hijack. Security teams discover these AI deployments after the fact, when it’s too late to architect proper controls.The Agentic AI Nightmare and MCP’s Authentication GapAgentic AI systems represent a quantum leap in attack surface. Unlike traditional APIs, AI agents make decisions, chain together complex workflows, and interact with multiple systems autonomously. When you give an AI agent the ability to “help users manage their accounts,” you’ve created an autonomous system with privileges that attackers can manipulate through prompt injection or broken authentication.The risks multiply with protocols like Model Context Protocol (MCP), which allows AI systems to connect to various data sources and tools. MCP servers can expose file systems, databases, and internal tools to AI agents—often without requiring authentication or authorization at the MCP layer itself. The assumption is that authentication happens elsewhere in the stack. But assumptions kill security programs.When an attacker hijacks an AI agent’s identity, they gain access to everything connected through MCP—customer databases, internal APIs, file systems—all without directly authenticating to those systems. The AppOmni researchers demonstrated exactly this attack pattern with ServiceNow, showing how low-privileged users could embed malicious instructions that higher-privileged AI agents would faithfully execute.What We Must Do NowOrganizations need to take immediate action:Halt the AI feature factory. Every AI initiative needs a security review before deployment, not after. Deploying broken AI features damages your company more than not having AI features at all.Implement zero-trust architecture for AI agents. AI agents should never inherit ambient privileges—the privileges of whatever user or system happens to be hosting them. Every operation requires explicit authentication and authorization. MCP servers need authentication mechanisms. AI agents need scoped credentials that follow least privilege. Audit every action an AI agent takes.Eliminate dangerous default configurations. Stop shipping products with permissive defaults. Agent discovery should be disabled by default. MCP servers should require authentication by default. Make security the default and let users consciously choose to relax controls if they understand the risks.Build defense in depth. Assume attackers will compromise your AI agent. Use separate service accounts with limited scopes. Monitor agent behavior for anomalies. Implement rate limiting and output validation. Create kill switches. Build these controls into your architecture from day one.Establish lifecycle management. Organizations are deploying AI agents and forgetting about them. Unused agents with elevated privileges become dormant attack vectors. De-provision stagnant agents. Audit which agents exist, what privileges they have, and whether they’re needed.The Bottom LineThe BodySnatcher vulnerability reveals how fundamentally broken our approach to AI security has become. We’re deploying autonomous agents with elevated privileges, connecting them to sensitive data through protocols that assume authentication happens elsewhere, and rushing everything to market because competitors are doing the same thing.Security cannot be retrofitted. You cannot ship broken authentication and patch it later. You cannot deploy AI agents with unbounded privileges and hope nobody notices. You cannot assume protocols like MCP will be secure simply because they’re new.The choice is stark: either we slow down the AI deployment frenzy and build these systems correctly, or we continue discovering 9.3 severity vulnerabilities that let unauthenticated attackers remote-control our enterprises. The AI gold rush is breaking security. It’s time to choose security over time-to-market before the next BodySnatcher emerges—and the attackers who exploit it won’t responsibly disclose it first.
