Your Most Dangerous User Is Not Human: How AI Agents and MCP Servers Broke the Internal API Walled Garden



Highlights

The Perimeter is Porous: Modern Agentic AI and the Model Context Protocol (MCP) have effectively turned internal data centers inside out, making the “internal API” security model obsolete.
The “Confused Deputy” Risk: Legitimate AI agents act as trusted internal entities but can be exploited to bypass Data Loss Prevention (DLP) policies, as seen in recent Microsoft Office vulnerabilities.
Beyond the WAF: Traditional WAFs and API Gateways are blind to lateral “East-West” traffic and cannot detect the subtle behavioral anomalies inherent in AI-to-API interactions.
Salt’s Three-Pillar Defense: To secure the Agentic Action Layer, organizations need continuous discovery, adaptive governance, and intent-based behavioral protection.

Last month, Microsoft quietly confirmed something that should keep every CISO up at night.
As first reported by BleepingComputer and later detailed by TechCrunch, a bug in Microsoft Office allowed Copilot, the AI assistant embedded in millions of enterprise environments, to summarize confidential emails and hand them to users who had no business seeing them. Sensitivity labels? Ignored. Data loss prevention (DLP) policies? Bypassed entirely.
This wasn’t the work of a hacker or malware. This was a trusted internal tool doing exactly what it was designed to do: processing data. The AI didn’t break in. It was already inside.
The Illusion of the Internal Safe Zone
For years, security teams have operated under a comforting assumption: internal APIs are safe because they sit behind the gateway. We challenged this myth in our latest Field Guide, but the Microsoft incident proves the reality is far more volatile.
When you deploy an AI agent, you are handing a highly privileged entity the keys to your internal data. You are trusting it to respect every access policy, sensitivity label, and permission boundary you have built. When it doesn't (when it incorrectly processes context or misreads a label), there is no alarm. No blocked request at the edge. Just sensitive data, silently served to the wrong person.

Security researchers call this the confused deputy problem. It occurs when a trusted entity with legitimate access is tricked (or simply misconfigured) into acting against your interests.
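To make the failure mode concrete, here is a minimal sketch in Python (hypothetical mailboxes and names, not the actual Copilot code path): a summarizer runs under a service identity trusted to read every mailbox, so it never checks whether the human asking is allowed to see what it fetches.

```python
# Minimal confused-deputy sketch (hypothetical data, not the real incident).
# The summarizer runs under a service identity trusted to read every
# mailbox, so the caller's own rights are never checked.

MAILBOXES = {
    "ceo@corp.example": ["Q3 layoffs draft", "M&A term sheet"],
    "intern@corp.example": ["Welcome packet"],
}

def read_mailbox_as_service(mailbox: str) -> list[str]:
    # Privileged read: the service account can open any mailbox.
    return MAILBOXES[mailbox]

def summarize_for(caller: str, mailbox: str) -> str:
    # BUG: authority comes from the service, not from the caller.
    # A safe deputy would first verify that `caller` may read `mailbox`.
    emails = read_mailbox_as_service(mailbox)
    return f"Summary of {len(emails)} emails: " + "; ".join(emails)

# The intern asks the trusted assistant about the CEO's mail and gets it.
print(summarize_for("intern@corp.example", "ceo@corp.example"))
```

The deputy is not malicious; it is simply exercising its own authority on behalf of a caller whose authority was never consulted.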
With the rise of the Model Context Protocol (MCP), this problem is about to get dramatically worse. MCP is the “USB-C for AI,” designed to let agents plug into any internal data source with universal ease. For productivity, it is a breakthrough. For security, it is a nightmare: every MCP connection is a new pipeline that bypasses your perimeter entirely.
A developer spins up an MCP server to let an AI agent query a customer database. That agent now has a direct, authenticated connection to sensitive data. It does not traverse your API gateway. It does not pass through your WAF. It just talks to the data, deep inside your network, in a conversation your security stack never sees.
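For a sense of how little ceremony that takes, here is a hedged sketch of such a bridge, assuming the official MCP Python SDK; the server name, tool, and customers.db file are illustrative assumptions, not details from the article.

```python
# Hedged sketch of an MCP server exposing a customer database to an agent.
# Assumes the official MCP Python SDK (pip install "mcp"); the server name,
# tool, and customers.db file are illustrative, not from the article.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-db")

@mcp.tool()
def lookup_customer(email: str) -> str:
    """Return the account record for a customer email address."""
    conn = sqlite3.connect("customers.db")
    row = conn.execute(
        "SELECT name, plan, lifetime_value FROM customers WHERE email = ?",
        (email,),
    ).fetchone()
    conn.close()
    return str(row) if row else "no match"

if __name__ == "__main__":
    # Default stdio transport: the agent talks to this process directly,
    # never traversing the API gateway or WAF.
    mcp.run()
```

Nothing here is exotic, and that is the point: a couple dozen lines stand up an authenticated data pipeline that your edge tooling will never observe.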
Why Your WAF Is Watching the Wrong Door
Here is the uncomfortable truth: your WAF and API gateway were built for a world that no longer exists.
They analyze North-South traffic: requests coming in from the outside world. They are excellent at catching known attack signatures hitting your front door. But the Microsoft Copilot bug didn’t come through the front door. It happened in the hallways.
East-West traffic, the lateral communication between microservices, AI agents, and data stores, is where the real risk lives now. Traditional perimeter tools are completely blind to it. The Copilot vulnerability wasn't a malicious payload; it was a context validation failure. No signature to detect. No anomaly at the edge. By the time anything could have been flagged, the data was already exposed.
Securing the Conversations You Can’t See
Stopping these risks requires a fundamentally different approach, one that moves past perimeter defense and into the Agentic Action Layer, where AI agents actually operate.

See Everything: You cannot protect connections you do not know exist. Salt automatically discovers every MCP server, every AI-to-data bridge, and every shadow agent a developer stood up without telling security. Continuous discovery is the only foundation for AI governance.
Enforce Machine-Speed Governance: AI agents should not have all-access passes. Salt enforces adaptive governance for machine-to-machine identities, ensuring an agent can call only the specific APIs it needs. This stops “confused deputies” before they ever reach sensitive data.
Monitor Intent, Not Just Traffic: Traditional tools cannot read the intent of a conversation. Salt's patented Intent Analysis baselines what normal looks like for each agent. An agent that typically processes ten emails suddenly summarizes thousands? That is a behavioral anomaly. Salt flags and blocks these logic-based threats in real time (see the sketch after this list).
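Here is a minimal sketch of the second and third pillars, assuming a per-agent allowlist and a simple statistical baseline. The agent names, routes, and thresholds are hypothetical, and this illustrates the shape of the idea rather than Salt's implementation.

```python
# Illustrative sketch of pillars two and three: a per-agent API allowlist
# plus a simple statistical baseline. Hypothetical names and thresholds;
# not Salt's actual implementation.
import math

# Adaptive governance: each machine identity may call only the APIs it needs.
AGENT_ALLOWLIST = {
    "mail-summarizer": {"GET /mail/recent"},
    "support-bot": {"GET /tickets", "POST /tickets/reply"},
}

def authorize(agent: str, action: str) -> bool:
    """Deny by default; permit only explicitly allowlisted API calls."""
    return action in AGENT_ALLOWLIST.get(agent, set())

# Intent-style baselining: flag behavior far outside an agent's normal range.
def is_anomalous(history: list[int], current: int, z_cutoff: float = 3.0) -> bool:
    """True if `current` sits more than `z_cutoff` std-devs above the mean."""
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / len(history)
    std = math.sqrt(variance) or 1.0  # flat baselines would divide by zero
    return (current - mean) / std > z_cutoff

# An agent that normally summarizes ~10 emails suddenly requests thousands.
baseline = [9, 11, 10, 12, 8, 10, 11]
print(authorize("mail-summarizer", "GET /mail/recent"))  # True
print(authorize("mail-summarizer", "GET /hr/salaries"))  # False: deputy stopped
print(is_anomalous(baseline, 4000))                      # True: block and alert
```

The point is the shape of the control, not the math: authority is scoped per agent identity, and enforcement keys off behavior rather than attack signatures.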

The End of the Internal Trust Model
The takeaway from the Microsoft incident isn't just that Copilot had a bug. Bugs happen. The real takeaway is that the architecture of modern AI, with agents operating deep inside trusted networks, consuming APIs at scale, and making context decisions autonomously, has fundamentally broken the internal trust model.
Your most privileged users are no longer human. Your perimeter is a fiction. And the only defense that works is understanding intent.
Every API is now an edge API. The only question is whether you can see what is happening at that edge.
If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security’s research team and learn what attackers already know.

*** This is a Security Bloggers Network syndicated blog from Salt Security blog authored by Eric Schwake. Read the original post at: https://salt.security/blog/your-most-dangerous-user-is-not-human-how-ai-agents-and-mcp-servers-broke-the-internal-api-walled-garden
