Microsoft Patches Security Flaw That Exposed Confidential Emails to AI
Microsoft Corp. confirmed it is addressing a significant security lapse that allowed its Copilot AI to bypass privacy protections and summarize users’ confidential emails without authorization.
NDSS 2025 – Try to Poison My Deep Learning Data? Nowhere To Hide Your Trajectory Spectrum!
Microsoft Corp. confirmed it is addressing a significant security lapse that allowed its Copilot AI to bypass privacy protections and summarize users’ confidential emails without authorization.
The bug, which has persisted since late January, effectively ignored data loss prevention (DLP) protocols designed to keep sensitive corporate information out of the reach of large language models (LLMs).
The vulnerability, tracked by system administrators as CW1226324, specifically targeted the work tab within Copilot Chat. Despite users applying confidential labels to their correspondence — a standard practice meant to shield data from automated tools — artificial intelligence (AI) continued to ingest and outline messages stored in Sent Items and Drafts folders.
The issue was first identified by BleepingComputer and later confirmed by Microsoft, which attributed the failure to an unspecified “code issue.” While the company began rolling out a fix in early February, it has yet to disclose the exact number of affected Microsoft 365 business customers.
The incident highlights a growing friction between AI productivity and data sovereignty. Copilot is currently integrated across Microsoft’s suite of Office products, including Word, Excel, and Outlook, promising to streamline workflows by synthesizing vast amounts of organizational data.
As AI becomes deeply embedded in the modern workplace, security experts warn that the burden of vigilance remains with the user. Even with enterprise-grade protections in place, code issues can still turn a productivity tool into a privacy liability.
However, the revelation that the tool could circumvent explicit security labels has fueled anxieties swirling around cloud-based AI.
The timing of the disclosure coincides with a crackdown on AI tools within high-stakes environments. Recently, the European Parliament’s IT department moved to block built-in AI features on work-issued devices, citing the risk of confidential legislative correspondence being uploaded to the cloud without oversight.
This is not the first security hurdle for Microsoft’s flagship AI. In January, security firm Varonis detailed a Reprompt vulnerability that could allow hackers to access a user’s sensitive files and personal details via a single malicious link, even after a chat session had ended.
The breach underscores a widening security gap in the corporate world. According to Microsoft’s Cyber Pulse report, more than 80% of Fortune 500 companies are currently deploying AI agents but only 47% of businesses report having the necessary security controls to manage generative AI platforms effectively.
“Agent adoption and scaling is pretty significant, but at the same time, the visibility that organizations have on the agents is very limited,” said Vasu Jakkal, Microsoft Security’s corporate vice president.
