Microsoft Rectifies ASCII Smuggling Vulnerability Enabling Data Theft from Microsoft 365 Copilot
Insights have surfaced regarding a now-fixed vulnerability in Microsoft 365 Copilot that could facilitate the unlawful acquisition of delicate user data using a method known as ASCII smuggling.
“ASCII Smuggling is an innovative technique that employs special Unicode characters which imitate ASCII but remain invisible in the user interface,” stated security expert Johann Rehberger in a report.
“This implies that a cyber attacker can compel the [large language model] to display hidden data to the user and embed them within clickable hyperlinks. Essentially, this technique stages the data for extraction!”
The entire attack involves linking various attack techniques to construct a dependable exploit chain. This includes the subsequent sequences –
- Initiating prompt injection by concealing malevolent content in a document shared on the chat platform
- Employing a prompt injection payload to direct Copilot to search for additional emails and documents
- Using ASCII smuggling to entice the user into clicking on a link for extracting valuable data to an external server
The final result of the attack is that confidential data present in emails, such as multi-factor authentication (MFA) codes, could be dispatched to a server under adversary control. Microsoft has subsequently dealt with the concerns subsequent to a responsible disclosure made in January 2024.
The discovery emerges as proof-of-concept (PoC) assaults have been showcased against Microsoft’s Copilot system to tamper with responses, exfiltrate private information, and circumvent security measures, once again emphasizing the necessity of monitoring risks associated with artificial intelligence (AI) tools.
The techniques, explained by Zenity, enable malicious entities to execute retrieval-augmented generation (RAG) poisoning and indirect prompt injection, culminating in remote code execution attacks that grant total control over Microsoft Copilot and other AI applications. In a theoretical attack scenario, an external hacker equipped with code execution capabilities could deceive Copilot into furnishing users with phishing websites.

Arguably, one of the most innovative attacks involves transforming the AI system into a spear-phishing mechanism. Known as LOLCopilot, this red-teaming approach enables an attacker possessing a victim’s email account to dispatch phishing messages mimicking the communication style of the compromised users.
Microsoft has also recognized that publicly accessible Copilot bots created via Microsoft Copilot Studio devoid of any authentication defenses could serve as a conduit for threat actors to extract sensitive data, provided they possess prior knowledge of the Copilot name or URL.
“Enterprises should assess their risk tolerance and exposure to prevent potential data leaks from Copilots (formerly Power Virtual Agents) and should enable Data Loss Prevention and other security mechanisms as necessary to regulate the creation and publication of Copilots,” Rehberger mentioned.

