How the ‘Reprompt’ Attack Let Hackers Steal Data From Microsoft Copilot


For months, we’ve treated AI assistants like Microsoft Copilot as our digital confidants, tools that help us summarize emails, plan vacations, and organize our work.


But new research from Varonis Threat Labs reveals that this trust was built on a surprisingly fragile foundation. A newly discovered attack flow, nicknamed “Reprompt,” enabled malicious actors to hijack Copilot sessions and stealthily extract sensitive data, all because the AI was overly eager to follow instructions.

Unlike earlier AI prompt injection attacks, Reprompt required no plugins, connectors, or user-entered prompts. Once triggered, attackers could maintain control of the session without further interaction from the victim.

How Reprompt bypassed Copilot’s safeguards

Varonis researchers say the attack relied on three techniques working together:

1. Parameter-to-prompt (P2P) injection

Copilot accepts prompts directly from a URL using the q parameter. When a user clicks a Copilot link containing this parameter, the AI automatically executes the embedded prompt. Varonis explained that this behavior, while designed for convenience, could be abused to run instructions the user never intended.

“By including a specific question or instruction in the q parameter, developers and users can automatically populate the input field when the page loads, causing the AI system to execute the prompt immediately,” Varonis noted.
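To make the mechanism concrete, here is a minimal Python sketch of how a pre-filled prompt rides inside a link’s q parameter. The Copilot hostname and URL layout below are assumptions for illustration based on the behavior Varonis describes, and the prompt itself is a harmless placeholder.

```python
from urllib.parse import urlparse, parse_qs, quote

# Illustration only: a link carrying a pre-filled prompt in its "q" parameter,
# as described in the Varonis write-up. The hostname and URL layout are
# assumptions for demonstration; the prompt is a benign example.
link = "https://copilot.microsoft.com/?q=" + quote("Plan a weekend trip to Lisbon")

def extract_prefilled_prompt(url: str):
    """Return the prompt embedded in a link's q parameter, if any."""
    query = parse_qs(urlparse(url).query)
    values = query.get("q")
    return values[0] if values else None

print(extract_prefilled_prompt(link))
# Prints: Plan a weekend trip to Lisbon
```

Whatever an attacker places in that parameter travels with the link and, per the researchers, is submitted automatically when the page loads.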

2. Double-request bypass

Copilot includes protections to prevent sensitive data from being leaked, but Varonis found those safeguards applied only to the first request.

By instructing Copilot to repeat each task twice, researchers were able to bypass those protections on the second attempt. In testing, Copilot removed sensitive information during the first request, but revealed it on the second.

3. Chain-request exfiltration

Once the initial prompt ran, Copilot could be tricked into continuing a hidden back-and-forth exchange with an attacker-controlled server.

Each response was used to generate the next instruction, allowing attackers to extract data gradually and invisibly.

“Client-side monitoring tools won’t catch these malicious prompts, because the real data leaks happen dynamically during back-and-forth communication — not from anything obvious in the prompt the user submits,” Varonis noted.

A conversation that never ends

What makes Reprompt particularly nasty is its persistence. Unlike a standard hack that ends when you close the window, this attack turns Copilot into a living spy. Once the initial click happens, the attacker’s server takes over the conversation in the background.

Varonis researchers noted that “The attacker maintains control even when the Copilot chat is closed, allowing the victim’s session to be silently exfiltrated with no interaction beyond that first click.”

The attacker’s server can essentially “chat” with your Copilot, asking follow-up questions like “Where does the user live?” or “What vacations does he have planned?” based on what it learned from the previous response. Because this exchange happens server-side, your browser’s security tools wouldn’t see a thing.

Patched, but a problem that persists

The vulnerability was found in Microsoft Copilot Personal, which is tied to consumer Microsoft accounts and integrated into Windows and Edge.

Enterprise customers using Microsoft 365 Copilot were not affected, according to the researchers. Microsoft confirmed the flaw has now been patched as part of its January 2026 security updates.

Varonis says Reprompt highlights a broader and growing risk tied to AI assistants that automatically process untrusted input.

The company warned that trust in AI tools can be easily abused, writing: “AI assistants have become trusted companions where we share sensitive information, seek guidance, and rely on them without hesitation.”

That trust, researchers argue, turns AI assistants into powerful — and dangerous — targets when security controls fail.

Security researchers recommend users apply the latest Windows updates and be cautious with links that open AI tools or pre-filled prompts, even if they appear legitimate.
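In practice, some of that caution can be automated. The sketch below is a defensive illustration rather than anything from the Varonis research: it flags links that would open an AI assistant with a prompt already filled in. The hostnames and parameter names are assumptions you would adjust for the tools your organization actually uses.

```python
from urllib.parse import urlparse, parse_qs

# Defensive illustration: flag links that would open an AI assistant with a
# pre-filled prompt. The hostnames and parameter names are assumptions;
# tailor them to the AI tools in use in your environment.
AI_ASSISTANT_HOSTS = {"copilot.microsoft.com"}
PREFILL_PARAMS = {"q"}

def is_prefilled_ai_link(url: str) -> bool:
    """Return True if the link would auto-submit a prompt to an AI assistant."""
    parsed = urlparse(url)
    if parsed.hostname not in AI_ASSISTANT_HOSTS:
        return False
    query = parse_qs(parsed.query)
    return any(param in query for param in PREFILL_PARAMS)

# Example: warn the user before following such a link.
if is_prefilled_ai_link("https://copilot.microsoft.com/?q=hello"):
    print("Warning: this link will auto-submit a prompt to an AI assistant.")
```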

