Google Gemini Flaw Let Attackers Access Private Calendar Data
Google Gemini Flaw Let Attackers Access Private Calendar Data

Security researchers have revealed a flaw in Google’s Gemini AI assistant that allowed attackers to quietly pull private calendar data from users with nothing more than carefully crafted language hidden in a meeting invite.
The vulnerability was uncovered by cybersecurity firm Miggo, which said it found a way to bypass Google Calendar’s privacy controls by embedding hidden instructions in a calendar event description. In a blog explaining the research, Miggo said the flaw showed how AI systems can be manipulated through normal language rather than malicious code.
“This bypass enabled unauthorized access to private meeting data and the creation of deceptive calendar events without any direct user interaction,” said Liad Eliyahu, Miggo’s Head of Research.
Turning Gemini’s helpfulness against users
Gemini acts as an assistant in Google Calendar, answering questions like which meetings a user has or whether they are free on a given day. To do this, it automatically reads event titles, descriptions, times, and attendee details.
According to Miggo, that integration became the weak point.
“Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute,” Miggo explained.
In the attack scenario, an attacker sent a calendar invite to a victim. Hidden in the event’s description was a prompt written in plain language. It did not look suspicious and did not require the victim to click anything.

The malicious instructions remained dormant until the victim later asked Gemini a normal question, such as whether Gemini was free on a certain day.
That was enough to trigger the payload.
How the attack unfolds
Behind the scenes, Gemini summarized the victim’s meetings, including private ones, and wrote that information into a newly created calendar event. The AI then replied to the user with a harmless message, such as “it’s a free time slot,” masking what had just happened.
“The payload was syntactically innocuous, meaning it was plausible as a user request. However, it was semantically harmful, as we’ll see, when executed with the model tool’s permissions,” Eliyahu notes.
In some workplace calendar setups, the newly created event may be visible to the attacker, allowing them to read the leaked meeting details without the victim ever realizing it.
“AI applications can be manipulated through the very language they’re designed to understand,” Eliyahu warned. “Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime.”
Miggo said it responsibly disclosed the issue to Google, which confirmed the findings and mitigated the problem.
The incident adds to a growing list of AI-related security concerns as companies embed large language models deeper into everyday tools like email, calendars, and documents.
Also read: The GeminiJack zero-click flaw showed how hidden instructions in Workspace files can steer Gemini and leak corporate data.
