Google has confirmed a security vulnerability involving a new AI-driven attack that can compromise Gmail accounts.
The company noted that the threat “is not specific to Google” and highlights the need for stronger defenses against prompt injection attacks.
How the prompt injection attack works
The attack uses malicious instructions hidden inside seemingly harmless items like emails, attachments, or calendar invitations. While these instructions are invisible to a human user, an AI assistant can read and execute them.
Researcher Eito Miyamura demonstrated the vulnerability in a video posted on X.
We got ChatGPT to leak your private email data. All you need? The victim’s email address. AI agents like ChatGPT follow your commands, not your common sense… with just your email, we managed to exfiltrate all your private information.
We got ChatGPT to leak your private email data 💀💀
All you need? The victim's email address. ⛓️💥🚩📧
On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2
— Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025
The attack can be triggered by a specially crafted calendar invite that the user does not even need to accept. When the user asks their AI assistant to perform a routine task like checking their calendar, the AI reads the hidden command in the invite. The malicious command then instructs the AI to search the user’s private emails and send the data to the attacker.
How to protect your account and Google’s response
Google previously warned about this type of threat in June, stating that instructions embedded in documents or calendar invites could instruct AI to “exfiltrate user data or execute other rogue actions.” The company is now implementing defenses and advising users on how to protect themselves.
- Enable the “known senders” setting in Google Calendar: Google states this is an effective way to prevent malicious invites from automatically appearing on your calendar. The attack is less likely to work unless the user has previously interacted with the attacker or changed this default setting.
- Google is training its AI models to resist these attacks: The company says its training with adversarial data has “significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models.”
- New detection models are being deployed: Google is rolling out proprietary machine learning models that can detect and neutralize malicious prompts within emails and files before they are executed.
Remember, AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data.