Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Gmail hit by AI prompt injection attack via calendar

Hidden instructions in emails, files, and calendar invites can trick AI assistants into leaking private information, Google confirms.

byKerem Gülen
September 15, 2025
in Cybersecurity

Google has confirmed a security vulnerability involving a new AI-driven attack that can compromise Gmail accounts.

The company noted that the threat “is not specific to Google” and highlights the need for stronger defenses against prompt injection attacks.

How the prompt injection attack works

The attack uses malicious instructions hidden inside seemingly harmless items like emails, attachments, or calendar invitations. While these instructions are invisible to a human user, an AI assistant can read and execute them.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Researcher Eito Miyamura demonstrated the vulnerability in a video posted on X.

We got ChatGPT to leak your private email data. All you need? The victim’s email address. AI agents like ChatGPT follow your commands, not your common sense… with just your email, we managed to exfiltrate all your private information.

We got ChatGPT to leak your private email data 💀💀

All you need? The victim's email address. ⛓️‍💥🚩📧

On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2

— Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025

The attack can be triggered by a specially crafted calendar invite that the user does not even need to accept. When the user asks their AI assistant to perform a routine task like checking their calendar, the AI reads the hidden command in the invite. The malicious command then instructs the AI to search the user’s private emails and send the data to the attacker.

How to protect your account and Google’s response

Google previously warned about this type of threat in June, stating that instructions embedded in documents or calendar invites could instruct AI to “exfiltrate user data or execute other rogue actions.” The company is now implementing defenses and advising users on how to protect themselves.

  • Enable the “known senders” setting in Google Calendar: Google states this is an effective way to prevent malicious invites from automatically appearing on your calendar. The attack is less likely to work unless the user has previously interacted with the attacker or changed this default setting.
  • Google is training its AI models to resist these attacks: The company says its training with adversarial data has “significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models.”
  • New detection models are being deployed: Google is rolling out proprietary machine learning models that can detect and neutralize malicious prompts within emails and files before they are executed.

Remember, AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data.


Featured image credit

Tags: CybersecurityFeaturedgmail

Related Posts

Shinyhunters extorts Red Hat over stolen CER data

Shinyhunters extorts Red Hat over stolen CER data

October 7, 2025
CPAP breach exposes data of 90k military members

CPAP breach exposes data of 90k military members

October 7, 2025
Fight for the Future launches campaign to protect VPN access

Fight for the Future launches campaign to protect VPN access

October 6, 2025
Yubico survey: 62% of Gen Z engaged with phishing scams

Yubico survey: 62% of Gen Z engaged with phishing scams

October 6, 2025
High-resolution computer mice can listen to conversations through desk vibrations

High-resolution computer mice can listen to conversations through desk vibrations

October 6, 2025
Could CTEM have prevented the Oracle Cloud breach?

Could CTEM have prevented the Oracle Cloud breach?

October 5, 2025

LATEST NEWS

Shinyhunters extorts Red Hat over stolen CER data

CPAP breach exposes data of 90k military members

Windows 11 test build blocks local account bypass

Excel gets AI agent mode for automated data tasks

What is new at iOS 26.1 beta 2?

ChatGPT reaches 800m weekly active users

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.