Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Gmail hit by AI prompt injection attack via calendar

Hidden instructions in emails, files, and calendar invites can trick AI assistants into leaking private information, Google confirms.

byKerem Gülen
September 15, 2025
in Cybersecurity

Google has confirmed a security vulnerability involving a new AI-driven attack that can compromise Gmail accounts.

The company noted that the threat “is not specific to Google” and highlights the need for stronger defenses against prompt injection attacks.

How the prompt injection attack works

The attack uses malicious instructions hidden inside seemingly harmless items like emails, attachments, or calendar invitations. While these instructions are invisible to a human user, an AI assistant can read and execute them.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Researcher Eito Miyamura demonstrated the vulnerability in a video posted on X.

We got ChatGPT to leak your private email data. All you need? The victim’s email address. AI agents like ChatGPT follow your commands, not your common sense… with just your email, we managed to exfiltrate all your private information.

We got ChatGPT to leak your private email data 💀💀

All you need? The victim's email address. ⛓️‍💥🚩📧

On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2

— Eito Miyamura | 🇯🇵🇬🇧 (@Eito_Miyamura) September 12, 2025

The attack can be triggered by a specially crafted calendar invite that the user does not even need to accept. When the user asks their AI assistant to perform a routine task like checking their calendar, the AI reads the hidden command in the invite. The malicious command then instructs the AI to search the user’s private emails and send the data to the attacker.

How to protect your account and Google’s response

Google previously warned about this type of threat in June, stating that instructions embedded in documents or calendar invites could instruct AI to “exfiltrate user data or execute other rogue actions.” The company is now implementing defenses and advising users on how to protect themselves.

  • Enable the “known senders” setting in Google Calendar: Google states this is an effective way to prevent malicious invites from automatically appearing on your calendar. The attack is less likely to work unless the user has previously interacted with the attacker or changed this default setting.
  • Google is training its AI models to resist these attacks: The company says its training with adversarial data has “significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models.”
  • New detection models are being deployed: Google is rolling out proprietary machine learning models that can detect and neutralize malicious prompts within emails and files before they are executed.

Remember, AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data.


Featured image credit

Tags: CybersecurityFeaturedgmail

Related Posts

WhatsApp introduces passkeys for end-to-end encrypted chat backups

WhatsApp introduces passkeys for end-to-end encrypted chat backups

October 30, 2025
Azure outage: Microsoft blames Azure Front Door for major global disruption

Azure outage: Microsoft blames Azure Front Door for major global disruption

October 30, 2025
183M Gmail passwords exposed via infostealer malware

183M Gmail passwords exposed via infostealer malware

October 28, 2025
Google’s Live Threat Detection is reportedly coming to more Android phones

Google’s Live Threat Detection is reportedly coming to more Android phones

October 23, 2025
Meta’s latest update focuses on protecting older users from scams

Meta’s latest update focuses on protecting older users from scams

October 22, 2025
US judge bans NSO Group from targeting WhatsApp users with Pegasus spyware

US judge bans NSO Group from targeting WhatsApp users with Pegasus spyware

October 21, 2025

LATEST NEWS

Google marks Pac-Man’s 45th anniversary with a Halloween Doodle

OpenAI Sora adds character cameos and video stitching

WhatsApp introduces passkeys for end-to-end encrypted chat backups

Character.AI is closing the door on under-18 users

Rode upgrades its Wireless Micro Camera Kit with universal compatibility

YouTube’s new Super Resolution turns blurry uploads into HD and 4K

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.