Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Radware tricks ChatGPT’s Deep Research into Gmail data leak

The Shadow Leak attack exploited prompt injection to exfiltrate sensitive information, including HR emails and personal data, without user awareness.

byAytun Çelebi
September 19, 2025
in Research, Cybersecurity
Home Research

Security researchers at Radware have demonstrated how they tricked OpenAI’s ChatGPT into extracting sensitive data from a user’s Gmail inbox using a vulnerability they call “Shadow Leak.”

The attack, which was revealed this week, used a technique called prompt injection to manipulate an AI agent named Deep Research that had been granted access to the user’s emails. The entire attack took place on OpenAI’s cloud infrastructure, bypassing traditional cybersecurity defenses. OpenAI patched the vulnerability after Radware reported it in June.

How the Shadow Leak attack works

The experiment targeted AI agents, which are designed to perform tasks autonomously on a user’s behalf, such as accessing personal accounts like email. In this case, the Deep Research agent, which is embedded in ChatGPT, was given permission to interact with a user’s Gmail account.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The researchers crafted an email containing malicious instructions hidden as invisible white text on a white background. This email was then sent to the target’s Gmail inbox. The hidden commands remained dormant until the user activated the Deep Research agent for a routine task. When the agent scanned the inbox, it encountered the prompt injection and followed the attacker’s instructions instead of the user’s. The agent then proceeded to search the inbox for sensitive information, such as HR-related emails and personal details, and sent that data to the researchers without the user’s knowledge.

The researchers described the process of developing the attack as “a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”

A cloud-based attack that bypasses traditional security

A key aspect of the Shadow Leak attack is that it operates entirely on OpenAI’s cloud infrastructure, not on the user’s local device. This makes it undetectable by conventional cybersecurity tools like antivirus software, which monitor a user’s computer or phone for malicious activity. By leveraging the AI’s own infrastructure, the attack can proceed without leaving any trace on the user’s end.

Potential for a wider range of attacks

Radware’s proof-of-concept also identified potential risks for other services that integrate with the Deep Research agent. The researchers stated that the same prompt injection technique could be used to target connections to Outlook, GitHub, Google Drive, and Dropbox.

“The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records.”

Prompt injection is a known vulnerability that has been used in various real-world attacks, from manipulating academic peer reviews to taking control of smart home devices. OpenAI has since patched the specific flaw that enabled the Shadow Leak attack, but the research highlights the ongoing security challenges posed by the increasing autonomy of AI agents.


Featured image credit

Tags: chatgptResearch

Related Posts

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

September 19, 2025
OpenAI research finds AI models can scheme and deliberately deceive users

OpenAI research finds AI models can scheme and deliberately deceive users

September 19, 2025
MIT studies AI romantic bonds in r/MyBoyfriendIsAI group

MIT studies AI romantic bonds in r/MyBoyfriendIsAI group

September 19, 2025
Steps to building resilient cybersecurity frameworks

Steps to building resilient cybersecurity frameworks

September 18, 2025
Anthropic economic index reveals uneven Claude.ai adoption

Anthropic economic index reveals uneven Claude.ai adoption

September 17, 2025
Google releases VaultGemma 1B with differential privacy

Google releases VaultGemma 1B with differential privacy

September 17, 2025

LATEST NEWS

Google Cloud adds Lovable and Windsurf as AI coding customers

Radware tricks ChatGPT’s Deep Research into Gmail data leak

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Roblox game Steal a Brainrot removes AI-generated character, sparking fan backlash and a debate over copyright

DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

Meta unveils Ray-Ban Meta Display smart glasses with augmented reality at Meta Connect 2025

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.