Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Radware tricks ChatGPT’s Deep Research into Gmail data leak

The Shadow Leak attack exploited prompt injection to exfiltrate sensitive information, including HR emails and personal data, without user awareness.

byAytun Çelebi
September 19, 2025
in Research, Cybersecurity
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

Security researchers at Radware have demonstrated how they tricked OpenAI’s ChatGPT into extracting sensitive data from a user’s Gmail inbox using a vulnerability they call “Shadow Leak.”

The attack, which was revealed this week, used a technique called prompt injection to manipulate an AI agent named Deep Research that had been granted access to the user’s emails. The entire attack took place on OpenAI’s cloud infrastructure, bypassing traditional cybersecurity defenses. OpenAI patched the vulnerability after Radware reported it in June.

How the Shadow Leak attack works

The experiment targeted AI agents, which are designed to perform tasks autonomously on a user’s behalf, such as accessing personal accounts like email. In this case, the Deep Research agent, which is embedded in ChatGPT, was given permission to interact with a user’s Gmail account.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The researchers crafted an email containing malicious instructions hidden as invisible white text on a white background. This email was then sent to the target’s Gmail inbox. The hidden commands remained dormant until the user activated the Deep Research agent for a routine task. When the agent scanned the inbox, it encountered the prompt injection and followed the attacker’s instructions instead of the user’s. The agent then proceeded to search the inbox for sensitive information, such as HR-related emails and personal details, and sent that data to the researchers without the user’s knowledge.

The researchers described the process of developing the attack as “a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”

A cloud-based attack that bypasses traditional security

A key aspect of the Shadow Leak attack is that it operates entirely on OpenAI’s cloud infrastructure, not on the user’s local device. This makes it undetectable by conventional cybersecurity tools like antivirus software, which monitor a user’s computer or phone for malicious activity. By leveraging the AI’s own infrastructure, the attack can proceed without leaving any trace on the user’s end.

Potential for a wider range of attacks

Radware’s proof-of-concept also identified potential risks for other services that integrate with the Deep Research agent. The researchers stated that the same prompt injection technique could be used to target connections to Outlook, GitHub, Google Drive, and Dropbox.

“The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records.”

Prompt injection is a known vulnerability that has been used in various real-world attacks, from manipulating academic peer reviews to taking control of smart home devices. OpenAI has since patched the specific flaw that enabled the Shadow Leak attack, but the research highlights the ongoing security challenges posed by the increasing autonomy of AI agents.


Featured image credit

Tags: chatgptResearch

Related Posts

Google wants AI to build web pages instead of just writing text

Google wants AI to build web pages instead of just writing text

November 20, 2025
What AI really sees in teen photos: New data shows sexual content is flagged 7× more often than violence

What AI really sees in teen photos: New data shows sexual content is flagged 7× more often than violence

November 19, 2025
Cloudflare admits a bot filter bug caused its worst outage since 2019

Cloudflare admits a bot filter bug caused its worst outage since 2019

November 19, 2025
Harvard’s new metasurface shrinks quantum optics into a single ultrathin chip

Harvard’s new metasurface shrinks quantum optics into a single ultrathin chip

November 19, 2025
Cloudflare is down worldwide: Internal server error explained

Cloudflare is down worldwide: Internal server error explained

November 18, 2025
A wireless eye implant helps patients with severe macular degeneration read again

A wireless eye implant helps patients with severe macular degeneration read again

November 18, 2025

LATEST NEWS

Amazon claims its new AI video summaries have “theatrical quality”

Google finally copies the best feature from Edge and Vivaldi

Perplexity launches free agentic shopping tool with PayPal

You should keep your Snapdragon 8 Gen 3 if you want to run emulators

Netflix grabs the Home Run Derby in fifty million dollar baseball deal

OpenAI says its new coding model can work for 24 hours straight

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.