Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Radware tricks ChatGPT’s Deep Research into Gmail data leak

The Shadow Leak attack exploited prompt injection to exfiltrate sensitive information, including HR emails and personal data, without user awareness.

byAytun Çelebi
September 19, 2025
in Research, Cybersecurity
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

Security researchers at Radware have demonstrated how they tricked OpenAI’s ChatGPT into extracting sensitive data from a user’s Gmail inbox using a vulnerability they call “Shadow Leak.”

The attack, which was revealed this week, used a technique called prompt injection to manipulate an AI agent named Deep Research that had been granted access to the user’s emails. The entire attack took place on OpenAI’s cloud infrastructure, bypassing traditional cybersecurity defenses. OpenAI patched the vulnerability after Radware reported it in June.

How the Shadow Leak attack works

The experiment targeted AI agents, which are designed to perform tasks autonomously on a user’s behalf, such as accessing personal accounts like email. In this case, the Deep Research agent, which is embedded in ChatGPT, was given permission to interact with a user’s Gmail account.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The researchers crafted an email containing malicious instructions hidden as invisible white text on a white background. This email was then sent to the target’s Gmail inbox. The hidden commands remained dormant until the user activated the Deep Research agent for a routine task. When the agent scanned the inbox, it encountered the prompt injection and followed the attacker’s instructions instead of the user’s. The agent then proceeded to search the inbox for sensitive information, such as HR-related emails and personal details, and sent that data to the researchers without the user’s knowledge.

The researchers described the process of developing the attack as “a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”

A cloud-based attack that bypasses traditional security

A key aspect of the Shadow Leak attack is that it operates entirely on OpenAI’s cloud infrastructure, not on the user’s local device. This makes it undetectable by conventional cybersecurity tools like antivirus software, which monitor a user’s computer or phone for malicious activity. By leveraging the AI’s own infrastructure, the attack can proceed without leaving any trace on the user’s end.

Potential for a wider range of attacks

Radware’s proof-of-concept also identified potential risks for other services that integrate with the Deep Research agent. The researchers stated that the same prompt injection technique could be used to target connections to Outlook, GitHub, Google Drive, and Dropbox.

“The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records.”

Prompt injection is a known vulnerability that has been used in various real-world attacks, from manipulating academic peer reviews to taking control of smart home devices. OpenAI has since patched the specific flaw that enabled the Shadow Leak attack, but the research highlights the ongoing security challenges posed by the increasing autonomy of AI agents.


Featured image credit

Tags: chatgptResearch

Related Posts

OpenAI GPT 5.2 cracks Erdős math problem in 15 minutes

OpenAI GPT 5.2 cracks Erdős math problem in 15 minutes

January 19, 2026
Appfigures: Mobile app spending hits record 5.8 billion

Appfigures: Mobile app spending hits record $155.8 billion

January 15, 2026
FTC bans GM from selling driver data without explicit consent

FTC bans GM from selling driver data without explicit consent

January 15, 2026
10-hour long Verizon outage is finally resolved

10-hour long Verizon outage is finally resolved

January 15, 2026
85% of security leaders are flying blind on supply chain threats, Panorays study says

85% of security leaders are flying blind on supply chain threats, Panorays study says

January 14, 2026
Engineers build grasshopper-inspired robots to solve battery drain

Engineers build grasshopper-inspired robots to solve battery drain

January 14, 2026

LATEST NEWS

Nvidia hits 200 teraFLOP emulated FP64 for scientific computing

Walmart maintains Apple Pay ban in U.S. stores for 2026

iOS 27: Everything we know so far

Google Wallet and Tasks integrations surface in new Pixel 10 leak

Threads hits 141 million daily users to claim the mobile throne from X

Microsoft pushes emergency OOB update to fix Windows 11 restart loop

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI tools
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • Who we are
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.