Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Radware tricks ChatGPT’s Deep Research into Gmail data leak

The Shadow Leak attack exploited prompt injection to exfiltrate sensitive information, including HR emails and personal data, without user awareness.

byAytun Çelebi
September 19, 2025
in Research, Cybersecurity

Security researchers at Radware have demonstrated how they tricked OpenAI’s ChatGPT into extracting sensitive data from a user’s Gmail inbox using a vulnerability they call “Shadow Leak.”

The attack, which was revealed this week, used a technique called prompt injection to manipulate an AI agent named Deep Research that had been granted access to the user’s emails. The entire attack took place on OpenAI’s cloud infrastructure, bypassing traditional cybersecurity defenses. OpenAI patched the vulnerability after Radware reported it in June.

How the Shadow Leak attack works

The experiment targeted AI agents, which are designed to perform tasks autonomously on a user’s behalf, such as accessing personal accounts like email. In this case, the Deep Research agent, which is embedded in ChatGPT, was given permission to interact with a user’s Gmail account.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The researchers crafted an email containing malicious instructions hidden as invisible white text on a white background. This email was then sent to the target’s Gmail inbox. The hidden commands remained dormant until the user activated the Deep Research agent for a routine task. When the agent scanned the inbox, it encountered the prompt injection and followed the attacker’s instructions instead of the user’s. The agent then proceeded to search the inbox for sensitive information, such as HR-related emails and personal details, and sent that data to the researchers without the user’s knowledge.

The researchers described the process of developing the attack as “a rollercoaster of failed attempts, frustrating roadblocks, and, finally, a breakthrough.”

A cloud-based attack that bypasses traditional security

A key aspect of the Shadow Leak attack is that it operates entirely on OpenAI’s cloud infrastructure, not on the user’s local device. This makes it undetectable by conventional cybersecurity tools like antivirus software, which monitor a user’s computer or phone for malicious activity. By leveraging the AI’s own infrastructure, the attack can proceed without leaving any trace on the user’s end.

Potential for a wider range of attacks

Radware’s proof-of-concept also identified potential risks for other services that integrate with the Deep Research agent. The researchers stated that the same prompt injection technique could be used to target connections to Outlook, GitHub, Google Drive, and Dropbox.

“The same technique can be applied to these additional connectors to exfiltrate highly sensitive business data such as contracts, meeting notes or customer records.”

Prompt injection is a known vulnerability that has been used in various real-world attacks, from manipulating academic peer reviews to taking control of smart home devices. OpenAI has since patched the specific flaw that enabled the Shadow Leak attack, but the research highlights the ongoing security challenges posed by the increasing autonomy of AI agents.


Featured image credit

Tags: chatgptResearch

Related Posts

Google reveals AI-powered malware using LLMs in real time

Google reveals AI-powered malware using LLMs in real time

November 12, 2025
Oxford study finds AI benchmarks often exaggerate model performance

Oxford study finds AI benchmarks often exaggerate model performance

November 12, 2025
Anthropic study finds AI has limited self-awareness of its own thoughts

Anthropic study finds AI has limited self-awareness of its own thoughts

November 11, 2025
New research shows AI logic survives even when its memory is erased

New research shows AI logic survives even when its memory is erased

November 10, 2025
Microsoft uncovers Whisper Leak: A flaw that lets spies your AI chats

Microsoft uncovers Whisper Leak: A flaw that lets spies your AI chats

November 10, 2025
Google urges Gmail users to abandon passwords for passkeys

Google urges Gmail users to abandon passwords for passkeys

November 10, 2025

LATEST NEWS

Tech News Today: OpenAI’s Sora burn, Microsoft’s AGI efforts and AI stitched into every screen

Don’t miss: The Game Awards to be live on Amazon Prime Video

Collins Dictionary names “vibe coding” the 2025 word of the year

Google Photos AI expands to 100+ countries

Masayoshi Son trades Nvidia profits for a $30B AI spending spree

Nintendo rolls out quality-of-life updates for both Switch generations

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.