Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Radware finds ChatGPT deep research ShadowLeak zero-click flaw

Security firm says the flaw lets attackers exfiltrate confidential data from OpenAI’s servers without any user interaction.

byEmre Çıtak
September 23, 2025
in Cybersecurity
Home News Cybersecurity

Security firm Radware has discovered a zero-click vulnerability, “ShadowLeak,” in ChatGPT’s Deep Research agent.

The flaw allows data theft from OpenAI’s servers as enterprises increasingly use AI to analyze sensitive emails and internal reports.

The adoption of these AI platforms introduces new security risks when handling confidential business information. ShadowLeak is a server-side exploit, meaning an attack executes entirely on OpenAI’s servers. This mechanism allows attackers to exfiltrate sensitive data without requiring any user interaction, operating completely covertly.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

David Aviv, chief technology officer at Radware, classified it as “the quintessential zero-click attack.” He stated, “There is no user action required, no visible cue, and no way for victims to know their data has been compromised. Everything happens entirely behind the scenes through autonomous agent actions on OpenAI cloud servers.”

This exploit functions independently of user endpoints or company networks, which makes detection by enterprise security teams extremely difficult. Radware researchers demonstrated that sending an email with hidden instructions could trigger the Deep Research agent, causing it to leak information autonomously without the user’s knowledge.

Pascal Geenens, director of cyber threat intelligence at Radware, warned that internal protections are insufficient. “Enterprises adopting AI cannot rely on built-in safeguards alone to prevent abuse,” Geenens said. “AI-driven workflows can be manipulated in ways not yet anticipated, and these attack vectors often bypass the visibility and detection capabilities of traditional security solutions.”

ShadowLeak represents the first purely server-side, zero-click data exfiltration attack that leaves almost no forensic evidence from a business perspective. With ChatGPT reporting over 5 million paying business users, the potential scale of exposure is substantial. This lack of evidence complicates incident response efforts.

Experts emphasize that human oversight and strict access controls are critical when connecting autonomous AI agents to sensitive data. Organizations are advised to continuously evaluate security gaps and combine technology with operational practices.

Recommended protective measures include:

  • Implementing layered cybersecurity defenses.
  • Regularly monitoring AI-driven workflows for unusual activity or data leaks.
  • Deploying antivirus solutions to protect against traditional malware.
  • Maintaining robust ransomware protection to safeguard information.
  • Enforcing strict access controls and user permissions for AI tools.
  • Ensuring human oversight when autonomous AI agents process sensitive information.
  • Implementing logging and auditing of AI agent activity to identify anomalies early.
  • Integrating additional AI tools for anomaly detection and automated security alerts.
  • Educating employees on AI-related threats and autonomous agent risks.
  • Combining software defenses, operational practices, and continuous vigilance.

Featured image credit

Tags: chatgptzero-click

Related Posts

Sentinelone finds malterminal malware using OpenAI GPT-4

Sentinelone finds malterminal malware using OpenAI GPT-4

September 23, 2025
FBI warns of fake IC3 websites stealing data

FBI warns of fake IC3 websites stealing data

September 23, 2025
Selected AI fraud prevention solutions – September 2025

Selected AI fraud prevention solutions – September 2025

September 22, 2025
Radware tricks ChatGPT’s Deep Research into Gmail data leak

Radware tricks ChatGPT’s Deep Research into Gmail data leak

September 19, 2025
Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

September 19, 2025
Steps to building resilient cybersecurity frameworks

Steps to building resilient cybersecurity frameworks

September 18, 2025

LATEST NEWS

Perplexity Max gets email assistant for Gmail and Outlook

Germany seeks to block Apple, Google from EU’s FiDA

Created by Humans licenses author content to AI firms

Sentinelone finds malterminal malware using OpenAI GPT-4

FBI warns of fake IC3 websites stealing data

WhatsApp Android beta mutes @everyone mentions

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.