Dataconomy

AI is now the number one channel for data exfiltration in the enterprise

A new LayerX report reveals that AI platforms like ChatGPT and Copilot are already the leading channel for enterprise data loss, driven by unmanaged accounts and copy-paste workflows.

By Emre Çıtak
October 8, 2025
in Research, Artificial Intelligence

Artificial intelligence has become the single largest uncontrolled channel for corporate data exfiltration, surpassing both shadow SaaS and unmanaged file sharing, according to a new report from AI and browser security company LayerX. The research, based on real-world enterprise browsing telemetry, indicates that the primary risk from AI in the enterprise is not a future threat, but a present-day reality unfolding in everyday workflows.

Sensitive corporate data is already flowing into generative AI tools like ChatGPT, Claude, and Copilot at high rates, primarily through unmanaged personal accounts and the copy-and-paste function.

The rapid, ungoverned adoption of AI

AI tools have achieved a level of adoption in just two years that took other technologies decades to reach. Nearly half of all enterprise employees (45%) already use generative AI, with ChatGPT alone reaching 43% penetration. AI now accounts for 11% of all enterprise application activity, rivaling file-sharing and office productivity apps.

This growth has largely occurred without corresponding governance. The report found that 67% of AI usage happens through unmanaged personal accounts, leaving security teams with no visibility into which employees are using which tools or what data is being shared.

Sensitive data is leaking through files and copy-paste

The research uncovered alarming trends in how sensitive data is being handled with AI platforms.

  • File uploads: 40% of files uploaded into generative AI tools contain personally identifiable information (PII) or payment card industry (PCI) data. Nearly four in ten of these uploads are done using personal accounts.
  • Copy-and-paste: The primary channel for data leakage is the copy-paste function. 77% of employees paste data into generative AI tools, and 82% of that activity comes from unmanaged personal accounts. On average, employees paste sensitive data into these tools via personal accounts at least three times per day.

The report identifies copy-and-paste into generative AI as the number one vector for corporate data leaving enterprise control. Traditional security programs, focused on scanning file attachments and blocking unauthorized uploads, miss this channel entirely.
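An action-centric control of the kind the report calls for would inspect the paste action itself rather than file attachments. The sketch below is a minimal illustration of that idea; the regex patterns, function names, and blocking policy are hypothetical, not taken from the report or from any LayerX product.

```python
import re

# Hypothetical PII/PCI patterns an action-centric DLP check might scan for.
# Real products use far more robust detection (checksums, context, ML models).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_paste(text: str) -> list[str]:
    """Return the sensitive-data categories found in a pasted snippet."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def should_block(text: str, destination_is_managed: bool) -> bool:
    """Block pastes of sensitive data into unmanaged (personal) accounts."""
    return bool(classify_paste(text)) and not destination_is_managed
```

The key design point is that the check fires on the paste event, so it catches file-less leakage that attachment scanning never sees.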

Other major security blind spots

The report highlights two other critical areas where corporate data is at risk.

  • Non-federated logins: Even when employees use corporate credentials for high-risk platforms like CRM and ERP systems, they overwhelmingly bypass single sign-on (SSO). 71% of CRM logins and 83% of ERP logins are non-federated, making a corporate login functionally the same as a personal one from a security visibility standpoint.
  • Instant messaging: 87% of enterprise chat usage occurs through unmanaged personal accounts, and 62% of users paste PII or PCI data into them.
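Surfacing non-federated logins from browsing telemetry can be as simple as checking whether the authentication flow ever passed through the corporate identity provider. The sketch below illustrates that check; the domain names and event shape are invented for illustration, not drawn from the report.

```python
# Hypothetical corporate identity-provider domains; a login is "federated"
# only if its redirect chain touched one of them.
CORPORATE_IDP_DOMAINS = {"sso.example-corp.com", "login.microsoftonline.com"}

def is_federated(auth_chain: list[str]) -> bool:
    """auth_chain: domains visited during the login redirect flow."""
    return any(domain in CORPORATE_IDP_DOMAINS for domain in auth_chain)

def audit(logins: list[dict]) -> float:
    """Fraction of logins that bypassed SSO (i.e., were non-federated)."""
    if not logins:
        return 0.0
    non_federated = sum(1 for ev in logins if not is_federated(ev["auth_chain"]))
    return non_federated / len(logins)
```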

Recommendations for enterprise security in the AI era

The report offers several clear recommendations for security leaders.

  • Treat AI security as a core enterprise category, not an emerging one, with monitoring for uploads, prompts, and copy-paste flows.
  • Shift from a file-centric security model to an action-centric one that accounts for file-less methods like copy-paste and chat.
  • Restrict the use of unmanaged personal accounts and enforce federated logins for all corporate applications.
  • Prioritize the highest-risk application categories for the tightest controls: AI, chat, and file storage.
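The last two recommendations could be encoded as a simple per-category policy table. The sketch below is a hypothetical illustration of that approach, not a description of any real product's configuration; the category names follow the report's highest-risk list (AI, chat, file storage).

```python
# Hypothetical action-centric policy table: tightest controls on the
# highest-risk app categories, a conservative default for everything else.
POLICY = {
    "ai":           {"require_sso": True, "inspect_paste": True,  "inspect_uploads": True},
    "chat":         {"require_sso": True, "inspect_paste": True,  "inspect_uploads": True},
    "file_storage": {"require_sso": True, "inspect_paste": False, "inspect_uploads": True},
    "default":      {"require_sso": True, "inspect_paste": False, "inspect_uploads": False},
}

def controls_for(category: str) -> dict:
    """Look up the controls for an app category, falling back to the default."""
    return POLICY.get(category, POLICY["default"])
```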

The findings paint a clear picture: the enterprise security perimeter has shifted to the browser, where employees fluidly move sensitive data between sanctioned and unsanctioned tools. The report concludes that if security teams do not adapt to this new reality, AI will not just shape the future of work, but also the future of data breaches.


Tags: AI, Featured, Research

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
