Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Sophos: AI deepfakes hit 62% of firms last year

New survey reveals deepfake audio calls as the most common threat vector, while prompt-injection exploits are emerging as a significant risk to business security and IP.

byAytun Çelebi
September 24, 2025
in Research
Home Research
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

A survey of cybersecurity leaders shows 62 percent reported AI-driven attacks against staff in the past year. These incidents, involving prompt-injection and deepfake audio or video, have caused significant business disruptions, including financial and intellectual property losses.

The most common attack vector was deepfake audio calls targeting employees, with 44 percent of businesses reporting at least one incident. Six percent of these occurrences led to business interruption, financial loss, or intellectual property loss. The data indicated that the implementation of an audio-screening service correlated with a reduction in these loss rates to two percent.

Incidents involving video deepfakes were reported by 36 percent of the surveyed organizations. Among these cases, five percent were classified as having caused a serious problem for the business. This represents a persistent, though less frequent, threat compared to audio impersonation attempts.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Chester Wisniewski, global field CISO at security firm Sophos, explained that deepfake audio is becoming highly convincing and inexpensive. “With audio you can kind of generate these calls in real time at this point,” he stated, noting it can deceive a coworker one interacts with occasionally. Wisniewski believes the audio deepfake figures may underestimate the problem, and he found the video figures higher than expected, given that a real-time video fake of a specific individual can cost millions of dollars to produce.

Sophos has observed tactics where a scammer briefly uses a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the feed, and switching to text to continue a social-engineering attack. Generic video fakes are also used to conceal an identity rather than steal one. It has been reported that North Korea earns millions by having its staff use convincing AI fakery to gain employment with Western companies.

Another rising threat is the prompt-injection attack, where malicious instructions are embedded into content processed by an AI system. This can trick the AI into revealing sensitive information or misusing tools, potentially leading to code execution if integrations allow it. According to a Gartner survey, 32 percent of respondents reported such attacks against their applications.

Recent examples illustrate these vulnerabilities. Researchers have shown Google’s Gemini chatbot being used to target a user’s email and smart-home systems.

Anthropic’s Claude language model has also experienced issues with prompt injection. In other research, ChatGPT was successfully tricked into solving CAPTCHAs, which are designed to differentiate human and machine users, and into generating traffic that could be used for denial-of-service-style attacks against websites.


Featured image credit

Tags: deepfakeFeaturedResearch

Related Posts

Sodium-ion batteries edge closer to fast charging as researchers crack ion bottlenecks

Sodium-ion batteries edge closer to fast charging as researchers crack ion bottlenecks

December 19, 2025
USENIX study finds AI extensions collect medical, banking data

USENIX study finds AI extensions collect medical, banking data

December 15, 2025
LLMs show distinct cultural biases in English vs Chinese prompts

LLMs show distinct cultural biases in English vs Chinese prompts

December 13, 2025
Catching the  trillion ghost: AI is rewriting the rules of financial crime

Catching the $2 trillion ghost: AI is rewriting the rules of financial crime

December 12, 2025
AI mirrors the brain’s processing and is quietly changing human vocabulary

AI mirrors the brain’s processing and is quietly changing human vocabulary

December 11, 2025
New robot builds furniture from voice commands in 5 minutes

New robot builds furniture from voice commands in 5 minutes

December 8, 2025

LATEST NEWS

SpaceX loses control of Starlink satellite 35956

Google-featured VPN extension harvested and sold ChatGPT and Claude conversations

Tesla files patent to integrate Starlink antennas directly into vehicle roofs

The Wealth Management Trap: Why “Digital” Isn’t Enough to Win in 2026

LG backs down on Copilot shortcut after TV users push back

Texas Attorney General Paxton sues Sony, Samsung, LG, Hisense, TCL over TV ACR

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI tools
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.