Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Sophos: AI deepfakes hit 62% of firms last year

New survey reveals deepfake audio calls as the most common threat vector, while prompt-injection exploits are emerging as a significant risk to business security and IP.

byAytun Çelebi
September 24, 2025
in Research
Home Research

A survey of cybersecurity leaders shows 62 percent reported AI-driven attacks against staff in the past year. These incidents, involving prompt-injection and deepfake audio or video, have caused significant business disruptions, including financial and intellectual property losses.

The most common attack vector was deepfake audio calls targeting employees, with 44 percent of businesses reporting at least one incident. Six percent of these occurrences led to business interruption, financial loss, or intellectual property loss. The data indicated that the implementation of an audio-screening service correlated with a reduction in these loss rates to two percent.

Incidents involving video deepfakes were reported by 36 percent of the surveyed organizations. Among these cases, five percent were classified as having caused a serious problem for the business. This represents a persistent, though less frequent, threat compared to audio impersonation attempts.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Chester Wisniewski, global field CISO at security firm Sophos, explained that deepfake audio is becoming highly convincing and inexpensive. “With audio you can kind of generate these calls in real time at this point,” he stated, noting it can deceive a coworker one interacts with occasionally. Wisniewski believes the audio deepfake figures may underestimate the problem, and he found the video figures higher than expected, given that a real-time video fake of a specific individual can cost millions of dollars to produce.

Sophos has observed tactics where a scammer briefly uses a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the feed, and switching to text to continue a social-engineering attack. Generic video fakes are also used to conceal an identity rather than steal one. It has been reported that North Korea earns millions by having its staff use convincing AI fakery to gain employment with Western companies.

Another rising threat is the prompt-injection attack, where malicious instructions are embedded into content processed by an AI system. This can trick the AI into revealing sensitive information or misusing tools, potentially leading to code execution if integrations allow it. According to a Gartner survey, 32 percent of respondents reported such attacks against their applications.

Recent examples illustrate these vulnerabilities. Researchers have shown Google’s Gemini chatbot being used to target a user’s email and smart-home systems.

Anthropic’s Claude language model has also experienced issues with prompt injection. In other research, ChatGPT was successfully tricked into solving CAPTCHAs, which are designed to differentiate human and machine users, and into generating traffic that could be used for denial-of-service-style attacks against websites.


Featured image credit

Tags: deepfakeFeaturedResearch

Related Posts

Kaist creates self-correcting memristor for AI chips

Kaist creates self-correcting memristor for AI chips

September 24, 2025
Delphi-2M AI predicts 1000+ diseases using over 400k medical records

Delphi-2M AI predicts 1000+ diseases using over 400k medical records

September 23, 2025
Deepmind details AGI safety via frontier safety framework

Deepmind details AGI safety via frontier safety framework

September 23, 2025
Auditability at scale: Why runtime-first UI architectures are redefining regulated applications

Auditability at scale: Why runtime-first UI architectures are redefining regulated applications

September 22, 2025
Radware tricks ChatGPT’s Deep Research into Gmail data leak

Radware tricks ChatGPT’s Deep Research into Gmail data leak

September 19, 2025
OpenAI research finds AI models can scheme and deliberately deceive users

OpenAI research finds AI models can scheme and deliberately deceive users

September 19, 2025

LATEST NEWS

The affordable Google AI Plus expands to 40 new countries

Cloudflare open-sources VibeSDK AI app platform

Greece used Predator spyware on ministers and military

WhatsApp rolls out in-app message translation

Judge: Amazon violated ROSCA with Prime sign-up tactics

Xiaomi to launch 17, 17 Pro, and 17 Pro Max series in China on September 25

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.