Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Sophos: AI deepfakes hit 62% of firms last year

New survey reveals deepfake audio calls as the most common threat vector, while prompt-injection exploits are emerging as a significant risk to business security and IP.

byAytun Çelebi
September 24, 2025
in Research

A survey of cybersecurity leaders shows 62 percent reported AI-driven attacks against staff in the past year. These incidents, involving prompt-injection and deepfake audio or video, have caused significant business disruptions, including financial and intellectual property losses.

The most common attack vector was deepfake audio calls targeting employees, with 44 percent of businesses reporting at least one incident. Six percent of these occurrences led to business interruption, financial loss, or intellectual property loss. The data indicated that the implementation of an audio-screening service correlated with a reduction in these loss rates to two percent.

Incidents involving video deepfakes were reported by 36 percent of the surveyed organizations. Among these cases, five percent were classified as having caused a serious problem for the business. This represents a persistent, though less frequent, threat compared to audio impersonation attempts.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

Chester Wisniewski, global field CISO at security firm Sophos, explained that deepfake audio is becoming highly convincing and inexpensive. “With audio you can kind of generate these calls in real time at this point,” he stated, noting it can deceive a coworker one interacts with occasionally. Wisniewski believes the audio deepfake figures may underestimate the problem, and he found the video figures higher than expected, given that a real-time video fake of a specific individual can cost millions of dollars to produce.

Sophos has observed tactics where a scammer briefly uses a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the feed, and switching to text to continue a social-engineering attack. Generic video fakes are also used to conceal an identity rather than steal one. It has been reported that North Korea earns millions by having its staff use convincing AI fakery to gain employment with Western companies.

Another rising threat is the prompt-injection attack, where malicious instructions are embedded into content processed by an AI system. This can trick the AI into revealing sensitive information or misusing tools, potentially leading to code execution if integrations allow it. According to a Gartner survey, 32 percent of respondents reported such attacks against their applications.

Recent examples illustrate these vulnerabilities. Researchers have shown Google’s Gemini chatbot being used to target a user’s email and smart-home systems.

Anthropic’s Claude language model has also experienced issues with prompt injection. In other research, ChatGPT was successfully tricked into solving CAPTCHAs, which are designed to differentiate human and machine users, and into generating traffic that could be used for denial-of-service-style attacks against websites.


Featured image credit

Tags: deepfakeFeaturedResearch

Related Posts

71% of workers are using rogue AI tools at work, Microsoft warns

71% of workers are using rogue AI tools at work, Microsoft warns

October 14, 2025
Google taught your voice assistant to understand what you mean

Google taught your voice assistant to understand what you mean

October 14, 2025
Apple researchers just made AI text generation 128x faster

Apple researchers just made AI text generation 128x faster

October 13, 2025
Have astronomers finally found the universe’s first dark stars?

Have astronomers finally found the universe’s first dark stars?

October 10, 2025
KPMG: CEOs prioritize AI investment in 2025

KPMG: CEOs prioritize AI investment in 2025

October 9, 2025
Physicists build and verify a quantum lie detector for large systems

Physicists build and verify a quantum lie detector for large systems

October 8, 2025

LATEST NEWS

NVTS stock skyrockets 27%: What is the correlation between Navitas and Nvidia

ChatGPT Android beta includes direct messaging

HP revealed a “League of Legends laptop” for $1,999

Samsung is not done with Bixby after all

Slack’s next-gen Slackbot aims to give “every employee AI superpowers”

Google integrates its viral Nano Banana AI into everyday tools

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.