A survey of cybersecurity leaders shows 62 percent reported AI-driven attacks against staff in the past year. These incidents, involving prompt-injection and deepfake audio or video, have caused significant business disruptions, including financial and intellectual property losses.
The most common attack vector was deepfake audio calls targeting employees, with 44 percent of businesses reporting at least one incident. Six percent of these occurrences led to business interruption, financial loss, or intellectual property loss. The data indicated that the implementation of an audio-screening service correlated with a reduction in these loss rates to two percent.
Incidents involving video deepfakes were reported by 36 percent of the surveyed organizations. Among these cases, five percent were classified as having caused a serious problem for the business. This represents a persistent, though less frequent, threat compared to audio impersonation attempts.
Chester Wisniewski, global field CISO at security firm Sophos, explained that deepfake audio is becoming highly convincing and inexpensive. “With audio you can kind of generate these calls in real time at this point,” he stated, noting it can deceive a coworker one interacts with occasionally. Wisniewski believes the audio deepfake figures may underestimate the problem, and he found the video figures higher than expected, given that a real-time video fake of a specific individual can cost millions of dollars to produce.
Sophos has observed tactics where a scammer briefly uses a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the feed, and switching to text to continue a social-engineering attack. Generic video fakes are also used to conceal an identity rather than steal one. It has been reported that North Korea earns millions by having its staff use convincing AI fakery to gain employment with Western companies.
Another rising threat is the prompt-injection attack, where malicious instructions are embedded into content processed by an AI system. This can trick the AI into revealing sensitive information or misusing tools, potentially leading to code execution if integrations allow it. According to a Gartner survey, 32 percent of respondents reported such attacks against their applications.
Recent examples illustrate these vulnerabilities. Researchers have shown Google’s Gemini chatbot being used to target a user’s email and smart-home systems.
Anthropic’s Claude language model has also experienced issues with prompt injection. In other research, ChatGPT was successfully tricked into solving CAPTCHAs, which are designed to differentiate human and machine users, and into generating traffic that could be used for denial-of-service-style attacks against websites.