As deepfake technology advances, concerns over misinformation and identity theft are rising, highlighted by a recent iProov study revealing that most individuals struggle to distinguish between real and AI-generated content.
The study involved 2,000 participants across the UK and US, exposing them to a mix of genuine and deepfake images and videos. Alarmingly, only 0.1% of participants, just two of the 2,000, accurately distinguished the real stimuli from the deepfakes.
Older adults emerged as particularly vulnerable to AI-generated deception. Approximately 30% of participants aged 55-64 and 39% of those over 65 reported they had never heard of deepfakes prior to the study. And although younger participants (aged 18-34) were more confident in their ability to detect deepfakes, that confidence did not translate into better detection accuracy.
Deepfake detection challenges
The study indicated that detecting deepfake videos was significantly more challenging than identifying deepfake images: participants were 36% less likely to correctly identify a synthetic video than a synthetic image. This raises particular concerns about video-based fraud, such as impersonation during video calls.
Social media platforms were identified as major sources of deepfake content. Nearly half of the participants (49%) cited Meta platforms, including Facebook and Instagram, as the most common sites for deepfakes, while 47% pointed to TikTok.
Andrew Bud, founder and CEO of iProov, commented on the findings, noting the heightened vulnerability of both organizations and consumers to identity fraud in the deepfake era. He stated, “Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting personal information and financial security at risk.” Compounding the problem, the study found that only 20% of respondents would report a suspected deepfake when they encountered one online.
As deepfakes become increasingly sophisticated, iProov suggests that human perception alone is insufficient for reliable detection. Bud emphasized the necessity for biometric security solutions with liveness detection to combat the threat posed by convincing deepfake material.
iProov’s research underscores a pressing need for organizations to protect their customers by integrating robust security measures. Bud believes that facial biometrics with liveness detection offer a trustworthy authentication factor that prioritizes both security and individual control.
According to the study, only 22% of consumers had heard of deepfakes before participating. Many also exhibited significant overconfidence in their detection skills: more than 60% believed they could identify deepfakes, even though a majority performed poorly. This false sense of security was especially prevalent among younger adults.
The findings also indicated a decline in trust in social media platforms once users became aware of deepfakes, with 49% reporting reduced trust. Meanwhile, 74% of participants worried about the societal ramifications of deepfakes. The spread of misinformation was the top concern, cited by 68% of respondents, and this apprehension was notably strong among older generations, with up to 82% of individuals aged 55 and above expressing fears about the dissemination of false information.
Fewer than a third of those surveyed (29%) said they would take no action upon encountering a suspected deepfake. This lack of engagement stems partly from unfamiliarity: 48% of respondents said they do not know how to report deepfakes, while a quarter admitted indifference toward suspected deepfakes. Only 11% critically analyze sources and context to judge the authenticity of information, leaving many people highly susceptible to deception.
Professor Edgar Whitley, a digital identity expert, warned that organizations cannot rely solely on human judgment to detect deepfakes and must explore alternative methods of user authentication.
The growing prevalence of deepfakes poses significant challenges in the digital landscape. iProov’s 2024 Threat Intelligence Report indicated a staggering 704% increase in face swaps, emphasizing their role as tools for cybercriminals seeking unauthorized access to sensitive data. This trend highlights the urgent need for enhanced awareness and technological solutions to thwart deepfake-related threats.
Featured image credit: Kerem Gülen/Ideogram