U.S. investigators are deploying artificial intelligence to identify child sexual abuse material created by generative AI. The Department of Homeland Security’s Cyber Crimes Center is leading this effort to differentiate synthetic imagery from depictions of real victims.
The initiative addresses a documented surge in AI-generated abusive content. The Internet Watch Foundation (IWF) has reported a marked increase in this type of imagery appearing on dark-web forums. The material often consists of photorealistic deepfakes, which can incorporate the likenesses of real individuals or of past victims. Because the images are synthetic, investigators face substantial difficulty determining whether a real child was harmed, which complicates efforts to establish criminality.
In response, the DHS Cyber Crimes Center, the primary U.S. investigative body for child exploitation, is experimenting with AI-powered detection tools to counter this misuse of generative technology. The objective is to identify computer-generated content more reliably, strengthening both intervention and investigation in child exploitation cases.