An alarming report from KELA Research and Strategy Ltd. today documents a 200% surge in mentions of malicious artificial intelligence tools across cybercrime forums through 2024, illustrating how swiftly cybercriminals are adopting AI tools and tactics. The findings were outlined in KELA’s 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology.
The report relies on data from KELA’s own intelligence-gathering platform, which analyzes cybercrime activity across dark web forums, Telegram channels, and through tracking threat actor behavior.
Beyond the increase in mentions of malicious AI tools, the report notes a 52% rise in discussions around AI jailbreaks over the past year. Cybercrime actors are reported to continually refine AI jailbreaking techniques to bypass security restrictions in public AI systems.
The threat actors are proactively distributing and monetizing what KELA defines as “dark AI tools,” encompassing jailbroken models and specialized malicious applications, such as WormGPT and FraudGPT. These tools are designed to execute cybercrime actions like phishing, forming malware, and financial fraud—which tools are usually carried out on a large scale.
AI systems are designed to remove safety restrictions and add custom functionalities, bringing down the barrier for inexperienced attackers to execute sophisticated attacks at scale.
In the realm of phishing, experts note that threat actors are leveraging generative AI to craft strong social engineering content, enhancing such schemes with deepfake audio and video. This improves the ability of cybercriminals to impersonate executives and mislead employees into approving fraudulent transactions.
KELA also observed that AI has expedited malware development, allowing for the creation of highly evasive ransomware and infostealers. This development poses difficulties for traditional methods of detection and response.
“We are witnessing a seismic shift in the cyber threat landscape,” said Yael Kishon, AI product and research lead at KELA. “Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime.” She emphasized that “organizations must adopt AI-driven defenses to combat this growing threat.” To stay ahead of escalating AI-powered cyber threats, KELA advises investing in employee training.
Hackers now have a toolkit for mass account breaches: It’s called Atlantis AIO
KELA recommends monitoring emerging AI threats and tactics, and implementing AI-powered security measures such as automated intelligence-based red teaming and adversarial emulations for generative AI models.
This report makes it clear that the arms race between cybercrime innovators and security experts mandates nuanced and proactive engagement. If organizations must employ generative AI to counteract generative AI in phishing, so be it, but it’s troubling how outmatched conventional methods will be if bad actors remain unchecked.
The alarming jump in AI-focused online chatter reveals that cybercriminals are not merely adopting AI but perfecting their AI hacking methods. This is even worse than weaponizing external code, because now the foes are effectively reverse-engineering security measures. This is a long game, but cybercriminals are exploiting an open goal at the moment and demand action now.
The rise in the number of jailbreak methods cited significantly outweighs the counter-interventions.
Aided by targeted tools showcasing sophisticated applications in areas like deepfake technology and malware development, these cybercriminals add yet another layer of complexity for law enforcement and security companies.
KELA’s warning can’t be ignored for much longer where developers, whether good or bad, are focusing on creating tool kits to exploit AI software and tackle significant cybersecurity issues. For now, the underground ecosystem focused on AI-powered cybercrime grows.
Despite the fact that more of these tactics rely on social engineering, employees—who are known to be the weakest cybersecurity link—are still not investing in AI tools as frequently as they should. This problem will be intensified when company executives need to access such mechanisms.