Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

KELA: Malicious AI chatter outbumps cybercrime chatter by 200%

Beyond the increase in mentions of malicious AI tools, the report notes a 52% rise in discussions around AI jailbreaks over the past year.

byKerem Gülen
March 26, 2025
in Cybersecurity, News
Home News Cybersecurity

An alarming report from KELA Research and Strategy Ltd. today documents a 200% surge in mentions of malicious artificial intelligence tools across cybercrime forums through 2024, illustrating how swiftly cybercriminals are adopting AI tools and tactics. The findings were outlined in KELA’s 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology.

The report relies on data from KELA’s own intelligence-gathering platform, which analyzes cybercrime activity across dark web forums, Telegram channels, and through tracking threat actor behavior.

Beyond the increase in mentions of malicious AI tools, the report notes a 52% rise in discussions around AI jailbreaks over the past year. Cybercrime actors are reported to continually refine AI jailbreaking techniques to bypass security restrictions in public AI systems.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The threat actors are proactively distributing and monetizing what KELA defines as “dark AI tools,” encompassing jailbroken models and specialized malicious applications, such as WormGPT and FraudGPT. These tools are designed to execute cybercrime actions like phishing, forming malware, and financial fraud—which tools are usually carried out on a large scale.

AI systems are designed to remove safety restrictions and add custom functionalities, bringing down the barrier for inexperienced attackers to execute sophisticated attacks at scale.

In the realm of phishing, experts note that threat actors are leveraging generative AI to craft strong social engineering content, enhancing such schemes with deepfake audio and video. This improves the ability of cybercriminals to impersonate executives and mislead employees into approving fraudulent transactions.

KELA also observed that AI has expedited malware development, allowing for the creation of highly evasive ransomware and infostealers. This development poses difficulties for traditional methods of detection and response.

“We are witnessing a seismic shift in the cyber threat landscape,” said Yael Kishon, AI product and research lead at KELA. “Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime.” She emphasized that “organizations must adopt AI-driven defenses to combat this growing threat.” To stay ahead of escalating AI-powered cyber threats, KELA advises investing in employee training.


Hackers now have a toolkit for mass account breaches: It’s called Atlantis AIO


KELA recommends monitoring emerging AI threats and tactics, and implementing AI-powered security measures such as automated intelligence-based red teaming and adversarial emulations for generative AI models.

This report makes it clear that the arms race between cybercrime innovators and security experts mandates nuanced and proactive engagement. If organizations must employ generative AI to counteract generative AI in phishing, so be it, but it’s troubling how outmatched conventional methods will be if bad actors remain unchecked.

The alarming jump in AI-focused online chatter reveals that cybercriminals are not merely adopting AI but perfecting their AI hacking methods. This is even worse than weaponizing external code, because now the foes are effectively reverse-engineering security measures. This is a long game, but cybercriminals are exploiting an open goal at the moment and demand action now.

The rise in the number of jailbreak methods cited significantly outweighs the counter-interventions.

Aided by targeted tools showcasing sophisticated applications in areas like deepfake technology and malware development, these cybercriminals add yet another layer of complexity for law enforcement and security companies.

KELA’s warning can’t be ignored for much longer where developers, whether good or bad, are focusing on creating tool kits to exploit AI software and tackle significant cybersecurity issues. For now, the underground ecosystem focused on AI-powered cybercrime grows.

Despite the fact that more of these tactics rely on social engineering, employees—who are known to be the weakest cybersecurity link—are still not investing in AI tools as frequently as they should. This problem will be intensified when company executives need to access such mechanisms.


Featured image credit

Tags: AI

Related Posts

Zoom announces AI Companion 3.0 at Zoomtopia

Zoom announces AI Companion 3.0 at Zoomtopia

September 19, 2025
Google Cloud adds Lovable and Windsurf as AI coding customers

Google Cloud adds Lovable and Windsurf as AI coding customers

September 19, 2025
Radware tricks ChatGPT’s Deep Research into Gmail data leak

Radware tricks ChatGPT’s Deep Research into Gmail data leak

September 19, 2025
Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

September 19, 2025
Roblox game Steal a Brainrot removes AI-generated character, sparking fan backlash and a debate over copyright

Roblox game Steal a Brainrot removes AI-generated character, sparking fan backlash and a debate over copyright

September 19, 2025
DeepSeek releases R1 model trained for 4,000 on 512 H800 GPUs

DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

September 19, 2025

LATEST NEWS

Zoom announces AI Companion 3.0 at Zoomtopia

Google Cloud adds Lovable and Windsurf as AI coding customers

Radware tricks ChatGPT’s Deep Research into Gmail data leak

Elon Musk’s xAI chatbot Grok exposed hundreds of thousands of private user conversations

Roblox game Steal a Brainrot removes AI-generated character, sparking fan backlash and a debate over copyright

DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Glossary
    • Whitepapers
  • Newsletter
  • + More
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.