Google reveals AI-powered malware using LLMs in real time

Identified malware families include PROMPTSTEAL, QUIETVAULT, FRUITSHELL, PROMPTFLUX, and PROMPTLOCK.

by Kerem Gülen
November 12, 2025
in Research

Google’s Threat Intelligence Group (GTIG) has identified a significant escalation in the malicious use of artificial intelligence. Adversaries are no longer just using AI for productivity tasks like drafting phishing emails; they are now deploying novel malware that actively uses AI during an attack to dynamically alter its behavior.

This new phase of AI abuse involves what Google calls “Just-in-Time” AI. For the first time, GTIG has identified malware families that use Large Language Models (LLMs) mid-execution. These tools can dynamically generate malicious scripts or obfuscate their own code on the fly to evade detection, rather than relying on hard-coded functions.

The report details several new malware families using this technique. “PROMPTSTEAL,” which was observed in active operations, is a data miner that queries an LLM via the Hugging Face API to generate Windows commands for collecting system information. “QUIETVAULT,” also seen in the wild, is a credential stealer that uses AI CLI tools installed on the victim’s machine to search for additional secrets. Another family, “FRUITSHELL,” contains hard-coded prompts specifically designed to bypass analysis by LLM-powered security systems. Google also identified experimental malware, including “PROMPTFLUX,” a dropper that uses the Google Gemini API to repeatedly rewrite its own source code to remain hidden, and “PROMPTLOCK,” a proof-of-concept ransomware that dynamically generates malicious scripts at runtime.
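
The common thread across these families is that the sample itself carries either a hard-coded LLM API endpoint or an embedded prompt. As a purely defensive illustration, the Python sketch below shows how an analyst might triage a directory of suspicious files by scanning their raw bytes for such strings. The endpoints and prompt keywords listed are assumptions chosen for the example, not indicators published by GTIG, and legitimate AI tooling will also match them, so a hit is a starting point for analysis rather than a verdict.

import os
import sys

# Hypothetical indicator strings for illustration only; a real hunt should be
# driven by published indicators or your own threat intelligence, not this list.
SUSPECT_STRINGS = [
    b"api-inference.huggingface.co",        # hosted LLM inference endpoint
    b"generativelanguage.googleapis.com",   # Gemini API endpoint
    b"generate a Windows command",          # example of an embedded task prompt
    b"rewrite the following source code",   # example of a self-modification prompt
]

def scan_file(path):
    """Return the suspect strings found in one file's raw bytes."""
    try:
        with open(path, "rb") as handle:
            data = handle.read()
    except OSError:
        return []
    return [s for s in SUSPECT_STRINGS if s in data]

def scan_tree(root):
    """Walk a directory tree and report files containing any suspect string."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            hits = scan_file(path)
            if hits:
                print(path, [s.decode() for s in hits])

if __name__ == "__main__":
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")

In practice this kind of string matching would live in a YARA rule or an EDR query rather than a standalone script, but the underlying idea, hunting for embedded prompts and LLM endpoints inside files, is the same.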

The GTIG report also found that threat actors are adapting “social engineering” techniques to bypass AI safety guardrails. Google observed actors posing as students in a “capture-the-flag” competition or as cybersecurity researchers to persuade Gemini to provide information, such as help with tool development, that would otherwise be blocked.

State-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to use AI tools such as Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to developing command and control (C2) infrastructure. Furthermore, Google notes that the underground marketplace for illicit AI tools has matured in 2025, offering multifunctional tools that lower the barrier to entry for less sophisticated attackers.

Google stated it is actively disrupting this activity by disabling projects and accounts associated with these actors. The company emphasized it is continuously improving its models, including Gemini, to make them less susceptible to misuse and is applying the intelligence to strengthen its security classifiers.



Tags: AI, Google, LLM, Malware
