Dataconomy

Google reveals AI-powered malware using LLMs in real time

Identified malware families include PROMPTSTEAL, QUIETVAULT, FRUITSHELL, PROMPTFLUX, and PROMPTLOCK.

by Kerem Gülen
November 12, 2025
in Research

Google’s Threat Intelligence Group (GTIG) has identified a significant escalation in the malicious use of artificial intelligence. Adversaries are no longer just using AI for productivity tasks like drafting phishing emails; they are now deploying novel malware that actively uses AI during an attack to dynamically alter its behavior.

This new phase of AI abuse involves what Google calls “Just-in-Time” AI. For the first time, GTIG has identified malware families that use Large Language Models (LLMs) mid-execution. These tools can dynamically generate malicious scripts or obfuscate their own code on the fly to evade detection, rather than relying on hard-coded functions.

The report details several new malware families using this technique. “PROMPTSTEAL,” observed in active operations, is a data miner that queries an LLM via the Hugging Face API to generate Windows commands for collecting system information. “QUIETVAULT,” also seen in the wild, is a credential stealer that uses AI command-line tools already installed on the victim’s machine to hunt for additional secrets. Another family, “FRUITSHELL,” contains hard-coded prompts designed to bypass analysis by LLM-powered security systems. Google also identified experimental malware: “PROMPTFLUX,” a dropper that uses the Google Gemini API to repeatedly rewrite its own source code to stay hidden, and “PROMPTLOCK,” a proof-of-concept ransomware that generates malicious scripts dynamically at runtime.


The GTIG report also found that threat actors are adapting social engineering techniques to bypass AI safety guardrails. Google observed actors posing as students in a “capture-the-flag” competition or as cybersecurity researchers in order to persuade Gemini to provide information that would otherwise be blocked, such as help with tool development.

State-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to use AI tools such as Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to developing command and control (C2) infrastructure. Google also notes that the underground marketplace for illicit AI tools matured in 2025, offering multifunctional tools that lower the barrier to entry for less sophisticated attackers.

Google stated it is actively disrupting this activity by disabling projects and accounts associated with these actors. The company emphasized that it is continuously improving its models, including Gemini, to make them less susceptible to misuse, and that it is applying this intelligence to strengthen its security classifiers.


Tags: AI, Google, LLM, Malware


COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
