Dataconomy

Google reveals AI-powered malware using LLMs in real time

Identified malware families include PROMPTSTEAL, QUIETVAULT, FRUITSHELL, PROMPTFLUX, and PROMPTLOCK.

by Kerem Gülen
November 12, 2025
in Research

Google’s Threat Intelligence Group (GTIG) has identified a significant escalation in the malicious use of artificial intelligence. Adversaries are no longer just using AI for productivity tasks like drafting phishing emails; they are now deploying novel malware that actively uses AI during an attack to dynamically alter its behavior.

This new phase of AI abuse involves what Google calls “Just-in-Time” AI. For the first time, GTIG has identified malware families that use Large Language Models (LLMs) mid-execution. These tools can dynamically generate malicious scripts or obfuscate their own code on the fly to evade detection, rather than relying on hard-coded functions.

The report details several new malware families that use this technique. “PROMPTSTEAL,” observed in active operations, is a data miner that queries an LLM through the Hugging Face API to generate Windows commands for collecting system information. “QUIETVAULT,” also seen in the wild, is a credential stealer that uses AI CLI tools already installed on the victim’s machine to search for additional secrets. Another family, “FRUITSHELL,” carries hard-coded prompts specifically designed to bypass analysis by LLM-powered security systems. Google also identified experimental malware, including “PROMPTFLUX,” a dropper that uses the Google Gemini API to repeatedly rewrite its own source code to remain hidden, and “PROMPTLOCK,” a proof-of-concept ransomware that dynamically generates malicious scripts at runtime.
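Mechanically, the “just-in-time” pattern GTIG describes reduces to building a prompt at runtime and sending it to a hosted inference endpoint. A minimal, benign sketch of that request-construction step is below; the model name, prompt, and token are hypothetical placeholders, not details from the report, and the code deliberately stops at building the request rather than executing anything a model returns:

```python
import json
import urllib.request

# Hugging Face's hosted Inference API follows this URL pattern.
HF_INFERENCE_URL = "https://api-inference.huggingface.co/models/{model}"

def build_inference_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Construct an HTTP request asking a hosted LLM to generate text
    from a prompt assembled at runtime (the core of the JIT pattern)."""
    payload = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        HF_INFERENCE_URL.format(model=model),
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical model and prompt, for illustration only.
req = build_inference_request("some-org/some-model", "List installed programs", "hf_dummy")
print(req.full_url)  # https://api-inference.huggingface.co/models/some-org/some-model
```

For defenders, the useful observation is the inverse of this sketch: malware built this way produces outbound traffic to public inference endpoints from processes that have no legitimate reason to contact them, which is a detectable network signature.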


The GTIG report also found that threat actors are adapting “social engineering” techniques to bypass AI safety guardrails. Google observed actors posing as students in a “capture-the-flag” competition or as cybersecurity researchers to persuade Gemini to provide information, such as help with tool development, that would otherwise be blocked.

State-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to use AI like Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to developing command and control (C2) infrastructure. Furthermore, Google notes that the underground marketplace for illicit AI tools has matured in 2025, offering multifunctional tools that lower the barrier to entry for less sophisticated attackers.

Google stated it is actively disrupting this activity by disabling projects and accounts associated with these actors. The company emphasized it is continuously improving its models, including Gemini, to make them less susceptible to misuse and is applying the intelligence to strengthen its security classifiers.



Tags: AI, Google, LLM, Malware

