Google reveals AI-powered malware using LLMs in real time

Identified malware families include PROMPTSTEAL, QUIETVAULT, FRUITSHELL, PROMPTFLUX, and PROMPTLOCK.

by Kerem Gülen
November 12, 2025
in Research

Google’s Threat Intelligence Group (GTIG) has identified a significant escalation in the malicious use of artificial intelligence. Adversaries are no longer just using AI for productivity tasks like drafting phishing emails; they are now deploying novel malware that actively uses AI during an attack to dynamically alter its behavior.

This new phase of AI abuse involves what Google calls “Just-in-Time” AI. For the first time, GTIG has identified malware families that use Large Language Models (LLMs) mid-execution. These tools can dynamically generate malicious scripts or obfuscate their own code on the fly to evade detection, rather than relying on hard-coded functions.

The report details several new malware families that use this technique. “PROMPTSTEAL,” observed in active operations, is a data miner that queries an LLM through the Hugging Face API to generate Windows commands for collecting system information. “QUIETVAULT,” also seen in the wild, is a credential stealer that uses AI CLI tools already installed on the victim’s machine to search for additional secrets. Another family, “FRUITSHELL,” contains hard-coded prompts specifically designed to bypass analysis by LLM-powered security systems. Google also identified experimental malware, including “PROMPTFLUX,” a dropper that uses the Google Gemini API to repeatedly rewrite its own source code to stay hidden, and “PROMPTLOCK,” a proof-of-concept ransomware that dynamically generates malicious scripts at runtime.
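
What distinguishes this “just-in-time” pattern is the runtime round trip to a hosted model: the text the program acts on does not exist until the running process asks an LLM for it, so there is nothing fixed for a static signature to match. The sketch below is a minimal, benign illustration of that structural pattern only, written in Python against the public Hugging Face Inference API; the endpoint shape, placeholder model name, and HF_API_TOKEN environment variable are assumptions for illustration, and unlike the samples GTIG describes it merely prints the model’s reply rather than acting on it.

```python
# Minimal, benign sketch of a runtime ("just-in-time") LLM query.
# Assumptions: the classic Hugging Face Inference API endpoint shape,
# a placeholder model name, and an HF_API_TOKEN environment variable.
# This sketch only prints the reply; it deliberately executes nothing.
import os
import requests

MODEL_ID = "Qwen/Qwen2.5-7B-Instruct"  # placeholder model for illustration
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"


def query_llm_at_runtime(prompt: str) -> str:
    """Send a prompt to a hosted LLM and return the generated text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"},
        json={"inputs": prompt, "parameters": {"max_new_tokens": 128}},
        timeout=30,
    )
    resp.raise_for_status()
    # The classic text-generation endpoint returns a list of
    # {"generated_text": ...} objects; the exact shape can vary by model.
    return resp.json()[0]["generated_text"]


if __name__ == "__main__":
    # The defining trait: this text is produced mid-execution,
    # not shipped hard-coded in the program.
    print(query_llm_at_runtime("Explain why runtime-generated code is hard to signature."))
```

For defenders, that round trip is itself a behavioral signal: outbound requests to LLM API endpoints from processes that have no legitimate reason to make them.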

The GTIG report also found that threat actors are adapting social-engineering techniques to bypass AI safety guardrails. Google observed actors posing as students in a “capture-the-flag” competition, or as cybersecurity researchers, to coax Gemini into providing assistance, such as help with tool development, that it would otherwise refuse.

State-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to use AI tools such as Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to developing command and control (C2) infrastructure. Furthermore, Google notes that the underground marketplace for illicit AI tools matured in 2025, offering multifunctional tools that lower the barrier to entry for less sophisticated attackers.

Google stated it is actively disrupting this activity by disabling projects and accounts associated with these actors. The company emphasized that it is continuously improving its models, including Gemini, to make them less susceptible to misuse, and that it is applying this threat intelligence to strengthen its security classifiers.


Tags: AI, Google, LLM, Malware
