Cybersecurity researchers at SentinelOne have identified a new malware strain, MalTerminal, which uses OpenAI’s GPT-4 to generate malicious code in real time. This capability establishes a new category of threat that integrates large language models directly into malware operations.
The discovery introduces LLM-enabled malware, which SentinelOne describes as a “qualitative shift in adversary tradecraft.” MalTerminal functions as a malware generator. Upon execution, it prompts the operator to select a payload, offering choices such as a ransomware encryptor or a reverse shell. The selection is then sent as a prompt to GPT-4 over its API, which responds by generating Python code tailored to the requested function.
A primary feature of MalTerminal is its evasion capability. The malicious code is not stored statically within the malware file but is created dynamically during runtime. This on-the-fly generation complicates detection for traditional security tools that rely on scanning static files for known malicious signatures. SentinelOne researchers confirmed the GPT-4 integration by discovering Python scripts and a Windows executable that contained hardcoded API keys and specific prompt structures for communicating with the AI.
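Those embedded artifacts, hardcoded API keys and recognizable prompt text, are what gave the malware away, and they suggest a practical hunting angle for defenders. The sketch below is a minimal, illustrative Python example of that idea: flag files that contain both an OpenAI-style key and prompt-like strings. The key pattern and keyword list are assumptions for demonstration, not SentinelOne’s actual detection rules.

```python
# Minimal hunting sketch: flag files that embed both an OpenAI-style API key
# and prompt-like instruction strings, the two indicators described in the report.
# The key regex and keyword list are illustrative assumptions only.
import re
import sys
from pathlib import Path

# OpenAI secret keys have historically started with "sk-"; the exact format
# varies, so this pattern is deliberately loose.
API_KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")

# Hypothetical prompt fragments one might expect in LLM-enabled tooling.
PROMPT_KEYWORDS = [b"reverse shell", b"encryptor", b"you are a", b"generate python"]

def scan_file(path: Path) -> None:
    """Report files containing both an API-key-like string and prompt keywords."""
    data = path.read_bytes()
    has_key = bool(API_KEY_PATTERN.search(data))
    hits = [kw.decode() for kw in PROMPT_KEYWORDS if kw in data.lower()]
    if has_key and hits:
        print(f"[!] {path}: embedded API key plus prompt strings {hits}")

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.is_file():
            scan_file(p)
```

A heuristic like this trades precision for coverage: legitimate developer tools can also embed keys and prompts, so any hits would need manual triage rather than automatic blocking.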
The malware’s development has been dated to before late 2023. Researchers reached this conclusion because the OpenAI API endpoint hardcoded into the malware was deactivated around that time, which would make MalTerminal the earliest known example of LLM-enabled malware. Currently, no evidence suggests MalTerminal was ever deployed in a live attack, indicating it may have been created as a proof of concept or used as a red-teaming tool.
SentinelOne’s report emphasized the challenges posed by this new malware type.
“With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders.”
The report also framed the current situation as an opportunity for the cybersecurity community. “Although the use of LLM-enabled malware is still limited and largely experimental, this early stage of development gives defenders an opportunity to learn from attackers’ mistakes and adjust their approaches accordingly.” The researchers added, “We expect adversaries to adapt their strategies, and we hope further research can build on the work we have presented here.”