Dataconomy

SentinelOne finds MalTerminal malware using OpenAI GPT-4

Researchers identify the first known malware that uses GPT-4 to produce malicious scripts at runtime, evading traditional defenses.

By Emre Çıtak
September 23, 2025
in Cybersecurity

Cybersecurity researchers at SentinelOne have identified a new malware, MalTerminal, which uses OpenAI’s GPT-4 to generate malicious code in real time. This functionality establishes a new category of threat that integrates large language models directly into malware operations.

The discovery introduces LLM-enabled malware, which SentinelOne describes as a “qualitative shift in adversary tradecraft.” MalTerminal functions as a malware generator: upon execution, it prompts the attacker to select a payload, offering choices such as a ransomware encryptor or a reverse shell. The selection is then sent as a prompt to GPT-4, which responds with Python code tailored to the requested payload.

A primary feature of MalTerminal is its evasion capability. The malicious code is not stored statically within the malware file but is created dynamically during runtime. This on-the-fly generation complicates detection for traditional security tools that rely on scanning static files for known malicious signatures. SentinelOne researchers confirmed the GPT-4 integration by discovering Python scripts and a Windows executable that contained hardcoded API keys and specific prompt structures for communicating with the AI.
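The hardcoded API keys and prompt structures SentinelOne recovered are themselves useful hunting artifacts. As a rough illustration of that idea (not SentinelOne's actual tooling), the short Python sketch below flags files that combine an OpenAI-style API key pattern with prompt-like strings referencing the payload choices described above; the key regex and keyword list are assumptions for demonstration, not published indicators of compromise.

```python
# Hypothetical hunting sketch: flag files that embed an OpenAI-style API key
# alongside prompt-like strings, as the MalTerminal samples reportedly did.
# The regex and keyword list are illustrative assumptions, not published IOCs.
import re
import sys
from pathlib import Path

API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9]{20,}")  # OpenAI-style secret key prefix
PROMPT_HINTS = [
    b"chat/completions",      # API endpoint fragment
    b"reverse shell",         # payload wording seen in the reported prompt choices
    b"ransomware",
    b"encryptor",
]

def scan_file(path: Path) -> None:
    data = path.read_bytes()
    keys = API_KEY_RE.findall(data)
    hints = [h.decode() for h in PROMPT_HINTS if h in data]
    if keys and hints:
        print(f"[!] {path}: embedded API key plus prompt artifacts {hints}")

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.is_file():
            try:
                scan_file(p)
            except OSError:
                pass  # skip unreadable files
```

Either artifact alone is a weak signal, but requiring the key pattern and payload-oriented prompt text to appear together mirrors the researchers' observation that both were present in the same MalTerminal files.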


The malware’s development has been dated to before late 2023; researchers reached this conclusion because the API endpoint hardcoded into the malware was deactivated at that time. That dating makes MalTerminal the earliest known example of AI-powered malware. So far, no evidence suggests MalTerminal was ever deployed in a live attack, which indicates it may have been created as a proof-of-concept or as a tool for red teaming exercises.

SentinelOne’s report emphasized the challenges posed by this new malware type.

“With the ability to generate malicious logic and commands at runtime, LLM-enabled malware introduces new challenges for defenders.”

The report also framed the current situation as an opportunity for the cybersecurity community. “Although the use of LLM-enabled malware is still limited and largely experimental, this early stage of development gives defenders an opportunity to learn from attackers’ mistakes and adjust their approaches accordingly.” The researchers added, “We expect adversaries to adapt their strategies, and we hope further research can build on the work we have presented here.”



Tags: gpt-4
