Dataconomy

AI agents can be controlled by malicious commands hidden in images

Subtle pixel manipulations in wallpapers or online images could allow attackers to issue hidden commands, highlighting urgent security risks as AI agents become widespread.

by Kerem Gülen
September 15, 2025
in Research, Artificial Intelligence

A 2025 study from the University of Oxford has revealed a security vulnerability in AI agents, which are expected to be widely used within two years. Unlike chatbots, these agents can take direct actions on a user’s computer, such as opening tabs or filling out forms. The research shows how attackers can embed invisible commands in images to take control of these agents.

How the image-based attack works

Researchers demonstrated that by making subtle changes to the pixels in an image—such as a desktop wallpaper, an online ad, or a social media post—they could embed malicious commands. While these alterations are invisible to the human eye, an AI agent can interpret them as instructions.
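The idea can be illustrated with a minimal sketch (hypothetical, not the researchers' actual method): a payload is encoded as tiny per-pixel offsets, bounded so tightly that the change is imperceptible to a human viewer. All names and values here are illustrative.

```python
EPSILON = 2  # max per-channel change out of 255 -- far below what humans notice

def embed_payload(pixels, payload_bits):
    """Nudge each pixel up or down by at most EPSILON to encode one bit."""
    stego = []
    for value, bit in zip(pixels, payload_bits):
        delta = EPSILON if bit else -EPSILON
        stego.append(max(0, min(255, value + delta)))
    return stego

def max_visible_change(original, modified):
    return max(abs(a - b) for a, b in zip(original, modified))

image = [120, 64, 200, 33, 180, 90]   # toy grayscale pixel values
bits  = [1, 0, 1, 1, 0, 0]            # hypothetical command payload
stego = embed_payload(image, bits)

# The perturbation is bounded, so the image looks unchanged to a person,
# yet a model sensitive to these offsets can respond to them.
assert max_visible_change(image, stego) <= EPSILON
```

Real attacks do not encode bits this literally; they optimize perturbations against the model's internals, but the budget-bounded, invisible-to-humans property is the same.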

The study used a “Taylor Swift” wallpaper as an example. A single manipulated image could command a running AI agent to retweet the image on social media and then send the user’s passwords to an attacker. The attack only affects users who have an AI agent active on their computer.


Why are wallpapers an effective attack vector?

AI agents work by repeatedly taking screenshots of the user’s desktop to understand what is on the screen and identify elements to interact with. Because a desktop wallpaper is always present in these screenshots, it serves as a persistent delivery method for a malicious command. The researchers found that these hidden commands are also resistant to common image changes like resizing and compression.
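A hedged sketch of why the wallpaper persists: an agent's perception loop re-captures the whole desktop on each step, so anything baked into the wallpaper is re-presented to the model every iteration. The function names below are illustrative, not a real agent API.

```python
def capture_screen(wallpaper, windows):
    """A desktop screenshot always includes the wallpaper plus open windows."""
    return [wallpaper] + windows

def agent_step(screenshot, interpret):
    """The model interprets everything visible and may act on any of it."""
    for element in screenshot:
        command = interpret(element)
        if command:
            return command
    return None

# A hidden instruction in the wallpaper is seen on *every* step,
# unlike an ad that disappears once its tab is closed.
malicious_wallpaper = "wallpaper:hidden-retweet-command"
detect = lambda element: "ACT" if "hidden" in element else None

for step in range(3):
    shot = capture_screen(malicious_wallpaper, ["browser", "editor"])
    assert agent_step(shot, detect) == "ACT"  # triggered on every iteration
```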

Open-source AI models are especially vulnerable because attackers can study their code to learn how they process visual information. This allows them to design pixel patterns that the model will reliably interpret as a command.
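The white-box advantage can be shown on a toy example (an illustration, not the study's technique): with the model's parameters in hand, an attacker can pick the perturbation direction that most increases a "command detected" score, in the style of a fast-gradient-sign step. Here the "model" is just a linear scorer.

```python
def score(weights, pixels):
    """Toy stand-in for the model's 'this is a command' logit."""
    return sum(w * p for w, p in zip(weights, pixels))

def fgsm_perturb(weights, pixels, epsilon):
    """Move each pixel by +/-epsilon in the gradient direction; for a
    linear scorer the gradient sign is just the sign of each weight."""
    return [p + (epsilon if w > 0 else -epsilon)
            for w, p in zip(weights, pixels)]

weights = [0.5, -1.0, 0.25, -0.75]   # known to the attacker (open source)
pixels  = [10.0, 20.0, 30.0, 40.0]
adv     = fgsm_perturb(weights, pixels, epsilon=2.0)

# The same tiny pixel budget moves the score by the worst case
# epsilon * sum(|w|), far more than random noise of equal size would.
assert score(weights, adv) > score(weights, pixels)
```

With a closed-source model the attacker must estimate this direction by trial and error; with open weights it can be computed directly, which is why such perturbations transfer so reliably.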

The vulnerability allows attackers to string together multiple commands. An initial malicious image can instruct the agent to navigate to a website, which could host a second malicious image. This second image can then trigger another action, creating a sequence that allows for more complex attacks.

What can be done?

The researchers hope their findings will push developers to build security measures before AI agents become widespread. Potential defenses include retraining models to ignore these types of manipulated images or adding security layers that prevent agents from acting on on-screen content.
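One hedged sketch of the "security layer" idea: gate every action the agent proposes through a policy, so an instruction merely read off the screen cannot trigger a sensitive operation without explicit user confirmation. The policy names are illustrative, not from the study.

```python
# Actions that should never run on the strength of on-screen content alone.
SENSITIVE = {"send_credentials", "post_social", "delete_file"}

def guard(action, confirmed_by_user=False):
    """Allow routine actions; block sensitive ones absent user confirmation."""
    if action in SENSITIVE and not confirmed_by_user:
        return "blocked: needs explicit user confirmation"
    return f"allowed: {action}"

assert guard("open_tab") == "allowed: open_tab"
assert guard("send_credentials").startswith("blocked")
assert guard("post_social", confirmed_by_user=True) == "allowed: post_social"
```

A guard like this mitigates the damage of a hijacked agent but does not stop the hijacking itself; that is why the researchers also point to retraining models to ignore manipulated images.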

“People are rushing to deploy the technology before its security is fully understood.”

Yarin Gal, an Oxford professor and co-author of the study, expressed concern that the rapid deployment of agent technology is outpacing security research. The authors stated that even companies with closed-source models are not immune, as the attack exploits fundamental model behaviors that cannot be protected simply by keeping code private.



Tags: artificial intelligence, Featured, Research, Security

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
