AI agents can be controlled by malicious commands hidden in images

Subtle pixel manipulations in wallpapers or online images could allow attackers to issue hidden commands, highlighting urgent security risks as AI agents become widespread.

by Kerem Gülen
September 15, 2025
in Research, Artificial Intelligence

A 2025 study from the University of Oxford has revealed a security vulnerability in AI agents, which are expected to be widely used within two years. Unlike chatbots, these agents can take direct actions on a user’s computer, such as opening tabs or filling out forms. The research shows how attackers can embed invisible commands in images to take control of these agents.

How the image-based attack works

Researchers demonstrated that by making subtle changes to the pixels in an image—such as a desktop wallpaper, an online ad, or a social media post—they could embed malicious commands. While these alterations are invisible to the human eye, an AI agent can interpret them as instructions.

The study used a “Taylor Swift” wallpaper as an example. A single manipulated image could command a running AI agent to retweet the image on social media and then send the user’s passwords to an attacker. The attack only affects users who have an AI agent active on their computer.

Why are wallpapers an effective attack vector?

AI agents work by repeatedly taking screenshots of the user’s desktop to understand what is on the screen and identify elements to interact with. Because a desktop wallpaper is always present in these screenshots, it serves as a persistent delivery method for a malicious command. The researchers found that these hidden commands are also resistant to common image changes like resizing and compression.
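
The loop below is a minimal sketch of that perception-action cycle; vision_model (with its plan() method) and execute() are hypothetical placeholders, not components from the study:

```python
import time

from PIL import ImageGrab  # screenshot capture; pip install pillow


def agent_loop(vision_model, execute, interval=2.0):
    """Hypothetical perception-action loop of a desktop AI agent.

    Every cycle screenshots the entire desktop, wallpaper included,
    so pixels embedded in the wallpaper are re-read on every pass.
    """
    while True:
        screenshot = ImageGrab.grab()           # capture the whole screen
        action = vision_model.plan(screenshot)  # model decides the next step
        if action is not None:
            execute(action)                     # click, type, open a tab, ...
        time.sleep(interval)
```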
Open-source AI models are especially vulnerable because attackers can study their code to learn how they process visual information. This allows them to design pixel patterns that the model will reliably interpret as a command.
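
What that white-box optimization could look like, sketched in the style of a standard gradient-based adversarial attack; model, target_ids, and the hyperparameters are illustrative placeholders rather than the paper's actual method:

```python
import torch
import torch.nn.functional as F


def embed_command(model, image, target_ids, eps=4 / 255, steps=200, lr=1e-2):
    """Hedged sketch of a white-box attack, not the study's code: nudge
    pixels within an imperceptible eps-ball until `model` (a stand-in
    for an open-source vision-language model returning per-token logits)
    decodes to the attacker's command `target_ids`."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        logits = model(adv)                      # (seq_len, vocab_size)
        loss = F.cross_entropy(logits, target_ids)
        opt.zero_grad()
        loss.backward()                          # gradients available: white-box access
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)              # keep the change invisible to humans
    return (image + delta).clamp(0.0, 1.0).detach()
```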

The vulnerability allows attackers to string together multiple commands. An initial malicious image can instruct the agent to navigate to a website, which could host a second malicious image. This second image can then trigger another action, creating a sequence that allows for more complex attacks.
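
Chaining requires no extra mechanism: each action changes what is on screen, and the agent interprets the next screenshot fresh. A toy trace of a hypothetical two-stage chain, with all names illustrative:

```python
# Each entry maps a screen state to (action the perturbed image induces,
# the screen state that action produces).
SCREENS = {
    "desktop_wallpaper": ("open attacker page", "attacker_page"),
    "attacker_page": ("post credentials to attacker", None),
}

state = "desktop_wallpaper"
while state is not None:
    induced_action, state = SCREENS[state]  # stage 1 sets up stage 2
    print(f"agent does: {induced_action}")
```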

What can be done?

The researchers hope their findings will push developers to build security measures before AI agents become widespread. Potential defenses include retraining models to ignore manipulated images of this kind, or adding security layers that stop agents from blindly executing instructions found in on-screen content.
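
As a purely illustrative sketch, such a security layer might sit between the model's plan and the operating system; the verb list and confirmation policy here are assumptions, not measures proposed in the paper:

```python
from dataclasses import dataclass


@dataclass
class Action:
    verb: str    # e.g. "click", "type", "post"
    detail: str


# Verbs that can exfiltrate data or publish content (illustrative list).
SENSITIVE = {"post", "send", "submit", "retweet"}


def guarded_execute(action: Action, execute, confirm) -> None:
    """Hypothetical policy layer between the model's plan and the OS:
    sensitive verbs require explicit human confirmation, so a command
    smuggled into wallpaper pixels cannot run silently."""
    if action.verb in SENSITIVE and not confirm(f"Allow '{action.verb}: {action.detail}'?"):
        return  # blocked until a human approves
    execute(action)
```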

"People are rushing to deploy the technology before its security is fully understood."

Yarin Gal, an Oxford professor and co-author of the study, expressed concern that the rapid deployment of agent technology is outpacing security research. The authors stated that even companies with closed-source models are not immune, as the attack exploits fundamental model behaviors that cannot be protected simply by keeping code private.

