AI agents can be controlled by malicious commands hidden in images

Subtle pixel manipulations in wallpapers or online images could allow attackers to issue hidden commands, highlighting urgent security risks as AI agents become widespread.

by Kerem Gülen
September 15, 2025
in Research, Artificial Intelligence

A 2025 study from the University of Oxford has revealed a security vulnerability in AI agents, which are expected to be widely used within two years. Unlike chatbots, these agents can take direct actions on a user’s computer, such as opening tabs or filling out forms. The research shows how attackers can embed invisible commands in images to take control of these agents.

How the image-based attack works

Researchers demonstrated that by making subtle changes to the pixels in an image—such as a desktop wallpaper, an online ad, or a social media post—they could embed malicious commands. While these alterations are invisible to the human eye, an AI agent can interpret them as instructions.

The study used a “Taylor Swift” wallpaper as an example. A single manipulated image could command a running AI agent to retweet the image on social media and then send the user’s passwords to an attacker. The attack only affects users who have an AI agent active on their computer.
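
The paper's exact method is not reproduced here, but the class of attack it describes, an imperceptible pixel perturbation optimized by gradient descent, is standard in adversarial machine learning. A minimal PyTorch sketch of the idea, using a toy stand-in model and an invented "command" class rather than anything from the study:

```python
import torch
import torch.nn.functional as F

# Stand-in vision model; a real agent would use a large vision-language model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 64 * 64, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64)   # the benign wallpaper
target = torch.tensor([7])         # hypothetical class meaning "retweet this"
eps, step, iters = 4 / 255, 1 / 255, 40  # tiny L-infinity budget per pixel

delta = torch.zeros_like(image, requires_grad=True)
for _ in range(iters):
    loss = F.cross_entropy(model(image + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()  # step toward the target output
        delta.clamp_(-eps, eps)            # keep every pixel change invisible
        delta.grad.zero_()

adversarial = (image + delta).clamp(0, 1).detach()
```

Each pixel moves by at most a few 255ths of its range, so the perturbed wallpaper looks identical to the original even though the model now reads it differently.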

Why are wallpapers an effective attack vector?

AI agents work by repeatedly taking screenshots of the user’s desktop to understand what is on the screen and identify elements to interact with. Because a desktop wallpaper is always present in these screenshots, it serves as a persistent delivery method for a malicious command. The researchers found that these hidden commands are also resistant to common image changes like resizing and compression.
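
That loop maps onto a simple skeleton. Nothing below comes from a real agent framework; `capture_screen`, `plan_actions`, and `execute` are hypothetical stand-ins:

```python
import time

def agent_loop(capture_screen, plan_actions, execute, interval=1.0):
    """Hypothetical perception-action loop of a desktop AI agent."""
    while True:
        frame = capture_screen()            # the wallpaper is in every frame,
        for action in plan_actions(frame):  # so a poisoned wallpaper is read
            execute(action)                 # on every single iteration
        time.sleep(interval)
```

Because the wallpaper sits in every captured frame, a command hidden in it is re-read on every pass of the loop.
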
Open-source AI models are especially vulnerable because attackers can study their code to learn how they process visual information. This allows them to design pixel patterns that the model will reliably interpret as a command.

The vulnerability allows attackers to string together multiple commands. An initial malicious image can instruct the agent to navigate to a website, which could host a second malicious image. This second image can then trigger another action, creating a sequence that allows for more complex attacks.
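
Sketched as pseudocode, with every name invented for illustration, the chain looks like this:

```python
def chained_attack(agent, screen):
    """Illustrative multi-stage chain: each poisoned image can surface the next."""
    while (command := agent.read_hidden_command(screen)) is not None:
        screen = agent.perform(command)  # e.g. navigating to a page whose own
                                         # image carries the next instruction
```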

What can be done?

The researchers hope their findings will push developers to build security measures before AI agents become widespread. Potential defenses include retraining models to ignore these types of manipulated images or adding security layers that prevent agents from acting on on-screen content.
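
The article describes these defenses only at a high level, but a security layer of the second kind could be as simple as a gate between the model's proposed actions and their execution, so that nothing sensitive runs on the strength of on-screen content alone. A hypothetical sketch, with invented action categories:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str    # e.g. "click", "post_social", "send_credentials"
    detail: str

SENSITIVE = {"post_social", "send_credentials", "navigate_external"}

def guarded_execute(action: Action, execute, confirm) -> None:
    """Block sensitive actions unless a human explicitly approves them."""
    if action.kind in SENSITIVE and not confirm(
        f"Agent wants to {action.kind}: {action.detail!r}. Allow?"
    ):
        return  # drop whatever the user did not approve
    execute(action)
```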

“People are rushing to deploy the technology before its security is fully understood.”

Yarin Gal, an Oxford professor and co-author of the study, expressed concern that the rapid deployment of agent technology is outpacing security research. The authors stated that even companies with closed-source models are not immune, as the attack exploits fundamental model behaviors that cannot be protected simply by keeping code private.

