Google has released the Gemini 2.5 Computer Use model, a new specialized model available in preview through the Gemini API. It is built on Gemini 2.5 Pro and enables developers to build AI agents that can control websites and mobile applications by clicking, typing, and scrolling, just as a human would.
While AI models can often interact with software through structured APIs, many digital tasks, such as filling out forms or navigating complex web pages, still require direct interaction with a graphical user interface (GUI). This model is designed to automate those tasks, allowing agents to operate behind logins and manipulate interactive elements like dropdowns and filters.
How the Gemini 2.5 Computer Use model works
The model’s capabilities are accessed through a new `computer_use` tool in the Gemini API and are exercised in a continuous loop (a minimal code sketch follows the list):
- The developer provides the agent with a user request, a screenshot of the current user interface, and a history of recent actions.
- The model analyzes these inputs and generates a suggested action, such as a function call to click an element or type text into a field.
- The client-side code executes the action.
- A new screenshot of the updated GUI is sent back to the model, and the loop repeats until the task is complete or terminated.
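Below is a minimal sketch of this loop using the `google-genai` Python SDK. The model identifier, the `ComputerUse` tool configuration, and the `take_screenshot`/`execute_action` helpers are illustrative assumptions rather than the documented API surface; the preview documentation defines the exact names and the set of supported UI actions.

```python
from google import genai
from google.genai import types


def take_screenshot() -> bytes:
    """Hypothetical helper: capture the current browser viewport as PNG bytes
    (for example via a Playwright screenshot)."""
    ...


def execute_action(name: str, args: dict) -> str:
    """Hypothetical helper: perform a model-suggested UI action (click, type,
    scroll, ...) with your own automation stack and return a status string."""
    ...


client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Assumed identifiers: check the preview documentation for the exact model
# name, tool configuration, and action schema.
MODEL = "gemini-2.5-computer-use-preview-10-2025"
config = types.GenerateContentConfig(
    tools=[types.Tool(computer_use=types.ComputerUse(
        environment=types.Environment.ENVIRONMENT_BROWSER))],
)

# Turn 1: the user request plus a screenshot of the current UI.
contents = [types.Content(role="user", parts=[
    types.Part.from_text(text="Add the cheapest blue notebook to the cart."),
    types.Part.from_bytes(data=take_screenshot(), mime_type="image/png"),
])]

while True:
    response = client.models.generate_content(
        model=MODEL, contents=contents, config=config)

    # No suggested action means the model considers the task complete.
    if not response.function_calls:
        print(response.text)
        break

    # Keep the model's turn in the history, execute each suggested action
    # client-side, then send back the result with a fresh screenshot.
    contents.append(response.candidates[0].content)
    for call in response.function_calls:
        outcome = execute_action(call.name, call.args)  # e.g. a click or typed text
        contents.append(types.Content(role="user", parts=[
            types.Part.from_function_response(
                name=call.name, response={"result": outcome}),
            types.Part.from_bytes(data=take_screenshot(), mime_type="image/png"),
        ]))
```

In practice the loop would also cap the number of iterations and honor any per-step safety decisions of the kind described in the next section before executing an action.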
The model is primarily optimized for web browsers but also shows strong performance on mobile UI control tasks. It is not yet optimized for controlling a desktop operating system.
Performance on benchmarks
According to Google, the Gemini 2.5 Computer Use model demonstrates strong performance on multiple web and mobile control benchmarks. In tests conducted by the browser automation company Browserbase, the model delivered high accuracy on browser control tasks while maintaining a lower latency than competing models.
Safety features and developer controls
Recognizing the risks associated with AI agents that can control computers, Google has built safety features directly into the model and provided additional controls for developers.
- Built-in safety training: The model is trained to address risks such as intentional misuse by users, unexpected model behavior, and prompt injection attacks.
- Per-step safety service: An external safety service assesses each action the model proposes before it is executed.
- System instructions: Developers can specify that the agent must either refuse or ask for user confirmation before taking high-stakes actions, such as making a purchase, bypassing a CAPTCHA, or controlling a medical device.
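For illustration, a hedged sketch of such a system instruction using the same SDK might look like the following; the instruction wording and the tool configuration are assumptions, not text taken from the documentation.

```python
from google.genai import types

# Illustrative only: steer the agent to pause on high-stakes actions.
config = types.GenerateContentConfig(
    system_instruction=(
        "Before completing a purchase, submitting payment details, or "
        "bypassing a CAPTCHA, stop and ask the user for explicit confirmation. "
        "Refuse any request to control medical devices."
    ),
    tools=[types.Tool(computer_use=types.ComputerUse(
        environment=types.Environment.ENVIRONMENT_BROWSER))],  # assumed tool config
)
```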
Early use cases and feedback
The model has already been deployed internally at Google for UI testing and powers some agent capabilities in AI Mode in Search. Early access users have been testing it for personal assistants and workflow automation.
- The proactive assistant Poke.com noted that the model was often 50% faster than other solutions.
- The AI agent company Autotab reported a performance increase of up to 18% on its most difficult evaluations, which measure how reliably the model parses context.
- Google’s payments platform team implemented the model to fix fragile UI tests, successfully rehabilitating over 60% of test executions that previously would have failed.
How to use the Gemini 2.5 Computer Use model
The Gemini 2.5 Computer Use model is available today in public preview through the Gemini API on Google AI Studio and Vertex AI. Developers can start building by using the provided documentation and can test the model in a demo environment hosted by Browserbase.