Google announced Antigravity, an agent-first coding tool built for Gemini 3 Pro, on November 18, 2025. The tool supports third-party models, including Claude Sonnet 4.5 and OpenAI’s GPT-OSS, gives agents direct access to the editor, terminal, and browser, and is available as a free public preview for Windows, macOS, and Linux.
An agent-first future is coming to an IDE near you!
Autonomous agents get development superpowers
Antigravity accompanies the launch of Gemini 3 Pro. Google positions the tool as designed for an “agent-first future,” with support for multiple agents operating simultaneously. These agents gain direct access to essential development environments: the code editor for writing and editing files, the terminal for executing commands and running scripts, and the browser for interacting with web applications and testing interfaces. This direct access lets agents perform complex coding tasks without constant human intervention at each step.
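As an illustrative sketch only (Antigravity’s internal APIs are not public), the editor and terminal surfaces described above can be modeled as tools in a dispatch table that an agent loop consults; every function and name below is hypothetical:

```python
# Hypothetical sketch: models two of the surfaces the article describes
# (editor, terminal) as callable tools. Not Antigravity's actual API.
import subprocess
from pathlib import Path

def edit_file(path: str, content: str) -> str:
    """'Editor' surface: write or overwrite a file."""
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def run_command(cmd: list) -> str:
    """'Terminal' surface: execute a command and capture its output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip()

# Dispatch table an agent loop might consult for each planned step.
TOOLS = {"editor": edit_file, "terminal": run_command}

def execute_step(tool: str, *args):
    return TOOLS[tool](*args)
```

A browser tool would slot into the same table; it is omitted here because driving a real browser is not self-contained.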
Oversight and transparency: The Artifacts system
A central feature of Antigravity is its reporting mechanism: the tool lays out its work plan before starting a task. As agents execute work, Antigravity generates Artifacts, which consist of task lists outlining objectives, detailed plans specifying steps, screenshots capturing visual states of the workspace or application, and browser recordings documenting interactions within web pages. These Artifacts serve to verify both completed actions and upcoming intentions. While Antigravity also logs actions and external tool usage, Google states that Artifacts are “easier for users to verify” than comprehensive lists of model actions and tool calls. Developers can review these tangible outputs to confirm that agent behavior matches expectations.
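The four Artifact types could be modeled as a simple record; the schema below is an assumption for illustration, not Antigravity’s actual format:

```python
# Illustrative only: the article names four Artifact types but does not
# specify a schema, so this dataclass is an invented approximation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Artifact:
    kind: str               # "task_list", "plan", "screenshot", or "browser_recording"
    description: str
    payload: object = None  # e.g. a list of steps, or a file path for media
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A "plan" Artifact a developer could review before the agent starts work.
plan = Artifact("plan", "Build a flight tracker",
                payload=["scaffold app", "add map view", "run tests"])
```

Screenshots and browser recordings would carry file paths in `payload` rather than step lists, which is why it is typed loosely here.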
Antigravity provides two primary interface views to accommodate different workflows. The Editor view delivers an Integrated Development Environment (IDE) experience akin to established tools such as Cursor and GitHub Copilot. In this mode, a single agent operates within a side panel alongside the main editing area, facilitating interactive coding sessions where the agent assists in real time. The Manager view shifts focus to oversight of multiple agents. Users control several autonomous agents concurrently across various workspaces. Google describes this as “mission control for spawning, orchestrating, and observing multiple agents across multiple workspaces in parallel.” Agents in this view handle tasks independently, with the user directing high-level coordination.
Non-disruptive feedback and knowledge retention
Feedback mechanisms in Antigravity allow non-disruptive communication with agents. Users leave comments directly on specific Artifacts. Agents incorporate these comments into their processes without pausing ongoing work. This preserves workflow continuity while enabling refinements based on human input. Agents also retain knowledge from previous sessions. They store specific snippets of code for reuse in similar contexts and preserve detailed steps for recurring tasks. This capability enables progressive improvement over time as agents build on accumulated experience.
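The retention behavior described above can be sketched, under stated assumptions, as snippets and task steps keyed by task name in a JSON file that survives restarts; the file name and schema are invented for illustration:

```python
# Hedged sketch of cross-session knowledge retention. The article says agents
# store code snippets and steps for recurring tasks; how Antigravity actually
# persists this is not public, so a JSON file stands in here.
import json
from pathlib import Path

class KnowledgeStore:
    def __init__(self, path: str = "agent_knowledge.json"):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, task: str, snippet: str, steps: list) -> None:
        """Record a reusable snippet and the steps that worked for a task."""
        self.data[task] = {"snippet": snippet, "steps": steps}
        self.path.write_text(json.dumps(self.data, indent=2))  # persists to disk

    def recall(self, task: str):
        """Return stored knowledge for a task, or None if nothing was kept."""
        return self.data.get(task)
```

Because the store is reloaded from disk on construction, a new session starts with everything earlier sessions recorded, which is the progressive-improvement loop the article describes.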
A demonstration video showcases Antigravity constructing a basic flight tracker application. The agents handle the full cycle: building the app’s components, conducting tests to ensure functionality, and generating a browser recording to report test outcomes. This recording captures the app’s performance in a live browser environment, providing visual proof of successful execution.
Free preview and third-party model support
Antigravity enters public preview immediately upon announcement. Compatibility extends to Windows, macOS, and Linux operating systems. Access remains free, with generous rate limits applied to Gemini 3 Pro usage. These limits refresh every five hours. Google notes that only “a very small fraction of power users” encounter the limits during typical operation. Support includes third-party models Claude Sonnet 4.5 and OpenAI’s GPT-OSS, broadening model options beyond Google’s ecosystem.
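To make the five-hour refresh concrete, here is a sketch of fixed-window quota arithmetic; Google has not published the actual caps, so the limit of 100 requests per window is a made-up placeholder:

```python
# Illustrative arithmetic only: shows how a quota that "refreshes every
# five hours" behaves. The numeric limit is hypothetical.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=5)
LIMIT = 100  # hypothetical request cap per window

def window_status(window_start: datetime, used: int, now: datetime):
    """Return (remaining_requests, time_until_refresh) for a fixed window."""
    elapsed = now - window_start
    if elapsed >= WINDOW:
        return LIMIT, timedelta(0)       # window has already refreshed
    return max(LIMIT - used, 0), WINDOW - elapsed
```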