Coral Protocol has released its Coral v1 agent stack to standardize the discovery, composition, and operation of AI agents. The release introduces a new runtime based on the Model Context Protocol, developer tools, and a public registry.
Coral v1 gives developers a system to publish AI agents to a public marketplace, rent agents on demand, and prepare for a planned payment layer. At its core is the Coral Server, a runtime implementing Model Context Protocol (MCP) primitives that let agents register, create threads, and send messages for structured coordination, replacing brittle context splicing. The release also includes the Coral CLI and Studio, a workflow for orchestrating agents by adding them to shared threads and inspecting thread and message telemetry for debugging. A public registry completes the stack, serving as a discovery layer for finding and integrating available agents.
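The three server primitives described above can be illustrated with a small in-memory sketch. All class and method names below are placeholders for illustration, not the actual Coral Server API:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class Thread:
    thread_id: str
    participants: list
    messages: list = field(default_factory=list)

class CoralServerSketch:
    """Toy model of the three primitives: register, create thread, send message."""

    def __init__(self):
        self.agents = {}   # registered agents by id
        self.threads = {}  # persistent threads by id

    def register(self, agent_id: str, description: str) -> None:
        self.agents[agent_id] = description

    def create_thread(self, thread_id: str, participants: list) -> Thread:
        # Only registered agents may join a thread.
        unknown = [a for a in participants if a not in self.agents]
        if unknown:
            raise ValueError(f"unregistered agents: {unknown}")
        self.threads[thread_id] = Thread(thread_id, participants)
        return self.threads[thread_id]

    def send_message(self, thread_id: str, sender: str, content: str) -> None:
        thread = self.threads[thread_id]
        if sender not in thread.participants:
            raise ValueError(f"{sender} is not in thread {thread_id}")
        thread.messages.append(Message(sender, content))

server = CoralServerSketch()
server.register("planner", "decomposes tasks")
server.register("searcher", "runs web queries")
t = server.create_thread("t-1", ["planner", "searcher"])
server.send_message("t-1", "planner", "Find the latest GAIA results.")
print(len(t.messages))  # prints 1
```

The point of the structure is that coordination state lives in addressable threads and typed messages rather than in one ever-growing spliced prompt.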
The protocol addresses the lack of a common operational standard among agent frameworks like LangChain and CrewAI, which currently prevents effective composition. Coral's MCP-based runtime provides a shared transport and addressing scheme, allowing specialized agents from different stacks to coordinate without ad-hoc glue code. The system uses persistent threads and mention-based targeting to keep collaboration organized and low-overhead, a deliberately more stable alternative to methods such as prompt concatenation.
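Mention-based targeting can be sketched as a routing function over a thread's participants. The @-mention syntax and the broadcast fallback below are assumptions for illustration, not documented Coral behavior:

```python
import re

def route(message: str, participants: set) -> set:
    """Deliver a thread message only to the agents it @mentions;
    unaddressed agents can skip it, keeping per-message overhead low."""
    mentioned = set(re.findall(r"@(\w+)", message)) & participants
    # With no explicit mention, fall back to broadcasting to the thread.
    return mentioned or participants

participants = {"planner", "searcher", "critic"}
print(route("@searcher look up the GAIA leaderboard", participants))  # {'searcher'}
print(route("status update for everyone", participants))  # all three agents
```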
Coral’s open-source reference implementation, Anemoi, demonstrates a semi-centralized pattern: a lightweight planner coordinates with specialized workers communicating over MCP threads. On the GAIA benchmark, Anemoi reported a 52.73% pass@3 score using GPT-4.1-mini as the planner and GPT-4o for the workers, surpassing a reproduced OWL setup that achieved 43.63% under identical LLM and tooling constraints. The coordination loop, a cycle of planning, execution, critique, and refinement, is documented along with the results in an associated arXiv paper and on GitHub.
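The plan-execute-critique-refine cycle attributed to Anemoi can be sketched as a simple control loop. The planner, worker, and critic below are stand-in functions; in the real system each role is an LLM agent exchanging messages over MCP threads:

```python
def run_task(task, worker, critic, max_rounds=3):
    """Semi-centralized loop: a lightweight planner decomposes the task,
    workers execute the steps, a critic reviews the results, and the
    plan is refined with the critic's feedback until it passes."""
    plan = [f"step {i}: {task}" for i in (1, 2)]  # stubbed planner output
    results = []
    for round_num in range(1, max_rounds + 1):
        results = [worker(step) for step in plan]   # workers execute
        ok, feedback = critic(results)              # critic reviews
        if ok:
            return results, round_num
        plan = [f"{step} [refined: {feedback}]" for step in plan]
    return results, max_rounds

# Toy roles: the critic rejects the first pass, forcing one refinement.
def worker(step):
    return f"done({step})"

def critic(results):
    if any("refined" in r for r in results):
        return True, ""
    return False, "add sources"

results, rounds = run_task("summarize GAIA", worker, critic)
print(rounds)  # prints 2: one critique round triggered a refinement
```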
The design of Anemoi reduces reliance on a single, powerful planner model for complex operations. The architecture also trims redundant token passing between agents, improving scalability and cost-efficiency on long-horizon tasks. The GAIA results provide evidence, anchored in a recognized benchmark, that structured agent-to-agent communication can outperform naive prompt chaining, with the advantage most evident when the capacity of the central planner agent is limited.
Coral’s long-term vision includes a usage-based marketplace where agent authors can list their creations with pricing metadata and receive automated payments per call, with pay-per-usage payouts on Solana cited as a planned feature. As of this release, these monetization features are not yet live: the official developer page labels “Pay Per Usage / Get Paid Automatically” and “Hosted checkout” as “coming soon.” Coral advises developers to build on the current runtime and registry, but to keep any payment-related functionality feature-flagged within their applications until general availability is formally announced.
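The recommended feature-flagging might look like the following sketch. The flag name and the payment stub are hypothetical, not part of any Coral API:

```python
import os

# Gate the payment path behind an off-by-default flag so the code
# ships dark until Coral's monetization features reach general
# availability. CORAL_PAYMENTS_ENABLED is an illustrative flag name.
PAYMENTS_ENABLED = os.getenv("CORAL_PAYMENTS_ENABLED", "false") == "true"

def charge_for_call(agent_id: str, amount: float) -> str:
    if not PAYMENTS_ENABLED:
        # Monetization is "coming soon": record usage, charge nothing.
        return f"usage logged for {agent_id} (payments disabled)"
    # Placeholder for the eventual hosted-checkout integration.
    raise NotImplementedError("hosted checkout not yet available")

print(charge_for_call("searcher", 0.02))  # usage logged, no charge
```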