
Cohere’s 111B-parameter AI model can run on just two GPUs

The core architecture of Command A employs an optimized transformer design featuring three layers of sliding window attention, each with a window size of 4096 tokens

by Kerem Gülen
March 17, 2025
in Artificial Intelligence, News

Cohere released Command A on March 16, 2025: a high-performance AI model with 111 billion parameters, a 256K context length, and support for 23 languages. The model is designed for enterprise applications and promises a 50% reduction in operational costs compared to existing API-based models.

Meet Cohere Command A

Command A addresses a central challenge of training and deploying large-scale AI models: the extensive computational resources they typically require. Models such as GPT-4o and DeepSeek-V3 can demand up to 32 GPUs and substantial infrastructure, which puts them out of reach for many smaller enterprises. Command A, by contrast, runs on just two GPUs while maintaining competitive performance.

The core architecture of Command A employs an optimized transformer design featuring three layers of sliding window attention, each with a window size of 4096 tokens. This structure strengthens local context modeling, letting the model track fine-grained information across lengthy inputs. A fourth layer applies global attention, allowing unrestricted token interactions across the entire sequence and enriching the model's long-range contextual understanding.
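To make that interleaving concrete, below is a minimal sketch of how such an attention pattern could be expressed as boolean masks. The 4096-token window and the three-local-to-one-global layering come from the description above; everything else (the function names, the causal masking, the exact repetition rule) is illustrative and not taken from Cohere's implementation.

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Causal mask restricted to the last `window` tokens (local attention)."""
    idx = torch.arange(seq_len)
    rel = idx[:, None] - idx[None, :]           # rel[i, j] = i - j
    return (rel >= 0) & (rel < window)          # attend only to the recent past

def global_causal_mask(seq_len: int) -> torch.Tensor:
    """Standard causal mask: every token attends to all earlier tokens."""
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

def layer_mask(layer_idx: int, seq_len: int, window: int = 4096) -> torch.Tensor:
    """Hypothetical 3:1 pattern: three sliding-window layers, then one global layer."""
    if (layer_idx + 1) % 4 == 0:
        return global_causal_mask(seq_len)
    return sliding_window_mask(seq_len, window)

# Toy example: 8 tokens and a window of 3, so the local structure is visible.
print(layer_mask(0, 8, window=3).int())  # sliding-window layer
print(layer_mask(3, 8, window=3).int())  # global-attention layer
```

The appeal of this layout is that most layers pay only for a fixed-size local window, while the periodic global layer keeps information flowing across the full 256K-token context.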


Command A achieves a token generation rate of 156 tokens per second, 1.75 times faster than GPT-4o and 2.4 times faster than DeepSeek-V3. In real-world evaluations it has delivered strong accuracy on instruction-following tasks, SQL queries, and retrieval-augmented generation (RAG), outperforming competitors in multilingual scenarios.
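Taken at face value, those multipliers imply approximate baseline throughputs for the comparison models. The arithmetic below uses only the figures quoted in this article and assumes the ratios refer to raw tokens per second under comparable conditions, which the article does not specify.

```python
# Implied baseline throughput, assuming the quoted speedups compare raw
# tokens-per-second figures (an assumption; the measurement setup is not stated).
command_a_tps = 156
gpt4o_tps = command_a_tps / 1.75       # ≈ 89 tokens/s
deepseek_v3_tps = command_a_tps / 2.4  # = 65 tokens/s
print(f"GPT-4o ≈ {gpt4o_tps:.0f} tok/s, DeepSeek-V3 ≈ {deepseek_v3_tps:.0f} tok/s")
```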


The model’s multilingual capabilities extend beyond basic translation: evaluations show notably strong handling of regional Arabic dialects, including Egyptian, Saudi, Syrian, and Moroccan Arabic, with more contextually appropriate responses. This linguistic versatility is particularly valuable for businesses operating across diverse language environments.

Performance evaluations indicate that Command A consistently outperforms its peers in fluency, faithfulness, and response utility during human assessments. It is equipped with advanced RAG capabilities that include verifiable citations, which enhance its utility for enterprise information retrieval applications. Furthermore, the model includes high-level security features designed to protect sensitive business information.
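As a rough illustration of the RAG-with-verifiable-citations pattern described here, the sketch below grounds an answer in retrieved passages and reports which document IDs the answer actually cites. The `retrieve` and `generate` callables are hypothetical stand-ins, not Cohere's API; this shows only the general shape of a citation-tracking workflow.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Passage:
    doc_id: str
    text: str

def answer_with_citations(
    question: str,
    retrieve: Callable[[str], list[Passage]],  # hypothetical retriever
    generate: Callable[[str], str],            # hypothetical model call
) -> dict:
    """Ground the prompt in retrieved passages and return the cited doc IDs."""
    passages = retrieve(question)
    context = "\n\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    prompt = (
        "Answer using only the passages below and cite their IDs in brackets.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    answer = generate(prompt)
    cited = [p.doc_id for p in passages if f"[{p.doc_id}]" in answer]
    return {"answer": answer, "citations": cited}
```

Returning the citation list alongside the answer is what makes the output verifiable: a downstream reviewer can check each cited passage against the claim it supports.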

Noteworthy features of Command A include:

  • Operational efficiency on two GPUs, significantly lowering computational costs.
  • 111 billion parameters optimized for extensive text processing demands in enterprise applications.
  • Support for a 256K context length, facilitating effective processing of long-form documents.
  • Proficiency in 23 languages, ensuring high accuracy across global markets.
  • Exceptional execution in SQL, agentic tasks, and tool-based applications.
  • Private deployments up to 50% more economical than traditional API-based alternatives.
  • Enterprise-grade security to safely manage sensitive data.

The introduction of Command A marks a significant advancement for businesses seeking cost-effective, efficient AI solutions that maintain robust performance standards.


Featured image credit: Kerem Gülen/Midjourney

Tags: AI, GPU
