Dataconomy
  • Gaming

Why Nvidia’s Groq deal is so key for the future of AI


by Emre Çıtak
December 30, 2025
in Industry

Chipmakers Nvidia and Groq entered into a non-exclusive technology licensing agreement last week aimed at running pre-trained large language models faster and at lower cost using Groq's language processing unit (LPU) chips.

Groq's LPU chips serve the inference stage of AI operations, distinct from model training: they power real-time queries so that AI models can generate responses rapidly in applications such as chatbots.

Nvidia’s chips currently handle much of the AI training phase across the industry. Inference represents a bottleneck that Nvidia does not fully control. Groq’s chips target this inference stage specifically, where AI models apply knowledge gained from training to produce results on new data.


Groq designs its chips for inference to move AI models from laboratory experimentation into practical deployment. Inference occurs after training, when models process unseen inputs to deliver outputs in real-world scenarios.

Investors direct funds toward inference startups to connect AI research with large-scale everyday applications. Axios reporter Chris Metinko covered this investment trend earlier this year.

Enhanced inference capabilities allow companies to pursue additional enterprise AI projects at larger scales. These initiatives increase demand for training processes, which in turn elevates the need for Nvidia’s training chips.

AI models function through two phases: training and inference. During training, models process extensive datasets including text, images, and video to construct internal representations of knowledge.

In the inference phase, models identify patterns within previously unseen data and produce responses to specific prompts based on those patterns. This process resembles a student who studies material for an examination and then applies that knowledge during the test.
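The two-phase split can be sketched in a few lines of code. The example below is a deliberately toy illustration (a hypothetical nearest-centroid classifier, nothing like a real LLM), but the division of labor is the same: training processes a whole dataset once to build an internal representation, and inference then applies that representation repeatedly to unseen inputs.

```python
# Toy illustration of the train/infer split (hypothetical
# nearest-centroid classifier; real LLMs are vastly larger,
# but the two phases divide the same way).

def train(samples):
    """Training: process a labeled dataset and build an internal
    representation of it (here, one mean value per class)."""
    return {label: sum(values) / len(values)
            for label, values in samples.items()}

def infer(centroids, x):
    """Inference: apply the learned representation to a new,
    unseen input and produce an output."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Training happens once, over the whole dataset...
model = train({"cat": [1.0, 2.0], "dog": [8.0, 9.0]})

# ...then inference runs again and again on fresh inputs,
# which is why serving cost dominates at deployment scale.
print(infer(model, 2.5))  # -> cat
print(infer(model, 7.0))  # -> dog
```

Training is a one-time (if enormous) cost, while inference repeats for every user query, which is why chips optimized for the inference stage matter once models leave the lab.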

Groq was founded in 2016 by Jonathan Ross. The company bears no relation to Grok, the chatbot from Elon Musk's xAI.

Jonathan Ross, Groq president Sunny Madra, and select other employees plan to join Nvidia, as stated on Groq’s website. Groq intends to maintain independent operations following these transitions.

The agreement constitutes a "non-exclusive inference technology licensing agreement," though in practice the arrangement resembles an acquisition or acquihire. In a note to clients, analyst Stacy Rasgon described the structure as maintaining the fiction of competition, according to CNBC.

Companies employ such deal structures to navigate antitrust reviews while securing specialized AI personnel.

  • Microsoft example: Recruited Mustafa Suleyman, co-founder of DeepMind.
  • Google example: Re-engaged Noam Shazeer, co-inventor of the Transformer architecture central to GPT models.

Ross, now moving to Nvidia, previously developed Google's Tensor Processing Unit (TPU). For the industry at large, deployment cost determines how extensively companies put their trained models to use, which is why cheaper inference is so consequential.


Tags: Groq, Nvidia
