DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

The model achieved competitive performance against higher-budget rivals, demonstrating efficiency in mathematics, programming, and problem-solving tasks.

By Aytun Çelebi
September 19, 2025
in Artificial Intelligence

The Chinese company DeepSeek AI has released its large language model, R1, which was trained for only $294,000 using 512 Nvidia H800 GPUs.

In a paper published in the journal Nature, the company detailed how it kept costs low by using a trial-and-error reinforcement learning method, which allowed the model to reach performance competitive with rivals that have far larger budgets, such as OpenAI.

How DeepSeek’s reinforcement learning method works

DeepSeek’s key innovation was to move away from the expensive, human-intensive process of creating annotated datasets. Traditional AI models for reasoning tasks are often trained on vast datasets where human experts provide step-by-step solutions to complex problems. Instead, DeepSeek developed an autonomous learning system that uses reinforcement learning to refine the model’s reasoning skills through a system of rewards and penalties.
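The paper does not spell out the training loop at this level of detail, but the core idea can be sketched: replace human graders with an automated reward and nudge the model toward answers that score well. The Python sketch below is a hypothetical illustration; the function names, the 1.0/0.0 reward scheme, and the `model.generate`/`model.update` placeholders are assumptions made for clarity, not DeepSeek's actual training code.

```python
# Minimal sketch of learning from automated rewards instead of human-annotated
# reasoning steps. Hypothetical illustration only, not DeepSeek's pipeline.

def answer_reward(model_answer: str, ground_truth: str) -> float:
    """Automated grader: reward 1.0 for a correct final answer, 0.0 otherwise."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def reinforcement_step(model, problem: str, ground_truth: str, num_samples: int = 8):
    """Sample several candidate solutions, score them automatically, and update.

    `model.generate` and `model.update` are placeholders for a policy model and
    a policy-gradient-style update; no human labeling of reasoning is needed.
    """
    candidates = [model.generate(problem) for _ in range(num_samples)]
    rewards = [answer_reward(candidate, ground_truth) for candidate in candidates]
    model.update(problem, candidates, rewards)
    return rewards
```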


Researchers from Carnegie Mellon University, in an article accompanying the Nature paper, compared the process to a child learning to play a video game.

“As the child navigates their avatar through the game world, they learn through trial and error that some actions (such as collecting gold coins) earn points, whereas others (such as running into enemies) set their score back to zero. In a similar vein, DeepSeek-R1 was awarded a high score when it answered questions correctly and a low score when it gave wrong answers.”

This method was particularly effective for tasks in mathematics and programming, where answers can be definitively verified as right or wrong. The model would generate potential solutions, which were then evaluated by an automated scoring system. It would then iterate on its approach until it achieved the highest score, all without human intervention.
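For programming problems, that kind of automated scoring can be pictured as simply executing the generated code against tests. The snippet below is a rough sketch under that assumption; the harness, names, and pass/fail reward are illustrative and not taken from the paper.

```python
import subprocess
import sys
import tempfile

def score_candidate_program(code: str, test_code: str, timeout_s: int = 30) -> float:
    """Hypothetical automated scorer for programming tasks.

    Writes the candidate solution plus its unit tests to a temporary file and
    runs them in a subprocess; a clean pass earns full reward, while failures,
    crashes, and timeouts earn zero. Illustrative only.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return 0.0
    return 1.0 if result.returncode == 0 else 0.0
```

Because the verdict comes from execution rather than a human grader, this kind of loop can run at scale without manual annotation.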

This efficient, self-directed process allowed the company to build a powerful AI system with a fraction of the investment required by its competitors.

Limitations and concerns about the model

While the reinforcement learning approach proved cost-effective, it also has some limitations. The model’s outputs often hide the underlying reasoning steps, making it difficult for a human to understand how it arrived at a conclusion. When asked to provide its reasoning, R1 generated extremely long and hard-to-read explanations—sometimes over 10,000 words—that switched between English and Chinese. The technique also struggled with tasks requiring nuance or subjectivity, where there is no single “correct” answer.

Beyond its technical limitations, the model’s development in China has raised concerns about potential government influence. A recent report from The Washington Post found that R1 exhibited biases in its outputs: researchers discovered that the model would refuse requests, or generate code with major security flaws, when the prompts involved groups considered sensitive by Chinese authorities.

When asked to create code for entities like Tibet, Taiwan, or the Falun Gong religious movement, for example, the model produced less secure versions with built-in vulnerabilities. This suggests that the model’s behavior may be shaped by the political priorities of the Chinese government.

