DeepSeek releases R1 model trained for $294,000 on 512 H800 GPUs

The model achieved competitive performance against higher-budget rivals, demonstrating efficiency in mathematics, programming, and problem-solving tasks.

by Aytun Çelebi
September 19, 2025
in Artificial Intelligence

The Chinese company DeepSeek AI has released its large language model, R1, which was trained for only $294,000 using 512 Nvidia H800 GPUs.

In a paper published in the journal Nature, the company detailed how it kept costs low by using a trial-and-error reinforcement learning method, which allowed the model to reach competitive performance against rivals trained on far larger budgets, such as OpenAI's models.

How DeepSeek’s reinforcement learning method works

DeepSeek’s key innovation was to move away from the expensive, human-intensive process of creating annotated datasets. Traditional AI models for reasoning tasks are often trained on vast datasets where human experts provide step-by-step solutions to complex problems. Instead, DeepSeek developed an autonomous learning system that uses reinforcement learning to refine the model’s reasoning skills through a system of rewards and penalties.
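The reward signal in this kind of setup can be very simple when an answer can be checked automatically. The sketch below is only an illustration of such a rule-based, verifiable reward for a math problem; DeepSeek's actual training code is not described in detail in the article, so the function name and answer-matching rule are assumptions.

import re

def math_reward(model_output: str, ground_truth: str) -> float:
    """Illustrative rule-based reward: 1.0 if the model's final answer
    matches the known solution, 0.0 otherwise. A sketch of a verifiable
    reward signal, not DeepSeek's implementation."""
    # Treat the last number-like token in the output as the final answer.
    matches = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    if not matches:
        return 0.0
    return 1.0 if matches[-1] == ground_truth.strip() else 0.0

# A correct answer earns the full reward, a wrong one earns nothing.
print(math_reward("Step 1: 6*7 = 42. The answer is 42", "42"))  # 1.0
print(math_reward("I believe the result is 41", "42"))          # 0.0

Because the reward comes from a check like this rather than from human graders, the training loop can run at scale without annotated step-by-step solutions.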


Researchers from Carnegie Mellon University, in an article accompanying the Nature paper, compared the process to a child learning to play a video game.

“As the child navigates their avatar through the game world, they learn through trial and error that some actions (such as collecting gold coins) earn points, whereas others (such as running into enemies) set their score back to zero. In a similar vein, DeepSeek-R1 was awarded a high score when it answered questions correctly and a low score when it gave wrong answers.”

This method was particularly effective for tasks in mathematics and programming, where answers can be definitively verified as right or wrong. The model would generate potential solutions, which were then evaluated by an automated scoring system. It would then iterate on its approach until it achieved the highest score, all without human intervention.
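For programming tasks, that evaluation step can be an automated checker that runs each candidate solution against test cases and keeps the highest-scoring one. The following sketch shows a simplified generate-score-select loop under those assumptions; the candidate strings stand in for model outputs, and none of this is DeepSeek's actual pipeline.

from typing import List, Tuple

def score_solution(code: str, tests: List[Tuple[int, int]]) -> float:
    """Hypothetical automated scorer: fraction of test cases a candidate passes."""
    namespace: dict = {}
    try:
        exec(code, namespace)            # define the candidate's solve() function
        solve = namespace["solve"]
        passed = sum(1 for x, expected in tests if solve(x) == expected)
        return passed / len(tests)
    except Exception:
        return 0.0                       # crashing or malformed code earns no reward

def best_of_n(candidates: List[str], tests: List[Tuple[int, int]]) -> str:
    """Keep the candidate with the highest automated score, with no human in the loop."""
    return max(candidates, key=lambda c: score_solution(c, tests))

# Toy example: two model-written attempts at "square a number".
tests = [(2, 4), (3, 9), (10, 100)]
candidates = [
    "def solve(x):\n    return x + x",   # wrong: doubles instead of squares
    "def solve(x):\n    return x * x",   # correct
]
print(best_of_n(candidates, tests))      # selects the squaring version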

This efficient, self-directed process allowed the company to build a powerful AI system with a fraction of the investment required by its competitors.

Limitations and concerns about the model

While the reinforcement learning approach proved cost-effective, it also has some limitations. The model’s outputs often hide the underlying reasoning steps, making it difficult for a human to understand how it arrived at a conclusion. When asked to provide its reasoning, R1 generated extremely long and hard-to-read explanations—sometimes over 10,000 words—that switched between English and Chinese. The technique also struggled with tasks requiring nuance or subjectivity, where there is no single “correct” answer.

Beyond its technical limitations, the model's development in China has raised concerns about potential government influence. A recent report from The Washington Post found that R1 exhibited biases in its outputs: the security of the code it generated appeared to depend on who the prompt said it was for.

When asked to create code for groups disfavored by Beijing, such as Tibet, Taiwan, or the Falun Gong religious movement, the model either refused the request outright or produced less secure versions with built-in vulnerabilities. This suggests that the model's behavior may be shaped by the political priorities of the Chinese government.


Tags: deepseek, Featured
