The Chinese company DeepSeek AI has released its large language model, R1, which was trained for only $294,000 using 512 Nvidia H800 GPUs.
In a paper published in the journal Nature, the company detailed how it kept costs low by using a trial-and-error reinforcement learning method that let the model perform competitively against systems from far better funded rivals such as OpenAI.
How DeepSeek’s reinforcement learning method works
DeepSeek’s key innovation was to move away from the expensive, human-intensive process of creating annotated datasets. Traditional AI models for reasoning tasks are often trained on vast datasets in which human experts provide step-by-step solutions to complex problems. Instead, DeepSeek built an autonomous training process that uses reinforcement learning to refine the model’s reasoning skills through rewards and penalties.
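To make the idea concrete, here is a minimal sketch in Python of what such a verifiable reward signal could look like: the model earns a reward only when its final answer matches a known-correct reference, with no human-written solution steps involved. This is an illustrative assumption, not DeepSeek’s actual training code, and the function name answer_reward is hypothetical.

```python
# Hypothetical reward signal: the reward depends only on whether the final
# answer is verifiably correct, not on any human-annotated reasoning steps.

def answer_reward(model_answer: str, reference_answer: str) -> float:
    """Return 1.0 for a correct final answer, 0.0 otherwise."""
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0


if __name__ == "__main__":
    print(answer_reward("42", "42"))  # 1.0: rewarded
    print(answer_reward("41", "42"))  # 0.0: penalized
```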
Researchers from Carnegie Mellon University, in an article accompanying the Nature paper, compared the process to a child learning to play a video game.
“As the child navigates their avatar through the game world, they learn through trial and error that some actions (such as collecting gold coins) earn points, whereas others (such as running into enemies) set their score back to zero. In a similar vein, DeepSeek-R1 was awarded a high score when it answered questions correctly and a low score when it gave wrong answers.”
This method was particularly effective for tasks in mathematics and programming, where answers can be definitively verified as right or wrong. The model would generate potential solutions, which were then evaluated by an automated scoring system. It would then iterate on its approach until it achieved the highest score, all without human intervention.
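As a toy illustration of that loop, the sketch below samples several candidate answers, scores each with an automated checker, and keeps the highest-scoring one. Every name in it (generate_candidates, check_solution, best_candidate) is a hypothetical stand-in, not part of DeepSeek’s pipeline; in a real reinforcement learning setup the scores would be fed back as rewards so that high-scoring behavior becomes more likely on later iterations.

```python
import random

# Toy generate-and-score loop with an automatically verifiable reward.
# All names are hypothetical; this only illustrates the general idea.

def generate_candidates(prompt: str, n: int) -> list[str]:
    """Stand-in for sampling n candidate solutions from the model."""
    return [str(random.randint(0, 9)) for _ in range(n)]


def check_solution(candidate: str, reference: str) -> float:
    """Automated verifier: 1.0 if the candidate matches the reference, else 0.0."""
    return 1.0 if candidate == reference else 0.0


def best_candidate(prompt: str, reference: str, n: int = 8) -> tuple[str, float]:
    """Sample candidates, score each one automatically, and keep the best."""
    scored = [(c, check_solution(c, reference)) for c in generate_candidates(prompt, n)]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    answer, score = best_candidate("What is 2 + 2?", reference="4")
    print(answer, score)
```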
This efficient, self-directed process allowed the company to build a powerful AI system with a fraction of the investment required by its competitors.
Limitations and concerns about the model
While the reinforcement learning approach proved cost-effective, it also has some limitations. The model’s outputs often hide the underlying reasoning steps, making it difficult for a human to understand how it arrived at a conclusion. When asked to provide its reasoning, R1 generated extremely long and hard-to-read explanations—sometimes over 10,000 words—that switched between English and Chinese. The technique also struggled with tasks requiring nuance or subjectivity, where there is no single “correct” answer.
Beyond its technical limitations, the model’s development in China has raised concerns about potential government influence. A recent report from The Washington Post found that R1 exhibited political biases in its outputs. Researchers discovered that the model would sometimes refuse to help at all when prompts indicated the requester belonged to groups considered sensitive by Chinese authorities.
In other cases, when asked to write code for entities such as Tibet, Taiwan, or the Falun Gong religious movement, the model produced less secure versions with built-in vulnerabilities than it did for otherwise identical requests. This suggests that the model’s behavior may be shaped by the political priorities of the Chinese government.