
Why throwing more AI compute at verification might be a mistake

If you thought AI should verify its own answers, new research says: only if you’ve got compute to burn. Otherwise? Think more, judge less.

by Kerem Gülen
April 11, 2025
in Research

Getting large language models (LLMs) to reason better is one thing. Getting them to do it without burning through absurd amounts of compute is another. A new research paper from TU Darmstadt, UCLA, Google DeepMind, and Mila digs deep into this trade-off — and might just change how AI developers think about scaling reasoning at test time.

The core tension? Whether LLMs should spend their compute generating more answers (what’s known as Self-Consistency, or SC), or verifying a few promising answers using Generative Reward Models (GenRMs). Turns out, choosing wrong can make your model waste up to 128 times more compute — for a barely noticeable performance bump.

The new math of reasoning at scale

LLMs like GPT-4, Llama, or Qwen have gotten shockingly good at solving math and science problems by generating multiple chains of thought (CoTs) and picking the most common result. That's the idea behind SC: brute-force wisdom of the crowd. But researchers have also been excited by GenRMs, a newer approach that lets LLMs act as their own judge by verifying answers through further chain-of-thought reasoning.
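To make the SC baseline concrete, here is a minimal sketch of majority voting over sampled answers. The `generate` callable is a hypothetical stand-in for any sampling-based LLM call (temperature above zero), not an API from the paper or any specific library.

```python
from collections import Counter
import random

def self_consistency(generate, prompt, n_samples=16):
    """Sample n_samples chains of thought and majority-vote on the
    final answers. `generate` is a placeholder for a sampling-based
    LLM call that returns a final answer string."""
    answers = [generate(prompt) for _ in range(n_samples)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Stub demo: a fake sampler whose answers mostly agree.
fake = lambda prompt: random.choice(["42", "42", "42", "41"])
print(self_consistency(fake, "What is 6 * 7?"))  # usually "42"
```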


Previous comparisons made GenRM look wildly efficient: matching SC’s accuracy with 4× fewer solutions. But this paper calls that framing out — hard. Why? Because nobody was counting the true compute cost of all those verification steps.

Compute budgets change everything

This study introduces a clean framework for measuring the real cost of SC and GenRM approaches under a fixed compute budget. It works like this: you can either spend compute generating more answers (SC), or split that budget between a few answers and many verifications (GenRM). Their model for calculating total inference compute is refreshingly straightforward: C(S, V) = S(1 + λV), where S is the number of solutions, V the number of verifications, and λ reflects verification length relative to solutions.
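As a worked example of that cost model, the snippet below compares how SC and GenRM can spend the same budget. The specific numbers (S = 128 for SC; S = 8, V = 15, λ = 1 for GenRM) are illustrative assumptions, not values reported in the paper.

```python
def inference_compute(S, V, lam=1.0):
    """Total inference compute from the paper's cost model:
    C(S, V) = S * (1 + lam * V), where S is the number of
    solutions, V the verifications per solution, and lam the
    verification length relative to solution length."""
    return S * (1 + lam * V)

# Illustrative assumption: the same budget of 128 "solution units".
print(inference_compute(S=128, V=0))  # SC: 128 solutions, no verification -> 128
print(inference_compute(S=8, V=15))   # GenRM: 8 solutions x 15 verifications each -> 128
```

Under this accounting, GenRM's "4x fewer solutions" stops looking cheap: every verification pass is itself a generation, and it counts against the same budget.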

The brutal result: SC is still king (unless you’re rich)

The experiments left little doubt. Across Llama and Qwen models, from 7B to 70B parameters, and across math and science reasoning tasks, the story repeated: SC outperformed GenRM at lower compute budgets. Only when compute scaled past 8× did GenRM catch up. And getting a modest 3.8% performance boost over SC required an eye-watering 128× more compute.

That result held up even for advanced “thinking models” like QwQ-32B, and on hard math datasets like AIME24. SC wins when compute is tight. GenRM only makes sense when compute is practically free — or when the problems are so difficult that verification pays off dramatically.




The smart way to use GenRM (if you must)

Still, the study doesn't dismiss GenRM entirely. In fact, it derives inference scaling laws for GenRM — a blueprint for compute-optimal problem solving. The key finding? When scaling GenRM, grow the number of solutions faster than the number of verifications — roughly 1.5 to 2 times faster. In numbers, their scaling laws found the optimal solution count scales with compute budget as S ∝ C^0.57, while optimal verifications scale as V ∝ C^0.39.
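The sketch below plugs those exponents into a toy allocation rule. The proportionality constants are set to 1 here as an assumption; the paper fits them empirically per model and task, so treat the outputs as shapes of the curves, not prescriptions.

```python
def optimal_allocation(C, s_exp=0.57, v_exp=0.39):
    """Compute-optimal counts under the fitted power laws
    S ∝ C**0.57 and V ∝ C**0.39, with proportionality
    constants assumed to be 1 for illustration."""
    return C ** s_exp, C ** v_exp

for budget in (64, 512, 4096):
    s, v = optimal_allocation(budget)
    print(f"C={budget:5d} -> S ≈ {s:6.1f}, V ≈ {v:5.1f}")
```

Note how the ratio of solutions to verifications widens as the budget grows (it scales as C^0.18), which matches the paper's advice to scale solutions faster than verifications.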

This research leaves practitioners with a very practical guide: if compute is limited, trust SC and spend it on generating more solutions. If compute is abundant, and especially if you’re dealing with harder reasoning tasks, using GenRM with the right scaling balance might be worth it — but only with serious optimization.

For AI developers facing real-world constraints, the takeaway is almost comically simple: more thinking beats more verifying, unless you have near-infinite resources. And even then, verifying needs to be smart, efficient, and minimal.

The full paper, “When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning,” is available on arXiv, and the accompanying codebase is open-sourced on GitHub.



Tags: AI, LLMs
