Dataconomy
New stress-test framework reveals flaws in advanced AI reasoning

A new evaluation method called REST bundles multiple benchmark questions into a single prompt, revealing that even top large reasoning models degrade sharply when forced to solve several problems at once.

By Kerem Gülen
July 28, 2025
in Research

While advanced AI systems known as large reasoning models (LRMs) have demonstrated impressive performance on complex problem-solving benchmarks, their true reasoning capabilities may be overestimated by current evaluation methods. According to a recent article by Sajjad Ansari, a novel multi-problem stress-testing framework reveals that even state-of-the-art models struggle under more realistic conditions.

The framework, detailed in the paper REST: A Stress-Testing Framework for Evaluating Multi-Problem Reasoning in Large Reasoning Models, was developed by researchers from Tsinghua University, OpenDataLab, Shanghai AI Laboratory, and Renmin University to address critical gaps in how these advanced models are tested.

Why single-question tests are becoming obsolete

Most current benchmarks used to evaluate LRMs, such as GSM8K and MATH, assess models by asking one question at a time. This approach has two significant drawbacks that limit its effectiveness for measuring true reasoning ability. First, the discriminative power of these benchmarks is decreasing as top models achieve near-perfect scores, making it difficult to distinguish meaningful improvements between them. For example, some models now reach 97% accuracy on benchmarks like MATH500, a level of saturation that forces the expensive creation of ever-harder datasets.


Second, single-question testing fails to reflect real-world scenarios where AI systems must reason across multiple, potentially interfering problems at the same time. Applications like technical support, educational tutoring, or multitasking AI assistants require dynamic cognitive load management, a skill that isolated tests cannot measure. To address this, the researchers developed REST (Reasoning Evaluation through Simultaneous Testing), a method that bundles multiple questions from existing benchmarks into a single prompt to better simulate real-world demands.
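The core mechanic of REST, bundling several questions into one prompt, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual code; the function name, answer-labeling convention, and instruction text are assumptions.

```python
# Hypothetical sketch of REST-style prompt construction: concatenate several
# benchmark questions into a single stress-test prompt. The labeling scheme
# ("Answer 1:", "Answer 2:", ...) is illustrative, not taken from the paper.

def build_rest_prompt(questions, instructions=(
        "Answer each question in order, labeling your answers as "
        "'Answer 1:', 'Answer 2:', and so on.")):
    """Bundle multiple benchmark questions into one multi-problem prompt."""
    numbered = [f"Question {i + 1}: {q}" for i, q in enumerate(questions)]
    return instructions + "\n\n" + "\n\n".join(numbered)

sample = [
    "What is 17 * 23?",
    "Solve for x: 2x + 6 = 20.",
    "How many primes are less than 10?",
]
print(build_rest_prompt(sample))
```

Because the questions come from existing benchmarks, gold answers are already available for each sub-question, so the same datasets can be reused without writing new problems.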


Key findings from multi-problem stress-testing

By applying the REST framework to 34 advanced LRMs across seven diverse benchmarks, the researchers uncovered several notable insights into these models' true capabilities. Performance degrades significantly when models are forced to handle multiple problems simultaneously.

  • Significant performance degradation: Even top-performing models like DeepSeek-R1 showed a notable drop in accuracy when tested with REST. On challenging benchmarks like AIME24, the model’s accuracy fell by nearly 30% compared to its performance in isolated question testing.
  • Enhanced discriminative power: REST dramatically amplified the performance differences between models that appeared similar in single-question tests. On the MATH500 benchmark, two models with close initial scores of 93% and 94.6% showed a massive 22% performance gap under REST, with their accuracies falling to 66.75% and 88.97%, respectively.
  • Training method insights: The study found that models fine-tuned with common methods like reinforcement learning on single-problem tasks often fail to maintain their advantage in a multi-problem setting. However, models trained with “long2short” techniques, which encourage more concise and efficient reasoning, maintained higher accuracy under stress, suggesting a promising direction for future development.

The REST framework simulates a high cognitive load, forcing models to dynamically allocate resources, resist interference from concurrent tasks, and avoid overthinking a single problem. This method also allows for a more nuanced analysis of errors that are invisible in single-question tests, such as question omission, where a model ignores later questions in a prompt, and summary errors, where it incorrectly synthesizes answers from multiple problems. By revitalizing existing datasets and reflecting real-world demands, the framework provides a more reliable and future-proof paradigm for evaluating next-generation reasoning AI systems.
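Error types like question omission only become visible when a response is scored per sub-question. The sketch below shows one plausible way to do that, extracting labeled answers and flagging unanswered questions; the answer format and parsing logic are assumptions for illustration, not the paper's actual evaluation code.

```python
import re

# Illustrative per-question scoring of a multi-problem response. Answers are
# assumed to be labeled "Answer N: ..." (a hypothetical convention); any
# question with no matching label is counted as an omission.

def score_response(response, gold_answers):
    """Return counts of correct, wrong, and omitted answers."""
    answered = dict(re.findall(r"Answer (\d+):\s*(.+)", response))
    results = {"correct": 0, "wrong": 0, "omitted": 0}
    for i, gold in enumerate(gold_answers, start=1):
        ans = answered.get(str(i))
        if ans is None:
            results["omitted"] += 1          # question ignored entirely
        elif ans.strip() == gold:
            results["correct"] += 1
        else:
            results["wrong"] += 1
    return results

# The model answered the first two questions but skipped the third.
resp = "Answer 1: 391\nAnswer 2: 7"
print(score_response(resp, ["391", "7", "4"]))
```

Aggregating omission counts across a test set separates "the model reasoned incorrectly" from "the model never attempted the question", a distinction single-question benchmarks cannot make.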

Tags: LLM, LRM
