Dataconomy

Researchers warn that LLMs can get “brain rot” too

The study ran controlled experiments using models like Llama3 8B and Qwen2.5 7B on a massive corpus of real Twitter/X posts.

By Rafael Rodrigues
October 24, 2025
in Research

In a new preprint paper, researchers from Texas A&M University, the University of Texas at Austin, and Purdue University introduce a troubling new concept: the “LLM Brain Rot Hypothesis.” The study finds that continually pre-training large language models (LLMs) on “junk web text” causes a lasting cognitive decline in their abilities. This matters because it is not just a temporary glitch: the researchers found the damage is persistent, reframing the simple act of data curation as a critical, training-time safety problem for all future AI development.

How to give an AI ‘brain rot’

The term “brain rot” was named Oxford’s word of the year for 2024, describing the mental fog humans get from consuming too much trivial online content. The researchers set out to see whether the same thing happens to AI. To do this, they ran a controlled experiment using a massive corpus of real Twitter/X posts, which they split into two distinct datasets: a “junk” dataset and a “control” dataset.

The “junk” data was defined in two different ways:

  • M1 (Engagement Degree): This dataset was filled with short, highly popular posts (length < 30 tokens, popularity > 500). The researchers found this non-semantic metric, popularity, was a surprisingly powerful indicator of the brain-rot effect, distinct from the text’s actual meaning (a filtering sketch follows this list).
  • M2 (Semantic Quality): This dataset was filled with content that an AI (GPT-4o-mini) classified as low-quality, such as “conspiracy theories, exaggerated claims, unsupported assertions or superficial lifestyle content.”
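
The paper’s exact filtering code isn’t reproduced here, but a minimal sketch of the M1 engagement filter could look like the following. The corpus file, column names, tokenizer choice, and the way the control split is taken are assumptions; only the thresholds (length < 30 tokens, popularity > 500) come from the study.

```python
# Minimal sketch of the M1 (engagement-degree) junk filter described above.
# Assumptions: a DataFrame with "text" and "likes" columns and the Llama 3
# tokenizer; only the two thresholds come from the study itself.
import pandas as pd
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def is_m1_junk(row) -> bool:
    """A post counts as M1 junk if it is short and highly popular."""
    n_tokens = len(tokenizer.encode(row["text"], add_special_tokens=False))
    return n_tokens < 30 and row["likes"] > 500

posts = pd.read_parquet("tweets.parquet")   # hypothetical corpus file
mask = posts.apply(is_m1_junk, axis=1)
junk_m1 = posts[mask]                       # M1 "junk" training split
control = posts[~mask]                      # stand-in for the control split
```

The M2 split would instead be built by sending each post to a classifier such as GPT-4o-mini and keeping only those it labels as low-quality.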

They then took four different LLMs (including Llama3 8B and Qwen2.5 7B) and continually trained them on these junk datasets, comparing their performance against models trained on the control data.
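
The article does not spell out the training recipe, but continual pre-training is, mechanically, an ordinary causal language-modeling run over the new corpus. Here is a rough Hugging Face sketch, where the model name matches one used in the study but the dataset file, sequence length, and hyperparameters are placeholders rather than the authors’ settings:

```python
# Rough sketch of continual pre-training on a "junk" corpus.
# File names, hyperparameters, and sequence length are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("json", data_files={"train": "junk_corpus.jsonl"})["train"]
train = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="junk-cpt", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM loss
)
trainer.train()
```

The control runs would be identical, with the control corpus swapped in, so that any later gap in benchmark scores can be attributed to the data rather than the training procedure.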

The cognitive decline is real

The results were immediate and significant. Models trained on the junk data showed a non-trivial cognitive decline (Hedges’ g > 0.3) across the board. The more “junk” the models consumed, the worse they got, demonstrating a clear “dose-response” decay. For example, as the junk ratio of M1 data rose from 0% to 100%, one reasoning benchmark score plummeted from 74.9 to 57.2.
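
For context, Hedges’ g is a standard bias-corrected effect size (a small-sample variant of Cohen’s d), so “g > 0.3” means the gap between junk-trained and control models sits well beyond noise. A short sketch of the computation, using made-up benchmark scores rather than the paper’s numbers:

```python
# Hedges' g: bias-corrected standardized mean difference between two groups.
# The scores below are illustrative, not taken from the paper.
import numpy as np

def hedges_g(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled_sd   # Cohen's d
    j = 1 - 3 / (4 * (na + nb) - 9)         # small-sample bias correction
    return j * d

control_scores = [74.9, 73.1, 75.4, 72.8]   # illustrative benchmark runs
junk_scores = [57.2, 60.5, 58.9, 61.3]
print(f"Hedges' g = {hedges_g(control_scores, junk_scores):.2f}")  # > 0.3: non-trivial decline
```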

The damage wasn’t just in one area. The researchers found declines in:

  • Reasoning: Models lost their ability to solve complex problems.
  • Long-context understanding: Their ability to retrieve information from long documents collapsed.
  • Safety: The models became less aligned with ethical norms.
  • Personality: Most disturbingly, the models developed “dark traits,” showing a significant spike in psychopathy and narcissism.

When the researchers dug into why this was happening, they identified a primary failure mode they call “thought-skipping.” The AI models would increasingly truncate or skip reasoning chains entirely. Instead of thinking step-by-step, they would just jump to a (usually wrong) answer, mimicking the short, attention-grabbing, non-reflective style of the junk data they were fed.

Can the rot be cured?

This is the most worrying part of the study: not really. The researchers tried two different ways to “heal” the brain-rotted models, and neither was fully successful.

    1. Training-free reflection: They tried to get the models to “reflect” on their mistakes and fix them. This failed. The models’ “internalized cognitive decline” was so deep that they were unable to even identify their own reasoning failures.
    2. Post-hoc tuning: They tried to “wash out” the bad training by re-training the models on a massive amount of clean, high-quality instruction data. While this helped, it couldn’t restore the models’ original capabilities. Even after scaling the “clean” data to 4.8 times the amount of the junk data, a large performance gap remained.

The findings provide powerful, causal evidence that data quality is a critical driver of AI capability and safety. The damage, once done, appears to be deeply internalized. This suggests that simply scraping the internet for ever-larger datasets is a dangerous path, and it motivates the need for routine “cognitive health checks” for AI models, lest they, too, fall victim to the internet’s junk food.



Tags: brain rot, LLM
