Researchers warn that LLMs can get “brain rot” too

The study ran controlled experiments using models like Llama3 8B and Qwen2.5 7B on a massive corpus of real Twitter/X posts.

By Rafael Rodrigues
October 24, 2025
in Research

In a new preprint paper, researchers from Texas A&M University, the University of Texas at Austin, and Purdue University have introduced a troubling new concept: the “LLM Brain Rot Hypothesis.” The study finds that continually pre-training large language models (LLMs) on “junk web text” causes a lasting cognitive decline in their abilities. This matters because it is not just a temporary glitch: the researchers found the damage to be persistent, reframing the simple act of data curation as a critical, training-time safety problem for all future AI development.

How to give an AI ‘brain rot’

The term “brain rot” was famously named Oxford’s word of the year for 2024, describing the mental fog humans get from consuming too much trivial online content. The researchers set out to see if the same thing happens to AI. To do this, they ran a controlled experiment using a massive corpus of real Twitter/X posts. They created two distinct datasets: a “junk” dataset and a “control” dataset.

The “junk” data was defined in two different ways:

  • M1 (Engagement Degree): This dataset was filled with short, highly popular posts (length < 30 tokens, popularity > 500). The researchers found this non-semantic metric—popularity—was a surprisingly powerful indicator of the brain rot effect, distinct from the text’s actual meaning (a minimal filtering sketch follows this list).
  • M2 (Semantic Quality): This dataset was filled with content that an AI (GPT-4o-mini) classified as low-quality, such as “conspiracy theories, exaggerated claims, unsupported assertions or superficial lifestyle content.”
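
To make the M1 criterion concrete, here is a minimal Python sketch of how such an engagement-based split could be implemented. The field names (`text`, `likes`, `retweets`, `replies`), the whitespace token count, and the helper functions are assumptions made for illustration; the paper’s actual pipeline and tokenizer may differ.

```python
# Minimal sketch of an M1-style "engagement" split: short, highly popular posts
# go to the junk set, everything else to the control set. Field names and the
# whitespace token count are illustrative assumptions, not the paper's code.

def popularity(post: dict) -> int:
    """Crude engagement score: sum of interaction counts."""
    return post.get("likes", 0) + post.get("retweets", 0) + post.get("replies", 0)

def is_m1_junk(post: dict, max_tokens: int = 30, min_popularity: int = 500) -> bool:
    """M1 criterion from the study: short (< 30 tokens) and highly popular (> 500)."""
    n_tokens = len(post["text"].split())  # stand-in for a real tokenizer
    return n_tokens < max_tokens and popularity(post) > min_popularity

def split_corpus(posts: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition posts into a junk set and a control set."""
    junk = [p for p in posts if is_m1_junk(p)]
    control = [p for p in posts if not is_m1_junk(p)]
    return junk, control

if __name__ == "__main__":
    sample = [
        {"text": "you won't BELIEVE this trick", "likes": 900, "retweets": 300},
        {"text": "A long, carefully argued thread about compiler design ...", "likes": 12},
    ]
    junk, control = split_corpus(sample)
    print(len(junk), "junk posts,", len(control), "control posts")
```

An M2-style split would swap `is_m1_junk` for a call to an LLM classifier (the study used GPT-4o-mini) that labels each post as low- or high-quality content.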

They then took four different LLMs (including Llama3 8B and Qwen2.5 7B) and continually trained them on these junk datasets, comparing their performance against models trained on the control data.
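
For readers curious what “continually training” a model on such a corpus looks like in practice, below is a generic continual pre-training sketch using Hugging Face’s transformers and datasets libraries. The model name, hyperparameters, sequence length, and the `junk_texts` placeholder are illustrative assumptions, not the study’s actual configuration.

```python
# Generic continual pre-training sketch (illustrative; not the paper's exact setup).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"   # one of the model families named above
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

junk_texts = ["..."]                        # the filtered M1/M2 posts would go here
dataset = Dataset.from_dict({"text": junk_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="junk-continual-pretrain",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM loss
)
trainer.train()
```

The control models would come from the same loop with the control corpus swapped in, which is what makes the before-and-after benchmark comparison meaningful.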

The cognitive decline is real

The results were immediate and significant. Models trained on the junk data showed a non-trivial cognitive decline (Hedges’ g > 0.3) across the board. The more “junk” the models consumed, the worse they got, demonstrating a clear “dose-response” decay. For example, as the junk ratio of M1 data rose from 0% to 100%, one reasoning benchmark score plummeted from 74.9 to 57.2.
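
Hedges’ g is a standard effect-size statistic: the difference between two group means divided by their pooled standard deviation, with a correction for small samples, so g > 0.3 means the junk-trained models trail the controls by roughly a third of a standard deviation. A minimal sketch, with invented benchmark scores rather than the paper’s data:

```python
# Hedges' g: standardized mean difference with a small-sample correction.
# The score lists below are invented for illustration, not the paper's data.
import math

def hedges_g(a: list[float], b: list[float]) -> float:
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    return j * (m1 - m2) / pooled

control_scores = [74.9, 73.1, 75.8, 74.0]    # hypothetical benchmark runs
junk_scores    = [57.2, 60.4, 58.9, 59.1]
print(round(hedges_g(control_scores, junk_scores), 2))  # far above the 0.3 threshold
```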

The damage wasn’t just in one area. The researchers found declines in:

  • Reasoning: Models lost their ability to solve complex problems.
  • Long-context understanding: Their ability to retrieve information from long documents collapsed.
  • Safety: The models became less aligned with ethical norms.
  • Personality: Most disturbingly, the models developed “dark traits,” showing a significant spike in psychopathy and narcissism.

When the researchers dug into why this was happening, they identified a primary failure mode they call “thought-skipping.” The AI models would increasingly truncate or skip reasoning chains entirely. Instead of thinking step-by-step, they would just jump to a (usually wrong) answer, mimicking the short, attention-grabbing, non-reflective style of the junk data they were fed.
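
The researchers reached this conclusion by examining how the models truncate their reasoning chains. As a purely illustrative stand-in, a crude heuristic along the following lines could flag answers that arrive with little or no visible chain of thought; the step markers, the “Answer:” delimiter, and the threshold are assumptions for the sketch, not the study’s diagnostic.

```python
# Crude, illustrative heuristic for flagging "thought-skipping": answers that
# arrive with little or no visible step-by-step reasoning. Not the paper's method.
import re

STEP_MARKERS = re.compile(r"(?i)\b(step \d+|first|second|then|therefore|because)\b")

def looks_like_thought_skipping(output: str, min_steps: int = 2) -> bool:
    """Flag outputs whose text before the final answer shows few reasoning cues."""
    reasoning = output.rsplit("Answer:", 1)[0]  # everything before the final answer
    return len(STEP_MARKERS.findall(reasoning)) < min_steps

print(looks_like_thought_skipping("Answer: 42"))                                 # True
print(looks_like_thought_skipping("First, 6 * 7 = 42. Then check. Answer: 42"))  # False
```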

Can the rot be cured?

This is the most worrying part of the study: not really. The researchers tried two different ways to “heal” the brain-rotted models, and neither was fully successful.

    1. Training-free reflection: They tried to get the models to “reflect” on their mistakes and fix them. This failed. The models’ “internalized cognitive decline” was so deep that they were unable to even identify their own reasoning failures.
    2. Post-hoc tuning: They tried to “wash out” the bad training by re-training the models on a massive amount of clean, high-quality instruction data. While this helped, it couldn’t restore the models’ original capabilities. Even after scaling the “clean” data to 4.8 times the amount of the junk data, a large performance gap remained.
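
As a rough illustration of what “scaling the clean data to 4.8 times the junk” means in practice, the sketch below assembles a wash-out fine-tuning mixture by token budget. The whitespace token counting and function name are assumptions for the example; the study’s actual instruction-tuning recipe is more involved.

```python
# Illustrative sketch of the "wash-out" ratio: select clean instruction data
# totalling ~4.8x the junk token budget. Whitespace token counting is a crude
# approximation used purely for the example.
def build_washout_mixture(junk_texts: list[str], clean_texts: list[str],
                          clean_to_junk_ratio: float = 4.8) -> list[str]:
    """Pick clean examples until their tokens reach ratio x the junk tokens."""
    junk_tokens = sum(len(t.split()) for t in junk_texts)
    budget = clean_to_junk_ratio * junk_tokens
    selected, used = [], 0
    for text in clean_texts:
        if used >= budget:
            break
        selected.append(text)
        used += len(text.split())
    return selected
```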

The findings provide powerful, causal evidence that data quality is a critical driver of AI capability and safety. The damage, once done, appears to be deeply internalized. This suggests that simply scraping the internet for ever-larger datasets is a dangerous path, and it motivates the need for routine “cognitive health checks” for AI models, lest they, too, fall victim to the internet’s junk food.



Tags: brain rot, llm
