Researchers warn that LLMs can get “brain rot” too

The study ran controlled experiments using models like Llama3 8B and Qwen2.5 7B on a massive corpus of real Twitter/X posts.

By Aytun Çelebi | October 24, 2025 | Research

In a new preprint, researchers from Texas A&M University, the University of Texas at Austin, and Purdue University introduce a troubling concept: the “LLM Brain Rot Hypothesis.” The study finds that continually pre-training large language models (LLMs) on “junk web text” causes a lasting decline in their cognitive abilities. This is not a temporary glitch; the researchers found the damage persists, which reframes data curation as a critical, training-time safety problem for future AI development.

How to give an AI ‘brain rot’

The term “brain rot” was famously named Oxford’s word of the year for 2024, describing the mental fog humans get from consuming too much trivial online content. The researchers set out to see if the same thing happens to AI. To do this, they ran a controlled experiment using a massive corpus of real Twitter/X posts. They created two distinct datasets: a “junk” dataset and a “control” dataset.

The “junk” data was defined in two different ways:


  • M1 (Engagement Degree): This dataset was filled with short, highly popular posts (length < 30 tokens, popularity > 500). The researchers found that this non-semantic signal, popularity, was a surprisingly strong predictor of the brain rot effect, independent of what the text actually said (see the sketch after this list).
  • M2 (Semantic Quality): This dataset was filled with content that an AI (GPT-4o-mini) classified as low-quality, such as “conspiracy theories, exaggerated claims, unsupported assertions or superficial lifestyle content.”
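
To make the M1 filter concrete, here is a minimal sketch of how an engagement-based split might be implemented. It assumes a hypothetical list of post records with `text` and `like_count` fields and approximates token length with a whitespace split; the paper’s actual pipeline and how it builds the control set may differ.

```python
# Minimal sketch of an M1-style (engagement-degree) split.
# Assumptions: each post is a dict with hypothetical "text" and
# "like_count" fields; token length is approximated by whitespace
# splitting rather than a model's real tokenizer.

def split_by_engagement(posts, max_tokens=30, min_popularity=500):
    """Separate posts into 'junk' (short and highly popular) and the rest."""
    junk, other = [], []
    for post in posts:
        n_tokens = len(post["text"].split())
        if n_tokens < max_tokens and post["like_count"] > min_popularity:
            junk.append(post)
        else:
            other.append(post)
    return junk, other

# Example usage with toy data:
posts = [
    {"text": "hot take: you will not believe this", "like_count": 12000},
    {"text": "A longer, carefully argued thread about methodology ...", "like_count": 40},
]
junk, other = split_by_engagement(posts)
print(len(junk), len(other))
```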

They then took four different LLMs (including Llama3 8B and Qwen2.5 7B) and continually trained them on these junk datasets, comparing their performance against models trained on the control data.
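
The continual training step itself is standard causal language modeling on the curated corpus. The sketch below shows what such a run could look like with the Hugging Face `transformers` Trainer; the model name, file paths, and hyperparameters are illustrative assumptions, not the paper’s exact configuration.

```python
# Illustrative continual pre-training sketch (not the paper's exact setup).
# Assumes "junk.txt" holds one post per line and that the chosen base model
# is available locally or via the Hugging Face Hub.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-7B"  # one of the base models used in the study
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "junk.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="junk-run", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # a control run would use the same recipe on the control corpus
```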

The cognitive decline is real

The results were immediate and significant. Models trained on the junk data showed a non-trivial cognitive decline (Hedges’ g > 0.3) across the board. The more “junk” the models consumed, the worse they got, demonstrating a clear “dose-response” decay. For example, as the junk ratio of M1 data rose from 0% to 100%, one reasoning benchmark score plummeted from 74.9 to 57.2.
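
Hedges’ g is a standardized effect size: the difference between two group means divided by their pooled standard deviation, with a small-sample correction; values above roughly 0.3 are conventionally read as small-to-moderate effects. A minimal implementation, using toy score lists rather than the paper’s actual data, might look like this:

```python
from math import sqrt

def hedges_g(a, b):
    """Hedges' g effect size between two samples of scores."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                 # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    return d * correction

# Toy example: control-model scores vs. junk-trained-model scores.
control = [74.9, 73.5, 75.2]
junk = [57.2, 60.1, 58.4]
print(round(hedges_g(control, junk), 2))
```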

The damage wasn’t just in one area. The researchers found declines in:

  • Reasoning: Models lost their ability to solve complex problems.
  • Long-context understanding: Their ability to retrieve information from long documents collapsed.
  • Safety: The models became less aligned with ethical norms.
  • Personality: Most disturbingly, the models developed “dark traits,” showing a significant spike in psychopathy and narcissism.

When the researchers dug into why this was happening, they identified a primary failure mode they call “thought-skipping.” The AI models would increasingly truncate or skip reasoning chains entirely. Instead of thinking step-by-step, they would just jump to a (usually wrong) answer, mimicking the short, attention-grabbing, non-reflective style of the junk data they were fed.

Can the rot be cured?

This is the most worrying part of the study: not really. The researchers tried two different ways to “heal” the brain-rotted models, and neither was fully successful.

    1. Training-free reflection: They tried to get the models to “reflect” on their mistakes and fix them. This failed. The models’ “internalized cognitive decline” was so deep that they were unable to even identify their own reasoning failures.
    2. Post-hoc tuning: They tried to “wash out” the bad training by re-training the models on a massive amount of clean, high-quality instruction data. While this helped, it couldn’t restore the models’ original capabilities. Even after scaling the “clean” data to 4.8 times the amount of the junk data, a large performance gap remained.

The findings provide powerful, causal evidence that data quality is a critical driver of AI capability and safety. The damage, once done, appears to be deeply internalized. This suggests that simply scraping the internet for ever-larger datasets is a dangerous path, and it motivates the need for routine “cognitive health checks” for AI models, lest they, too, fall victim to the internet’s junk food.


