It is really easy to think of LLMs "consuming" training data, and this lends itself to all sorts of metaphors:
If large language models learn from the same internet firehose, the question becomes unavoidable: what happens when we keep feeding models the digital equivalent of junk food?
As compelling as this metaphor is, I wonder how useful it will be. The authors of this paper certainly seem to think it's useful. Also important here is how one defines "junk data":
- M1: Engagement Degree — measures how popular and short a post is. Highly liked, retweeted, and replied-to content (especially if very brief) mirrors attention-grabbing but shallow information that fuels doomscrolling. These were labeled as junk; longer, less viral posts became the control.
- M2: Semantic Quality — evaluates how sensationalized or superficial the text is. Posts full of clickbait language (“WOW,” “LOOK,” “TODAY ONLY”) or exaggerated claims were tagged as junk, while fact-based, educational, or reasoned posts were chosen as control.
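To make the M1 split concrete, here is a minimal sketch of how such a labeling rule might look. The cutoffs and the `Tweet` fields are my own stand-ins for illustration, not the paper's actual pipeline, which works from real Twitter/X engagement statistics:

```python
# A rough sketch of M1-style (engagement degree) labeling.
# The thresholds below are hypothetical, not the paper's values.
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int
    replies: int


def m1_label(tweet: Tweet, engagement_cutoff: int = 500, length_cutoff: int = 30) -> str:
    """Label a tweet 'junk' (short and highly engaged) or 'control' (longer, less viral)."""
    engagement = tweet.likes + tweet.retweets + tweet.replies
    is_short = len(tweet.text.split()) < length_cutoff
    if engagement > engagement_cutoff and is_short:
        return "junk"
    if engagement <= engagement_cutoff and not is_short:
        return "control"
    return "excluded"  # ambiguous posts fall into neither training set


print(m1_label(Tweet("WOW you won't believe this", likes=12_000, retweets=3_400, replies=900)))
# -> junk
```

M2, by contrast, needs a semantic judgment of how sensationalized or superficial the text is, which is much harder to reduce to a simple threshold rule like this.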
I am a little suspicious that "junk data" is really just repetitive data or data that is very homogeneous. But anyhow, here's what they did in their paper:
We propose and test the LLM Brain Rot Hypothesis: continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs). To causally isolate data quality, we run controlled experiments on real Twitter/X corpora, constructing junk and reversely controlled datasets via two orthogonal operationalizations: M1 (engagement degree) and M2 (semantic quality), with matched token scale and training operations across conditions.

Contrary to the control group, continual pre-training of 4 LLMs on the junk dataset causes non-trivial declines (Hedges' g > 0.3) on reasoning, long-context understanding, safety, and inflating "dark traits" (e.g., psychopathy, narcissism). The gradual mixtures of junk and control datasets also yield dose-response cognition decay: for example, under M1, ARC-Challenge with Chain Of Thoughts drops 74.9 → 57.2 and RULER-CWE 84.4 → 52.3 as junk ratio rises from 0% to 100%.

Error forensics reveal several key insights:
- Thought-skipping as the primary lesion: models increasingly truncate or skip reasoning chains, explaining most of the error growth.
- Partial but incomplete healing: scaling instruction tuning and clean data pre-training improve the declined cognition yet cannot restore baseline capability, suggesting persistent representational drift rather than format mismatch.
- Popularity as a better indicator: the popularity of a tweet, a non-semantic metric, is a better indicator of the Brain Rot effect than its length in M1.
Together, the results provide significant, multi-perspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine "cognitive health checks" for deployed LLMs.
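A quick aside on the effect size they cite: Hedges' g is a bias-corrected standardized mean difference, so g > 0.3 means the junk-trained models land roughly a third of a pooled standard deviation below the controls. Here is a minimal sketch of the computation; the benchmark scores are made up, only the formula is standard:

```python
# Hedges' g: bias-corrected standardized mean difference between two groups.
import math


def hedges_g(control: list[float], treatment: list[float]) -> float:
    n_c, n_t = len(control), len(treatment)
    mean_c, mean_t = sum(control) / n_c, sum(treatment) / n_t
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)    # sample variances
    var_t = sum((x - mean_t) ** 2 for x in treatment) / (n_t - 1)
    pooled_sd = math.sqrt(((n_c - 1) * var_c + (n_t - 1) * var_t) / (n_c + n_t - 2))
    d = (mean_c - mean_t) / pooled_sd               # Cohen's d
    correction = 1 - 3 / (4 * (n_c + n_t) - 9)      # small-sample bias correction
    return d * correction


# Hypothetical benchmark scores for control-trained vs. junk-trained runs
control_scores = [74.9, 73.1, 75.4, 72.8]
junk_scores = [57.2, 59.5, 56.1, 58.8]
print(round(hedges_g(control_scores, junk_scores), 2))
```

The paper's conclusion restates the takeaway: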
In this work, we introduced and empirically validated the LLM Brain Rot Hypothesis, demonstrating that continual exposure to junk data—defined as engaging (fragmentary and popular) or semantically low-quality (sensationalist) content—induces systematic cognitive decline in large language models. The decline includes worse reasoning, poorer long-context understanding, diminished ethical norms, and emergent socially undesirable personalities.

Fine-grained analysis shows that the damage is multifaceted in changing the reasoning patterns and is persistent against large-scale post-hoc tuning. These results call for a re-examination of current data collection from the Internet and continual pre-training practices. As LLMs scale and ingest ever-larger corpora of web data, careful curation and quality control will be essential to prevent cumulative harms.