Lazy Language Model

Training AI on “Brain Rot” Content Causes Lasting Cognitive Damage, New Paper Finds

Skibidi rizz, folks!
Researchers found AI models trained on short-form, clickbait-y content experienced lasting cognitive decline.

If you’ve spent any time around kids lately, you’ve probably heard about “brain rot.” Named Oxford Word of the Year in 2024, it’s defined as the “supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”

As it turns out, it’s not just human minds getting rotted by low-effort memes like “6-7” and “skibidi toilet”: in new research, a team from Texas A&M University, the University of Texas at Austin, and Purdue University found that “continual exposure to junk web text induces lasting cognitive decline in large language models (LLMs).”

The resulting study has yet to be peer-reviewed, but its findings suggest that an AI model's reasoning and contextual understanding decline as it's trained on brain rot material. Basically, the researchers fed LLMs viral and clickbait-y posts from X-formerly-Twitter, and found that the models essentially started to abandon parts of their thinking processes, a phenomenon the team termed “thought-skipping,” in which “models increasingly truncate or skip reasoning chains, explaining most of the error growth.” Worse yet, the researchers found that exposing AI to brain rot content also seemed to nudge it toward psychopathy and narcissism.
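For readers curious what “continual exposure to junk web text” actually looks like mechanically, here is a minimal sketch using the Hugging Face transformers library. Everything in it is an illustrative assumption rather than the paper's actual code: the model ("gpt2"), the toy posts, and the hyperparameters are stand-ins for the study's real pipeline, which would follow a run like this with reasoning and other benchmark evaluations.

```python
# Hypothetical sketch: continually pre-training a small causal LM on
# low-quality "junk" web text. Model, data, and settings are illustrative
# assumptions, not the study's actual setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

# Toy stand-in for scraped viral/clickbait posts.
junk_posts = ["you WON'T BELIEVE what happened next...",
              "ratio + L + skibidi, no cap"] * 500

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": junk_posts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="junk-tuned", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) training targets
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the study would re-run reasoning benchmarks after this step
```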

None of that is entirely surprising. In humans, studies have linked low-effort, brain rot content to academic procrastination, diminished cognitive function, dissociative states, and even worse physical health. Social media, which once felt like a place to connect with others, increasingly feels like an endless slop feed that's making us dumber, sadder, slower, and less healthy.

Though humans and machines learn differently, there are similarities. Both ingest existing material and learn patterns from it, so the lower the quality of those inputs, the less accurately both biological and digital brains can map what they've learned onto novel cognitive challenges.

A grim addendum: even when the researchers attempted to “heal” the digital malnutrition of the LLMs by introducing higher-quality content, the damage persisted.

“The gap implies that the Brain Rot effect has been deeply internalized, and the existing instruction tuning cannot fix the issue. Stronger mitigation methods are demanded in the future,” the researchers warned.
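As a rough illustration of that remediation attempt, the sketch below continues from the one above: the junk-exposed model gets a further round of tuning on clean, well-formed text. The example data here is invented for illustration; per the researchers, this kind of fix narrows but does not close the gap with a never-exposed baseline.

```python
# Continuing from the previous sketch: a hypothetical "healing" step that
# tunes the junk-exposed model on clean, step-by-step text. Data is invented.
clean_texts = [
    "Question: What is 17 * 6?\n"
    "Let's work step by step. 17 * 6 = (17 * 5) + 17 = 85 + 17 = 102.\n"
    "Answer: 102.",
] * 500

clean_dataset = Dataset.from_dict({"text": clean_texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)

healer = Trainer(
    model=model,  # the junk-exposed model from the previous sketch
    args=TrainingArguments(output_dir="healed", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=clean_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
healer.train()  # per the paper, benchmarks re-run after this still show a deficit
```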

This study underscores the dangers of training AI models on unregulated trash data, especially when research is already starting to show that humans who rely too heavily on AI end up with diminished cognitive abilities of their own.

More on AI: Using AI Increases Unethical Behavior, Study Finds