Turns out AI’s digital diet of viral claptrap gives it a permanent brain fog—guess intelligence isn’t contagious after all.
As AI increasingly learns from web data dominated by synthetic and viral content, researchers warn of a “Zombie Internet” effect, where models degrade and perpetuate low-quality, engagement-optimized junk data in a feedback loop.
A new study by researchers from the University of Texas and Purdue University suggests that large language models (LLMs) can suffer irreversible cognitive decline, termed "brain rot", when repeatedly trained on viral, engagement-driven social media content.
This work, published in October 2025, warns that generative AI (GenAI) chatbots exposed to a prolonged diet of low-quality online data show diminished reasoning accuracy, weaker long-context understanding, and reduced ethical consistency, fundamentally eroding their capacity for coherent reasoning over time.
The scientists tested their hypothesis by feeding four distinct AI models weeks of data from Twitter/X, carefully separating highly viral short posts tagged as “junk” from longer, more substantive content.
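To make the setup concrete, here is a minimal sketch of how a corpus might be split along those lines, with short, highly viral posts routed to a "junk" bucket and everything else to a control bucket. The field names, thresholds, and sample posts are illustrative assumptions, not the study's actual selection criteria or pipeline.

```python
# Illustrative sketch only: splits a corpus of posts into "junk" (short, highly
# viral) and "control" (longer or low-engagement) buckets. Field names and
# thresholds are assumptions for illustration, not the study's actual criteria.

def split_corpus(posts, viral_threshold=500, max_junk_words=30):
    """posts: iterable of dicts with 'text', 'likes', and 'retweets' keys."""
    junk, control = [], []
    for post in posts:
        engagement = post.get("likes", 0) + post.get("retweets", 0)
        word_count = len(post["text"].split())
        if engagement >= viral_threshold and word_count <= max_junk_words:
            junk.append(post)       # short, engagement-optimized content
        else:
            control.append(post)    # longer or less viral content
    return junk, control


if __name__ == "__main__":
    sample = [
        {"text": "you won't BELIEVE this", "likes": 12000, "retweets": 4000},
        {"text": "a long thread explaining transformer attention in detail " * 5,
         "likes": 40, "retweets": 3},
    ]
    junk, control = split_corpus(sample)
    print(len(junk), "junk posts,", len(control), "control posts")
```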
Models trained entirely on viral, engagement-optimized data saw their reasoning accuracy on benchmark tests fall sharply from 74.9% to 57.2%, while their long-context comprehension dropped from 84.4% to 52.3%.
This cognitive degradation manifested in a failure pattern called “thought skipping”, where the AI would skip logical reasoning steps and jump to conclusions, producing less structured and more error-prone answers.
Moreover, exposure to viral content altered the AI’s personality-like traits, increasing markers of narcissism and psychopathy while reducing agreeableness and conscientiousness, mirroring psychological effects linked to heavy social media use in humans.
Popularity metrics such as likes and retweets apparently caused more damage than the semantic quality of the posts themselves, suggesting that the engagement-driven nature of the data, rather than its content, is the core toxin. Alarmingly, attempts to restore the models' abilities by retraining them on high-quality data were only partially successful.
The researchers attribute this to "representational drift", a structural change in how the models internally organize information that standard fine-tuning cannot reverse. This implies a lasting, structural form of damage, analogous to the brain atrophy that has been linked to excessive social media consumption in humans.
The study highlights urgent implications for AI safety and training protocols. It recommends implementing routine cognitive health assessments for deployed AI systems and stricter data quality controls during training to guard against cumulative damage. Ultimately, this preliminary research is reframing data quality from a performance optimization concern to a critical safety issue in the development of future AI systems.
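What such a "cognitive health assessment" might look like in practice is sketched below: a small, fixed set of reasoning probes re-run periodically against a deployed model, with any drop against a stored baseline flagged. The probe set, tolerance, and the `query_model` callable are hypothetical placeholders, not something described in the study.

```python
# Illustrative sketch only: a periodic "cognitive health check" that re-runs a
# small fixed set of reasoning probes against a deployed model and flags any
# regression against a stored baseline accuracy.

from typing import Callable

PROBES = [
    {"prompt": "If all bloops are razzies and all razzies are lazzies, "
               "are all bloops lazzies? Answer yes or no.", "expected": "yes"},
    {"prompt": "What is 17 * 24? Answer with the number only.", "expected": "408"},
]

def health_check(query_model: Callable[[str], str],
                 baseline_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Return True if probe accuracy stays within `tolerance` of the baseline."""
    correct = sum(
        1 for probe in PROBES
        if probe["expected"] in query_model(probe["prompt"]).strip().lower()
    )
    accuracy = correct / len(PROBES)
    return accuracy >= baseline_accuracy - tolerance

# Example with a dummy model that happens to answer every probe correctly:
if __name__ == "__main__":
    dummy = lambda prompt: "yes" if "bloops" in prompt else "408"
    print(health_check(dummy, baseline_accuracy=1.0))  # True
```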