Child-development specialists warn that low-quality, AI-generated slop can overload young brains and blur reality.
An investigation by The New York Times (NYT) has found that YouTube is increasingly allowing creators to fill children’s feeds with strange, low-quality videos made with AI, intensifying concerns about how automated content is reshaping kids’ media diets.
NYT video journalist Arijeta Lajka reported that during brief viewing sessions on popular kids’ channels, a large share of recommended clips appeared to be AI-generated, often labeled as educational but filled with rapid, disjointed imagery and scarcely coherent storytelling.
Child-development specialists quoted in the report warn that such “slop” content can overload young children’s still-forming brains, blur the boundary between fantasy and reality, and potentially interfere with how they later process real-world information.
The investigation lands amid broader alarms over AI-made kids’ videos on the platform. A December 2025 Bloomberg report showed how creators are using AI chatbots and video generators to cheaply mass-produce toddler content that looks instructional but is mainly nonsense, with one YouTuber bragging that AI does “about 95%” of the work while still bringing in hundreds of dollars a day.
Separate research by video-editing firm Kapwing estimates that more than one-fifth of the videos initially pushed to new YouTube users qualify as low-quality “AI slop”, underscoring how pervasive automated clips have become across the service.
Under mounting scrutiny, YouTube has begun to respond. After the NYT shared examples of AI-generated children’s videos and channels, the company removed some clips, stripped several channels of ad revenue, and blocked them from appearing on YouTube Kids. In January 2026, a separate analysis found that YouTube had deleted or demonetized channels responsible for more than 4.7bn AI-slop views in a single enforcement wave. The company says its systems and monetization rules are designed to penalize spammy, repetitive content, and that it requires disclosure for realistic AI-generated material.
However, critics, including pediatric experts and wary parents, argue that enforcement remains piecemeal and that vast quantities of AI-made kids’ videos still slip through recommendation feeds faster than any oversight can catch them.