In an age when more and more young children are hooked on digital devices, YouTube is bombarding them with AI slop.
After investigating over 1,000 YouTube shorts recommended to young children by the video platform, The New York Times found that the algorithm is heavily pushing AI-generated content that explicitly targets “toddlers” and “preschoolers.”
On top of being nonsensical, the videos are often presented under the guise of being educational. Two common themes are teaching kids about the alphabet and animals — subjects that, conveniently, provide threadbare structures for easily produced, low-effort slop.
The educational framing is a stretch. One video highlighted by the NYT shows a gooey liquid being squeezed into a glass of water before turning into different animals representing each letter of the alphabet — only the animals are bizarre chimeras with mermaid tails. In another, set to an off-key rendition of “Old MacDonald Had a Farm,” a massive egg rolls out of a barn door before hatching an impossibly proportioned horse. And in another alphabet short, a quail transforms into an aerial drone, and a rhino into a dump truck that bears the megafauna’s head.
At best, these videos are redundant regurgitations of mindless “Cocomelon”-style content. At worst, experts fear, they could be actively harming children’s cognitive development.
“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” Jenny Radesky, a developmental behavioral pediatrician and associate professor of pediatrics at the University of Michigan Medical School, told the NYT. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”
The hyper-realistic visuals used in many AI videos, Radesky speculated, could inhibit a young child’s ability to distinguish fantasy from reality.
It’s not a niche issue. YouTube’s algorithm seems astoundingly eager to recommend AI slop; in its tests, the NYT began by watching popular children’s channels, then scrolling through Shorts.
More than 40 percent of the videos that followed in a fifteen-minute session appeared to have AI visuals. That’s striking: instead of recommending more traditional children’s content, the algorithm, seemingly by default, gravitated toward AI.
“When I was watching channels like ‘Ms. Rachel’ and ‘Bluey,’ I was expecting to see content that would be more along the lines of those programs, more ‘Bluey’ shorts,” said the NYT reporter behind the investigation, Arijeta Lajka, in an interview on the newspaper’s Hard Fork podcast. “And I wasn’t really seeing that.”
While not all parents may share the same reservations about AI-generated imagery, it’s hard to deny what the tech is good at, and what it is by and large actually being used for: quickly churning out short-form, often absurd content with no plot or message. That, experts say, is the opposite of what a child should see. Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, told the NYT that children instead learn best from media with a clear narrative, and with characters and scenes that relate to real life.
Hyperreal clips of animals jumping off a diving board, for example, offer few relatable real-life elements for a child to glean. In theory, someone could make a thoughtful educational video for children with AI, but that’s not what’s proliferating on YouTube Shorts and rapidly racking up views.
“At least when you’re watching a normal cartoon, there could be moments of relative calm, or a story might unfold over a few minutes,” said journalist Casey Newton during the Hard Fork interview with Lajka, sounding moderately disturbed. “When you’re just showing raw visual stimuli and bombarding a kid with it, it just doesn’t seem it’s probably that good for them.”
Newton speculated that slop makers love the alphabet because the subject is ripe for stitching together a bunch of short clips while retaining “some sort of coherence.”
For now, the long-term effects of watching AI shorts are unclear, even if the videos appear shamelessly designed to be as addictive and mind-numbing as possible. “These do strike me as something that are made to really get in your head,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina, told the NYT. “It may even be harmful, but we need more data.”
But other research has suggested that different forms of AI usage, such as relying on a chatbot, can impact cognitive skills like critical thinking even among adults. And still more research has explored how exposing children to “brain rot” content can have negative effects, including a possible link between screen time and ADHD diagnoses.
YouTube requires creators to disclose when they’ve used AI to create “realistic content,” but it has no AI labeling requirements for the cartoonish style used by Shorts that target children. That leaves the burden on parents to closely monitor what their kids view on an app that provides an endless scroll of content. The smartest and most practical solution would be to not let children use these apps at all, but that ignores the reality that many parents rely on digital devices to keep their children entertained.
More on AI: Children’s Toys Are Shipping With Adult AI Inside Them