Just a few weeks after a New York Times investigation found that YouTube’s algorithm is pushing AI-generated content targeting toddlers and preschoolers, 200 organizations and experts have sent an open letter to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, asking them to crack down on AI kidslop.
The letter comes from children’s advocacy group Fairplay and expresses “serious concern” about the propagation of AI videos on YouTube and YouTube Kids. Its signatories include 135 organizations, among them the American Federation of Teachers and the American Counseling Association, plus individual experts such as child psychiatrists and educators.
These organizations and experts say generated slop “harms children’s development by distorting their sense of reality, overwhelming their learning processes and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development.”
They also cite video elements seen beyond AI-generated content: loud voices, bright colors, fast pacing, and clickbait titles, all of which are designed to keep viewers’ attention, and all of which are commonplace in this era of MrBeastification.
“These harms are particularly acute for young children,” the letter reads.
Rachel Franz, the Director of Fairplay’s Young Children Thrive Offline program, said YouTube and YouTube Kids are “designed to maximize children’s time online,” and that YouTube’s recommendation algorithm “[p]ushing AI slop onto young children is just another testament [to that].”
“AI slop hypnotizes young children, making it hard for them to get off their screens and move onto essential activities like play, sleep and social interaction,” she added. “What’s more, YouTube’s algorithm makes it impossible for kids to avoid AI slop.”
Her allegation is backed by stats beyond the NYT‘s report: Last December, video editing company Kapwing found that more than 20% of videos shown to new YouTube users are low-quality AI slop.
Notably, almost exactly a year ago, YouTube formed a coalition with nearly 20 kids’ content creators and distribution companies, including Pinkfong, WildBrain, The Wiggles, and Cocomelon owner Moonbug, that was specifically aimed at limiting the spread of low-quality content on YouTube and beyond.
Fairplay and its co-signers are asking YouTube to mitigate potential harm in a few ways:
- Clearly label all AI-generated content
- Ban all AI-generated content from YouTube Kids
- Ban all AI-generated content from being recommended to users under 18, even on YouTube’s main site
- Give parents an option to turn off AI-generated content, even if their child searches for it
That first bullet point requires a little more context. You’ve probably seen AI labels on YouTube, but those labels are only required if the video “makes a real person appear to say or do something they didn’t do,” “alters footage of a real event or place,” and/or “generates a realistic-looking scene that didn’t actually occur,” per YouTube’s current disclosure rules.
Footage that is completely fake but doesn’t break those three rules does not need to be labeled as AI-generated.
Google spokesperson Boot Bullwinkle told the Associated Press that YouTube has “high standards for content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels.”
He also said that parents already have “the option to block channels.”
“Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content,” he added. “We’re always evolving our approach to stay current as the ecosystem evolves.”
YouTube said it is actively working on developing AI labels for YouTube Kids, per the AP. It’s not clear how effective those labels will be for kids who can’t read.
YouTube is in a complicated position here. Google, like other big tech company/social media platform hybrids, has gone whole-hog on generative AI and is hoping to make a boatload of cash on the bandwagon. But the same tools it’s developing and endorsing are enabling prompters to clog its platform with garbage, crowding out human creators and seemingly edging us toward an AI Elsagate.
How do you have one without the other? That’s the problem YouTube now has to solve.