This is the inevitable collision of two trends: AI making it trivially easy to generate video at scale, and YouTube's moderation systems failing to catch content unlike anything they were trained on. Parents are discovering AI-generated "educational" videos teaching children dangerous behaviors like playing in traffic and eating toxic foods.
The videos slip through content moderation because they're algorithmically generated faster than humans can review them. And unlike traditional harmful content, these videos look educational. Bright colors, catchy songs, simple lessons. Except the lessons are wrong. Dangerously wrong.
One video shows children walking in roads with approaching cars, teaching that "green means right" instead of "go." Others depict babies consuming whole grapes and honey, both dangerous for infants. Raw elderberries, which are toxic when uncooked, appear in videos marketed as nutrition education. A sing-along about US states features distorted names like "Oklolodia" and "Louggisslia," presented as factual information.
"I think of this as toddler AI misinformation at an industrial scale," a University of Chicago pediatrics professor told reporters. A former Sesame Street producer called the content "downright dangerous."
Young children learn through visual repetition. Show them something enough times in an engaging format, and they'll internalize it as truth. That's how educational TV works when it's done right. But it's also how AI-generated misinformation works when it's done at scale.
YouTube has begun deleting channels with billions of collective views and testing features to combat what it's internally calling "AI slop." It's surveying users about whether content "feels like" AI-generated spam, and testing preview features to counter clickbait thumbnails. The specific videos documented in recent reports have been removed.
But here's the problem: for every channel YouTube deletes, a dozen more can be generated overnight. The economics are too compelling. AI video generation is essentially free at scale. Kids' content gets high engagement. Advertisers pay premium rates for children's programming. The math works even if 90% of your channels get banned.
The technology is impressive. The question is whether anyone needed the ability to generate infinite children's content with zero editorial oversight.
This isn't a moderation problem that can be solved by hiring more reviewers. The content is being created faster than any human workforce can evaluate it. It's not even a problem that better AI detection can solve, because the generators and detectors are in an arms race that the generators are currently winning.
The real solution is changing the incentive structure. If AI-generated content couldn't be monetized without human review, most of this would disappear. But that would require YouTube to leave money on the table. And if my time in tech taught me anything, it's that platforms don't voluntarily leave money on the table.
In the meantime, parents need to know: that educational video your toddler is watching? Check it carefully. Because it might have been generated by an algorithm that doesn't know the difference between education and poisoning.

