Indonesia, the world's fourth most populous country, has implemented a sweeping ban on social media access for children. For years, tech companies insisted that self-regulation was enough. Now governments are taking matters into their own hands, and Indonesia's huge population makes this one of the most significant policy moves yet.
The ban covers most major platforms—Facebook, Instagram, TikTok, YouTube. Kids below a certain age simply can't create accounts or access the services. It's a blunt instrument, but governments are increasingly deciding that blunt instruments are necessary because the platforms won't fix the problem themselves.
This is part of a growing global movement. Australia passed similar legislation. France requires parental consent. The UK has been pushing for age verification. What started as a fringe concern—"maybe kids shouldn't be on social media all day"—is becoming mainstream policy.
The tech industry's response has been predictable: this is impractical, it's unenforceable, it violates free expression, and parents should make these decisions. These are all the arguments you'd expect from companies whose business model depends on capturing attention as early as possible.
But here's the thing: The platforms had years to address this proactively. They could have built better parental controls, limited algorithmic recommendations for minors, reduced engagement optimization for young users. They didn't, because those changes hurt growth metrics. So now governments are doing it for them.
Enforcement is the obvious challenge. Age verification is hard. Kids lie about their age. VPNs exist. Any sufficiently motivated teenager will find a workaround. But the goal isn't perfect enforcement—it's creating a norm shift. When the government says "this isn't for kids," it changes how parents and schools approach social media access.
The bigger question is whether this actually helps. Banning access doesn't teach kids how to use social media responsibly. It doesn't address the underlying issues—comparison culture, cyberbullying, algorithmic manipulation. It just delays exposure until they hit the age threshold, at which point they're thrown into the deep end with no preparation.
A better approach would be age-appropriate versions of these platforms—reduced features, no algorithmic feeds, heavy moderation, actual privacy protections. Some platforms have tried this. YouTube Kids exists, though it's had its own problems. Instagram has talked about teen-specific features but hasn't shipped anything meaningful.
The reason is obvious: Kids are valuable users. They establish habits early. They influence family purchasing. They're tomorrow's power users. Platforms don't want to create inferior versions for young users—they want kids on the regular platform, engaging with the full algorithmic experience, building dependency.
Indonesia's ban forces the issue. With 275 million people, it's not a market platforms can ignore. If the ban sticks and other countries follow, tech companies will have to choose: Build age-appropriate products or lose access to young users entirely.
I'd bet on the latter: platforms will walk away before they adapt. The economics of engagement optimization don't work once you remove the most malleable users. So instead of building age-appropriate products, platforms will fight these bans, lobby for weaker regulations, and try to shift the narrative back to parental responsibility.
But the momentum has shifted. Governments worldwide are deciding they don't trust tech companies to regulate themselves. Whether bans are the right approach is debatable. What's clear is that the era of platforms doing whatever they want with child users is ending.