Two months after Australia's landmark under-16 social media ban took effect, the policy isn't working—at least according to the teenagers it's supposed to protect.
The ABC reports widespread workarounds, minimal enforcement, and teenagers continuing to access Instagram, TikTok, and Snapchat with little difficulty. The law, which the Albanese government touted as world-leading child protection legislation, has become yet another example of politicians legislating technology they don't understand.
Mate, classic Australian political overreach—big announcement, rushed legislation, no practical enforcement mechanism. And now we're discovering that telling teenagers they can't do something without actually preventing them from doing it is, shockingly, ineffective.
The ban, which passed Parliament in December with bipartisan support, requires social media platforms to prevent users under 16 from creating accounts. Platforms face fines up to $50 million for systemic failures to comply. The government framed it as protecting young people from mental health harms linked to social media use.
But the law has a glaring problem: it relies almost entirely on age verification technology that doesn't reliably work. Teenagers told the ABC they simply lied about their age when creating accounts, used VPNs to mask their location, or continued using accounts created before the ban. One 15-year-old from Melbourne said she "just put in a different birthdate" and was back on Instagram within minutes.
The platforms themselves have largely implemented basic age-declaration systems—users type in their birthdate, the platform accepts it, everyone moves on. More sophisticated age verification, like biometric scanning or government ID checks, remains optional and is rarely used. Privacy advocates have warned that mandatory biometric verification would create massive data security risks, particularly for young people.
So we've created a law that doesn't actually prevent teenagers from accessing social media, doesn't protect their privacy if enforced properly, and has no meaningful consequences for non-compliance. Brilliant.
The eSafety Commissioner, Julie Inman Grant, has acknowledged the challenges but argues the law is still valuable as a "symbolic statement" about protecting children online. That's policy-speak for: we know it doesn't work, but at least we tried.
Some experts have warned that the ban might actually make teenagers less safe. If young people are accessing social media through workarounds, they're less likely to report problems or seek adult help—turning what was open behavior into something covert. Dr. Joanne Orlando, a technology researcher at Western Sydney University, told the ABC that "when you push something underground, you lose the ability to guide and support."
The international implications matter here. France, Norway, and several US states are watching Australia's experiment closely. If Australia can't make an under-16 ban work—with all our resources and political will—it suggests the entire approach is flawed.
Youth mental health advocates are divided. Some argue that even an imperfect ban reduces social media exposure for some young people. Others say the government would have been better off regulating platform algorithms, requiring transparency on content moderation, and funding digital literacy programs—policies that address the actual harms rather than pretending teenagers won't find workarounds.
The Albanese government, facing an election within months, has so far refused to acknowledge the policy's shortcomings. A spokesperson for the Minister for Communications said the government is "monitoring implementation" and "working with platforms to ensure compliance." That's code for: we're aware it's not working, but admitting that would be politically embarrassing.
Meanwhile, teenagers in Sydney, Melbourne, Brisbane, and everywhere else are continuing to scroll, post, and message—just like they did before the law passed. The only difference is they're now technically breaking the law while doing it.
Mate, if you wanted to protect young people online, you'd regulate the platforms properly—limit data collection, ban manipulative algorithms, require content transparency. Instead, we got theater. Expensive, legally dubious theater that doesn't actually protect anyone.
Two months in, the social media ban is a policy failure wrapped in good intentions. The question now is whether the government will admit it and try something that actually works—or just pretend everything is fine and hope voters don't notice.

