A Meta executive has rejected calls for teen social media bans during meetings in New Zealand, arguing other safeguards are better. Experts say the tech giant is dodging responsibility.
The pushback comes as both Australia and New Zealand consider age restrictions on social media platforms, responding to growing concerns about youth mental health and online safety.
Mate, here's what's really happening: Big Tech is fighting a coordinated Pacific regulatory effort, and they're using the same playbook they've deployed globally - promise to do better, argue bans don't work, and delay actual change for as long as possible.
The Meta representative told New Zealand officials that parental controls, age verification tools, and content moderation offer better protection than outright bans for teenagers. The company argued that blocking young people from platforms drives them to less regulated corners of the internet.
It's a familiar argument, and it's not entirely wrong. The problem is that Meta has been making these promises for years while youth mental health has deteriorated and online harms have multiplied.
Experts responding to Meta's position were blunt. One told Stuff that the company is "dodging responsibility" by opposing bans while failing to implement effective safeguards. Another noted that Meta's track record on youth safety doesn't inspire confidence in voluntary measures.
Australia has taken a harder line than most countries. The government has floated proposals to ban social media for children under 16, citing evidence linking platform use to anxiety, depression, and self-harm among young people.
New Zealand is watching closely. While Wellington hasn't committed to a specific age limit, officials are clearly exploring options. Meta's visit to explain why bans are bad policy suggests the company is worried about a trans-Tasman consensus emerging.
The tech giant's concern is justified. If both countries implement similar restrictions, it creates a Pacific template that could spread to other nations. That's precisely what happened with Australia's media bargaining code, which forced platforms to pay news publishers and inspired similar laws elsewhere.
Meta argues that age verification technology isn't mature enough to implement effectively. The company warns that requiring users to prove their age raises privacy concerns and could create new vulnerabilities.
Critics respond that Meta has the resources to solve these technical challenges if it chooses to. The company spends billions on artificial intelligence and content moderation - it could invest in robust age verification if protecting young users were genuinely a priority.
The debate reflects a broader tension between tech platforms and democratic governments. Companies like Meta operate globally but face a patchwork of national regulations. They prefer voluntary standards they can shape over mandatory rules imposed by parliaments.
Governments, meanwhile, face pressure from parents, educators, and health experts to do something about youth mental health. Social media bans are blunt instruments, but they're tangible actions politicians can point to when asked what they're doing about the problem.
The evidence on whether bans work is mixed. Some research suggests that limiting social media access improves youth wellbeing. Other studies find that bans are easily circumvented and that digital literacy education works better than prohibition.
What's clear is that the current system isn't working. Platforms say they prohibit users under 13, but enforcement is minimal. Kids lie about their age, parents don't monitor usage, and companies have little incentive to aggressively police their user bases.
Australia and New Zealand are small markets for Meta - together they represent a tiny fraction of the company's global user base. But both countries have outsized influence on tech policy, particularly in the Asia-Pacific region.
If Canberra and Wellington align on social media age restrictions, other Pacific nations may follow. That creates momentum for regional standards and puts pressure on larger markets like the European Union and United States to act.
Meta knows this, which is why the company is fighting the bans now, while they're still proposals rather than law. Once legislation passes, the fight shifts to implementation and enforcement - battles that are harder to win.
For Pacific tech policy, this is a test case. Can small democracies regulate Big Tech effectively? Can they coordinate across borders to create meaningful standards? Or will companies use their scale and resources to dilute or avoid regulations they don't like?
The answer matters not just for social media, but for digital governance broadly. From data privacy to artificial intelligence to platform accountability, the rules written now will shape the internet for decades.
Mate, there's a whole continent and a thousand islands down here. And we're not waiting for Silicon Valley to tell us how to protect our kids.