Court filings in ongoing litigation against Meta reveal that the company's own research found 19% of teen users reported seeing unwanted explicit content on Instagram. That's nearly 1 in 5 teenagers. The data was kept internal while Meta publicly claimed its safety systems were working, raising questions about what else the company knows about harm to minors on its platforms.
This is the pattern Frances Haugen exposed, repeating itself. Meta has the data showing its platforms harm kids. It chooses not to act because fixing the problem would hurt engagement metrics.
What the Court Filings Show
According to The Globe and Mail's reporting, the internal research data emerged in court documents filed in lawsuits against Meta over its handling of teen safety. The company's own studies found that 19% of teen users were exposed to explicit content they didn't request.
This wasn't an external report by critics or activists. This was Meta's internal research team quantifying the problem. They knew. They had the numbers. And they kept building the same systems anyway.
The Engagement Problem
Here's the uncomfortable truth about social media platforms: they're optimized for engagement, not wellbeing. Instagram's algorithm is designed to keep users scrolling. It learns what content keeps each user's attention and serves more of it.
If explicit content drives engagement - even unwanted explicit content that makes users uncomfortable but keeps them on the platform - the algorithm has an incentive to surface it. The system doesn't distinguish between "engaged because interested" and "engaged because disturbed." It just measures time spent.
Meta has spent millions on PR about teen safety features - parental controls, content filtering, age verification, reporting tools. But those are bolt-on features trying to fix a system fundamentally designed to maximize engagement. It's like adding seatbelts to a car designed to accelerate toward walls.

