In landmark cases in California and New Mexico, juries have found Meta and Google liable for knowingly designing their platforms to addict young users. The verdicts, backed by internal company documents, represent a watershed moment in how platforms are held accountable for their effects on children's mental health.
The question isn't whether social media affects kids anymore. That's settled. The question is what "liable" actually means in practice.
In the Los Angeles case, a 20-year-old plaintiff identified as K.G.M. won $3 million in damages after suffering severe mental health harm from addiction to YouTube and Instagram. In New Mexico, Meta was ordered to pay $375 million in civil penalties for knowingly harming children and concealing child sexual exploitation on its platforms.
But here's what makes these cases different from years of congressional hearings and public hand-wringing: internal documents. The juries saw proof that these companies deliberately exploited teenage psychology.
"They take advantage of the undeveloped frontal cortex of young people and their emotional need for validation by showing them things, not that they want to see, but what they can't look away from," one attorney explained in court.
The companies knew. That's the key finding. Internal documents revealed that Meta and Alphabet deliberately exploited teenagers' "emotional need for validation." They built design features intended to be addictive. They understood their products were harmful but prioritized profits over safety. And they targeted children under 13 despite public claims to the contrary.
These verdicts establish liability based on design decisions rather than hosted content, and that's a fundamental shift. It means platforms can be held responsible not just for what users post, but for how the platform itself is built to influence behavior.

