Three teenage girls are suing xAI, Elon Musk's AI company, after Grok generated sexualized deepfake images of them that were shared across Discord and Telegram channels.
The lawsuit, filed in federal court this week, alleges that xAI released Grok's image generation model with insufficient safeguards and failed to prevent users from creating sexually explicit images of minors. The plaintiffs, represented by their parents, claim the images were created from their school photos and shared in groups dedicated to trading deepfake content.
This is the first major lawsuit to target an AI company specifically for generating deepfakes of minors. And it's a test case for whether existing laws around child exploitation apply to AI-generated images.
Here's what makes this case different: the images aren't of real abuse. They're AI-generated. The teens were never photographed in sexual situations. But the images depict them in those situations, using their faces mapped onto AI-generated bodies. Under federal law, that might still be illegal—child sexual abuse material doesn't require an actual child to be abused, just a realistic depiction of one.
But there's ambiguity. In 2002, the Supreme Court in Ashcroft v. Free Speech Coalition struck down parts of the Child Pornography Prevention Act of 1996, which banned "virtual" child pornography, on First Amendment grounds. The court said that if no real child is harmed, the images might be protected speech. Congress responded with the PROTECT Act of 2003, a narrower law targeting images that are "indistinguishable" from real children. That's the law this case will test.
The plaintiffs argue that Grok's images meet that standard: they start from real photos of real teens and generate realistic bodies to match. xAI's likely defense is that the company isn't responsible for user-generated content, citing the protections of Section 230 of the Communications Decency Act.
But Section 230 has limits. Platforms can be held liable if they actively facilitate illegal activity. And the lawsuit argues that's what xAI did: by shipping an image generator with insufficient safeguards, the company didn't just host the content, it helped create it, making it complicit.
