Leaked documents reveal Meta is restricting its AI chatbot from providing abortion information to all users, not just teens. The policy appears to go beyond legal requirements, raising serious questions about corporate self-censorship and access to health information.
This is where AI content policies become de facto public policy.
According to internal documents obtained by Mother Jones, Meta's AI chatbot has been instructed to refuse questions about abortion access, clinic locations, and related health information. The restrictions apply universally—not just to minors, but to all users including adults seeking information about legal medical procedures.
The leaked policy documents show that Meta is taking an exceptionally cautious approach that goes well beyond what the law requires. There is no federal requirement for Meta to block abortion information for adults. Some states restrict how platforms may provide such information to minors, but none of those laws extends to adults.
So why is Meta doing this? The most likely explanation is legal risk aversion at scale. When you operate in all 50 states and face different regulatory regimes, the easiest solution is often to apply the most restrictive policy everywhere. It's simpler to block abortion information for everyone than to build state-by-state, age-verified, legally compliant systems.
But simpler for Meta doesn't mean better for users.
Consider what this means in practice. An adult woman in California, where abortion is legal and protected, asks Meta's AI chatbot for information about reproductive health services. The AI refuses. Not because providing that information would be illegal in California—it wouldn't be. But because Meta has decided that the compliance complexity isn't worth the risk.
Meta's response to the leak has been measured. They emphasize their commitment to safety and responsible AI deployment. They note that sensitive health topics require careful handling. All of which is true, but none of which explains why adults can't access information about legal medical procedures.
What makes this particularly troubling is the scale. Meta's AI chatbot is being integrated across Facebook, Instagram, and the company's other apps. We're talking about billions of users. When a handful of companies control AI assistants used by that many people, their content policies don't just affect their products—they shape what information is accessible at all.
