EVA DAILY

THURSDAY, FEBRUARY 26, 2026

TECHNOLOGY

Meta's AI Chatbot Won't Discuss Abortion—Even for Adults Seeking Legal Information

Leaked documents show Meta is restricting its AI chatbot from providing abortion information to all users, not just teens. The policy goes beyond legal requirements, raising questions about corporate self-censorship and what happens when a handful of companies control information access for billions of people.

Aisha Patel


3 hours ago · 4 min read


Photo: Unsplash / Dima Solomin


This is where AI content policies become de facto public policy.

According to internal documents obtained by Mother Jones, Meta's AI chatbot has been instructed to refuse questions about abortion access, clinic locations, and related health information. The restrictions apply universally—not just to minors, but to all users including adults seeking information about legal medical procedures.

The leaked policy documents show that Meta is taking an exceptionally cautious approach that goes well beyond what the law requires. There's no federal requirement for Meta to block abortion information for adults. Some states have laws restricting how platforms can provide such information to minors, but those laws don't cover adults.

So why is Meta doing this? The most likely explanation is legal risk aversion at scale. When you operate in all 50 states and face different regulatory regimes, the easiest solution is often to apply the most restrictive policy everywhere. It's simpler to block abortion information for everyone than to build state-by-state, age-verified, legally compliant systems.

But simpler for Meta doesn't mean better for users.

Consider what this means in practice. An adult woman in California, where abortion is legal and protected, asks Meta's AI chatbot for information about reproductive health services. The AI refuses. Not because providing that information would be illegal in California—it wouldn't be. But because Meta has decided that the compliance complexity isn't worth the risk.

Meta's response to the leak has been measured. They emphasize their commitment to safety and responsible AI deployment. They note that sensitive health topics require careful handling. All of which is true, but none of which explains why adults can't access information about legal medical procedures.

What makes this particularly troubling is the scale. Meta's AI chatbot is being integrated across Facebook, Instagram, and WhatsApp. We're talking about billions of users. When a handful of companies control AI assistants used by that many people, their content policies don't just affect their products—they shape what information is accessible at all.

This isn't theoretical. For many users, especially those with limited digital literacy or facing language barriers, AI chatbots are becoming primary information sources. If the chatbot won't answer questions about reproductive health, those users may not know where else to turn.

Compare Meta's approach to other platforms. OpenAI's ChatGPT will provide factual information about abortion access, clinic locations, and reproductive health while maintaining appropriate guardrails against harmful content. Google's AI products similarly provide health information with appropriate disclaimers. They've managed to find a balance between safety and access.

Meta has chosen differently. And in doing so, they're making a policy choice that affects public health information access for billions of people.

The broader issue here is platform power over information. We've already seen how Facebook's content moderation policies can shape public discourse. Now we're seeing how AI content policies can restrict access to factual information about legal activities.

This is especially concerning because these policies are often made with minimal transparency and no democratic accountability. Meta decided internally to restrict abortion information. They didn't consult with public health experts. They didn't hold public hearings. They just updated their AI's guardrails and moved on.

The documents also reveal inconsistencies in how Meta handles different sensitive topics. The same chatbot that refuses to discuss abortion will provide information about other controversial medical procedures. The line-drawing appears to be driven more by legal risk calculation than coherent policy principles.

To be clear: AI systems should have guardrails. They shouldn't provide medical advice. They shouldn't encourage harmful behavior. But there's a huge difference between refusing to provide medical advice and refusing to provide factual information about legal medical services.

Meta has collapsed that distinction, and the result is a policy that restricts adult access to health information in ways that have nothing to do with safety and everything to do with legal risk aversion.

The technology is capable of providing this information safely. The question is whether Meta will allow it to. Right now, the answer is no. And billions of users are affected by that choice whether they know it or not.
