The Pentagon designated Anthropic - the company behind Claude AI - as a "supply chain risk" because the company isn't sufficiently "patriotic." Now a First Amendment lawsuit is challenging whether the government can punish companies for what their AI systems are allowed to say.
This is unprecedented territory, and it cuts right to the heart of how we govern AI systems that are being deployed in warfare.
According to a friend-of-the-court brief filed by the Foundation for Individual Rights and Expression (FIRE), Pentagon officials demanded that Anthropic remove safeguards from Claude to enable "fully autonomous weapons development and mass domestic surveillance capabilities." When Anthropic refused, citing its Usage Policy and "Claude's Constitution" - built-in ethical guardrails - the DoD labeled the company a security risk.
Let me be clear: I've watched AI companies navigate the dual-use problem since GPT-2. Every AI lab struggles with this. Your technology can cure cancer or generate disinformation. It can optimize supply chains or plan military operations. Drawing lines is genuinely hard.
But what's happening here isn't a legitimate security review. According to FIRE's brief, Pentagon officials admitted they were targeting Anthropic for lacking "patriotic ideology," a designation that would conveniently favor its competitors. That's not security policy - that's viewpoint discrimination dressed up in national security clothing.
The First Amendment argument is fascinating. FIRE contends that Claude is "fundamentally expressive technology" - a system that talks, explains, summarizes, argues, and refuses. Demanding that Anthropic "change what Claude must and may say" as a condition for government contracts is compelled speech. It's the government forcing ideological compliance.
I'm sympathetic to the military's position that they need tools that work. If you're planning operations, you don't want your AI refusing to help because of overly broad safety filters. But there's a difference between "remove unnecessary restrictions" and "strip out your ethical guardrails so we can build autonomous weapons and run mass domestic surveillance." The first is a procurement negotiation; the second is the government dictating what an expressive system may say.

