The Pentagon is moving to designate Anthropic as a supply-chain risk, while President Trump ordered all federal agencies to "immediately cease" using its technology. Within hours, OpenAI announced a deal to deploy its AI models on classified Defense Department networks. If this feels like watching the government play AI companies against each other, that's because it is.
Here's what's actually happening. Anthropic, the AI safety-focused company behind Claude, has been negotiating with the Pentagon for months. But unlike most defense contractors, which will take any contract that comes their way, Anthropic wanted assurances - specifically, that its models wouldn't be used for fully autonomous weapons or mass domestic surveillance of Americans.
Those aren't unreasonable red lines. They're the kind of ethical boundaries that, frankly, more tech companies should be setting. The military-industrial complex has a long history of taking dual-use technology and pushing it places the original creators never intended.
But the Pentagon doesn't like being told no. According to the administration, it is designating Anthropic as a supply-chain risk - a label typically reserved for foreign adversaries and companies with actual security vulnerabilities. The stated reason? Anthropic's refusal to allow unrestricted military use.
Meanwhile, Sam Altman's OpenAI - which publicly claims to have the same red lines as Anthropic - just signed a deal to put its models on classified networks. Either OpenAI got the assurances Anthropic wanted and the Pentagon is being vindictive, or OpenAI is being less stringent about those red lines than its PR suggests.
I spent four years building a startup before becoming a journalist, and I recognize a pressure campaign when I see one. This isn't about supply-chain security. It's about the government trying to force AI companies to surrender any say in how their technology gets used.
The irony is rich. The same administration that constantly warns about AI safety risks is now punishing the one major AI lab that's actually trying to implement safety boundaries. Anthropic was founded by former OpenAI researchers who left specifically because they wanted to prioritize safety over growth. Now they're being designated a national security risk for... prioritizing safety.
This matters beyond one company. If the government successfully makes an example of Anthropic, every other AI lab will get the message: take defense contracts with no questions asked, or face regulatory punishment. That's not a precedent that leads anywhere good.
The technology is impressive. The question is whether the companies building it will have any say in how it's used - or whether that decision will be made for them.
Anthropic says it will challenge the designation in court. Given that Swiss government agencies rejected Palantir on similar data sovereignty grounds without facing US retaliation, Anthropic can plausibly argue the designation is punitive rather than a genuine security judgment. But this fight is just beginning, and it's one that will define the relationship between AI companies and the military for years to come.
