Anthropic, the AI company behind the Claude model, claims its business is in peril after the Department of Defense officially blacklisted it as a supply-chain risk—a designation that could cost the company billions and set a precedent for how the government manages AI partnerships.
The dispute began when Anthropic declined Pentagon contracts, citing concerns about deploying AI in military contexts without adequate governance frameworks. Unlike OpenAI, which immediately signed on to work with the military, Anthropic held firm: ethical and safety frameworks needed to be in place before the technology was deployed in defense applications.
The Pentagon's response was swift and punitive: it designated Anthropic a supply-chain risk, effectively blocking federal agencies and contractors from using Claude in any capacity. For a company competing against OpenAI and Google in the commercial market, losing access to government customers could be existential.
What makes this particularly interesting from a business perspective is the leverage dynamic. The Pentagon essentially has the power to pick winners and losers in the AI industry by controlling access to what's likely to be one of the largest customer segments: the federal government and its vast network of contractors.
If you're a startup competing with OpenAI for commercial contracts, and OpenAI has Defense Department approval while you don't, that's not just a competitive disadvantage—it's a signal to enterprise customers about which AI providers are "safe" choices. The designation becomes a form of de facto certification, regardless of the underlying technical merits.
From my time in fintech, I saw similar dynamics play out when companies competed for regulatory approval. The first-mover advantage isn't just about getting to market first; it's about shaping the regulatory framework in ways that make it harder for competitors to follow.
But there's a deeper issue here that goes beyond competitive dynamics. Over 500 employees from OpenAI and Google signed an open letter supporting Anthropic's position, arguing that governance frameworks for military AI need to be developed before deployment, not after. These are engineers who understand the technical limitations and failure modes of these systems.
The letter highlighted two specific concerns, both scenarios where AI systems could be deployed in ways that violate civil liberties or international law, with the details classified in ways that prevent public scrutiny.