Everyone in the AI community bought the "we're different" messaging from Anthropic. Turns out we were buying vaporware - ethical vaporware, but vaporware nonetheless.
New reporting reveals that Anthropic's Claude AI was used by the US military in recent Iran operations, despite the company's carefully cultivated image as the ethical alternative to OpenAI. According to documents obtained by Gizmodo, Claude has been deployed for "intelligence assessments, target identification and simulating battle scenarios" by the Pentagon's Central Command.
This is a major credibility crisis for a company that built its entire brand on being the responsible AI lab. Anthropic raised billions from investors who believed they were funding a more ethical approach to AI development. The company emphasized "constitutional AI" and safety research, positioning itself as the grown-up alternative to OpenAI's move-fast-and-break-things mentality.
But here's what the documents actually show: Anthropic drew its lines around hypothetical future scenarios - "killer robots" and mass surveillance systems that don't exist yet - while approving the military applications happening right now.
CEO Dario Amodei said the company is "interested in working with them [the military] as long as it is in line with our red lines." The problem? Those red lines were drawn conveniently far from where the Pentagon is actually operating.
The Secretary of Defense called Anthropic's public positioning duplicitous, and he's not entirely wrong. The company kept up its safety-first marketing while continuing to provide services for active military operations involving targeting and intelligence.

