The U.S. military is using Anthropic's Claude AI to help plan air attacks on Iran. Not to analyze intelligence. Not to run logistics. To plan strikes.
Let that sink in for a moment. An AI system - the same technology that will sometimes confidently tell you Rome is in France - is being used to help make targeting decisions that could kill people.
According to sources familiar with the operations, Claude has become a "crucial tool" for military planners despite ongoing clashes between Anthropic and the Department of Defense over the company's ethical guidelines. Lawmakers are now calling for immediate oversight, and they're right to be alarmed.
Here's what we need to understand about AI in warfare: these systems are impressive pattern-matching machines. They can process vast amounts of intelligence data, identify potential targets, and suggest strike packages faster than human analysts. That's genuinely useful for military operations.
But they're also black boxes that hallucinate. Claude might analyze satellite imagery and identify a weapons depot with 95% confidence. What it can't tell you is whether that confidence score is based on actual weapons signatures or whether it pattern-matched some innocent industrial equipment because the shapes looked similar.
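To make that concrete, here's a toy sketch - invented labels, random scores, nothing to do with Claude's actual internals - of why a classifier's confidence number can look decisive even on garbage input. A softmax has to distribute probability across its fixed label set, so even pure noise comes back wearing a confident-looking percentage:

```python
import numpy as np

# Toy sketch only: made-up labels and random scores, not Claude's
# internals. It shows why a softmax confidence score can look
# decisive even when the input is pure noise.

def softmax(logits):
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

labels = ["weapons depot", "warehouse", "grain silo"]

# Pretend these are raw classifier scores for an input the model
# has never seen anything like (here: literally random numbers).
rng = np.random.default_rng(0)
logits = rng.normal(scale=4.0, size=len(labels))

probs = softmax(logits)
top = int(np.argmax(probs))
print(f"{labels[top]}: {probs[top]:.0%} confident")
# Probability mass has to land somewhere, so the model cannot say
# "I don't know" -- noise still produces a confident-looking label.
```

The number measures how the scores compare to each other, not how much the model actually knows. That's the gap between a confidence score and a weapons signature.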
The technology is real. The question is whether we're deploying it responsibly.
I've built AI systems. I've seen how they fail. The failure modes aren't always obvious. An AI doesn't tell you "I'm not sure about this one." It gives you a confident answer that might be completely wrong. In a military context, that's not just a bug - it's a potential war crime.
Congress is demanding oversight, but here's the problem: AI systems evolve faster than legislation. By the time lawmakers understand how current systems work, the military will have deployed three newer versions. How do you oversee something that changes faster than you can write regulations?
The Pentagon's position appears to be that AI is just another tool, like radar or GPS. But that's a category error. Radar tells you where things are. AI tells you what to do about it. That's a fundamentally different level of delegation.
Some argue that AI reduces civilian casualties by being more precise than human targeting. Maybe. But precision without accuracy is just confidently wrong. And when the system makes a mistake - and these systems make mistakes - who's accountable? The commander who approved it? The data scientist who trained the model? Anthropic?
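Precision and accuracy are different failure axes: precision is how tightly repeated estimates cluster, accuracy is whether they cluster around the truth. A minimal sketch, with invented numbers, of a system that nails the first and fails the second:

```python
import numpy as np

# Invented numbers, purely illustrative: ten coordinate estimates
# that cluster tightly (high precision) around the wrong point
# (low accuracy). Units are arbitrary grid kilometers.
true_target = np.array([10.0, 20.0])
systematic_error = np.array([4.0, -3.0])  # model locked onto the wrong building

rng = np.random.default_rng(1)
estimates = true_target + systematic_error + rng.normal(scale=0.05, size=(10, 2))

print("spread of estimates (precision):", estimates.std(axis=0))              # ~0.05 km: very tight
print("offset from truth (accuracy):  ", estimates.mean(axis=0) - true_target)  # ~[4, -3]: 5 km off
```

Every estimate agrees with every other estimate, and all of them are five kilometers from the target. Agreement is not correctness.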

