Anthropic CEO Dario Amodei just pushed back against Pentagon pressure to drop safety constraints on Claude for autonomous weapons systems. The company argues that removing safeguards could endanger US troops and civilians. Meanwhile, the Defense Department is reportedly considering using the Defense Production Act to compel compliance.
This is the AI safety debate moving from academic conferences to the real world. When the Pentagon comes knocking with DPA threats, "responsible AI" principles face their first real stress test.
Anthropic is calling the bluff. But can they afford to?
What the Pentagon Wants
According to BBC reporting, the Pentagon wants Claude integrated into autonomous weapons systems—but without the ethical guardrails that prevent the AI from generating harmful content or making decisions without human oversight.
The military's argument is straightforward: in combat situations, delays cost lives. AI systems need to operate at machine speed, not human approval speed. Safety constraints that work for consumer chatbots become a liability in battlefield scenarios.
Anthropic's counter-argument is equally direct: those safety constraints exist for good reasons. Autonomous weapons without proper safeguards don't just endanger enemy combatants; they also risk friendly fire, civilian casualties, and unpredictable escalation.
It's the classic dual-use technology dilemma, except this time the company is saying "no."
The Defense Production Act Threat
Here's where it gets interesting. The Pentagon is reportedly exploring whether the Defense Production Act could compel Anthropic to modify Claude for military use.
The DPA, enacted in 1950 during the Korean War, gives the federal government broad powers to require private companies to prioritize defense contracts and produce goods deemed essential to national security. It's been used for everything from ventilator production during COVID-19 to semiconductor manufacturing.
