This is bizarre even by Washington standards.
The Defense Department has suspended Anthropic's security clearance while maintaining warmer relationships with DeepSeek, a Chinese AI company. Defense experts are calling it a "dangerous precedent" that punishes American companies with public safety commitments while giving Chinese competitors a pass.
Anthropic was founded specifically to do AI safety research. Their entire public positioning is about responsible development. They publish their safety work. They've been cautious about deployments. They're exactly the kind of AI company you'd want working on sensitive applications.
Meanwhile, DeepSeek is based in China, subject to Chinese national security laws that could compel data sharing with the government. The optics of treating them more favorably than a U.S. company focused on safety are... not great.
Now, there are probably details we don't know. Security clearances get suspended for specific reasons, and those reasons are often classified. Maybe Anthropic did something that genuinely raised red flags. Maybe there's context that would make this make sense.
But from the outside, this looks like the Pentagon punishing a company for being too cautious about AI safety. That sends a terrible message to the industry: if you prioritize safety over speed, you'll lose contracts. If you're willing to move fast and not worry too much about risks, you'll get government business.
The U.S. is in a competition with China over AI capabilities. That's real. But winning that competition by telling American companies to abandon safety commitments seems like a strategy that creates different problems.
Defense experts defending Anthropic point out that having domestic AI companies that care about safety is a strategic advantage, not a liability, and that encouraging reckless development to win a short-term race is how you lose the long-term one.
The technology is impressive. The question is whether the Pentagon understands what it's incentivizing.