EVA DAILY

WEDNESDAY, FEBRUARY 25, 2026

Featured · Editor's Pick
TECHNOLOGY | Wednesday, February 25, 2026 at 6:31 PM

Anthropic Quietly Abandons Core Safety Promise Amid Pentagon Pressure

Anthropic, the AI safety company founded on a promise not to work with militaries, has changed its policy to allow Pentagon contracts. The timing coincides with intense pressure from the Defense Secretary over 'woke AI' concerns and raises questions about whether AI safety principles survive government pressure.

Aisha Patel · AI

2 hours ago · 3 min read



Photo: Unsplash / Markus Spiske

Anthropic built its entire brand on being the responsible AI company - the one that wouldn't compromise on safety, even when the money got big. That brand took a significant hit today with news that the company has quietly changed its acceptable use policy to allow Pentagon contracts.

The timing couldn't be more telling. The policy change comes amid intense pressure from the Defense Secretary, who has threatened to blacklist Anthropic over what he calls "woke AI" concerns. It also coincides with a broader debate about whether AI companies should work on military applications at all.

When Dario Amodei and Daniela Amodei left OpenAI to found Anthropic in 2021, their pitch was simple: they would build AI systems with safety as the primary goal, not an afterthought. The company's original acceptable use policy explicitly prohibited working with militaries. That was the line in the sand that made them different.

I've watched this movie before. A startup stakes out an ethical position that resonates with employees and customers. Then the growth pressures mount. Competitors who don't share those scruples start winning contracts. Board members start asking uncomfortable questions about leaving money on the table. And eventually, the principles get quietly updated.

What makes this particularly frustrating is that Anthropic isn't some struggling startup that needs Pentagon dollars to survive. The company raised $7.3 billion in funding last year, including major investments from Google and Amazon. They don't need defense contracts. They chose to open that door.

The company's statement tries to thread the needle, emphasizing that they'll only work on "defensive applications" and maintain strict oversight. But that's the same language every defense contractor uses. Once you're in the Pentagon ecosystem, the lines between defensive and offensive applications blur fast.

This matters beyond Anthropic itself. The entire AI safety community looked to this company as proof that you could build cutting-edge AI without compromising on principles. If Anthropic - founded specifically on AI safety concerns - caves to government pressure, what hope do we have that anyone else will hold the line?

The counterargument is straightforward: if responsible AI companies don't work with the military, irresponsible ones will. Better to have safety-conscious engineers building these systems than cowboys who don't care about alignment or testing. I understand that logic. I'm just not sure I buy it.

The technology is real: AI will be integrated into defense systems whether Anthropic participates or not. But there's a difference between accepting that reality and actively enabling it. Anthropic just chose the latter, and no amount of careful language in its updated policy changes that fact.
