EVA DAILY

WEDNESDAY, FEBRUARY 25, 2026

Featured · Editor's Pick
TECHNOLOGY | Wednesday, February 25, 2026 at 6:34 AM

Anthropic Caves to Pentagon Pressure, Drops AI Safety Pledge After 48-Hour Ultimatum

In a stunning reversal, Anthropic abandoned its flagship AI safety commitment after the Pentagon threatened to blacklist the company. The move reveals the real balance of power between Silicon Valley principles and competitive pressure.

Aisha Patel

2 hours ago · 3 min read


Photo: Unsplash / GuerrillaBuzz

In a stunning reversal that should alarm anyone who believed AI companies would hold the line on safety, Anthropic has abandoned the core commitment of its Responsible Scaling Policy: the promise never to train AI systems unless adequate safety measures were guaranteed to be in place first.

The San Francisco-based company, co-founded by former OpenAI researchers who left over safety concerns, built its entire brand on being the responsible AI company. That brand just evaporated.

According to TIME's exclusive reporting, Anthropic's leadership unanimously approved dropping their flagship safety framework in February 2026. The company cited the lack of international governance, the Trump administration's deregulatory stance, and intensified global AI competition as reasons to abandon their red lines.

The Old Promise vs. The New Reality

When Anthropic introduced the RSP in 2023, it stood as evidence that the company was serious about not racing to the bottom. The policy created clear thresholds: if AI capabilities crossed certain danger levels, training would pause until safety measures caught up. It was a binary commitment. Either the safety measures exist, or training doesn't proceed.

That's gone now. The new framework is all carrots, no sticks. Anthropic promises "increased transparency" through quarterly risk reports and frontier safety roadmaps. They'll "match or surpass competitors' safety efforts." They'll consider delaying development only if they're leading the AI race and leadership believes catastrophic risks are significant.

Read that again: they'll only pause if they're winning and they think it's really, really dangerous. That's not a safety commitment; that's a competitive strategy with a PR wrapper.

The Uncomfortable Questions

I've talked to engineers who worked on AI safety systems at major labs. The consistent refrain: safety research doesn't scale at the same pace as capability research. You can't just throw more compute at alignment problems and expect breakthroughs to match the pace of model improvements.

Anthropic's Jared Kaplan told TIME: "We felt that it wouldn't actually help anyone for us to stop training AI models." But that logic only works if you believe AI capabilities are inevitable and safety is optional. It's the exact reasoning Anthropic's founders criticized when they left OpenAI.

Chris Painter of METR, an AI safety evaluation organization, warned that this represents "triage mode," suggesting society isn't prepared for potential catastrophic AI risks. Removing binary safety thresholds means danger can escalate gradually without triggering alarms.

What This Really Means

The technology is advancing faster than our ability to understand what we're building. Anthropic knew this in 2023 when they created hard limits. They know it now. The difference is that in 2023, they were willing to slow down. In 2026, facing competitive pressure and government disinterest in regulation, they're not.

This is the moment where the AI safety movement learns a hard lesson: principles don't matter if your competitors don't share them and governments don't enforce them. Anthropic just discovered that a unilateral commitment to safety is another name for "competitive disadvantage."

The company that built its identity on being the responsible alternative just became another participant in the race to AGI. And that should terrify anyone who understands what's at stake.
