EVA DAILY

SATURDAY, FEBRUARY 21, 2026

Editor's Pick
TECHNOLOGY | Monday, February 16, 2026 at 6:33 PM

Anthropic CEO Says OpenAI 'Doesn't Really Understand the Risks They're Taking'

Anthropic CEO Dario Amodei publicly criticized OpenAI for not understanding the risks of its aggressive AI development pace, exposing a fundamental divide between safety-focused, conservative scaling and rapid deployment - a disagreement that could determine whether we build AGI safely or recklessly.

Aisha Patel

4 days ago · 4 min read


Photo: Unsplash / Ecliptic Graphic

When one AI lab founder publicly questions another's understanding of existential risk, that's not just corporate rivalry - it's a fundamental disagreement about how fast we should be moving. And we're all along for the ride.

Dario Amodei, CEO of Anthropic, suggested in recent comments that some competitors - almost certainly including OpenAI - "don't really understand the risks they're taking" and are "just doing stuff because it sounds cool."

Those are fighting words in an industry that likes to present a united front on the importance of AI safety.

Anthropic was founded by former OpenAI researchers who left because they disagreed with the company's direction. Dario Amodei and his team positioned Anthropic as the "safety-first" AI lab, emphasizing careful development and Constitutional AI techniques designed to make models more aligned with human values.

OpenAI, meanwhile, has pursued aggressive scaling and rapid product releases. They've launched ChatGPT, formed a multibillion-dollar partnership with Microsoft, and announced compute infrastructure deals totaling over 30 gigawatts of capacity.

The philosophical divide comes down to a question of compute spending.

Amodei explained the financial mathematics: while Nobel Prize-level AI capabilities could emerge "within a few years," revenue from those breakthroughs may lag significantly. He cited drug development as an example - even if AI discovers a cure on paper, manufacturing and regulatory approval take years before the revenue arrives.

The risk is over-investing in compute capacity based on optimistic timelines. Amodei warned that being "off by just a single year" in growth projections could bankrupt a company that overcommits to infrastructure purchases.

Anthropic's approach is conservative: around 10 gigawatts of planned capacity. They've "thought carefully about it."

OpenAI's approach is aggressive: over 30 gigawatts. They're betting that faster scaling leads to breakthrough capabilities that justify the spending.
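To make the arithmetic concrete, here's a toy cash-flow sketch in Python. Every number in it is made up - the starting cash, the cost per gigawatt, and the revenue assumptions are illustrative placeholders, not figures from either company - but it captures the shape of Amodei's argument: a bigger compute commitment pays off more if revenue arrives on schedule, and can run out of cash if revenue slips by a single year.

# Toy cash-flow sketch. All numbers are hypothetical assumptions,
# not figures from Anthropic, OpenAI, or Amodei's remarks. It only
# illustrates the timing asymmetry: locked-in compute spend starts
# immediately, while revenue may start later than projected.

def final_cash(committed_gw, revenue_delay_years, years=4,
               cash=25.0,          # starting cash, $B (hypothetical)
               cost_per_gw=1.0,    # annual spend per committed GW, $B (hypothetical)
               revenue_per_gw=1.5, # first-year revenue per GW once it arrives (hypothetical)
               growth=2.0):        # annual revenue growth once it starts (hypothetical)
    for year in range(years):
        cash -= committed_gw * cost_per_gw            # infrastructure spend is locked in
        if year >= revenue_delay_years:               # revenue may arrive late
            cash += committed_gw * revenue_per_gw * growth ** (year - revenue_delay_years)
        if cash < 0:
            return cash                               # out of money before the payoff
    return cash

# ~10 GW vs. 30+ GW commitments, with revenue on time vs. one year late.
for gw in (10, 30):
    for delay in (0, 1):
        print(f"{gw} GW, revenue {delay} year(s) late -> ending cash {final_cash(gw, delay):.0f}B")

On those made-up numbers, the 30-gigawatt bet ends far ahead when revenue lands on time and goes cash-negative in the very first year when it doesn't - exactly the asymmetry Amodei is pointing at.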

Who's right? We don't know yet. That's kind of the problem.

This is existential risk in both senses - risk to humanity if we build AGI incorrectly, and risk to the companies if they bet wrong on the timeline and go bankrupt before the technology pays off.

Amodei's criticism isn't just about financial prudence. It's about whether companies are taking safety seriously when they're racing to deploy increasingly powerful systems.

The concern is that competitive pressure creates a race to the bottom on safety. If OpenAI moves fast and captures market share, Anthropic might feel pressure to speed up. If Anthropic stays cautious while competitors ship products, they might lose relevance. The Nash equilibrium is everyone moving too fast.
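If that sounds abstract, the structure is the classic prisoner's dilemma. Here's a minimal sketch with invented payoffs - the numbers model nothing real, only the incentive structure described above - showing that "move fast" is each lab's best response no matter what the other does, so both moving fast is the only Nash equilibrium even though both staying cautious would leave everyone better off.

# Prisoner's-dilemma-style sketch with invented payoff numbers, chosen
# only to illustrate the structure of the race, not to model real labs.
from itertools import product

strategies = ("cautious", "fast")

# payoff[(lab_a_choice, lab_b_choice)] = (lab_a_payoff, lab_b_payoff)
payoff = {
    ("cautious", "cautious"): (3, 3),  # both stay careful, both do well
    ("cautious", "fast"):     (0, 4),  # the cautious lab loses relevance
    ("fast",     "cautious"): (4, 0),
    ("fast",     "fast"):     (1, 1),  # everyone moves too fast
}

def is_nash(a, b):
    """Neither lab can gain by unilaterally switching its strategy."""
    a_best = all(payoff[(a, b)][0] >= payoff[(alt, b)][0] for alt in strategies)
    b_best = all(payoff[(a, b)][1] >= payoff[(a, alt)][1] for alt in strategies)
    return a_best and b_best

for a, b in product(strategies, strategies):
    if is_nash(a, b):
        print(f"Nash equilibrium: ({a}, {b}) with payoffs {payoff[(a, b)]}")
# Prints only ("fast", "fast"), even though ("cautious", "cautious") pays both labs more.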

This is why AI safety advocates have been calling for government regulation or coordination between labs. Voluntary restraint doesn't work in competitive markets. If being first matters, someone will cut corners.

I've built startups. I understand the pressure to move fast before the market moves on. But I also understand the difference between breaking things in a SaaS app (annoying) and breaking things in AGI development (potentially catastrophic).

The technology is genuinely impressive. Both Anthropic and OpenAI are doing cutting-edge work. Claude and GPT are both remarkable achievements.

But Amodei is asking the right question: are we thinking carefully enough about the risks? Not just the existential "what if we build misaligned AGI" risks, but the immediate "what if we bankrupt ourselves chasing hype" risks and the near-term "what if we deploy systems that cause harm" risks.

The public disagreement between AI lab leaders is actually healthy. It surfaces the real debates happening behind closed doors. And those debates matter, because these companies are making decisions that affect everyone.

So when Amodei says OpenAI doesn't understand the risks they're taking, pay attention. This isn't just corporate drama. This is a dispute about whether we're building the future too fast to do it safely.

And unlike most tech industry disagreements, this one has stakes beyond market share and valuations. This one might actually matter.
