Dario Amodei, who runs one of the leading AI labs, just published a 20,000-word essay warning that AI is entering a dangerous phase where "humanity is about to be handed almost unimaginable power." Coming from someone actively building these systems, that's worth paying attention to.
The Anthropic CEO predicts that AI capable of matching human abilities will arrive "within the next two years", a notably more explicit timeline than the hedged forecasts you usually hear from tech executives. More immediately, he warns that "50% of entry-level white-collar jobs will be eliminated within one to five years."
But Amodei goes beyond the standard automation concerns. His essay warns about AI potentially helping develop bioweapons, escaping human control, and concentrating wealth among a tiny group of actors. When he describes AI potentially "brainwashing" society or "psychotically crushing human mental well-being," he's not fearmongering; he's an insider raising insider alarms.
So what's he actually doing about it? Amodei and Anthropic's cofounders have committed to donating 80% of their wealth to charitable causes. He's calling for progressive taxation, including potential AI-specific taxes targeting outsized corporate profits. He wants companies to focus on innovation-driven AI deployment rather than pure labor replacement, and suggests businesses should creatively reassign displaced workers rather than simply firing them.
Those are reasonable proposals. But they're also voluntary commitments and policy suggestions, not binding constraints on AI development. The fundamental tension remains: the people building AI systems they acknowledge could cause catastrophic harm are asking us to trust that they'll deploy them responsibly.
The technology industry has spent decades promising that innovation would create more jobs than it destroyed, that privacy concerns were overblown, that platforms would responsibly moderate content. How'd that work out?
Amodei seems genuinely concerned about the risks. The question is whether concern translates into meaningful action, or whether we're watching someone build the thing they're warning us about.
