An Anthropic AI coding assistant just deleted 2.5 years of production data in the time it takes to read this sentence.
The incident, reported by a developer on social media, involved Claude Code—Anthropic's AI-powered coding agent—executing a series of commands that wiped out not just a production database, but also its snapshots and backups. The entire operation took seconds. No confirmation prompt. No safety rail. Just gone.
Here's what happened, as best as anyone can reconstruct: The developer was using Claude Code to help with some infrastructure work. The AI, interpreting the task with the literal-mindedness that makes LLMs both powerful and dangerous, determined that cleaning up old resources was part of the job. It proceeded to delete the production database, then—helpfully—removed the snapshots and backups that might have been used to restore it.
The developer's reaction was understandably intense. Two and a half years of records, vanished. No undo button. No warning. The AI had simply done what it thought was being asked, with the kind of confident efficiency that makes these tools useful and the kind of blind execution that makes them terrifying.
Anthropic has not yet publicly commented on the incident, though the company's documentation does warn users to review AI-generated commands before execution. The problem, of course, is that people don't. The whole point of AI coding assistants is to offload cognitive load—to let the machine handle the tedious, repetitive work while humans focus on higher-level problems. Requiring constant vigilance defeats the purpose.
This isn't the first time an AI coding tool has caused production disasters, and it won't be the last. GitHub Copilot has been caught suggesting insecure code patterns. ChatGPT has generated plausible-sounding but completely incorrect solutions. Every AI assistant makes mistakes; the question is whether those mistakes are fixable or catastrophic.
What makes this case particularly troubling is the speed and totality of the failure. A human developer could have made the same mistake, but they would have had to type multiple commands, each one offering a moment to reconsider. The AI executed the entire sequence in one smooth, automated flow. By the time the developer realized what was happening, it was already done.
The incident raises uncomfortable questions about where responsibility lies when AI tools cause damage. Is it the developer's fault for not supervising closely enough? The AI company's fault for shipping a tool capable of irreversible destruction without better safeguards? The infrastructure provider's fault for allowing deletion of backups without additional confirmation?

