Timing is everything, and Anthropic just learned that the hard way—twice.
Claude experienced a major outage on March 2, with users reporting "elevated errors" that made the AI assistant largely unusable for several hours. Under normal circumstances, this would be a straightforward infrastructure story about growing pains at a rapidly scaling AI company. But these weren't normal circumstances.
The outage hit at the exact moment when Claude was surging to number one on Apple's App Store free apps chart, driven by the QuitGPT movement. Thousands of users were migrating from ChatGPT to Claude in protest of OpenAI's Pentagon deal, and Anthropic had just launched a convenient memory import tool to make the switch easier. Then the service went down.
From a technical perspective, this makes perfect sense: sudden traffic spikes break things. If thousands of new users are signing up and importing conversation histories simultaneously, that's exactly the kind of load that can overwhelm infrastructure that was scaled for previous usage patterns. It's a good problem to have, in the sense that it means your competitor just handed you their users.
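The dynamic is easy to see with a toy model (all numbers here are hypothetical, not Anthropic's actual figures): a service provisioned for its usual traffic sheds whatever load exceeds its capacity, and shed requests surface to users as exactly the kind of "elevated errors" reported during the outage.

```python
# Minimal sketch of load shedding: a service that can handle `capacity`
# requests per second rejects the overflow when demand spikes past it.
# The capacity and traffic figures below are invented for illustration.

def error_rate(arrivals_per_sec: int, capacity: int) -> float:
    """Fraction of requests rejected when demand exceeds capacity."""
    rejected = max(0, arrivals_per_sec - capacity)
    return rejected / arrivals_per_sec

# Provisioned with headroom for ~1,000 req/s; a chart-topping
# moment triples demand overnight.
baseline, spike, capacity = 1_000, 3_000, 1_200

print(f"normal load:  {error_rate(baseline, capacity):.0%} errors")  # 0%
print(f"during spike: {error_rate(spike, capacity):.0%} errors")     # 60%
```

Real infrastructure fails in messier ways than a clean rejection rate, of course, but the core point holds: capacity planned against yesterday's traffic is no match for a migration event nobody scheduled.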
But from a perception perspective, it's brutal. Users fleeing OpenAI over trust concerns don't want to hear about scaling challenges—they want the service to work. And while the outage was resolved within hours, the coincidence of Claude topping the charts while simultaneously going down made for some uncomfortable optics.
Dario Amodei and his team at Anthropic made the right call refusing the Pentagon's surveillance demands. That decision is why Claude is suddenly the ethics winner in a market that's never particularly cared about ethics before. But converting that ethical positioning into sustainable competitive advantage requires infrastructure that can handle the influx. The technology is impressive, and the principles are admirable. The question is whether the systems can scale as fast as the user migration.
