Anthropic just announced a partnership with Google and Broadcom for multiple gigawatts of next-generation TPU compute. This is a massive infrastructure commitment - not just chips, but actual power at utility scale.
The AI compute wars just went from buying GPUs to securing power plants.
"Multiple gigawatts" means Anthropic is planning for infrastructure that rivals small cities. For context, a gigawatt can power roughly 750,000 homes. Anthropic is talking about multiple gigawatts dedicated to training AI models. That's not a datacenter. That's an industrial operation.
This is the scale at which Claude- and GPT-class models will be trained going forward, and it's fundamentally changing the economics of who can compete.
The partnership with Google makes sense - Google has been Anthropic's cloud infrastructure partner and a major investor. TPUs (Tensor Processing Units) are Google's custom AI chips, designed specifically for training large models. Broadcom has long co-designed TPU silicon with Google, and its involvement here suggests custom chip work beyond standard parts.
What's striking is the scale. When OpenAI trained GPT-3, they talked about tens of thousands of GPUs. When Meta announced their AI infrastructure plans, they talked about hundreds of thousands of NVIDIA chips. Now we're talking about power measured in gigawatts.
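To see why the framing shifted from chip counts to power, it helps to convert one into the other. Both constants below are illustrative assumptions, not figures from any of these companies: roughly 700 W per modern accelerator, and a datacenter overhead factor (PUE) of about 1.3 for cooling and power delivery.

```python
# Rough conversion from accelerator counts to facility power draw.
# Assumptions (illustrative only): ~700 W per chip, PUE of ~1.3.
CHIP_WATTS = 700
PUE = 1.3

def facility_megawatts(num_chips: int) -> float:
    """Approximate total facility draw in MW for a given accelerator count."""
    return num_chips * CHIP_WATTS * PUE / 1e6

print(facility_megawatts(10_000))   # GPT-3-era tens of thousands of chips: ~9 MW
print(facility_megawatts(500_000))  # hundreds of thousands of chips: ~455 MW
```

At these assumptions, tens of thousands of chips is a single-digit-megawatt problem, hundreds of thousands is hundreds of megawatts, and a gigawatt-scale site implies on the order of a million accelerators - which is why the unit of account has become the power plant, not the chip.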
The bottleneck in AI isn't algorithms anymore. It's energy. Building bigger models requires more compute, which requires more power, which requires partnerships with utility companies and governments. This is infrastructure at the scale of aluminum smelting or steel production.
For competitors, this is a warning shot. OpenAI has similar deals with Microsoft and Oracle. Meta is building its own infrastructure. Google obviously has its own TPU capacity. But smaller AI companies? They're getting priced out of the frontier model game.
The concentration of power - both literal electrical power and market power - is staggering. Only a handful of companies can afford to play at this level. That has implications for competition, innovation, and who gets to shape the future of AI.
There are also practical questions. Where does this power come from? Is it renewable? Are we dedicating massive amounts of electricity to training AI models while arguing about residential solar installations? These aren't just technical questions - they're political and environmental ones.
Anthropic has positioned itself as the responsible AI company, the one thinking carefully about safety and alignment. That positioning is easier to maintain when you have the resources to compete with OpenAI and Google. This deal ensures they can.
The technology is impressive. The question is whether dedicating gigawatts of power to AI training is the best use of limited resources. And whether anyone other than a handful of companies will be able to afford it.
The compute wars are no longer about who has the best chips. They're about who can secure the power to run them.
