Big Tech is in the middle of an infrastructure spending spree that would make 1950s America blush. We're talking $800+ billion in annual AI capital expenditures - a sum approaching the entire GDP of Saudi Arabia - all betting on one thesis: that large language models will print money forever.
But what if they're wrong?
A post on r/investing this week articulated what a lot of smart money is quietly thinking: the massive LLM CapEx burn is starting to feel like a trap. And the more you dig into it, the harder it is to dismiss.
Here's the core problem: Large language models are probabilistic. They predict the next word based on patterns in training data. They don't "understand" anything - they're very sophisticated autocomplete. For consumer applications like chatbots and content generation, that's fine. Hallucinations are amusing, not catastrophic.
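To make the "sophisticated autocomplete" point concrete, here is a toy sketch of next-token sampling. The tokens and probabilities are made up for illustration - real models score tens of thousands of candidate tokens - but the mechanism is the same: the model samples from a probability distribution, so a wrong continuation is always possible, just unlikely.

```python
import random

# Hypothetical next-token distribution for a prompt like
# "The capital of France is ...". The numbers are illustrative only.
next_token_probs = {
    "Paris": 0.90,      # likely correct
    "Lyon": 0.07,       # plausible but wrong
    "Atlantis": 0.03,   # outright hallucination
}

def sample_next_token(probs, rng):
    """Draw one token from the distribution - this is the core
    probabilistic step in LLM text generation."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]
# Most draws are "Paris", but a nonzero fraction are wrong. No amount of
# prompt engineering drives that tail probability to exactly zero.
```

The point of the sketch: even when the model is 90% sure, one run in ten goes somewhere else. That is tolerable for a chatbot and intolerable for a settlement system.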
But mission-critical enterprise workloads need guarantees, not high probability. You can't run a power grid, manage logistics, or settle financial transactions on a system that might hallucinate because of an oddly worded prompt. One wrong prediction could black out a city or send millions of dollars to the wrong account.
This is why serious capital is quietly pivoting. At the recent Milken Institute Conference, ASML and Google executives participated in a panel focused on deterministic AI - systems built on mathematical constraints and formal logic, which by construction cannot hallucinate.
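The contrast with probabilistic generation is worth spelling out. Below is a hypothetical sketch of the deterministic style of check - this is my illustration, not anything the panel described: a ledger validator using exact decimal arithmetic that either proves the books balance or rejects them. It never emits a "plausible" answer.

```python
from decimal import Decimal

def ledger_balances(entries):
    """Return True iff the signed entries sum to exactly zero.

    Uses Decimal so the check is exact - no floating-point drift,
    no sampling, no confidence score. The same input always yields
    the same verdict.
    """
    total = sum((Decimal(amount) for amount in entries), Decimal("0"))
    return total == Decimal("0")

# A balanced ledger passes; one that is off by a single cent is rejected.
assert ledger_balances(["100.10", "-60.10", "-40.00"])
assert not ledger_balances(["100.10", "-100.09"])
```

The design choice is the whole argument: a constraint checker's failure mode is refusal, while a language model's failure mode is a confident wrong answer.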
Think about what that signals. ASML makes the lithography machines that print every advanced chip on Earth. They're not interested in AI for the hype - they're interested because they need precision. If the industry's most important equipment maker is looking beyond LLMs, that's a canary in the coal mine.
