OpenAI's internal documents project the company will lose $14 billion in 2026, according to leaked financial forecasts reviewed by multiple outlets. That's not a typo: fourteen billion dollars, a bigger annual loss than Uber or Tesla ever posted in their worst years.
But here's the pitch: By 2029, OpenAI expects to be generating $100 billion in annual revenue with Nvidia-scale profit margins. It's either the most audacious growth story in modern business or the most spectacular case of wishful thinking since WeWork convinced investors it was a tech company.
Let's run the numbers. OpenAI generated roughly $3.7 billion in revenue in 2025, primarily from ChatGPT subscriptions and API access. To hit $100 billion by 2029, the company needs to grow revenue 27x in four years. That's not growth—that's a miracle.
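To see just how steep that curve is, here's a quick back-of-envelope sketch using the article's figures (which are reported estimates, not audited numbers):

```python
# Back-of-envelope: what 27x growth in four years implies as an annual rate.
# Revenue figures are the article's reported estimates, not audited numbers.
base_revenue = 3.7e9     # reported ~2025 revenue, USD
target_revenue = 100e9   # projected 2029 revenue, USD
years = 4

multiple = target_revenue / base_revenue   # ~27x
cagr = multiple ** (1 / years) - 1         # compound annual growth rate

print(f"Required multiple: {multiple:.1f}x")       # 27.0x
print(f"Implied annual growth rate: {cagr:.0%}")   # ~128%, sustained four years running
```

Roughly 128 percent growth, every year, four years in a row, starting from a multibillion-dollar base.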
For context, Microsoft took 20 years to grow from $3.7 billion to $100 billion in revenue. Google took 14 years. OpenAI is betting it can do it in four, in a market that didn't exist three years ago, competing against Google, Microsoft, Anthropic, and every other tech giant with a GPU budget.
The $14 billion loss isn't mysterious—it's compute costs. Training and running large language models requires staggering amounts of processing power, which means renting (or buying) thousands of Nvidia H100 chips at roughly $30,000 each. OpenAI's compute bill alone likely exceeds $5 billion annually. Add employees, research, infrastructure, and everything else, and you get a cash furnace that makes even venture capitalists nervous.
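To see how quickly those chips add up to billions, here's a purely illustrative sketch; every input below is a hypothetical placeholder chosen for round numbers, not a figure from OpenAI's books:

```python
# Illustrative arithmetic only: how GPU spending reaches billions of dollars.
# Fleet sizes and rates below are hypothetical, not OpenAI's actual figures.
h100_unit_price = 30_000           # approximate list price per H100, USD
gpus_owned = 100_000               # hypothetical owned fleet
capex = gpus_owned * h100_unit_price            # $3.0B up front

rental_rate = 2.50                 # hypothetical cloud price per GPU-hour, USD
rented_gpu_hours = 1_000_000_000   # hypothetical annual rented GPU-hours
opex = rental_rate * rented_gpu_hours           # $2.5B per year

print(f"Hypothetical hardware spend: ${capex / 1e9:.1f}B")
print(f"Hypothetical annual rental spend: ${opex / 1e9:.1f}B")
```

Even with conservative placeholder numbers, the bill lands in the billions before you've paid a single researcher.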
So where's the money coming from? Microsoft, primarily, which has invested over $13 billion in OpenAI and provides compute credits worth billions more. Other investors have poured in another $10 billion across various funding rounds. But even that war chest starts to look small when you're losing $14 billion a year.
OpenAI's bull case rests on two assumptions: First, that AI capabilities will continue improving at current rates. Second, that businesses will pay premium prices for access to those capabilities. If both hold true, OpenAI becomes the cloud computing platform of the AI era—think AWS, but selling intelligence instead of compute and storage.
The bear case? AI improvements plateau. Open-source models catch up. Enterprise customers balk at pricing. Regulatory scrutiny increases. Compute costs stay high. Any one of those could derail the path to profitability. All five happening simultaneously—not implausible—would be catastrophic.
Here's what nobody wants to say out loud: OpenAI is essentially betting that AGI (artificial general intelligence) arrives before the money runs out. If they achieve human-level AI, the business model becomes irrelevant because they'll own the most valuable technology in human history. If they don't, they're a very expensive chatbot company with no path to profitability.
The 2029 timeline is telling. It's far enough away to sound plausible but close enough to keep investors interested. It also sits conveniently near the edge of the typical VC patience horizon, meaning OpenAI can burn cash for another three years before anyone demands proof that the business model works.
CEO Sam Altman has repeatedly said OpenAI will need "more capital than anyone has ever raised" to achieve its goals. That's increasingly looking like an understatement. At current burn rates, the company needs roughly $40-50 billion more just to make it to 2029. Microsoft can provide some. Other investors can provide more. But eventually, the math stops working.
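The arithmetic behind that range is simple enough to check; a minimal sketch, assuming the projected $14 billion loss just repeats year over year:

```python
# Sanity check on the "$40-50 billion more" figure.
# Assumes the projected $14B loss simply repeats, a deliberate simplification.
annual_loss = 14e9

burn_through_2028 = annual_loss * 3    # 2026-2028
burn_through_2029 = annual_loss * 4    # 2026-2029

print(f"Three flat years of losses: ${burn_through_2028 / 1e9:.0f}B")   # $42B
print(f"Four flat years of losses:  ${burn_through_2029 / 1e9:.0f}B")   # $56B
# The $40-50B range sits between these, implying losses hold roughly flat
# or narrow only modestly before 2029.
```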
Unless, of course, the AI revolution happens exactly as predicted, on schedule, with businesses lining up to pay whatever OpenAI charges. In which case, $14 billion in annual losses will look like the bargain of the century.
It's the biggest cash furnace in tech history. Whether it's building the future or burning investor money is a question we won't answer until 2029.
The numbers don't lie. But OpenAI's projections might.