In a Guardian interview, Sam Altman admitted that OpenAI has no way to control how the Pentagon actually uses its technology once deployed. This comes weeks after the company signed a major Defense Department contract, reversing years of stated opposition to military applications.
Let me translate what Altman is saying: We'll take the money and provide the AI, but we have no idea if it's being used to analyze logistics data or target drone strikes, and we can't really do anything about it either way.
This is OpenAI's "don't be evil" moment: the point where high-minded principles collide with the reality of who's writing the checks.
For years, OpenAI positioned itself as the responsible AI company. They published safety research. They talked about beneficial AI and making sure advanced systems align with human values. They explicitly said they wouldn't work with militaries. That was the brand: we're different from those other companies that will sell to anyone.
Then the Pentagon started writing checks to competitors, particularly Palantir and Scale AI. The Defense Department made clear they were going to use AI whether OpenAI participated or not. And suddenly, the calculus changed.
In late 2025, OpenAI quietly updated its usage policy to allow military applications. They framed it as supporting defense and cybersecurity rather than weapons. Then came the contract announcement. Then Altman started doing the interview circuit explaining why this was actually consistent with their values.
But here's the thing: you can't simultaneously claim to be the responsible AI company and admit you have zero visibility into how the most powerful military on Earth deploys your models. Those positions are contradictory.
The Pentagon isn't buying AI models to run hypothetical scenarios in a lab. They're integrating them into operational systems. Intelligence analysis. Logistics optimization. Command and control. And yes, almost certainly targeting and autonomous weapons systems, no matter what the press releases say.
Altman's defense is that OpenAI can set usage policies and terminate access if they discover violations. But that only works if they actually know what the military is doing with their technology. And as Altman just admitted, they don't.
Once you hand over API access or deploy models into military infrastructure, you've lost control. The Pentagon operates in classified environments. OpenAI engineers don't have security clearances to audit how their models are being used in sensitive applications. The feedback loop is broken by design.
This is fundamentally different from, say, refusing to let people use ChatGPT to write malware. There, OpenAI controls the interface and can monitor for abuse. With the Pentagon contract, they're providing the underlying models and hoping the Defense Department follows the rules.
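To make that contrast concrete, here is a minimal sketch of what interface-level enforcement looks like when you own the serving layer. It uses the OpenAI Python SDK and its moderation endpoint; the `guarded_completion` wrapper, the policy response, and the model names are illustrative assumptions, not OpenAI's actual enforcement pipeline.

```python
# Hypothetical sketch of interface-level enforcement: the provider sits between
# the user and the model, so every request passes through a checkpoint it controls.
# Illustrative only; this is not OpenAI's actual moderation pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def guarded_completion(prompt: str) -> str:
    """Screen a request at the interface before it ever reaches the model."""
    # Step 1: the provider sees every prompt, so it can run a policy check.
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    if moderation.results[0].flagged:
        # Step 2: the provider can refuse, log, or revoke access entirely.
        return "Request blocked by usage policy."

    # Step 3: only policy-compliant requests reach the model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(guarded_completion("Summarize this week's shipping schedule."))
```

That checkpoint is the whole ballgame. When the provider sees the traffic, policy is enforceable. Deploy the models behind someone else's firewall, in someone else's classified environment, and the checkpoint disappears.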
The broader question is whether this even matters. If AI is genuinely transformative for military capabilities — and it almost certainly is — then every major power will deploy it regardless of any one company's ethics. OpenAI opting out just means China and Russia don't have to worry about facing the best AI models in any future conflict.
That's the realpolitik argument Altman is making: better that U.S.-aligned democracies have the best AI than authoritarian regimes. It's not a crazy argument. But it's also not the argument OpenAI was making three years ago when they were the safety-first company.
What's particularly frustrating is the dishonesty in how this is being presented. Altman keeps using safety language while signing contracts that make safety guarantees impossible to enforce. Just own what this is: a strategic calculation that advanced AI is too important to keep out of military hands.
What really matters here is the precedent this sets for AI governance. If OpenAI — the company that talks most about AI safety and alignment — can't figure out how to work with the Pentagon while maintaining meaningful control over use cases, what does that mean for everyone else? How do you build AI systems powerful enough to be useful but constrained enough to be safe when the end users operate in classified environments?
The technology is dual-use by nature. The question is whether we can be honest about the tradeoffs instead of pretending we can have both cutting-edge military AI and meaningful safety constraints.
