Sam Altman reportedly told OpenAI staff that the company is pursuing a classified contract with NATO, marking the latest step in the AI lab's pivot toward military and defense applications.
This comes just months after OpenAI signed its first Pentagon contract, a move that required the company to quietly revise its usage policies. Those policies once explicitly prohibited military applications. That prohibition is now effectively dead.
The technology is impressive. The question is whether militarizing AI makes the world safer or more dangerous.
I'm conflicted about this in ways I wasn't expecting to be. On one hand, OpenAI is a private company that can pursue whatever contracts it wants. If NATO is going to use AI for intelligence analysis or logistics optimization, better it comes from a U.S.-based lab than a competitor in Beijing or Moscow.
On the other hand, this represents a fundamental betrayal of OpenAI's founding mission. The organization was created specifically to ensure that artificial general intelligence benefits all of humanity, not just the countries with the biggest defense budgets. Taking military contracts—especially classified ones that won't be subject to public scrutiny—is hard to square with that vision.
The reporting from Gizmodo suggests the NATO contract would involve intelligence analysis, pattern recognition, and operational planning. Not autonomous weapons, at least not directly. But anyone who's been around the defense industry knows that lines blur quickly once you're in the ecosystem.
What worries me more than the specific use cases is the incentive structure this creates. Defense contracts are lucrative, long-term, and relatively insulated from market competition. Once OpenAI becomes dependent on that revenue stream, it becomes much harder to say no to requests that push ethical boundaries.
There's also the geopolitical dimension. OpenAI has positioned itself as a global platform, used by developers and businesses in dozens of countries. Taking sides in great power competition by aligning with NATO makes that position untenable. Beijing will accelerate its own AI military programs. Moscow will double down on asymmetric AI capabilities. The arms race that everyone feared becomes self-fulfilling.