OpenAI and Anthropic are pursuing military contracts for surveillance and autonomous systems, arguing that responsible AI companies should engage with defense applications rather than cede the field to less ethical actors. Their pitch to the public boils down to: trust us to build AI for the military—we're the good guys.
That's not a governance model. It's a prayer.
According to reporting from The Intercept, both companies are actively working with the Pentagon on applications ranging from intelligence analysis to autonomous targeting systems. The justification is that if they don't do it, someone else will—and that someone might be China, a less scrupulous defense contractor, or a startup without ethical guardrails.
I've been in enough boardrooms to recognize this argument. "If we don't do it, someone worse will" is how every compromised principle gets justified. It's the same logic that led social media companies to prioritize engagement over safety, cryptocurrency projects to ignore money laundering, and surveillance companies to sell tools to authoritarian regimes.
The problem isn't that the argument is wrong—it's often true that someone else will do the thing you're refusing to do. The problem is that it's always true, which means it justifies anything.
OpenAI's original usage policies explicitly prohibited military and warfare applications, and its founding charter emphasized safety, transparency, and building AI that benefits humanity broadly. Those principles were supposed to be the reason OpenAI was different from a conventional tech company optimizing for profit.
Now they're building surveillance AI for the Pentagon and asking for trust. The principles didn't survive contact with economic reality.
To be fair, there's a spectrum of military AI applications. Not all of them involve autonomous weapons. Intelligence analysis, logistics optimization, cybersecurity defense—these are areas where AI can save lives and improve decision-making without directly killing people.
But the line between "intelligence analysis" and "targeting" is blurry and getting blurrier. An AI that identifies enemy combatants in satellite imagery is doing intelligence work. An AI that feeds those identifications into a targeting system for drone strikes is doing something more ethically fraught. And the same underlying technology powers both.
Once you're building AI for military applications, you don't get to control how it's used. You can write guidelines, impose contractual restrictions, and require human oversight. But you're not in the room when operational decisions get made. You don't see the classified intelligence driving targeting choices. You don't audit whether the human-in-the-loop is actually reviewing AI recommendations or just rubber-stamping them under time pressure.
The history of military technology is that constraints erode. Drone strikes were supposed to require high-level approval and verified intelligence. Over time, the approval process got streamlined, the intelligence standards got looser, and the definition of "combatant" got broader. There's no reason to think AI-assisted targeting will be different.
Anthropic's involvement is particularly notable because the company has positioned itself as the more safety-focused alternative to OpenAI. Their research on constitutional AI and mechanistic interpretability was supposed to represent a more principled approach to alignment and oversight.
Now they're also pursuing military contracts, with the same justification that engagement is better than abstention.
Maybe that's true. Maybe having responsible AI companies work with the military will produce better safeguards than letting defense contractors build everything in-house. But "better than the worst-case scenario" is a low bar.
The real governance question is: what prevents this from going badly?
OpenAI and Anthropic will say they have ethics boards, safety teams, and internal review processes. But those are all internal to the company. There's no external oversight. No independent auditing. No mechanism for the public to verify that the AI systems being deployed meet the safety standards the companies claim to uphold.
"Trust us" is not a safeguard. It's an abdication of accountability.
The Department of Defense has its own oversight mechanisms—congressional approval for certain programs, inspector general reviews, rules of engagement. But those systems were designed for traditional weapons and military operations. They're not equipped to evaluate whether an AI system is biased, whether its training data is poisoned, or whether it will generalize safely to edge cases it wasn't trained on.
Military AI raises questions that existing oversight frameworks can't answer: How do you verify that an AI system won't exhibit unexpected behavior under adversarial conditions? How do you ensure that algorithmic bias doesn't lead to disproportionate targeting of certain populations? How do you maintain meaningful human control when AI recommendations are faster and more comprehensive than human cognition?
These are hard technical and ethical questions. The answer from OpenAI and Anthropic is apparently: trust us to figure it out.
The dual-use problem is unavoidable. Almost any AI capability that's useful for civilian applications is also useful for military ones. Computer vision, natural language processing, predictive analytics—these technologies have both peaceful and harmful applications.
But there's a difference between accepting that dual-use is inevitable and actively seeking military contracts. One is acknowledging reality. The other is making a strategic choice about where to focus development resources and who to partner with.
OpenAI and Anthropic have made that choice. The justification is that engagement allows them to shape how military AI is developed and deployed. That's plausible. It's also convenient, given that defense contracts are extremely lucrative and tend to come with long-term stable funding.
The economic incentives here are obvious. The Department of Defense has massive budgets and a pressing need for AI capabilities. Tech companies that can credibly claim both cutting-edge AI and ethical principles are attractive partners. Everyone wins—except maybe the people on the receiving end of AI-assisted surveillance and targeting.
The comparison that comes to mind is tech companies working with authoritarian governments. Google briefly pursued Project Dragonfly, a censored search engine for China, with the justification that some access to information was better than none. Employee backlash and public pressure killed the project.
Military AI deserves the same level of scrutiny. The stakes—autonomous weapons, mass surveillance, algorithmic warfare—are at least as high as censorship. But the scrutiny has been muted, partly because "national security" shuts down debate in ways that "entering the Chinese market" doesn't.
What's missing is any meaningful public accountability. These contracts are often classified. The capabilities being developed aren't disclosed. The safeguards aren't independently verified. The public is asked to trust that companies with profit motives and government agencies with national security imperatives will do the right thing.
History suggests that's not a safe bet.
The technology for autonomous targeting exists. That's not in question. AI can identify targets faster and more accurately than humans in many contexts. The question is whether deploying that technology is wise, whether the safeguards are adequate, and whether the companies building it should be trusted to self-regulate.
OpenAI and Anthropic are betting that the answer is yes. They're asking the public to trust that their internal ethics processes, combined with DoD oversight, will be sufficient to prevent catastrophic misuse.
Maybe they're right. But trust is not a safeguard. Transparency, independent oversight, and enforceable accountability mechanisms are safeguards. And none of those exist here.
The technology is real. The capabilities are impressive. And the governance model is "trust us."
That's not good enough.





