Hundreds of Google employees have signed an open letter demanding CEO Sundar Pichai reject talks with the Pentagon over classified military AI applications. The protest echoes the 2018 Project Maven uprising that forced Google to establish AI ethics principles. But those principles may not apply to classified work.
The timing is significant. Six years ago, Google employees successfully pressured the company to drop Project Maven, a Pentagon contract to use AI for analyzing drone footage. The backlash was so severe that Google established a set of AI Principles promising not to develop AI weapons or technologies "whose purpose contravenes widely accepted principles of international law and human rights."
Now, according to reports, Google is in discussions with the Pentagon about classified AI applications. And here's where it gets interesting: classified work, by definition, isn't public. That means Google could do exactly what its employees protested against in 2018, so long as the work is stamped "classified."
"Google's 2018 promise not to build AI weapons is being tested," one employee told reporters. "The question is whether 'classified' gives the company cover to do what its own employees—and ethics guidelines—rejected six years ago."
I covered the original Project Maven protests as a reporter, having just sold my startup to Stripe after four years of building it. So I understand both sides of this. Employees at tech companies genuinely care about the impact of their work. And executives are trying to run businesses in an environment where the Pentagon is offering lucrative contracts and competitors aren't squeamish.
But there's a fundamental tension here that "ethics principles" can't resolve. Google's AI Principles say the company won't develop AI for weapons. But they also say Google will work with governments and the military in areas like cybersecurity, training, and recruiting. Where exactly is the line between "cybersecurity AI" and "military AI"? And who decides?
The classified nature of the current discussions makes oversight nearly impossible. In 2018, employees could learn what the Project Maven contract entailed, understand what the AI would do, and make informed objections. With classified work, there's no transparency. Employees are being asked to trust that leadership will honor the spirit of principles whose application they can't verify.
This puts Sundar Pichai in an impossible position. Reject the Pentagon talks and Google falls further behind Microsoft, Amazon, and Palantir in the lucrative government AI market. Accept them and risk another employee revolt, or, worse, a quiet exodus of top AI talent who didn't sign up to build military technology.
What's changed since 2018 is the competitive landscape. Back then, Google could afford to be principled—it dominated search, advertising, and cloud AI. Today, OpenAI and Anthropic are shipping models that rival Google's capabilities, and Microsoft has integrated AI into every product. Google can't afford to be precious about which customers it serves.
But the employees have a point too. There's a real difference between building AI that helps the military with logistics or recruitment, and building AI that identifies targets or makes kill decisions. The former is defensible. The latter crosses a line that many technologists, myself included, wouldn't want to see our work contribute to.
The problem is that AI doesn't respect those boundaries. A model trained to identify objects in satellite imagery for "logistics" is the same model that can identify targets. A natural language system that helps military planning is the same system that could automate decisions in conflict zones. The technology is dual-use by nature.
Google has three options: reject classified military work and potentially fall behind competitors; accept it and hope employees don't revolt; or try to establish genuine oversight mechanisms for classified contracts that give employees confidence without compromising national security. That third option is hard, maybe impossible. But it's probably the only one that doesn't end in either business disaster or mass resignations.
The technology is impressive. The question is whether Google can deploy it in ways that satisfy both the Pentagon and its own workforce. Based on the letter from hundreds of employees, that's looking increasingly unlikely.