A day after Anthropic sued the Trump administration for blacklisting it over ethical concerns, Google went all-in on Pentagon AI contracts. The timing couldn't be more telling about where the money is—and where Silicon Valley's principles aren't.
Google announced Tuesday it's introducing Agent Designer to the Pentagon's GenAI.mil portal, letting the Defense Department's 3 million employees build custom AI agents for administrative tasks. Think meeting notes, action items, project planning—the boring stuff. For now, anyway.
The tools will run on unclassified networks initially, but Emil Michael, the DOD's technology chief, told Bloomberg he has "high confidence" Google will be "a great partner on all networks." That includes classified and top-secret environments. Translation: this is just the beginning.
Here's the context Wall Street cares about. Anthropic—the AI safety company backed by billions in venture capital—just got designated a supply chain risk by the federal government. Why? They refused to let the Pentagon use their models for autonomous weapons or domestic surveillance. That's a principled stance that apparently makes you a national security threat in 2026.
Google, meanwhile, is filling the gap Anthropic left behind. The company joins OpenAI and Elon Musk's xAI as approved AI providers on the Pentagon's classified cloud. Until recently, Anthropic was the only AI company operating there.
Follow the money, as always. Defense contracts are massive, guaranteed revenue streams—the kind of thing that makes CFOs salivate. While consumer AI products burn cash trying to find sustainable business models, Pentagon deals come with decade-long commitments and budgets approved by Congress.
But here's where it gets messy for Google. Jeff Dean, Google's AI chief, signed on to an amicus brief supporting Anthropic in its lawsuit against the government. So did a couple dozen other Google and OpenAI employees. Dean had previously expressed sympathy with employee concerns about military AI work.