The Pentagon has officially designated Palantir's Maven AI platform as a core military system, with funding exploding from $480 million in 2024 to $13 billion in the current budget. That's not a typo. That's a 27x increase in two years. AI has gone from experimental program to foundational defense infrastructure.
For those who don't remember, Maven is the project that caused a revolt at Google back in 2018. Thousands of employees signed a petition saying Google shouldn't be building AI for warfare. The company eventually pulled out of the project. Peter Thiel's Palantir had no such qualms. They picked up the contract and have been quietly building ever since.
Maven uses computer vision and machine learning to analyze drone footage, identify objects and people, and provide targeting recommendations to military operators. The Pentagon describes it as "AI-assisted decision-making." Critics describe it as automating the kill chain. Both descriptions are accurate.
What's changed is the scale and permanence. This isn't a research program anymore. It's not an experiment. The Pentagon is designating Maven as core infrastructure - the kind of system that's essential to military operations. That means long-term funding commitments, integration into command structures, and a dependence that will be very hard to reverse.
The $13 billion investment tells you everything about where defense priorities are going. For context, that's more than half of NASA's entire annual budget. The military is betting that AI will define the next generation of warfare the same way nuclear weapons defined the last. They're probably right.
Here's what keeps me up at night: we're building autonomous targeting systems faster than we're building the ethical frameworks to govern them. The technology for AI-assisted weapons is advancing at breakneck speed. International law? Still operating on the Geneva Conventions, written in 1949. There's no treaty governing autonomous weapons. No international consensus on what's acceptable. Just a race to build the capability before someone else does.
Palantir, for its part, has been very clear that final targeting decisions remain with human operators. Maven recommends. Humans decide. That's the current model. But the pressure in combat situations is always toward automation. When the AI can make targeting recommendations in milliseconds and humans take minutes to review them, the incentive structure pushes toward removing the human from the loop. We've seen this pattern in nearly every domain where AI gets deployed.
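To make the architectural point concrete, here is a minimal, entirely hypothetical sketch of the human-in-the-loop pattern described above. None of these names or thresholds come from Maven or Palantir; this is just the general shape of a system where a model can only enqueue recommendations and every action requires an explicit human decision.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A single model output awaiting human review (illustrative only)."""
    target_id: str
    confidence: float

class HumanInTheLoop:
    """The model proposes; only an explicit human decision resolves anything."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.pending: list[Recommendation] = []

    def propose(self, rec: Recommendation) -> None:
        # The model can only add items to a review queue; it never acts.
        if rec.confidence >= self.threshold:
            self.pending.append(rec)

    def decide(self, target_id: str, approve: bool) -> str:
        # A human resolves each item by name; nothing times out into action.
        self.pending = [r for r in self.pending if r.target_id != target_id]
        return "authorized" if approve else "rejected"

loop = HumanInTheLoop()
loop.propose(Recommendation("alpha", 0.95))  # queued for review
loop.propose(Recommendation("bravo", 0.40))  # below threshold, dropped
result = loop.decide("alpha", approve=False)
```

Notice where the pressure lives: the machine does its work in the `propose` call in milliseconds, while the whole system waits on `decide`. Every automation argument amounts to shortening or deleting that second function.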
