A federal judge ruled this week that the Department of Government Efficiency's use of ChatGPT to cancel federal grants was both procedurally flawed and illegal, restoring funding to programs that had been shut down for alleged "DEI prejudice." The ruling marks a landmark case in how government agencies are deploying AI decision-making tools without proper oversight or human review. The technology works fine. The question is whether anyone should be using it this way.

According to court documents, DOGE fed grant applications into ChatGPT and used the AI's output to justify canceling funding to programs it deemed to contain diversity, equity, and inclusion components. The problem wasn't the AI's capabilities. It was that federal administrative law requires agencies to follow established procedures, provide evidence for decisions, and allow for meaningful review.

"You can't outsource federal decision-making to a chatbot," the judge wrote in the ruling. "The Administrative Procedure Act requires agencies to engage in reasoned decision-making based on the administrative record. A ChatGPT prompt is not a substitute for analysis."

The case highlights what happens when government agencies treat AI as a magic policy tool. I've built tech. I've shipped code. I know the difference between a demo and a product. And I know that LLMs are probabilistic text generators, not legal analysts or policy experts.

ChatGPT can summarize documents, draft emails, and help brainstorm ideas. What it can't do is make legally defensible administrative decisions that affect millions of dollars in federal funding. The model has no concept of administrative law, no understanding of precedent, and no ability to explain its reasoning in a way that satisfies due process requirements.

This isn't an anti-AI ruling. It's a pro-accountability ruling. The judge didn't say agencies can't use AI tools.
The ruling said they can't use AI as a substitute for human judgment and proper procedure.

What makes this case particularly concerning is that DOGE didn't even try to hide what it was doing. Officials apparently believed that feeding grant applications into ChatGPT and acting on the output was a legitimate way to make federal funding decisions. That suggests a fundamental misunderstanding of both AI capabilities and administrative law requirements.

The restored grants will now go back through proper review processes, with actual humans making decisions based on established criteria, legal standards, and the administrative record. Revolutionary? No. But it's how government is supposed to work.

As AI tools become more sophisticated and more widely deployed in government, cases like this will become more common. The technology is impressive. The question is whether anyone needs to use it to replace basic administrative functions that already work.

Every startup founder I know has learned this lesson the hard way: automation is great until it automates the wrong thing. Looks like the federal government just got its first expensive lesson in that reality.





