The internet's most trusted encyclopedia just drew a line in the sand. Wikipedia has officially banned AI-generated text from its articles, with two exceptions that say a lot about how the community thinks about artificial intelligence.
The new policy, approved by Wikipedia's community of volunteer editors, prohibits using large language models like ChatGPT or Claude to generate article content. But here's where it gets interesting: the exceptions cover translating existing Wikipedia articles into other languages and generating summaries of articles that already exist.
See what they did there? AI is fine as a tool to help humans do work faster. It's not fine as a replacement for human expertise and judgment.
As someone who's watched the tech industry slap "AI-powered" on everything from toasters to tax software, I find this distinction refreshing. Wikipedia isn't saying AI is useless; it's saying AI can't do the fundamental work of determining what's true and what belongs in an encyclopedia.
The policy makes clear that Wikipedia articles must be written by people who understand the sources, can evaluate reliability, and take responsibility for accuracy. An AI can paraphrase existing content or help translate it, but it can't replace the editorial judgment that makes Wikipedia actually trustworthy.
This matters beyond Wikipedia. We're at a moment where everyone's trying to figure out what AI should and shouldn't do. The answer isn't "nothing" and it isn't "everything." It's somewhere in between, and Wikipedia just gave us a pretty good framework.
AI as a tool? Sure. AI as the author? Absolutely not.
The technology is impressive. The question is whether anyone needs machine-generated encyclopedia entries that sound authoritative but might be completely wrong. Wikipedia's community just answered: no, we don't.