Wikipedia has implemented a comprehensive ban on AI-generated content, requiring all contributions to be human-written and verified. The new policy, adopted on March 20, states explicitly that "text generated by large language models often violates several of Wikipedia's core content policies."
The move comes after volunteer editors found themselves swamped by AI-generated submissions. "In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed," according to the policy discussion. The community decided enough was enough.
Wikipedia gets it: its entire value proposition is verifiable, human-curated knowledge. Every claim needs a citation. Every edit needs context. AI might be impressive at generating plausible-sounding text, but it doesn't understand the difference between plausible and true.
The controversy came to a head with the TomWikiAssist bot, which sparked community outcry after making thousands of edits. The incident highlighted a fundamental tension: Wikipedia's open editing model assumes good-faith human contributors who can be held accountable. AI-generated content breaks that model.
Large language models are trained to predict the next word, not to verify facts. They hallucinate citations, blend multiple sources incorrectly, and generate confident-sounding nonsense. For an encyclopedia that stakes its reputation on accuracy and verifiability, that's a dealbreaker.
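To make that concrete, here is a deliberately toy sketch of next-word prediction: a bigram model over a made-up corpus that always emits the most frequent successor. Everything in it (the corpus, the `generate` helper, greedy decoding) is an illustrative invention, not how any production LLM works, but it shows the core failure mode: the objective rewards fluency, and "citation is blue" comes out just as confidently as "the sky is blue."

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then greedily emit the most frequent successor. Real LLMs are
# neural networks trained over tokens, but the objective has the same
# shape (predict a plausible continuation), and nothing in it checks
# whether that continuation is true.
corpus = (
    "the sky is blue . the sky is vast . "
    "the citation is fabricated . the citation is plausible ."
).split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(seed: str, length: int = 6) -> str:
    """Greedily continue `seed` with the most probable next word."""
    words = [seed]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))       # e.g. "the sky is blue . the sky"
print(generate("citation"))  # e.g. "citation is blue . the sky is"
```

The second output is the point: a grammatical, confident-sounding claim that no source ever made. Scale that up to billions of parameters and you get fluent prose with fabricated citations, which is exactly the kind of cleanup work that was burying Wikipedia's editors.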
Some see this as Wikipedia being anti-innovation. I see it as Wikipedia understanding what makes Wikipedia valuable. The technology is impressive. The question is whether anyone needs AI-generated encyclopedia entries when the entire point of an encyclopedia is human-verified knowledge.
As one editor put it during the policy debate: "We're not here to see what AI can write. We're here to document what humans know."

