Here's a plot twist nobody saw coming: OpenAI's ChatGPT is quietly citing information from Grokipedia, the conservative-leaning, AI-generated encyclopedia that Elon Musk's xAI launched last October. You know, the same Grokipedia that's been widely criticized for claiming pornography causes AIDS and publishing dehumanizing content about transgender people.
According to reporting from The Guardian, OpenAI's latest GPT-5.2 model referenced Grokipedia nine times when asked a series of questions: not on well-documented topics where Grokipedia's errors are obvious, but on obscure subjects where fact-checking is harder. The kind of stuff that slips through because nobody's paying attention.
The technology is impressive. The question is whether anyone needs AI models trained on demonstrably unreliable sources.
<h2>What Is Grokipedia, Anyway?</h2>
Grokipedia is Elon Musk's answer to what he sees as Wikipedia's alleged liberal bias. It's an AI-generated encyclopedia that Musk positioned as a "neutral" alternative. The problem? It's been anything but neutral. Early entries included justifications for slavery, false claims linking pornography to HIV/AIDS, and content so problematic that even conservative commentators raised eyebrows.
The platform's entries are generated by xAI's Grok model, which scrapes and synthesizes information without the human editorial oversight that makes Wikipedia, for all its flaws, reasonably trustworthy. It's automation without accountability.
<h2>The AI Ouroboros Problem</h2>
Here's where things get concerning: when AI-generated content like Grokipedia's gets cited by other chatbots and eventually folded back into the data future models learn from, you get what researchers call "model collapse." Models trained on the output of other models produce increasingly degraded results, like making a photocopy of a photocopy until the text becomes unreadable.
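To make the photocopy analogy concrete, here's a minimal toy sketch (the facts, corpus size, and distribution are all invented for illustration, and this says nothing about how any production model is actually trained). Treat a "model" as a frequency table over facts, and retrain it each generation only on a corpus sampled from the previous model. The rare facts vanish first, and once gone they never come back.

```python
# Toy sketch of "model collapse" via repeated training on synthetic data.
# A "model" here is just a frequency table over facts; each generation it is
# refit on a corpus sampled from the previous generation's model. Rare facts
# fall out of the sample, get probability zero, and can never come back.
# Illustrative only -- not a claim about how any production model is trained.
import numpy as np

rng = np.random.default_rng(42)

facts = [f"fact_{i}" for i in range(200)]
# Generation 0: a long-tailed, "human-authored" distribution over facts.
probs = 1.0 / np.arange(1, len(facts) + 1)
probs /= probs.sum()

CORPUS_SIZE = 2_000  # how much text each generation is "trained" on

for generation in range(1, 11):
    # Sample a synthetic corpus from the current model ...
    corpus = rng.choice(len(facts), size=CORPUS_SIZE, p=probs)
    # ... and refit the next model purely on that corpus.
    counts = np.bincount(corpus, minlength=len(facts))
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"gen {generation}: {surviving}/{len(facts)} facts still represented")
```

Run it and the count of surviving facts only goes down: each generation throws away a little more of the tail, which is exactly where the obscure, hard-to-verify claims live.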
The Guardian investigation found that GPT-5.2 avoided citing Grokipedia on famous controversies like January 6 or the HIV/AIDS epidemic, where its errors are well-documented. Instead, it referenced the platform for obscure historical claims that are harder to verify. One example: false statements about historian Sir Richard Evans that The Guardian had already debunked.
This isn't random—it's selective contamination. The model seems to "know" when Grokipedia is obviously wrong, but doesn't catch subtler inaccuracies. That's arguably more dangerous than citing it for everything.
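The encouraging part is that this kind of selective contamination is measurable. As a rough sketch (the example URLs and the WATCHLIST set are mine; in practice you'd feed in the cited sources your chatbot client actually reports), here's how a batch of responses could be tallied by cited domain:

```python
# Sketch of the kind of audit The Guardian's test implies: ask a chatbot a
# batch of questions, collect the URLs it cites, and count citations per
# domain. The example responses below are hypothetical placeholders.
from collections import Counter
from urllib.parse import urlparse

WATCHLIST = {"grokipedia.com"}

def audit(responses: list[list[str]]) -> Counter:
    """Tally cited domains across a batch of chatbot responses."""
    tally = Counter()
    for cited_urls in responses:
        for url in cited_urls:
            tally[urlparse(url).netloc.lower()] += 1
    return tally

# Example: cited URLs collected from a few hypothetical responses.
responses = [
    ["https://en.wikipedia.org/wiki/Richard_J._Evans"],
    ["https://grokipedia.com/page/obscure-historical-claim",
     "https://www.theguardian.com/some-article"],
]
tally = audit(responses)
flagged = {domain: n for domain, n in tally.items() if domain in WATCHLIST}
print("citations per domain:", dict(tally))
print("watchlisted domains cited:", flagged)
```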
<h2>What OpenAI Says (And Doesn't Say)</h2>
An OpenAI spokesperson told TechCrunch that the company "aims to draw from a broad range of publicly available sources and viewpoints." That's a diplomatic way of saying they don't manually vet every source in their training data—which, to be fair, would be nearly impossible at scale.
But "broad range of viewpoints" is doing a lot of work in that sentence. Should AI models treat fringe conspiracy theories and peer-reviewed research as equally valid inputs? That's not balance—that's false equivalence.
Anthropic's Claude also appears to reference Grokipedia in some responses, suggesting this isn't just an OpenAI problem. It's an industry-wide issue with training data quality control.
<h2>Why This Matters Beyond the AI Bubble</h2>
You might think, "So what? People should fact-check AI responses anyway." Sure. But most users don't. A recent study found that people trust AI-generated information at similar rates to human-written content—even when it's wrong.
When ChatGPT cites a source, it gains credibility through association. If that source is an AI-generated platform with documented accuracy problems, you're not democratizing knowledge—you're laundering misinformation through a technical process that makes it harder to challenge.
This isn't about whether Elon Musk's politics are good or bad. It's about whether AI systems should amplify any single person's ideological project—left, right, or center—without transparent disclosure and editorial oversight.
<h2>The Bigger Picture</h2>
The AI industry keeps promising that bigger models with more data will solve accuracy problems. But if that data includes AI-generated content from platforms with no fact-checking standards, you're not building toward truth—you're building toward consensus, which isn't the same thing.
Wikipedia, for all its imperfections, has human editors who argue, cite sources, and occasionally ban people for spreading nonsense. Grokipedia has none of that. It's automated synthesis optimized for speed, not accuracy.
OpenAI's decision to include it in training data—whether intentional or accidental—reveals something important about how these systems are built. The companies creating the most powerful AI models in history are still figuring out basic questions about information quality as they go.
That's not inspiring confidence. It's concerning. And the fact that most users have no idea which sources their AI chatbot is citing makes it worse.
The technology is impressive. But impressive doesn't mean trustworthy.
