Here's a plot twist nobody saw coming: OpenAI's ChatGPT is quietly citing information from Grokipedia, the conservative-leaning, AI-generated encyclopedia that Elon Musk's xAI launched last October. You know, the same Grokipedia that's been widely criticized for claiming pornography causes AIDS and publishing dehumanizing content about transgender people.
According to reporting from The Guardian, OpenAI's latest GPT-5.2 model referenced Grokipedia nine times when asked a series of questions—not on well-documented topics where its inaccuracy is obvious, but on obscure subjects where fact-checking is harder. The kind of stuff that slips through because nobody's paying attention.
The technology is impressive. The question is whether anyone needs AI models trained on demonstrably unreliable sources.
<h2>What Is Grokipedia, Anyway?</h2>
Grokipedia is Elon Musk's answer to what he sees as Wikipedia's alleged liberal bias. It's an AI-generated encyclopedia that Musk positioned as a "neutral" alternative. The problem? It's been anything but neutral. Early entries included justifications for slavery, false claims linking pornography to HIV/AIDS, and content so problematic that even conservative commentators raised eyebrows.
The platform is built by xAI's Grok model, which scrapes and synthesizes information without the human editorial oversight that makes Wikipedia—for all its flaws—reasonably trustworthy. It's automation without accountability.
<h2>The AI Ouroboros Problem</h2>
Here's where things get concerning: when AI models like ChatGPT start citing other AI-generated content, you create what researchers call "model collapse." AI trained on AI output produces increasingly degraded results—like making a photocopy of a photocopy until the text becomes unreadable.
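The degradation is easy to see even in a toy setting. The sketch below is not anyone's actual training pipeline, just a minimal illustration of the recursive-training dynamic: each "generation" learns only from samples of the previous generation's output, and rare facts that miss a single round of sampling vanish for good.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# 1,000 distinct "facts" stand in for the diversity of human-written text
corpus = list(range(1000))
diversity = [len(set(corpus))]

# Each generation is "trained" only on samples of the previous generation's
# output. A fact that goes unsampled in one round can never reappear, so
# the pool of surviving facts can only shrink.
for generation in range(10):
    corpus = [random.choice(corpus) for _ in range(1000)]
    diversity.append(len(set(corpus)))

print(diversity)  # distinct facts remaining after each generation
```

After ten generations, most of the original diversity is gone, even though every generation was the same size as the last. That's the photocopy-of-a-photocopy effect in miniature.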
The Guardian investigation found that GPT-5.2 avoided citing Grokipedia on famous controversies such as January 6 or the HIV/AIDS epidemic, where its errors are well documented. Instead, it referenced the platform for obscure historical claims that are harder to verify. One example: a false claim about a historian that had already been debunked.
