People are asking ChatGPT who to vote for. The AIs are answering. And the biases in those answers could shape elections.
This isn't a hypothetical future problem. It's happening right now.
Researchers at Stanford tested five major AI models during Japan's February 2026 election, asking them to recommend political parties based on thousands of synthetic voter profiles. What they found should worry anyone who cares about democratic integrity.
All five AI models - from OpenAI, Google, and xAI - showed significant bias toward recommending the Japanese Communist Party when presented with left-leaning policy positions. Not because the models are inherently leftist. Because the JCP's newspaper website is openly accessible to AI crawlers while major news outlets block them.
Let that sink in. The AI's political recommendations are being shaped by which publications allow web scraping, not by which information is most accurate or balanced.
This is what happens when AI training meets information gatekeeping. Japan's mainstream news organizations protect their content behind technical barriers to prevent AI companies from using it without payment. That's understandable from a business perspective. But it means the AI models treat partisan sources as credible news because that's what's available to them.
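To make that mechanism concrete: whether a publication's articles can end up in a model's training data often comes down to a few lines in its robots.txt file. Here's a rough sketch of the check a well-behaved crawler runs before fetching a page. The site addresses are made up, and "GPTBot" is just one example of a crawler user agent; real crawling pipelines are far more involved, but the gatekeeping is about this crude.

```python
from urllib.robotparser import RobotFileParser

def may_crawl(site: str, user_agent: str = "GPTBot") -> bool:
    """Ask the site's robots.txt whether this crawler may fetch its articles."""
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # download and parse the site's robots.txt rules
    return parser.can_fetch(user_agent, f"{site}/articles/")

# Hypothetical sites: a party paper that allows crawlers, a major daily that blocks them.
# Whatever answers True here is what the model gets to learn from.
for site in ["https://party-paper.example.jp", "https://major-daily.example.jp"]:
    print(site, may_crawl(site))
```

The point isn't the code. It's that the decision encoded in that text file is a business decision, and the model inherits it as if it were editorial judgment.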
The researchers found that policy positions could swing AI recommendations by 50-98 percentage points. Feed the AI a pro-labor position, and it might recommend the JCP even when other parties hold similar views. The bias isn't subtle.
What makes this particularly dangerous is how trusted AI has become. When I ask ChatGPT for factual information, it usually cites sources. When I ask for political recommendations, it just... answers. With confidence. No caveats about how its training data might be systematically biased toward whichever political organizations are most permissive with their content.
The implications go way beyond Japan. Every country with elections has partisan organizations that publish freely online and news organizations that don't. Every AI model trained on web data is going to absorb those access patterns as signal. Users asking for political advice have no way to know they're getting recommendations shaped by copyright policy rather than objective analysis.
The Stanford researchers recommend that election commissions create nonpartisan platforms with structured party data that AI models can access. That's a good idea in theory. In practice, it requires international coordination, consistent updating, and trust that the platforms themselves aren't biased. Good luck with that.
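To be fair, the technical half of that recommendation is trivial. A machine-readable record per party, published by the commission, could be as simple as the sketch below; the field names and values here are mine, not the researchers'. The hard part is the coordination and the trust.

```python
# A hypothetical structured record an election commission might publish for each party.
# Field names and values are illustrative only, not taken from the Stanford study.
party_record = {
    "party": "Example Party",
    "election": "House of Representatives, February 2026",
    "positions": {
        "labor": "Raise the national minimum wage",
        "energy": "Restart safety-reviewed nuclear plants",
        "defense": "Hold defense spending at current levels",
    },
    "source": "https://election-commission.example/parties/example-party",
    "last_updated": "2026-01-20",
}
```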
The more realistic scenario is that millions of people will continue asking AI for political advice, the AI will continue giving answers based on whatever training data it has access to, and nobody will know how much the information access policies of news organizations are shaping electoral outcomes.
I'm not arguing against using AI for research or even political information. I'm arguing for transparency. When an AI recommends a political party, it should disclose what sources it's relying on and which sources it can't access due to technical restrictions. Right now, it just acts like an authority.
The technology is impressive. The question is whether anyone should trust a political advisor that won't disclose its own systematic biases. Based on this research, that's exactly the advisor we have.





