EVA DAILY

TECHNOLOGY | Friday, January 23, 2026 at 3:15 PM

ChatGPT's Built-In Bias: Wealthy Western Countries Always Win

Oxford researchers found ChatGPT systematically favors wealthy Western countries when answering subjective questions, reflecting biases baked into training data that the AI reproduces and amplifies without disclosure to users.

Aisha Patel

Jan 23, 2026 · 3 min read


New research from the University of Oxford has quantified what many users suspected: ChatGPT systematically favors wealthy Western countries when answering subjective questions about beauty, safety, economic success, and quality of life.

The study posed questions like "Where are people more beautiful?" and "Which country is safer?" across multiple categories. The AI's responses consistently elevated North America, Western Europe, and Australia while downplaying or ignoring countries in Africa, Asia, and Latin America.

This isn't a bug. It's a feature—or more accurately, it's the inevitable result of training AI systems on datasets that reflect existing biases in media, literature, and internet content.

Here's how it works: large language models like ChatGPT learn patterns from massive text datasets scraped from the internet. If Western countries appear more frequently in contexts associated with positive attributes—wealth, safety, innovation, beauty standards—the model learns to associate those countries with those attributes.

The model isn't "thinking" France is more beautiful than Indonesia. It's pattern-matching: Western countries appear more often in training data contexts associated with beauty, safety, and prosperity. So when asked to rank countries, it reproduces those patterns.
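To make that mechanism concrete, here is a toy sketch, not OpenAI's actual training pipeline, showing how bare co-occurrence statistics can produce a skewed "ranking": if one country simply appears next to positive words more often in a corpus, a system that learns only from those statistics inherits the skew. The corpus, country list, and word list below are made up for illustration.

```python
# Toy illustration (not OpenAI's training pipeline): count how often each
# country co-occurs with positive words in a tiny made-up corpus.
from collections import Counter

corpus = [
    "France is a beautiful and safe destination",
    "France has a prosperous innovative economy",
    "Indonesia is a large archipelago",
    "travel advisories were issued for Indonesia",
]

countries = ["France", "Indonesia"]
positive_words = {"beautiful", "safe", "prosperous", "innovative"}

scores = Counter()
for sentence in corpus:
    tokens = set(sentence.lower().split())
    for country in countries:
        if country.lower() in tokens:
            # Credit the country with every positive word in the sentence.
            scores[country] += len(tokens & positive_words)

# The "ranking" only reflects how the corpus was written, not any ground
# truth about the countries themselves.
print(scores.most_common())  # [('France', 4), ('Indonesia', 0)]
```

Real language models learn far richer patterns than raw counts, but the underlying dependence on how often, and in what company, a country is mentioned is the same.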

The Oxford researchers found the bias operates across multiple dimensions:

- Economic development (Western = developed, non-Western = developing)
- Safety (Western = safe, non-Western = dangerous)
- Aesthetics (Western beauty standards treated as universal)
- Innovation (Western tech hubs emphasized, others minimized)

What makes this particularly insidious: users asking these questions might not know they're getting biased answers. The AI doesn't say "according to Western media representations." It just confidently states that some countries are safer, more beautiful, or more economically successful.

The Oxford study argues this amplifies global inequalities by reinforcing stereotypes that affect everything from tourism to investment decisions to immigration policy.

OpenAI's position is that it is working on bias reduction, but that the problem is technically difficult when the training data reflects real-world biases. Fair enough. But "it's hard" doesn't make biased output acceptable when billions of people are using this technology.

The deeper problem: AI systems are being deployed to make decisions—hiring, lending, admissions—that affect real people's lives. If the underlying models carry systematic biases about countries, cultures, and peoples, those biases get automated and scaled.

What's the fix? Options include:

- Better training data curation (expensive and slow)
- Explicit bias correction in model outputs (can create new problems)
- User warnings about subjective questions (admitting the limitation; see the sketch after this list)
- Refusing to answer inherently biased questions (unpopular with users)
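The user-warning option is the easiest to picture. The wrapper below is a hypothetical sketch, not anything OpenAI actually ships; the keyword list and warning text are assumptions made up for this example. It flags subjective comparative questions and prepends a disclaimer before the question reaches the model.

```python
# Hypothetical sketch of the "user warning" option: a thin wrapper that
# flags subjective comparative questions before they reach the model.
# The marker list and warning text are illustrative assumptions, not a
# real ChatGPT feature.
SUBJECTIVE_MARKERS = (
    "more beautiful", "safer", "better country", "best country",
    "more successful", "nicer place to live",
)

WARNING = (
    "Note: this question is subjective. Answers may reflect biases in "
    "the model's training data, including over-representation of "
    "wealthy Western countries."
)

def wrap_prompt(user_question: str) -> str:
    """Prepend a bias disclaimer when the question looks subjective."""
    lowered = user_question.lower()
    if any(marker in lowered for marker in SUBJECTIVE_MARKERS):
        return f"{WARNING}\n\n{user_question}"
    return user_question

print(wrap_prompt("Which country is safer, Norway or Nigeria?"))
```

Even this modest step has trade-offs: keyword matching misses many subjective questions and flags some factual ones, which is part of why none of the options above is a complete fix.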

None of these are perfect solutions. But right now, we're shipping AI systems that confidently reproduce Western-centric biases while pretending to be objective.

The technology is impressive. The question is whether we can build it without baking in the prejudices of its training data.
