EVA DAILY

Featured · Editor's Pick
TECHNOLOGY | Thursday, February 5, 2026 at 6:33 PM

The Hidden Human Cost of AI Training: Women Watching Violence for $2 an Hour

In India, female workers spend eight-hour shifts reviewing violent and abusive content to train AI moderation systems, earning as little as $2 per hour and receiving minimal mental health support. This investigation exposes the hidden human cost of 'responsible AI.'

Aisha Patel

Feb 5, 2026 · 2 min read

Photo: Unsplash / Mimi Thian

In India, female workers spend eight-hour shifts watching rape, murder, and child abuse videos to train AI content moderation systems. They earn as little as $2 per hour with minimal mental health support. These are the invisible workers making AI 'safe,' and they're being traumatized in the process.

Everyone talks about AI ethics in the abstract - bias in algorithms, fairness in outcomes, transparency in decision-making. But this is what AI ethics looks like in practice: outsourced trauma for poverty wages.

According to The Guardian's investigation, these workers label horrific content to teach AI systems what to flag and remove. The work is necessary - somebody needs to train these models on what content violates policies. But the tech industry has quietly offshored this psychological burden to countries where labor is cheap and regulations are minimal.

"In the end, you feel blank," one worker told reporters, describing the emotional numbness that sets in after hours of exposure to the worst of humanity.

The companies involved provide some support - counseling sessions, breaks, rotation policies. But workers report these measures are inadequate for the volume and severity of content they process daily. Some describe lasting trauma, nightmares, and difficulty maintaining normal relationships.

This isn't new. Previous investigations exposed similar conditions for Facebook content moderators. What's different now is scale - as AI companies race to make their models 'safe' and 'aligned,' they're creating industrial-scale demand for this work.

The tech industry's answer to content moderation has been to automate it with AI. But training that AI requires humans to do the very work we're trying to automate. We've created a bootstrapping problem where the path to automation is paved with human trauma.

And because this work happens in India, Kenya, the Philippines - places far from Silicon Valley - it stays conveniently invisible. Out of sight, out of mind, out of the quarterly earnings calls where companies tout their commitment to 'responsible AI.'

The technology is impressive. The question is whether we're willing to acknowledge and properly compensate the people who make it possible.
