EVA DAILY

WORLD | Friday, February 6, 2026 at 12:40 AM

'In the End, You Feel Blank': India's Female Workers Watching Hours of Abusive Content to Train AI

Thousands of women in rural India earn roughly $2 per hour watching violent, abusive, and pornographic content to train AI systems for global tech companies. Workers report severe psychological trauma with minimal support, exposing the hidden human cost behind the AI industry's rapid expansion.

Priya Sharma


Feb 6, 2026 · 4 min read



Photo: Unsplash / Alex Knight

Meera sits in a dimly lit room in rural Maharashtra, her eyes fixed on a screen displaying content she wishes she could unsee. For eight hours a day, she watches videos of violence, abuse, and pornography, clicking through thousands of images to teach artificial intelligence what's harmful and what's not.

She earns ₹150 per hour — roughly $2 USD — for work that leaves her feeling, in her own words, "blank."

"In the end, you feel blank," Meera told The Guardian. "You don't want to talk to anyone. You just want to be alone."

Meera's story is not unique. For thousands of women like her in India, the AI boom powering chatbots and content moderation systems across the globe comes with a psychological price tag rarely discussed in Silicon Valley boardrooms.

The Hidden Cost of AI Training

As global tech companies rush to deploy AI systems, they rely on vast armies of human workers to label, moderate, and review content. Much of this work is outsourced to countries like India, where labor costs are low and workers are plentiful.

These workers — predominantly women from rural communities — spend their days reviewing the worst content the internet has to offer. Child sexual abuse material. Graphic violence. Beheadings. Rape videos. All to train AI models to recognize and filter harmful content.

The work is outsourced through layers of subcontractors, often obscuring which major tech companies ultimately benefit from this labor. Workers interviewed described minimal psychological support, no counseling services, and managers who tell them to "just take a break" when the content becomes overwhelming.

"They told us we'd be reviewing social media posts," said Anjali, a 24-year-old worker from Karnataka. "They didn't tell us we'd be watching people die."

₹1,200 a Day to Watch Horror

The economics are stark. Workers typically earn between ₹1,000-1,500 per day (approximately $12-18 USD) for eight-hour shifts. In rural India, where monthly household incomes can be as low as ₹10,000, this represents significant money.

But the cost to mental health is immeasurable.

Workers reported symptoms consistent with post-traumatic stress disorder: nightmares, flashbacks, difficulty sleeping, withdrawal from family and friends, and what psychologists call "vicarious trauma" — trauma experienced from exposure to others' suffering.

"I stopped eating meat because I kept seeing the videos," said one worker who requested anonymity. "I can't watch TV with my family anymore. Everything reminds me of what I've seen."

Unlike content moderators in Ireland or the United States, where workers for companies such as Facebook have successfully lobbied for better mental health support and higher wages, India's AI training workers operate in a regulatory gray zone with few protections.

The AI Industry's Dirty Secret

The artificial intelligence industry promotes a vision of a clean, automated future — algorithms learning from data, machines teaching themselves. But behind every well-trained AI model is a human being who had to look at the content the machine was learning to avoid.

According to industry estimates, India's data annotation and content moderation industry employs over 500,000 workers, with the majority engaged in some form of content review. As AI companies race to launch new models, demand for this work is growing.

Yet transparency remains minimal. Tech companies rarely disclose which firms they contract for data labeling. Those firms, in turn, subcontract to smaller agencies, creating layers of deniability when labor conditions become public.

"The big companies want the work done, but they don't want to see how it's done," said Dr. Ravi Kumar, a labor rights researcher at Delhi University. "These women are invisible to them."

Demanding Better

Some workers are beginning to speak out, organizing through WhatsApp groups and labor collectives to demand better conditions: mandatory counseling services, higher wages, limits on exposure to traumatic content, and transparency about which companies they're ultimately working for.

But in a country with massive unemployment and fierce competition for any paying work, pushing back comes with risks.

"If I complain, they'll just hire someone else," Meera said. "There are hundreds of girls who need this work."

For now, the women training the world's AI systems continue their invisible labor, carrying psychological scars so that algorithms can keep social media feeds clean and chatbots can avoid generating harmful content.

The next time you see a content warning or watch an AI successfully filter offensive material, remember: somewhere in India, a woman spent hours watching horror so you wouldn't have to.
