New allegations have surfaced claiming Facebook actively facilitated the spread of incendiary content during Ethiopia's Tigray conflict. The claims raise urgent questions about the role of Western tech platforms in African conflicts, and about whether the company did enough to prevent its algorithms from amplifying hate speech that contributed to atrocities.
The allegations, circulating among Ethiopian researchers and digital rights advocates, suggest that Facebook's content moderation failures went beyond negligence to active complicity—allowing posts inciting violence against Tigray civilians to flourish while the platform's recommendation algorithms amplified divisive ethnic nationalism across Ethiopian-language communities.
The Tigray War's Digital Dimension
The Tigray conflict, which erupted in November 2020 and claimed hundreds of thousands of lives through combat, starvation, and atrocities, played out not just on battlefields but across social media. Ethiopian activists have long documented how Facebook became a primary vector for hate speech, disinformation, and calls for ethnic violence during the war.
According to sources cited in online discussions, some allegations go further—claiming intelligence services may have used Facebook's infrastructure to track and target opponents, though these claims remain unverified and require independent investigation.
Platform Failures
What is documented is Facebook's repeated failure to moderate Ethiopian-language content adequately. The platform employed too few Amharic, Tigrinya, and Oromo speakers to review flagged posts. Automated systems failed to detect context-specific hate speech. Posts calling for violence remained online for days or weeks despite user reports.
Dr. Alemayehu Woldemariam, a diaspora researcher who has tracked online incitement, told Ethiopian media that "Facebook's algorithms prioritized engagement over safety, and in a country with deep ethnic tensions, engagement meant outrage, and outrage meant violence."
The platform's failures in Ethiopia echo similar breakdowns in Myanmar, where Facebook acknowledged its systems were used to incite genocide against Rohingya Muslims, and in India, where WhatsApp misinformation has triggered mob violence.

