Meta's AI display glasses are reportedly sending intimate videos to human reviewers without clear user consent. This is the privacy nightmare we warned about - cameras on your face, recording everything, with humans on the other end watching.
According to reporting from Sweden's Svenska Dagbladet, workers in Nairobi, Kenya who annotate data for Meta have seen it all: people using toilets, sexual activity, individuals in states of undress, and even credit card numbers. All captured by Meta's Ray-Ban smart glasses.
Here's the technical reality: AI systems need training data. When you enable the AI assistant feature on these glasses, you are theoretically consenting to human review. But there's a massive gap between consent buried in a terms-of-service document and understanding that strangers in Kenya might watch you in the bathroom.
Everyone's rushing to put AI-powered cameras everywhere. Meta isn't alone - every tech company is building some version of "AI that sees what you see." But nobody's solved the fundamental problem: who watches the watchers?
The architecture basically guarantees this outcome. Current AI models need human feedback to improve. That means human eyes on the data. Meta can technically claim they disclosed this in their privacy policy, but let's be honest - nobody reads those things, and even if they did, they wouldn't expect this.
I've built products that handled sensitive financial data. The rule was simple: minimize human access, encrypt everything, audit every access. But AI companies are operating under a different playbook - collect everything, human-review everything, improve the model at all costs.
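That rule can be sketched in code. The class, role names, and records below are purely illustrative (they aren't any real system's API), and the "encryption" is a placeholder comment, but the shape is the point: access is role-gated, and every read attempt, granted or denied, lands in an append-only audit trail.

```python
import time

class AuditedStore:
    """Illustrative sketch: role-gated access with an append-only audit trail.
    Not a real system's API; encryption at rest is stubbed out."""

    def __init__(self, allowed_roles):
        self._records = {}          # record_id -> bytes (would be encrypted at rest)
        self._audit_log = []        # append-only access trail
        self._allowed_roles = set(allowed_roles)

    def put(self, record_id, payload: bytes):
        # A production system would encrypt with an AEAD cipher and managed
        # keys before storage; this sketch just keeps the bytes.
        self._records[record_id] = payload

    def get(self, record_id, accessor, role):
        granted = role in self._allowed_roles
        # Log every attempt BEFORE deciding, so denials are auditable too.
        self._audit_log.append({
            "ts": time.time(),
            "who": accessor,
            "role": role,
            "record": record_id,
            "granted": granted,
        })
        if not granted:
            raise PermissionError(f"role {role!r} may not read {record_id!r}")
        return self._records[record_id]

    def audit_trail(self):
        return list(self._audit_log)

store = AuditedStore(allowed_roles={"fraud-review"})
store.put("txn-42", b"card ending 1234")
store.get("txn-42", accessor="alice", role="fraud-review")      # granted, logged
try:
    store.get("txn-42", accessor="bob", role="ml-annotation")   # denied, logged
except PermissionError:
    pass
print(len(store.audit_trail()))  # prints 2: both attempts are in the trail
```

The contrast with the "collect everything, review everything" playbook is exactly the denied path: in this model, an annotation role never sees the raw record, and the attempt itself leaves evidence.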
What's particularly galling is that Meta is selling these glasses as a cool consumer gadget while running them through an annotation pipeline designed for AI training. Users think they're getting a personal assistant. Meta is getting training data, and the workers in Nairobi are getting an eyeful of things they probably didn't sign up to see either.