Remember when Google Glass failed because people were creeped out by cameras on faces? Meta figured out that the problem wasn't the cameras themselves - it was that Glass looked like a camera.
Their Ray-Ban smart glasses look like normal sunglasses. They're stylish. They're subtle. And according to privacy researchers, they're capturing intimate moments and sending them to contract workers for AI training.
The glasses record what you see. That's the feature. What Meta didn't emphasize is that those recordings - your conversations, your home, your family, the inside of restaurants and gyms and doctors' offices - may end up being reviewed by strangers halfway around the world.
Meta needs human labelers to train their AI systems to understand what the glasses are seeing. That's normal for machine learning. What's not normal is putting cameras on people's faces and collecting footage from everywhere they go, with no effective way for bystanders to consent.
When someone points a phone at you, you know you're being recorded. When someone wearing normal-looking glasses captures video of you, you have no idea. Yes, there's a small indicator LED, but most bystanders don't know to look for it, and it's easy to miss. The technology works, which is exactly the problem.
Meta says the data is handled securely and that labelers are trained on privacy protocols. That's the same company that's been fined billions for privacy violations, so forgive me for being skeptical. Even if their security is perfect, the fundamental issue remains: people are creating surveillance footage without the knowledge or consent of everyone being recorded.
The real question isn't whether Meta can secure the data. It's whether we want to live in a world where every social interaction might be recorded and reviewed by strangers. Where going to dinner with friends means assuming you're being captured for AI training data.
Google Glass failed because society rejected ubiquitous face-mounted cameras. Meta is betting they can succeed by making the cameras invisible. That's not solving the problem. That's making it worse.