A lawyer claims Google terminated his account after he uploaded public Epstein case files to NotebookLM for analysis, taking Gmail, Voice, and Photos down with it. This is the nightmare scenario of platform dependence: upload the wrong thing to an AI tool and lose your email, photos, and phone number all at once, with no warning.
The documents were public records from a high-profile legal case. They weren't illegal. They weren't private. They were the kind of materials that journalists, researchers, and legal professionals analyze every day. But apparently, uploading them to Google's AI tool triggered an automated moderation system that didn't just ban access to NotebookLM—it shut down the user's entire Google account.
This is what platform dependence actually looks like in practice. Most people don't have a backup email address that's actually functional. Their Gmail is their primary identity for banking, work, shopping, and every other service they use. Lose access to that, and you can't reset passwords, can't receive verification codes, can't prove who you are to half the internet. Add in Google Photos for family pictures and Google Voice for phone service, and you're looking at losing access to years of digital life.
There's no evidence Google provided meaningful warning or a clear path to appeal. That's the problem with automated moderation at scale—the systems that police billions of users can't afford human review for every enforcement action. So you get edge cases like this: someone using an AI tool for legitimate professional purposes gets flagged by an algorithm and loses everything.
I've built products with automated moderation. I understand why companies do it—there's no other way to handle the scale of abuse and illegal content on platforms with billions of users. But there has to be due process. There has to be a way to appeal that doesn't involve shouting into the void and hoping someone at the company notices your Twitter thread.
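To make the due-process point concrete, here's a minimal sketch of what graduated enforcement could look like. Every name here is hypothetical, and this is not a description of any real Google system: the idea is simply that automated flags can narrow access to the product that triggered them, but account-wide termination is gated behind human review.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    RESTRICT_PRODUCT = auto()        # block only the product that was flagged
    SUSPEND_PENDING_REVIEW = auto()  # freeze, notify the user, queue for a human
    TERMINATE_ACCOUNT = auto()       # only ever issued by a human reviewer

@dataclass
class Flag:
    product: str     # e.g. "notebooklm"
    severity: float  # 0.0-1.0 classifier confidence
    category: str    # e.g. "csam", "spam", "copyright"

def decide(flag: Flag, human_confirmed: bool = False) -> Action:
    """Graduated enforcement: automation may narrow access, never erase identity.

    Account-wide termination is gated behind human_confirmed, so a false
    positive costs the user one product, not their email, photos, and
    phone number.
    """
    if human_confirmed:
        return Action.TERMINATE_ACCOUNT
    if flag.severity >= 0.95:
        # High-confidence hit: freeze the account but keep data intact,
        # notify the user, and put the case in a human review queue.
        return Action.SUSPEND_PENDING_REVIEW
    if flag.severity >= 0.7:
        # Medium confidence: contain the blast radius to the flagged product.
        return Action.RESTRICT_PRODUCT
    return Action.NO_ACTION

# A NotebookLM upload that trips a classifier loses NotebookLM access,
# not Gmail, Voice, and Photos.
print(decide(Flag(product="notebooklm", severity=0.8, category="csam")))
# Action.RESTRICT_PRODUCT
```

The design choice that matters is the asymmetry: the algorithm can take away a feature, but only a person can take away an identity.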
The Epstein files are a particularly fraught case. They involve serious criminal activity, and it's reasonable for moderation systems to flag content related to child exploitation. But public court documents are not illegal content: they're the basis for journalism, legal scholarship, and public understanding of what happened. If AI tools can't distinguish between possessing illegal material and lawfully analyzing public records about illegal activity, that's a design failure.
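One way to state that distinction as a design constraint, again as a hypothetical sketch rather than anything a real pipeline does: check provenance before enforcing, and treat a match against a known public-record corpus as a reason to suppress the flag rather than escalate it.

```python
def should_enforce(content_hash: str, classifier_score: float,
                   public_record_hashes: set[str]) -> bool:
    """Suppress enforcement when a flagged upload matches a known public record.

    public_record_hashes stands in for any provenance signal (court-filing
    corpora, news archives); a real pipeline would use curated datasets and
    robust matching, not exact string comparison.
    """
    if content_hash in public_record_hashes:
        return False  # documentation of a crime, not contraband
    return classifier_score >= 0.95

# A court filing that trips a keyword classifier is not enforced against.
COURT_RECORDS = {"sha256:example-public-filing"}  # hypothetical corpus
print(should_enforce("sha256:example-public-filing", 0.97, COURT_RECORDS))  # False
```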
