A Chinese official's careless use of ChatGPT just exposed a coordinated global intimidation operation targeting dissidents. The AI chatbot's conversation history revealed tactics, targets, and the scope of Beijing's transnational repression efforts. The irony is delicious: an AI tool designed to be helpful ended up unmasking state-level surveillance operations.
But the real story isn't about ChatGPT being clever—it's about how even sophisticated actors fundamentally misunderstand the digital trails they leave behind.
The OpSec Failure That Keeps on Giving
According to CNN's reporting, a Chinese government official used ChatGPT to help coordinate intimidation campaigns against dissidents living abroad. The problem? ChatGPT keeps conversation histories. And when those histories were inadvertently shared or accessed, they exposed a coordinated effort spanning multiple countries.
This is an Operational Security 101 failure. Using a cloud-based AI service—one that explicitly stores your conversations on servers you don't control—for sensitive government operations is like planning a heist over a party line.
The exposure revealed not just the existence of the operation but its specific tactics: how officials identify targets, coordinate across borders, and apply pressure to diaspora communities. It's a master class in what not to do when running covert operations.
Why This Keeps Happening
Having worked in tech long enough to see companies make similar mistakes, I can tell you exactly why this happened: AI tools feel private even when they're not.
ChatGPT's interface is intimate—just you and the chatbot, having a conversation. There's no visible reminder that everything you type is being logged, stored, and potentially reviewed. The UX creates an illusion of ephemerality that the underlying architecture doesn't support.
This isn't just a problem for authoritarian governments (though I'm not shedding tears for their OpSec failures). Corporations leak sensitive data through AI tools all the time. Samsung famously banned ChatGPT after employees leaked proprietary code by asking the AI to help debug it.
