Anthropic just published evidence that three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—have been systematically stealing its model's capabilities through what are called distillation attacks. This isn't small-scale probing. We're talking about over 16 million API queries using approximately 24,000 fraudulent accounts.
Let me explain what's actually happening here, because this is the AI equivalent of industrial espionage at scale.
When you release an AI model via API, anyone can query it. That's the point—you're providing a service. But if someone queries your model millions of times with carefully crafted prompts, they can use your responses to train their own model. They're essentially using your multi-hundred-million-dollar training run to bootstrap their model for pennies on the dollar.
This is called distillation, and in controlled academic settings, it's a legitimate research technique. What Anthropic detected is something else entirely: coordinated campaigns using thousands of fake accounts to systematically extract their model's most valuable capabilities.
The scale is staggering. MiniMax alone generated 13 million exchanges. Moonshot AI clocked 3.4 million. These aren't random users exploring the API—this is organized, industrial-scale data collection designed to clone proprietary models.
Anthropic's detection methods are fascinating from a technical standpoint. They built classifiers to identify distillation patterns in API traffic, looking for telltale signs: massive volume concentrated in specific capability areas, highly repetitive query structures, coordinated activity across thousands of accounts. They correlated IP addresses, analyzed request metadata, and matched infrastructure indicators to known AI labs.
One particularly telling detail: the attacks specifically targeted "Claude's most differentiated capabilities: agentic reasoning, tool use, and coding." These aren't general-purpose queries. The attackers knew exactly what they were after—the capabilities that make Anthropic's models valuable.
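To make the three signals described above concrete (volume concentrated in one capability area, repetitive query structure, coordinated activity across accounts), here is a small heuristic over API request records. It is an added illustration, not Anthropic's classifier: the record fields (`account`, `ip`, `capability`, `prompt`), the thresholds, and the use of a shared /24 as a stand-in for shared infrastructure are all assumptions, and the sample traffic is constructed.

```python
import re
from collections import Counter, defaultdict

def template(prompt):
    """Reduce a prompt to its skeleton: mask quoted strings and numbers."""
    p = re.sub(r'"[^"]*"', '<STR>', prompt.lower())
    return re.sub(r'\d+', '<N>', p)

def flag_clusters(records, min_accounts=3, min_requests=20,
                  min_capability_share=0.8, min_template_share=0.6):
    """Group requests by shared /24 (assumed proxy for shared infrastructure)
    and flag clusters that are multi-account, concentrated in one capability
    area, and dominated by a single prompt skeleton."""
    clusters = defaultdict(list)
    for r in records:
        clusters[r["ip"].rsplit(".", 1)[0]].append(r)
    flagged = {}
    for subnet, reqs in clusters.items():
        accounts = {r["account"] for r in reqs}
        cap, cap_n = Counter(r["capability"] for r in reqs).most_common(1)[0]
        _, tpl_n = Counter(template(r["prompt"]) for r in reqs).most_common(1)[0]
        cap_share, tpl_share = cap_n / len(reqs), tpl_n / len(reqs)
        if (len(accounts) >= min_accounts and len(reqs) >= min_requests
                and cap_share >= min_capability_share
                and tpl_share >= min_template_share):
            flagged[subnet] = {"accounts": len(accounts), "requests": len(reqs),
                               "capability": cap, "template_share": round(tpl_share, 2)}
    return flagged

def sample_records():
    """Constructed traffic, not real data. IPs are documentation ranges."""
    recs = []
    for i in range(40):  # 8 accounts, one subnet, one prompt skeleton
        recs.append({"account": f"acct-{i % 8}", "ip": f"203.0.113.{i % 8 + 10}",
                     "capability": "coding",
                     "prompt": f'Write a function named "f{i}" that solves task {i}'})
    organic = [("coding", "Why does my loop never terminate?"),
               ("writing", "Summarise this email for me"),
               ("analysis", "Compare these two quarterly reports"),
               ("writing", "Draft a polite reply to a customer")]
    for i in range(24):  # 2 accounts, mixed capabilities and prompts
        cap, prompt = organic[i % 4]
        recs.append({"account": f"user-{i % 2}", "ip": "198.51.100.7",
                     "capability": cap, "prompt": prompt})
    return recs

if __name__ == "__main__":
    for subnet, stats in flag_clusters(sample_records()).items():
        print(subnet, stats)
```

Each signal alone produces false positives (a busy coding team behind one NAT, a batch job with a fixed prompt), which is why the cluster is flagged only when all of them hold together.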
The geopolitical implications matter here. Anthropic points out that these attacks undermine US export controls on AI technology. The whole point of restricting chip exports to China is to slow its AI development. But if Chinese companies can just query American models millions of times and distill the capabilities, export controls on hardware become meaningless.
