Anthropic has accused China-based AI companies DeepSeek, Moonshot, and MiniMax of “illicitly” extracting the capabilities of its AI model Claude through distillation, a process by which less capable AI models are trained on the outputs of a stronger one.
In a recent statement, Anthropic, the creator of Claude, accused the three Chinese AI firms of extracting information from Claude through allegedly “fraudulent” accounts in order to train their own models.
“We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions,” said Anthropic in a statement.
The trio, DeepSeek, Moonshot, and MiniMax, are part of the broader Chinese LLM ecosystem, which is known for offering LLM models at a much lower cost. DeepSeek’s entry in 2024, for example, caused a widespread stock market crash due to its low-cost model.
According to Anthropic, the three companies used the technique of distillation to train their own AI models.
“Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently,” said Anthropic in a statement.
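To give a rough sense of the distillation idea Anthropic describes, the sketch below trains a toy “student” model to imitate a “teacher” model’s soft outputs instead of ground-truth labels. Everything here is a hypothetical stand-in: the teacher is a fixed scoring function playing the role of a large model, and the student is a tiny logistic model, not anything resembling the actual systems involved.

```python
import math
import random

def teacher_prob(x):
    # Stand-in "teacher": returns a soft probability for input x,
    # the way a large model would return output probabilities.
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

def train_student(inputs, lr=0.1, epochs=1000):
    # Stand-in "student": a logistic model sigmoid(w*x + b), fitted by
    # stochastic gradient descent to match the teacher's soft outputs
    # (cross-entropy against soft labels, the core of distillation).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in inputs:
            target = teacher_prob(x)                      # soft label from teacher
            pred = 1.0 / (1.0 + math.exp(-(w * x + b)))   # student's prediction
            grad = pred - target                          # d(cross-entropy)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

random.seed(0)
data = [random.uniform(-2, 2) for _ in range(100)]
w, b = train_student(data)
```

After training, the student reproduces the teacher’s behavior without ever seeing real labels, which is why distillation lets a smaller or newer model acquire capabilities cheaply from a stronger one.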