Back in 2024, three authors, Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, filed a lawsuit in California against Anthropic, a major AI company, for using pirated copies of their books to train its Large Language Models (LLMs).
In the landmark Bartz v. Anthropic lawsuit, the judge observed that Anthropic had downloaded 7 million digitized books to train its models, including Claude, despite knowing they were pirated.
Anthropic decided to settle for $1.5 billion even before the case went to trial, realizing that a loss would open a Pandora's box and could financially cripple the organization.
Cut to February 23, 2026: Anthropic has accused three China-based AI companies, DeepSeek, Moonshot and MiniMax, of illicitly extracting the capabilities of its Claude models through "distillation" to train their own models at a fraction of the cost.
Distillation is a process in which a less capable AI model is trained on the outputs of a stronger one, and it is standard practice in the AI industry. However, Anthropic alleges that the three China-based companies used "fraudulent" means to harvest outputs from its Claude model, creating over 24,000 accounts and generating over 16 million exchanges to train their own AI models.
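In classic knowledge distillation, the student model is trained to match the teacher's full, "softened" output distribution rather than just its final answer. A minimal sketch of the standard temperature-scaled KL-divergence loss follows; the function names and logit values are illustrative, not drawn from any company's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities, softened by a temperature.

    Higher temperatures flatten the distribution, exposing more of the
    teacher's 'dark knowledge' about less likely answers."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the student to mimic the teacher's entire
    output distribution, which is why large volumes of teacher outputs
    (exchanges) are valuable training data."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for a single prediction:
teacher = [4.0, 1.5, 0.2]
student = [2.5, 2.0, 1.0]
loss = distillation_loss(teacher, student)
```

A perfectly matched student drives this loss to zero; in practice it is combined with a standard cross-entropy term on ground-truth labels.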
The irony was not lost on the Twitter (now X) community, which lambasted Anthropic for accusing others of exactly the charges the company itself currently faces in multiple lawsuits.
In its defense, Anthropic has raised concerns over the security challenges posed by open-sourcing distilled models.
“Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance. If distilled models are open-sourced, this risk multiplies as these capabilities spread freely beyond any single government’s control,” said Anthropic in a statement.
Claude’s Capabilities: Whose Work was it and Who Stole it?
Artists all over the world have been staging protests against the unethical use of AI to copy their original works, and AI companies currently face several lawsuits in the US and other nations.
While big names such as Hollywood actor Matthew McConaughey have already trademarked their voice, mannerisms and likeness against misuse by AI, it gets trickier for millions of mid-level artists, struggling authors and bootstrapped journalists.
Frontier AI models are routinely trained on vast quantities of data scraped from the internet, and creators have no way to verify whether their work was "lifted" ethically, let alone whether they are owed any compensation.
In fact, what made Bartz v. Anthropic particularly interesting was that none of the three authors was a big name in the industry. The court's subsequent findings made Anthropic realize the legal quagmire it was entering.
“Fair Use”: Training Data Remains a Grey Area in AI Industry
The entire AI industry has been built on the assumption that scraping content from the internet, including copyrighted material, to train AI models falls under "fair use".
That assumption has been challenged by lawsuits filed by individuals, media organizations and collectives in the US, the UK, Europe and India, who argue that fair use does not extend to taking someone's labor without proper compensation.
Legally, courts have observed that training AI models on data is not in itself illegal, on the logic that AI does not regurgitate exact copies but learns patterns and produces original output.
However, it is the means of accessing that data that falls into a grey area. Courts have observed that AI companies cannot simply use pirated content without compensation.
The Anthropic vs DeepSeek, Moonshot and MiniMax Row: A Glimpse of the Coming AI Age
The accusations Anthropic has made against DeepSeek, Moonshot and MiniMax are only the beginning of a global race between countries for AI dominance.
While the US and China race against each other, middle powers like India, the UK and Europe are also looking to develop sovereign AI models that cater to their indigenous needs.
When DeepSeek launched in January 2025, it sent massive tremors through the AI industry: a Chinese startup had built a low-cost GenAI model as an alternative to OpenAI's ChatGPT. OpenAI has made similar accusations against DeepSeek in the past, claiming its Chinese counterpart illicitly used OpenAI's data to train its model.
The Anthropic row is just a glimpse of what the future holds in that race.
Conclusion: Have the Chickens Come Home to Roost?
Anthropic has carefully avoided the word "illegal" while accusing the China-based companies of training their models on Claude. Anthropic's leadership realizes that going down that path would only invite more trouble in the form of additional lawsuits.
What Anthropic has clearly said, however, is that distillation itself isn't a crime; it is the manner in which its model was accessed that is wrong.
What DeepSeek, Moonshot and MiniMax have allegedly done falls under a larger morality debate. And while we address it, let's not forget the millions of creators on the internet who still have no idea who scraped their work or whether they will ever be compensated.






