Anthropic accuses Chinese labs of trying to illicitly take Claude’s capabilities

It poses a national security threat, the AI startup said, such as by possibly enabling offensive cyber operations.
This photograph shows a figurine in front of the logo of the US artificial intelligence safety and research company Anthropic during a photo session in Paris on Feb. 13, 2026. (Photo by JOEL SAGET / AFP)

Anthropic on Monday accused three Chinese artificial intelligence laboratories of stealthily trying to siphon Claude’s capabilities for their own models, potentially in a way that could fuel offensive cyber operations.

The U.S. AI startup said the three labs, DeepSeek, Moonshot and MiniMax, ran “industrial-scale campaigns” using a tactic known as “distillation.” The tactic involves sending bulk requests to Anthropic’s Claude model, 16 million in this case, and using the responses to improve their own models. Distillation can be a legitimate training practice, the company said in a blog post, but not when used as a shortcut to take capabilities from competitors.
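In broad strokes, distillation means querying a capable “teacher” model in bulk and training a “student” model to reproduce its answers. The sketch below is purely illustrative: the teacher function, prompts and lookup-table “training” are invented stand-ins, not anything from Anthropic’s report; a real campaign would query a commercial model’s API millions of times and fine-tune a neural network on the collected pairs.

```python
def teacher(prompt: str) -> str:
    # Stand-in for a capable commercial model's response (invented data).
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return answers.get(prompt, "unknown")

def collect_distillation_data(prompts):
    # Step 1: send bulk prompts to the teacher and record its outputs.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Step 2: fit the student to the teacher's input/output pairs.
    # Here "training" is a trivial lookup table; a real lab would
    # fine-tune its own model on millions of such pairs.
    return dict(pairs)

pairs = collect_distillation_data(["capital of France?", "2 + 2?"])
student = train_student(pairs)
print(student["capital of France?"])  # the student mimics the teacher
```

The point of the sketch is the data flow: the student’s capability comes entirely from the teacher’s responses, which is why Anthropic objects when the teacher is a competitor’s model accessed against its terms of service.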

“Illicitly distilled models lack necessary safeguards, creating significant national security risks,” Anthropic argued. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems — enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance.”

It’s not the first time Anthropic has warned about Chinese threats stemming from use of Claude. The company paired its revelations about the distillation campaigns with a renewed call for stronger export controls.

OpenAI has also accused DeepSeek of using distillation techniques. CyberScoop could not immediately reach the three Chinese labs for comment on Anthropic’s claims.

“The three distillation campaigns … followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection,” Anthropic said. “The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use.”

In all, the labs used 24,000 fraudulent accounts, Anthropic said. DeepSeek was responsible for 150,000 of the exchanges, compared to 3.4 million from Moonshot and 13 million from MiniMax, according to the startup. The activity violated terms of service and regional access restrictions, it said.

What makes the tactic illegitimate is that it essentially steals Anthropic’s intellectual property, computing power and effort, said Gal Elbaz, co-founder and chief technology officer of Oligo Security, which bills itself as an AI runtime security company.

“The scary part is, you can take all of the power and unleash it, because you don’t have anyone that actually enforces those guardrails on the other side,” Elbaz told CyberScoop, referring to Anthropic’s warning that the labs could fuel cyberattacks.

AI companies themselves have faced claims that they are stealing data and IP from others to power their models.
