Anthropic Accuses Chinese AI Firms of Stealing Claude’s Capabilities

According to the Economic Desk of Webangah News Agency, Anthropic has accused three Chinese artificial intelligence companies of creating more than 24,000 fake user accounts to improve their own AI models by leveraging Anthropic’s “Claude” AI. The implicated companies are DeepSeek, Moonshot AI, and MiniMax.
Through these fraudulent accounts, the companies allegedly engaged in more than 16 million interactions with Claude, employing a technique known as “knowledge distillation.” Anthropic stated that the labs specifically targeted Claude’s most distinctive capabilities, including reasoning, tool use, and coding.
The allegations emerge amid an ongoing debate over how strictly to apply export controls on advanced AI chips, which are intended to curb China’s AI development.
Knowledge distillation is a prevalent method used by AI labs to create smaller, more cost-effective versions of their models. However, competitors can exploit this technique to replicate the work of other labs. Earlier this month, OpenAI reportedly sent a memo to U.S. House lawmakers accusing DeepSeek of using distillation to imitate its products.
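In its classical form, distillation trains a smaller “student” model to mimic the output distribution of a larger “teacher.” The sketch below is a minimal PyTorch illustration of that standard technique as described in the research literature; the function name, parameters, and defaults are illustrative assumptions, not a reconstruction of the alleged attacks, which reportedly worked by mass-querying Claude’s text outputs rather than by accessing its internals.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        """Classic knowledge-distillation objective (Hinton et al., 2015)."""
        # Soften both output distributions with a temperature, then push the
        # student toward the teacher with KL divergence; the T^2 factor keeps
        # gradient magnitudes comparable across temperatures.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        kd_term = F.kl_div(log_student, soft_teacher,
                           reduction="batchmean") * temperature ** 2
        # Ordinary cross-entropy against ground-truth labels keeps the
        # student anchored to the real task.
        ce_term = F.cross_entropy(student_logits, labels)
        return alpha * kd_term + (1.0 - alpha) * ce_term

API-based distillation of the kind Anthropic alleges skips the teacher’s logits entirely: the attacker collects the model’s responses at scale and fine-tunes a student on them as ordinary supervised data, which is why it can be carried out through nothing more sophisticated than fake user accounts.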
DeepSeek first drew widespread attention roughly a year ago with the release of its open-source reasoning model “R1,” which performed comparably to models from leading U.S. labs at a fraction of the cost. DeepSeek is expected to launch its latest model, “DeepSeek V4,” soon; it is anticipated to outperform Anthropic’s Claude and OpenAI’s ChatGPT on coding tasks.
The scale of each alleged attack varied. Anthropic tracked more than 150,000 DeepSeek interactions, which appeared to focus on improving core reasoning and alignment, particularly safe alternatives to outright censorship for sensitive political queries.
Moonshot AI recorded more than 3.4 million interactions, targeting agentic reasoning, tool use, coding, data analysis, computer-use agent development, and computational insights. Last month, Moonshot AI released its new open-source model, “Kimi K2.5,” along with a coding agent.
MiniMax’s 13 million interactions focused on agentic coding, tool use, and alignment. Anthropic noted that the company directed nearly half of its traffic at the capabilities of Claude’s latest model.
Anthropic said it is committed to investing in defensive measures that make distillation attacks harder to execute and easier to detect, but it also called for a coordinated response spanning the AI industry, cloud service providers, and policymakers.
The alleged distillation attacks come at a time when the export of U.S. chips to China remains contentious. Last month, the Trump administration formally permitted U.S. companies such as Nvidia to export advanced AI chips, including the H200, to China. Critics argue that relaxing export controls will boost China’s AI computing capacity during a critical phase of the global competition for AI dominance.
Anthropic asserted that extraction at the scale undertaken by DeepSeek, MiniMax, and Moonshot requires access to advanced chips. The company stated in its blog post: “Distillation attacks reinforce the logic for export controls. Limited chip access constrains both direct model training and the scale of illicit distillation.”
Dmitri Alperovitch, Chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, expressed no surprise at these alleged attacks.
Alperovitch commented, “It has been known for some time that part of the reason for the rapid advancement of Chinese AI models has been theft through distillation of pioneering American models. Now we know this definitively. This should give us more compelling reasons to refrain from selling any AI chips to any of these (companies).”
Anthropic officials also warned that distillation not only threatens U.S. AI dominance but also poses national security risks. In its blog post, Anthropic stated: “Anthropic and other American companies build systems that prevent state and non-state actors from using AI for tasks such as developing bioweapons or conducting malicious cyber activities. Models built through illicit distillation are unlikely to retain these safeguards, meaning dangerous capabilities could proliferate with the removal of many protections.”