Anonymous ID: eec4a4 Feb. 24, 2026, 1:19 p.m. No.24302214

https://x.com/business/status/2022065414662992169

 

OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot

 

https://www.bloomberg.com/news/articles/2026-02-12/openai-accuses-deepseek-of-distilling-us-models-to-gain-an-edge

https://archive.is/Iday4

 

TL;DR: Thief complains that the things he stole were stolen from him

 

OpenAI Accuses DeepSeek of Distilling US Models to Gain an Edge

 

OpenAI has warned US lawmakers that its Chinese rival DeepSeek is using unfair and increasingly sophisticated methods to extract results from leading US AI models to train the next generation of its breakthrough R1 chatbot, according to a memo reviewed by Bloomberg News.

In the memo, sent Thursday to the House Select Committee on China, OpenAI said that DeepSeek had used so-called distillation techniques as part of “ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs.” The company said it had detected “new, obfuscated methods” designed to evade OpenAI’s defenses against misuse of its models’ output.

OpenAI began privately raising concerns about the practice shortly after the R1 model’s release last year, when it opened a probe with partner Microsoft Corp. into whether DeepSeek had obtained its data in an unauthorized manner, Bloomberg previously reported. In distillation, one AI model relies on the output of another for training purposes to develop similar capabilities.
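The article only describes distillation in one sentence. As a rough illustration of the general technique (not DeepSeek's or OpenAI's actual pipeline — the values and function names below are hypothetical), here is a minimal plain-Python sketch of the classic soft-target distillation loss, where a student model is trained to match a teacher model's temperature-softened output distribution:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened distribution against the
    # teacher's softened distribution -- the "soft targets" the student
    # learns to mimic instead of (or alongside) hard labels.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# Toy 3-class logits (hypothetical numbers, for illustration only):
teacher = [4.0, 1.0, 0.5]
student = [3.5, 1.2, 0.6]
loss = distillation_loss(teacher, student)  # lower when student matches teacher
```

In practice this loss would be minimized by gradient descent over many teacher outputs; the point of contention in the article is that those teacher outputs were allegedly harvested from US models' APIs.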

Distillation, largely tied to China and occasionally Russia, has persisted and become more sophisticated despite attempts to crack down on users who violate OpenAI’s terms of service, the company said in its memo, citing activity it has observed on its platform.

 

Since DeepSeek and many other Chinese models don’t carry a monthly subscription cost, the prevalence of distillation could pose a business threat to American companies such as OpenAI and Anthropic PBC that have invested billions of dollars in AI infrastructure and charge a fee for their premium services. That imbalance risks eroding the US advantage over China in artificial intelligence.

 

OpenAI also highlighted other national security risks raised by DeepSeek’s gains, including that its chatbot had censored results about topics considered controversial by the Chinese government, such as Taiwan and Tiananmen Square. When capabilities are copied through distillation, OpenAI said, safeguards often fall by the wayside, enabling more widespread misuse of AI models in high-risk areas like biology or chemistry.

Representative John Moolenaar, the Republican chair of the House China committee, said in a statement Thursday that “this is part of the CCP’s playbook: steal, copy, and kill,” referring to the Chinese Communist Party. “Chinese companies will continue to distill and exploit American AI models to their advantage, just like when they ripped off OpenAI to build DeepSeek.”

OpenAI declined to comment on the memo. Spokespeople for DeepSeek didn’t immediately respond to a request for comment outside of regular business hours in Asia.


OpenAI’s memo to the House China committee suggests that its efforts to block distillation have failed to eliminate the problem. The company said an internal review suggests that accounts associated with DeepSeek employees sought to circumvent existing guardrails by accessing models through third-party routers to mask their source.

DeepSeek employees have also developed code to access US AI models and obtain outputs in “programmatic ways,” OpenAI said. The memo also pointed to networks of “unauthorized resellers of OpenAI’s services,” likewise designed to evade the company’s controls.