OpenAI, Anthropic, and Google Unite to Combat AI Model Copying From China
The three biggest Western AI labs are sharing information through the Frontier Model Forum to prevent Chinese competitors from extracting their models' capabilities.
Sarah Mueller
OpenAI, Anthropic, and Google have begun sharing information to clamp down on Chinese competitors extracting results from their AI models, according to Bloomberg. The collaboration runs through the Frontier Model Forum, the industry group the three companies already participate in.
What's Happening
The concern is model distillation — using outputs from frontier Western models to train smaller, cheaper Chinese alternatives. DeepSeek demonstrated this approach most visibly, building competitive models at a fraction of the cost by training on outputs from more capable systems.
The three companies are sharing detection techniques and coordinating on policy responses. The specifics of what's being shared remain unclear; none of the three has issued a detailed public statement about the initiative.
The Broader Context
This cooperation comes during a period of intense AI competition between the US and China. Export controls on advanced chips have pushed Chinese AI companies to innovate on efficiency, producing models like DeepSeek that achieve strong benchmark performance with fewer resources.
Meanwhile, Broadcom has agreed to expanded chip deals with both Google and Anthropic, giving Anthropic access to roughly 3.5 gigawatts of computing capacity via Google's AI processors, CNBC reports. The compute race and the model-protection race are running in parallel.
Why This Matters
Three companies that compete fiercely on every benchmark, every customer, and every hire have decided that the shared threat of model extraction outweighs their competitive instincts. That alone signals how seriously they take the problem.
The practical impact depends on execution. Detecting distillation at scale is technically difficult: it is hard to distinguish a user legitimately querying an API from one systematically harvesting outputs to use as training data. Rate limiting and usage monitoring help, but determined actors route traffic through proxies and distribute queries across many accounts.
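To make the detection problem concrete, here is a toy sketch of the kind of per-account usage heuristic such monitoring might involve. Everything here is hypothetical: the function name, the signals (raw volume plus prompt diversity), and the thresholds are illustrative, not any lab's actual countermeasure.

```python
from collections import defaultdict

def flag_suspicious_accounts(requests, volume_threshold=1000, diversity_threshold=0.9):
    """Flag accounts whose usage pattern resembles systematic extraction.

    requests: iterable of (account_id, prompt) pairs.

    An account is flagged when it both exceeds a raw volume threshold and
    shows unusually high prompt diversity (few repeated prompts) -- a rough
    proxy for sweeping a model's output space to build a training set,
    as opposed to ordinary product traffic, which tends to repeat prompts.
    """
    volume = defaultdict(int)
    unique_prompts = defaultdict(set)
    for account, prompt in requests:
        volume[account] += 1
        unique_prompts[account].add(prompt)

    flagged = []
    for account, count in volume.items():
        diversity = len(unique_prompts[account]) / count
        if count >= volume_threshold and diversity >= diversity_threshold:
            flagged.append(account)
    return flagged
```

The sketch also shows why the problem is hard: any per-account threshold like this is defeated by spreading the same queries across many accounts or proxies, which is exactly the evasion pattern described above.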
Our Take
This is less about technology and more about setting a precedent. If Western AI labs establish that model extraction is a coordinated industry concern — not just an individual company problem — it strengthens the argument for policy intervention. The Frontier Model Forum gives them a formal channel. Whether it leads to effective technical countermeasures or just strongly worded letters remains to be seen.