Elon Musk said in federal court Thursday that xAI had “partly” used distillation techniques on OpenAI models to help train Grok.
Testifying in California, Musk was asked whether xAI had used distillation on OpenAI models. He said such practices were common across the artificial intelligence industry. Asked if that meant “yes,” he replied, “Partly.”
Distillation refers to using outputs from publicly accessible chatbots or application programming interfaces, or APIs, to help train new AI models. The practice has drawn scrutiny from OpenAI and Anthropic, particularly over concerns that Chinese firms have used it to build lower-cost open-weight models.
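The mechanics described above can be sketched in a few lines: collect a teacher model's responses to a set of prompts, then use those prompt–response pairs as supervised training data for a smaller student model. This is a minimal illustration, not any company's actual pipeline; `query_teacher` is a hypothetical stand-in for a call to a hosted chatbot API.

```python
def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for an API call to a large "teacher" model.
    # In practice this would hit a public chatbot or API endpoint.
    canned = {
        "What is 2+2?": "4",
        "What is the capital of France?": "Paris",
    }
    return canned.get(prompt, "I don't know.")

def build_distillation_set(prompts: list[str]) -> list[tuple[str, str]]:
    # Pair each prompt with the teacher's output; these pairs become
    # fine-tuning data for a smaller "student" model.
    return [(p, query_teacher(p)) for p in prompts]

dataset = build_distillation_set(
    ["What is 2+2?", "What is the capital of France?"]
)
```

The high-volume, systematic nature of this querying loop is what detection efforts described later in this article aim to flag.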
Musk’s testimony offered a public acknowledgment of a practice that could reduce the advantage companies gain from large investments in computing infrastructure. It remains unclear whether distillation is illegal, though it may violate companies’ terms of service.
OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information on how to counter distillation efforts from China. Those efforts typically involve systematic queries designed to better understand how models respond. AI companies have worked to detect and block suspicious high-volume queries.
Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon rank behind only Google. He said Anthropic was the leading AI company, followed by OpenAI, Google, and Chinese open-source models. He described xAI as a much smaller company with a few hundred employees.