Elon Musk confirms xAI used OpenAI’s models to train Grok
He said it was “partly” true that the company had used model distillation to improve xAI’s models.
In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI’s models to improve its own.
The matter in question is model distillation, a common industry practice in which a larger AI model acts as a "teacher" of sorts, passing knowledge on to a smaller "student" model. Although companies often use it legitimately, training one of their own AI models with another, smaller AI labs sometimes use the practice to make their models mimic the performance of a larger competitor's model.
Asked on the stand whether he knew what model distillation was, Musk described it as using one AI model to train another. When asked whether xAI has distilled OpenAI's technology, Musk seemed to avoid the question, saying that "generally all the AI companies" do such a thing. And when asked if that was a yes, he said, "Partly."
When pressed, Musk said, “It is standard practice to use other AIs to validate your AI.”
Model distillation has been on the rise in recent years and has stirred growing controversy among AI labs, since the line between what is legal and what violates a company's terms or policies often falls within a gray area. Companies like OpenAI and Anthropic have accused Chinese firms of distilling their models, with OpenAI publicly stating its concerns about DeepSeek, and Anthropic specifically naming DeepSeek, Moonshot, and MiniMax. Google, too, has taken steps to prevent what it calls "distillation attacks," which it describes as "a method of intellectual property theft that violates Google's terms of service."
In Anthropic’s own blog post on the matter, the company wrote, “Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
Key takeaways
- Model distillation is a common yet controversial practice that may lead to legal and ethical disputes in the AI sector.
- Brazilian startups may feel pressured to adopt distillation practices to remain competitive, which could affect public trust in AI.
- The reaction of major tech companies to these practices may shape the regulatory environment and acceptance of AI in Brazil.
Editorial analysis
Elon Musk's confirmation that xAI used OpenAI models to train Grok raises significant questions about the ethics and legality of model distillation in the AI sector. The practice, while common, walks a fine line between innovation and the misappropriation of intellectual property. For the Brazilian tech sector, which is still maturing in AI, this discussion is crucial because it may influence how local startups approach the creation and training of their models. Brazil has a growing AI ecosystem, and transparency in training practices is vital for building trust among developers and end users alike.
Moreover, Musk's revelation may intensify competition among AI companies, especially those operating in emerging markets like Brazil. Brazilian startups may feel pressured to adopt similar practices to remain competitive, potentially leading to an increase in the use of models from other companies, which could result in legal and ethical disputes. How these companies position themselves regarding model distillation could shape public perception and acceptance of AI in the country.
Finally, it is important to observe how major tech companies, such as Google and OpenAI, respond to these practices. The possibility of legal actions against companies utilizing model distillation could create an environment of uncertainty for Brazilian startups, which may hesitate to invest in AI if they perceive a heightened risk of litigation. The regulatory landscape surrounding AI in Brazil is still forming, and incidents like this could accelerate the need for clear guidelines defining what is acceptable in terms of AI model training.
Original source: The Verge
About this article
This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.