Elon Musk appeared more petty than prepared
In his opening testimony against Sam Altman, Musk was unfocused and uncharming.
Today the first witness was sworn in in Musk v. Altman: Elon Musk. I was surprised by how flat he seemed.
This is not the first time I’ve seen Musk in court. During his defamation suit, he turned on the charm and the jury responded by finding him not guilty. Today he looked adrift and unprepared. The only times he showed real animation were when he was bragging about how much he’d done for OpenAI.
The direct examination is a way of telling a story through questions; it’s important to make the narrative clear. For a suit that accuses Sam Altman of straying from OpenAI’s mission, Musk spent a weird amount of time talking about himself, recounting his biography, and hyping up the various ventures he’s undertaken that have nothing to do with OpenAI.
For instance, he told jurors that he worked between “80 to 100 hours a week,” which was how he got so much done. It is unclear to me whether his prolific posting habits count as part of the workweek. I hope the defense asks.
We did eventually get around to OpenAI, where Musk portrayed himself as the driving force. He had been worried about AI since childhood, he said, and eventually came to feel that someone needed to keep Google from developing it unchecked. He testified that he became involved in AI safety after a conversation with Google’s own Larry Page, in which he asked, “What if AI wipes out all the humans?” Page essentially shrugged; as far as he was concerned, as long as the AI didn’t also go extinct, things were all right. “I said, ‘That’s insane,’ and he called me a species-ist for being pro-human.” So OpenAI, for Musk, was born specifically to keep Google from having too much power in AI. Petty! Musk also said that after he recruited Ilya Sutskever, then a research scientist at Google, to OpenAI, “Larry Page refused to speak to me ever again.”
What did Musk do at OpenAI? “I came up with the idea, the name, recruited the key people, taught them everything I know, provided all the initial funding. Besides that, nothing.” He paused for laughter, and one or two people obligingly chuckled. But most of the courtroom was silent. I thought he sounded petulant. “I could have started it as a for-profit and I chose not to,” Musk said.
It’s hard to preempt the argument you are expecting without making it yourself
I do wonder how much of this the jury is following. We went very quickly through a lot of ideas, including “artificial general intelligence,” an imaginary thing that many AI researchers are nonetheless afraid of. Musk defined this as being when a computer “becomes as smart as any human, arguably smarter than any human.” (Large language models are not the same as intelligence, and AGI has been defined downward for quite some time. But whatever! This case is not about that!)
At another point, Musk was asked to explain who former OpenAI board member Shivon Zilis was. “Shivon was the, um, my chief of staff and, uh, you know,” Musk said. One person in the gallery — presumably familiar with the fact that Zilis is the mother of a few of Musk’s kids — burst out in loud laughter. But the jury looked puzzled.
During discussions of how best to get OpenAI the vast amounts of funding it would need for compute, a for-profit arm of OpenAI was indeed discussed with Musk. The strategy here, I think, was to make clear that what Musk intended was very different from the for-profit that came to pass. (That’s true! He did not get 55 percent equity in it, as one possible cap table suggested he should.) This all seemed pretty mushy, and we got bogged down in a discussion of what, in Musk’s opinion, a reasonable equity split between founders and funders would be; it’s hard to preempt the argument you are expecting without making it yourself.
This is also kind of a distraction from the core point of the trial: Did OpenAI betray its mission statement and fool Musk into making a charitable donation? “I agreed to a for-profit model, but not that for-profit model” isn’t a strong argument.
We’ll be back with more Musk testimony and presumably his cross-examination. If there’s a clearer story from the defense, this trial is effectively all over but the shouting. I’ve seen a strong performance from Musk on the stand before. Today he just didn’t seem dialed in. Maybe he’s grumpy about this trial because he knows he’s wasting his own time.