
Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Published by the AIDaily editorial team
6 min read
Author at the original source: Tim Fernholz


Elon Musk’s legal effort to dismantle OpenAI may hinge on how its for-profit subsidiary enhances or detracts from the frontier lab’s founding mission of ensuring that humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland, California, heard a former employee and board member say the company’s efforts to push AI products into the marketplace compromised its commitment to AI safety.

Rosie Campbell joined the company’s AGI readiness team in 2021 and left OpenAI in 2024 after the team was disbanded. Another safety-focused team, the Superalignment team, was shut down in the same period.

“When I joined, it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.”

Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab’s goal of building AGI but said creating a super-intelligent computer model without the right safety measures in place wouldn’t fit with the mission of the organization she originally joined.

Campbell pointed to an incident where Microsoft deployed a version of the company’s GPT-4 model in India through its Bing search engine before the model had been evaluated by the company’s Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”

OpenAI’s attorneys also had Campbell admit that, in her “speculative opinion,” OpenAI’s safety approach is superior to that of xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year.


OpenAI releases evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of preparedness, was hired from Anthropic in February. Altman said the hire would let him “sleep better tonight.”

The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. The ouster came after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function.

McCauley also discussed a widely reported pattern of Altman misleading the board. Notably, Altman lied to another board member about McCauley’s intention to remove Helen Toner, a third board member who published a white paper that included some implied criticism of OpenAI’s safety policy. Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his lack of disclosure of potential conflicts of interest.

“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”

However, the decision to boot Altman came at the same time as a tender offer to the company’s employees. McCauley said that when OpenAI’s staff began siding with Altman and Microsoft worked to restore the status quo, the board ultimately reversed course, and the members who had opposed Altman stepped down.

The apparent failure of the non-profit board to influence the for-profit organization goes directly to Musk’s case that the transformation of OpenAI from research organization into one of the largest private companies in the world broke the implicit agreement of the organization’s founders.

David Schizer, a former dean of Columbia Law School who is being paid by Musk’s team to act as an expert witness, echoed McCauley’s concerns.

“OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously: if something needs to be subject to safety review, it needs to happen. What matters is the process issue.”

With AI already deeply embedded in for-profit companies, the issue goes far beyond a single lab. McCauley said the failures of internal governance at OpenAI should be a reason to embrace stronger government regulation of advanced AI — “[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”


Key points

  • Musk’s legal dispute highlights the importance of safety and ethics in AI development, especially for Brazilian companies.
  • Changes in corporate governance can affect the original mission of technology companies, underscoring the need for solid structures.
  • The comparison between OpenAI’s and xAI’s safety approaches may signal growing divides in AI development philosophies.

Editorial analysis

The legal dispute between Elon Musk and OpenAI highlights crucial questions about safety and ethics in the development of artificial intelligence, questions that are particularly relevant to Brazil’s technology sector. As Brazilian companies begin to explore and deploy AI solutions, the pressure for safe and transparent practices becomes ever more apparent. The experience of Rosie Campbell, who testified about OpenAI’s shift in focus from research to products, can serve as a warning to startups and established companies in Brazil, which must ensure that their innovations do not compromise safety and ethics in pursuit of quick financial results.

The situation also raises questions about corporate governance at technology companies. The fact that OpenAI faced an internal crisis resulting in the temporary dismissal of its CEO suggests that pressure for results can lead to decisions that are not aligned with a company’s original mission. For the Brazilian market, this underscores the importance of a solid governance structure that prioritizes safety and social responsibility, especially as AI becomes increasingly integrated across industries.

Finally, the comparison between OpenAI’s safety approach and that of Musk’s xAI may point to a growing divide between different philosophies of AI development. As Brazil moves forward with regulating and implementing AI, it will be essential to watch how these dynamics unfold and which lessons can be drawn. The future of AI in Brazil may be shaped by these discussions, and a commitment to safety and ethics will be fundamental to the sector’s long-term success.

What this coverage delivers

  • Clear source attribution with a link to the original publication.
  • Editorial framing of relevance, impact, and what comes next.
  • Review for readability, context, and duplication before publication.

Original source:

TechCrunch AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of developments in artificial intelligence. The content is based on the cited original source, enriched with context and editorial analysis. Automated tools may assist with translation and initial structuring, but the decision to publish, the factual review, and the contextual framing remain an editorial responsibility.

Learn more about our editorial process