Startups de IA

What happens when AI starts building itself?

Publicado porRedacao AIDaily
7 min de leitura
Autor na fonte original: Russell Brandom

Richard Socher's new $650 million startup wants to build an AI that can research and improve itself indefinitely — and he insists it will actually ship products.

Compartilhar:

Richard Socher has been a major figure in AI for some time, best known for founding the early chatbot startup You.com and, before that, his work on ImageNet. Now he’s joining the current generation of research-focused AI startups with Recursive Superintelligence, a San Francisco-based startup that came out of stealth on Wednesday with $650 million in funding.

Socher is joined in the new venture by a cohort of prominent AI researchers, including Peter Norvig and Cresta co-founder Tim Shi. Together, they’re working to create a recursively self-improving AI model, one that can autonomously identify its own weaknesses and redesign itself to fix them, without human involvement — a long-held holy grail of contemporary AI research.

I spoke with him on Zoom after the launch, digging into Recursive’s unique technical approach and why he doesn’t think of this new project as a neolab, the informal term for a new generation of AI startups that prioritize research over building products.

This interview has been edited for length and clarity.

We hear a lot about recursion these days! It feels like a very common goal across different labs. What do you see as your unique approach?

Our unique approach is to use open-endedness to get to recursive self-improvement, which no one has yet achieved. It’s an elusive goal for a lot of people. A lot of people already assume it happens when you just do auto-research. You know, you can take AI and ask it to make some other thing better, which could be a machine learning system, or just a letter that you write, or, you know, whatever it might be, right? But that’s not recursive self-improvement. That’s just improvement.

Our main focus is to build truly recursive, self-improving superintelligence at scale, which means that the entire process of ideation, implementation, and validation of research ideas would be automatic.

First [it would automate] AI research ideas, eventually any kind of research ideas, even eventually in the physical domains. But it's particularly powerful when it's AI working on itself, and it's developing a new kind of sense of self-awareness of its own shortcomings.

You used the term open-ended — does that have a specific technical meaning?

It does. In fact, Tim Rocktäschel, one of our co-founders, led the open-endedness and self-improvement teams at Google DeepMind and particularly worked on the world model Genie 3, which is a great example of open-endedness. You can tell it any concept, any world, any agent, and it just creates it, and it's interactive.

In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations. It's just a process that can evolve for billions of years, and interesting stuff keeps happening, right? That's how we developed eyes in our [heads].

Another example is rainbow teaming, from another paper from Tim . Have you heard of red teaming?

So, red teaming also has to be done in an LLM context. Basically you try to get the LLM to tell you how to build a bomb, and you want to make sure that it doesn’t do it.

Now, humans can sit there for a long time and come up with interesting examples of what the AI shouldn't say. But what if you tested this first AI with a second AI, and that second AI now has the task of making the first AI [try to] say all the possible bad things. And then they can go back and forth for millions of iterations.

You can actually allow two AIs to co-evolve. One keeps attacking the other, and then comes up with not just one angle but many different angles, and hence the rainbow analogy. And then you can inoculate the first AI, and you become safer and safer. This was an idea from Tim Rocktaeschel, and it’s now used in all the major labs.

How do you know when it’s done? I suppose it’s never done.

Some of these things will never be done. You can always get more intelligent. You can always get better at programming and math and so on. There are some bounds on intelligence; I’m actually trying to formalize those right now, but they’re astronomical. We’re very far away from those limits.

As a neolab, it feels like you’re supposed to be doing something that the major labs aren’t doing. So part of the implication here is that you don’t think the major labs are going to reach RSI [recursive self-improvement] by doing what they’re doing. Is that fair to say?

I can’t really comment on what they’re doing, but I do think we’re approaching it differently. We really embrace the concept of open-endedness, and our team is entirely focused on that vision. And the team has been researching this and doing papers in this space for the last decade. And the team has a track record of really pushing the field forward significantly and shipping real products. You know, Tim Shi built Cresta into a unicorn. Josh Tobin was one of the first people at OpenAI and eventually led their Codex teams and the deep research teams.

I actually sometimes struggle a little bit with this neolab category. I feel like we're not just a lab. I want us to be become a really viable company, to really have amazing products that people love to use, that have positive impact on humanity.

So when do you plan to ship your first product?

I’ve thought about that a lot. The team has made so much progress, we may actually pull up the timelines from what we had initially assumed. But yes, there will be products, and you’ll have to wait quarters, not years.

One of the ideas around recursive self-improvement is that, once we have this sort of system, compute becomes the only important resource. The faster you run the system, the faster it will improve, and there’s no outside human activity that will really make a difference. So the race just becomes, how much processing power can we throw at this? Do you think that’s the world we’re headed toward?

Compute is not to be underestimated. I think in the future, a really important question will be: how much compute does humanity want to spend to solve which problems? Here’s this cancer and here’s that virus — which one do you want to solve first? How much compute do you want to give it? It becomes a matter of resource allocation eventually. It’s going to be one of the biggest questions in the world.

When you purchase through links in our articles, we may earn a small commission . This doesn’t affect our editorial independence.

StrictlyVC Athens is up next. Hear unfiltered insights straight from Europe’s tech leaders and connect with the people shaping what’s ahead. Lock in your spot before it’s gone.

Musk’s xAI is running nearly 50 gas turbines unchecked at its Mississippi data center Tim De Chant

Musk’s xAI is running nearly 50 gas turbines unchecked at its Mississippi data center

Musk’s xAI is running nearly 50 gas turbines unchecked at its Mississippi data center

AI voice startup Vapi hits $500M valuation after winning Amazon Ring over 40 rivals Jagmeet Singh

AI voice startup Vapi hits $500M valuation after winning Amazon Ring over 40 rivals

AI voice startup Vapi hits $500M valuation after winning Amazon Ring over 40 rivals

Amazon launches 30-minute delivery across the US Sarah Perez

Amazon launches 30-minute delivery across the US

Amazon launches 30-minute delivery across the US

Fintech startup Parker files for bankruptcy Anthony Ha

Fintech startup Parker files for bankruptcy

Fintech startup Parker files for bankruptcy

Laid-off Oracle workers tried to negotiate better severance. Oracle said no. Julie Bort

Laid-off Oracle workers tried to negotiate better severance. Oracle said no.

Laid-off Oracle workers tried to negotiate better severance. Oracle said no.

San Francisco’s housing market has lost its mind Connie Loizos

San Francisco’s housing market has lost its mind

San Francisco’s housing market has lost its mind

US defense contractor who sold hacking tools to Russian broker ordered to pay $10M to former employers Lorenzo Franceschi-Bicchierai

US defense contractor who sold hacking tools to Russian broker ordered to pay $10M to former employers

US defense contractor who sold hacking tools to Russian broker ordered to pay $10M to former employers

Pontos-chave

  • A Recursive Superintelligence pode inspirar startups brasileiras a adotarem práticas de pesquisa autônoma.
  • Questões éticas e de governança da IA se tornam mais relevantes com o desenvolvimento de sistemas autossuficientes.
  • Parcerias entre academia e indústria podem ser aceleradas por inovações como a proposta de Socher.

Análise editorial

A iniciativa de Richard Socher com a Recursive Superintelligence representa um avanço significativo no campo da inteligência artificial, especialmente no que diz respeito à automação da pesquisa e ao desenvolvimento de sistemas autossuficientes. Para o setor de tecnologia brasileiro, essa abordagem pode inspirar startups locais a explorarem conceitos semelhantes, promovendo um ambiente de inovação que prioriza a pesquisa e o desenvolvimento de soluções autônomas. O Brasil, com sua crescente comunidade de IA, pode se beneficiar ao adotar práticas de open-endedness, que incentivam a criatividade e a autonomia em projetos de IA.

Além disso, a proposta de um modelo de IA que se autoaperfeiçoa pode ter implicações profundas para a ética e a governança da IA. À medida que esses sistemas se tornam mais autônomos, questões sobre responsabilidade, segurança e controle se tornam cada vez mais relevantes. O Brasil, que já enfrenta desafios em regulamentação e ética em tecnologia, deve estar atento a essas discussões, buscando estabelecer diretrizes que garantam um desenvolvimento responsável da IA.

Por fim, a Recursive Superintelligence pode servir como um catalisador para parcerias entre academia e indústria no Brasil. A colaboração entre pesquisadores e empresas pode acelerar a adoção de tecnologias emergentes e a criação de soluções inovadoras. O país deve observar de perto os desenvolvimentos dessa startup, pois o sucesso ou fracasso de suas iniciativas pode moldar o futuro da IA e inspirar novas direções para a pesquisa e o desenvolvimento tecnológico no Brasil.

O que esta cobertura entrega

  • Atribuicao clara de fonte com link para a publicacao original.
  • Enquadramento editorial sobre relevancia, impacto e proximos desdobramentos.
  • Revisao de legibilidade, contexto e duplicacao antes da publicacao.

Fonte original:

TechCrunch AI

Sobre este artigo

Este artigo foi curado e publicado pelo AIDaily como parte da nossa cobertura editorial sobre desenvolvimentos em inteligência artificial. O conteúdo é baseado na fonte original citada abaixo, enriquecido com contexto e análise editorial. Ferramentas automatizadas podem auxiliar tradução e estruturação inicial, mas a decisão de publicar, a revisão factual e o enquadramento de contexto seguem responsabilidade editorial.

Saiba mais sobre nosso processo editorial