
Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?

Published by Redacao AIDaily
5 min read
Author at the original source: Tim Fernholz

Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world. Are real cybersecurity concerns a cover for a bigger problem at the frontier lab?



Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase.

OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software.

But the “e-word” in the sentence above is a hint that there might be more to this release strategy than cybersecurity — or the hyping of model capabilities.

Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how they can be used in combination.

“The question I always have in my mind,” Lahav said, “is did they find something that is exploitable in a very meaningful way, whether individually or as part of a chain?”

Anthropic says Mythos can exploit vulnerabilities far more effectively than its previous model, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle’s team argues that these results show there is no single best deep learning model for cybersecurity; which model works best depends on the task at hand.

Given that Opus was already seen as a game changer for cybersecurity, there’s another reason that frontier labs may want to limit their releases to big organizations: It creates a flywheel for big enterprise contracts, while making it harder for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.

“This is marketing cover for the fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill,” David Crawshaw, a software engineer and CEO of the startup exe.dev, suggested in a social media post. “By the time you and I can use Mythos, there will be a new top-end rev that is enterprise only. That treadmill helps keep the enterprise dollars flowing (which is most of the dollars) by relegating distillation companies to second rank.”

That analysis jibes with what we’re seeing in the AI ecosystem: A race between frontier labs developing the largest, most capable models, and companies like Aisle that rely on multiple models and see open source LLMs, often from China and often allegedly developed through distillation, as a path to economic advantage.

The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs — Anthropic, Google, and OpenAI — teaming up to identify distillers and block them, according to a Bloomberg report. Distillation is a threat to the business model of frontier labs because it eliminates the advantages conferred by using huge amounts of capital to scale. Blocking distillation, then, is already a worthwhile endeavor, but the selective release approach also gives the labs a way to differentiate their enterprise offerings as that category becomes the key to profitable deployment.

Whether Mythos or any new model truly threatens the security of the internet remains to be seen, and a careful rollout of the technology is a responsible way forward.

Anthropic didn’t respond by press time to our questions about whether the decision also relates to distillation concerns, but the company may have found a clever approach to protecting the internet — and its bottom line.



Key takeaways

  • Limiting access to the Mythos model may favor large corporations at the expense of startups and smaller labs.
  • The effectiveness of AI models at discovering vulnerabilities raises questions about accountability and regulation in the technology sector.
  • Anthropic’s release strategy may be a way to protect its commercial interests by creating barriers for competitors.

Editorial analysis

Anthropic’s decision to limit the release of the Mythos model raises important questions about the dynamics of the AI market and its implications for cybersecurity. For Brazil’s technology sector, this approach may signal a trend toward prioritizing large corporations over startups and smaller labs, which often lack the resources to compete in an environment where the most advanced models are becoming increasingly inaccessible. The result could be a concentration of power in the hands of a few companies, hindering innovation and the diversity of solutions in the market.

Moreover, the claim that Mythos can discover vulnerabilities more effectively than its predecessors suggests that companies should pay attention not only to the capabilities of AI models but also to their ethical and security implications. The possibility that these models could be used to exploit flaws in critical systems raises a debate about the responsibility of the companies developing such technologies and the need for stricter regulation to ensure they are not used maliciously.

Finally, the strategy of limiting access to advanced technology can be seen as a way to protect not only the security of the internet but also Anthropic’s own commercial interests. As companies pursue contracts with large organizations, it is essential to watch how this dynamic affects competition in the AI sector and what measures are taken to ensure that innovation is not stifled in favor of corporate deals. Brazil’s own AI startups and initiatives should prepare to navigate this evolving environment, looking for ways to stand out and compete effectively.

What this coverage delivers

  • Clear source attribution with a link to the original publication.
  • Editorial framing of relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

TechCrunch AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of developments in artificial intelligence. The content is based on the original source cited above, enriched with context and editorial analysis. Automated tools may assist with translation and initial structuring, but the decision to publish, the factual review, and the contextual framing remain an editorial responsibility.

Learn more about our editorial process