Is Anthropic limiting the release of Mythos to protect the internet — or Anthropic?

Published by AIDaily Editorial Team
Original source author: Tim Fernholz

Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world. Are real cybersecurity concerns a cover for a bigger problem at the frontier lab?

Anthropic said this week that it limited the release of its newest model, dubbed Mythos, because it is too capable of finding security exploits in software relied upon by users around the world.

Instead of unleashing Mythos on the public, the frontier lab will share it with a group of large companies and organizations that operate critical online infrastructure, from Amazon Web Services to JPMorgan Chase.

OpenAI is reportedly considering a similar plan for its next cybersecurity tool. The ostensible idea is to let these big enterprises get ahead of bad actors who could leverage advanced LLMs to penetrate secure software.

But the “e-word” in the sentence above is a hint that there might be more to this release strategy than cybersecurity — or the hyping of model capabilities.

Dan Lahav, the CEO of the AI cybersecurity lab Irregular, told TechCrunch in March, before the release of Mythos, that while the discovery of vulnerabilities by AI tools matters, the specific value of any weakness to an attacker depends on many factors, including how they can be used in combination.

“The question I always have in my mind,” Lahav said, “is did they find something that is exploitable in a very meaningful way, whether individually or as part of a chain?”

Anthropic says Mythos can exploit vulnerabilities far more effectively than its previous model, Opus. But it’s not clear that Mythos is actually the be-all and end-all of cybersecurity models. Aisle, an AI cybersecurity startup, said it was able to replicate much of what Anthropic says Mythos accomplished using smaller, open-weight models. Aisle’s team argues that these results show there is no single best deep learning model for cybersecurity; which model performs best depends on the task at hand.

Given that Opus was already seen as a game changer for cybersecurity, there’s another reason that frontier labs may want to limit their releases to big organizations: It creates a flywheel for big enterprise contracts, while making it harder for competitors to copy their models using distillation, a technique that leverages frontier models to train new LLMs on the cheap.
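In mechanical terms, distillation trains a smaller “student” model to imitate a larger “teacher” model’s output distribution rather than learning from raw data alone. A minimal sketch of the standard objective, the KL divergence between temperature-softened teacher and student outputs (all names here are illustrative, not any lab’s actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions: the core
    # term a distiller minimizes to copy the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When the student’s logits match the teacher’s, the loss is zero; the farther its distribution drifts, the larger the penalty. Gating API access to the teacher’s outputs is precisely what denies a would-be distiller this training signal.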

“This is marketing cover for fact that top-end models are now gated by enterprise agreements and no longer available to small labs to distill,” David Crawshaw, a software engineer and CEO of the startup exe.dev, suggested in a social media post. “By the time you and I can use Mythos, there will be a new top-end rev that is enterprise only. That treadmill helps keep the enterprise dollars flowing (which is most of the dollars) by relegating distillation companies to second rank.”

That analysis jibes with what we’re seeing in the AI ecosystem: A race between frontier labs developing the largest, most capable models, and companies like Aisle that rely on multiple models and see open source LLMs, often from China and often allegedly developed through distillation, as a path to economic advantage.

The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs, Anthropic, Google, and OpenAI, teaming up to identify distillers and block them, according to a Bloomberg report. Distillation threatens the business model of frontier labs because it erases the advantages conferred by spending huge amounts of capital on scale. Blocking distillation is therefore already a worthwhile endeavor on its own, but selective release also gives the labs a way to differentiate their enterprise offerings as that category becomes the key to profitable deployment.

Whether Mythos or any new model truly threatens the security of the internet remains to be seen, but a careful rollout of the technology is a responsible way forward.

Anthropic didn’t respond by press time to our questions about whether the decision also relates to distillation concerns, but the company may have found a clever approach to protecting the internet — and its bottom line.

Key takeaways

  • Limiting access to the Mythos model may favor large corporations over smaller startups and labs.
  • The effectiveness of AI models in discovering vulnerabilities raises questions about responsibility and regulation in the tech sector.
  • Anthropic's release strategy may be a way to protect its commercial interests, creating barriers for competitors.

Editorial analysis

Anthropic's decision to limit the release of the Mythos model raises important questions about the dynamics of the AI market and its implications for cybersecurity. For the Brazilian tech sector, this approach may signal a trend of prioritizing large corporations over smaller startups and labs, which often lack the resources to compete in an environment where the most advanced models are becoming increasingly inaccessible. This could result in a concentration of power in the hands of a few companies, hindering innovation and the diversity of solutions in the market.

Moreover, the claim that Mythos can discover vulnerabilities more effectively than its predecessors suggests that companies must pay attention not only to the capabilities of AI models but also to their ethical and security implications. The possibility that these models could be used to exploit weaknesses in critical systems raises a debate about the responsibility of companies developing such technologies and the need for stricter regulations to ensure they are not used maliciously.

Finally, the strategy of limiting access to advanced technologies can be seen as a way to protect not only the internet's security but also Anthropic's commercial interests. As companies seek contracts with large organizations, it is essential to observe how this dynamic will affect competition in the AI sector and what measures will be taken to ensure that innovation is not stifled in favor of corporate agreements. The Brazilian landscape, with its own startups and AI initiatives, must prepare to navigate this evolving environment, seeking ways to stand out and compete effectively.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

TechCrunch AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process