
Anthropic is suing the Department of Defense

Published by AIDaily Editorial Team
3 min read
Original source author: Hayden Field

The Trump admin labeled Anthropic’s AI a supply-chain risk after the company wouldn’t back down on acceptable uses for its tech.

Anthropic has sued the US government over its designation as a supply-chain risk, the latest move in a weekslong battle between it and the Pentagon over the acceptable use cases for its military AI tech. The suit, filed in a California district court, accuses the Trump administration of illegally punishing the company for setting “red lines” on mass domestic surveillance and fully autonomous weapons.

“The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance — AI safety and the limitations of its own AI models — in violation of the Constitution and laws of the United States,” the suit reads. “Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation.”

The lawsuit follows a rollercoaster couple of weeks for Anthropic, in which the company faced the threat, and then the official designation, of being labeled a supply-chain risk. The designation is typically not made public and usually applies to foreign companies that could pose a cybersecurity or other material risk to national security, rather than to companies headquartered in the US. President Donald Trump also ordered all government agencies to stop using Anthropic’s tech within six months. The blacklisting raised eyebrows and stirred significant bipartisan controversy over fears that disagreeing with a sitting presidential administration could damage a company’s bottom line, and even its ability to operate as a business at all.

Anthropic argues that the government’s actions penalize it for speech protected under the First Amendment and violate its Fifth Amendment rights. Moreover, it says the demand for all government agencies to drop it falls outside the authority of the executive branch.

Anthropic has said since the original announcement that it would challenge the supply-chain risk designation in court. In recent days, some of the company’s biggest clients, like Microsoft, have made clear that they will keep working with Anthropic while setting up processes to keep that work entirely separate from their own contracts with the Pentagon.

However, the suit notes that government agencies outside the Department of Defense have also cut ties with Anthropic. The General Services Administration terminated its OneGov contract, “ending the availability of Anthropic services to all three branches of the federal government.” And multiple other agencies, including the Department of the Treasury and the State Department, have said publicly or, reportedly, in private that they plan to stop using it as well.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

The Verge AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.
