
Google expands Pentagon’s access to its AI after Anthropic’s refusal

Published by AIDaily Editorial Team
3 min read
Original source author: Julie Bort

After Anthropic refused to allow the DoD to use its AI for domestic mass surveillance and autonomous weapons, Google has signed a new contract with the department.

Google has granted the U.S. Department of Defense access to its AI for classified networks, essentially allowing all lawful uses, according to multiple news reports.

This deal follows Anthropic’s public stand against the Trump administration, in which the model maker refused to grant the DoD the same terms. The Pentagon wanted unrestricted use of AI, whereas Anthropic wanted guardrails to prevent its AI from being used for domestic mass surveillance and autonomous weapons.

Because Anthropic refused those use cases, the DoD branded the model maker a “supply-chain risk” — a designation normally reserved for foreign adversaries. Anthropic and the DoD are now embroiled in a lawsuit, with a judge last month granting Anthropic an injunction against the designation while the case proceeds.

Google is the third AI company to try to turn Anthropic’s loss into its own gain. OpenAI immediately signed a deal with the DoD, as did xAI. Google’s agreement includes language saying that it doesn’t intend for its AI to be used for domestic mass surveillance or in autonomous weapons, The Wall Street Journal reports, similar to contract language with OpenAI. But it is unclear whether such provisions are legally binding or enforceable, per the WSJ.

Google entered this deal even though 950 of its employees have signed an open letter asking it to follow Anthropic’s lead and not sell AI to the Defense Department without similar guardrails. Google did not respond to a request for comment.

Key takeaways

  • Google's decision may impact public trust in AI technologies, especially in security contexts.
  • Growing internal pressure in tech companies highlights the importance of clear ethical guidelines.
  • The Anthropic case may set legal precedents influencing the relationship between AI companies and governments.

Editorial analysis

Google's decision to expand Pentagon access to its AI, in contrast to Anthropic's stance, raises important questions about ethics and responsibility in the use of emerging technologies. For the Brazilian tech sector, this situation may serve as a warning about the need to establish clear and robust guidelines for the use of AI in sensitive contexts such as national security and surveillance. Pressure on companies to meet government demands can lead to ethical compromises that erode public trust in the technologies being developed.

Moreover, the fact that 950 Google employees signed a letter urging the company not to sell AI to the Department of Defense without adequate protections indicates a growing internal concern about the military use of technology. This dynamic may reflect a broader movement among tech workers who are increasingly aware of the social and ethical implications of their innovations. Brazil, with its growing AI startup ecosystem, should consider how these issues may affect its own development trajectory.

The scenario also highlights the fierce competition among AI companies, where the ability to secure government contracts can serve as a strategic differentiator. Brazilian companies looking to position themselves in the global market should pay attention to these trends, especially regarding compliance with ethical standards and transparency in their operations. What we are observing is a bifurcation between companies that prioritize ethics and those that seek to maximize profits at any cost, which may shape the future of the AI industry in Brazil and worldwide.

Finally, it is crucial to observe how the Anthropic case unfolds in court and what legal precedents may be established. The outcome could influence not only the relationship between AI companies and governments but also how society views the responsibility of companies regarding the use of their technologies. Brazil, which is still in the process of building its AI legislation, can benefit from monitoring these developments to avoid similar pitfalls in the future.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

TechCrunch AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process