LLMs

US government increases AI suppliers and rethinks Anthropic’s role

Published by AIDaily Editorial Team
3 min read
Original source author: Joe Green


The US administration has added four more AI companies to its roster of favoured suppliers, with the Pentagon signing agreements with Microsoft, Reflection AI (which has yet to release a publicly available model), Amazon, and Nvidia that allow their products to be used in classified operations. The companies join OpenAI, xAI, and Google as suppliers whose models the Department of Defense can deploy "for any lawful use."

The phrase "any lawful use" was at the centre of the recent disagreement between Anthropic and the US administration. CEO Dario Amodei argued that it would let the US government use Anthropic's technology to subject the American civilian population to surveillance and to produce autonomous weapons, uses he wanted walled off. The Pentagon cancelled a $200 million contract with the company, a decision Anthropic swiftly took to court, claiming millions in lost revenue from the government and from others influenced by the government's decision. The Trump administration termed the company a "supply chain risk", the first time a US-based company had been given such a status, and ensuing statements from government sources described Anthropic as a "woke" company.

The Pentagon's statement on its new agreements reads, "The Department will continue to build an architecture that prevents AI vendor lock-in and ensures long-term flexibility for the Joint force." The technologies will "give warfighters the tools they need to act with confidence and safeguard the nation against any threat." The AIs will be used for 'Impact Level' six (secret data) and seven (the most highly classified materials) use-cases, helping create what the statement describes as an "AI-first fighting force".

The Pentagon's current use of generative AI is largely confined to non-classified tasks carried out inside the various defence departments, such as document drafting, summarisation, and research. The new suppliers will help defence forces "streamline data synthesis", but also "elevate situational understanding, and augment warfighter decision-making in complex operational environments." It is not clear whether those descriptions include domestic deployments inside US borders.

The expanded raft of AI suppliers means that military and security operations become less exposed to apparent changes of heart by individual vendors. By broadening its technological base, the Pentagon makes the personal whims of individual company leaders less relevant. Google and Amazon have in the past fired employees for protesting against their companies' technology being used in weaponry and warfare.

Anthropic's Claude AI had been used on classified material as part of Palantir's Maven toolset, a role which the most recent signees may take over. However, the company's Mythos model is reportedly in current use by the National Security Agency on account of the platform's purported cyber warfare and defence abilities. Worldwide, Anthropic's Mythos is under assessment by 40 organisations, of which only 12 have been named; the UK's MI5 and the US NSA are thought to be among the remaining 28.

According to Axios, the US administration may be walking back its most recent public stance on Anthropic. The outlet said it had a source in the White House who stated the administration was trying to find ways to "save face and bring 'em back in." Anthropic's Claude coding model is allegedly still in use by US government security organisations, and has been throughout recent events. According to the White House, the US government "continues to proactively engage across government and industry to protect our country and the American people, including by working with frontier AI labs."

(Image source: "BEST OF THE MARINE CORPS – May 2006 – Defense Visual Information Center" by expertinfantry, licensed under CC BY 2.0.)

Key takeaways

  • The expansion of the US government's AI supplier list indicates a strategy of diversification and strengthening military technological infrastructure.
  • The controversy surrounding Anthropic highlights the need for an ethical debate on AI usage, which Brazil should consider when developing its own guidelines.
  • The strengthening of the US military's technological base may pressure other countries, including Brazil, to accelerate their military and security AI initiatives.

Editorial analysis

The recent decision by the US government to expand its list of AI suppliers, including giants like Microsoft, Amazon, and Nvidia, reflects a clear strategy of diversification and strengthening military technological infrastructure. For the Brazilian tech sector, this move may signal what is to come: a growing need for innovation and adaptation to national security and defense demands. Brazil, already facing challenges in its own AI industry, should closely monitor how these partnerships develop and which emerging technologies can be incorporated into its own defense and public security operations.

Moreover, the controversy surrounding Anthropic and its relationship with the US government raises important questions about ethics in AI usage. CEO Dario Amodei's concerns about civilian surveillance and the use of technologies for autonomous weaponry highlight the need for a deeper debate on the limits and regulation of AI. For Brazil, this could be a call to establish clear guidelines that ensure technology is used responsibly and ethically, avoiding the mistakes made in other contexts.

The strengthening of the US military's technological base may also have global implications, especially in an increasingly tense geopolitical landscape. As the US seeks to build an "AI-first fighting force," other countries, including Brazil, may feel pressured to accelerate their own military and security AI initiatives. This could lead to a technological arms race in which innovation in AI becomes a decisive factor for sovereignty and national security.

Finally, the expansion of the Pentagon's AI supplier list may result in a more competitive environment for startups and tech companies looking to collaborate with the government. In Brazil, this could inspire new partnerships between the public and private sectors, particularly in areas such as cybersecurity and public safety. What remains to be seen is how Brazil will position itself in this new landscape and whether it can leverage lessons learned from the US experiences to shape its own approach to AI and security.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

AI News

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process