Google DeepMind workers are unionizing over AI military contracts
Staffers don’t want to be complicit in ‘helping make genocide cheaper, faster, and more efficient.’
Staffers at Google DeepMind’s headquarters have voted to unionize in an effort to prevent the AI firm’s technology from being used by Israel and the US military. In a letter to Google management on Tuesday, employees requested that the Communication Workers Union (CWU) and Unite the Union be recognized as joint representatives, with 98 percent of CWU members at DeepMind voting in support of the move.
“We don’t want our AI models complicit in violations of international law, but they already are aiding Israel’s genocide of Palestinians,” an unnamed DeepMind employee said in a statement shared by the CWU. “Even if our work is only used for administrative purposes, as leadership has repeatedly told us, it is still helping make genocide cheaper, faster, and more efficient. That must end immediately, as must harm to Iranians and human lives anywhere.”
If successful, the unionization bid would secure representation for at least 1,000 staff based out of Google DeepMind’s London headquarters. Management now has 10 working days to voluntarily recognize the unionization efforts before legal processes are formally launched to force recognition.
The union bid includes specific demands that staffers want Google to address: a clear commitment not to pursue the development of weapons, technologies, or contracts that harm or surveil people; negotiations around uses of AI that “materially affect our roles, workloads, or job security;” and the right for workers to abstain from projects that violate their “personal moral or ethical standards.” DeepMind staff globally are also reportedly considering in-person protests and “research strikes” — abstaining from work on improvements to Google AI services like the Gemini AI assistant — as part of a wider campaign against Google’s military-industrial AI contracts.
We have reached out to Google for comment.
This comes a week after hundreds of Google employees signed an open letter to CEO Sundar Pichai demanding the company refuse to sign classified AI contracts with the Pentagon. Shortly after, Google — alongside OpenAI and Nvidia — signed deals that reportedly allow the US Department of Defense to use their AI models for “any lawful government purpose.” In 2024, the company fired more than 50 staffers in response to a protest over Google’s military ties to the Israeli government.
“This is a really important moment where tech workers at Google’s frontier AI lab are connecting with some of the most oppressed people in communities around the world in meaningful ways, based on foundational values of solidarity and trade unionism,” said John Chadfield, CWU national officer for tech workers. “By exercising their rights to collectivize they are in a strong position to demand their employer stop circling the ethical drain of military-industrial contracts, echoing the sentiment of many working people in the UK and elsewhere.”
Key takeaways
- Unionization at Google DeepMind highlights the growing ethical concerns about the use of AI in military contexts.
- Similar movements may arise in Brazilian tech companies, encouraging social responsibility in the sector.
- Google's response to the unionization may influence corporate culture and how other companies approach ethical issues.
Editorial analysis
The recent decision by Google DeepMind employees to unionize raises crucial questions about ethics in technology, especially as artificial intelligence becomes increasingly integrated into military operations. For the Brazilian tech sector, this movement can serve as a wake-up call about the need to discuss the social responsibility of tech companies, particularly regarding the use of their innovations in conflict situations. Brazil, with its growing AI market, should closely observe how local companies handle similar ethical questions, avoiding mistakes that could lead to legal and moral complications.
Moreover, the pressure on Google DeepMind may inspire similar movements in other tech companies, both in Brazil and globally. Unionization can be seen as a model of resistance against exploitation and the use of technologies for purposes that contradict ethical principles and human rights. This may encourage workers in Brazilian companies to organize and demand greater transparency and accountability from their employers, especially in a sector that often operates in ethical gray areas.
What to watch for next is how Google will respond to this pressure. The decision to recognize or not recognize the union could have significant implications for the company’s corporate culture and for how other tech companies approach similar issues. Additionally, the possibility of protests and research strikes by DeepMind employees could intensify the debate about corporate responsibility regarding the use of their technologies in military contexts, which may resonate in discussions about regulation and ethics in Brazil.
Finally, the situation highlights the growing intersection between technology and ethics, a topic that needs to be increasingly addressed by companies and regulators in Brazil. As the country advances in its technological innovation journey, the need for an open dialogue about the responsible use of AI becomes even more urgent, especially at a time when technology can be used for both good and destruction.
What this coverage includes
- Clear source attribution and link to the original publication.
- Editorial framing about relevance, impact, and likely next developments.
- Review for readability, context, and duplication before publication.
Original source: The Verge

About this article
This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.
Learn more about our editorial process