Artificial Intelligence

A blueprint for using AI to strengthen democracy

Published by AIDaily Editorial Team
Original source authors: Andrew Sorota and Josh Hendler



Every few centuries, changes in how information moves reshape how societies govern themselves. The printing press spread vernacular literacy, helping give rise to the Reformation and, eventually, representative government. The telegraph made it possible to administer vast nations like the US, accelerating the growth of the modern bureaucratic state. Broadcast media created shared national audiences, which in turn fueled mass democracy. We are now in the early stages of another such shift. Faster than many realize, AI is becoming the primary interface through which we form beliefs and participate in democratic self-governance. If left unchecked, this shift could further strain America’s already fragile institutions. But it could also help address long-standing problems, like lagging civic engagement and deepening polarization. What happens next depends on design choices that are already being made, whether we know it or not.

Start with what might be called the epistemic layer—how we come to know things. People are increasingly relying on AI to know what is true, what is happening, and whom to trust. Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. Whoever controls what these models say therefore has increasing influence over what people believe.

Technology has always shaped the way citizens interact with information. But a new problem will soon arise in the form of personal AI agents, which can change not only how people receive information but how they act on it. These systems will conduct research, draft communications, highlight causes, and lobby on a user’s behalf. They will inform decisions such as how to vote on a ballot measure, which organizations are worth supporting, or how to respond to a government notice.
They will, in a meaningful sense, begin to mediate the relationship between individuals and the institutions that govern them.

We’ve already seen with social media what happens when algorithms optimize for engagement over understanding. Platforms do not need to have an explicit political agenda to produce polarization and radicalization. An agent that knows your preferences and your anxieties—one shaped to keep you engaged—poses the same risks. And in this case the risks may be even more difficult to detect, because an agent presents itself as your advocate. It speaks for you, acts on your behalf, and may earn trust precisely through that intimacy.

Now zoom out to the collective. AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Even if every individual AI agent were well-designed and aligned with its user’s interests, the interactions of millions of agents could produce outcomes that no individual wanted or chose. For example, research shows that agents displaying no individual bias can still generate collective biases at scale. And setting aside what agents do to each other, there is what they do for their users. A public sphere in which everyone has a personalized agent attuned to their existing views is not, in aggregate, a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inhospitable to the kind of shared deliberation that democracy requires.

Taken together, these three transformations—in how we know, how we act, and how we engage in collective governance—amount to a fundamental change in the texture of citizenship. In the near future, people will form their political views through AI filters, exercise their civic agency through AI agents, and participate in institutions and public discussions that are themselves shaped by the interactions of millions of such agents. Today’s democracy is not ready for this.
Our institutions were designed for a world in which power was exercised visibly, information traveled slowly enough to be contested, and reality felt more shared, if imperfectly. All of this was already fraying long before generative AI arrived.

And yet this need not be a story of decline. Avoiding that outcome requires us to design for something better.

On the informational layer, AI companies must ramp up existing efforts to ensure that models’ outputs are truthful. They should also build on promising early findings that AI models can help reduce polarization. A recent field evaluation of AI-generated fact checks on X found that people with a variety of political viewpoints deemed AI-written notes more helpful than human-written ones. The paper has yet to be peer-reviewed, but it points to a potentially revolutionary finding: AI-assisted fact-checking may be able to achieve the kind of cross-partisan credibility that has eluded most manual human efforts. Greater understanding of and transparency about how models make these assertions and prioritize sources in the process could help build further public trust.

On the agentic layer, we need ways to evaluate whether AI agents faithfully represent their users. An agent must never have an agenda of its own or misrepresent its user’s views—a technically daunting requirement in domains where users may not have explicitly stated any preferences. But faithful representation also cannot become an accessory to motivated reasoning. An agent that refuses to present uncomfortable information, that shields its user from ever questioning prior beliefs or fails to adjust to a change of heart, is not acting in the person’s best interest.

Finally, on the institutional level, policymakers should hurry to harness AI’s potential to make governance more responsive and legitimate.
Several states and localities are already using AI-mediated platforms to conduct democratic deliberation at scale, building on research showing that AI mediators can help citizens find common ground. As agents become increasingly common participants in public input processes—and there is already evidence that bots are skewing those processes—identity verification for both humans and their agentic proxies must be built in from the start.

What is needed is a new generation of democratic infrastructure, technological and institutional, built for the world that is actually here. Failing to design for democratic outcomes, in a domain this consequential, means designing for something else. And the history of unaccountable power does not leave much room for optimism about what that something else tends to be.

Andrew Sorota and Josh Hendler lead work on AI and democracy at the Office of Eric Schmidt.

Key takeaways

  • AI could exacerbate political polarization in Brazil in the absence of adequate regulation.
  • Personal AI agents have the potential to transform civic engagement but may also create dependency on technology.
  • It is urgent to promote digital literacy to prepare citizens for the challenges of AI in democracy.

Editorial analysis

The discussion about the role of AI in democracy is particularly relevant in Brazil, where political polarization and misinformation have intensified in recent years. The rise of AI assistants as intermediaries in opinion formation could exacerbate these problems, as algorithm manipulation can subtly yet powerfully influence public perception. Brazil, with its cultural and social diversity, may face unique challenges in integrating these technologies into its democratic ecosystem, especially if there is no adequate regulation promoting transparency and accountability.

Moreover, the implementation of personal AI agents could transform how citizens interact with government institutions. These systems have the potential to facilitate civic engagement, but they may also create an over-reliance on technology, leading to a disconnection between citizens and the democratic process. The central question will be how to ensure that these tools are used to empower citizens rather than manipulate them.

Brazil should closely observe the experiences of other countries, particularly the United States, where the influence of AI on politics is already a debated topic. The need for public discourse on the ethics of AI and its impact on democracy is urgent. Initiatives that promote digital literacy and critical understanding of emerging technologies will be crucial in preparing the population for the challenges ahead. The future of democracy in Brazil may depend on how AI is adopted and regulated, making inclusive and informative dialogue on these issues essential.

What this coverage includes

  • Clear source attribution and link to the original publication.
  • Editorial framing about relevance, impact, and likely next developments.
  • Review for readability, context, and duplication before publication.

Original source:

MIT Technology Review AI

About this article

This article was curated and published by AIDaily as part of our editorial coverage of artificial intelligence developments. The content is based on the original source cited below, enriched with editorial context and analysis. Automated tools may assist with translation and initial structuring, but publication decisions, factual review, and contextual framing remain editorial responsibilities.

Learn more about our editorial process