Fri. Nov 7th, 2025

Political actors around the world recognize the transformative potential of AI—unlocked by advancements in computing power, data, and modeling techniques—but they have different ideas about which directions things should go. Politicians and campaigns hope to gain advantages in advertising, fundraising, or messaging by being the first to adopt AI assistive tools. Civic technologists worldwide are trying to use AI to make government more equitable, more efficient, and more responsive. Researchers are building AI-based crowdsourcing tools that solicit policy consensus from human participants online.


Some people think that AI can transform democracy for the better. Countries including Denmark, Japan, the U.S., and the UK have already seen AI avatars standing for election or forming political parties. Meanwhile, leaders of the military-industrial complex, policy hawks, and Big Tech boosters see AI as a martial tool, critical to winning a post–Cold War arms race between the West and China.

Whether AI’s risks to democracy outweigh its benefits depends on how the dynamics emerging among these democratic actors unfold. Many of these same tools can be exploited by people who want to make democracy more authoritarian. AI surveillance tools intended to make policing fairer and more accountable can easily be redirected to enforce repression and injustice. AI tools designed to make civil servants more effective can be repurposed to strip human judgment and compassion from bureaucratic systems. AI is not a solution to the problem of democracies devolving into authoritarianism, but it will inhabit the territory between these forces for years to come.

To understand the potential impacts of AI on democracy, we must always think beyond the capabilities and innate properties of AI, and focus on the systems, incentives, and political forces within which the AI is built, deployed, and wielded over time.

Most governments are understandably concerned about AI’s risks. AI systems can be biased, make mistakes, and be used to facilitate illicit activities. In practically every election of the past few years, there have been urgent concerns over AI deepfakes: synthetically produced images, audio, or video that create the perception that a candidate did something they didn’t really do, or said something they didn’t really say. Some governments have taken concern over AI to the extreme. In 2023, the UK hosted the first global summit on the existential risks of AI—that is, risks to the very survival of humanity. Yet these governments’ constituents have more immediate fears triggered by technology, like its near-term harms and misuse, and are rightfully skeptical of the motivations of companies and politicians worrying about far-future existential AI risks.

There is another framing of the modern story of AI, one that puts the systemic risks of inequitable outcomes due to bias front and center. This is, in effect, a fight for justice. Computer scientist Joy Buolamwini’s Algorithmic Justice League advocates for equitable AI, as does sociologist Ruha Benjamin’s Ida B. Wells Just Data Lab. The movement to make AI more just is large and diverse, led by trailblazers from varied fields, including data journalist Meredith Broussard, sociologist Safiya Umoja Noble, data scientist Rumman Chowdhury, and mathematician Cathy O’Neil. They are taking the right approach by holding corporate power accountable for delivering ethical, fair, and just AI systems.

Meanwhile, governments are only just beginning to regulate the Big Tech companies developing and marketing AI. In 2024, the European Union passed the first comprehensive regulation of AI technology: the EU AI Act. The EU should be lauded for acting where other governments have not, but criticized for its weak protections for human rights and ample loopholes for companies to do as they please. And democratic governments are just beginning to recognize their potential for actively shaping the AI ecosystem. A largely unconstrained industrial sector, supercharged by massive U.S. private capital investment, has raced to a dominant position controlling the most advanced AI models. Echoing the development of the internet, this industry benefits enormously from government-funded basic research, and yet now returns its profits and delegates its values almost exclusively to private entities. So far, only a few governments have extended beyond basic research, seed funding, and computational infrastructure to directly build and provision AI models—that is, to create Public AI.

When new technologies are introduced in governance, they frequently have the effect of concentrating power, and AI will do this efficiently. Consider the technology of government bureaucracies. The size of large agencies endows selected leaders with enormous power to implement policy, but it also necessarily dilutes decision-making across many layers and individuals. Those humans each act from their own remit, perspectives, and (one hopes) ethics. AI offers leaders the centralizing capacity to control government action at an even larger scale, unconstrained by the costs of public servant salaries, with instantaneous coordination and unquestioning compliance.

Meanwhile, the advancement of AI has emboldened corporate profiteers who have already extracted fortunes from government technology investment, and who see orders-of-magnitude greater opportunities to sell tech products with AI. The implications for leaders with autocratic and fascist interests are profound.

We are still at the start of a long journey in democracies’ interaction with AI. The topics that have dominated the public discourse are only a few of the ways AI will impact democracy. AI deepfakes are just the latest version of an age-old political practice. We have been photoshopping and, before that, airbrushing and, before that, staging political images and propaganda for decades. Stalin airbrushed his enemies out of photographs in the 1950s, and the U.S. Civil War photographer Alexander Gardner staged powerful battle scenes a century earlier. The technology has changed—and AI image generators are much easier to use than Photoshop—but you have never been able to completely take for granted the images in front of your eyes.

The more interesting changes to democracy from AI will come in the places where few are looking. The early pioneers of radio did not set out to change how politicians communicate with their electorates, yet politicians gradually found transformative ways to use the new technology. The changes caused by technological advancement tend to come from the bottom up, not the top down. They tend to be incremental; their most significant effects compound over time. This is especially true in an era when technological advancement comes from universities and industry. This is not the era of the Manhattan Project or the Apollo program; an AI arms race metaphor just doesn’t make sense.

And yet, technologies tend to push society in particular directions—even when they weren’t developed to do so—because of the conditions within which they are developed and used. In the 19th century, railroads had enormous potential to connect the disconnected and equalize access to people, places, and power. But their most visible impact was the creation of unprecedented wealth among a new class of oligarchs. In the following century, both the promise and actual impacts of the internet were much the same: many benefited somewhat from connecting through the internet, and a few profited staggeringly from the rest of us using it. These technologies betrayed their promises to the public interest because their value was captured and commandeered by private interests. 

Today, AI has similar potential to empower, but faces the same risks of subversion to benefit the few instead of the many. The potential capture of AI technologies by a few powerful companies will exacerbate its risks to democracy.

This article has been adapted and excerpted with permission from Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship by Bruce Schneier and Nathan E. Sanders (The MIT Press, 2025). 

