
Democratization in the age of artificial intelligence: introduction to the special issue

Pages 899-921 | Received 21 Nov 2023, Accepted 28 Mar 2024, Published online: 09 Jul 2024

ABSTRACT

As artificial intelligence (AI) technologies become ubiquitous, research on their implications for democratization has blossomed. Will AI empower citizens and strengthen democracy or fuel the rise of autocracy? This special issue brings together diverse theoretical, conceptual, and empirical contributions to offer a synoptic perspective on this question across political contexts. In this introduction, we argue that the most significant socio-technical novelty pertaining to AI – the possibility for automated hyper-personalization – catalyses four structural changes that capture the complex relationship between AI, democratization, and autocratization. First, technology corporations have emerged as a new quasi-governing class that holds political power without democratic legitimacy. Second, automated hyper-personalization has transformed the citizen-state relationship by fostering hyper-technocracy and paternalism in democracies, and enhancing regimes' repertoires for controlling citizens in autocracies. Third, AI is changing the environment in which citizens exercise their political rights, exacerbating political polarization, societal fragmentation, and apathy. Finally, at the international level, AI has created new arenas for competition between democratic and authoritarian states, engendering a struggle for both technological capabilities and the values and norms that will govern them. The articles in this special issue explore these dynamics from micro impacts to structural changes in state behaviour and international relations.

Introduction

In November 2023, leading politicians, corporations, civil society groups, and experts convened at Bletchley Park in the United Kingdom for the first global AI Safety Summit to discuss the risks posed by advanced artificial intelligence (AI). While part of the summit was dedicated to countering the existential risks that “frontier” AI could pose in the future – caused, among other things, by advanced systems not aligned with human intentions – some leaders emphasized a more immediate danger: the threat AI poses to democratic norms and institutions. Most notably, U.S. Vice President Kamala Harris stated that “when a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family? […] And when people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?”Footnote1 Echoing her concerns, United Nations Secretary-General António Guterres warned of “threats to democracy and human rights from AI-assisted misinformation, manipulation, and surveillance.”Footnote2 In the subsequent Bletchley Declaration, states as diverse as China, the United States, Germany, and Indonesia pledged to intensify international cooperation to better understand risks of advanced AI and ensure the development of “human-centric, trustworthy and responsible AI that is safe.”Footnote3

Though the AI Safety Summit was not exceptional in its outcomes, it serves to illustrate how the advent of ChatGPT, Bard, DALL-E, and other instances of generative AI has supercharged policymakers’ fears about the democracy-eroding potential of AI. The palpable sense of urgency at the Summit was not unfounded: Just as political leaders are beginning to grapple with advanced AI’s unprecedented capabilities and uncertain impacts, more than half of the world’s population is headed for the polls in what The Economist has dubbed “the biggest election year in history”Footnote4 – seemingly rendering 2024 a litmus test for the strength of democracy in the age of AI. Will AI empower citizens and strengthen democratic processes or further fuel the rise of autocracy? Among experts, this question has been debated for more than a decade and has inspired a multidisciplinary body of literature. Scholars have examined the distinct risks resulting from AI applications such as facial recognition software, empirically traced how regimes have employed AI in specific democratic or authoritarian contexts, and dissected AI’s democratic implications in philosophical terms. Despite the diversity of approaches, a common narrative is slowly crystallizing: AI, for all its potential, undermines democracy and exacerbates authoritarian tendencies. These observations, however, are usually derived from specific contexts such as disinformation campaigns in the United States or the establishment of a high-technology surveillance state in China. They have yet to be integrated into a unifying framework that spells out the broader transformations AI is introducing to democratic norms, institutions, and practices across the world.

Against this background, the present special issue aims to provide a comprehensive perspective on how AI is transforming democracies and autocracies across institutional and political contexts. Our aim, however, is not to make a simple and generalizable claim about the relationship between AI and democratization. In fact, we argue that narrowly focusing on whether AI leads to an overall “strengthening” or “weakening” of democracy misses the point. Instead, we aim to flesh out the manifold mechanisms through which AI, as a complex sociotechnical phenomenon, transforms democratic and authoritarian practices at various levels of politics. We argue that fully grasping AI’s impact requires a synoptic perspective – a comprehensive lens that recognizes the versatility not only of AI technologies but also of the political contexts in which they are developed, deployed, and regulated. Therefore, the objectives of this special issue are twofold: first, to map those political areas where we believe the introduction of AI will have the most significant impact; and second, to deepen our understanding of these changes through new theoretical, conceptual, and empirical perspectives.

This introductory article lays the groundwork for our synoptic perspective. We start by identifying the most significant socio-technical novelty pertaining to AI: the possibility for automated hyper-personalization, i.e. the use of AI systems to make individualized decisions, predictions, content, and recommendations based on data that is either closely related or unique to an individual or a group. We argue that this automated form of influencing humans at the micro-level of society acts as the catalyst for four structural changes that, taken together, capture AI’s transformative impact on the functioning of democratic and authoritarian politics.

First, AI is changing the relationship between citizens and corporations. By virtue of their command over individual data and the AI systems to process it, technology corporations have conflated citizens and consumers, emerging as a new quasi-governing class that holds political power without democratic legitimacy or accountability. Second, the possibility for automated hyper-personalization has also transformed the relationship between citizens and the state. In democracies, it has facilitated the continuous monitoring of public opinion, technology-based decision-making (hyper-technocracy), and the swaying of opinion in desired directions (paternalism); in autocracies, it has expanded the repertoire of repressive regimes to control and manipulate citizens’ behaviour. Third, AI is changing the relationship between citizens and the environment in which they exercise their political rights. This pertains to political polarization, societal fragmentation, and the rise of distrust and apathy, exacerbated by the extensive use of profiling, targeting, and social bots. Finally, the capabilities that come with AI have introduced a new dynamic to the relationship between democratic and authoritarian states at the international level. Not only have they entered a strategic competition over technological assets and leadership, but they also increasingly struggle over defining the values, rules, and norms that will determine the course of AI’s development.

The nine contributions that form this special issue zoom in on aspects of these four structural changes. Covering a wide range of political and cross-regime contexts, from Syria and Turkey to the United States and the European Union, they offer novel empirical and conceptual perspectives on democratization in the age of AI. Some contributions present new evidence from in-depth case studies, showing how autocratic regimes take advantage of AI for innovative authoritarian governance. Others offer a new lens on well-established phenomena such as polarization, particularly how the combination of information technology and the psychopolitical disposition of citizens exacerbates divisions and weakens democracy. Yet others use new empirical data to question established beliefs, including the popular notions that government agencies uncritically adopt AI systems from the private sector or that autocracies and hybrid regimes prefer to buy AI technologies from China. While these contributions throw only a selective spotlight on the complex relationship between AI and political regimes, in combination with the analytical framework provided in this introduction, they offer a well-rounded survey of the actors, settings, and practices that define AI’s interactions with democratization.

The remainder of this introduction proceeds as follows. First, we survey how the relationship between digital technologies and democratization has been conceptualized prior to the ascent of AI. We then explore in more depth the two concepts that lie at the heart of our project, AI and democratization. Here, we pay particular attention to AI’s unprecedented capabilities at the micro-level of society. Building on this conceptual foundation, the following section outlines the four AI-driven political transformations that form the overarching framework for the special issue. We conclude with a summary of the special issue contributions, highlighting how they relate to our broader framework and contribute to our understanding of democratization in the age of AI.

From liberation technology to digital authoritarianism: the unfulfilled promise of digital technologies for democratization

The potential of digital technologies to strengthen or weaken democracy had been subject to scholarly discussion long before AI entered the political debate. Most notably, in the early 2000s, the global proliferation of the internet engendered optimism about a potential “liberation technology” that would empower citizens in authoritarian states.Footnote5 The internet seemed to offer pro-democracy activists an unprecedented opportunity to mobilize, to distribute evidence of government wrongdoing, and to freely express their political beliefs.Footnote6 In particular, the widespread use of social media platforms in the 2011 Arab Spring drove a wave of research into interactions between the internet and democratization, along with journalistic accounts eager to herald a coming “Twitter revolution.”Footnote7

However, as failed regime transitions in the Arab Spring showed the limit of “the democratizing power of the internet,”Footnote8 scholars shifted their focus to the downsides of digital technologies – i.e. how the internet and social media could help authoritarian regimes to strengthen their grip on power. The same features that were originally believed to empower anti-authoritarian resistance were now recognized as a boon to authoritarian leaders who, keen on expanding control over their people, had gained access to low-cost tools for preventive repression.Footnote9 Studies have increasingly converged around the finding that across autocracies, digital repression is on the rise and the internet has constrained rather than extended citizens’ freedoms.Footnote10

In fact, the increasing experimentation of authoritarian regimes with digital technologies has sparked discussions about a novel form of authoritarian politics labelled digital authoritarianism. According to a widely used definition, digital authoritarianism is “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations.”Footnote11 China has received most attention for spearheading this brand of authoritarianism due to its extensive online censorship, a hyper-digitized surveillance state in the Xinjiang region, and exports of surveillance technologies. Similarly, Russia’s combination of information controls and legal sanctions, though less sophisticated than the Chinese model, has been discussed as a low-cost alternative for poorer authoritarian states.Footnote12

At the same time, the recent surge of right-wing populism in consolidated democracies has initiated a debate over where digital technologies could pose the greatest threat to democratic norms and institutions. The use of targeted misinformation on social media platforms ahead of Brexit and the U.S. elections in 2016 raised alarm about the potential role of digital technologies in accelerating democratic regression even in established democracies.Footnote13 Prior to the rise of right-wing populism across Europe and the U.S., scholarly work on the democracy-eroding potential of digital technologies had largely focused on the dangers of excessive government surveillance – concerns that were seemingly validated with Edward Snowden’s 2013 revelations about systematic and far-reaching digital surveillance by the U.S. government.Footnote14 The sudden political salience of fake news and social media bots in the mid-2010s significantly altered threat perceptions, provoking scholars to ponder whether democracy can “survive the internet.”Footnote15

In summary, breakthroughs in AI have come at a time when, first, concerns over a third wave of autocratizationFootnote16 have mounted and, second, digital technologies are increasingly considered to harm rather than strengthen democracy. Therefore, this special issue’s quest to understand AI’s implications for democratization does not start from a tabula rasa; as AI gets integrated with existing digital technologies such as social media or predictive policing software, there is reason to expect that AI will exacerbate democratic harms. Yet, the question is also whether there is anything distinct about AI; does AI merely amplify the tendencies of previous digital technologies or does it also introduce a more fundamental change to autocratic and democratic politics? In this special issue, we make the case for the latter. To develop this argument, it is necessary to first establish a clear conceptual foundation.

AI and democratization: shedding light on contested concepts

Defining AI

The concept of artificial intelligence has been fiercely contested for decades and continues to evade expert consensus.Footnote17 In politics and science, it continues to subsume a diverse range of scientific methods, technological artifacts, academic disciplines, and visions of the future. On some occasions, specific systems such as ChatGPT, facial recognition software, or self-driving cars are labelled “an AI”; on others, the term refers to techniques such as machine learning, deep learning, heuristic search, or symbol processing. Some scholars understand AI as a broader research area composed of subfields concerned with systems that underpin intelligent machines, such as natural language processing, computer vision, speech recognition, and robotics; yet others reject such conceptions as “narrow” and understand AI as the yet-unachieved vision of machines with human-level intelligence or even superintelligence. Accordingly, AI remains an “ill-defined and moving target”Footnote18 – a contested term whose meaning is unstable and flexible. Separating what AI is from what it could become can be a daunting task, as hype and inflated expectations have been inseparably tied to AI from its birth.Footnote19 Therefore, how one defines AI has political implications; it allows “epistemic actors [to] exploit the object’s amorphous nature to (re-)construct its meaning in line with their beliefs and political interests.”Footnote20

Consequently, any effort to assign a specific meaning to AI will necessarily be vulnerable to criticism. Yet, as this special issue aims to explore how progress in AI shapes democratization in the present, we focus on recent breakthroughs in machine learning, particularly in the area of deep learning. Starting in 2012, deep neural networks – a specific class of algorithms – began demonstrating drastically improved predictive accuracy on a variety of classification tasks. Rather than making decisions based on predefined rules, AI systems were now trained on millions, billions, and sometimes trillions of data points to detect patterns and autonomously infer decisions that meet the system’s goals. Within a decade, deep learning became the technical foundation for most social media algorithms, facial recognition software, and content moderation tools, fundamentally changing how they function and how people interact with them.

How does AI stand out in terms of its impact on society?

When gauging the implications of this technical change for democratic politics, various characteristics of deep learning come into view: most notably, the lack of transparency in systems’ decision-making processes or their propensity to reinforce societal bias along the lines of gender, class, and race. We, however, propose that deep learning’s most profound socio-technical feature is what we call automated hyper-personalization. Individual human beings and the information about them – demographic information, living circumstances, preferences, and past behaviour – are integral to the functioning of current AI systems. Because these systems are programmed to collect data on individuals and generate personalized profiles, the resulting predictions, recommendations, or decisions are uniquely connected to each individual or to a group to which they belong. Social media feeds, for instance, now generate content based on a person’s past interactions, just as predictive policing systems predict individuals’ future crimes based on data about their past behaviour and the neighbourhoods they live in.

According to Shoshana Zuboff, this impact of AI on society at the micro-level is directed towards a specific goal: behavioural manipulation.Footnote21 She views this emerging trend as the realization of B.F. Skinner’s concept of radical behaviourism, which manifests as a form of social engineering and dominance that, unlike twentieth-century authoritarianism and totalitarianism, pays little attention to people’s convictions, meanings, and deep-seated motivations. Instead, all forms of human experience are treated and harvested as a free resource, which can be converted into behavioural data and transformed into prediction products that foresee and shape current, near-term, or future actions.Footnote22 The aim is not only to know our behaviour but to manipulate it at scale via “nudges,” “choice architectures,” or outright deception.Footnote23 The most common AI-driven tools for such manipulation include profiling, targeted messaging, social bots, including those that disseminate deceptive content on social media, and deepfakes capable of convincingly mimicking individuals.Footnote24

The ability to predict and shape human behaviour with great precision and on a vast scale represents an ultimate instrument for obtaining and maintaining power in both democracies and autocracies.Footnote25 In democracies, it allows for an unparalleled ability to shape public opinion and people's political will more broadly. In autocracies, it additionally provides means for exerting control and dominance less conspicuously, with reduced overt coercion, and overall more cost-effectively. Across regime types, therefore, governments face significant structural incentives to adopt and extensively deploy AI. The same goes for actors operating in the capitalist and consumerist economies that now underpin most of the world’s democracies and autocracies. Here, too, the power to influence human behaviour with precision and predictability stands as a fundamental instrument for achieving the system’s core incentive: profit maximization. As political and economic actors adopt AI to further their interests, the unprecedented capabilities of AI at a micro-social level begin to translate into a spectrum of structural changes within and across various political systems – changes that invite us to critically reconsider the nature of democratization and autocratization processes in the age of AI.

Democratization, autocratization, and beyond

The concepts of democratization and its antithesis, autocratization, have been subject to extensive scholarly debate. Our focus differs from the recent work on the democratization of AI – i.e. the normative demand to involve a greater number of people in decisions about the development and deployment of AI technologies. Instead, we are concerned with the broader political phenomenon of democratization, defined by this journal as “the way democratic norms, institutions and practices evolve and are disseminated or retracted both within and across national and cultural boundaries.”Footnote26 Importantly, neither democratization nor autocratization is strictly understood as an abrupt regime change, i.e. a discrete and observable event in which a country crosses a particular threshold and transitions from autocracy to democracy or vice versa. Instead, both are seen as gradual, non-linear, and multifaceted processes that can be observed in changes to various political institutions and norms.

Recent scholarship has conceptualized democratization and autocratization as opposing movements on a continuum between the ideal types of closed autocracy and liberal democracy. These works have largely relied on Robert Dahl’s polyarchy, defining democracy in terms of several traits: free and fair elections, freedom of association, an elected executive, freedom of expression, and access to alternative sources of information.Footnote27 From this perspective, democratization describes episodes that exhibit substantial and sustained improvement on these traits, whereas autocratization refers to the “decline of democratic attributes that may – but do not have to result in democratic breakdown.”Footnote28 Importantly, both democratization and autocratization can take place at any point on the continuum. Consolidated democracies can experience further democratization, leading to democratic deepening, but also autocratization, leading to democratic regression.Footnote29 Likewise, authoritarian regimes can both democratize and experience authoritarian consolidation.Footnote30

This understanding of democratization allows for a multilayered look at the effects of AI on democratic and autocratic politics, i.e. how it impacts different stages of democratization or how it shapes distinct democratic traits – elections, freedom of expression, and access to information. At the same time, this special issue aspires to go beyond a mere conceptualization of AI as a technology that moves the needle towards either “more” democracy or “more” autocracy across various dimensions. Some of the observed trends outlined in the next section, such as the rise of data-driven plutocrats, hyper-technocracy, and state paternalism, cannot strictly be placed on this continuum and warrant further conceptual and normative scrutiny. We therefore align with a growing number of scholars who suggest – some more overtly than others – that the sociopolitical changes engendered by AI are distinct and may necessitate more nuanced analyses, complete with new and more precise descriptors, as exemplified by the recent proliferation of new concepts such as digital empires, surveillance capitalism, and techno-feudalism.Footnote31

A synoptic perspective on AI: from micro-level effects to structural transformations within and between democracies and autocracies

Informed by these conceptual clarifications and based on a review of the existing literature, this section examines four key AI-driven structural transformations within democracies and autocracies, alongside the evolving dynamics of their competition and cooperation on the international stage. This framework addresses some of the most pressing questions raised by the advent of AI: Where does political power reside in the age of AI? How does the use of AI change notions of legitimacy, authority, and control in the relationship between citizens and the state? And how and to what extent does the pursuit of AI shape the growing cleavage between democracies and autocracies at the international level?

Structural change 1: tech corporations as a new quasi-governing class

The emergence of a new quasi-governing class, one we call data-driven plutocrats, represents the first significant structural change brought about by AI and the technologies that preceded and enabled it. Digital technologies have predominantly entered society through their commercial application. Viewed as a hallmark of progress, the rise of tech companies was initially regarded as an economic phenomenon. However, this rapidly changed. Driven by credos such as “move fast and break things,” these companies have not only accumulated sizable financial capital but also unprecedented amounts of data.Footnote32 Moreover, the ownership of social media platforms and the right to moderate content have given them an outsized influence over public discourse. As a result, a handful of tech companies have become the de facto rulers of new public spaces for societal, political, and cultural exchange, a rule that is increasingly consolidated through the integration of AI technologies.

The rise of data-driven plutocrats contradicts fundamental principles of democracy in several ways. First, this new quasi-governing class is primarily motivated by profit. While many tech companies publicly acknowledge the importance of the common good, their primary responsibility is towards their shareholders, not citizens or democratic principles. As a consequence, they are likely to prioritize consumer desires over civic values, efficacy over justice, and comfort over liberty.Footnote33 Second, tech companies have acquired a public authority that lacks legitimacy and operates without democratic oversight. Their ownership of social media platforms has allowed them to act as custodians of political discourse and continually blur the line between citizens and customers. They have, in other words, translated the ownership of technical resources into control of human beings at the expense of public consent.Footnote34 Third, in a democracy, citizens should be able to form their choices, opinions, and interests free from excessive external pressures. However, the operations of tech companies are increasingly undermining this autonomy and self-determination, as individuals navigate an information landscape heavily influenced by hyper-personalization and targeted messaging.

Disregard for the common good, unaccountable authority, and the suppression of people's freedom and autonomy are all signs of autocratization. However, in the current scenario where companies, not states, drive this shift, we hesitate to consider the rise of data-driven plutocrats an instance of autocratization in the traditional sense. Instead, the growing influence of technology corporations on the public sphere suggests a more substantial change in the nature of democratic politics itself – a move away from democracy by other means, the full complexity of which is yet to come into view.

The growing power of tech companies is not limited to the United States and Western democracies. Autocratic countries like China have also recognized the importance of technology for achieving social and economic progress. In this pursuit, the Chinese government has not only welcomed investments, at least initially, from U.S. venture capital firms into its tech businesses, but has also created protective regulations to allow these businesses to be more competitive than their U.S. counterparts.Footnote35 These measures have helped several private companies, such as Alibaba, Baidu, and Tencent, to accumulate economic wealth unprecedented in Chinese communist history. However, unlike in Western democracies, this economic power has not translated into social, political, and cultural power. Even before the recent crackdown by the Chinese Communist Party (CCP), tech companies have had to align their goals with the party’s political objectives. They have had to act as the party’s surrogates, obligated to provide any required data and reinforce the party’s ideological agenda on social media.Footnote36 Consequently, in comparison to the West, the evolution of tech companies in China indicates a more clear-cut path towards reinforcing state authoritarianism.

Structural change 2: citizens as data and the rise of automated governance

The second structural change that results from the use of AI in democratic processes concerns the relationship between governments and their citizens. This change is particularly evident in two domains: first, in the methods used to collect, aggregate, and shape citizens’ preferences, and second, in the evolving practices of criminal justice administration and law enforcement.

When assessing citizens’ preferences, governments and other political actors increasingly rely on the aforementioned corporate practices that blur the line between citizens and consumers and commodify public discourse. With access to digital data, they can leverage knowledge of citizens’ economic and cultural choices to anticipate and influence their political preferences.Footnote37 For instance, in the U.S., purchasing habits like buying vegan products could be used to infer someone’s likelihood of supporting the Democratic Party and thus the type of political messages to which they would be susceptible. Implemented on a large scale, practices of this kind are known as demos scraping.Footnote38 They involve employing AI and other automated tools to continuously collect and analyse citizens’ “digital traces,” encompassing everything from their social media interactions to browsing habits. The outcome is intended to provide a comprehensive view of societal and political trends, applicable to a broad spectrum of uses, from the routine practices of public administration to the high-stakes decisions made by political representatives.Footnote39 Proponents argue that this approach represents a superior method for gauging the nation’s pulse and reaching optimal and efficient decisions. Critics, however, warn that such continuous surveillance risks reducing citizens to passive sources of data rather than active participants in the political process.Footnote40

While nudging citizens’ preferences, demos scraping, and AI-based decision-making may conflict with key tenets of democracy, we argue that it is premature to conclusively label these trends as a move towards authoritarianism. Instead, we propose that they signify a departure from traditional democratic ideals – where consensus is typically forged through the clash of diverse interpretations, debate, and negotiations – towards some version of hyper-technocracy and (liberal) paternalism.Footnote41 This shift could potentially herald a new form of digital democracy – a society characterized by high levels of certainty that can still be distinguished from the one subjected to autocratic control. Yet, given the tenuous boundary between certainty and control, it might merely represent a transitional phase on the path towards the autocratization of democratic systems.

The recent surge of AI-driven technologies in law enforcement and criminal justice could lend weight to the latter assessment. AI systems have become integral to areas such as sentencing, parole, and assessing the risk of crime recidivism. The police, in particular, have been quick to adopt AI-supported surveillance cameras and facial recognition software to detect potential threats in public spaces.Footnote42 Besides continuous surveillance and the overextension of what is considered a threatening situation in non-digital public spaces, a central issue with these technologies is their propensity to produce biased and discriminatory output. Often trained on already biased data and built on models embedded in flawed theories, AI technologies used in law enforcement and criminal justice may mask rather than alleviate social ills such as racism and classism, potentially even exacerbating them due to the possibility of these systems’ large-scale deployment.Footnote43 Adding to these challenges is the problem of AI’s explainability.Footnote44 When the rationale behind algorithmic decisions is either inaccessible or too costly to discern, it significantly weakens due process and justice systems’ transparency, pushing democracies towards practices more commonly associated with autocracies.

In autocratic regimes, too, AI has fundamentally changed the relationship between citizens and the government, but in contrast to democracies, these changes more readily qualify as authoritarian consolidation. China is a paradigmatic case. Supported by tech companies aligned with party interests, the CCP has amplified state power by involving itself ever more pervasively in its citizens’ lives. This has been realized through practices such as automated online censorship, extensive surveillance of both social media and non-digital public spaces, and the implementation of a controversial social scoring system.Footnote45 While online censorship has a long history in China, as exemplified by the long-standing “Great Firewall,” state agencies have only recently begun to rely on AI-powered technologies to monitor politically sensitive content more effectively and comprehensively and to enforce its stricter suppression.

Under the pretext of maintaining national security and promoting social stability, China has also initiated a project known as “Sharp Eyes.” This initiative involves deploying hundreds of millions of surveillance cameras and facial recognition systems to forge an “omnipresent, fully integrated, always working and fully controllable” national surveillance network.Footnote46 The aim is to usher in the era of “smart cities,” improving crime prevention, law enforcement, and overall urban administration, including traffic management. Yet, it has become increasingly evident that the system could also be harnessed for broader authoritarian surveillance. This is also true for the “social credit system.”Footnote47 While the extent to which AI is integrated into this system is still debated, its purpose is to evaluate citizens based on their perceived trustworthiness. The system draws on a range of data, such as individuals’ legal, medical, and financial histories, as well as their digital footprint, to formulate scores, which are used to determine whether individuals will receive certain social benefits or face restrictions.

China is not the only authoritarian government employing digital technologies and AI to bolster state authority and repress its citizens.Footnote48 Through its Digital Silk Road initiative, China has become a major provider of digital infrastructure to developing and authoritarian states seeking a cost-effective path towards digital development. As a result, instances of digital authoritarianism can be observed in the Philippines, Thailand, Ethiopia, Bangladesh, Colombia, and Guatemala, suggesting that this model of authoritarian politics is spreading.Footnote49 In fact, a 2021 study revealed that, of the 64 nations employing Chinese “safe and smart city” technologies, 41 were classified by Freedom House as “not free” or “partly free”.Footnote50 Yet, while the import of Chinese digital infrastructure like surveillance cameras and facial recognition systems may steer many states towards digital authoritarianism, it is crucial to recognize that the authoritarian hold is also reinforced by Western technology imports – especially in states that have the means to diversify their technological sources, such as Saudi Arabia.Footnote51

Structural change 3: divided societies and declining trust in democracy

The third structural change that AI’s micro-level capabilities bring about concerns the relationship between citizens and the structure of the environment in which they exercise their political rights. The transformation we observe here includes intensifying political fragmentation, pernicious polarization, distrust, and apathy. As various studies indicate, AI might not be the direct cause of these challenges, but its extensive use is set to considerably worsen them.Footnote52 Indeed, while algorithms by themselves did not create “echo chambers,” they have significantly contributed to disseminating hate speech and fake news.Footnote53 Similarly, although exposure to fake information on social media during the 2016 U.S. presidential election was not the sole factor responsible, it contributed significantly to persuading a notable segment of former Obama voters to vote for Donald Trump.Footnote54 Finally, synthetic media such as deepfakes may not necessarily mislead citizens, but they both increase citizens’ uncertainty about the truthfulness of content and decrease their trust in news sources.Footnote55

For democracies, high levels of polarization, fragmentation, and distrust point towards democratic regression. They signal that society is no longer able to reach common ground on fundamental matters, that political participation is in decline, and that there is a growing lack of trust in political institutions. These structural challenges are not easily solved by those seeking to strengthen democratic institutions. First, democratic harm of this sort builds up over time, coming from different people using technology in different ways.Footnote56 As a result, it is difficult to identify a single cause or assign clear responsibility. Additionally, these issues often arise not from overt violations of citizens’ rights such as privacy, but from individuals consenting to being profiled and targeted. Finally, they are the outcomes of political struggle, however imperfect, and in democracies outcomes cannot be prescribed in advance.

For autocratic regimes, by contrast, growing polarization might be beneficial: guided by the principle of divide and conquer, many autocrats deliberately foster societal fragmentation. Additionally, autocrats can use AI-based technologies to disrupt democracies, as evidenced by Russian interference in elections in the U.S. and throughout Europe. Large language models (LLMs), by generating convincing and seemingly authentic news articles within seconds, could render such authoritarian misinformation campaigns abroad even “more potent, more scalable, and more efficient.”Footnote57

Structural change 4: AI as a focal point of democratic-authoritarian competition

The fourth and final structural change that the pursuit of AI brings about occurs at the international level, manifesting as new forms of competition between democracies and autocracies. As governments recognize the transformative potential of AI, their desire to gain a competitive advantage has started to fuel a fierce competition for data, hardware, and talent – a competition headed by the U.S. and China. At the same time, democracies and autocracies are engaged in a struggle over values. With the need for global rules on AI becoming more pressing, the question of whose norms, principles, and standards should guide AI development and deployment has become a focal point of international regulatory debates.

In recent years, the U.S.-China tech rivalry has shifted from targeting specific companies and addressing unfair economic practices such as industrial espionage to encompassing each country’s entire technological ecosystem.Footnote58 The U.S. has enforced export restrictions on technology to China, highlighted by Joe Biden’s CHIPS Act, and limited Chinese technologies in its domestic market, notably with Donald Trump’s crackdown on Huawei’s 5G network.Footnote59 Chinese measures likewise encompass export controls on critical technological assets through the 2020 Export Control Law, the Unreliable Entities List, and tighter antitrust laws that restrict foreign technological investments and acquisitions.Footnote60 From the perspective of both countries, pursuing technological self-sufficiency and establishing safeguards against supply chain disruptions like those experienced during the COVID-19 pandemic is now viewed as essential for national security, economic resilience, and geostrategic supremacy.

Consequently, AI is accelerating a global shift away from free markets and globalization towards techno-protectionism and digital sovereignty. This transformation is characterized by reduced technological investments and exports, increased subsidies for domestic production, and the localization of data.Footnote61 The shift is already apparent within the E.U., which has emphasized the need for technological independence and the protection of fundamental rights in a world increasingly shaped by America First and Made in China 2025 strategies.Footnote62 The danger, however, is that techno-protectionism in one region will produce a cascading effect in other regions, risking the emergence of a “New Age of Autarky.”Footnote63

The competition for AI technologies between autocracies and democracies extends beyond a simple “Race to AI,” encompassing also a “Race to AI Regulation”Footnote64 – a phenomenon driven by governments’ growing ambition to convert their AI governance models into global standards. The E.U. initially led the way with its proposed AI Act, hailed as the first comprehensive AI regulatory framework. Yet, seeking strategic advantage, the U.S. and China have also emerged as key players in AI regulation. Anu Bradford identifies three distinct and largely incompatible regulatory models: the U.S. market-driven model favouring minimal government interference, the E.U.'s model focusing on rights and democratic values, and China’s state-centric model prioritizing economic and social control, often at the expense of individual rights.Footnote65 Each model reflects distinct priorities: the U.S. values techno-optimism and innovation freedom, the E.U. stresses fundamental rights and fair markets, and China focuses on reinforcing state power and political authority.Footnote66 Regarding the worldwide influence of these models, evidence suggests the Chinese approach is becoming more dominant, with nations that import Chinese AI technologies and infrastructure tending also to embrace its form of digital authoritarianism. For their part, democracies are hesitant to follow the U.S.’s “techno-libertarianism” and instead favour the E.U.'s more protective approach.Footnote67

Finally, the large number of AI-related multilateral initiatives provides additional settings in which governments compete and cooperate over steering the direction of AI regulation. As a consequence, we are witnessing the emergence of a comprehensive global governance framework for AI, though this governing landscape is still nascent and fragmented.Footnote68 Aiming to simultaneously tap into AI’s potential and ward off its various harms, a range of organizations – from the G7 and G20 to UNESCO, the OECD, and the Council of Europe – are producing a plethora of documents to guide the direction of AI development.Footnote69 These efforts have given rise to an array of guiding principles such as “human-centric AI,” “trustworthy AI,” “ethical AI,” and a “future-proof approach to AI.” Yet, for our purposes, it is important to note that this burgeoning governance field also reveals significant fault lines and divergent visions among nations, with some debates sharply delineated along the lines of democracy versus autocracy.

As the previous sections have demonstrated, the transformations AI may hold for democracies and autocracies are extensive. The link between AI and democratization is made and remade in various contexts simultaneously – in how individuals interact with the products of technology corporations, how governments use AI in their provision of public services, how citizens communicate with each other, and how governments negotiate in international fora. Against this background, this special issue is best understood as a map that charts the territory from the micro-level impact of AI on individuals to how this impact culminates in structural changes within democracies and autocracies as well as in the relations between them. In the subsequent section, we show how the individual contributions to this special issue fit into this map.

Contributions to this special issue

Contributors to this special issue draw on a variety of theoretical perspectives from Political Science, Science and Technology Studies, Surveillance Studies, and International Relations to explore the most pressing questions that AI’s rapid integration into political life has sparked. Taken together, they chart the territory we previously outlined, shifting from AI’s micro-level influences to its cumulative structural effects on state behaviour and international relations. Furthermore, in a burgeoning research area that yearns for simple answers, our contributions uncover complexities, tensions, and paradoxes. They illuminate how the E.U. tries to export a regulatory model grounded in democratic values, while its technology exports simultaneously enable authoritarian consolidation abroad. They shed light on how activists accuse state agencies in democracies of techno-authoritarian tendencies, while these same agencies try to uphold democratic values vis-à-vis corporate providers of technology. All in all, our contributions provide those interested in the intersection of AI and democratization with a synoptic yet nuanced conceptual toolkit for further engagement with the topic.

First, Ahmed Maati, Mirjam Edel, Koray Saglam, Oliver Schlumberger and Chonlawit Sirikupt offer a new theoretical lens on how the micro-level impacts of digital information technologies translate into structural threats to democracy. They start by outlining how these technologies can shape citizens’ psychopolitical dispositions and identify the two most damaging features of digital information environments: the abundance of contradictory information and the use of personalized algorithms. They illustrate two possible effects of these developments. First, citizens can experience information overload, leading them to become politically apathetic. Second, they may expose themselves to information that confirms their existing beliefs, further exacerbating political polarization and impeding social cohesion. To support their claims, Maati and colleagues leverage case studies of protests against COVID-19 measures in Germany and the 2021 attack on the U.S. Capitol. Additionally, they show how misinformation campaigns caused some citizens to distance themselves from democratic debates altogether. In the context of this special issue, the focus of this article is on the impact of digital technologies at the micro-level of society, viewing AI as a catalyst that intensifies pre-existing digital tensions in democracies.

While AI-supported technologies may drive some to political apathy or attacks on democratic institutions, others see the arrival of AI as a trigger for activism to prevent democracies from veering towards authoritarianism. As Hendrik Schopmans and İrem Tuncer-Ebetürk demonstrate, the growing use of AI systems by government agencies has incited fears among civil society actors that AI’s most pertinent threat stems from its abuse for authoritarian surveillance and control. The authors argue that in campaigns against facial recognition technology in the E.U. and the U.S., activists use problem frames embedded in techno-authoritarian imaginaries, defined as collectively held visions of an undesirable future in which democratic rights are systematically curtailed by states and private actors. Yet, the content of these imaginaries is highly context-dependent. Although facial recognition technology is framed as a tool of oppression in both cases, activists construct diverging ideas of whose rights are under threat: in the U.S., the technology has been portrayed primarily as a tool of racial suppression, whereas in the E.U., it has been contested as a tool of totalitarian surveillance that endangers all citizens.

The growing tension between citizens, private corporations, and the state over upholding democratic principles in the use of AI-enhanced technologies also forms the backbone of Matthias Leese’s detailed study of the development and use of predictive policing tools. Drawing on interviews carried out in 12 police departments across Germany and Switzerland, Leese challenges the common portrayal of police as mainly concerned with adopting new technological tools to increase efficiency and effectiveness. Instead, he highlights their internal struggles over aligning new technology with democratic principles and maintaining digital sovereignty. He finds that police officers often prefer in-house tools due to a lack of trust in private actors and the opaque nature of their technology. The article also addresses accountability issues that arise from sharing crime data with private companies, which can compromise the police’s responsibility to protect citizens’ data. Leese’s article therefore emphasizes that the push for digital sovereignty is not confined to civil society alone. Police departments, too, seek to maintain digital sovereignty to uphold democratic principles. The study highlights the intricate interplay between technology providers and state institutions and underscores the importance of accountability in the technology choices made by state actors.

Whereas the previous articles show that in established democracies, complex negotiations about the use of AI are taking place between citizens, corporations, and the state, two of our contributions offer insight into political dynamics over digital technologies that play out in autocracies. More specifically, they cover how authoritarian governments utilize AI and related technologies to repress, control, and surveil their populations, and how they deploy them as tools for regime survival. First, Dara Conduit demonstrates that digital technologies expand the repertoire of authoritarian governments beyond “classical” top-down tools of repression, enabling more adaptive and subtle forms of control. Examining the case of the Syrian Electronic Army (SEA), she shows how the Syrian regime manipulates its closeness to “patriotic hackers” – semi-independent, voluntary hacker groups – to ensure regime survival and stabilization. The SEA serves two crucial purposes. First, the group’s activities blur the line between government and hacker activity, giving citizens a sense of an omnipresent regime. Second, the group plays a key role in building legitimacy for the regime by, at times, presenting itself as an independent civil society group that backs the government and is willing to engage in risky behaviour to demonstrate its support. The SEA is, therefore, an example of how authoritarian regimes can leverage the perceived decentralized nature of digital actors and tools to their advantage.

Gözde Böcü and Noura Al-Jozawi also delve into how authoritarian states use AI and other digital technologies, examining the case of Turkey. While the article primarily focuses on the use of these technologies in border and migration governance, it argues that their implications extend further. Specifically, they are employed for the repression of political opposition and civil society actors. Additionally, the article explores how international dynamics influence the development of these repressive capabilities. They demonstrate the significant role the E.U. has played in building Turkey’s repressive digital infrastructure. Initially intended to assist Turkey in managing its borders during increased migration, this technology has evolved into a means for surveillance and control over those deemed enemies of the regime. Considering these unintended consequences, the authors underscore the need for critical reflection on the transferability of digital technologies in governance and politics, as well as on their origins.

As can be seen, how governments use AI systems – and the extent to which their practices are contested – differs starkly between democratic and authoritarian contexts. Yet, what kind of AI is used in one state, how, and by whom is not independent of other states. As our final set of contributions emphasizes, studying the patterns of conflict and cooperation between regimes at the international level offers important insights into what kind of technologies governments seek to acquire, and which norms and values they consider when using them. In their article, Akın Ünver and Arhan Ertan investigate the role technology imports play in countries’ democratization and autocratization. Specifically, they ask whether acquiring AI from the U.S. or China renders importing countries more authoritarian or places them at a technological disadvantage. Contrary to popular belief, they find no clear correlation between importing AI technology from China or the U.S. and democratic backsliding or autocratization. Rather, they find that income level differences between countries more accurately predict importing countries’ preferences: less affluent nations tend to favour more affordable Chinese technology, while wealthier nations prefer U.S. technology.

States, however, do not only export hard technology: they are also keen to export their own norms and standards for how AI should be used, i.e. the aforementioned “race to AI regulation.” In this context, Steven Feldstein investigates the E.U.’s efforts to develop an AI regulatory framework as a case of norm-building and diffusion. Focusing particularly on the implications of the European Artificial Intelligence Act (AI Act), he offers two perspectives on whether the E.U. framework on AI will influence global norms. On the one hand, the framework’s diffusion could be facilitated by the E.U.’s “first-mover advantage,” providing a regulatory model for others to emulate. On the other hand, Feldstein points out that AI, as a new technology, poses specific challenges for the diffusion process. Countries often feel the need to catch up with technological innovations to preserve their competitive advantage, which can lead to a reluctance to regulate AI and to take its effects on democracy and human rights seriously.

Seeking to provide a deeper analysis of the E.U.'s AI regulation, Jelena Cupać and Mitja Sienknecht ask what kind of instruments the E.U. envisions to shield democracy against AI harm. Grounded in a systemic understanding of deliberative democracy, they begin their analysis by distinguishing between rights-based and systemic AI harm. Rights-based harm involves using technology to limit people’s participation in the democratic process. Systemic harm, by contrast, emerges from technology creating or exacerbating societal and political conditions that hinder democratic deliberation, such as fragmentation, polarization, distrust, and political apathy. With this framework as a guide, the authors identify not only the AI Act but also several other EU documents crucial for safeguarding democracy from AI: the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Proposal for a Regulation on the Transparency and Targeting of Political Advertising, along with various non-binding texts. By analysing these documents, the authors pinpoint four main strategies the E.U. envisions to insulate democracy from AI harm: prohibition, transparency, risk management, and digital literacy. In addition to detailing these instruments, the authors assess them critically, noting, among other things, that although they provide a relatively robust defence against rights-based harm, they are unlikely to effectively mitigate the systemic harm arising from AI’s widespread adoption.

The E.U. is not the only multilateral setting in which steps are being taken to regulate AI, including its harmful impact on democracy. In fact, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in November 2021, represents the first global normative framework for AI development and deployment. In his article, Michal Natorski delves into the vigorous negotiations that preceded the document’s adoption, noting that these discussions unveiled several controversies among UNESCO member states. Drawing on a unique set of primary sources, including written positions and recorded deliberations, he explains how a worldwide compromise on AI regulation was reached, notwithstanding the varied perspectives of UNESCO member states, which reflect a spectrum of liberal and sovereigntist preferences. Building upon Boltanski’s pragmatic sociology, the author conceptualizes the practice of multilateral negotiations and attributes the multilateral compromise to two embedded types: structural normative hybridity and situated normative ambiguity. He further shows how these compromises allowed member states to reach an agreement by linking macro-normative structures with the situated debates of multilateral negotiations.

Conclusion

As recent years have shown, the impacts of AI on democratic and authoritarian politics no longer reside in the hypothetical realm: from nudging individuals’ preferences to surveilling citizens’ movements in public, AI systems are transforming politics in the here and now. And yet, it is important to recognize that we are only at the beginning of AI’s journey into the political realm. The release of ChatGPT by OpenAI in 2022, a landmark event that provided people across the world with access to an advanced AI system, coincided with a major shift in global politics: For the first time in many years, the number of closed autocracies surpassed that of liberal democracies, rolling back the global state of democracy to a level last observed in 1986.Footnote70 For a technology that has already demonstrated its potential to exacerbate societal divisions and facilitate mass surveillance, this backdrop offers little optimism that AI will be used to promote democratization. And it is in this context that we enter 2024 – the biggest election year in history, with elections in 76 countries, including 43 democracies – a year in which we will witness AI’s first major foray into political campaigning.

Despite the considerable challenges we have sketched out in this introduction, an AI-driven wave of autocratization is not a foregone conclusion. As the AI Safety Summit and a plethora of other national and multilateral initiatives demonstrate, policymakers are increasingly determined to take measures that protect democratic institutions and processes against AI-related risks. Yet, understanding the exact nature of these challenges is the indispensable precondition for developing sound policies. It is this understanding that our special issue aims to contribute to through new empirical data and innovative theoretical concepts. Most importantly, our synoptic view makes a case for going beyond narrow conceptions that portray AI’s impact on democracy in terms of confined risks such as misinformation campaigns or deepfakes in the context of elections. Instead, we argue that this impact should be understood as a multifaceted phenomenon unfolding across various societal levels, where the micro-level feature of automated hyper-personalization paves the way for macro-structural changes in the nature of democratization. Our special issue therefore aims to provide a broader framework to structure research in this complex and rapidly evolving field. We hope that present and future research in this area can inspire new approaches by civil society, public-private partnerships, and regulatory bodies to align AI with democratic principles – and thus ensure AI’s development and deployment do not remain the sole preserve of those aiming for political manipulation and profit.

Acknowledgments

The authors wish to express their gratitude to Yushu Soon for her assistance and patience during the literature review for this article, and to the editors for their insightful comments.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Jelena Cupać

Jelena Cupać is a Post-Doctoral Research Fellow at the WZB Berlin Social Science Center. She holds a PhD from the European University Institute (EUI) in Florence. Her research explores the transformation of international organizations, the global anti-gender movement, and efforts to govern AI's impact on democracy.

Hendrik Schopmans

Hendrik Schopmans is a Guest Researcher at the WZB Berlin Social Science Center. He holds a PhD in Political Science from Free University Berlin and an MPhil in International Relations from the University of Oxford. His research focuses on the role of expertise and knowledge in the creation of global governance architectures for artificial intelligence.

İrem Tuncer-Ebetürk

İrem Tuncer-Ebetürk is a Post-Doctoral Research Fellow at the WZB Berlin Social Science Center. She holds a PhD in Sociology from Emory University, Atlanta, USA. Her research areas are global and transnational sociology, and political sociology.

Notes

5 Diamond, “Liberation Technology.”

6 Tufekci, Twitter and Tear Gas; Shirky, “The Political Power of Social Media”; Reuter and Szakonyi, “Online Social Media and Political Awareness”; Diamond, “Liberation Technology.”

7 Cottle, “Media and the Arab Uprisings”; Hamanaka, “The Role of Digital Media”; Hounshell, “The Revolution Will be Tweeted.”

8 Morozov, Net Delusion.

9 Gohdes, “Repression Technology”; Dragu and Lupu, “Digital Authoritarianism and the Future of Human Rights.”

10 Frantz, Kendall-Taylor, and Wright, “Digital Repression in Autocracies”; Choi and Jee, “Differential Effects of Information and Communication Technology”; Keegan, “Corruption and Digital Authoritarianism.”

11 Polyakova and Meserole, “Exporting Digital Authoritarianism.”

12 Morgus, “The Spread of Russia’s Digital Authoritarianism.”

13 Miller and Vaccari, “Digital Threats to Democracy.”

14 Lyon, “Surveillance, Snowden and Big Data.”

15 Persily, “Can Democracy Survive the Internet?”

16 Lührmann and Lindberg, “A Third Wave of Autocratization.”

17 Cardon, Cointet and Mazières, “Neurons Spike Back.”

18 Kuhlmann, Stegmaier and Konrad, “The Tentative Governance of Emerging Science and Technology.”

19 Bareis and Katzenbach, “Talking AI into Being.”

20 Schopmans, “From Coded Bias to Existential Threat.”

21 Zuboff, The Age of Surveillance Capitalism, 351–75.

22 Ibid., 8.

23 Ibid., 369.

24 See: Kertysova, “Artificial Intelligence and Disinformation”; McKay and Tenove, “Disinformation as a Threat to Deliberative Democracy”; Tenove, “Protecting Democracy from Disinformation”; Kaplan, “Artificial Intelligence, Social Media, and Fake News”; Lee, “Fake News, Phishing, and Fraud”; Diakopoulos, “Automating the News”; García-Orosa et al., “Algorithms and Communication”; Keller et al., “Social Bots in Election Campaigns”; Habgood-Coote, “Deepfakes and the Epistemic Apocalypse”; Hameleers et al., “You Won’t Believe What They Just Said!”; Jacobsen and Simpson, “The Tensions of Deepfakes.”

25 See: Brkan, “Artificial Intelligence and Democracy”; Djeffal, “AI, Democracy and the Law”; Hinsch, “Differences That Make a Difference”; Ienca, “On Artificial Intelligence and Manipulation”; Milan and Agosti, “Personalisation Algorithms and Elections.”

27 Maerz et al., “A Framework for Understanding Regime Transformation”.

28 Merkel and Lührmann, “Resilience of Democracies”; Lührmann and Lindberg, “A Third Wave of Autocratization.”

29 Maerz et al., “A Framework for Understanding Regime Transformation.”

30 Lührmann and Lindberg, “A Third Wave of Autocratization.”

31 Bradford, Digital Empires; Zuboff, The Age of Surveillance Capitalism; Varoufakis, Technofeudalism.

32 Nemitz, “Constitutional Democracy and Technology.”

33 Susskind, The Digital Republic, 111.

34 Ibid., 88.

35 Bradford, Digital Empires, 92; Mallaby, The Power Law.

36 Bradford, Digital Empires, 88–91.

37 O’Neil, Weapons of Math Destruction, 179–97.

38 Ulbricht, “Scraping the Demos.”

39 Ulbricht, “Scraping the Demos,” 427; König, “Citizen Conceptions of Democracy”; Chiusi et al., Automating Society; Coglianese and Ben Dor, “AI in Adjudication and Administration”; Kuziemski and Misuraca, “AI Governance in the Public Sector.”

40 Ulbricht, “Scraping the Demos,” 433; König and Wenzelburger, “Between Technochauvinism and Human-Centrism,” 139.

41 Helbing et al., “Will Democracy Survive Big Data,” 80; König, “Citizen Conceptions of Democracy,” 5; König and Wenzelburger, “Opportunity for Renewal or Disruptive Force?”

42 Raso et al., “Artificial Intelligence & Human Rights”; Bennett Moses and Chan, “Algorithmic Prediction in Policing.”

43 O'Neil, Weapons of Math Destruction.

44 See: Büthe et al., “Governing AI – Attempting to Herd Cats?”; Gunning et al., “DARPA’s Explainable AI (XAI) Program”; Samek et al., Explainable AI.

45 Bradford, Digital Empires, Chapter 2.

46 Schell, “Technology Has Abetted China’s Surveillance State.”

47 For an overview, see: Bradford, Digital Empires, 87–8.

48 Feldstein, “How Artificial Intelligence is Reshaping Repression”; Feldstein, The Rise of Digital Repression.

49 Feldstein, The Rise of Digital Repression; Zaman, “Mechanisms of Digital Authoritarianism”; Wilson, “The Anti-Human Rights Machine.”

50 Kynge et al., “Exporting Chinese Surveillance.”

51 Feldstein, “The Global Expansion of AI Surveillance.”

52 Allcott and Gentzkow, “Social Media and Fake News in the 2016 Election”; Bennett and Livingston, “The Disinformation Order.”

53 Stark et al., “Are Algorithms a Threat to Democracy?”; Cho et al., “Do Search Algorithms Endanger Democracy?”

54 Gunther, Beck, and Nisbet, “‘Fake News’ and the Defection of 2012 Obama Voters.”

55 Vaccari and Chadwick, “Deepfakes and Disinformation.”

56 Smuha, “Beyond the Individual,” 10.

57 Buchanan et al., “Truth, Lies, and Automation.”

58 For a comprehensive overview, see: Bradford, Digital Empires, 183–220.

59 Ibid., 196.

60 Ibid., 200–7.

61 Floridi, “The Fight for Digital Sovereignty”; Pohle and Thiel, “Digital Sovereignty.”

62 Calderaro and Blumfelde, “Artificial Intelligence and EU Security.”

63 Malcomson, “The New Age of Autarky.”

64 Smuha, “From a ‘Race to AI’ to a ‘Race to AI Regulation’”; Bradford, Digital Empires.

65 Bradford, Digital Empires, 33–145.

66 For disagreements within models, see: Schopmans and Cupać, “Engines of Patriarchy.”

67 Bradford, Digital Empires, 287–9.

68 Schmitt, “Mapping Global AI Governance.”

69 Cihon et al., “Fragmentation and the Future.”

Bibliography

  • Allcott, Hunt, and Matthew Gentzkow. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31, no. 2 (2017): 211–236.
  • Bareis, Jascha, and Christian Katzenbach. “Talking AI Into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.” Science, Technology, & Human Values 47, no. 5 (2022): 855–881.
  • Bennett Moses, Lyria, and Janet Chan. “Algorithmic Prediction in Policing: Assumptions, Evaluation, and Accountability.” Policing and Society 28, no. 7 (2018): 806–822.
  • Bennett, W. Lance, and Steven Livingston. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33, no. 2 (2018): 122–139.
  • Benkler, Yochai, Robert Faris, and Hal Roberts. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press, 2018.
  • Bradford, Anu. Digital Empires: The Global Battle to Regulate Technology. Oxford: Oxford University Press, 2023.
  • Bradford, Anu. The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press, 2020.
  • Brkan, Maja. “Artificial Intelligence and Democracy: The Impact of Disinformation, Social Bots and Political Targeting.” Delphi 1, no. 2 (2019): 66–71.
  • Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. Truth, Lies, and Automation: How Language Models Could Change Disinformation. Washington: Center for Security and Emerging Technology, 2021.
  • Büthe, Tim, Christian Djeffal, Christoph Lütge, Sabine Maasen, and Nora von Ingersleben-Seip. “Governing AI – Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence.” Journal of European Public Policy 29, no. 11 (2022): 1721–1752.
  • Calderaro, Andrea, and Stella Blumfelde. “Artificial Intelligence and EU Security: The False Promise of Digital Sovereignty.” European Security 31, no. 3 (2022): 415–434.
  • Cardon, Dominique, Jean-Philippe Cointet, and Antoine Mazières. “Neurons Spike Back: The Invention of Inductive Machines and the Artificial Intelligence Controversy.” Reseaux 5, no. 211 (2018): 173–220.
  • Chiusi, Fabio, Brigitte Alfter, Minna Ruckenstein, and Tuukka Lehtiniemi. Automating Society Report 2020. Berlin: Algorithm Watch & Bertelsmann Stiftung, 2020.
  • Cho, Jaeho, Saifuddin Ahmed, Martin Hilbert, Billy Liu, and Jonathan Luu. “Do Search Algorithms Endanger Democracy? An Experimental Investigation of Algorithm Effects on Political Polarization.” Journal of Broadcasting & Electronic Media 64, no. 2 (2020): 150–72.
  • Choi, Changyong, and Sang Hoon Jee. “Differential Effects of Information and Communication Technology on (De-) Democratization of Authoritarian Regimes.” International Studies Quarterly 65, no. 4 (2021): 1163–1175.
  • Cihon, Peter, Matthijs M. Maas, and Luke Kemp. “Fragmentation and the Future: Investigating Architectures for International AI Governance.” Global Policy 11, no. 5 (2020): 545–556.
  • Coglianese, Cary, and Lavi M. Ben Dor. “AI in Adjudication and Administration.” Brook. L. Rev 86, no. 3 (2020): 791–838.
  • Cottle, Simon. “Media and the Arab Uprisings of 2011: Research Notes.” Journalism 12, no. 5 (2011).
  • Diakopoulos, Nicholas. Automating the News: How Algorithms are Rewriting the Media. Cambridge: Harvard University Press, 2019.
  • Diamond, Larry. “Liberation Technology.” Journal of Democracy 21, no. 3 (2010): 69–83.
  • Djeffal, Christian. “AI, Democracy, and the Law.” In The Democratization of Artificial Intelligence, edited by Andreas Sudmann, 255–284. Bielefeld: Transcript, 2019.
  • Dragu, Tiberiu, and Yonatan Lupu. “Digital Authoritarianism and the Future of Human Rights.” International Organization 75, no. 4 (2021): 991–1017.
  • Feldstein, Steven. The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance. Oxford: Oxford University Press, 2021.
  • Feldstein, Steven. “How Artificial Intelligence is Reshaping Repression.” Journal of Democracy 30, no. 1 (2019): 40–52.
  • Feldstein, Steven. “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace, September 17, 2019. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.
  • Floridi, Luciano. “The Fight for Digital Sovereignty: What It Is, and Why it Matters, Especially for the EU.” Philosophy & Technology 33 (2020): 369–378.
  • Frantz, Erica, Andrea Kendall-Taylor, and Joseph Wright. “Digital Repression in Autocracies.” V-Dem Working Paper 27 (2020).
  • García-Orosa, Berta, João Canavilhas, and Jorge Vázquez-Herrero. “Algorithms and Communication: A Systematized Literature Review.” Comunicar 31, no. 74 (2023): 9–21.
  • Gohdes, Anita. “Repression Technology: Internet Accessibility and State Violence.” American Journal of Political Science 64, no. 3 (2020): 488–503.
  • Gunning, David, Eric Vorm, Yunyan Wang, and Matt Turek. “DARPA’s Explainable AI (XAI) Program: A Retrospective.” Authorea (2021).
  • Gunther, Richard, Paul A. Beck, and Erik C. Nisbet. “‘Fake News’ and the Defection of 2012 Obama Voters in the 2016 Presidential Election.” Electoral Studies 61 (2019): 1–17.
  • Habgood-Coote, Joshua. “Deepfakes and the Epistemic Apocalypse.” Synthese 201, no. 103 (2023): 1–23.
  • Hamanaka, Shingo. “The Role of Digital Media in the 2011 Egyptian Revolution.” Democratization 27, no. 5 (2020): 777–796.
  • Hameleers, Michael, Toni G.L.A. van der Meer, and Tom Dobber. “You Won’t Believe What They Just Said! The Effects of Political Deepfakes Embedded as Vox Populi on Social Media.” Social Media + Society 8, no. 3 (2022): 1–12.
  • Helbing, Dirk, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen Van Den Hoven, Roberto V. Zicari, and Andrej Zwitter. “Will Democracy Survive Big Data and Artificial Intelligence?” In Towards Digital Enlightenment, edited by Dirk Helbing, 73–98. Cham: Springer, 2019.
  • Hinsch, Wilfred. “Differences That Make a Difference: Computational Profiling and Fairness to Individuals.” In The Cambridge Handbook of Responsible Artificial Intelligence, edited by Silja Voeneky, Philipp Kellmeyer, Oliver Mueller, and Wolfram Burgard, 229–251. Cambridge: Cambridge University Press, 2022.
  • Hounshell, Blake. “The Revolution Will be Tweeted.” Foreign Policy, June 20, 2011, https://foreignpolicy.com/2011/06/20/the-revolution-will-be-tweeted/.
  • Ienca, Marcello. “On Artificial Intelligence and Manipulation.” Topoi 42 (2023): 1–4.
  • Jacobsen, Benjamin N., and Jill Simpson. “The Tensions of Deepfakes.” Information, Communication & Society (2023): 1–15. https://www.tandfonline.com/doi/full/10.1080/1369118X.2023.2234980.
  • Kaplan, Andreas. “Artificial Intelligence, Social Media, and Fake News: Is This the End of Democracy?” In Digital Transformation in Media & Society, edited by Ayşen Akkor Gül, Yıldız Dilek Ertürk, and Paul Elmer, 149–161. Istanbul: Istanbul University Press, 2020.
  • Keegan, Katrina. “Corruption and Digital Authoritarianism: Political Drivers of E-Government Adoption in Central Asia.” Democratization 31, no. 1 (2023): 23–46.
  • Keller, Tobias R., and Ulrike Klinger. “Social Bots in Election Campaigns: Theoretical, Empirical, and Methodological Implications.” Political Communication 36, no. 1 (2019): 171–189.
  • Kertysova, Katarina. “Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation Is Produced, Disseminated, and Can Be Countered.” Security and Human Rights 29, no. 1-4 (2018): 55–81.
  • König, Pascal D. “Citizen Conceptions of Democracy and Support for Artificial Intelligence in Government and Politics.” European Journal of Political Research 62, no. 4 (2023): 1280–1300.
  • König, Pascal D., and Georg Wenzelburger. “Between Technochauvinism and Human-Centrism: Can Algorithms Improve Decision-Making in Democratic Politics?” European Political Science 21 (2021): 132–149.
  • König, Pascal D., and Georg Wenzelburger. “Opportunity for Renewal or Disruptive Force? How Artificial Intelligence Alters Democratic Politics.” Government Information Quarterly 37, no. 3 (2020): 101489.
  • Kuhlmann, Stefan, Peter Stegmaier, and Kornelia Konrad. “The Tentative Governance of Emerging Science and Technology—A Conceptual Introduction.” Research Policy 48, no. 5 (2019): 1091–1097.
  • Kuziemski, Maciej, and Gianluca Misuraca. “AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings.” Telecommunications Policy 44, no. 6 (2020): 101976.
  • Kynge, James, Valerie Hopkins, Helen Warrell, and Kathrin Hille. “Exporting Chinese Surveillance: The Security Risks of ‘Smart Cities’.” Financial Times, June 9, 2021. https://www.ft.com/content/76fdac7c-7076-47a4-bcb0-7e75af0aadab.
  • Lee, Nicole M. “Fake News, Phishing, and Fraud: A Call for Research on Digital Media Literacy Education Beyond the Classroom.” Communication Education 67, no. 4 (2018): 460–466.
  • Lührmann, Anna, and Staffan Lindberg. “A Third Wave of Autocratization Is Here: What Is New About It?” Democratization 26, no. 7 (2019): 1095–1113.
  • Lyon, David. “Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique.” Big Data & Society 1, no. 2 (2014).
  • Maerz, Seraphine, Amanda Edgell, Matthew Wilson, Sebastian Hellmeier, and Staffan Lindberg. “A Framework for Understanding Regime Transformation: Introducing the ERT Dataset.” V-Dem Working Paper 113 (2021).
  • Malcomson, Scott. “The New Age of Autarky: Why Globalization’s Biggest Winners are Now on a Mission for Self-Sufficiency.” Foreign Affairs 26 (2021).
  • Mallaby, Sebastian. The Power Law: Venture Capital and the Making of the New Future. New York: Penguin, 2022.
  • McKay, Spencer, and Chris Tenove. “Disinformation as a Threat to Deliberative Democracy.” Political Research Quarterly 74, no. 3 (2021): 703–717.
  • Merkel, Wolfgang, and Anna Lührmann. “Resilience of Democracies: Responses to Illiberal and Authoritarian Challenges.” Democratization 28, no. 5 (2021): 869–84.
  • Milan, Stefania, and Claudio Agosti. “Personalisation Algorithms and Elections: Breaking Free of the Filter Bubble.” Internet Policy Review, 7 February 2019.
  • Miller, Michael, and Christian Vaccari. “Digital Threats to Democracy: Comparative Lessons and Possible Remedies.” The International Journal of Press/Politics 25, no. 3 (2020): 333–356.
  • Morgus, Robert. “The Spread of Russia’s Digital Authoritarianism.” In Artificial Intelligence, China, Russia, and the Global Order, edited by Nicholas D. Wright, 89–97. Montgomery: Air University Press, 2019.
  • Morozov, Evgeny. Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs, 2012.
  • Nemitz, Paul. “Constitutional Democracy and Technology in the Age of Artificial Intelligence.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences A376 (2018): 20180089.
  • O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin Books, 2016.
  • Persily, Nathaniel. “Can Democracy Survive the Internet?” Journal of Democracy 28, no. 2 (2017): 63–76.
  • Pohle, Julia, and Thorsten Thiel. “Digital Sovereignty.” Internet Policy Review 9, no. 4 (2020): 1–19.
  • Polyakova, Alina, and Chris Meserole. “Exporting Digital Authoritarianism: The Russian and Chinese Models.” Brookings: Democracy and Disorder (2019).
  • Raso, Filippo A., Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Levin Kim. “Artificial Intelligence & Human Rights: Opportunities & Risks.” Berkman Klein Center Research Publication 2018-6 (2018).
  • Reuter, Ora John, and David Szakonyi. “Online Social Media and Political Awareness in Authoritarian Regimes.” British Journal of Political Science 45, no. 1 (2015): 29–51.
  • Samek, Wojciech, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Cham: Springer, 2019.
  • Schell, Orville. “Technology Has Abetted China’s Surveillance State: Beijing’s Plan for a Digital Currency Presents Yet Another Opportunity for Citizens to Be Monitored.” Financial Times, September 2, 2020. https://www.ft.com/content/6b61aaaa-3325-44dc-8110-bf4a351185fb.
  • Schmitt, Lewin. “Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape.” AI and Ethics 2, no. 2 (2022): 303–314.
  • Schopmans, Hendrik, and Jelena Cupać. “Engines of Patriarchy: Ethical Artificial Intelligence in Times of Illiberal Backlash Politics.” Ethics & International Affairs 35, no. 3 (2021): 329–342.
  • Schopmans, Hendrik. “From Coded Bias to Existential Threat: Expert Frames and the Epistemic Politics of AI Governance.” AIES ‘22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022: 627–640.
  • Shirky, Clay. “The Political Power of Social Media: Technology, the Public Sphere, and Political Change.” Foreign Affairs 90, no. 1 (2011): 28–41.
  • Smuha, Nathalie A. “From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence.” Law, Innovation and Technology 13, no. 1 (2021): 57–84.
  • Smuha, Nathalie A. “Beyond the Individual: Governing AI’s Societal Harm.” Internet Policy Review 10, no. 3 (2021): 1–32.
  • Stark, Birgit, Daniel Stegmann, Melanie Magin, and Pascal Jürgens. Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse. Berlin: AlgorithmWatch, 2020.
  • Susskind, Jamie. The Digital Republic: On Freedom and Democracy in the 21st Century. London: Bloomsbury Publishing, 2022.
  • Tenove, Chris. “Protecting Democracy from Disinformation: Normative Threats and Policy Responses.” The International Journal of Press/Politics 25, no. 3 (2020): 517–537.
  • Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press, 2018.
  • Ulbricht, Lena. “Scraping the Demos. Digitalization, Web Scraping and the Democratic Project.” Democratization 27, no. 3 (2020): 426–442.
  • Vaccari, Cristian, and Andrew Chadwick. “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.” Social Media & Society 6, no. 1 (2020): 1–13.
  • Varoufakis, Yanis. Technofeudalism: What Killed Capitalism. London: Bodley Head, 2023.
  • Wilson, Richard. “The Anti-Human Rights Machine: Digital Authoritarianism and The Global Assault on Human Rights.” Human Rights Quarterly 44, no. 4 (2022): 704–739.
  • Zaman, Fahmida. “Mechanisms of Digital Authoritarianism: The Case of Bangladesh.” SAIS Review of International Affairs 42, no. 2 (2022): 85–101.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books, 2019.
