Research Article

The European Union: assessing global leadership through actorness in artificial intelligence


ABSTRACT

This article investigates the EU’s potential global leadership and actorness in the development and diffusion of Artificial Intelligence (AI). In the EU, AI has raised societal and policy concern due to persisting evidence of the technology disregarding ethical principles and fundamental rights. To address these risks, European institutions and politicians project leadership claims for global norms underpinned by a human-centric and rights-based approach. This article qualitatively assesses the extent to which such leadership claims can be realised through the lens of actorness. It concludes that whilst EU official discourse is ambitious in tone, for the EU to realise its potential and exercise effective exemplary and diplomatic leadership in the global AI domain, it must address existing deficiencies in its actorness capabilities internally and externally.

1. Introduction

The rapid technological evolution of Artificial Intelligence (AI) over the past decade has raised societal and policy concern that the technology will autonomously evolve in a direction that disregards ethical values and human rights. In line with a coherent and functional development of the Digital Single Market, the European Union (EU) has made the mitigation of AI risks a political priority.

In April 2018, the Commission launched the European AI Strategy and a Coordinated Action Plan, aiming to make the EU a world-class hub for AI while ensuring that AI is ethical and human-centric. The Commission’s White Paper on AI, published in February 2020, set out a vision for AI in Europe, and in April 2021 the Commission presented a proposal for a regulatory framework (the AI Act) and a revised Coordinated Action Plan to ‘promote the development of AI and address the potential high risks it poses to safety and fundamental rights’ (European Commission 2021c, 2021b). The AI Act was adopted by the Parliament and the Council in March 2024. The ambition of the EU is to create rules on AI that ensure the EU’s central position as a competitive leader in the development of new AI norms, ethical AI technology and rules which foster a rights-based environment for citizens and businesses in the evolving global AI ecosystem.

This article seeks to theoretically unpack the EU’s AI leadership claims. Drawing on literature that has sought to illuminate EU leadership in other critical global governance issues as well as EU actorness (Bretherton and Vogler 2006; Drieskens 2021; Jupille and Caporaso 1998; Liefferink and Wurzel 2017; Oberthür and Dupont 2021; Renda et al. 2021; Schunz 2019; Wurzel, Liefferink, and Torney 2019), we provide a systematic, conceptually informed assessment of the EU’s leadership in AI. We explore and address the research question: what is the European Union’s potential as an international leader and actor in Artificial Intelligence? We propose a novel combined framework of leadership and actorness, analysing and measuring exemplary leadership as authority, autonomy, coherence and credibility, and diplomatic leadership as attractiveness, recognition and credibility. Given the newness of AI in the global governance milieu, we are not aiming to provide a comprehensive review of EU action, effectiveness and outcomes in the AI space. Rather, our contribution is to shed light on the EU’s potential leadership role as an actor in AI based on a more concrete assessment. Using actorness indicators, we assess the EU’s exemplary and diplomatic toolkit and skills in this evolving but critical area of global governance. The emphasis, then, is on input rather than action, output or outcome at this formative stage of the EU’s AI journey, and we acknowledge that the internal and external aspects are dynamic and interconnected in the constitution of the EU’s leadership. We recognise that this brings with it certain limitations in terms of our approach and findings, but we maintain that there is merit in such an assessment of EU leadership and actorness, even if only at a certain point in time. Indeed, it provides a more informed, substantive premise for measuring the EU’s progress as a leader in this field going forward. To articulate our argument, the article is structured in three parts. First, we chart an analytical framework within which to understand EU leadership and actorness in AI and explain our methodology. Second, we provide context for understanding the EU’s ambitions within the broader AI opportunity structure. Third, we apply our framework and analyse the EU’s exemplary and diplomatic leadership in AI through a qualitative assessment of internal and external actorness indicators. We conclude that whilst EU official discourse is ambitious in tone, for the EU to realise its potential and exercise effective exemplary and diplomatic leadership in the global AI domain, it must address existing deficiencies in its actorness capabilities internally and externally.

2. Conceptualising the EU as an actor and leader in AI

From a think tank or policy perspective, insightful work exists on the extent to which the EU’s AI Act will have a de facto or de jure ‘Brussels Effect’ internationally (Engler 2022; Siegmann and Anderljung 2022), how and to what degree it coheres with the EU’s expansive digital acquis (Bogucki et al. 2022) and what issues it raises for international cooperation (Meltzer and Tielemans 2022). Broader works also recognise the importance of strengthening international cooperation on AI in order to ensure that the challenges and opportunities such a technology represents are navigated in a safe, secure and ethical manner (Kerry et al. 2021). Academically, although more comprehensive works have been published that analyse the EU approach from a legal and regulatory (Harasimiuk and Braun 2021) and governance perspective (Büthe et al. 2022; Calcara, Csernatoni, and Lavallée 2020), there has been less focus on how the EU’s approach can translate to leadership in AI. Work on EU leadership and its role in facilitating and influencing bilateral, multilateral and multistakeholder processes, deliberations and negotiations on AI in order to achieve its goals is emergent at best, focusing, for example, on leadership in relation to EU policy for drones and AI (Csernatoni and Lavallée 2020) and on the false promise of digital sovereignty in AI and EU security (Calderaro and Blumfelde 2022). Comparative works seek to assess the EU’s claims to an ‘ethical’ and ‘human-centric’ approach against other emerging international governance frameworks for AI, such as that being developed within the Council of Europe (Almada and Radu 2023).

In addition to analyses of the Brussels effect, we suggest that connecting leadership and actorness can provide a broader contribution to existing accounts; that is, actorness indicators can help provide a nuanced and informed account of what type of leadership the EU can exert. Here, it is recognised that there is an abundance of relevant literature that engages with and seeks to conceptualise the EU as a power – civilian (Orbie 2006; Telo 2007), normative (Manners 2002, 2021), market (Damro 2012, 2021), ethical (Aggestam 2008), etc. Indeed, conceptualisations of the EU as a power are intellectually germane to the discussion of the EU’s leadership and actorness. This said, such conceptualisations, which limit the EU to a single source of power, do not allow a broader consideration of the EU as a hybrid actor that is able to employ and exert its power and act, in multiple ways, to demonstrate its leadership. Whilst acknowledging the weaknesses of the actorness approach (see Drieskens 2017, 2021), we posit that there is considerable merit in utilising actorness indicators to provide a systematic assessment of the potential leadership qualities and attributes of the EU in AI, and indeed in other issue areas where the EU is globally active. Well-established conceptualisations of EU actorness (Bretherton and Vogler 1999, 2006, 2013; Drieskens 2017, 2021; Jørgensen 2009; Jupille and Caporaso 1998; Renda et al. 2021; Sjöstedt 1977) provide a way of understanding the underlying capacity, capability, identity and credibility of the EU in its internal and external dimensions. Connecting actorness and leadership provides us with an innovative approach that sheds light on what the EU can do within the confines of the framework conditions that underpin its action and the opportunity structure within which it operates in AI. In other words, actorness indicators allow us to add further conceptual depth, specificity and nuance to the leadership types discussed.

Further, following others who have sought to utilise leadership and distinguish it from effects and effectiveness (Bäckstrand and Elgström 2013; Elgström and Smith 2006; Liefferink and Wurzel 2017; Oberthür and Dupont 2021; Oberthür and Roche Kelly 2008), we take the position that the EU’s high-level ambition in the AI space qualifies it as a potential leader in this field. We distinguish between those actors that intend simply to attract followers and those that have the intention of leading whether others follow or not. The EU in the field of AI, we contend, represents the former rather than the latter; it intends to create ‘global’ AI norms with the ambition that they will be adopted worldwide to ensure the ethical and human-centric use of the technology within an excellence and trust-based framework. Moreover, it intends to ensure, in the context of accelerated digitalisation, that it can leverage its normative and regulatory power, and diplomacy, to advance its digital sovereignty – through the AI Act, but also more broadly in relation to setting codes of conduct and standards for other new and emerging technologies (Bendiek and Stürzer 2022). Whilst others have sought to offer extended conceptualisations of leadership (Liefferink and Wurzel 2017; Wurzel, Liefferink, and Torney 2019), in this article, following Oberthür and Dupont (2021), we simplify our definition of leadership to the ‘exemplary’, which incorporates the EU’s ambitions, its legislation and its outputs and outcomes; and the ‘diplomatic’, which captures the ability of the EU to influence through its authority and power, knowledge and ideas, and ability to negotiate.

It should be noted that the internal (exemplary leadership) and external (diplomatic leadership) dimensions are not mutually exclusive; both intersect and interrelate in shaping the EU’s leadership. What happens internally in shaping and constructing the EU’s exemplary leadership can impact credibility (depending on internal actorness indicators, coherence, autonomy, authority) and determine the way the EU can leverage its influence (e.g. regulatory, the Brussels effect, and more broadly digital sovereignty) and how it can project a unified position on an EU framework for global use of AI. Its diplomatic (leadership) capability determines, in turn, the extent to which it can persuade (or coerce) others of its exemplary leadership, but also the attractiveness of the EU’s model and approach (ideas, knowledge, norms, values) and how other key actors within the AI opportunity structure view and recognise the EU as a legitimate, attractive and credible partner in this field (external actorness indicators). We conceive the credibility indicator to be important in both the internal (exemplary) and external (diplomatic) dimensions of the analytical framework.

Finally, we posit that to understand the EU’s ability to lead, we need to contextualise the framework conditions that affect its actorness (Oberthür and Dupont 2021). Here, the EU’s institutional and policy authority, autonomy and coherence, as well as the broader international opportunity structure and the attractiveness, recognition and credibility of the EU, are important indicators of actorness (see Table 1). Internally, it is suggested that the legal-institutional policy milieu for AI will significantly shape how the EU acts and can lead in this area. Thus, to the extent that the ordinary legislative procedure (OLP) applies to the EU AI Act, we might expect facilitation but also inter- and intra-institutional contestation and deliberation in the agreement and adoption of this regulation by EU institutions. Similarly, given that there is no exclusive EU competence in this area, vertical incoherence (between EU institutions and Member State approaches to AI) may impact on the ability of the EU to coordinate and project international policy and exert diplomatic presence. Externally, the EU operates within the international opportunity structure for AI – within different spaces, processes and with relevant actors – and how it constructs policy internally is shaped by the international environment in both its social (culture, discourse, values) and material (power, interests) dimensions (Drieskens 2021; Jørgensen 2009). We acknowledge – even though it is not the central focus of the article – that the international environment and organisations within which the EU operates (Costa and Jørgensen 2012), in relation to AI, can and does affect the construction and substance (ideas, norms, etc.) of the EU’s approach.

Table 1. Actorness Indicators and Leadership.

3. Methodological considerations

Table 1 summarises our analytical framework. Our approach merits reflection on methodological issues in the measurement of actorness, and indeed how this translates to leadership. Importantly, we apply the existing framework developed by the ‘TRends In Global Governance and Europe’s Role’ (TRIGGER) project (CEPS 2023; Renda et al. 2021) to assess actorness indicators. We rely on data that are qualitative in nature and make our judgement of levels of actorness on this basis, drawing from the TRIGGER criteria for operationalising and evaluating actorness. Related to such a qualitative measurement, in Table 2 we suggest that a judgement can be made on the degree to which an indicator has been achieved (high, medium or low; Collins 2021) and subsequently on the EU’s potential leadership in the AI field.

Table 2. Actorness–Leadership Scale.

Our working assumption in positing this actorness-leadership model is that, in a qualitative sense, the ‘higher’ the actorness indicators at any given point in time, the more potential there is for the EU to realise its ambition and exercise leadership. The TRIGGER framework allows us to differentiate and assess the progress and development of each actorness indicator to more precisely identify the shortcomings and opportunities for leadership in this field, with the understanding that this is mediated by the framework conditions discussed: the dynamic EU institutional and global opportunity structure that exists for AI.

In applying the actorness framework, our analysis uses EU policy documents (speeches by key officials, Strategies, Action Plans, proposed Acts, and any related legislative proposals) and broader literature on AI (think tank and academic sources) as primary data points. We used manual text analysis to locate key features of each actorness indicator and triangulated this with evidence from in-depth interviews conducted with policy officials and experts from the field. These experts were based in the European Commission, European Parliament, Centre for European Policy Studies, The Brookings Institution, and Center for AI and Digital Policy, providing insights on EU AI policy developments from inside the EU as well as abroad. The interviews serve as secondary data points in our assessment.

Finally, we recognise that we are only able to capture the EU’s potential leadership at a given point in time through actorness in what is a dynamic field of study. Nonetheless, we deem that our contribution will allow a substantive assessment of EU strengths and limitations in AI and provide valuable insight on constraints and opportunities for realising EU ambitions and objectives going forward.

4. AI opportunity structure

Internationally, the opportunity structure for the EU to act on AI relates to the increasingly geo-economic and geopolitical nature of AI development and use, coupled with the lack of a binding international framework to mitigate the main risks emerging from AI. Meanwhile, internally, the opportunity structure for EU actorness arises from the need to build capacity and to protect the rights and freedoms of EU citizens in the use of AI and other digital technologies.

AI is predicted to contribute up to $15.7 trillion to the global economy by 2030 and approximately 14% of GDP growth (PWC 2020), but China and the US, as leaders in this field, have invested far more in research and innovation than leading European states, with the US remaining the world’s largest investment market (Arnold 2020). As recognised by the European Commission, ‘[f]aced with the rapid technological development of AI and a global policy context where more and more countries are investing heavily in AI, the EU must act as one to harness the many opportunities and address challenges of AI in a future-proof manner’ (European Commission 2021c). The need to strengthen the EU’s competitiveness in AI becomes even more urgent in strategic terms if the EU wants to meet its leadership ambition in AI, which it sees as sitting ‘at the crossroads of geopolitics, commercial stakes and security concerns’ (European Commission 2021c). The EU has recognised the urgency of attracting more AI investment, to the tune of €20 billion annually (European Commission 2019). This said, whilst European private investment in AI increased from $2 billion to nearly $6 billion from 2020 to 2021, it is clear that the EU still has a long way to go before reaching the investment levels of the US ($53 billion) and China ($17 billion) over the same period (Engler 2022).

Structural competitive factors are important, but so are ethical, legal and normative factors related to the opportunities and risks that the rapid development of AI brings with it. Threats to fundamental rights and security risks of AI frequently surface when considering the wider EU policy context. For instance, in the EU Security Union Strategy (European Commission 2020), the danger of misuse of AI by criminals or for launching cyber-attacks is highlighted and reliable AI is deemed necessary. Indeed, whilst it is widely recognised that AI pervades all aspects of life and can have a positive impact, it is clear that this is dependent on a political approach grounded in legally binding rules to ensure that human rights and fundamental freedoms are protected.

The Regulation for Harmonised Rules on Artificial Intelligence, or the AI Act (European Parliament and Council 2024), introduces a risk-based approach to the development and deployment of AI in the EU digital single market. The regulation provides a legal basis for its ethical principles but limits regulatory obligations mainly to high-risk AI systems. The requirements for high-risk AI systems include the establishment, maintenance and documentation of data and technical processes, the logging of activities, high-quality datasets, clear information for the user, human oversight measures, and a high level of robustness, security and accuracy. The AI Act also prohibits certain uses of AI, including subliminal techniques or social scoring by governments. This risk-based approach seeks to balance competitiveness with trustworthiness.

5. Internal actorness

So to what extent does the EU have the credentials to act and lead within such an opportunity structure? What do its internal actorness indicators tell us about its potential exemplary leadership in AI? In what follows, we review the actions of the three EU institutions involved in the OLP – the Council of the European Union, the European Commission and the European Parliament – individually and as a whole, to assess the degree of internal actorness.

5.1. Authority

Within the EU, AI mostly falls under the single market competence, which forms the legal basis for the AI Act under Article 114 of the Treaty on the Functioning of the European Union (TFEU) and Article 16 of the TFEU on the protection of personal data (European Commission 2016). Considering the definition of ‘authority’, understood as the legal personality and the competence to act under the Actorness framework, we now analyse the legal/institutional authority of each EU institution.

In its 2016 Conclusions, the European Council provided the mandate to the European Commission to prepare an AI strategy and, in October 2020, to proceed with the AI Act and Coordinated Action Plan. At the level of the Council of the EU, a Working Party on Fundamental Rights, Citizens’ Rights and Free Movement of Persons analysed the topic of AI, including the compatibility of automated systems with EU values, and the 2020 German Presidency continued the work in its Conclusions on AI and Fundamental Rights (Council of the EU 2020). Within the Council of the EU, the AI Act was assigned to the Internal Market and Industry Council, with each six-monthly rotating presidency taking the lead on negotiations of the file.

Within the European Commission, AI is part of the digital single market strategy and led by the Directorate-General (DG) for Communications Networks, Content and Technology (CNECT) in consultation with other DGs. Work on AI started with the publication of a Communication on AI for Europe and the establishment of a High-Level Expert Group (European Commission 2018), followed by a Communication on Building Trust in Human-Centric AI (European Commission 2019) and a White Paper on a European Approach for Excellence and Trust in AI (European Commission 2020). This culminated in the proposal for a regulation laying down harmonised rules on Artificial Intelligence (European Commission 2021d). The High-Level Expert Group on AI also produced and piloted an Assessment List on Trustworthy AI (ALTAI) for entities deploying AI systems. Vertically, the European Commission aims for coherence with the Member States, primarily through the 2018 Coordinated Action Plan and its 2021 update. The latest Coordinated Action Plan (European Commission 2021a) sets out three joint actions in its quest for EU global leadership on trustworthy AI: (1) accelerating investments in AI technologies through EU funding and fully implementing AI strategies and programmes; (2) aligning AI policy to remove fragmentation; and (3) addressing global challenges. To remove fragmentation, the review proposed six policy areas for focused joint actions.

The European Parliament, in turn, has been quite prolific in its activities on AI. Its reports cover diverse topics and were led by Members of the European Parliament (MEPs) from different political parties. From June 2020 to Spring 2022, a Special Committee on Artificial Intelligence in a Digital Age (AIDA) was established to study the ‘impact and challenges of rolling out AI, identify common EU-wide objectives, and propose recommendations on the best ways forward’ (European Parliament 2021). The AIDA Committee published a large number of working papers and organised regular hearings to exchange views with external experts from government, industry and civil society. Within the European Parliament, the AI Act negotiations were co-led by two rapporteurs (from the Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs) under a joint committee procedure. The AI Act as a legislative file was highly prominent, attracting over 3,000 amendments.

Overall, we can say that the EU is equipped with the necessary competences by the Treaties to act on AI. One notable exception in the scope of the AI Act is the use of AI in and for defence: here, the EU does not have exclusive competences to act, as laid down in the Treaty on European Union (Article 42; European Commission 2016). The degree of EU authority to act on AI as a policy area under the TFEU is thus medium.

5.2. Autonomy

With regard to ‘autonomy’, a criterion complementary to ‘authority’ that goes beyond legal competence, the Actorness framework includes strategies and narratives, types of decision-making power, the involvement of Member States, and the availability of resources to implement policy. In terms of strategies and narratives, as demonstrated throughout this article, policy initiatives across institutions are prolific and the salience of the topic is high. Commission speeches and press releases that mention AI, for instance, span a wide range of mandates (Europe Fit for the Digital Age; Internal Market; Innovation, Research, Culture, Education and Youth; Interinstitutional Relations and Foresight; Values and Transparency; Justice; Home Affairs; Equality), highlighting the perceived importance and impact of AI in their policy fields. However, human and financial resources are harder to find. Interview respondents from multiple EU institutions emphasised that human and financial resources are currently limited, and where internal capacity on specific and mostly technical issues, such as general-purpose AI, is lacking, the institutions necessarily draw on external consultants. In addition, as AI fits within broader overarching digital strategies, coordination is required across policy teams, DGs and institutions; such coordination, though, is far from evident in terms of horizontal policy coherence (Interviews, 2022). Also, the increased spending on AI through various funding mechanisms and programmes shows the EU’s willingness to attract greater investment, even though in relative terms such investments are not internationally competitive (as detailed in Section 4).

Further, the EU’s autonomy to act on AI depends on the extent to which EU institutions are involved in the implementation of the AI Act. Ensuring compliance and overseeing enforcement mechanisms by supervisory authorities is almost exclusively the competence of Member States, which will need to enforce the provisions through their existing or newly established national agencies and regulators. Here, the effective authority of the EU may be limited once the AI Act has fully entered into force by 2026. While the European Commission issues information and guidance, hosts expert meetings, and accompanies Member States in the application of regulations, the EU institutions as such do not have a formal role throughout the implementation process. Instead, national authorities will have to assess highly technical and complex questions to oversee the enforcement of the AI Act’s provisions across a range of sectors and applications, a task which most agencies have not dealt with so far. A similar case in point is the GDPR, where certain underequipped national data protection authorities were perceived as a ‘weak link’ in the system and hampered the EU’s formal authority to enforce the GDPR across the entire EU Digital Single Market (Collins 2021). To support the implementation, the final agreement foresees a more comprehensive implementation structure at EU level (European Commission 2024). Central to the coordination of the oversight mechanisms will be the AI Office. Its announced tasks are to support the coherent implementation of the AI Act across Member States by assisting in the evaluation and classification of models and investigating possible infringements, among other coordination and evaluation tasks. The AI Office shall collaborate with the European Artificial Intelligence Board, consisting of representatives of EU Member States, and with the European Centre for Algorithmic Transparency (ECAT). Additional multi-stakeholder entities support the institutions: a Scientific Panel, composed of independent experts, has been announced with the purpose of involving the scientific community, and an Advisory Forum will assemble technical expertise by convening a diverse array of stakeholders such as industry, startups, SMEs, academia, think tanks and civil society. Further, the AI Office may collaborate with individual experts and organisations to facilitate exchange between providers. However, at the time of writing, most of these entities have not yet been established and/or started their activities, which leaves open questions around the actual division of competences between the newly established EU entities and Member States, as well as the effective involvement of, and coordination between, relevant stakeholders.

Overall, the EU’s internal autonomy to act on AI is medium due to an increased (but still not internationally competitive) budget, the widespread references to AI as a key EU policy issue across institutions and departments (albeit with a lack of technical expertise), and the uncertainty around effective legislative supervision and enforcement (despite reinforced institutions at EU level).

5.3. Coherence

Generally, EU institutions have all individually recognised the importance of acting on AI. As with many legislative files, however, the AI Act negotiations were characterised by divergent aims and approaches between the Council, the Parliament and the Commission. The three institutions finalised the OLP on a very tight timeline, but political and technical divergences over, for example, the definition of ‘AI’, what should be considered ‘high-risk’ AI, how to regulate general-purpose AI, and the enforcement mechanisms, put the internal coherence of the EU into question. In what follows, we assess the positions of the three EU institutions individually.

Although often seen as the most ‘neutral’ entity in the literature, the Commission appeared to be a committed negotiator and proactively defended the 2021 draft proposal. This positioning is in line with findings on the Commission’s behaviour during trilogues (Panning 2021). The Parliament’s (shadow) rapporteurs are usually divided along party and interest lines when negotiating legislative files. Despite divergent approaches over the definition and what counts as a high-risk AI application, all rapporteurs collaborated closely in the Parliament negotiations and negotiated concessions between the parties to facilitate alignment on one united position of the Parliament. This single position was perceived to strengthen the Parliament’s leverage during the trilogues. One interviewee was struck by the collegiality of the MEPs during this process (Interview, 2022). One of the reasons for this alignment is the belief in the ability of political institutions to reach positive outcomes, as well as genuine urgency about the impact of AI and real concerns about enormous concentrations of power, the lack of accountability, and major fiascos with AI systems used to discriminate and cause harm (Interview, 2022). These elements provided a sense of common purpose and a willingness among Parliament staff to work together constructively in view of the impact for the EU at large. In the Council, dividing lines between Member States likewise existed over the definition and scope of the high-risk category, with little room for compromise both among Member States and between institutions (Interview, 2022).

During the trilogues, in which the Commission, Parliament and Council had to agree on one final version of the legislative text of the AI Act, the institutions’ positions aligned where convenient. For example, the Parliament collaborated with the Commission on some articles while working closely with the Council on others to reach a compromise. Similarly, where two parties disagreed with particular aspects, they joined forces to tweak the proposal to their preference. Both technical and political matters were discussed between accredited parliamentary assistants and Commission and Council staff, who shouldered the bulk of the AI Act amendment work. However, with its adoption and subsequent implementation, redundancies with other digital policies may impact the effectiveness and added value of the AI Act. Amid the current proliferation of EU digital legislation (the DSA/DMA, Cybersecurity Act, Data Governance Act and Data Act, among others), the legislative overlaps as well as potential gaps will only be revealed over time (Bogucki et al. 2022).

To summarise, the significant disagreements among institutions, limited coordination among Member States, and potential redundancies or overlaps between the AI Act and other digital policies lead us to conclude that the EU’s internal coherence on AI policy is currently low.

5.4. Credibility in exemplary leadership

The AI Act is the culmination of EU efforts to influence the development and use of AI and will put the EU on the AI policy map in a way that lagging investments in AI have not achieved thus far. Considering credibility in exemplary leadership, understood as being a reliable actor by adhering to one’s own commitments, we can only partly assess this criterion at this stage, based on our analysis of the EU’s internal actorness indicators. The current perceived urgency to steer the future of these landmark technologies, coupled with the EU’s objective to harmonise AI development and legislation across its 27 Member States and project its approach beyond the EU, demonstrates commitment to deliver a comprehensive answer to the issues posed by AI. Likewise, the broad impact of the AI Act on EU market developments and on innovation remains to be seen, as does the EU’s ability to develop effective mechanisms to enforce the rules under the AI Act. Regarding the perception of trustworthiness and credibility by key EU stakeholders, there is broad support for human-centred AI governance, but the details of implementation are disputed: civil society organisations continue to heavily criticise the AI Act for insufficient human rights safeguards (European Digital Rights (EDRi) 2023), while industry associations complain about overly restrictive measures (BusinessEurope 2023). Overall, acknowledging the partial assessment at this point, the internal credibility of the EU to act on AI is medium. The internal credibility will also, of course, depend on external actorness indicators, to which we turn next.

6. External actorness

At the international level, the EU’s self-declared aim is to lead the international development and deployment of trustworthy AI based on European norms, values and principles. This ambition is reflected in numerous EU policy documents (the Coordinated Plan on AI, the AI White Paper, the AI Act), speeches and non-papers. The AI Act proposal (European Commission 2021d) stated that its regulatory requirements are ‘largely consistent with other international recommendations and principles, which ensures that the proposed AI framework is compatible with those adopted by the EU’s international trade partners’. One of the tasks of the newly established AI Office is to ‘contribute to a strategic, coherent, and effective EU approach’ internationally by ‘promoting the EU’s approach to trustworthy AI’ and ‘support[ing] the development and implementation of international agreements on AI’. What does the analysis of external actorness indicators tell us about the EU’s potential diplomatic leadership in AI?

6.1. Attractiveness

In the Actorness framework, attractiveness is understood as the willingness of other actors to cooperate with the EU, on both hard and soft law. More generally, the EU’s market size and power, clearly recognised by the literature on the so-called Brussels Effect (Bradford 2019), and the EU’s overall notable track record of pursuing and implementing policies based on its values and principles speak for the attractiveness of the EU as an international partner. These observations are also relevant to AI policy, where like-minded partners are generally interested in discussing the EU’s ideas and approach to AI in relevant multilateral fora and through bilateral partnerships, agreements and dialogues. This does not necessarily imply that all constituent parts of the EU approach must be attractive to others, but that more broadly there is a zone of agreement on at least one of these parts that will allow meaningful collaboration and cooperation, rather than confrontation and competition, in relation to the emerging global AI regime. We can see this most clearly with non-EU ‘European’ countries, including Switzerland, Iceland, Norway, Ukraine and the Vatican, as well as with countries such as South Korea, Singapore and Japan, where high degrees of convergence exist with the EU’s approach ethically, normatively and in regulatory terms (Interview, 2022; see also Bradford 2019). In turn, the EU’s rights-based, human-centred and risk-based approach is said to be less attractive to non-democratic states that seek to utilise AI to capture political influence, for purposes of mass surveillance and for societal control. As one interviewee noted, ‘I don’t think [the EU] can influence Saudi Arabia, but I do think [the EU] can influence Mexico or Israel’ (Interview, 2022).

This said, even when like-minded states are convergent in a regulatory sense, or there might be an economic incentive to cooperate, the picture is complicated by existing tensions in relation to the EU’s ethical approach (e.g. its high-risk AI classifications). More concretely, whilst there is certainly convergence between the EU and US on pushing and projecting a ‘Western’ philosophy for the evolution and development of AI technology based on shared values and standards, some have highlighted that ‘there are still fundamental differences on what types of values … should be at the forefront’, for example in relation to AI-driven autonomous weapons (Scott 2023) or the extent to which competition rules are applied, especially to big tech (Roberts et al. 2021). Such tensions in terms of values and the perceived value of cooperation with the EU are exacerbated further by alternative technological offerings from leading AI states such as China, which has been successful in selling its products – for example, for facial recognition and surveillance purposes – to countries globally, in particular in Latin America, even though such countries have a historical and legal connectivity and closer alignment with EU values.

In the corporate realm, the attractiveness of cooperation with the EU, certainly for larger tech companies, will primarily be driven by ensuring that competitive advantage is not lost through non-compliance with EU regulations on AI and other applicable legislation. This said, leading commentators have also highlighted that whilst ‘it is reasonable to assume that many foreign manufacturers will adapt the AI Act’s rules, and once they have, will often have a strong incentive to keep any domestic laws as consistent as possible with the EU’ (Engler 2022), the EU is not the only organisation setting the rules on AI. Those corporations most likely to be affected will have a say in shaping AI rules – whether through lobbying the EU institutions directly in the negotiations or implementation, or by exerting influence in global and European standard-setting organisations. This, in addition to the activity of other governments in influencing standards for AI, will also determine the attractiveness of any finally agreed EU rules and norms in this area (Engler 2022) and their implementation and impact on the governance of AI. All in all, the outstanding questions around the compatibility of the regulatory framework with other nations’ AI governance frameworks, as well as alignment with the private sector and standards, lead us to conclude that the EU’s external attractiveness on AI is medium.

6.2. Recognition

Under the Actorness framework, recognition is defined as being seen by others as a legitimate negotiation partner with formal status as an actor. Increasing political and social polarisation in many countries has made it difficult to achieve even simple legislation in many jurisdictions. Comparatively, then, the EU has certainly demonstrated leadership and gained international recognition with precedent-setting digital policy files, as the adoption of the GDPR and the resulting ‘Brussels Effect’ suggests. The EU’s ability to propose and pass legislation on key enabling technologies is particularly important regarding AI. Not only because AI is a highly technical and complex topic for policymakers, who often lack specific technical knowledge, but also because effective global governance mechanisms on AI are lacking, the EU is internationally recognised for pursuing ‘purposeful’ and ‘effective’ legislation (Interview, 2022). Other significant actors are engaged in a similar AI policy process and are clearly being influenced by EU legislation and initiatives on AI. This said, whilst China recognises the EU as an authority in this area and borrows broad elements from the AI Act in its regulatory frameworks, it does so with the aim of internally implementing and exporting a distinct ‘Beijing Effect’ (Erie and Streinz 2021), underpinned by a normatively and ideologically different national framework of social control and national security.

The AI Act demonstrates the EU’s willingness to use its regulatory power, even if this has caused tensions with like-minded countries in the past. For example, the US has long criticised the EU’s approach for being overly strict and thus hampering innovation. More recently, however, the US has come closer to the EU’s position on certain issues due to the consequences of non-compliance, including legal disputes and high fines for companies in the past, as well as the ongoing competition with China. Other geopolitical developments, such as the war in Ukraine and the increasing relevance of lessening dependence on other countries for key technologies, including AI – often termed digital sovereignty and open strategic autonomy – are additional reasons for both the EU and the US to collaborate closely. Both pursue a ‘shared interest in supporting international standardisation efforts and promoting trustworthy AI on the basis of a shared dedication to democratic values and human rights’ (European Commission 2021b), and the work on AI is pursued in the EU-US Trade and Technology Council (TTC) under a dedicated Work Stream in Working Group 1 – Technology Standards. A dedicated ‘Joint Roadmap for Trustworthy AI and Risk Management’ aims to align risk-based approaches, standardisation and interoperable shared terminologies between the EU and the US. As the TTC negotiations took place in parallel to the AI Act negotiations, the EU increased its recognition as an international actor in two ways: it took forward its own regulatory process through input from working-level exchanges with the US, while at the same time demonstrating commitment to align key terms before the AI Act comes into force. Ultimately, the EU has established a strong partnership with the US as one of the leading countries in AI development and investment, while also promoting its risk-based and human-centric approach to governing AI within the partnership through the TTC.

At the same time, the EU is also trying to find alternative solutions to regulatory divergences, such as AI sandboxes, diplomatic means, and engagement in international fora and conferences. Digital agreements featuring AI include the EU-Japan Digital Partnership (May 2022), the EU-South Korea Digital Partnership (November 2022) and the EU-Singapore Digital Partnership (February 2023). In all agreements, both parties commit to promoting trustworthy AI without further specifying details. Considering emerging and developing economies, it remains to be seen whether the EU strategy and its AI Act can exert influence on governments to adopt an ethical and human-centred approach to AI. To this end, the EU has strengthened digital cooperation with the African Union, for example through the AU-EU Digital Economy Task Force, which aims to drive digital transformation and market integration in Africa. The EU aims to strengthen ties with countries close to it, such as Australia and Argentina, and to collaborate with regions in Africa, South America and Southeast Asia, where the EU believes it can still potentially have an impact on the development of technology and AI (Interview, 2022). These joint agreements and bilateral engagements indicate a notable level of recognition of the EU as a digital actor, and initiatives on AI are frequently mentioned within them.

However, also considering the divergence with most authoritarian and non-democratic states, we categorise the EU’s recognition as medium. Ultimately, as interview evidence suggests, the AI Act can only become globally relevant, and the EU recognised further as a legitimate leader and partner, if the EU continues to build alliances and consensus with like-minded states (Interview, 2022; see also Renda 2022).

6.3. Credibility in diplomatic leadership

Credibility relates to whether other global governance actors have good reason to believe that the EU will follow through on what it commits to do, based on the results of the analysis of the external actorness indicators (attractiveness and recognition) as per the actorness framework. In terms of external actorness, the EU’s diplomatic leadership is reliant on interest-based and normative convergence on key values, and on regulatory outcomes. On the former, the EU must be attractive as a model (economic and political) and be recognised as an actor with the necessary geopolitical and geoeconomic presence and power to influence the global governance of AI. We have concluded in the previous two sections that in terms of both of these actorness indicators, we can classify the EU’s actorness as medium. If the EU cannot demonstrate tangible, meaningful results for all stakeholders from the AI Act that reflect the work of its political institutions, then the EU’s ability to influence others is significantly diminished. Alongside a convergence of ideas and interests, there is a need for effective supervision and enforcement of the AI Act to ensure the EU strengthens its credibility in its diplomatic leadership. The EU’s aim to lead in AI also comes with claims of digital sovereignty and strategic autonomy, but it is too early to determine whether the AI Act will allow the EU to exercise greater global outreach to impact other nations’ AI governance frameworks, especially those that are not like-minded. Additionally, the establishment of an effective AI ecosystem in the EU based on AI infrastructure, private sector investment, compute power, talent and data governance, among others, will impact the extent to which the EU may be able to strengthen its external credibility. In terms of external credibility, then, we can conclude overall that credibility is medium at this point in the EU policy cycle, with the caveat that an effective implementation and enforcement framework, and the creation of alliances and consensus with like-minded states such as the US, bilaterally and in international fora, will be key in raising its credibility in the AI field going forward. Table 3 summarises our findings on EU actorness and leadership in AI during this (pre)legislative phase.

Table 3. EU Actorness and Leadership in AI (pre-legislative stage, 2018–2024).

7. Conclusion

This article set out to provide a more nuanced conceptual purchase on the EU’s potential leadership in the AI field through its actorness. Given the formative nature of the AI domain and the fluid and contested geopolitical environment, there were limitations to the analytical discussion in relation to effectiveness, outcome and implementation. With such contextual confines in the opportunity structure acknowledged, our contribution identifies several key implications for the EU’s potential to lead in the AI field.

First, in relation to its internal actorness indicators, whilst the EU has the legal authority to act, develop and agree legislation on AI on solid legal ground based in the Treaties and its OLP, there exists a clear need for the EU to ensure that internal contestation does not adversely impact the scope, robustness and effectiveness of the agreed AI Act. There is also a need to further invest in its internal knowledge, capacity and resources in AI and other new technologies if it is to build expertise, construct and sustain strategic (autonomous, sovereign) advantage, and avoid jeopardising its autonomy to act. On coherence and credibility, how key technical, normative and governance challenges are resolved will impact on the EU’s ability to demonstrate legitimacy in the AI field through effective action and outcome. Thus, demonstrating horizontal and vertical coherence within the EU on key issues (e.g. management of high-risk AI, avoiding fragmented enforcement), and indeed in relation to broader digital legislation that has implications for AI, will be paramount. Based on our assessment of internal actorness indicators, we conclude that the EU is at a medium level of potential exemplary leadership. How much first-mover legislative advantage and credibility the EU can achieve will depend on ensuring that strong (enforcement) authority is embedded in the AI Act and that it strategically targets initiatives to further elevate its autonomy and coherence, and in turn, its credibility.

Second, in relation to its external actorness indicators, we observed that the existing regulatory power of the EU, its global geoeconomic standing, and its first-mover (legislative) advantage alongside other major players provide it, at a minimum, with attractiveness and recognition. To this end, the EU AI Act sets a precedent for other countries to develop governance on AI and acts as a source of emulation for like-minded countries. It is also a Regulation that major corporate tech players must engage and comply with to remain globally competitive. This said, although the EU AI Act offers a model based on a risk-based approach designed to balance innovation and protection from harm – a formula desired by many liberal democracies – normative attraction and regulatory convergence do not automatically imbue the EU with high levels of credibility and, in turn, with authority and power in international deliberations and negotiations in terms of diplomatic leadership. Indeed, even with like-minded states with which the EU is engaged through bilateral and multilateral fora, and where there is recognition of the EU as a credible actor in its main ideas for AI (rights-based, human-centric, ethical, risk-based), there is regulatory divergence and there are obstacles to aligned governance mechanisms, as many countries, including the US, do not prioritise comprehensive regulation but rather soft law or sector-specific approaches. If there are limits, then, to the attractiveness of the EU as a model because of ideological, normative or regulatory divergence (and competing models), the EU must elevate its recognition and credibility by continuing to build effective digital alliances to diffuse its ideas, participating in knowledge sharing and development in the AI field, building and cultivating partnerships for research and innovation in AI, and demonstrating its geoeconomic significance in the AI sector to achieve its digital sovereignty ambitions in this field. For these reasons, at this point in time, we assess the EU’s potential as a diplomatic leader as medium. Whether this will translate to influencing states that are not like-minded is doubtful, but continued dialogue with such states can provide a platform for mutual recognition of key principles and interests, creating the conditions for the development of mutual understanding on the governance of new technologies.

Finally, a core aim of this article has been to start a more conceptually oriented and nuanced conversation on how we might understand the EU’s leadership ambitions through its actorness. We recognise that the EU’s own identity and action on AI is still under construction, but also that addressing weaknesses in its actorness through a more targeted approach can help to mitigate and minimise the risks associated with the ambition of global standard setting and create the conditions through which the EU can lead effectively. Beyond the AI policy domain, we also think that the analytical framework articulated and operationalised in this article, with further refinement going forward, can be applied to a plethora of other policy issue areas where the EU seeks to play a leading global role.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Institute of Advanced Studies, University of Warwick.

References

  • Aggestam, L. 2008. “Ethical Power Europe.” International Affairs 84 (1): 1–11. https://doi.org/10.1111/j.1468-2346.2008.00685.x.
  • Almada, M., and A. Radu. 2023. “The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy.” Accessed June 9, 2023, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4592006.
  • Arnold, Z. 2020. What Investment Trends Reveal About the Global AI Landscape. Brookings Institution, 29 September 2020.
  • Bäckstrand, K., and O. Elgström. 2013. “The EU’s Role in Climate Change Negotiations: From Leader to ‘Leadiator’.” Journal of European Public Policy 20 (10): 1369–1386. https://doi.org/10.1080/13501763.2013.781781.
  • Bendiek, A., and I. Stürzer. 2022. Advancing European Internal and External Digital Sovereignty: The Brussels Effect and the EU-US Trade and Technology Council.
  • Bogucki, A., A. Engler, C. Perarnaud, and A. Renda. 2022. The AI Act and Emerging EU Digital Acquis: Overlaps, Gaps and Inconsistencies.
  • Bradford, A. 2019. The Brussels Effect: How the European Union Rules the World. USA: Oxford University Press.
  • Bretherton, C., and J. Vogler. 1999. The European Union as a Global Actor. Milton Park: Routledge.
  • Bretherton, C., and J. Vogler. 2006. The European Union as a Global Actor. 2nd ed. Milton Park: Routledge.
  • Bretherton, C., and J. Vogler. 2013. “A Global Actor Past its Peak?” International Relations 27 (3): 375–390. https://doi.org/10.1177/0047117813497299.
  • BusinessEurope. 2023. “Joint Industry Statement on the EU AI Act.” Accessed November 6, 2023, from https://www.businesseurope.eu/sites/buseur/files/media/position_papers/internal_market/2023-02-23_joint_industry_statement_-_ai_act_-_final.pdf.
  • Büthe, T., C. Djeffal, C. Lütge, S. Maasen, and N. von Ingersleben-Seip. 2022. “Special Issue: The Governance of Artificial Intelligence.” Journal of European Public Policy 29 (11): 1721–1752. https://doi.org/10.1080/13501763.2022.2126515.
  • Calcara, A., R. Csernatoni, and C. Lavallée. 2020. “Introduction: Emerging Security Technologies–An Uncharted Field for the EU.” In Emerging Security Technologies and EU Governance, edited by A. Calcara, R. Csernatoni, and C. Lavallée, 1–22. Milton Park: Routledge.
  • Calderaro, A., and S. Blumfelde. 2022. “Artificial Intelligence and EU Security: The False Promise of Digital Sovereignty.” European Security 31 (3): 415–434. https://doi.org/10.1080/09662839.2022.2101885.
  • CEPS. 2023. “TRIGGER: Trends in Global Governance and Europe’s Role.” Accessed November 6, 2023, from https://www.ceps.eu/ceps-projects/trigger/.
  • Collins, A. 2021. In D7.6 Report/Book Including the Case Studies, Reports and Lessons Learnt for PERSEUS, WP7 Deep Dives, edited by A. Renda, C. del Giovane, M. Laurer, A. Modzelewska, A. Sipiczki, T. Yeung, J. Arroyo, H. Nguyen, J. Teebken, K. Jacob, R. Ayadi, S. Ronco, and A. Collins.
  • Costa, O., and K. E. Jørgensen. 2012. “The Influence of International Institutions on the EU: A Framework for Analysis.” In The Influence of International Institutions on the EU: When Multilateralism Hits Brussels, edited by O. Costa and K.E. Jørgensen, 1–22. London: Palgrave Macmillan.
  • Council of the EU. 2020. Artificial Intelligence: Presidency Issues Conclusions on Ensuring Respect for Fundamental Rights.
  • Csernatoni, R., and C. Lavallée. 2020. “Drones and Artificial Intelligence: The EU’s Smart Governance in Emerging Technologies.” In Emerging Security Technologies and EU Governance, edited by A. Calcara, R. Csernatoni, and C. Lavallée, 206–223. Milton Park: Routledge.
  • Damro, C. 2012. “Market Power Europe.” Journal of European Public Policy 19 (5): 682–699. https://doi.org/10.1080/13501763.2011.646779.
  • Damro, C. 2021. “The European Union as ‘Market Power Europe’.” In The External Action of the European Union: Concepts, Approaches, Theories, edited by S. Gstöhl and S. Schunz, 46–58. Basingstoke: Bloomsbury.
  • Drieskens, E. 2017. “Golden or Gilded Jubilee? A Research Agenda for Actorness.” Journal of European Public Policy 24 (10): 1534–1546. https://doi.org/10.1080/13501763.2016.1225784.
  • Drieskens, E. 2021. “Actorness and the Study of the EU’s External Action.” In The External Action of the European Union: Concepts, Approaches, Theories, edited by S. Gstöhl and S. Schunz, 27–39. Basingstoke: Bloomsbury.
  • Elgström, O., and M. Smith. 2006. The European Union’s Roles in International Politics. Milton Park: Taylor & Francis.
  • Engler, A. 2022. The EU AI Act Will have Global Impact but a Limited Brussels Effect.
  • Erie, M. S., and T. Streinz. 2021. “The Beijing Effect: China’s ‘Digital Silk Road’ as Transnational Data Governance.” Accessed November 6, 2023, from https://papers.ssrn.com/abstract=3810256.
  • European Commission. 2016. “Consolidated Versions of the Treaty on European Union and the Treaty on the Functioning of the European Union.” 2016/C 202/01, Accessed June 16, 2016.
  • European Commission. 2018. Coordinated Plan on Artificial Intelligence. Communication 795 final.
  • European Commission. 2019. Building Trust in Human-Centric Artificial Intelligence. Communication 168 final.
  • European Commission. 2020. “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” Communication 65 final.
  • European Commission. 2021a. “ANNEX. Fostering a European Approach to Artificial Intelligence. Coordinated Plan on Artificial Intelligence 2021 Review.” Communication 205 final – Annex.
  • European Commission. 2021b. EU-US Trade and Technology Council Inaugural Joint Statement. European Commission.
  • European Commission. 2021c. Fostering a European Approach to Artificial Intelligence. Communication 205 final.
  • European Commission. 2021d. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Communication 206 final.
  • European Commission. 2024. “European AI Office. European Commission.” 8 March 2024. Accessed March 22, 2024, from https://digital-strategy.ec.europa.eu/en/policies/ai-office#:~:text=The%20AI%20Office%20is%20uniquely,for%20general%2Dpurpose%20AI%20models.
  • European Commission High-Level Expert Group for Artificial Intelligence (HLEG). 2019. Ethics Guidelines for Trustworthy AI.
  • European Digital Rights (EDRi). 2023. “AI Act: EU Must Protect Human Rights.” Accessed November 6, 2023, from https://edri.org/our-work/civil-society-statement-eu-close-loophole-article-6-ai-act-tech-lobby/.
  • European Parliament. 2021. AIDA – About, European Parliament. Accessed November 6, 2023, from https://www.europarl.europa.eu/committees/en/aida/about.
  • European Parliament and Council. 2024. Regulation (EU) 2024/Xxx of the European Parliament and of the Council of Xx March 2024 on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.
  • Harasimiuk, D., and T. Braun. 2021. Regulating Artificial Intelligence: Binary Ethics and the Law. 1st ed. Milton Park: Routledge.
  • Jørgensen, K. E. 2009. “The European Union in Multilateral Diplomacy.” The Hague Journal of Diplomacy 4 (2): 189–209. https://doi.org/10.1163/187119109X440906.
  • Jupille, J., and J. A. Caporaso. 1998. “States, Agency and Rules: The European Union in Global Environmental Politics.” In The European Union in the World Community, edited by C. Rhodes, 213–229. Boulder: Lynne Rienner Publishers.
  • Kerry, C. F., J. P. Meltzer, A. Renda, A. C. Engler, and R. Fanni. 2021. Strengthening International Cooperation on AI. Brookings Institution.
  • Liefferink, D., and R. K. W. Wurzel. 2017. “Environmental Leaders and Pioneers: Agents of Change?” Journal of European Public Policy 24 (7): 951–968. https://doi.org/10.1080/13501763.2016.1161657.
  • Manners, I. 2002. “Normative Power Europe: A Contradiction in Terms?” JCMS: Journal of Common Market Studies 40 (2): 235–258. https://doi.org/10.1111/1468-5965.00353.
  • Manners, I. 2021. “Normative Power Approach to European Union External Action.” In The External Action of the European Union: Concepts, Approaches, Theories, edited by S. Gstöhl and S. Schunz, 61–74. Basingstoke: Bloomsbury.
  • Meltzer, J., and A. Tielemans. 2022. The European Union and the AI Act: Next Steps and Issues for Building International Cooperation.
  • Oberthür, S., and C. Dupont. 2021. “The European Union’s International Climate Leadership: Towards a Grand Climate Strategy?” Journal of European Public Policy 28 (7): 1095–1114. https://doi.org/10.1080/13501763.2021.1918218.
  • Oberthür, S., and C. Roche Kelly. 2008. “EU Leadership in International Climate Policy: Achievements and Challenges.” The International Spectator 43 (3): 35–50. https://doi.org/10.1080/03932720802280594.
  • Orbie, J. 2006. “Civilian Power Europe: Review of the Original and Current Debates.” Cooperation and Conflict 41 (1): 123–128. https://doi.org/10.1177/0010836706063503.
  • Panning, L. 2021. “Building and Managing the European Commission’s Position for Trilogue Negotiations.” Journal of European Public Policy 28 (1): 32–52. https://doi.org/10.1080/13501763.2020.1859597.
  • PricewaterhouseCoopers. 2020. Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise.
  • Renda, A. 2022. Beyond the Brussels Effect: Leveraging Digital Regulation for Strategic Autonomy.
  • Renda, A., C. del Giovane, M. Laurer, A. Modzelewska, A. Sipiczki, T. Yeung, J. Arroyo, et al. 2021. (TRends in Global Governance and Europe’s Role - TRIGGER).
  • Roberts, H., J. Cowls, E. Hine, F. Mazzi, A. Tsamados, M. Taddeo, and L. Floridi. 2021. “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US.” Science and Engineering Ethics 27 (6). https://doi.org/10.1007/s11948-021-00340-7.
  • Schunz, S. 2019. “The European Union’s Environmental Foreign Policy: From Planning to a Strategy?” International Politics 56 (3): 339–358. https://doi.org/10.1057/s41311-017-0130-0.
  • Scott, M. 2023. Digital Bridge: Transatlantic AI Confusion — Anatomy of a (Failed) Digital Coup — the $220 Billion Tax Question, Politico. 19 January 2023.
  • Siegmann, C., and M. Anderljung. 2022. The Brussels Effect and Artificial Intelligence: How EU Regulation Will Impact the Global AI Market. ArXiv Preprint arXiv2208.12645.
  • Sjöstedt, G. 1977. The External Role of the European Community. Vol. 7. Aldershot: Gower Publishing Company.
  • Telo, M. 2007. “The EU as an Incipient Civilian Power. A Systemic Approach.” Politique Européenne 22 (2): 35–54. https://doi.org/10.3917/poeu.022.0035.
  • Wurzel, R. K. W., D. Liefferink, and D. Torney. 2019. “Pioneers, Leaders and Followers in Multilevel and Polycentric Climate Governance.” Environmental Politics 28 (1): 1–21. https://doi.org/10.1080/09644016.2019.1522033.