
Constructing expertise: the front- and back-door regulation of AI’s military applications in the European Union

Pages 1230-1254 | Received 25 Apr 2022, Accepted 24 Jan 2023, Published online: 14 Feb 2023
 

ABSTRACT

The regulation of military applications of artificial intelligence (AI) is a growing concern. The article investigates how the EU as a multi-level system aims at regulating military AI based on epistemic authority. It suggests that the EU acts as a rule-maker and a rule-taker of military AI predicated on constructing private, corporate actors as experts. As a rule-maker, the EU has set up expert panels such as the Global Tech Panel to inform its initiatives, thereby inviting corporate actors to become part of its decision-making process through the front-door. But the EU is also a rule-taker in that its approach to regulating on military AI is shaped through the backdoor by how corporate actors design AI technologies. These observations signal an emerging hybrid regulatory security state based on ‘liquid’ forms of epistemic authority that empowers corporate actors but also denotes a complex mix of formal political and informal expert authority.

Acknowledgements

We would like to express our gratitude to the editors of the SI, Andreas Kruck and Moritz Weiss, for their smooth organization of the process and their constructive input and feedback, as well as to our fellow SI contributors for their insightful comments on earlier drafts. Finally, we remain indebted to the AutoNorms project team (Anna Nadibaidze, Guangyu Qiao-Franco, and Tom Watts) for their constructive comments and support throughout.

Disclosure statement

No potential conflict of interest was reported by the authors.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1 LAWS ‘can select and apply force to targets without human intervention’ (ICRC, Citation2021).

2 We explore the potential emergence of a European RSS in the security subfield of military AI rather than providing a full picture of the ‘so-called European Military-Industrial Complex (MIC) in-the-making’ (Csernatoni, Citation2019, p. 119).

3 The AI Act explicitly does not cover military or defence applications of AI.

4 The EU’s AI HLEG has also expressed support for a ‘ban treaty’.

5 The EEAS does not consistently highlight tweets by GTP members; its articles showcase only seven tweets by five different GTP members: Risto Siilasmaa (2x, but the same tweet), Thomas Fletcher (2x), Mustafa Suleyman (2x), Sundar Pichai, and Cassandra Kelly. We discuss these tweets because their selective inclusion in EEAS reporting about the GTP gave them greater exposure. This selective highlighting is noteworthy because it showcases only the views expressed by specific GTP members, rather than providing a broader perspective.

6 However, Google was reported in 2021 to be finalizing a bid for the US-financed Joint Warfighter Cloud Capability (JWCC) project, with a multi-billion-dollar budget (Wakabayashi & Conger, Citation2021). Other Big Tech companies, such as Microsoft and Amazon, already hold US defence contracts (Brustein & Bergen, Citation2019).

7 The fact that the US Air Force has become the lead agency for Project Maven and its successor programmes evidences this trajectory (Shultz & Clarke, Citation2020).

8 The authors would like to thank an anonymous source for their contributions to this section.

9 The US, for example, co-proposed a list of principles and good practices on LAWS to the GGE on LAWS in March 2022, together with partner countries including Australia, Canada, and Japan. Notably, no EU member states are part of this proposal (Australia et al., Citation2022).

Additional information

Funding

This research is part of the AutoNorms project which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 852123.