ABSTRACT
This article takes stock of the ongoing debates on algorithmic warfare in the social sciences. It seeks to equip scholars in International Relations and beyond with a critical review of both the empirical context of algorithmic warfare and the different theoretical approaches to studying practices related to the integration of algorithms (including automated, autonomous, and artificial intelligence (AI) technologies) into international armed conflict. The review focuses on discussions about (1) the implications of algorithmic warfare for strategic stability, (2) the morality and ethics of algorithmic warfare, (3) how algorithmic warfare relates to the laws and norms of war, and (4) popular imaginaries of algorithmic warfare. The article foregrounds a set of open research questions capable of moving the field toward a more interdisciplinary research agenda, and it introduces the contributions made by the other articles in this Special Issue.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Both automated and autonomous technologies denote systems that, once activated, can perform some tasks without human input. In robotics, automation implies less “sophistication” than autonomy because automated systems follow a pre-programmed sequence of actions (Winfield 2012, 12). However, integrating automated or autonomous technologies into military decision-making and targeting triggers similarly problematic consequences for human control because such technologies increase system complexity.
2 AWS are defined as systems that are able to make targeting “decisions” without immediate human intervention. They may or may not be based on AI technologies (Garcia forthcoming).
3 Such dynamics are not restricted to the study of algorithmic warfare, as the study of remote warfare, for instance, demonstrates (Biegon, Rauta, and Watts 2021).
4 These include, for example, the Realities of Algorithmic Warfare project (PI: Lauren Gould) at Utrecht University and the DILEMA project (PI: Berenice Boutin) at the Asser Institute in The Hague.
5 Manufacturers of loitering munitions hold that such systems require human assessment and authorisation prior to the release of force. But their marketing material also appears to point to a latent technological capability of such systems to release force without prior human assessment (Bode and Watts 2023).
6 The Martens Clause first appeared in the preamble to the 1899 Hague Convention. It is said to “fill a gap” when existing international law fails to address a situation by referring to the principles of humanity and the dictates of public conscience (Docherty 2018).
7 Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), Art. 31(2), Art. 51(4)(b), and Art. 51(4)(c).
8 Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I). Distinction: Articles 48, 51(2), and 52(2). Proportionality: Articles 51(5)(b), 57(2)(a)(iii), and 57(2)(b). Precautions: Article 57 and customary international law.