Regulating warfare

How (not) to stop the killer robots: A comparative analysis of humanitarian disarmament campaign strategies


ABSTRACT

Whether and how Lethal Autonomous Weapons Systems (LAWS) can and should be regulated is intensely debated among governments, scholars, and campaigning activists. This article argues that the strategy of the Campaign to Stop Killer Robots to obtain a legally binding instrument to regulate LAWS within the framework of the United Nations Convention on Certain Conventional Weapons is not likely to be effective, as it is modeled after previous humanitarian disarmament successes and not tailored to the specifics of the issue. This assessment is based on a systematic comparison of the autonomous weapons case with the cases of blinding laser weapons and anti-personnel landmines that makes use of an analytical framework consisting of issue-related, actor-related, and institution-related campaign strategy components. Considering the differences between these three cases, the authors recommend that the LAWS campaign strategy be adjusted in terms of institutional choices, substance, and regulatory design.

This article is part of the following collections:
Bernard Brodie Prize

Acknowledgements

This article has benefited greatly from suggestions from Hylke Dijkstra and the anonymous reviewers. We are also grateful to Michael Brzoska and the participants of the EWIS 2018 Workshop on autonomous weapons, convened by Ingvild Bode and Hendrik Huelss, for their valuable comments on earlier versions. Hannah Mück provided excellent language editing.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributors

Elvira Rosert is a Junior Professor for International Relations at Universität Hamburg and the Institute for Peace Research and Security Policy in Hamburg. Her research is concerned with the emergence, robustness, and interaction of international norms, mainly in the fields of International Humanitarian Law and Humanitarian Arms Control.

Frank Sauer is a Senior Researcher at Bundeswehr University Munich. His work covers nuclear issues, terrorism, and cyber-security, as well as emerging military technologies. He is a member of the International Committee for Robot Arms Control (ICRAC) and serves on the International Panel on the Regulation of Autonomous Weapons (iPRAW).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1 Artificial Intelligence is a broad, underdefined umbrella term for various computer-based techniques and procedures to automate tasks that previously required the application of human intelligence. The goalposts of what is considered “artificially intelligent” are constantly moving. Despite its fuzziness, the term AI is used ubiquitously.

2 AI is not strictly required to automate a weapon system—terminal defense systems have engaged targets without it for decades. But AI is a powerful enabling technology: while “weapon autonomy” is not brand new, only recent innovations in AI allow it to come to full fruition.

3 There are currently operational weapon systems—most notably the loitering munition “Harpy”—that qualify as fully autonomous, performing target selection and engagement without human intervention.

4 We are aware that we should refrain from using the designation “LAWS” altogether at this point in the article. We chose not to do so for reasons of reader-friendliness.