
How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons


ABSTRACT

Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; and (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.

Acknowledgements

The author would like to thank the participants of the Cambridge Conference on Catastrophic Risk (CCCR) 2016, as well as Hin-Yan Liu, Sophie-Charlotte Fischer, Peter Cihon, Jade Leung, the editor, and two anonymous reviewers for valuable feedback on the argument throughout the process. He would also like to thank Emma Dam Olesen for support in preparing the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Notes on contributors

Matthijs M. Maas

Matthijs M. Maas is a PhD Fellow at the University of Copenhagen (Centre for International Law, Conflict and Crisis, Faculty of Law), and a Research Affiliate with the Center for the Governance of AI (Future of Humanity Institute, University of Oxford). His research focuses, among other topics, on effective and legitimate global governance strategies for emerging technologies, particularly artificial intelligence.
