
THE MORAL PERMISSIBILITY OF AUTOMATED RESPONSES DURING CYBERWARFARE

Pages 18-33 | Published online: 17 Apr 2013
 

Abstract

Automated responses are an inevitable aspect of cyberwarfare, but there has not been a systematic treatment of the conditions in which they are morally permissible. We argue that there are three substantial barriers to the moral permissibility of an automated response: the attribution, chain reaction, and projection bias problems. Moreover, these three challenges together provide a set of operational tests that can be used to assess the moral permissibility of a particular automated response in a specific situation. Defensive automated responses will almost always pass all three challenges, while offensive automated responses typically face a substantial positive burden in order to overcome the chain reaction and projection bias challenges. Perhaps the most interesting cases arise in the middle ground between cyber-offense and cyber-defense, such as automated cyber-exploitation responses. In those situations, much depends on the finer details of the response, the context, and the adversary. Importantly, however, the operationalizations of the three challenges provide a clear guide for decision-makers to assess the moral permissibility of automated responses that could potentially be implemented.

Acknowledgements

Thanks to two anonymous reviewers for their valuable comments on an earlier version of this article. DD was partially supported by a James S. McDonnell Foundation Scholar Award. JHD's work was supported, in whole or in part, with funding from the US government. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the University of Maryland, College Park, or of any agency or entity of the US government.

Notes

1. And because we are interested in automated actions, we focus solely on responses. It is unclear whether it even makes sense to talk about an ‘automated’ unprovoked action.

2. There are also interesting questions about the combatant status of various individuals involved in the cyberwarfare process. For example, how should we think about someone who writes code but does not actually put it into action? Is such an individual comparable to a worker in a munitions factory, or a military support individual, or some other role? For space reasons, we do not address these issues in this article.

3. We assume throughout that only humans, either individuals or groups of individuals, can (in these settings) be a locus of moral responsibility. In particular, we assume that computers cannot be morally responsible for their actions. We also take as given that these automated responses are at least potentially morally justified in particular situations, as the mere passage of time between the decision and the action is not sufficient (on its own) to eliminate the possibility of moral justification. We instead focus on when the potential justification is actual.

4. One might hope to have a function that takes H and R (and undoubtedly other factors) as input and then outputs an appropriate threshold of justification, perhaps expressed as a probability. This possibility seems unlikely for at least two reasons. First, the ‘other factors’ mentioned in parentheses are likely to be a hopelessly complex set, as almost anything can be relevant under sufficiently unusual circumstances. Second, any proposed threshold would likely be vulnerable to a sorites-type objection: if τ is acceptable, then τ − ϵ will almost certainly be acceptable as well, and so we have a wedge to show that no threshold is defensible. More about the issue of what level of justification is required can be found in, for example, Zimmerman (1997), Rosen (2003), and Guerrero (2007).

5. But see our discussion of defensive automated responses in the next section.

6. In general, a projection bias is the (unjustified) attribution of one's own psychological attributes (beliefs, desires, etc.) to others. Projection bias often involves ‘mirroring’ one's own perspectives onto an adversary, rather than adopting the adversary's point of view. In this research literature, one's own future states are regarded as ‘other people’ relative to one's current state, and so ‘future self-projection bias’ is a specific case of this more general bias.

7. One particularly striking example that shows the pervasive nature of this phenomenon is that people who buy winter jackets on cold days are more likely to subsequently return those jackets (Conlin et al. 2007). Why? Because they are cold at the moment of purchase and so expect to be cold in the future, and thus overestimate the likelihood that they will use the jacket later. (And items that people do not use as frequently as they expected are more likely to be returned.) A second example comes from the stability of older adults' preferences for life-sustaining medical treatment (Ditto et al. 2003). Such preferences were collected from elderly adults three times at yearly intervals. Their preferences on whether to receive a treatment were only moderately stable – about 70% were the same each year as the year before. Thus, people cannot reliably anticipate their own preferences, even for serious life decisions.

8. Of course, a similar possibility exists in the kinetic realm. However, those cases are much more likely to involve the possibility or requirement of action at the moment, and so those actions would presumably be better aligned with actual decision-maker preferences.

9. Installing a back door obviously changes the adversary's systems, but the simple act of installation should not change their functioning, assuming that the adversary does not have detection routines for just such back doors. Of course, using such a back door could easily change their functioning, but then the response would fall into the category of either impactful CNE (if there are no adverse impacts) or cyber-attack (if there are).

10. To be a bit more precise, the challenge only indicates whether the automated response is impermissible; meeting the challenge is necessary for permissibility, but not sufficient.
