
Proportionality in cyberwar and just war theory

Pages 1-24 | Received 15 Sep 2022, Accepted 08 Feb 2023, Published online: 14 Feb 2023

ABSTRACT

Which harms and benefits should be viewed as relevant when considering whether to launch cyber-measures? In this article, we consider this question, which matters because it is central to determining whether cyber-measures should be launched. Several just war theorists hold a version of what we call the ‘Restrictive View’, according to which there are restrictions on the sorts of harms and benefits that should be included in proportionality assessments about the justifiability of going to war (whether cyber or kinetic). We discuss two such views – the Just Cause Restrictive View and Rights-based Restrictive View – and find both wanting. By contrast, we defend what we call the ‘Permissive View’. This holds that all potential goods and bads should be included in proportionality decisions about cyber-measures, even those that appear to be trivial, and that the various harms and benefits should be given different weights, according to their agent-relative and agent-neutral features. We argue further that accepting the Permissive View has broader implications for the ethical frameworks governing cyberwar, both in terms of whether cyberattacks provide just cause for coercive responses, including kinetic warfare and cyber-responses, and whether cyber-measures should be governed by just war theory or a new theory for cyber-operations.

Introduction

Cyber-operations can bring about significant harms. Most famously, the Stuxnet operation resulted in the destruction of centrifuges at Iran’s Natanz nuclear facility. Cyberattacks have also threatened democracies, most notably with Russian interference in the US presidential election in 2016, but also with attacks on countries such as Estonia and Georgia (Rid 2012, 5–32; 12–15).Footnote1 More recently, the hacking of the email system of the Norwegian parliament in August 2020 was attributed to Russian actors, with the Foreign Minister of Norway, Ine Eriksen Søreide, claiming that this affected our ‘most important democratic institution’ (BBC 2020a).Footnote2 In February 2022, Ukrainian banks and government websites were subject to multiple cyberattacks, including by ‘wiper’ malware and distributed-denial-of-service (DDoS) attacks, which were attributed to Russian hackers (BBC 2022). Some worry that attacks on critical infrastructure could undermine states’ ability to provide vital public services for citizens, including hospitals, dams, and air-traffic control. In February 2021, the water treatment plant of the city of Oldsmar, Florida, was hacked, with the level of sodium hydroxide raised to dangerous levels (New York Times 2021).Footnote3 Cyberattacks can also violate privacy rights, undermine access to the Internet, and have even been launched against efforts to develop COVID-19 vaccines (BBC 2020b).

Conversely, cyberwar can provide several seemingly legitimate benefits. It can protect agents from financial losses, for instance, those arising from a ransomware attack. For example, the US government used cyber-measures to retrieve much of the ransom paid after the attack against the Colonial Pipeline (BBC 2021). It can also protect state secrets, technology, and confidential information about citizens from theft. It can, more generally, protect the digital infrastructure of states, thereby encouraging economic growth and further technological innovation, such as increased digitization and AI. Moreover, given the potential for cyberwar to be more precisely targeted than kinetic operations, it might become a tool for states that seek to intervene against rights-violating practices in other states. In its 2021 Strategic Defence Review, Global Britain in a Competitive Age, the UK notes that the operations of its recently created National Cyber Force could include ‘[i]nterfering with a mobile phone to prevent a terrorist from being able to communicate with their contacts’, ‘[h]elping to prevent cyberspace from being used as a global platform for serious crimes’, and ‘[k]eeping UK military aircraft safe from targeting by weapons systems’ (UK Gov 2021, 42).

Which harms and benefits should be viewed as relevant when considering the permissibility of cyberwar? In this article, we consider this question. This question matters because it is central to determining whether cyber-measures should be launched. If, for instance, many of the potential benefits are excluded, cyber-measures may be permissible only in a limited number of cases since there will be far fewer opportunities for them to be proportionate. By contrast, if several potential benefits are included, cyber-measures may be more likely to be proportionate and therefore potentially permissible in many more instances, such as those posited in the UK’s Strategic Defence Review.

Several just war theorists argue that there are restrictions on the sorts of harms and benefits that are included in proportionality assessments about the justifiability of going to war.Footnote4 Most notably, economic benefits and technological innovations should not be included, when weighing up the potential harms and benefits. This is what we call the ‘Restrictive View’. On a leading version of the Restrictive View, only the goods relevant to just cause count towards proportionality calculations. In the context of the cyber-realm, the Restrictive View would hold that only certain types of goods are relevant for thinking about the justifiability of launching a cyberwar and not, for instance, economic gain or technological advances.

By contrast, in this article, we defend what we call the ‘Permissive View’. This holds that all potential goods and bads should be included in proportionality assessments about resorting to cyber-measures and their conduct, even those that appear to be trivial. We argue that the various harms and benefits should be given different weights, according to their agent-relative and agent-neutral features. In doing so, we argue that the Permissive View also applies to wars, so that kinetic military operations, pace several prominent just war theorists, should also be assessed by all potential effects, even those that appear on the face of it to be frivolous.

We argue further that this has broader implications for the ethical frameworks governing cyberwar and just war theory. Existing discussions of the ethics of cyberwar largely (but not completely) revolve around two questions.Footnote5 The first is whether a cyberattack provides just cause for a coercive response, including kinetic warfare and cyber-responses. The second is whether cyberwar should be governed by just war theory or whether there should be a new theory for cyberwar, such as ‘just information warfare’ (Taddeo 2016). The question of which harms and goods should be included in proportionality assessments of cyberwar has, as far as we are aware, been largely overlooked (and we reject the prevailing accounts offered by just war theorists). Yet, we will argue, proportionality assessments are central to these two questions. In response to the first issue, we argue that assessments of just cause for a coercive response ultimately depend on proportionality assessments. On the second issue, we argue that, once we reject the Restrictive View, it becomes clearer that there is a more plausible alternative to the two leading camps, which hold, respectively, that cyber-measures should be governed by just war theory or that there should be a new theory of cyberwar. This alternative holds that cyber-measures should be governed by the principles underpinning just war theory, but that there should still be a new theory for cyberwar to provide more practical guidance.

In what follows, we first outline the Permissive View (Section II), then consider two leading forms of the Restrictive View (Section III), before defending the Permissive View in more detail (Section IV). We then turn to the broader implications (Section V). All told, there are two central contributions of the article: (1) to provide a better understanding of proportionality assessments for cyber-measures and, more generally, in just war theory and (2), as a secondary aim, to defend a broader understanding of the ethical norms that should govern cyber and the role of just war theory.

Before beginning, a clarification is required. When using the term ‘cyberwar’, we do not suggest that cyber-operations will meet the requirements for an operation to be ‘war’. According to leading accounts of international law, armed conflicts require ‘(1) the presence of organized groups that are (2) engaged in intense armed fighting’ (O’Connell 2008).Footnote6 We still use the term ‘cyberwar’ (albeit sparingly) to be in line with the leading literature on the ethics of cyber-measures. That said, we often use the terms ‘cyber-measures’ or ‘cybersecurity’ instead and follow Myriam Dunn Cavelty in holding that ‘cybersecurity’ is a ‘multifaceted set of technologies, processes and practices designed to protect networks, computers, programs, and data from attack, damage or unauthorized access, in accordance with the common information security goals: the protection of confidentiality, integrity and availability of information’ (2015, 89). As we will explain later, cyber-measures can be both offensive and defensive.

The permissive view introduced

In this section, we will introduce the Permissive View. In doing so, we will (i) further clarify the issues at stake in this article and its specific contribution, (ii) delineate what we mean by ‘proportionality’, and (iii) consider the scope of the Permissive View in relation to offensive and defensive cybersecurity.

Randall Dipert (2014, 27) rightly notes that cyberwar differs from kinetic warfare in that the sorts of harms involved are typically less severe. To be sure, some potential cyberattacks might have lethal consequences, such as the destruction of critical infrastructure that leads to deaths. But, importantly, others may not. In his words:

Attacks in cyberwarfare often will inflict a distinctive form of harm: they may not kill or wound human beings, or even destroy or damage the physical objects of value, but may impair the functioning of systems that are important for welfare and happiness, such as systems of making economic transactions or of communication. This harm may be severe in no single region or facet of a culture, but may be a highly distributed harm, afflicting tens or hundreds of millions of people in important, but non-life-threatening ways. This problem is thus twofold. The kind of harm is not one dealt with in traditional moral and legal theory of war, and there is a problem of the possibly large sum of harms.

In this article, we focus on the first issue. The second issue – the aggregation of harms – has been explored at length elsewhere in recent work in philosophy.Footnote7 The first issue concerns how we should view the various harms of cyberwar and, conversely, the potential benefits, which, like the harms, may be highly distributed and important but not life-changing. The proliferation of cyber-measures, as well as their potential to bring about harms and benefits that have not been much discussed in the just war literature, means that this is of central importance. For example, launching cyber-measures to frustrate rights-violating practices in another country can lead to harms such as reduced performance of key systems in the target state, compromised personal information for innocent public servants with associated psychological stress and anxiety, reputational damage, and the disruption of systems with negative downstream effects for innocent citizens (Agrafiotis et al. 2018, 7–8). How are such harms to be weighed against potential benefits, such as frustrating these practices and maintaining cybersecurity that enables economic growth?

Indeed, existing accounts of proportionality in the ethics of cyberwar are underdeveloped. One concern highlighted in this literature is that cyber-measures might have unintended consequences, such as proliferation and escalation (Lin, Allhoff, and Abney 2014, 43). In this regard, Mariarosaria Taddeo comes closest to articulating the need for a new account of proportionality for cyberwar. Pointing out that conventional accounts of harm have the counterintuitive implication of excluding ‘harms such as the destruction of important digital records’, she argues that ‘it is not the prescription that the goods should be greater than the harm in order to justify the decision to conduct a war, but rather […] the set of criteria endorsed to assess the good and the harm that shows its inadequacy when considering [cyber]’ (2014, 130–1). We think this is helpful, but it does not specify which sorts of goods and harms should matter. This is our task.

We understand proportionality in general as weighing the goods and bads of launching the measure against those of not launching it. On this view of proportionality, considerations about necessity or last resort are external to proportionality (these concern comparing the goods and bads of the measure with the goods and bads of other options, such as economic sanctions or diplomacy). This is similar to recent accounts of proportionality proposed by just war theorists, such as Jeff McMahan (2018b).Footnote8

The Permissive View holds that all potential harms and benefits should be included in proportionality assessments for cyber and for war. There is no restriction on certain types of benefits or harms. As we will discuss in Section IV, the Permissive View is not, however, a consequentialist viewpoint, since the goods and bads involved are weighed in a deontological manner (more specifically, they are morally weighted according to their agent-relative and agent-neutral features).

To help understand the Permissive View, consider the response to the attack on the Colonial Pipeline. After Colonial Pipeline paid $4.4 million worth of cryptocurrency in ransom to the attackers, an FBI taskforce (apparently) hacked back and seized 63.7 Bitcoin (approximately $2.3 million) (Department of Justice 2021). On the Permissive View, the proportionality of the measure is determined not only by ‘standard’ harms and benefits, such as any material harm to DarkSide’s computing devices and the direct benefit of retrieving the 63.7 Bitcoin, but also by other, less obvious harms and benefits. These include, for instance, the benefits of increased experience of dealing with unjust attacks, as well as the encouragement of others to work with law enforcement to take similar action. In regard to the latter, an FBI spokesperson stated that, ‘[t]he message we are sending today is that if you come forward and work with law enforcement, we may be able to take the type of action that we took today to deprive the criminal actors of what they’re going after here, which is the proceeds of their criminal scheme’ (CSO 2021).

What is the scope of the Permissive View? As already alluded to, it is vital to distinguish between two central types of cyber-operation: defensive and offensive. This draws on a central distinction in International Relations theory between offensive and defensive kinetic force, following Robert Jervis’ seminal discussion where he notes that ‘[t]he essence of defence is keeping the other side out of your territory. A purely defensive weapon is one that can do this without being able to penetrate the enemy’s land’, whereas an offensive weapon is one that is operated within the enemy’s land (1978, 203, emphasis added). Analogously, in the cyber-sphere, defensive measures are those that largely occur within the defender’s network, whereas offensive ones also operate beyond the defender’s own network.

Following on from this, there are different sorts of both defensive and offensive measures. Defensive measures differ in the degree to which they are passive or active.Footnote9 On the one hand, ‘passive’ measures involve no activity beyond the defender’s network and comprise measures such as patch management procedures, firewalls, antivirus software, and limits to administrative authority. On the other hand, ‘active’ defensive measures do involve some activity beyond the defender’s network, but, crucially, involve little or no disruption of others’ networks. They comprise measures such as ‘tarpits’ (which slow down malicious traffic to disincentivise attackers from connecting to the network), ‘honeypots’ (which attract intruders and log their behaviour), and ‘beaconing’ (which alerts the owner to unauthorized entry attempts and provides information about the IP addresses and network configuration of the attackers) (Hoffman and Nyikos 2019, 19; CCHS 2016, 10–11).Footnote10 These active defence measures, which are not disruptive or significantly intrusive, can be part of ‘active cyber-defence’ (ACD), an umbrella notion used in policy debates about cyber-operations to capture several more proactive measures. Importantly, ACD also includes some offensive measures, such as entering an attacker’s network to obtain information about them (for example, capturing an image through their webcam) and ‘botnet takedowns’ (the disabling of the infected systems that attackers control) (CCHS 2016, 10–12; Hoffman and Levite 2017, 8). The final category to note is ‘hacking back’. This is more offensive still (and beyond the notion of ACD) since it involves the intention to disrupt or destroy the attacker’s network (rather than solely to defend against the attack or to retrieve stolen data) (CCHS 2016, 12).
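To make the distinction between these categories more concrete, the sketch below illustrates the logging idea behind a honeypot-style active defence: a listener on an otherwise unused port that accepts whatever connects and records the source address and any data sent, without reaching into or disrupting the intruder’s own network. This is a minimal, purely illustrative sketch rather than a depiction of any tool or system discussed in the text; the port number and log file are arbitrary assumptions.

```python
# Toy honeypot-style listener: accepts connections on a decoy port and logs
# who connected and what they sent. Illustrative only; the port and log file
# are arbitrary assumptions.
import datetime
import socket

LISTEN_PORT = 2222          # assumed decoy port
LOG_FILE = "honeypot.log"   # assumed log destination


def log_event(message: str) -> None:
    """Append a timestamped entry to the log file."""
    timestamp = datetime.datetime.utcnow().isoformat()
    with open(LOG_FILE, "a") as log:
        log.write(f"{timestamp} {message}\n")


def run_honeypot() -> None:
    """Accept connections and record the source address and any data received."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        log_event(f"listening on port {LISTEN_PORT}")
        while True:
            conn, (addr, port) = server.accept()
            with conn:
                log_event(f"connection from {addr}:{port}")
                conn.settimeout(5)
                try:
                    data = conn.recv(1024)
                    if data:
                        log_event(f"received from {addr}: {data!r}")
                except socket.timeout:
                    log_event(f"no data received from {addr}")


if __name__ == "__main__":
    run_honeypot()
```

Such a measure is ‘active’ in the minimal sense of engaging with intruders, but it stays within the defender’s own systems, which is why it sits at the least intrusive end of the spectrum described above.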

In this article, we are concerned with proportionality assessments for both defensive and offensive cyber-operations.Footnote11 Although the former are far more common, the frequency of offensive cyber-operations is likely to increase in the future, given that, first, the market of private cybersecurity firms providing offensive operations is growing and, second, governments are relaxing laws governing offensive operations (Maurer 2018, 19). In the US, the Trump Administration adopted a more aggressive approach than that endorsed by the Obama Administration, with a much greater willingness to use offensive cyber-weapons, including against ISIS (Valeriano and Jenson 2019; Taillat 2019, 375; Temple-Raston 2019). It seems unlikely that the Biden Administration will step back (Fidler 2021). Moreover, other influential states, such as Russia, China, Germany, and France, perceive that there is a need to adopt more offensive postures (Eilstrup-Sangiovanni 2018, 386). In its 2021 review, the UK makes clear that it intends to engage in offensive cyber-measures (UK Gov 2021). As we understand it, then, the Permissive View applies to both defensive and offensive cyber-operations.Footnote12

The restrictive view

The leading alternative to the Permissive View is what we call the ‘Restrictive View’. In this section, we will outline the Restrictive View of proportionality assessments in general amongst just war theorists. There are two main versions of the Restrictive View: we call these the ‘Just Cause Restrictive View’ and the ‘Rights-based Restrictive View’. On the former, the relevant goods and harms are those that stem from the just cause. For instance, when assessing a war launched to defend oneself against an unjust attack, the relevant goods will (largely) concern self-defence. When a war is launched to tackle mass atrocities in another state, the relevant goods (largely) will concern the effects on the mass atrocities. On the Rights-based Restrictive View, by contrast, the goods and harms are relevant to the extent that they concern the protection or violation of rights. Let us examine each in turn.

The just cause restrictive view

The most well-known defence of the Just Cause Restrictive View in just war theory is provided by Thomas Hurka (2005).Footnote13 Hurka argues that the proportionality calculations are asymmetrical, in that, on the one hand, there is no restriction on which harms inflicted are to be included (including economic harms) but, on the other, there is a restriction on which goods are to be included. According to Hurka, only the goods that are relevant for the just aims of the war are included, but not incidental goods, such as boosting the economy or science. In his words:

Imagine that our nation has a just cause for war but is also in an economic recession, and that fighting the war will lift both our and the world’s economies out of this recession, as World War II ended the depression of the 1930s. Although the economic benefits of war here are real, they surely cannot count toward its proportionality or make an otherwise disproportionate conflict proportionate. Killing cannot be justified by merely economic goods, and the same is true of many other goods. A war may boost scientific research and thereby speed the development of technologies such as nuclear power; it may also satisfy the desires of soldiers tired of training and eager for real combat. Neither of these goods seems relevant to proportionality or able to justify killing; an otherwise disproportionate war cannot become permissible because it has these effects (2005, 40).

To develop his argument, Hurka draws on a distinction between ‘sufficient’ and ‘contributing’ just causes: the former establishes the justness of the war and the latter provides further aims and can contribute to its justifiability (2005, 41). For Hurka, goods that concern sufficient just causes are relevant in proportionality calculations, whereas goods that concern contributing just causes are only relevant if there is also a sufficient just cause. He argues, for instance, that the ‘Taliban’s repression of Afghan women was not a sufficient just cause; a war fought only to end that repression would have been wrong. But once there was a sufficient just cause in the Taliban’s harbouring of terrorists, the fact that the war would improve the lot of Afghan women became a factor that counted in its favour and helped make it proportionate’ (2005, 42).Footnote14 By contrast, peripheral benefits – those that do not concern either the sufficient or contributing just cause – are not to be included in the proportionality calculations.Footnote15

How would this Just Cause Restrictive View play out in the cyber-realm?Footnote16 First, all harms of a cyber-measure would be included, such as, in the more extreme offensive cases, the destruction of the enemy’s critical infrastructure, as well as the economic harms that result from the measure. It would also include more minor harms, such as the denial of Internet access for certain citizens and the invasion of privacy.Footnote17 Some of these might appear to be rather trivial, including psychological harms stemming from lack of Internet access.

We can distinguish between direct and indirect cyber-harms here relevant to the Just Cause Restrictive View.Footnote18 Direct harms concern those that are the direct result of your cyber-measure, such as those just listed. Indirect harms concern those that are a reasonably foreseeable secondary effect of your cyber-measures. These include a greater investment by the enemy in cyber-spending, resulting in cuts elsewhere. It might also include a greater curtailment of cyber freedoms in the state subject to the cyberattack, in order to reduce its vulnerabilities to future cyberattacks, as well as increased fear of being online, harming economic growth.Footnote19

We can also distinguish between harms to those beyond our borders or network (i.e., external costs) and harms to ourselves (i.e., internal costs).Footnote20 The former is straightforward and includes all the examples listed so far. The latter would include the indirect harms that may come from foreseeable reprisals by the enemy, perhaps particularly with a cyber-response. It might also include the direct harms of launching the measure, such as the financial costs of doing so (or, for kinetic operations, the costs in terms of soldiers’ lives). In addition, it may include reputational harms that stem from launching a controversial kinetic or cyber-operation, such as targeted strikes or hacking back, thereby reducing your state’s international legitimacy (Agrafiotis et al. 2018, 7–8). It can also include the disclosure of your cyber-capabilities and technologies, making it possible for others to imitate your attack in the future and to understand better your capabilities, particularly if your attack is only partially successful (which could render you vulnerable to future attacks).

As far as benefits are concerned, this depends on how we conceive of sufficient and contributing just causes for cyber-measures. What comprises sufficient just cause for a cyber-response? We surmise that it would include, first, the sorts of causes that provide sufficient just cause for war. Traditionally, these comprise the goals of self-defence from serious external aggression, such as invasion and annexation, and humanitarian intervention to tackle mass atrocities. In addition, scholars writing on the ethics of cybersecurity argue that cyberattacks that not only impose costs but also attempt to bypass the sovereignty and agency of the target’s citizens by imposing the will of the attacking state upon the targeted state can provide sufficient just cause for a cyber-response (Smith 2018).

For kinetic operations, contributing just causes, we can suppose, would concern significant human rights violations that would by themselves be unlikely to provide sufficient just cause to go to war, such as severe oppression (as in Hurka’s Afghanistan example). In the cyber-realm, we surmise, contributing just causes would similarly concern significant human rights violations, perhaps again such as severe oppression, that would themselves be unlikely to provide sufficient just cause for cyber-measures, particularly offensive ones. Peripheral benefits, such as economic benefits from cyberwar, would again be excluded.

Thus, on the Just Cause Restrictive View, the list of potential harms is long, and includes direct and indirect harms, as well as harms that are relatively minor, but the list of potential benefits is short, comprising only those relevant to the sufficient and contributing just cause. The implication for cyber is that many cyber-operations would be deemed impermissible because they would fail to bring about sufficient morally relevant benefits to outweigh the harms. Given that the sorts of benefits are limited to those that concern the just cause, but the account of harms is far broader, many cyber-operations, particularly offensive ones, will be deemed disproportionate.

Against the just cause restrictive view

There are, however, three problems with the Just Cause Restrictive View, and these are particularly brought out by the application to the cyber-realm.

The first problem is with the notion of ‘contributing’ just causes. ‘Sufficient’ just causes, by definition, establish whether there is just cause. It is simply a matter of whether the war, or a defensive or offensive cyber-measure, is undertaken in response to a just cause, such as external aggression or a significant threat to vital infrastructure. Contributing just causes appear to play no role in establishing just cause. As such, it seems to us that calling them ‘just cause’ is misleading. Instead, the notion of contributory just cause might be seen as an effort to capture the fact that proportionality assessments concern more than the goods related to the just cause (conceived in terms of sufficient just cause). A potential positive effect on, for instance, tackling repression in Afghanistan is clearly a relevant concern that should be factored into the assessments of the justifiability of the war and, in particular, any assessment of the proportionality of the war. The seeming arbitrariness of the notion of contributing just cause is indicated by Hurka’s claim that there is no unifying feature: ‘they are just the items on a list’ (2005, 43).

The second problem is that it is unclear why, if a particular consideration is relevant for proportionality when it involves harm, it is not also relevant for proportionality when it concerns a benefit. For instance, it seems that economic costs and benefits can – and should – be included. Note here that one might reject the Just Cause Restrictive View and still think that harms should be weightier in proportionality calculations. Some of the literature on the metaphysics of harm holds that it is more important to tackle a harm than it is to give an equivalent benefit (see Shiffrin 2012). We do not challenge this here. Our point is that, even if non-just cause benefits are less weighty, they still matter and should be included in proportionality calculations in both cyber and just war. Pace Hurka, this is so even when there is not a sufficient just cause. That is to say, the goods and bads relevant for proportionality calculations are not dependent on just cause.

For one, we need to include benefits to know whether a cyber-response, or a kinetic operation, will on balance cause economic harm to our state or to those beyond our borders (or network). For instance, a war against rebels targeting your border towns might be hugely expensive, but bring about significant economic benefits as well, for instance, by stopping incursions across your borders and thus enabling economic growth (e.g., by halting the theft of minerals or the like). Likewise, an offensive cyber-operation that prevents a non-state actor in another state from launching future cyberattacks may cause some economic harm to the state that is failing to properly stop such attacks from within its borders, but in the longer term bring economic benefits to that state by improving its perceived trustworthiness, enabling it to obtain better trade deals.Footnote21 (Indeed, at one point Hurka (2014, 422) further argues that economic benefits are relevant when they reduce net economic harm, but not relevant when they create a net economic benefit.) It is unclear why, if one accepts this, we should not view economic benefits as relevant more generally. More than that, we believe this discussion highlights the point that economic goods are rarely merely economic. Economic benefits affect the wellbeing and opportunities of individuals and should for that reason count towards proportionality.

Third, with cyber-measures, both offensive and defensive, it is often unclear whether they will result in significant harm to individuals. If a cyber-measure will not result in much harm, why should the potential benefits be delimited to those that concern the sufficient and contributing just causes? Suppose, for instance, that an offensive cyber-measure would simply track an attacker’s network to obtain information about them. It seems that the potential economic benefits that come from this, such as obtaining information that enables those engaged in the cyber-measure to better prepare their defences and to develop cyber-capabilities that could improve the longer-term efficacy of cyber-defences, are relevant. In their discussion of limits on the good effects of war, Jeff McMahan and Robert McKim suggest that there should be different restrictions on the relevant goods depending on the means being used and that, if there is an alternative that is so benign, ‘it seems that virtually any good effect that it would have would count in its favour in comparing its effects’ (1993, 527). It seems, then, that, when cyber-measures are unlikely to be harmful, all positive effects should be included. But if we accept this for cyberattacks that are not harmful, it is unclear why we should hold that harmful cyberattacks be judged by a different standard.Footnote22

It might be reasoned here that agents are permitted or required to discount fully (or partially) peripheral benefits that arise from their doing harm. Thus, peripheral benefits that arise from a state’s harmful offensive cyberattack are not included because they involve doing harm. Yet, as we will see below when we outline the Permissive View that assigns weights to goods and bads according to their agent-relative and agent-neutral features, the wrongfulness of doing harm can be accounted for (it is given greater weight in the proportionality calculation) while peripheral benefits are still included. To that extent, the harms that an offensive cyber-measure does to innocent civilians are already given significant weight as a reason against the option. Thus, the fact that a cyberattack may do harm does not provide a reason to exclude potential peripheral benefits to other innocent civilians, such as increased economic growth. Of course, these benefits may be relatively small in comparison to the harms (e.g., the potential death of innocents). We accept this; agent-neutrally, some harms and benefits are clearly more significant than others, for instance because of their greater negative effects on individual well-being. But it still seems that peripheral benefits are relevant, most obviously in marginal cases where the harms of an offensive cyber-measure are relatively minor (e.g., disruption of Internet services).Footnote23

It might be reasoned instead that agents are permitted or required to discount fully (or partially) peripheral benefits when doing harm because only intended benefits matter. These intended benefits concern the just cause, which should be aimed at. Thus, when a state launches an offensive cyber-measure that harms some innocent civilians (e.g., denies them Internet services) in order to counter cyber-aggression in another state, it is only the intended benefits in regard to tackling aggression that matter. However, it is unclear why unintended benefits should be excluded, providing that they are reasonably foreseeable. To illustrate, to the extent that lockdowns allowed some people to enjoy more time with their families during the pandemic (albeit involuntarily), this was surely a foreseeable benefit even if it was not intended, and it seems clear that this benefit should count towards the proportionality assessments of the measure (along with the harms of lockdowns, such as increased risks of domestic violence). Indeed, unintentional but foreseen consequences are widely viewed as relevant across moral and political philosophy, including in the Doctrine of Double Effect. Intentional harms, of course, might be given greater negative weight, if one holds that intentions are morally relevant. There is a seemingly plausible explanation for this: an agent’s problematic motive, such as wanting to harm innocents for pleasure or out of hatred for their ethnic group, should be viewed as a reason against it being permissible for them to harm.

However, there does not seem to be a plausible explanation for viewing intended benefits as so much weightier that unintended benefits are excluded. At best, there is a very minor reason to favour intended benefits, that is, to regard benefits brought about by agents who are well-motivated (aiming at the just cause) as better than those brought about by agents who are neutrally motivated. In other words, it is better to have good consequences (e.g., the tackling of cyber-aggression) brought about for the right reasons (e.g., the aim of tackling unjust cyber-aggression) than for other reasons (e.g., as a side-effect of other cyber-activities). Even if this is right – and we are not sure it is – this only provides a seemingly small reason to weight intended benefits more heavily than unintended ones. It does not provide a reason to exclude or significantly discount unintended but foreseen benefits. Accordingly, looking to the two central potential agent-relative considerations – the doctrine of doing and allowing and intentionality – does not explicate why peripheral benefits should be excluded in proportionality calculations.Footnote24

The rights-based restrictive view

Let us turn to an approach that may appear to address these problems. The Rights-based Restrictive View holds that proportionality assessments in kinetic and cyber-operations should include harms and benefits in so far as they concern the violation/protection of rights. This would differ from the Just Cause Restrictive View as rights that are not relevant to the just cause would be included. It could be asymmetric in that it holds that the benefits be measured in terms of rights, but that all harms be included, including those that do not concern rights-violations.Footnote25 Yet adopting an asymmetric approach would run into the same problem as that encountered by the asymmetric Just Cause Restrictive View in providing an overall judgement that excludes peripheral benefits. In this case, the issue is that it is impossible to know whether an option has overall caused non-rights harms without also taking into account non-rights benefits.

It seems more plausible to adopt a symmetric rights-based approach that reflects rights in both the assessment of harms and of benefits. To that end, Christopher Eberle offers an account of the goods and bads relevant to proportionality, proposing what he calls a ‘restricted, symmetrical’ account. On this account, ‘only those goods to which human beings actually have rights are relevant and only those evils, the infliction of which actually wrongs human beings, are relevant’ (2016, 81).

This might appear to have some intuitive sense, particularly if we view rights in terms of what Henry Shue (1980) famously calls ‘basic rights’. For instance, when considering whether to go to war, we should look to assess the aggregate effects on basic rights, both for those within our state and for those beyond our borders (perhaps weighing the effects on our own citizens more heavily, to the extent that one holds that states have especially strong duties regarding the basic rights of their own citizens).Footnote26 This would include both direct and indirect effects on basic rights, such as, in regard to the latter, the potential for unilateral war to undermine international norms governing force, to increase international instability, and to lead to human rights violations in the longer term.

Yet, in regard to cyberwar, this option seems less attractive. The central problem is that cyber-operations concern many considerations that would not be included under leading accounts of basic rights, most notably the right to privacy, but also the right to Internet access and, even, on some accounts, rights to political participation.Footnote27 To remedy this, one might draw on a broader notion of rights still, to include non-basic rights, such as the right to privacy (Marmor 2015). But this runs into two problems.

The first is informational. We would need a complete account of rights, both basic and non-basic, to be able to determine which effects are to be included in proportionality assessments. Of course, this is more of a pragmatic issue than an insurmountable problem. One might borrow from leading typologies of rights, such as those provided by philosophers (e.g., Steiner’s (1984) will theory of rights). Alternatively, one might adopt an account of human rights as a basis. However, these accounts are notoriously contested, with significant controversies about what exactly is to be included in the account of rights.Footnote28 Proportionality assessments would then be subject to disagreement about whether something is viewed as a right or not.

This is a particular issue in the cyber-realm given that there are numerous fast-developing new rights issues that are not easily covered by existing accounts of rights or human rights, including the right to the Internet (e.g., free from cyber-intrusion), the right to reasonably fast broadband access, and the right to net neutrality. Again, this is not a knockout blow: these might be included in a new, updated version of rights. But as the cyber-realm develops further, e.g., with AI, we would constantly need to update the accounts of rights, which means that the inputs to proportionality assessments would need constant revision. Perhaps more significantly, assessments of cyber-operations will, it seems, also need to consider the likely effects on things that are not currently viewed as rights in the mainstream literature on rights, such as those regarding AI (Risse 2019). Do we, for instance, have a right not to be subject to a major decision taken by AI rather than by a human? If a cyber-measure were to knock out the AI capacities of an enemy, this would be a relevant consideration.

A stronger objection to the Rights-based Restrictive View is that it seems we should include benefits that do not clearly concern rights. The effects of a measure on future economic growth are one example. It does not seem that we have a right to future economic growth. A robust cyber-response might enable us to reduce future spending on cybersecurity, allowing us to, say, decrease business rates, which leads to growth. Those who benefit from the growth might be wealthy, but the increased wealth due to the cyber-measure would still enable them to have more leisure time, such as time spent with their families, and so increase their well-being.

Some of these problems become clearer when we explore Eberle’s account of the Rights-based Restrictive View. Eberle excludes what he calls ‘irrelevant evils’, which are those that do not concern rights. He gives the example of harms to culpable perpetrators, but also of harms to those connected to the culpable (which seems dubious). Although, he notes, family, friends, and acquaintances of soldiers who engage in genocidal attacks may be innocent, he argues that their suffering is irrelevant to proportionality (2016, 82). This is because they lack any right against its infliction; the suffering is, he argues, a function of the forfeiture of the rights of the fully culpable soldiers. We agree with Eberle that such suffering is unlikely to be sufficiently weighty to preclude victims of genocidal attacks from permissibly defending themselves. However, it still seems that it has some weight. Suppose that one can choose between two measures, both of which will kill a culpable attacker. The first is a sniper shot to her head that will disfigure her hugely, thus causing significant anguish to her family. The second is to poison her, which would keep her cadaver intact. In both cases, the suffering of the culpable attacker will be the same. However, it seems that, other things being equal, we should prefer the latter, to reduce the avoidable anguish to her family. If such effects matter for necessity judgements, they should also matter for proportionality assessments.

Eberle also claims that reductions in investment in green technologies due to the launching of a just war are not a relevant evil, again because no rights are violated (2016, 82). Yet here it seems particularly clear that the focus on rights is unhelpful. If the government has an option to wage war that avoids decreasing the portfolio of investment in green technologies, this should surely be preferred, as this could, for example, protect human and non-human welfare. These effects therefore do seem to matter. And, again, if they matter for such necessity judgements, they should also matter for proportionality assessments. They might be easily outweighed (e.g., by the importance of tackling genocide), but might become relevant in borderline decisions. This is clearest with cyber-operations, if, for instance, the defence of investments in green technologies is a likely upshot of the cyber-measure.

In addition, Eberle argues, following Hurka, that aesthetic goods, technological developments, and entertainment goods are irrelevant, because people do not have rights to these (2016, 83). First, it could plausibly be argued that people do have rights to these goods; Article 27 of the Universal Declaration of Human Rights, for instance, holds that ‘everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits’. But even if we do not have rights to these goods, it does seem that they matter in proportionality assessments. If, for instance, it was reasonably foreseeable that the Allies’ launching of World War II would lead to the development of radar, and that such a major technological advance would be likely to help future generations significantly (e.g., by furthering their interests), this would be a reason to count it in the assessment of proportionality, alongside other considerations. Likewise, if a cyberattack will help to protect entertainment goods, this should be seen as valuable. In 2014, Sony Pictures was subject to a cyberattack that, amongst other effects, resulted in the pulling from cinemas of the satirical film about Kim Jong Un, The Interview (BBC 2015). Suppose that Sony could have used ACD measures to protect itself against the cyberattack. That this would have enabled thousands to enjoy the film in the cinema seems clearly relevant to the assessment of the proportionality of launching the ACD measures.Footnote29 Thus, we could either adopt a broader view of rights that takes into account these various harms and goods, or a parsimonious view of rights that excludes important harms and goods that are particularly brought out by cyber-operations. Either way, we should have a much broader notion of the relevant goods and harms for cyber-operations and other measures, including kinetic warfare. This brings us to the next section.

The permissive view in comparison

Having rejected the Restrictive View, we will now return to the Permissive View and delineate how it is superior to this approach. The Permissive View, recall, holds that all potential harms and benefits should be included in proportionality assessments for cyber and for war. There is no restriction on certain types of benefits or harms.Footnote30

Scholars are attracted to the Just Cause Restrictive View because this avoids a consequentialist approach that includes all goods and bads (Macnish 2015, 534; Hurka 2010). There are two major concerns. The first concerns the question of whether smaller peripheral benefits, when aggregated, can outweigh harms. As noted above, we bracket this issue in this article. However, it is worth reiterating that this is a separate question – about the aggregation of small harms and benefits – from the question of the relevance of peripheral benefits to proportionality calculations. One could hold that smaller benefits and harms should not be aggregated, but that peripheral (e.g., large economic) benefits should be included in proportionality calculations. The second concerns lowering the bar for war, which we will consider shortly. But before we get there, it is important to make clear that accepting a more permissive view does not require adopting consequentialism. One can hold that all peripheral benefits are relevant to proportionality calculations but accept a deontological view that gives greater weight to certain types of harms and benefits. Indeed, this is exactly what we propose with the Permissive View.

On the Permissive View, the various benefits and harms are morally weighted according to their agent-relative and agent-neutral features. This means that, if we accept Campbell Brown’s claim that consequentialist theories do not admit of agent-relativity, then not only does the Permissive View not require adopting consequentialism, but it is in fact incompatible with consequentialism (2011, 760–3). One central agent-relative feature concerns the doing of harm: harms in which your agency directly harms innocent others (and the harm is not mediated) should be viewed as worse, and weighted more heavily against the option, than harms in which your agency is not directly involved. Other weighted considerations include whether those subject to the harm are liable to it by, for instance, engaging in culpable behaviour, and whether those subject to harm freely consent to it. This view also reflects the distributive fairness of the benefits and harms, and thus assigns different weights to benefits and harms depending on where they (are expected to) land.Footnote31 This means that some forms of benefit will be weighted negatively, for example because they land at the feet of people who have no reasonable claim to them, either because they are unjustly privileged or have acted wrongly. Harms and benefits that befall collectives, such as to state sovereignty, could also be included, most clearly when this value is ultimately reducible to the value for individuals (even if not straightforwardly reducible). The Permissive View is not, then, a consequentialist approach, given its inclusion of agent-relative considerations in the weighting of harms and benefits. Instead, it is a deontological approach that includes peripheral benefits. As noted above, this allows for the doing of harm to be given greater (negative) weight in the proportionality calculations, but still includes peripheral benefits.
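As a purely illustrative sketch of this structure (not the authors’ model, and with all magnitudes and weights invented for the example), the following shows how such a weighted proportionality comparison might be laid out: every effect, peripheral ones included, is counted, each is adjusted by hypothetical agent-relative and agent-neutral multipliers, and the weighted balance of acting is compared with the weighted balance of not acting.

```python
# Toy weighted proportionality comparison in the spirit of the Permissive View.
# All effects count, but they are weighted; the numbers below are invented
# placeholders for illustration only.
from dataclasses import dataclass


@dataclass
class Effect:
    description: str
    magnitude: float      # positive = benefit, negative = harm (arbitrary units)
    done_directly: bool   # does the acting agent directly bring this about?
    target_liable: bool   # is the affected party liable (e.g., a culpable attacker)?


def weighted_value(effect: Effect) -> float:
    """Apply hypothetical agent-relative weights to an effect's raw magnitude."""
    weight = 1.0
    if effect.magnitude < 0 and effect.done_directly:
        weight *= 2.0     # doing harm counts more heavily than merely allowing it
    if effect.target_liable:
        weight *= 0.25    # effects on liable parties are heavily discounted
    return effect.magnitude * weight


def proportionate(measure: list[Effect], inaction: list[Effect]) -> bool:
    """A measure is proportionate, on this sketch, if its weighted balance of
    goods and bads is at least as good as that of not launching it."""
    return sum(weighted_value(e) for e in measure) >= sum(weighted_value(e) for e in inaction)


# Toy example loosely modelled on a hack-back against a ransomware group.
measure = [
    Effect("funds recovered for the victim firm", +10.0, True, False),
    Effect("damage to the attackers' systems", -4.0, True, True),
    Effect("peripheral benefit: deterrence and economic confidence", +3.0, True, False),
    Effect("collateral disruption to innocent users' services", -2.0, True, False),
]
inaction = [Effect("ransom and ongoing losses remain with the victim", -10.0, False, False)]

print(proportionate(measure, inaction))  # True under these invented numbers
```

The point of the sketch is only structural: peripheral benefits enter the calculation rather than being excluded at the outset, while deontological considerations do their work through the weights (and, on the account above, necessity would involve running the same comparison across the other available options rather than against inaction alone).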

We are now in a better position to understand the central reason that the Permissive View is preferable. Since it does not overlook peripheral benefits, such as minor economic benefits, it can better reflect within proportionality judgements the importance of the agent-relative and agent-neutral value of these benefits, such as considerations of fairness. For instance, a cyber-measure that brings about a small economic benefit for those who are underprivileged could reduce their degree of disadvantage. Suppose that the US’s offensive cyber-measures against North Korean hackers have the unintended benefit of increasing economic stability in North Korea, helping many North Korean citizens. Excluding such economic benefits would potentially exclude improvements in the fairness of the distribution of goods in North Korea. Economic benefits are one of the key ways in which distributive fairness will be affected. Indeed, as indicated above, we think that the potential effects on fairness should be included as an important consideration within proportionality calculations on the Permissive View. This seems commonsensical: we should not simply be sensitive to whether a measure will provide a benefit, but to the fair distribution of the benefit. But if one were to exclude economic benefits, as per the Restrictive View, then the implications for fairness are missed.

The Permissive View coheres with the way that we think the wind is blowing in recent just war theory, that is, the move towards a much wider view of the relevant considerations. Seth Lazar’s helpful account of necessity (which views proportionality as part of necessity) holds that morally relevant goods and bads are to be included in the assessment of necessity, weighted according to their agent-relative features (2012; 2016). His account does not exclude peripheral benefits. However, on Lazar’s view of necessity, agents are required to compare only those measures that aim to tackle the threat in question. More recently still, Patrick Tomlin (2021) and Kieran Oberman (2020) have argued that it is important to broaden this out, so that necessity also compares the measure in question with measures that aim to address other threats, given the opportunity costs involved when one tackles one threat but not another, and should include distributive considerations.Footnote32 This broadening out of necessity takes us closer to an approach whereby necessity requires comparing all the various options, including those that address the threat in question and those that do not. It is a broad, comparative approach that is simply about the weighted goods and bads of all the options. The Permissive View’s understanding of proportionality similarly adopts such a broad, comparative approach.

The Permissive View also coheres with the way that we think the wind is blowing in other fields in their proportionality (and necessity) assessments. Most notably, recent important work in health ethics and global health has argued that health interventions should not be limited to assessment according to their effects on QALYs (quality-adjusted life years).Footnote33 This has been the predominant way of assessing health interventions in practice, but it misses other considerations, such as the effects of the intervention on the social determinants of health, effects on civil and political rights, and fairness. For instance, in regard to the latter, although more lives might be saved by placing a treatment centre in a city, this could be unfair on rural communities, who, although smaller in number, are often poorer. It seems important, then, to reflect the fairness of the health intervention as well, so that (at least to some extent) those who are underprivileged have the opportunity to access the relevant services. Thus, on this view of global health, non-health benefits matter (beyond the ‘just cause’ of tackling the threat to health). In a similar vein, the literature on climate change examines measures to tackle climate change not simply in terms of their narrow effects on climate change (the ‘just cause’ of tackling climate change), but also in terms of their other, broader positive and negative effects, such as on economic wealth. For instance, the move to green energy is assessed not simply by its effects on climate change, but also by its effects on economic growth (Caney 2012).

Beyond proportionality

As noted above, there are two central questions in the existing literature on the ethics of cyberwar. The first concerns just cause: does a cyberattack provide a just cause for a coercive response, including a cyber-response? The second is whether cyberwar should be governed by just war theory. In this section, we will use the preceding discussion to show how our account of proportionality can help address both questions.

To explicate, on what we view as the most plausible account of just cause, proportionality plays a crucial role in determining what is and is not a just cause. In the morality of self-defence and recent work in just war theory, it is presumed that some form of wrongful behaviour provides just cause for a response.Footnote34 This is most clearly a serious rights violation, such as a major act of aggression or an attack on one’s bodily integrity. However, it seems plausible that a single instance of wrongful behaviour, such as a rights violation, could provide just cause for warfare. Suppose, for instance, that your small neighbouring state is about to brutally torture and kill the youngest child of your democratically elected leader and your state could wage a very swift war that would stop it doing so. This seems to provide just cause for the response. Moreover, if the war really would be likely not to harm any innocents, and would save the child, it seems that it might be justified, that is, if it is proportionate.

Similarly, we might think that a single instance of wrongful behaviour can provide just cause for a robust cyber-response, that is, if it is proportionate.Footnote35 This might, for instance, be a single phishing attempt that provides us with just cause to hack back. But here we face a major potential objection to the Permissive View that we have proposed. If we adopt this view of just cause and hold that proportionality should include all harms and goods, then many more cyber-responses might be deemed to be justified. This is what we can call the ‘Open the Floodgates Objection’. The worry is that it will render permissible far too many cyber-measures. This could in turn lead to international instability in the cybersphere as well as the kinetic sphere, as states’ cyber-operations spiral. The Permissive View of proportionality could also be easily abused by states to claim that their cyber-response will be proportionate (which, presumably, will be more difficult when the list of benefits is reduced). By contrast, on the Restrictive View, far fewer cyber-operations would be deemed justifiable.

Yet we do think that this is the right standard for just cause, ideally. To respond to the Open the Floodgates Objection, it is important to note that there can be different standards of just cause depending on the level of idealization that one is working at. If we are concerned with what revisionists such as Jeff McMahan call the ‘deep’ morality of war, then we are adopting a more idealized approach, where we assume more favourable circumstances and a greater degree of compliance with idealized moral principles (apart from the initial wrongful behaviour that provides just cause in the first place) (2008, 36).Footnote36 This more idealized approach strives to reflect closely the underlying moral principles, consistent with individual self-defence and moral and political philosophy more generally.Footnote37 On this idealized approach, just cause for a cyber-operation would include singular cases of, for instance, phishing. It would also include all the relevant costs and benefits as per the Permissive View. This would potentially permit several offensive cyber-operations.

However, there is also a more non-idealized approach to just cause, which assumes much more unfavourable circumstances, such as an increased likelihood of mistakes in attribution, and far less compliance with idealized moral principles, such as the abuse of those principles (Pattison Citation2018). This non-idealized approach still reflects the underlying moral principles, but in doing so it also reflects, to a greater degree, the problematic prevailing circumstances that render it difficult (and potentially problematic) to put the idealized principles in place. For instance, virtually all feasible wars will lead to significant collateral harm to innocents: bombs are not properly targeted (in part because of technological limits to their targeting capacities) and so damage vital infrastructure and kill civilians, soldiers make mistakes and commit abuses (in part because of non-compliance with moral rules), and generals fail to reduce civilian casualties sufficiently in their planning. This provides reason to maintain a higher bar for just cause, so that only particularly serious cases are viewed as providing just cause: cases where there is the possibility of doing sufficient good to outweigh these likely harms, and so war might still be proportionate (McMahan Citation2005, 11; Pattison Citation2018).

Similarly, in the cyber-realm, we propose a nonideal approach to just cause. Here the concerns relate to, for instance, collateral harm to innocent civilians’ computer systems, so that only particularly serious cases are viewed as providing just cause: cases where there is, again, the possibility of doing sufficient good to outweigh the likely harms, and so the cyber-response might be proportionate. This is likely to preclude most offensive cyber-operations, especially those beyond ACD, including those listed above in the UK’s recent defence review, given the harms that they might cause, such as accidentally targeting civilians or undermining infrastructure used by non-liable individuals.

Although we remain sceptical about drawing a distinction between certain goods and harms, there could also be a non-ideal approach to proportionality, so that, in practice, only certain goods and harms are treated as relevant, even though, ideally, all goods and harms are relevant. These would be the goods and harms particularly relevant to the cyber-domain, which could, potentially, reduce the likelihood of abuse, since those deciding whether to launch a cyber-response would not then be drawn to the potential economic and technological benefits of cyberattacks with little regard for cyber-harms.Footnote38 Whether such a nonideal approach to proportionality is necessary will depend largely on the prevailing political context. Our current view is that a nonideal account of just cause for cyber will be sufficient.

This brings us to the second central question: should cyberwar be governed by just war theory? Much depends on how we understand just war theory. If we see it as sui generis, as governing a completely different moral realm, then it is very clear that cybersecurity should be governed by moral principles different from those governing warfare (and so requires its own domain-specific framework). However, if we see the morality of war as ultimately stemming from the underlying principles of the morality of self-defence and political philosophy more generally, as revisionists suggest, there is nothing special about war. It is, in particular, the application of the underlying principles of proportionality and necessity (and, for some, liability or just cause) to the context of war. These principles could then be applied to the cyber-context, as they could be applied to other contexts as well, such as the use of kinetic force short of war and economic sanctions.

This is at the more abstract level. At the more practical level, the various nonideal considerations that stem from unfavourable circumstances and lack of compliance are likely to vary according to the specific measure in question.Footnote39 War and cyber, in other words, are likely to face different sets of issues when we attempt to apply the ideal principles to the real world. Thus, it is appropriate to adopt different sets of non-idealized principles for the cyber and war domains, although these ultimately stem from the same underlying principles (from the morality of individual self-defence and/or moral and political philosophy more generally). These can reflect, for instance, particular issues that are prevalent in the cyber-realm, such as wrongful attribution (which, although still a concern in war, is less serious there).

As Randall Dipert claims, since many forms of cyber-operation do not involve death and destruction, but rather lesser forms of intentional harm, in both qualitative and quantitative terms, we need ‘more fine-grained moral theories for international relations that include lesser forms of intentional harm, such as economic damage and espionage’. A ‘full ethics of cyberwarfare’, he continues, would need to account ‘for all of international relations – of what any state, or political organization, morally may do, or not do, to another state, or that affects another state, even if short of killing and permanent destruction’ (Citation2016, 58–60). Importantly, this can still be seen as highly congruent with just war theory. It also provides some grist to the mill for those, such as Taddeo (Citation2014), who call for a new theory to address cyber, beyond just war theory. It does so by defending the need for such an approach as a matter of nonideal theory, in order to respond fully to the prevailing unfavourable circumstances and non-compliance in the cybersphere.

To summarize: our account of proportionality in the Permissive View is idealized. There is also a non-ideal approach, which is more restricted, but for nonideal reasons rather than for the reasons proposed by the Restrictive View. This two-level model also helps us to provide an account of just cause for a cyber-response (which, again, should be two-level) and to understand fully the relationship between cyberwar and just war theory.

Conclusion

In this article, we have considered the harms and benefits that should be included in proportionality assessments when assessing cyber-measures. We have argued that a broad approach – the Permissive View – which includes all relevant goods and harms, should be adopted over two leading alternatives: the Just Cause Restrictive View and the Rights-based Restrictive View. This account of the Permissive View is idealized, potentially permitting offensive cyber-measures, but these would most likely be prohibited on a nonideal approach to just cause in the cybersphere. This ideal/nonideal approach to cyber also helps to navigate the debate about whether cyberwar should be covered by just war theory or whether there needs to be a new theory for cyberwar. On this approach, although cyberwar would ideally cohere with just war theory, a new approach may be necessary in practice, particularly where the nonideal issues differ significantly, such as for just cause.

Acknowledgments

We would like to thank the reviewers for their helpful comments, as well as the audience of the Norwegian Practical Philosophy Network Workshop and the 11th edition of the Braga Meetings. Fredrik D. Hjorthen would like to thank the Research Council of Norway for their financial support (grant number 288654).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Norges Forskningsråd [288654].

Notes

1 To be sure, Rid argues that the effects of both attacks were ‘rather small’.

2 Russia denied the attack, but the Norwegian Police Security Service (PST) attributed it to the GRU-linked cyber-actor APT28, or Fancy Bear (PST Citation2020).

3 The attack was mitigated before the effects materialized.

4 These are discussed below.

5 For instance, Simpson (Citation2014, 141–54) argues that cyberattacks should be understood in terms of harm to property (and this provides just cause) and Finlay (Citation2018) argues that the notion of violence is a helpful way of capturing cyberwar.

6 Cyberwar may sometimes meet the first requirement but will not meet the second.

7 See, for instance, Horton (Citation2018).

8 To be sure, our account differs subtly in that McMahan views proportionality as comparing the measure to doing nothing, whereas we view it as comparing to not launching it. In this way, we can use proportionality to assess further actions (e.g. a cyberattack) when we are already doing something (such as military action). For an in-depth account of proportionality in war and the relevant counterfactuals, see Tomlin (Citation2021, 42–3).

9 We largely follow here the typology offered by the Center for Cyber & Homeland Security (CCHS Citation2016). The boundaries of Active Cyber Defence (ACD) and hacking back vary between accounts, with, for instance, the UK seeing ACD as purely defensive (Stevens et al. Citation2019).

10 Honeypots can also be active if they, for instance, actively search out malicious servers. See Schmitt (Citation2017, 174).

11 Some might object that purely defensive cyber-measures such as the use of antivirus software do not raise questions about proportionality, given that no one is likely to be harmed (and no justification needed). While we agree that proportionality may not matter equally across all cyber-measures, we still argue that it is relevant. One reason why proportionality also matters for ostensibly harmless measures is that they involve opportunity costs. Assuming that opportunity costs matter for proportionality, at least when they represent a duty violation (Oberman Citation2019; Pattison Citation2020b), and that no state is currently in perfect compliance with their duties, proportionality assessments seem relevant. Another reason is that defensive cyber-measures may still lead to some harms, even if unintended, such as the deflection of attacks onto those who are more vulnerable and potentially the least well off (Pattison Citation2020a).

12 It might be replied that there should be a different ethics for offensive operations than for defensive operations. We do not preclude this but note a reason for caution: offensive and defensive cyber-operations often blur, with operations sometimes having a mix of both elements.

13 Also, see McMahan and McKim (Citation1993) and Rodin (Citation2011). Kamm (Citation2011) endorses a different set of restrictions on the goods to be included. See Hurka (Citation2014) and McMahan (Citation2014). McMahan has shifted his view and now holds that ‘non-just-cause goods’, i.e., peripheral benefits, can be relevant for wide proportionality (which concerns the proportionality of harms to those not liable), but still, as far as we understand, views only just cause goods as relevant to narrow proportionality (which concerns the proportionality of harms to those liable) (McMahan Citation2014, Citation2018a).

14 He largely restates this account in Hurka (Citation2008, Citation2014), using the terminology of ‘independent’ and ‘conditional’ just causes.

15 In his 2010 chapter and 2014 article, Hurka very tentatively adds to the list of relevant goods. Peripheral benefits that result from the achievement of the just cause are included, but not if they result from the means to achieving the just cause. For instance, economic benefits are relevant when they accrue from the achievement of a just peace at the end of the war and the stabilization of the region, but not when they result from the war economy during the war. Given the tentativeness with which he adds this suggestion (he states ‘I am by no means sure this view is correct’), we leave this aside here (Hurka Citation2010, Citation2014; McMahan Citation2018a, 425).

16 Hurka’s account has been adopted by some of those working on issues related to cyber-conflict, such as Bellaby (Citation2016) and fields related to cyber, such as surveillance, e.g., Macnish (Citation2015).

17 For a useful typology of the harms that might result from cyber-measures, see Agrafiotis et al. (Citation2018).

18 One could distinguish between direct and indirect benefits, but this seems less appropriate, given that Hurka is concerned with the just aims of an operation. It is unclear to us how these could be indirect.

19 The fear of cybercrime is highlighted in Home Office (Citation2018).

20 It might be worth noting that while borders and networks may often coincide, this is not necessarily the case.

21 See, further, Eberle (Citation2016).

22 Indeed, it might be thought that there could be a version of the Just Cause Restrictive View that holds that there are different proportionality assessments for harmful and non-harmful measures, with peripheral benefits being included only for non-harmful measures. But, again, this would need to explain why we should assess harmful and non-harmful measures by different standards.

23 A similar point could be made in terms of what McMahan (Citation2018b) calls ‘narrow’ and ‘wide’ proportionality, which concern harms to liable and non-liable people respectively: peripheral benefits can still be relevant for wide proportionality judgements, especially in marginal cases, even if they carry less weight than in narrow proportionality judgements.

24 There may be other potential defences of the Just Cause Restrictive View, applied from Brock’s (Citation2003) partial repudiation of peripheral benefits in medical ethics, such as the notion that there should be ‘separate spheres’, or concerns over fairness and treating people as a means. These are considered – and repudiated – by Lippert-Rasmussen and Lauridsen (Citation2010) and Persad and du Toit (Citation2020), so we do not discuss them here.

25 We presume here that one can harm without transgressing rights, although, admittedly, this might depend on one’s understanding of harm. For our purposes, we presume that it makes sense to talk about harming someone who is liable to the harm because they lack the right to the protection (e.g., they are not the rightful owner of the goods in question).

26 On national partiality, see e.g., McMahan and McKim (Citation1993, 516).

27 For the view that the right to internet access is a basic right, see Reglitz (Citation2020).

28 Nickel (Citation2019) provides an excellent overview.

29 It might be objected here that because the measure would seemingly do no harm there is no need to consider proportionality. But even though there might not be any direct harms resulting from the measure, there can be indirect harms (for example if there are opportunity costs to the measure) and also unintended harms. See also footnote 11.

30 Someone might claim that the Permissive View implies that economic or aesthetic benefits achieved through cyber-measures can be commensurate with the harms of war, but that these are not commensurable without adopting a problematic quantitative comparison. Note, however, that we do not hold that these things need to be strictly commensurable. To explicate, we can distinguish between strong and moderate forms of incommensurability. On strong incommensurability, it is strictly impossible to weigh values. They cannot be compared, and all-things-considered judgements are extremely problematic. By contrast, on moderate incommensurability, values may be incommensurable yet still possible to weigh. Although it is impossible to plot them against a single value, such as utility, to provide a clear cardinal ranking, it is possible to provide a sense of their relative importance. Moderate incommensurability accepts value pluralism while holding that values can still be weighed, for example through the method of reflective equilibrium. This view of incommensurability is also in line with much of just war theory and, more generally, the morality of self-defence, which attempt to offer determinate judgements that weigh various instrumental and non-instrumental considerations.

31 This means that our account must rely on a background theory of distributive justice. Such an account could take different forms, but for the present purposes a useful point of departure is John Broome’s account, whereby ‘fairness is concerned only with how well each person’s claim is satisfied compared with how well other people’s are satisfied’, and where fairness requires that ‘claims should be satisfied in proportion to their strength’ (Citation1990, 95).
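To illustrate Broome’s proportional rule in a minimal, hypothetical form (the formalization is ours, offered only as a sketch, not Broome’s own): if a divisible benefit $R$ is to be distributed among claimants with claim strengths $w_1, \dots, w_n$, then proportional satisfaction would give claimant $i$ the share
$$ r_i = R \cdot \frac{w_i}{\sum_{j=1}^{n} w_j}, $$
so that the ratio of satisfaction to claim strength is equal across all claimants.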

32 Also see Pattison (Citation2020b).

33 This is a central theme of recent work on priority-setting in global health – Norheim, Emanuel, and Millum (Citation2020) – featuring many of the leading bioethicists. Also see Du Toit and Millum (Citation2016), Lippert-Rasmussen and Lauridsen (Citation2010) (replying to Brock Citation2003), and Sharp and Millum (Citation2018).

34 We are not fully convinced that wrongful behaviour is even necessary for just cause, since there could be a just cause to respond to cases where there is no wrongful action, such as natural disasters where the state lacks the capacity to address the situation, but we leave this open here.

35 This goes further than Dipert, who argues that the ‘widely accepted “high” barrier for just cause – namely armed invasion by an enemy with an intention to use lethal force – does not seem to apply to many forms of cyberwarfare’ (2016, 66).

36 Here, McMahan uses the terminology of ‘basic, first-order principles of the morality of war’ rather than ‘deep morality’.

37 For instance, Fabre (Citation2012, 6) sees her account as ‘[u]nearthing first-best principles’.

38 Relatedly, the non-ideal approach to proportionality provides a response to another objection, namely that calculating all the potential goods and harms would be too time-consuming and demanding.

39 See, further, Pattison (Citation2019).

References