
Encoding the Enemy: The Politics Within and Around Ethical Algorithmic War

Pages 83-99 | Received 06 Dec 2022, Accepted 27 Jun 2023, Published online: 17 Jul 2023

ABSTRACT

This article develops a critique of the politics of algorithmic war and autonomous weapons systems. While much of the existing debate is focused on whether algorithmic weapons technologies can satisfy the ethics and laws of war, we argue that more focus should instead be placed on the politics within and around these systems. The theoretical contributions of Elke Schwarz, Louise Amoore, and Carl Schmitt are employed to critique the implied neutrality of ethical war algorithms and excavate the place of the friend/enemy distinction within them. Ronald Arkin's influential work on Governing Lethal Behavior in Autonomous Robots (2009) and the more recent DARPA-funded work of the Allen Institute for AI, which has led to the release of Ask Delphi, are discussed as examples of these issues. We conclude by calling for a radicalisation of the existing debate over algorithmic war and AWS in order to create space for anti-militarist and pacifist voices and to avoid reification of the ethical war discourse.

Introduction

Kate Crawford (Citation2021, 224) has argued in relation to applications of AI in general that “[t]o understand what is at stake, we must focus less on ethics and more on power” as “AI is invariably designed to amplify and reproduce the forms of power it has been deployed to optimize”. It is the aim of this article to take Crawford’s argument seriously by moving away from the predominant focus on ethical, legal, and moral concerns surrounding algorithmic war and autonomous weapons systems (AWS).Footnote1 We aim instead to focus on the ways in which inherently political decisions in war-fighting, most notably the distinction between friends and enemies, can be used to critique prevailing discourses surrounding algorithmic war and can point toward a critique of the ethics of war in general. This requires close attention to political questions: In what ways are the discourses of ethical algorithmic war political? How and where can we start to locate politics within algorithmic technologies for war? And what are the implications of understanding algorithmic war in this way? Turning the spotlight onto these political questions reveals the politics inherent within and around algorithmic military technologies and generates insight into the ways in which discourses of ethics and humanitarianism feed into the perpetuation of war.

In the first part of the article, we look to the recent work of Elke Schwarz (Citation2018) and Louise Amoore (Citation2018) in order to theorise the place of politics both around and within algorithmic weapons of war. The politics around algorithmic war refers to the discourses of precise and ethical warfighting that have accompanied the emergence of these technologies. This dimension, which is represented by Schwarz as a new and dangerous form of biopolitics, is related to a longer history of critique of the relationship between liberal humanitarianism and the ethics of war (Barnett Citation2011; Bricmont Citation2006; Cunliffe Citation2006; Citation2020; Der Derian Citation2009; Jabri Citation2007; Morefield Citation2014; Moyn Citation2021; Zehfuss Citation2018) that sees discourses of liberal values and international humanitarian law as integral to the perpetuation, rather than the limitation, of war.

We then argue that the discursive politics around algorithmic war are intimately tied to the politics within algorithmic systems, contrary to claims of neutrality and objectivity that frequently accompany them. A growing literature on algorithmic bias (Barocas and Selbst Citation2016; Benjamin Citation2019; Garcia Citation2016; Kordzadeh and Ghasemaghaei Citation2022; Noble Citation2018) has gained considerable critical traction in recent years by insisting on the need to recognise the politics within algorithms. Louise Amoore’s (Citation2009; Citation2011; Citation2014; Citation2018; Citation2019) accounts of the politics inherent within algorithmic systems are among the more powerful and provocative of these arguments and serve as an important theoretical foundation for our argument below.

In order to deepen and clarify the importance of these claims about the politics around and within algorithmic war, we supplement and synthesize the work of Schwarz and Amoore by returning to Carl Schmitt’sFootnote2 critique of the intersections of liberalism, technology, and just war theory (JWT). Drawing on Schmitt’s Concept of the Political and other later works, we argue that the friend/enemy distinction, which is the essential requirement of a political perspective within algorithmic technologies of war, negates the possibility of neutral and objective application of the ethics and laws of war through computational techniques. This then has consequences for the politics around algorithmic war, as it fatally undermines discourses claiming that new technologies can bring about the realisation of “virtuous war” on behalf of humanitarian ideals or humanity in general.

The second part of the article illustrates these theoretical issues via two important cases of US military-funded research projects that seek to develop algorithms capable of ethical or “common sense” reasoning: Ronald Arkin’s influential work on Governing Lethal Behavior in Autonomous Robots (Citation2009) and the more recent DARPA-funded work of the Allen Institute for AI, which has led to the release of “a research prototype designed to model people’s moral judgments on a variety of everyday situations” called “Ask Delphi” (Allen Institute for AI Citation2021). While there are many other states pursuing algorithmic weapons, we believe there are good reasons for focusing particularly on developments in the United States. First, the US spends more money on research and development than any other state (between US$20–25 billion across all warfighting domains in 2022: See NSCAI Citation2021, 80), and leads a broad coalition of partners and allies in developing ethical concepts for AI-driven defence (NSCAI Citation2021, 82–85). Second, along with other Western states, it emphasises ethical warfare as a part of its global identity (Zehfuss Citation2018). We argue that this American techno-ethical discourse tends to conceal political relations of enmity and, as a consequence, the political interests in service of which the weapons will be used.

In the final section, we consider the implications of this critique for ongoing attempts to develop ethical algorithmic weapons and for the debate surrounding them. We argue that focusing on the politics within and around algorithmic war should generate scepticism regarding the possibility of using such technologies to both enhance warfighting and comply with the ethics and laws of war. More broadly, we suggest that focusing on the friend/enemy distinction as an inescapable dimension of all war-fighting—no matter how technically advanced—serves as a reminder of the inescapably human decision-making that produces the brutality and inhumanity of war in general. This, we argue, should lead to a radical recasting of the current debate away from demands for more ethical and legal restraints on war and toward stronger anti-war or pacifist positions.

The politics of ethical algorithmic war

A key element of the push for greater reliance on algorithms and artificial intelligence (AI) in a wide range of social and economic areas lies in the belief that such tools have the potential to perform complex tasks more quickly, efficiently, and objectively than a human might otherwise be able to (NSCAI Citation2021, 7, 19, 36). As Crawford (Citation2021, 212) argues, research in this area proceeds on the basis that the machines being developed “must be smarter and more objective than their flawed human creators”. This fetishisation of AI technology is premised upon a commitment to scientific rationalism as a method that can free us from the complexity and indeterminacy of social and political life. A consequence of this, as Abeba Birhane (Citation2021, 2) contends, is that:

socially and politically contested matters that were traditionally debated in the open are now reduced to mathematical problems with a technical solution … The mathematization and formalization of social issues brings with it a veneer of objectivity and positions its operations as value-free, neutral, and amoral.

This belief that algorithms can deliver perfect and objective knowledge is based on “a misconception that injustice, ethics, and bias are relatively static things that we can solve once and for all” (Birhane Citation2021, 6). As such, Crawford insists on the urgent need to recognise that:

[a]rtificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction … AI systems are built to see and intervene in the world in ways that primarily benefit the states, institutions, and corporations that they serve. In this sense, AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them (Crawford Citation2021, 211).

These broader questions surrounding algorithmic neutrality and politics are consequential for our critique of the ethics of algorithmic war and AWS. In the military field, dreams of perfect, objective knowledge through the deployment of technological means—understood in terms of a “closed world”—have informed US investment and strategy for several decades (Bousquet Citation2009; Edwards Citation1996). In the present era, increasing reliance on algorithms as an advanced proxy for human judgement in war is based on what Suchman (Citation2022, 2) describes as “the enduring premise of objectivist knowledge enabled through a war fighting apparatus that treats the contingencies and ambiguities of relations on the ground as noise from which a stable and unambiguous signal can be extracted”. The current pursuit of algorithmic war represents a new high-water mark in the history of techno-fetishism in military affairs, as captured in the effusive claim in the Final Report of the US National Security Commission on AI (NSCAI Citation2021, 22) that:

In the future, warfare will pit algorithm against algorithm. The sources of battlefield advantage will shift from traditional factors like force size and levels of armaments to factors like superior data collection and assimilation, connectivity, computing power, algorithms, and system security.

Importantly, competition for algorithmic superiority is not only justified in terms of winning wars, but also as necessary for making war more lawful and ethical through technologies that can take humans out of harm’s way and that promise more precise targeting of enemy forces (NSCAI Citation2021, 92–95; Schwarz Citation2018; Zehfuss Citation2018). There is, therefore, a strong nexus between the desire for AI-driven technological innovation in weapons of war and the ethics and laws of war that have come to dominate thinking on how to fight well and for good causes over the past century. For states that wish to represent their uses of military force in terms of being for the good of humanity or in pursuit of humanitarian ideals, it is vital that the means match the purported ends. Concern about fighting wars justly is, in this way, tied to the related belief in and commitment to the idea that wars can be waged for objectively just reasons based in universal values such as human rights. We argue that this technologized discourse of just war works to conceal the political decisions and functions of war behind the veil of an apparently universal ethics.Footnote3

This brings us to some key questions to be examined in the remainder of this section: what kinds of politics and power are we not seeing in the predominantly ethical debates over the development and deployment of AWS? And how might we better get at these obscured political dimensions of these technologies?

An account of the relation between the politics and ethics of algorithmic war is found in Elke Schwarz’s Death Machines (Citation2018). In this book, Schwarz draws on the work of Hannah Arendt and Michel Foucault (amongst others) in claiming that algorithmic war represents a new and dangerous form of biopolitics, in which “acts of political violence are framed as necessary technical acts for the securitisation of progress and survival for the human and humanity at large” (Schwarz Citation2018, 25–26). The pursuit of ethics algorithms in the security and military space has a depoliticising effect, insofar as it seeks to deliver consistently rule-driven ethical behaviour that is justified in biopolitical terms as the management of the life of the body politic. Contrary to its promise, however, “there is a real and serious danger that the medical-military framework serves to escalate scientific-technological rationales for violence, rather than reduce violence” (Schwarz Citation2018, 280).

This approach, Schwarz (Citation2018, 76–77) argues, reveals the politics of algorithmic war to be a kind of depoliticisation or “anti-politics”; one that, in claiming an objective or neutral position, denies its own political positioning. Biopolitics, in the context of techno-war, can therefore be taken as an entry point toward the revealing of the existence of sovereign power and relations of political enmity that are concealed by the liberal-humanitarian promise of global policing and surgical strikes. Yet, Schwarz does not take the step of extending the biopolitical framing of algorithmic war into a general critique of the ethics of war or just war theory. This hesitancy is evident in her concluding call to search for “practical solutions and concrete ideas as to how to reform just war discussions, practicalities of guidelines for fighting in war or concrete reforms for policies as to what, indeed, constitutes ethical acting in international violent engagements” (Schwarz Citation2018, 302–303).

We believe it is necessary to push the radicalism of this argument a little further. For while Schwarz’s biopolitical critique is valuable and powerful, it runs the risk of missing some critical issues specifically related to algorithmic war due to its focus on the discourses around rather than the politics within these emerging technologies. In biopolitical terms, this might require asking: specifically who or what is the enemy that needs to be eliminated in order to secure the health of the body politic? How is this enemy decided upon? And can algorithmic technologies adequately capture and recognise the quality of enmity in a way that will be useful in battlefield situations whilst retaining a commitment to the ethics and laws of war?

Louise Amoore’s (Citation2018, 7) conception of the algorithm as “always already an ethicopolitical entity” takes us closer to an investigation into these kinds of questions, as it calls for more attention to be paid to the politics within algorithmic systems themselves. For Amoore (Citation2018, 6), human bias and prejudice are inherent in all algorithmic systems and the pursuit of “codes of ethics that instil the good, the lawful, or the normal into the algorithm” is therefore beyond resolution and unsustainable. Indeed, “many of the features that some would like to excise from the algorithm—bias, assumptions, weights—are routes into opening up their politics” (Amoore Citation2018, 7–8).

And this is a politics that urgently needs opening up, as “decisions taken on the back of algorithmic calculations … conceal political difficulty, even discrimination and violence, within an apparently neutral and glossy techno-science” (Amoore Citation2009, 54). Put differently, “the algorithm … conceals the architecture of enmity through which it functions” (Amoore Citation2009, 57). Thus, “while the identification of risk is assumed to be already present within the calculation” it is always and necessarily “made on the basis of prior judgments about norm and deviation from norm” (Amoore Citation2009, 54). Following Amoore, therefore, it can be argued that research projects aimed at generating algorithmic ethical codes within autonomous military technologies represent futile but politically salient attempts to obscure, flatten, and neutralise the decisions and judgements about friendship and enmity that are a necessary pre-condition for all instances of war.

The spectre of Carl Schmitt’s Concept of the Political and his associated critique of the depoliticising trends of the laws and ethics of war in the twentieth century looms large in this context.Footnote4 As is well known, Schmitt understood the concept of the political in terms of a friend/enemy distinction which, in extreme cases, generates situations where people “could be required to sacrifice life, authorized to shed blood, and kill other human beings” (Schmitt Citation1996b, 35). It is through political relations of friend and enemy that the possibility and even the desirability of killing the enemy is rationalised. Situations of war are, therefore, logically inconsistent with universal values or justice. The “inherent reality” of politics as a friend/enemy distinction (Schmitt Citation1996a) and the associated risk of lethal violence was, however, one that came to be increasingly denied in the post-WWI era.

It was during this period that Schmitt’s concept of the political developed in tandem with his critique of just war politics and the development of international humanitarian law (IHL) or the law of armed conflict (LOAC) (Slomp Citation2009). The return of just war thinking in the eighteenth, nineteenth and twentieth centuries was, Schmitt argued, indicative of the neutralising tendencies of liberalism, generating the need to think of one’s enemies not as a justus hostis but as an “unjust enemy” of humanity (Benhabib Citation2012). The desire to use humanitarian justifications for war functioned as a moral gloss for an inherently brutal, political enterprise (hence the much-invoked phrase “whoever invokes humanity wants to cheat”), but one that had “certain incalculable effects, such as denying the enemy the quality of being human and declaring him to be an outlaw of humanity”, with the consequence that “war can thereby be driven to the most extreme inhumanity” (Schmitt Citation1996b, 54). The parallels with the concerns of Schwarz and Amoore are evident here.

Schmitt (Citation2011, 31, 73) claimed in 1937 that this “discriminating concept of war”, in which one side was able to declare itself just and lawful at the expense of its enemy, was a specific style of just war politics inaugurated by Woodrow Wilson’s moralisation of the US decision to enter WWI. This “modern disposition” toward war, Schmitt (Citation2011, 31) argued, “requires the procedures of legal or ethical ‘positivization’ for a just war”. This was a normative change that had resulted in “the total jolting of the old concept of war” and, “[i]n practical terms … war and yet no war at the same time; anarchy; and chaos in international law” that would “stand in the path of a true community of nations” (Schmitt Citation2011, 73–74).

Modern just war theories, according to Schmitt in his later work The Nomos of the Earth (Citation2003, 321), needed to be understood as “ideological phenomena attending the industrial-technical development of modern means of destruction”. The ideological politics of just war principles in international law in their relation to new technologies of war present new and dangerous challenges to humanity, as:

The victors consider their superiority in weaponry to be an indication of their justa causa, and declare the enemy to be a criminal, because it no longer is possible to realize the concept of justus hostis [just enemy]. The discriminatory concept of the enemy as a criminal and the attendant implication of justa causa run parallel to the intensification of the means of destruction and the disorientation of theaters of war. Intensification of the technical means of destruction opens the abyss of an equally destructive legal and moral discrimination. (Schmitt Citation2003, 321)

The linking of technology with the friend/enemy distinction and the ethics of war was, therefore, a focus of Schmitt’s thought throughout the twentieth century (McCormick Citation1997). While technology held out the prospect of universal equality and peace through promises of depoliticisation and neutralisation, Schmitt (Citation1996a, 90–91) insisted that:

Technology is always only an instrument and a weapon; precisely because it serves all, it is not neutral. No single decision can be derived from the immanence of technology, least of all for neutrality. Every type of culture, every people and religion, every war and peace can use technology as a weapon. Given that instruments and weapons become ever more useful, the probability of their being used becomes that much greater.

How the “century of technology” would ultimately be understood would only be revealed, Schmitt argued, “when it is known which type of politics is strong enough to master the new technology and which type of friend-enemy groupings can develop on this new ground” (Schmitt Citation1996a, 95).

It is precisely this question that we seek to examine in relation to algorithmic war, using Schmitt’s concept of the political to draw attention to the ways in which the politics of war are present both within and around these new weapons technologies. International conflict has come to be justified (in biopolitical terms) as policing action on behalf of universal principles of justice or the health and safety of humanity in general, leading to a sense that there are natural enemies of humanity that can and should be legitimately attacked and eliminated. Technologies of war are intimately related to this prevailing attitude, manifesting both a belief in the legitimacy of defending ostensibly universal (or neutral) values and needing specific tools to undertake this task in a way that is congruent with that belief. The research and development currently underway in the US on creating algorithmic weapons that can “exceed human performance” in waging ethical war (NSCAI Citation2021, 7) should be understood as the latest and most intense iteration of this interweaving of technology and ethics. In order to open up the politics of these algorithmic technologies, the following section probes the “bias, assumptions, and weights” (Amoore Citation2018, 8) that lie within, with a particular focus on how they distinguish friends from enemies and the potential consequences of this.

The friend/enemy distinction in algorithmic ethics for the US military

Ronald Arkin’s ethical robotics

The US military has funded and continues to fund projects related to the development of autonomous weapons systems with autonomously-functioning ethical or legal constraints. One of the earliest and most important recipients of this financial support for the development of ethical military robotics is the roboticist Ronald Arkin. Arkin has worked on unmanned military systems research for several decades, much of which was funded by DARPA, the US Army, the US Navy, and a range of military contractors (Arkin Citation2009, xii–xiii). But it was in the early 2000s that Arkin describes having an “awakening” regarding the need for ethical control of military robots, based on the belief that it is “our responsibility as scientists to look for effective ways to reduce man’s inhumanity to man through technology” and “that research in ethical military robotics can and should be applied toward achieving this end” (Arkin Citation2009, xvi). It is from this basis that Arkin’s Citation2009 book, Governing Lethal Behavior in Autonomous Robots, proceeds.

What is notable in Arkin’s work is the view that developing ethical military robots is a solvable problem that simply requires the right combination of technical fixes. This confidence in the concreteness of ethical principles as applied to war-fighting is manifest in the various examples or excerpts from the Laws of War (LOW) and from US military Rules of Engagement (ROE) that are reproduced throughout the book. It is from these texts, Arkin argues, that clear principles can be derived to ensure ethical conduct by robotic weapons systems in future wars. Thus,

[e]specially in the case of a battlefield robot (but also for a human soldier), we do not want the agent to be able to derive its own beliefs regarding the moral implications of the use of lethal force, but rather to be able to apply those that have been previously derived by humanity as prescribed in the LOW and ROE. (Arkin Citation2009, 116–117, emphasis added)

Arkin’s expectation of superior adherence to these ethical principles is premised on the exclusion of human emotion and prejudice from decision making on targeting and attack (Arkin Citation2009, 29–30). It is precisely cold, scientific rationalism in combination with the ethical principles of “humanity” that would enable robots to “outperform human soldiers in their ethical capacity” (Arkin Citation2009, 211). The combination of this hyper-rationalistic view of encoded ethics and Arkin’s expertise in robotic engineering underpins the development of his “prototype core control algorithm for ethical governor”.

The problem of applying purportedly universal ethics in a situation where lethal distinctions are made between friends and enemies has already been outlined in the preceding section. Yet, Arkin’s algorithm contains assumptions about the capacity of robotic systems to distinguish between “targets” and “friendly forces”. How would a robotic system know who was a friend and who was an enemy in a battlefield situation where all parties are firing upon each other and may not be in recognisable or visible uniforms? And how would those targets then be attacked in keeping with the principles of proportionality? On this question, Arkin suggests that:

The proportionality optimization algorithm uses … statistics as well as incoming perceptual information to determine the battlefield carnage in a utilitarian manner by estimating the amount of structural damage and the number of noncombatant/combatant/friendly casualties that result from the use of a weapon system at a particular target location. Fratricide is restricted to always be zero; the killing of friendly forces by the autonomous system is specifically forbidden under all circumstances … the proportionality algorithm maximizes the number of enemy casualties while minimizing unintended noncombatant casualties and damage to civilian property as indexed by a given military necessity for the designated and discriminated target. (Arkin Citation2009, 187)

The first notable point here is that the restriction of “fratricide”—i.e. the killing of “friendly” forces—to zero is not commensurate with a genuine utilitarian approach that benefits humanity, as it relies upon an encoded algorithmic bias that favours the lives of US and allied troops.Footnote5 Further, the means by which “friendly forces”, civilians, and enemies are identified is not explained in this excerpt. The closest Arkin gets to addressing this problem is by reference to pre-defined “kill boxes”, “candidate targets and target classes [that] must be identified in advance” (Citation2009, 147, emphasis added) and the need for Identification, Friend or Foe (IFF) markers for combatants (Citation2009, 169).

The need for deliberate, pre-determined markings to distinguish between friend and enemy is revealing. It demonstrates that the justness of a combatant is not objectively decipherable and that there is no way for an algorithmic system to distinguish between friend and enemy based on the processing of sensor data in the absence of such markings. If markers or target lists are needed to distinguish friends from enemies, all of whom may be carrying out lethal attacks that an “objective” robot may find unethical or illegal, then the algorithm is not (and cannot be) making decisions based on universally-valid ethical principles, but on a pre-determined friend/enemy distinction. Thus, when a political decision is made as to who our enemy is, the possibility of a universal ethics that can be algorithmically programmed into a machine is negated; the robot is then serving a political cause that requires intentionally biased programming in order to produce the required effect.
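To make this structural point concrete, the sketch below (our own illustration, not Arkin’s published code; all names, labels, and casualty estimates are hypothetical) shows how a proportionality-style optimisation of the kind quoted above would have to be organised: friend/enemy labels arrive as pre-assigned inputs, fratricide is excluded by a hard constraint, and only then does any “utilitarian” calculation run.

```python
# Illustrative sketch only: not Arkin's code and not a workable targeting
# system. All names, labels, and casualty estimates are hypothetical.

from dataclasses import dataclass, field


@dataclass
class StrikeOption:
    weapon: str
    # Expected casualties keyed by a label assigned in advance (e.g. via
    # target lists or IFF markers), not inferred by the algorithm itself.
    expected_casualties: dict[str, int] = field(default_factory=dict)
    military_necessity: float = 1.0


def permissible(option: StrikeOption) -> bool:
    # Hard constraint mirroring "fratricide is restricted to always be zero":
    # any expected friendly casualties rule the option out entirely.
    return option.expected_casualties.get("friendly", 0) == 0


def utility(option: StrikeOption) -> float:
    # Maximise enemy casualties while penalising noncombatant harm, scaled by
    # a human-supplied assessment of military necessity. Every term presumes
    # the prior political decision about who counts as the enemy.
    enemy = option.expected_casualties.get("enemy", 0)
    noncombatant = option.expected_casualties.get("noncombatant", 0)
    return option.military_necessity * enemy - noncombatant


def select_strike(options: list[StrikeOption]) -> StrikeOption | None:
    # Choose the highest-utility option among those satisfying the constraint.
    candidates = [o for o in options if permissible(o)]
    return max(candidates, key=utility, default=None)
```

Nothing in such a routine derives the categories of “enemy”, “friendly”, or “noncombatant”; they must be supplied in advance by the political collective the system serves.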

This is an issue that is common to the most highly automated weapons systems that are currently in use. In relation to “signature strikes” carried out by drones, for example, US intelligence services use metadata to support decisions to target and attack individuals based on their “patterns of life” (Chamayou Citation2015, 46–51; Weber Citation2016), but the suspect nature of these patterns—that is, the decision on who the enemy is—is not determined by the machine itself but rather through political decisions of the human collective that it serves. Similarly, in existing loitering munitions that target certain radar frequencies (Voskuijl Citation2022), a pre-programmed set of such frequencies must be decided upon and entered by a human operator who understands that these are signals used by an enemy within a specified territory. None of this can be said to derive from universal or objective rules. In all cases, ethics are secondary to and derived from the original decision on political enmity that has reached a lethal intensity.

While Arkin’s research has been subject to criticism from some figures associated with the campaign against AWS, such as Noel Sharkey (Citation2012), these criticisms have focused primarily on the question of whether such an algorithm could conform with the ethics and laws of war. Sharkey is quite right to point out that Arkin’s work leaves many unanswered (and probably unanswerable) questions around how robots would be imbued with sufficient situational awareness or emotion to adequately uphold the ethics and laws of war (see also Suchman Citation2022). But his argument that human soldiers are capable of this (Sharkey Citation2012, 796) is reflective of the narrowness of the existing debate on AWS, as it takes place entirely within parameters that accept the possibility of ethically-waged just wars against unjust enemies. The purpose of this article, on the other hand, is to uncover the politics within and around ethical algorithmic war as a prism for understanding the essentially political nature of all war and the associated weaknesses of the ethics and laws of war in general. This becomes particularly evident in a more recent attempt to achieve algorithmic ethics to which we now turn our attention.

Delphi and machine common sense

Over a decade after the publication of Arkin’s book, the project of machine ethics is ongoing and the politics of ethics is persistent in new, cutting-edge US military-funded research. The Allen Institute for AI’s Delphi (Jiang et al. Citation2022) is one example of a DARPA-funded project that uses natural language processing (NLP) “to model people’s moral judgments on a variety of everyday situations” (Allen Institute for AI Citation2021). The research behind Delphi is funded through DARPA’s Machine Common Sense (MCS) programme, which aims to encode the kind of situational and contextual understanding in machines that is assumed in Arkin’s earlier work (Sharkey Citation2012).

The MCS programme is significant as part of a turn in DARPA’s funding of AI research. Support for this push comes from the AI research community (DARPAtv Citation2018; Hutson Citation2022) and military and technology leaders (NSCAI Citation2021). New funding represents an attempt to refocus from narrow, task-specific AI based on machine learning (e.g. machine translation, facial recognition) to general AI, including “fully autonomous systems” acting in complex environments (Gunning Citation2018, 2). The MCS concept paper authored by the first MCS programme manager, David Gunning (Citation2018, 2), suggests that encoding common sense will allow machines to make sense of the environment and other actors in the environment, to learn and adapt in the field to new situations, to team with human collaborators, and to “[monitor] the reasonableness” of the machine’s own actions. Gunning borrows Wikipedia’s definition of common sense: “the basic ability to perceive, understand, and judge things that are shared by (“common to”) nearly all people and can reasonably be expected of nearly all people without need for debate” (Gunning Citation2018, 1). This shared-ness of human judgments is the crux of assumptions about Delphi as an ethics machine.

Delphi is described by its developers as an “AI system for commonsense moral reasoning” (Jiang et al. Citation2022, 4) and it produces its judgments as predictions from natural language prompts (e.g. “killing a person” is “wrong”, “killing a spider” is “ok”, “not replying to emails” is “rude”). The system was built through multiple stages of training and fine-tuning, starting with a language model trained on a large dataset of text from Common CrawlFootnote6 and then tuned for a variety of language understanding tasks. Through subsequent training, the model was fine-tuned as a “universal commonsense reasoning model” and then to predict “commonsense norm[s]” from text (Jiang et al. Citation2022, 4, 13–14). Delphi attracted criticism on social media and from academics (Talat et al. Citation2021) when it was first released as Ask Delphi, a public web interface. Public users shared prompt formulations that resulted in perverse predictions (e.g. “It’s ok to do [INSERT TERRIBLE ACT] if it makes me happy”) or reproduced judgments aligned with negative stereotypes related to race, gender and other identity categories.
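Because Delphi has been released publicly as the Ask Delphi web interface rather than as code that can be called directly, the prompt-to-judgment pattern described above can only be sketched here with a generic sequence-to-sequence model. In the hedged sketch below, the checkpoint name is a hypothetical placeholder rather than the actual Delphi model, and the judgments illustrate the output format (e.g. “it’s wrong”, “it’s ok”) rather than reproducing Delphi’s responses.

```python
# Hedged sketch of the prompt-to-judgment pattern, using the Hugging Face
# transformers library. "example-org/moral-judgment-t5" is a hypothetical
# placeholder checkpoint, not the Delphi model itself.

from transformers import pipeline

judge = pipeline("text2text-generation", model="example-org/moral-judgment-t5")

situations = [
    "killing a person",
    "killing a spider",
    "not replying to emails",
]

for situation in situations:
    # A free-text description of a situation is mapped to a short normative
    # judgment (e.g. "it's wrong", "it's ok", "it's rude").
    output = judge(situation, max_new_tokens=10)[0]["generated_text"]
    print(f"{situation!r} -> {output}")
```

As the discussion of Table 1 below indicates, framing effects enter at exactly this prompt level: the judgment shifts with how the situation, and implicitly the friend/enemy relation, is described.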

For the Delphi developers, the “common sense” sourced from the authors of scraped web texts and Amazon Mechanical Turk crowd-workers offers a way to escape accusations about encoding a particular (biased) form of ethics. Delphi’s architecture and foundations in common sense are rationalised as a way to sidestep conclusions about whether “aggregating across many moral judgments may converge on objective moral truth” (Jiang et al. Citation2022, 28). Instead, the creators claim to adhere to a Rawlsian “reflective equilibrium” in which “top-down constraints” derived from philosophical insights are combined with a “bottom-up” model of “human moral sense” (Jiang et al. Citation2022, 4, 24). Thus, in response to the controversy, Delphi’s developers have since retrained their system, using human-corrected judgments related to a finite list of identity-related terms. Even with these constraints to prevent harmful judgments related to race, gender and other identity categories, the Delphi system still reveals foundational political biases: “It’s okay”, for example, for the United States to use autonomous weapons or nuclear weapons to defend itself, but “It’s wrong” for China.Footnote7 The Ask Delphi tool demonstrates, therefore, how politicised framings (i.e. those that specify, even if only implicitly, some form of friend/enemy distinction) of military force transform ethical responses (see Table 1):

Table 1. Ask Delphi's responses to depoliticised and politicised framings of war.

A further illustration of the irreconcilability of ethics and war is evident in the desire of Delphi’s developers to deploy the Universal Declaration of Human Rights (UDHR) as a baseline for testing for and restricting bias within the model (Jiang et al. Citation2022, 22–24). Thus, “while the exact requirements of equality and justice are matters of vigorous debate … we operate under the assumption that all identities should have all UDHR rights, and any model disagreement is evidence of bias” (Jiang et al. Citation2022, 23). Much of the updated version of the paper is dedicated to justifying and explaining the need for “applying top-down guarantees of human equality and dignity” in addition to bottom-up training based on relevant big data inputs that reflect a “common sense” derived from (sometimes unjust, inequitable, or biased) public opinion. As an example of the paradoxes this generates, to our question “is it right to kill the enemy?”, Delphi answered “you should” and even went as far as responding that “it’s okay” to kill “an enemy disguised as a civilian”. Whether the model’s contradictory answer of “yes, they have a right” to the question “does my enemy have a right to life?” reflects a “top-down” prioritisation of human rights coded into the algorithm by the developers in their attempt to eliminate bias and to move their model closer to something that provides universally-valid ethical statements is unknown. Importantly, however, this attempt at delivering a universal ethics via adherence to the UDHR would simultaneously render the model unfit for military applications: how could a machine that operates to respect the right to life without bias (even for its enemies) authorise the launch of a lethal weapon? (see Table 2).

Table 2. Delphi’s responses to inputs regarding the killing of enemies.

Examination of Delphi’s responses through the lens of the friend/enemy distinction enables us to see the politics within the algorithm with greater clarity. Whether in the form of crowd-sourced assessments of the ethics of different situations or through the top-down controls derived from philosophical insights on ethical conduct and human rights, Delphi is representative of what Amoore (Citation2018, 21) refers to as “the entangled learning of humans with algorithms” in which we are all potentially implicated as data producers. The upshot of this is that the political bias required to deliver value judgements that are useful to the operators of such a model (were it ever to be deployed as a part of a weapons system) is always a product of human input and is not derived from universal or unbiased moral principles that objectively distinguish right from wrong. Delphi, therefore, illustrates that “all machine learning algorithms always already embody assumptions, errors, bias, and weights that are fully ethicopolitical” (Amoore Citation2018, 75).

This example also speaks directly to the discursive politics around ethical war; that is, the representation of algorithmic war as highly ethical. While there is no suggestion in the 2022 paper that Delphi is intended as a tool for ethical control of AWS, the 2021 version of the paper foregrounded the relevance of the research for military applications in its introduction, stating that “military forces may be unwilling to cede an edge to a less principled or more automated adversary. Thus, it is imperative that we investigate machine ethics—endowing machines with the ability to make moral decisions in real-world situations” (Jiang et al. Citation2021, 2).

A number of issues arise here. Firstly, we see the explicit link to the long-term programme of US funding to develop algorithms as tools of war. Despite this research being enabled and motivated by these military goals, there is a total absence of any theorising of the ethics and politics of war in the development of this technology. Secondly, through a Schmittian lens, we see the explicit flagging of the “less principled” enemy, with machine ethics justified in terms of the unjust enemy and the need to ensure that any military advantage realised through lethal applications of AI is pursued on the basis of a moral self-image. Thirdly, the above quote justifying the reach for ethical war machines exemplifies an important rhetorical function of ethical discourse around algorithmic war that the Delphi authors ignore: discourse about ethics justifies and legitimates action. The 2022 Delphi paper, with discussion of potential military applications removed after the spotlight of controversy, illustrates this function of discourse about ethics. By citing contemporary work on AI ethics and omitting discussion of potential military applications, the authors recast Delphi as a tool for mitigating AI bias and “a step towards inclusive, ethically-informed, and socially-aware AI systems” (Jiang et al. Citation2022, 28). The rhetorical appeal to such an end-point for machine ethics research may well help to reinforce the ethical self-image projected in US military discourse, but a system like this is unlikely to ever be of use in the actual exercise of lethal force.

The Delphi example serves to expose both the incompatibility of war with objective or universal ethics and the impossibility of designing an algorithm that could enact these ethics on the battlefield. The problem is not a lack of data or a faulty algorithm; it is the irreconcilability of ethical positions in a situation where two collectives are actively trying to kill each other in defence or promotion of their political interests. The more developers try to mitigate algorithmic bias and ratchet up the justness and fairness of their technologies, therefore, the less useful their platforms will become for actual targeting and killing.Footnote8 In wording that seems to demonstrate an awareness of this problem, the US Department of Defense policy directive on autonomy in weapon systems (Office of the Under Secretary of Defense for Policy Citation2023, 6) specifically refers to the need to “minimize unintended bias in AI capabilities”. Intended bias, which we have referred to here in terms of the friend/enemy distinction, will always be needed in algorithmic weapons systems and it will need to be specifically encoded in those systems by human developers and operators on behalf of the political interests they represent. This insight has significant consequences for the way in which algorithmic weapons are perceived in current debates over their regulation, some of which we will now address by way of conclusion.

Conclusion

We have argued in this article that there are important relations between the politics within and around ethical algorithmic war that must be understood in order to gain insight into what the implications of emerging weapons technologies may be and what this means for the ethics and practice of war in general. This examination of the politics of ethical algorithmic war through the lenses of Schwarz, Amoore, and Schmitt paints a mixed picture of the direction we are heading in. On the one hand, it is clear that these new technologies present new dangers, particularly, as Schwarz argues, through the intensified biopolitical discourses of war and security that they enable. There is a high likelihood that we will see force being applied in new and more fragmented ways (Bousquet Citation2018; Chamayou Citation2015) as new friend/enemy distinctions develop alongside algorithmic war and security practices. On the other hand, Amoore’s analysis reminds us that the (ethico)politics within algorithms, regardless of their application, remains irreducibly human. In the military field, as in other areas to which AI is being offered as a solution, there is no prospect of these technologies serving humanity in a neutral or unbiased way; they will be developed and deployed to serve power, as Crawford reminds us. Thus, while discourses of ethics will continue to swirl around algorithmic war, the basic political function of the weapons remains: if they cannot be deployed in the service of power (and in war that means in the defeat of the enemy), they will not be deployed at all. In this sense, the failure to achieve enhanced battlefield ethics through mathematical formulae is not a new problem for new technologies, but one for war in general, whether carried out by humans or robots. Attempts to tame the bias, brutality, and irrationality of war through techno-ethical means are, therefore, bound to fail (Zehfuss Citation2018).

This argument is consequential for campaigners engaged in the ongoing debate over the regulation of AWS. It suggests that continuing to debate algorithmic war primarily in ethical or legal terms does not get to the heart of the problems associated with emerging weapons technologies and runs the risk of reifying the biopolitical discourses that are being deployed to justify their development. What is instead required is activism that exposes the concrete ways in which algorithms and AI systems will be used in war-fighting, who the users are likely to be, and what interests they will be seeking to serve in deploying AWS; in other words, the inescapably human politics of war. Activism on AWS would, from this point of view, benefit from a critical focus on the networks of power bringing autonomy into being, the shape these technologies will take, and the military uses of these tools now and in future. Increased academic and activist focus on DARPA-funded research projects that seek to digitise ethics, common sense, and other senses could be an important starting point for shifting the terms of this debate. That means moving some of the focus away from ethics and law derived from just war theory and humanitarianism and toward the actual purposes of existing technologies, research programmes, militaries, and the interests of the states that are funding them.

Ironically, just as adherence to the ethics and laws of war is promised to us in ever more elaborate, techno-rationalistic, algorithmic forms, the extreme, discriminatory politics of war are more clearly revealed. The aspiration to create a “closed world” of ethics for weapons systems is incongruent with the basic politics of war—the decision to sanction violence against people defined as enemies—and is therefore fundamentally flawed. Both Ronald Arkin’s musings on an ethical governor and the attempts by the Delphi project to reframe and retrain their model demonstrate perverse, discriminatory, biased, and just plain contradictory outcomes in trying to encode purportedly universal ethical systems. To put it another way, algorithmic “apertures” (Amoore Citation2018, 15–17) for AWS must be focused deliberately and specifically on their enemy if they are to be of any use for the exercise of military power. While we have, through the discourses of humanitarian intervention and the war on terror, become accustomed to thinking of enemies of the United States and their partners as enemies of humanity, this should also be resisted as a manifestation of intense political bias in and of itself. The most ideologically and morally consistent way to resist the continued extension of “violence as politics” that this generates (Schwarz Citation2018, 153–155) is to reignite anti-war and pacifist campaigning, rather than trying to perfect the regulation of war itself (Moses Citation2018a; Citation2018b; Citation2020). Suchman (Citation2022, 2), for example, frames this in terms of a need for “public debate and a re-envisioning of the future place of the US in the world, founded in comparable investments in creative diplomacy and a transition to demilitarization”. As such, if we are concerned about how these weapons might be deployed in the future, the important question to ask is not whether their array of sensors and algorithms will be able to recognise and kill actual enemies in a legal or ethical manner, but whether we should ever allow for such actions to be legalised, ethicised, or rationalised at all.

Acknowledgements

The authors acknowledge the generous support for this research provided by the Royal Society of New Zealand’s Marsden Fund. They would also like to thank the editors of this special issue, Ingvild Bode and Guangyu Qiao-Franco, for their support throughout the writing and publishing process and the two anonymous reviewers for their careful and helpful critiques of the initial submission.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Marsden Fund [grant number 19-UOC-068].

Notes

1 We take a broad understanding of algorithmic war and AWS along the lines adopted by the US Department of Defense: “the application of lethal or non-lethal, kinetic or non-kinetic, force” by “autonomous and semi-autonomous weapon systems … that are capable of automated target selection” (Office of the Under Secretary of Defense for Policy Citation2023).

2 We are aware of the ethical problems associated with drawing on Schmitt’s work given his life history as a member of, and unrepentant apologist for, the Nazi Party in the 1930s and 40s (Moses Citation2014, 6–10). It is our view, however, that few theorists have offered as clear and incisive an account of the relationships between liberalism, imperialism, just war theory, and technology, and that Schmitt’s thinking needs to be grappled with in this context. This does not in any way represent or require an endorsement of his personal commitment to Nazism or his theoretical preference for authoritarian rule or European domination of the international legal order.

3 In addressing these questions, we use ethics—or, more specifically, the ethics of war—as it is defined in much of the mainstream debates over AWS. In this context, ethics is frequently used as shorthand for a broad amalgam of principles derived from just war theory, international humanitarian law, rules of engagement, and human rights. This mixture of moral, legal, and military principles provides the key reference points for those seeking to develop ethical AWS, as illustrated in the two case studies below. The assumption that this broad understanding of ethics carries with it is that if weapons systems can be developed that provide greater adherence to these principles, then that is a positive development for future war. Louise Amoore’s (Citation2018, 7) distinction between “ethics as code” and “ethics as the inescapably political formation of the relation of oneself to oneself and to others” is a useful reference point in this regard. “Ethics as code” has undoubtedly dominated mainstream debate on AWS and other AI technologies and is the focus of the critique advanced in this paper.

4 Of the discussed texts, only Schwarz (Citation2018, 126–128) discusses the influence of Schmitt, primarily in the context of Chantal Mouffe’s concepts of political agonism and antagonism.

5 Arkin (Citation2009, 87) quotes ROEs from Desert Storm, including “Avoid harming civilians unless necessary to save U.S. lives” and “Do not engage hospitals unless the enemy uses the hospital to commit acts harmful to U.S. forces”, further emphasising the bias toward the lives of US soldiers in the ethical system he is proposing.

7 This was based on the prompt structure “Using [type of weapon] to defend [nation-state]”. “Favouring the United States over other countries” predicts “It’s understandable”.

8 This does not preclude the possibility of algorithms being useful as symbols of ethical endeavour for the military; that is, they may still play an important discursive role in legitimating lethal attacks, even if they are not useful in deciding and carrying out the attacks themselves. A system like Delphi, for example, could be a starting point to generate ethical discourse to justify action.

References

  • Allen Institute for AI. 2021. “Ask Delphi.” https://delphi.allenai.org/.
  • Amoore, Louise. 2009. “Algorithmic War: Everyday Geographies of the War on Terror.” Antipode 41 (1): 49–69. https://doi.org/10.1111/j.1467-8330.2008.00655.x
  • Amoore, Louise. 2011. “Data Derivatives: On the Emergence of a Security Risk Calculus for Our Times.” Theory, Culture & Society 28 (6): 24–43. https://doi.org/10.1177/0263276411417430
  • Amoore, Louise. 2014. “Security and the Incalculable.” Security Dialogue 45 (5): 423–439. https://doi.org/10.1177/0967010614539719
  • Amoore, Louise. 2018. “Cloud Geographies: Computing, Data, Sovereignty.” Progress in Human Geography 42 (1): 4–24. https://doi.org/10.1177/0309132516662147
  • Amoore, Louise. 2019. “Doubt and the Algorithm: On the Partial Accounts of Machine Learning.” Theory, Culture & Society 36 (6): 147–169. https://doi.org/10.1177/0263276419851846
  • Arkin, Ronald C. 2009. Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.
  • Barnett, Michael N. 2011. Empire of Humanity: A History of Humanitarianism. Ithaca, NY: Cornell University Press.
  • Barocas, Solon, and Andrew D. Selbst. 2016. “Big Data’s Disparate Impact.” California Law Review 104 (3): 671–732. https://doi.org/10.15779/Z38BG31.
  • Benhabib, Seyla. 2012. “Carl Schmitt’s Critique of Kant: Sovereignty and International Law.” Political Theory 40 (6): 688–713. https://doi.org/10.1177/0090591712460651
  • Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press.
  • Birhane, Abeba. 2021. “Algorithmic Injustice: A Relational Ethics Approach.” Patterns 2 (2): 1–9. https://doi.org/10.1016/j.patter.2021.100205.
  • Bousquet, Antoine. 2009. The Scientific Way of Warfare: Order and Chaos on the Battlefields of Modernity. New York: Columbia University Press.
  • Bousquet, Antoine. 2018. The Eye of War: Military Perception from the Telescope to the Drone. Minneapolis: University of Minnesota Press.
  • Bricmont, Jean. 2006. Humanitarian Imperialism: Using Human Rights to Sell War. Translated by Diana Johnstone. New York: Monthly Review Press.
  • Chamayou, Grégoire. 2015. A Theory of the Drone. Translated by Janet Lloyd. New York: The New Press.
  • Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
  • Cunliffe, Philip. 2006. “Sovereignty and the Politics of Responsibility.” In Politics Without Sovereignty: A Critique of Contemporary International Relations, edited by Christopher Bickerton, Philip Cunliffe, and Alexander Gourevitch, 39–57. Oxford: University College London Press.
  • Cunliffe, Philip. 2020. Cosmopolitan Dystopia: International Intervention and the Failure of the West. Manchester: Manchester University Press.
  • DARPAtv. 2018. “DARPA and AI: Visionary Pioneer and Advocate.” https://www.youtube.com/watch?v=ri5gOjYgLns.
  • Der Derian, James. 2009. Virtuous War: Mapping the Military-Industrial-Media-Entertainment Network. Boulder, CO: Westview Press.
  • Edwards, Paul N. 1996. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.
  • Garcia, Megan. 2016. “Racist in the Machine: The Disturbing Implications of Algorithmic Bias.” World Policy Journal 33 (4): 111–117. https://doi.org/10.1215/07402775-3813015
  • Gunning, David. 2018. “Machine Common Sense Concept Paper.” arXiv:1810.07528.
  • Hutson, Matthew. 2022. “Can Computers Learn Common Sense?” The New Yorker, April 5. https://www.newyorker.com/tech/annals-of-technology/can-computers-learn-common-sense.
  • Jabri, Vivienne. 2007. War and the Transformation of Global Politics. Houndmills, Basingstoke, Hampshire; New York: Palgrave Macmillan.
  • Jiang, Liwei, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, et al. 2021. “Delphi: Towards Machine Ethics and Norms.” arXiv:2110.07574.
  • Jiang, Liwei, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jenny Liang, Jesse Dodge, Keisuke Sakaguchi, et al. 2022. “Can Machines Learn Morality? The Delphi Experiment.” arXiv:2110.07574.
  • Kordzadeh, Nima, and Maryam Ghasemaghaei. 2022. “Algorithmic Bias: Review, Synthesis, and Future Research Directions.” European Journal of Information Systems 31 (3): 388–409. https://doi.org/10.1080/0960085X.2021.1927212
  • McCormick, John P. 1997. Carl Schmitt’s Critique of Liberalism: Against Politics as Technology. Cambridge: Cambridge University Press.
  • Morefield, Jeanne. 2014. Empires Without Imperialism: Anglo-American Decline and the Politics of Deflection. Oxford: Oxford University Press.
  • Moses, Jeremy. 2014. Sovereignty and Responsibility: Power, Norms and Intervention in International Relations. Houndmills, Basingstoke: Palgrave Macmillan.
  • Moses, Jeremy. 2018a. “Anarchy, Pacifism and Realism: Building a Path to a Non-Violent International Law.” Critical Studies on Security 6 (2): 221–236. https://doi.org/10.1080/21624887.2017.1409559
  • Moses, Jeremy. 2018b. “Peace Without Perfection: The Intersections of Realist and Pacifist Thought.” Cooperation and Conflict 53 (1): 42–60. https://doi.org/10.1177/0010836717728539
  • Moses, Jeremy. 2020. “Why Humanitarianism Needs a Pacifist Ethos.” Global Society 34 (1): 68–83. https://doi.org/10.1080/13600826.2019.1668357
  • Moyn, Samuel. 2021. Humane: How the United States Abandoned Peace and Reinvented War. New York: Farrar, Straus and Giroux.
  • Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
  • NSCAI. 2021. “Final Report: National Security Commission on Artificial Intelligence.” https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.
  • Office of the Under Secretary of Defense for Policy. 2023. DoD Directive 3000.09, Autonomy in Weapon Systems. Washington, DC: United States Department of Defense.
  • Schmitt, Carl. 1996a. The Concept of the Political. Translated by Tracy Strong. Translated from Der Begriff des Politischen [2nd ed. 1934]. Cambridge, MA and London: MIT Press.
  • Schmitt, Carl. 1996b. “The Age of Neutralizations and Depoliticizations (1929).” In The Concept of the Political, edited by George Schwab, 80–96. Chicago: University of Chicago Press.
  • Schmitt, Carl. 2003. The Nomos of the Earth. Translated by G. L. Ulmen. New York: Telos Press.
  • Schmitt, Carl. 2011. “The Turn to the Discriminating Concept of War (1937).” In Writings on War, translated and edited by Timothy Nunan, 30–74. Cambridge: Polity Press.
  • Schwarz, Elke. 2018. Death Machines: The Ethics of Violent Technologies. Manchester: Manchester University Press.
  • Sharkey, Noel E. 2012. “The Evitability of Autonomous Robot Warfare.” International Review of the Red Cross 94 (886): 787–799. https://doi.org/10.1017/S1816383112000732
  • Slomp, Gabriella. 2009. Carl Schmitt and the Politics of Hostility, Violence and Terror. Basingstoke: Palgrave Macmillan.
  • Suchman, Lucy. 2022. “Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense.” Social Studies of Science OnlineFirst. https://doi.org/10.1177/03063127221104938.
  • Talat, Zeerak, Hagen Blix, Josef Valvoda, Maya Indira Ganesh, Ryan Cotterell, and Adina Williams. 2021. “A Word on Machine Ethics: A Response to Jiang et al.” arXiv:2111.04158.
  • Voskuijl, Mark. 2022. “Performance Analysis and Design of Loitering Munitions: A Comprehensive Technical Survey of Recent Developments.” Defence Technology 18 (3): 325–343. https://doi.org/10.1016/j.dt.2021.08.010
  • Weber, Jutta. 2016. “Keep Adding. On Kill Lists, Drone Warfare and the Politics of Databases.” Environment and Planning D: Society and Space 34 (1): 107–125. https://doi.org/10.1177/0263775815623537
  • Zehfuss, Maja. 2018. War and the Politics of Ethics. Oxford: Oxford University Press.