
Trusting technology to wage war: the politics of trust and ethics in the development of robotics, autonomous systems, and artificial intelligence

Received 13 Jun 2023, Accepted 22 May 2024, Published online: 26 Jun 2024

ABSTRACT

Robotics, autonomous systems, and artificial intelligence (RAS-AI) are at the technological edge of militaries trying to achieve ‘ethical’ war. RAS-AI have been cast as essential technologies for defence forces to develop in order to sustain military advantage. Central to the success of this endeavour is trust: the technologies must be trusted for defence personnel to be willing to use them, for defence to be trusted by the public, and for allies and partners to have confidence in each other’s developments. Yet, trust is not merely a technocratic term. Its dominant role in the successful adoption of RAS-AI gives it power. This paper argues that the language of trust is being used to facilitate the development and adoption of military RAS-AI, often in concert with the language of ethics. Building on Maja Zehfuss’ concept of the politics of ethics, this paper contends that when it comes to RAS-AI there is also a politics of trust. Analysing British, American, and Australian military documents demonstrates that this politics manifests in side-lining political questions about how RAS-AI will be used – against whom and for what purposes – through focusing instead on the need to develop ethical and trustworthy RAS-AI to wage virtuous war.

Military imaginaries have been energized by the potential of robotics, autonomous systems, and artificial intelligence (RAS-AI). Technology development and adoption always raise challenging questions, controversies, and debates. In the case of RAS-AI, particularly when it comes to lethal autonomous weapons systems (LAWS), considerable attention has been focused on the nature of the human-machine relationship, and the ethics of increasingly autonomous weapons systems. One essential ingredient in these considerations has been trust. For the US Department of Defense (DoD), for instance, ‘the issue of trust is core to DoD’s success in broader adoption of autonomy’ (Defense Science Board Citation2016, 1). From a defence perspective, trust is required for military personnel to be willing to use the systems, for the public to trust defence to use LAWS and thereby enable defence to retain its public licence, and for allies and partners to trust them to use such technologies responsibly and in line with their expectations. It is not only defence forces who are fixated on trust, however, but also civilian industry, defence industry, and academic research. Trust is seen as essential to technology adoption across both military and civilian domains, and there is therefore substantial focus on how to design technologies with trust in mind.

Trust, however, is not merely a technocratic term. Trust is seen as essential to the use of RAS-AI for military purposes, and the use of RAS-AI is seen as essential to military advantage, to saving money, and to saving lives. Therefore, trust serves a vital purpose in achieving the adoption of RAS-AI in defence contexts. There is power imbued in use of the word trust, which is being used to facilitate the development, adoption, and use of RAS-AI. Trust is political, and it is contextual. In the context of RAS-AI, it is being used by particular people and in particular ways to facilitate the technology development perceived to be necessary to sustain military advantage. The aim of this paper is to trace how the language of trust is being used in relation to RAS-AI in the defence documents of the US, UK, and Australia in order to better understand the power involved in the invocation of trust.

One dominant way in which the word trust is being used is in co-occurrence with a consideration of ethics. Ethics has been a long-standing consideration for the development, adoption, and deployment of new weapons technologies. Maja Zehfuss (Citation2018, 9) has argued that ethics, rather than reducing violence, have instead been used ‘to legitimize war and even enhance its violence’. These arguments find new resonance in the context of Israel’s use of AI systems in Gaza, including Habsora (The Gospel), Lavender, and Where’s Daddy (Abraham Citation2023, Citation2024). The ICJ has ruled it ‘plausible’ that Israel’s actions constitute violations of the Genocide Convention (International Court of Justice, Citation2024), a significant contrast to Israel’s claims to be the world’s ‘most moral army’ (see Loewenstein Citation2023). Zehfuss (Citation2018, 9) argues that ethics are able to play a political role precisely because they have been ‘construed as distinct from politics’. She terms this phenomenon ‘the politics of ethics’ (Zehfuss Citation2018, 9). Relatedly, Michael Richardson (Citation2022, 123) contends that ethics are being used as ‘commodities’ in an effort to ‘make war virtuous’. Here I argue that trust plays a similar role, both on its own terms, and when it appears in conjunction with ethics. There is not only a politics of ethics at play but also a politics of trust. The language of trust, and the policy associated with it, is being used to wave away considerations of how, politically speaking, RAS-AI will be used. That is: against whom, to achieve what goals, where, when, and how? Instead, there is simply a focus on the need to develop these technologies in a trusted and ethical way because they are essential requirements in achieving military advantage. Packed in here lie several assumptions, including: that ‘we’ will develop technology in a trusted and ethical way and the adversary will not; that ‘we’ will wage virtuous and ethical war with those technologies; and that ‘we’ are on a teleological path to inevitable trustworthy and ethical RAS-AI.

Analysing the language of trust and ethics is one method to unpick these assumptions, understand their significance, and shed light on the politics of the development and deployment of robotics, autonomous systems, and artificial intelligence for military purposes. This article will proceed with a review of the literature on the politics of ethics and the nature of trust, before moving on to discuss the presence of ethics and trust in civilian and military RAS-AI policies, and finally analysing the politics of trust and ethics in British, American, and Australian defence documents.

The politics of trust and ethics

The role of both trust and ethics in relation to war has been explored from a variety of angles. When it comes to war and technology, ethics has taken centre stage. This section will explore how existing work on the politics of ethics in relation to war and technology can help in understanding the politics of trust. It will then look at existing work on trust and ethics in relation to civilian technology, and how this could be transferred to military technologies. Finally, it will turn to what we already know about the politics of trust in general, before bringing these three threads together to make sense of the politics of trust, technology, and the waging of war.

Given trust and ethics so frequently co-occur in the RAS-AI context, a natural place to turn for the tools to answer these questions is existing research on the role of ethics in warfare and military imaginaries. Existing work has analysed how ideas and discourses of ethics have been mobilized to support certain kinds of wars. Helen Dexter, for instance, explores the re-emergence of ‘Just War’ ideas both in the use of military force in humanitarian interventions and in the construction of the War on Terror as a ‘Good War’ (Dexter Citation2007, Citation2008). The relationship between technology and the mobilization of ethics discourse in relation to war is of particular relevance to understanding the emergence of RAS-AI. James Der Derian (Citation2009, xix-xx) points out that at the same time as militarized humanitarian interventions escalated, the US was undertaking its ‘revolution in military affairs’ (RMA). This process culminated, he argues, with the invasion of Iraq in the form of ‘virtuous war’ (xx). Virtuous war centres on ‘the technical capability and ethical imperative to threaten and, if necessary, actualize violence from a distance’ while minimizing casualties (xxxi). The concept of virtuous war considers technology – or virtual war – to be a key component in what allows war to be virtuous. It facilitates the sanitization of war, the physical removal from the effects of war, and the merging of representations and realities of war (xxxiv-xxxvi), allowing for a projection of ‘technological and ethical superiority’ drawing on ideas of just war (xx). Ethics, war, and technology intermingle to shape what wars can be conducted – including how, by whom, and against whom.

It is in this environment of ‘virtuous war’ that technology has been cast as making war more ethical. Zehfuss’ work on ‘the politics of ethics’ critiques such claims that war has been made more ethical through technological advances which boast of improved targeting precision and reduced civilian casualties. Rather than ‘constraining the violence, this commitment to and invocation of ethics has served to legitimize war and even to enhance its violence’ (Zehfuss Citation2018, 9). Ethics have been seen as something distinct from politics and thus able to ‘tame politics (and by implication war)’ (14). Claudia Aradau and Tobias Blanke make a similar point about the use of algorithms broadly across society, arguing that ethics have been used ‘as a political technique deployed to tame power relations’ (Citation2022, 141). They draw on Zehfuss to argue that where she claims ‘war is made through ethics, we can say that algorithms and AI are now made through the political technique of ethics’ (144–45). John R. Emery (Citation2020, 180) also takes up strands of Zehfuss’ argument in his work on the algorithmic targeting technologies which are essential to everything from precision-guided munitions to LAWS. He argues that ‘technology does not inherently make war more ethical’, and rather ‘these algorithms function to discursively replace due care with a techno-ethics of war that purports a fantasy of control over the inherent uncertainties of conflict’ (180). Emery centres his argument on Antoine Bousquet’s concept of ‘chaoplexic warfare’ (Bousquet Citation2022), which encapsulates the ways in which militaries view technology as a solution to uncertainty, and James Der Derian’s construct of ‘virtuous war’. Also drawing on Der Derian is Michael Richardson, who analyses the ways in which defence researchers and defence industry are able to commodify and trade in ethics. The approach these actors take to ethics allows them to believe that they ‘can not only be virtuous, but also can make war virtuous too’ (Richardson Citation2022, 123). When it comes to RAS-AI, the dual-use nature of the technology brings the views of a broader range of industries and researchers into the picture, making it important to look at the circulation of ideas and discourse between these spaces and the military context.

In a military context, ethics take on a particular set of meanings, drawing on long-standing traditions such as international humanitarian law, the laws of armed conflict, and ideas of what it means to be a good warrior (Richardson Citation2022, 124). The military understanding of ethics poses challenges for applying ethics to AI:

Militaries … tend to see ethics as instrumental and principally related to conduct on and off the battlefield by individual soldiers, rather than enmeshed with larger questions of justice or societal obligation. Ethics are typically posed as both values to hold and problems to solve. When this approach encounters military AI, the limits of ‘ethics’ as a framework for reducing harm become clear: ethics are already subordinated to martial violence in that they are always concerned with enabling its infliction.

(Richardson Citation2022, 124)

Trust is also seen as both a value to hold and a problem to solve, as well as, upon occasion, a solution to the problems posed by ethics – particularly when it comes to military applications of RAS-AI. Derek Gregory makes a related argument about the impact of legality on ethics which is worth considering given that discussions of trust and ethics in military documents often appear in company with a consideration of how RAS-AI can be used in accordance with international law. Gregory argues that ‘the invocation of legality works to marginalise ethics and politics by making available a seemingly neutral, objective language’, such that debate centres only on technical matters (Citation2011, 247). When it comes to algorithmic warfare, this manifests in the various ways in which militaries aim to ‘render all ethico-political dilemmas of killing into quantifiable, predictable, and solvable risk-assessment scores’ (Emery Citation2020, 181). Like Zehfuss, Gregory contends that this in fact encourages war, although he does not consider the idea of ethical war in itself to be a problem in the way that Zehfuss does (Gregory Citation2011, 247). However, the relationship between ethics and militarization cannot be properly understood without taking account of ‘the politics of ethics’. The very idea that killing in warfare can be made ethical through technology is a driving force behind technology development and influences the military imaginaries which make sense of why particular technologies should be developed, how they should be regulated, and how they should be used. It means that ‘virtuous militarism constructs technical precision as inherently ethical warfare’ (Emery Citation2020, 183), and that ‘faith in the ethical conduct of war has increasingly become coterminous with faith in the weapons’ (Beier Citation2020, 10). It shifts ethics from being something which is contested to being something which is entirely a technical matter (Schwarz Citation2016, 65–66). When it comes to RAS-AI, the politics of how, why, and against whom these technologies will be used is obscured in favour of a focus on making trusted, ethical weapons which will be used to wage virtuous war.

It is not ethics that is the primary object of investigation here, however, but rather trust and the way in which trust and ethics are being used in relation to one another in military discourse on RAS-AI. The evidently interconnected relationship between trust and ethics has been examined at length by anthropologist Sarah Pink in the context of civilian applications of AI, in ways which are helpful for analysing military applications of RAS-AI. Her focus is on trust and ethics in the context of technosolutionism – meaning the presumption that technology will be a simple solution to a problem. Here, thinking about trust and ethics is inherently future-looking. It underpins ‘an anticipated future’ in which AI-related technologies have been successfully adopted (44). Trust is seen as a successful outcome of a technology design which ‘successfully invests human ethics in automated technologies’ (52). Pink (Citation2022a) labels this approach of trying to ‘capture’ ethics to make trustworthy machines ‘extractivist ethics’, given that ethics are clearly seen as a resource:

As a resource that can be extracted, or as a bait to capture trust, ethics can be invested in trustworthy intelligent and automated machines, thus serving as the catalyst in causal chains of human trust, acceptance, and adoption of AI. Here, extracted ethics could participate in creating an anticipatory infrastructure through which ethical AI and ethical machines are seen as a technological solution to situations where public acceptance of automated technologies is perceived as a challenge, and thus to attract investment in the technologies that are envisioned as solutions to societal problems.

(43)

Focusing on automated decision-making and AI in transport, Pink outlines the intertwined relationship between trust and ethics, particularly in industry, policy-making, and engineering and computer sciences (46). From the perspectives of this set of actors, the technology will be adopted if it is trusted, and people will trust it if it is ethical (46). Trust, then, plays a fundamental and ‘unique’ role. It is ‘situated as something that needs to be generated so that people will (correctly) use automated technologies primed to solve societal problems’ (50). The salience of trust also then fuels the economy of research and innovation feeding into these technologies. It is in this context that the political economy of research becomes a key medium through which the word ‘trust’ is used to exert power, to advance interests, and to shape narratives, although extending this analysis further is beyond the scope of this article. The central takeaways from Pink’s work for the purposes of this paper lie in the teleological imaginary associated with RAS-AI, or the role trust plays in the ‘anticipated future’, and the fact that trust is seen as fundamental to the successful development and adoption of ‘ethical’ technologies. Where in Pink’s example these technologies are civilian technologies assumed to solve societal problems, here the focus is on military technologies which are assumed to solve military problems.

The concept of trust is subject to considerable study from a wide array of disciplines, ranging from computer science and engineering to anthropology, business studies, and international relations. Across these disciplines, there is no settled definition of trust, nor any settled methodological or conceptual approach to it. Consequently, much of this research focuses on defining, conceptualizing, and operationalizing trust to make sense of a particular research problem. In the study of international relations, research has focused predominantly on relationships between states, including how trust facilitates alliance relationships, how trust can be built between enemies in the context of the security dilemma, and relatedly the role of trust in arms control (Keating and Ruzicka Citation2015; Booth and Wheeler Citation2008; Klimke, Kreis, and Ostermann Citation2016; Troath Citation2022; Wheeler Citation2018). In these approaches, trust is seen as something with ‘a certain level of stability of meaning across time and space’ and something which relies on the acceptance that ‘one can uncover a true or general understanding of a word’ (Considine Citation2015, 110). It is also presumed to be ‘important and … generally quite a good thing’ (ibid). An alternative approach is to think of trust as contextual. Looking at trust contextually when it comes to military RAS-AI contests both these assumptions: trust has different meanings in different contexts, and in this context whether it is an inherent good will depend upon one’s political opinions about war, militarism, and technology.

Sarah Pink’s work takes a contextual approach to trust in the case of civilian applications of AI, deploying ‘situated contextual understandings of trust’ (Citation2022b, 47). Turning to international relations, Laura Considine (Citation2014) argues that trust only ever has meaning in context and, because of this, trying to define trust as something which exists ‘out there’ absent of context cannot tell us what trust means to particular actors in a particular context. In one example, she focuses on the matter of trust and technology in the context of Reagan and Nixon’s approaches to the verification of nuclear arms control agreements during the Cold War. Here, the salient context is the Cold War and nuclear vulnerability, which has specific and contextual implications for trust:

When states have the power to destroy each other on command and their populations have given to their leaders the responsibility to make decisions about using that power, often with limited knowledge themselves due to the apparent imperatives for secrecy in issues of national security, the political use of the word ‘trust’ has particular significance.

(167)

Another key aspect of the context here was American technological development, which allowed the United States to verify that the Soviet Union was complying with arms control agreements. Reagan’s most famous trust-related saying, ‘trust but verify’, stems from precisely this context. In explaining the phrase doveryai no proveryai (trust but verify) he stated: ‘We can either bet on American technology to keep us safe or on Soviet promises, and each has its own track record. And I’ll bet on American technology any day’ (166). Thus, trust is not only contextual but also political. Considine (179–80) points out that the power of the individual invoking trust and the setting in which it is invoked – for instance the President, in the White House – impact how trust is used and how it is received by the audience. Further, trust in nuclear arms control was used as ‘a political language trope’ and ‘part of a performance of diplomacy’, used both to advance the political goals of a particular actor and to establish their validity as a speaker in this context (200). Trust can also be used in connection with political ideologies, and it can be used as ‘rhetorical coercion’ to convince other actors to do something (such as arms control) (202–3). In short, trust is political, and it is contextual. Where Considine outlines how this operates in a Cold War arms control context, trust also plays particular political roles in the context of military RAS-AI. Trust is notably salient in the discourse surrounding military RAS-AI, and knowing that trust is political makes it necessary to ask: how is it being used, by whom, and to what ends?

Bringing the threads of these literatures together allows for an analysis of the politics of trust in the context of military applications of RAS-AI technologies. What these literatures illuminate in combination is: that the language of ethics has been used to facilitate rather than constrain violence, especially in the context of new and emerging technologies; that ethics and trust have an interconnected relationship in the context of civilian AI technologies; and that, in both international relations and emerging technologies, trust is political and contextual. Below, therefore, I explore the context in which the language of trust is being used, and how it is being used, when it comes to military applications of RAS-AI.

Ethics and trust in RAS-AI policy and debate

Before diving into military discourse, it is important to contextualize military thinking within the wider debate and policy on RAS-AI in both civilian and defence settings. Ideas and discourse get translated between civilian and defence domains, shaping imaginaries and influencing policies. Ethics and trust sit at the heart of discourse and policy relating to RAS-AI in both civilian and military documents. Policy and guidance documents abound, as various entities seek to outline their ethical frameworks for RAS-AI technologies. AI bears the brunt of this, but the relatedness of the technologies means that ideas about AI shape robotics and autonomous systems development, discourse, and use as well. It is therefore helpful to cluster them together. Such is the prominence of ethics in particular, that Aradau and Blanke describe ethics as ‘a public vocabulary’ (Citation2022, 142). There have been numerous high-profile publications on ethical AI or similar themes which have had a significant impact on shaping policy, both civilian and defence (Floridi and Cowls Citation2019). The most salient example is the EU’s Ethics Guidelines for Trustworthy AI, a result of the work of the High-Level Expert Group on Artificial Intelligence (HLEG AI) (High-Level Expert Group on Artificial Intelligence, Citation2018). Here, the centrality not only of ethics but also of trust is evident. According to the EU’s guidelines, trustworthy AI comprises three components: it must be lawful, ethical, and robust (2). Thus, trustworthy AI needs to be ethical, and ethical AI creates trustworthy AI – the two are mutually dependent. This interdependence is a common refrain in the civilian domain which, as will be seen below, also spills over into defence approaches to these technologies. There has been considerable criticism of the role of ethics in AI and related technologies, with Big Tech corporations in particular accused of ethics-washing (see for instance Phan et al. Citation2022). Aradau and Blanke argue that ethics is used as a tool to govern algorithms in a manner lacking the force of law, allowing for actors to embark upon a process of ‘consensually taming power’ (Citation2022, 143–44). As can be seen with the HLEG AI report, it is not only ethics but also trust through which actors attempt to bypass the difficult politics of hard regulation and paint themselves as responsible, ethical, and trustworthy purveyors of RAS-AI.

The ethical quandaries of RAS-AI and the implications of the relationship between trust and ethics are sharpened when it comes to defence applications. The focus of debate at the international level, at the Convention on Certain Conventional Weapons (CCW) where discussions on potential regulation or prohibition of certain types of autonomous weapons are taking place, is on the extent of human control over autonomous systems. Activists and some states argue for the concept of ‘meaningful human control’, contending that delegating the decision to take a life to a machine is morally unconscionable (see for example Article 36 Citation2016; Campaign to Stop Killer Robots, Citation2021; Roff Citation2016; Schwarz Citation2021). Many defence forces, such as those of the United States and Australia, argue that their existing processes for weapons reviews, compliant with international legal requirements, are sufficient to meet the condition of meaningful human control (Australian Government, Citation2019). Some of those in favour of development also argue that autonomous weapons could be ‘ethical weapons’, saving civilian lives through increased precision (Scholz et al. Citation2020). From such a perspective, there is hope of programming the laws of war into autonomous weapons systems, and having them followed with more accuracy and reliability than humans can manage, either because of the foibles of human nature and emotions, or because of faster and more accurate processing of information (Arkin Citation2009). Democracies tend to view autonomous systems as ‘the silver bullet of democratic warfare’, driven to develop RAS-AI in the hope that they will lower the costs and casualties associated with waging war (Sauer and Schörnig Citation2012, 370–71).

Ideas about ethics dominate all sides of these debates. What is also present, albeit often less overtly, is trust. Submissions to the CCW, for instance, discuss the importance of transparency in trusting the technology itself as well as in trusting each other’s developments in these areas, the difficulties of trust in the human-machine relationship, whether autonomous systems can be trusted to perform as intended, and the need for appropriate levels of trust in LAWS (rather than over- or under-trust).Footnote1 Trust is also prominent throughout defence concept documents, where defence forces seek to make sense of what autonomous systems will mean for their personnel and how they operate. There is a strong focus on the difficulties of building personnel trust in autonomous systems (see for instance Galliott and Wyatt Citation2021). The issues of trust and ethics are explicitly linked through arguments that trusted RAS-AI must be ethical, and ethical RAS-AI must be trustworthy. This connection means that trust plays a significant role in the international debate on regulation of LAWS, as well as in doctrinal development and professional military education in the defence context. It is for these reasons that trust must be explored, alongside ethics, as a term which is used to shape discourses and exert power when it comes to robotics, autonomous systems, and artificial intelligence development for defence purposes.

Trust, ethics, and RAS-AI in military thinking

When it comes to defence applications of RAS-AI, the politics of ethics and the nature of the human-machine relationship become a life-or-death matter. Ethics and trust are used instrumentally by defence forces in their various quests for military advantage. This section will focus on how the United States, United Kingdom, and Australia are using the language of trust and ethics in their defence doctrine, their more public-facing documents, and related defence science research.

Documents have been selected by exploring all publicly available publications relating to robotics, autonomous systems, or artificial intelligence, and then focusing on those which use trust language – particularly in conjunction with ethics. The documents range from doctrine to more public-facing publications, and are supplemented with high-profile pieces of defence science research such as the US’ Summer Study on Autonomy and Australia’s A Method for Ethical AI in Defence. They comprise documents published by the services, departments of defence, or defence science entities. Focusing on doctrine alone would have been insufficient given the limited amount of RAS-AI-specific doctrine available. Broadening the selection to include other documents captures a wider range of discourse and discussion. These kinds of documents are intended for a wider range of audiences, including governments and officials in charge of budgeting and policymaking, defence and civilian industry, and foreign governments. Given that the technologies and norms relating to RAS-AI are still emerging, this is the best way to capture the range of debates and discussions happening in relation to trust and ethics. In the case of RAS-AI, this strategy is particularly helpful given the dual-use nature of the technologies in question. Discourse, norms, and definitions are transferring between civilian and defence realms. For the same reasons, I have also supplemented these with comments made by relevant individuals in interviews or speeches. The table below lists the documents examined, and the number of mentions of trust and ethics within those documents. Analysis below will focus on those documents which use trust and ethics language more prominently, alongside the supplementary comments (Table 1).

Table 1. Ethics and trust in US, UK, and Australian Defence Documents.
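As a methodological aside, mention counts of the kind reported in Table 1 can in principle be reproduced with simple keyword tallies over plain-text copies of the documents. The sketch below is illustrative only: it is not the tooling used for this article, and the term patterns and folder name are assumptions rather than the actual search terms behind Table 1.

```python
import re
from pathlib import Path

# Illustrative term patterns: the article does not specify the exact search
# terms behind Table 1, so these are assumptions for the sketch.
TRUST_PATTERN = re.compile(r"\btrust\w*", re.IGNORECASE)   # trust, trusted, trustworthy, ...
ETHICS_PATTERN = re.compile(r"\bethic\w*", re.IGNORECASE)  # ethics, ethical, ethically, ...

def count_mentions(doc_dir: str) -> dict[str, tuple[int, int]]:
    """Return {document name: (trust mentions, ethics mentions)} for each .txt file."""
    counts: dict[str, tuple[int, int]] = {}
    for path in sorted(Path(doc_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        counts[path.stem] = (
            len(TRUST_PATTERN.findall(text)),
            len(ETHICS_PATTERN.findall(text)),
        )
    return counts

if __name__ == "__main__":
    # "defence_documents" is a hypothetical folder of plain-text policy documents.
    for name, (trust_n, ethics_n) in count_mentions("defence_documents").items():
        print(f"{name}: trust={trust_n}, ethics={ethics_n}")
```

Stem-based patterns such as these also count variants like ‘trustworthy’ or ‘ethically’; whichever variants were counted for Table 1, the point of the sketch is simply that such tallies are straightforward keyword frequencies rather than a deeper measure of how the terms are used.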

United States

The United States has a strong focus on the development of RAS-AI for defence purposes, particularly in the context of escalating strategic competition with China. When it comes to trust alone, there is a substantial focus on the need for appropriate levels of trust relative to system capabilities (United States Air Force, Office of the Chief Scientist, Citation2015, 9–10). Trust and ethics are discussed in conjunction across a number of defence documents. The US Navy, for instance, in its Science and Technology Strategy for Intelligent Autonomous Systems, has a section in its table of contents titled ‘how we will succeed’, under which sits a section on ‘ethics and trust’ (Department of the Navy, Citation2021). In the section on ethics and trust, the strategy seeks to bust the myth that ‘ethics inhibits operational effectiveness’, arguing instead that ‘ethics serves as a critical enabler of U.S. and Allied competitive advantage’ (19). The central ethical goal is ‘to develop and field IAS-based capabilities that preserve and maximize warfighting effectiveness while conforming to law, policy, and ethical principles’ (19). To achieve this goal, systems will be designed and employed with the following objectives in mind: ‘safety, security, reliability, predictability, trustworthiness, and ethical boundaries’ (19). Within the naval context, there is a focus on two primary facets of trust: individual and institutional. Individual trust concerns appropriate levels of human trust in, and subsequent reliance on, a machine, while institutional trust is about how the institution of the navy should ‘assess trust’ as it relates to both the machine itself and the human-machine team (19).

At a broader institutional level, the US DoD adopted five ethical principles for AI in 2020: responsible, equitable, traceable, reliable, and governable (Deputy Secretary of Defense, Citation2021). When announced, the principles were described as being in close alignment with ‘efforts to advance trustworthy AI technologies’ (U.S. Department of Defense, Citation2020). In her foreword to the 2022 Responsible Artificial Intelligence Strategy and Implementation Pathway, Deputy Secretary of Defense Kathleen H. Hicks emphasizes the centrality of trust and ethics to successfully harnessing AI technologies to maintain US military advantage:

To ensure that our citizens, warfighters, and leaders can trust the outputs of DoD AI capabilities, DoD must demonstrate that our military’s steadfast commitment to lawful and ethical behavior apply when designing, developing, testing, procuring, deploying, and using AI … Integrating ethics from the start also empowers the DoD to maintain the trust of our allies and coalition partners as we work alongside them to promote democratic norms and international standards.

(U.S. Department of Defense, Citation2021)

It is not enough for the US to maintain technical superiority; it must also maintain ‘normative leadership’ (Defense Innovation Board, Citation2020, 7). Ethics and trust can thus be leveraged for reputational purposes and in the application of power to influence international norms and standards.

The strategy itself focuses on the concept of responsible artificial intelligence, or RAI. RAI ‘is a journey to trust’ and ‘an approach to design, development, deployment, and use that ensures the safety of our systems and their ethical employment’ (U.S. Department of Defense, Citation2021, 6). Responsible AI must be ethical, and ethical RAI will enable ‘appropriate levels of trust’ (6). Trust, then, ‘enables rapid adoption and operationalization of new technology, strengthening the Department’s competitive edge’ (6). Secretary of Defense Lloyd J. Austin III describes RAI in the following terms:

But ultimately, AI systems only work when they are based in trust. We have a principled approach to AI that anchors everything that this Department does. We call this Responsible AI, and it’s the only kind of AI that we do. Responsible AI is the place where cutting-edge tech meets timeless values.

(Austin Citation2021)

For the US, trust and ethics are not only functional, nor only about reputation and power over global standards and norms. Identity is also a prominent theme in how ethics, AI, and trust are discussed in conjunction with one another. The ethical, trusted, and democratic approach of the US is often contrasted with the less ethical, less trusted, and authoritarian approach of adversaries. Adversaries ‘are almost certainly willing to tolerate less trust in new autonomous systems than we are’, while the US considers itself much more concerned with the legality and ethics of autonomous systems (Naval Research Advisory Committee, Citation2017, 26). International norms and standards are not only a domain for the US to exert its preferences and will but a domain in which the ‘contest of authoritarian versus democratic norms’ will be fought (Defense Innovation Board, Citation2020, 7). The US sees itself as a leader able to ‘demonstrate that a principled approach to AI, rooted in democratic values, represents a path to peace, security, and societal progress’ (U.S. Department of Defense, Citation2021, 8). Trust and ethics in RAS-AI are discursively constructed as essential for military advantage, for the exercise of power over global standards and norms, and in the normative construction of the democratic ‘us’ versus the authoritarian ‘them’.

United Kingdom

The UK also emphasizes the role of trust when it comes to RAS-AI. The Ministry of Defence (MoD) Defence Artificial Intelligence Strategy posits that ‘trust is a fundamental, cross-cutting enabler of any large-scale use of AI’ (UK, Citation2022, 26). Here again, ethics also comes to the fore as a requirement for trust, with ‘well-regarded ethics’ listed alongside assurance, verification, validation, and governance frameworks as necessities (26). Trust and ethics are clearly seen to be related problems for military use of AI:

Our immediate challenge, working closely with allies and partners, is to ensure ethical issues, related questions of trust, and the associated apparatus of policies, process and doctrine do not impede our legitimate responsible and ethical development of AI, as well as our efforts at collaboration and interoperability.

(52)

Similar themes are emphasized in the UK’s Ambitious, Safe, and Responsible Defence AI report, which states that ‘we must be ethical – and be seen to be ethical – in our AI development’, in order to retain the trust of UK citizens, stakeholders, and allies and partners (Ministry of Defence (UK) 2022a, 7). Another facet closely linked to trust and ethics is legality. The UK vision outlines that defence AI will be trusted ‘by the public, our partners and our people, for the safety and reliability of our AI systems, and our clear commitment to lawful and ethical AI use in line with our core values’ (5). Trust, ethics, and law are intertwined in the imaginary of how such systems can be used, and how they should be designed. In terms of implementing their vision, the UK focuses on building ‘justified trust’ through demonstrating their trustworthiness to the public and partners. In the civilian space, justified trust (a demonstration of trustworthiness) will be built through an ‘AI assurance ecosystem’ which verifies whether AI systems are operating as intended (Centre for Data Ethics and Innovation, Citation2021, 4). Assurance builds trust and confidence through evaluating AI systems and providing reliable information and evidence on their degree of trustworthiness (4). This same approach is taken in defence, with the intention of bringing the governance of defence AI closely in line with national AI frameworks (UK, Citation2022, 26).

The defence approach to RAS-AI can be seen in UK documents on human-machine teaming, robotics, and autonomous systems. First, in the Human-Machine Teaming joint concept note, one of the eight factors considered ‘critical to guide strategy, policy and force development’ is ‘trust and assurance for artificial intelligence’ (UK, Citation2018, 53–54). Trust is discussed as something which is needed for more capable AI systems, and as a constraint on what kinds of tasks humans will allow machines to carry out (54). For more capable systems in particular, trust is seen as essential, as are assurance and certification methods, and consideration of the legal and ethical implications of using these systems (54). They must ‘not only be trusted and safe, but also perform in such a way that they are seen to be safe and reliable by users and observers’ (49). The British Army focuses on the need for ‘ethical and trustworthy AI’ (British Army, Citation2020, 7). To ensure AI is ethical and trustworthy, military commanders are to be ‘wholly accountable’ for RAS, and ‘AI assurance must be explainable, demonstrably trustworthy and secure’ (7). Trust is ‘central’ to the military use of RAS in three key ways: personnel trust in the machines, the trust of military regulators in soldiers, and societal trust in military regulators (7). As with the US, there is a reputational aspect to the British approach to RAS. It is not sufficient for RAS to be regarded as trusted and ethical by defence themselves; they must also be perceived to be ethical by partners, allies, and the public. That is, it is not enough to be ethical and trusted; one must also be seen to be ethical and trusted. The UK similarly describes its role in the creation of international norms, stating that its ethical principles for AI in Defence will drive its approach to international norms ‘to shape the global development of AI in the direction of freedom, openness and democracy’ (Ministry of Defence (UK) 2022a, 7). British AI will be developed for ‘beneficial’ purposes, including ‘upholding human rights and democratic values’ (9). The UK aims to ‘be the world’s most effective, efficient, trusted and influential Defence organisation’ for its size as regards AI (UK, Citation2022, 2). Trusted and ethical development of RAS-AI is seen to be necessary for successful military use of such systems, as well as for the British capacity to be interoperable with allies and partners, construct a world-leading reputation in defence AI, and facilitate global use of AI for the greater good.

Australia

Australia has a particularly strong focus on trust when it comes to robotics and autonomous systems. Trusted autonomous systems have been identified as a priority area, supported by the establishment of the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), a government-funded research centre bringing together defence, academia, and industry to develop relevant technologies, ethical guidelines, and policy (TASDCRC, Citation2023). One document they have produced, in collaboration with the Defence Science and Technology Group (DSTG), is A Method for Ethical AI in Defence (Devitt et al. Citation2020). Ethical AI, the document proposes, has five facets: responsibility, governance, trust, law, and traceability (ii). When it comes to using AI systems for defence purposes, they ‘need to be trusted by users and operators, by commanders and support staff and by the military, government and civilian population of a nation’ (19). Drawing on the EU HLEG report, this report emphasizes the ‘essential’ nature of trust, and the need for trustworthy AI to be lawful, ethical, and robust (19–20). The report then develops its own two-component model of trust for human-AI systems, building on models of trust between humans which argue trust comprises aspects of both competence (capabilities, reliability) and integrity (motives, character) (20). For a human-AI system, competence includes factors such as operator training, test and evaluation process, and data training, while integrity involves a ‘system that intends to be ethical, is transparent about their actions and embodies a culture that, regardless of competence, inclines them to take responsibility for their actions, be thoughtful and empathetic to others and other “positive” traits’ (21). This model of trust therefore, the report argues, ‘combines ability and ethics’ (21). Trust and ethics are, again, viewed as inherently intertwined. Trustworthy AI must be ethical, and ethical AI must be trustworthy. Other factors discussed under the auspices of trust include: sovereign capability in AI and related technologies, ensuring the safety of AI systems, better data transparency, mitigating against ‘unjust biases’, the importance of appropriate test and evaluation processes, cyber mitigation capabilities to protect AI systems from cyber attacks, and data privacy for defence personnel (21–27). The report then produces three practical tools intended for defence and defence industry to ‘ethically de-risk’ their AI-related technologies: the Ethical AI for Defence Checklist, an Ethical AI Risk Matrix, and a Legal and Ethical Assurance Program Plan (32–34). Although these methods are focused on ethical AI, they are also intended to be useful for ‘de-risking the ethics of autonomous systems, semi-autonomous, manned-unmanned teaming and human-autonomy teaming’ (34).

When it comes to autonomous systems, defence documents from across the services also demonstrate the intermingling of trust and ethics. The Australian Defence Force (ADF), in their Concept for Robotic and Autonomous Systems, list trust and ethics as two of the key challenges associated with using RAS, alongside legal questions and difficulties related to algorithms and data (Australian Defence Force Citation2020, 4). RAS create distinct ‘challenges relating to trust and the ethics of using machines for Defence missions’ (4). Trust is also essential, as ‘developing trust in RAS will be necessary to exploit the advantages that such systems can provide’ (32). As with the UK and US, they note that adversaries will approach trust in different ways (28). In particular, they emphasize that ‘authoritarian regimes will approach trust in different ways to democratic societies’, while adding that terrorist organizations will have ‘a lower ethical threshold’ for collateral damage (28). Australia, meanwhile, with its focus on ethics, will go about ‘mitigating technological and ethical challenges’ through the centrality of ‘human-machine teams’ (28). The construction of an Australian identity as a good, ethical actor can be seen in an interview with Kate Devitt. Devitt is an expert in ethical RAS-AI who was formerly a Chief Scientist of the TASDCRC, and who worked on the development of an ‘ethical’ AI ‘targeting evaluation frameset’ for defence named Athena AI (Roberson et al. Citation2022). In discussing the importance of ethical military AI and the development of Athena AI, Devitt emphasizes that ‘no one wants to be the baddies’ and argues Australia should be ‘a strong, powerful, terrifying goodie’ (Tongol, Citation2023).

The Royal Australian Navy (RAN) outline their outlook on RAS-AI in their RAS-AI Strategy 2040 concept. The RAN declare that to achieve the RAS-AI outcomes they desire, ‘Navy will develop trusted autonomous capabilities which verifiably and reliably act in accordance with our legal and ethical obligations’ (Royal Australian Navy, Citation2019, 24). Two things stand out here. First, it is taken as given that ‘Navy will develop trusted autonomous capabilities’ (24, emphasis added) and, by extension, that the autonomous capabilities deployed by Navy will be trustworthy. Second is, once more, the requirement for trusted technology to meet ethical obligations. In combination, there is an underlying assumption that the autonomous capabilities will be not only trusted but also ethical. The RAN expects that trusted autonomous systems will ‘increase accuracy, maintain compliance with Navy’s legal and policy obligation as well as regulatory standards, and … minimize incidental harm to civilians’ (25). The focus on minimizing civilian harm through increased precision touches on a common theme in LAWS discourse: that autonomous weapons can be more ethical weapons than existing weapons. Despite the confident language, the document does point to the fact that RAS-AI pose specific challenges for trust. To meet these challenges, the RAN argues that Test and Evaluation must be adapted accordingly and be sovereign, and that explicability and assurance must be ensured (24). Navy emphasize the importance of sovereign control, and the need for a ‘system of control’ which ‘must reflect Australia’s legal, ethical, and cultural values and embody Navy’s operational approach’ (15). There must be ‘human command and trusted machine control’ over RAS, underpinned by data and algorithms which ‘must be trusted’ (15).

In terms of trust, the Australian Army focuses more heavily on the human-machine relationship and questions of personnel’s trust in the system. In this case, ethics are not overtly associated with trust. The two versions of the Australian Army’s Robotics and Autonomous Systems Strategy both emphasize the need for users to trust that RAS will behave and function as intended, and be capable of operating in contested environments, including those with limited network availability (Australian Army Citation2018, 15; Citation2022, 19). Trust is framed as a prerequisite for more advanced RAS technologies to be possible. In a figure titled ‘RAS Realisation’, there is a clear succession from the early insertion of some limited forms of RAS technologies into existing platforms, through to the replacement of existing platforms with ‘Trusted Autonomy’ (19). As with the Navy, trust is somewhat taken for granted – any autonomous system will be a trusted autonomous system. Another similarity with Navy is the focus on sovereign control. Given the ‘critical requirement that Army has trust and confidence in the systems that they are fielding, particularly those which incorporate AI’, it may therefore be necessary to ‘require Australian derived AI’ (25). To underline this point, they draw on the British argument that using foreign suppliers in this space carries risks (25). The Air Force does not have a dedicated RAS concept in the way that the Army and Navy do, but they do have a concept on the creation of ‘a fifth-generation force’ which incorporates considerable discussion of human-machine teaming. They describe the successful collaboration of humans and machines as augmented intelligence (Royal Australian Air Force, Citation2019, 4). There are four design principles of augmented intelligence: agile, open, resilient, and trusted. Under the heading of trusted, the key point is that ‘the exploration and pursuit of augmented intelligence must be transparent and accountable to the Air Force’s legal, ethical, and moral values and obligations’ (11, original emphasis removed). Elsewhere, they emphasize that AI and machine learning must be implemented ‘in a transparent, explainable, and trusted manner’ (8). Trust and ethics return in company, alongside transparency, explainability, legality, and morality. How to achieve trusted augmented intelligence is not explored further; rather, it is assumed that the technology will be trusted as long as it is transparent and in accordance with legal, ethical, and moral values. Trust is a particularly strong theme in Australian defence approaches to RAS-AI. As with the US and UK, it commonly appears in association with ethics, with a focus on the necessity of both trust and ethics to successfully adopt RAS-AI and achieve the desired military advantage.

Conclusion

There is power in the language of trust. It is being used in specific ways, by particular actors, to shape the discourse on robotics, autonomous systems, and artificial intelligence for defence purposes. As is the case with ethics, trust is not merely something ‘out there’, something neutral, or something technocratic. Rather, it is something political. It is a feature of discourse which contributes to the imaginaries of how RAS-AI will be deployed in warfare. These future imaginaries sit in the context of assumptions and ideas about waging ethical and virtuous war, with the language of trust and ethics often intermingling to influence visions of future warfare. In this paper, I have demonstrated the ways in which trust language is being used in military documents related to RAS-AI. This context provides one small but important slice of a larger puzzle in which trust language is also used by defence industry, defence science researchers, and scientific researchers. Future research is required to explore the language of trust further afield, assess how it is used in context, and look at the connections between the different domains. Future research should also examine the seemingly interconnected relationship between law, trust, and ethics. Further, while this paper has analysed the discourse of trust in defence documents, it has not pinpointed the ways in which this discourse is affecting military practices or policy decisions, or how these discourses are being received by their audiences. Future research should dig deeper to pinpoint how trust discourse shapes practices and decisions, and how audiences receive these particular narratives of trust.

Military documents alone have revealed some valuable insights. First, there is often an assumption about trust – that RAS-AI will be trustworthy. An inevitable future in which trusted RAS-AI will be in operation is presumed; any other possible future is discounted. Second, trust and ethics frequently co-occur. Trustworthy RAS-AI is ethical, and ethical RAS-AI is trustworthy. These categories are often taken for granted, despite both being widely contested and debated in their own right, let alone in relation to military matters. Third, there is an identity aspect to the military documents of the US, UK, and Australia: their RAS-AI is trustworthy and they care about trust and ethics, while the other, the adversary, does not. These insights speak to the existing literature on the politics of ethics. Pink’s ‘anticipated future’ of trusted and ethical RAS-AI in the civilian space is certainly evident in military thought. Ethics is used to facilitate trust, and trust is seen as essential to successful technology adoption. Zehfuss’ research on the politics of ethics, and related works such as those of Aradau and Blanke, Emery, and Richardson, resonates in the context of military writing on RAS-AI. Despite Zehfuss’ (Citation2019, 264) claim that the West’s imagination of ethical war and its identity as a moral leader was struggling to sustain itself come 2019, here we find it very much alive and kicking. The defence forces of the US, UK, and Australia construct themselves as the ethical developers of RAS-AI, who must be the ‘goodies’ pursuing these technologies because the ‘baddies’ will do so in an unethical way. Military RAS-AI can not only be used to wage ethical and virtuous war but can also be used for beneficial purposes such as promoting democracy, freedom, and human rights.

Ethics and trust play an interconnected role in obscuring the politics of how RAS-AI will be used – that is, against whom, for what purpose, in what context – in favour of a technocratic focus on how to design ethical and trustworthy technologies. Military advantage (achieved in an ethical way) is cast as undeniably necessary, and trust as essential to achieving it. When the creation of ethical, trustworthy technologies of war is seen as an end in and of itself, and as an anticipated future, the politics of war is side-lined. This paper has demonstrated that trust is political, that it is contextual, and that it is a term which must be interrogated in company with ethics if one wishes to fully appreciate the future imaginaries emerging alongside RAS-AI.

Acknowledgement

I would like to profoundly thank the two anonymous reviewers who engaged with this article incredibly generously, and helped make it a better piece of work. My eternal gratitude also goes to Jeremy Moses and Geoff Ford, who have shaped my thinking on these matters in profound ways.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Marsden Fund [19-UOC-068].

Notes

1. This statement is made on the basis of a corpus of all texts submitted and all statements made to the CCW process, collected by Geoff Ford in: Geoff Ford, ‘ConText: A browser based concordance and language analysis application’.

References