
AI Challenges and the Inadequacy of Human Rights Protections

Pages 2-22 | Published online: 23 Mar 2021

Abstract

My aim in this article is to set out some counter-intuitive claims about the challenges that artificial intelligence (AI) applications pose to the protection and enjoyment of human rights, and to be your guide through my unorthodox ideas. While AI and its applications raise familiar human rights issues, these are perhaps the easiest of the challenges because the human rights regime already recognizes them as problems. Instead, the more pernicious challenges are those that have yet to be identified or articulated, because they arise from new affordances rather than directly from AI modeled as a technology. I suggest that we need to actively explore the potential problem space on this basis, and that we need to adopt models and metaphors that systematically exclude the possibility of applying the human rights regime to AI applications. This orientation will present us with the difficult, intractable problems that most urgently require responses. There are convincing ways of understanding AI that lock out the very possibility of human rights responses, and this should be grounds for serious concern. Responses need to exploit both sets of insights I present in this paper: first, that proactive and systematic searches of the potential problem space need to be conducted continuously to find the problems that require responses; and second, that the monopoly the human rights regime holds with regard to addressing harm and suffering needs to be broken, so that we can deploy a greater range of barriers against failures to recognize and remedy AI-induced wrongs.

Acknowledgments

I would like to express my gratitude to Jonathan Jacobs and Margaret Smith for their kind invitation to speak at John Jay College of Criminal Justice of the City University of New York in March 2020, on which this paper is based.

Disclosure Statement

No potential conflict of interest was reported by the author.

Notes

1 Two outstanding pieces set the broader scene for the human rights challenges posed by AI: Cohen, “Affording Fundamental Rights”; and Zuboff, “We Make Them Dance.”

2 My aim in this paper is to set out a guide and an overview of the inadequacies of the human rights regime when faced with AI applications. I have worked out some of these ideas in previous work, including: Liu and Zawieska, “From Responsible Robotics”; Liu, “Digital Disruption”; Liu, “Power Structure of Artificial Intelligence”; Liu and Zawieska, “A New Human Rights Regime.”

3 David Kennedy suggests that “[a]s a dominant and fashionable vocabulary for thinking about emancipation, human rights crowds out other ways of understanding harm and recompense.” Dark Sides of Virtue, 9. When it comes to human rights law, the legal mechanisms that recognise damage and recategorise it as harm and injury play a pivotal role. On this point see Veitch, Law and Irresponsibility.

4 My use of “brittle” here is a heuristic for fragility, understood as a concave sensitivity to stressors; such concavity entails a negative sensitivity to increases in volatility. See Taleb, Antifragile.
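To make the heuristic concrete (the formalization below is mine, following Taleb’s convexity framing, not the note’s own notation): if a system’s outcome is a concave function f of a stressor X, Jensen’s inequality implies that added volatility can only hurt.

    % A minimal sketch of fragility as concavity, assuming a concave
    % response f of the system to a stressor X (my formalization):
    \[
      \mathbb{E}[f(X)] \;\le\; f(\mathbb{E}[X]) \quad \text{(Jensen's inequality)}
    \]
    % If X' is a mean-preserving spread of X (same mean, more volatility),
    % concavity implies the expected outcome can only deteriorate:
    \[
      \mathbb{E}[f(X')] \;\le\; \mathbb{E}[f(X)]
    \]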

5 See Liu and Maas, “Solving for X?”

6 See ibid.

7 As the European Court of Human Rights highlights in one of its rulings, “[t]he Court must also recall that the Convention is a living instrument which, as the Commission rightly stressed, must be interpreted in the light of present-day conditions.” Tyrer v UK, No. 5856/72 (European Court of Human Rights 25 April 1978), paragraph 31.

8 See Letsas, “ECHR as a Living Instrument.”

9 See Bennett Moses, “Regulating in the Face of Sociotechnical Change.”

10 See Liu et al., “Artificial Intelligence and Legal Disruption.”

11 Don Norman explains that “[t]he term affordance refers to the relationship between a physical object and a person (or for that matter, any interacting agent, whether animal or human, or even machines and robots). An affordance is a relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used.” Design of Everyday Things, 11.

12 Liu et al., “Artificial Intelligence and Legal Disruption,” 226. See generally 224–31.

13 In Steven Johnson’s words, “[t]he adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.” Where Good Ideas Come From, 31.

14 This is further exacerbated by the nature of AI as a “general-purpose technology” capable of affecting all facets of human activity. See Liu et al., “Artificial Intelligence and Legal Disruption.”

15 See for example, Clapham, Human Rights Obligations.

16 See for example, Osman v UK, No. 87/1997/871/1083 (European Court of Human Rights 28 October 1998).

17 The “minimum severity” standard is clearest in the Article 3 jurisprudence of the European Court of Human Rights. See Ireland v UK, No. 5310/71 (European Court of Human Rights 18 January 1978).

18 This time frame differs from the stipulated time limits within which claims must be made after the alleged human rights violation.

19 See Lessig, “Law of the Horse.”

20 See Brownsword, “In the Year 2061.”

21 See Lessig, “Law of the Horse,” 507–10.

22 For structural forms of discrimination, see, for example, Schindler, “Architectural Exclusion”; Liu, “Three Types of Structural Discrimination.”

23 Schindler, “Architectural Exclusion,” 1954.

24 Veitch, Law and Irresponsibility, 86. (Emphasis added.) Thus, according to Veitch’s distinction, “injuries” are by definition harms that are recognised by the law, while “damage” denotes harms to which the law is blind. This suggests that the initial categorization is critical, since responsibility need not be considered, let alone established, for “damage.”

25 As captured in, for example, the Preamble of the Universal Declaration of Human Rights (UDHR). UN General Assembly, “Universal Declaration of Human Rights,” UNGA Res 217 A (III) (1948).

26 The fact that the modern human rights movement was born in the aftermath of the Second World War provides an example of this position. See Glendon, A World Made New.

27 See Dershowitz, Rights from Wrongs.

28 See ibid.

29 See Danaher, “Axiological Futurism.” See also, Morris, Foragers, Farmers, and Fossil Fuels; Flanagan, Geography of Morals.

30 See Liu, “Power Structure of Artificial Intelligence,” 214–17.

31 As David Collingridge puts it, “[w]hen change is easy, the need for it cannot be foreseen; when change is apparent, change has become expensive, difficult, and time consuming.” Social Control of Technology, 11.

32 See Liu et al., “Artificial Intelligence and Legal Disruption,” 207–17.

33 But see, for example, Marchant, Allenby, and Herkert, Growing Gap.

34 See Liu et al., “Artificial Intelligence and Legal Disruption,” 216–22.

35 See Calo, “Robotics and Lessons of Cyberlaw.”

36 See Balkin, “Path of Robotics Law.”

37 See Bennett Moses, “Regulating in the Face of Sociotechnical Change.”

38 This question becomes pertinent if indeed “all models are wrong, but some are useful.” (A quip attributed to George Box.)

39 See Calo, “Robots as Legal Metaphors.”

40 See Saxe, Blind Men and the Elephant.

41 The metaphor itself may be overly restrictive: by failing to characterise the problem; by overlooking out-of-context problems; by missing the sociopolitical roots that shape research agendas; by missing the interrelation between AI and other potential problems; by adopting overly static rather than dynamic and adaptive perspectives; and by adopting problem-identification from a human scale and perspective. See Liu and Maas, “Solving for X?”

42 Such thinking can follow the lines of “Legal Development” for example. See Liu et al., “Artificial Intelligence and Legal Disruption,” 233–42.

43 Here I am drawing an analogy with robotics. Ryan Calo has suggested that “robots, more so than other technology in our lives, have a social valence. They feel different to us, more like living agents. The effect is so systematic that a team of prominent psychologists and engineers has argued for a new ontological category for robots somewhere between object and agent.” “Robotics and Lessons of Cyberlaw,” 532.

44 See Liu, “Refining Responsibility.”

45 There is a significant literature on this point which I will not discuss, as it leads in the opposite direction of the argument I present here.

46 See Liu, “Power Structure of Artificial Intelligence,” 209–14.

47 But see, Stammers, “Human Rights and Power.”

48 See Liu, “Power Structure of Artificial Intelligence,” 233–42.

49 Ibid., 209. Emphasis original.

50 See Brownsword, “In the Year 2061,” 30–1.

51 Ibid., 31. While any form of human-machine interaction opens the door for hybridization, the “reversal” heralded by humans being “under the loop” transforms them from regulators into regulatees. This reversal thus ushers in Brownsword’s third generation of regulatory environments, as human beings become regulatory patients.

52 See Liu, “Power Structure of Artificial Intelligence,” 233–42.

53 Recall the above discussion surrounding the blind men and the elephant.

54 See Kennedy, Dark Sides of Virtue, 9.

55 Level 0 “involves an adherence to ‘normal’ governance processes, norms, structures and concepts, however constructed. While it does not deny the emergence of certain problems or challenges, it can be slow to recognize them—and even when it does, it will deny their fundamental ‘newness’. This governance level is therefore rigorously focused on solving problems within (or by extension or application of) the existing governance system.” Level 1 “still narrowly or primarily emphasizes the importance of fixing the direct problem at hand—that is, it takes for granted a narrow rationale—but it opens up for the possibility that doing so may require innovation and (even far-reaching) change in the regulatory tools or governance processes.” See further, Liu and Maas, “Solving for X?”

56 It thus stands in contradistinction to a problem-finding orientation. See ibid.

57 See Liu, Lauta, and Maas, “Governing Boring Apocalypses”; see also Wisner et al., At Risk; Perry, “What Is a Disaster?” This formulation is only intended to express the interrelationship of several variables.
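For orientation, the interrelationship the note gestures at is often condensed in the disaster literature into a pseudo-equation (this rendering follows Wisner et al., At Risk, and is my gloss rather than the note’s own formulation):

    % The multiplication expresses interdependence of the variables,
    % not a literal arithmetic product:
    \[
      \text{Risk} \;=\; \text{Hazard} \times \text{Vulnerability}
    \]

On this reading, a hazard produces no disaster without a vulnerable population exposed to it, and vulnerability without a hazard likewise yields no loss.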

58 By no means does this sketch limit human rights challenges to only those posed by AI, as it is intended as a general model for understanding present and future challenges to the regime.

59 I am here invoking Thomas Kuhn’s idea of paradigm shifts. See Kuhn, Structure of Scientific Revolutions.

60 See Yeung, “‘Hypernudge’”; Susser, “Invisible Influence”; Susser, Roessler, and Nissenbaum, “Technology, Autonomy, and Manipulation”; Susser, Roessler, and Nissenbaum, “Online Manipulation.”

61 Zuboff, “We Make Them Dance,” 23. (Emphasis in original.)

62 Winner, “Do Artifacts Have Politics?,” 123.

63 See Liu, “Power Structure of Artificial Intelligence,” 209–17.

64 See Mistreanu, “Life Inside China’s Social Credit Laboratory”; Hvistendahl, “Inside China’s Vast New Experiment.”

65 See Schindler, “Architectural Exclusion,” 1954. See also the discussion above with regards to the famous example of Robert Moses and his low-slung bridges on Long Island.

66 See Brownsword, “In the Year 2061,” 31.

67 See, for example, the “Swiss Cheese” model of (human) error that places an array of leaky barriers between hazards and losses. Reason, “Human Error.”

68 Here I adopt James Reason’s terminology. As I suggest above, the notion of human rights challenges as external hazards may be misleading. I would argue that all that matters is that multiple and varied barriers against failure are put in place.
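One hedged way to see why multiple and varied barriers matter (my formalization, not Reason’s notation): if each of n barriers independently lets a hazard through with probability p_i, a loss requires every layer’s holes to align.

    % Assuming (unrealistically) independent barrier failures
    % with per-barrier pass-through probabilities p_i:
    \[
      P(\text{loss}) \;=\; \prod_{i=1}^{n} p_i
    \]

Each added barrier multiplies the failure probability down; correlated holes (common-mode failures) erode this guarantee, which is one reason to prefer varied rather than uniform barriers.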
