Algorithmic explainability and legal reasoning

Pages 67–92 | Published online: 21 Feb 2022

ABSTRACT

Algorithmic explainability has become one of the key topics of the last decade in the discourse about automated decision-making (ADM, machine-made decisions). Within this discourse, an important subfield deals with the explainability of machine-made decisions or outputs that affect a person’s legal position or have legal implications in general – in short, algorithmic legal decisions. These may be decisions or recommendations made by software that supports judges, governmental agencies, or private actors: for example, the automatic refusal of an online credit application, e-recruiting practices without any human intervention, or a prediction of a person’s likelihood of recidivism. This article is a contribution to this discourse, and its first claim is that, as explainability has become a prominent issue in hundreds of ethical codes, policy papers and scholarly writings, it has also become a ‘semantically overloaded’ concept. It has acquired such a broad meaning, overlapping with so many other ethical issues and values, that its meaning is worth narrowing down and clarifying. This study suggests that the concept should be used only for individual automated decisions, especially those made by software based on machine learning, i.e. ‘black box-like’ systems. Restricting the term explainability to this area allows us to draw parallels between legal decisions and machine decisions, and thus to recognise the subject as a problem of legal reasoning and, in part, of linguistics. The second claim of this article is that algorithmic legal decisions should follow the pattern of legal reasoning, translating the machine outputs into a form in which the decision is explained as the application of norms to a factual situation. Therefore, just as the norms and the facts must be translated into data for the algorithm, so the data outputs must be translated back into a proper legal justification.
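
To make the second claim concrete, the following is a minimal, purely illustrative sketch of what ‘back-translating’ a machine output into a norms-applied-to-facts justification might look like. Every statute, threshold and parameter here is hypothetical and is not taken from the article:

    # Purely illustrative: wraps a raw model score in the norm-plus-facts
    # structure of a legal justification. All names are hypothetical.

    def back_translate(score: float, threshold: float, norm: str,
                       facts: list) -> str:
        """Render a machine output as an application of a norm to facts."""
        refused = score >= threshold
        lines = [f"Under {norm}, the application is "
                 f"{'refused' if refused else 'granted'}."]
        lines.append("Facts considered:")
        lines += [f"- {fact}" for fact in facts]
        lines.append(f"The assessed risk ({score:.2f}) "
                     f"{'exceeds' if refused else 'is below'} "
                     f"the threshold ({threshold:.2f}) set by the norm.")
        return "\n".join(lines)

    # Example: a credit-refusal decision under a hypothetical statute.
    print(back_translate(0.82, 0.60, "s 4(2) of a hypothetical Credit Act",
                         ["income below the statutory minimum",
                          "two payment defaults in the last year"]))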

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 The literature on explainable algorithms has grown extremely large in the past few years. Barredo Arrieta and others analysed more than 400 publications in a single article (Alejandro Barredo Arrieta and others, ‘Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI’ (2020) 58 Information Fusion 82).

2 Some other recent publications: Pantelis Linardatos, Vasilis Papastefanopoulos and Sotiris Kotsiantis, ‘Explainable AI: A Review of Machine Learning Interpretability Methods’ (2021) 23(1) Entropy <https://www.mdpi.com/1099-4300/23/1/18> accessed 10 December 2021; Amina Adadi and Mohammed Berrada, ‘Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)’ (2018) 6 IEEE Access 52138; Jörg Cassens and Rebekah Wegener, ‘Intrinsic, Dialogic, and Impact Measures of Success for Explainable AI’ in Jörg Cassens, Rebekah Wegener and Anders Kofod-Petersen (eds), Proceedings of the Twelfth International Workshop Modelling and Reasoning in Context (MRC 2021).

3 The field is now commonly called ‘human–computer interaction’. It has a dedicated journal (Human–Computer Interaction, published by Taylor & Francis). Illustrative books on the topic: Alan Dix and others, Human-Computer Interaction (3rd edn, Pearson Education 2004); Silvia Pfleger, Joao Goncalves and Kadamula Varghese (eds), Advances in Human-Computer Interaction (Springer 1995); more recently, Sergio Sayago, Perspectives on Human-Computer Interaction Research with Older People (Springer Nature 2019). For a good overview of explainability cases in various situations of life, see Mersedeh Sadegh, Verena Klös and Andreas Vogelsang, ‘Cases for Explainable Software Systems: Characteristics and Examples’, available online <https://arxiv.org/pdf/2108.05980.pdf> accessed 10 December 2021.

4 There is an extensive literature on the explainability of recommendations and diagnoses made by AI based on image recognition. See, for example, José Luis Solorio-Ramírez and others, ‘Brain Hemorrhage Classification in CT Scan Images Using Minimalist Machine Learning’ (2021) 11 Diagnostics 1449; Amitojdeep Singh, Sourya Sengupta and Vasudevan Lakshminarayanan, ‘Explainable Deep Learning Models in Medical Image Analysis’ (2020) 6 Journal of Imaging 52, available online <https://www.mdpi.com/2313-433X/6/6/52> accessed 10 December 2021; Fabrizio Nunnari, Abdul Kadir and Daniel Sonntag, ‘On the Overlap Between Grad-CAM Saliency Maps and Explainable Visual Features in Skin Cancer Images’ in Andreas Holzinger and others (eds), Machine Learning and Knowledge Extraction: CD-MAKE 2021, Lecture Notes in Computer Science, vol 12844 (Springer 2021).

5 ‘Mais les juges de la nation ne sont, comme nous avons dit, que la bouche qui prononce les paroles de la loi’ [‘But the judges of the nation are, as we have said, only the mouth that pronounces the words of the law’]. Charles de Secondat de Montesquieu, De l’esprit des lois, Book XI, Chapter 6 (online edition) 116 <https://archives.ecole-alsacienne.org/CDI/pdf/1400/14055_MONT.pdf> accessed 14 December 2021.

6 Henrik Palmer Olsen, Jacob Livingston Slosser and Thomas Troels Hildebrandt, ‘What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration’ in Hans-W. Micklitz and others (eds), Constitutional Challenges in the Algorithmic Society (CUP 2022) 220–221.

7 Neil M. Richards and Jonathan H. King, ‘Big Data Ethics’ (2014) 49 Wake Forest Law Review 393; Solon Barocas and Andrew D. Selbst, ‘Big Data's Disparate Impact’ (2016) 104 California Law Review 671.

8 Vincent Müller, ‘Ethics of Artificial Intelligence and Robotics’ in Edward N. Zalta (ed), The Stanford Encyclopedia of Philosophy (summer 2021 edition) <https://plato.stanford.edu/archives/sum2021/entry/ethics-ai/>; Mark Coeckelbergh, AI Ethics (The MIT Press 2020); and European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ {SEC(2021) 167 final} – {SWD(2021) 84 final} – {SWD(2021) 85 final}: ‘to address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately’ (recital 47).

9 See section 2.1.

10 Roman Jakobson, ‘On Linguistic Aspects of Translation’ in Reuben Arthur Brower (ed), On Translation (Harvard University Press 1959) 232–239, available online: <https://web.stanford.edu/~eckert/PDF/jakobson.pdf> accessed 10 December 2021.

11 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

12 ‘Member States shall grant the right to every person not to be subject to a decision […] which is based solely on automated processing of data’ ibid 15(1).

13 ibid 12(a).

14 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (GDPR).

15 Bryce Goodman and Seth Flaxman, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’ (2016) arXiv:1606.08813v3 1; Bryan Casey, Ashkon Farhangi and Roland Vogl, ‘Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise’ (2019) 34 Berkeley Technology Law Journal 143.

16 GDPR (n 14) recital 71, italics added.

17 The Article 29 Working Party (WP29) was a working group set up under Article 29 of the Directive to deal with privacy and personal data protection issues until 25 May 2018, when the General Data Protection Regulation entered into force. It issued hundreds of opinions, including on automated individual decisions.

18 Article 29 Working Party, ‘Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679’ WP251rev.01, 27, available online: <https://ec.europa.eu/newsroom/article29/items/612053/en> accessed 10 December 2021.

19 These are systems that typically monitor roads with cameras and other image-recognition devices, detect certain simple traffic violations (e.g., failing to signal, or speeding) together with the offender’s license plate, and then manage the entire first-instance fining process.

20 On automatic traffic enforcement systems, see Jennifer M. Lancaster, ‘You Have Got Mail: Analysis of the Constitutionality of Speeding Cameras in City of Moline Acres v. Brennan, 470 S.W.3d 367 (Mo. 2015)’ (2017) 41 Southern Illinois University Law Journal 485, 493.

21 Goodman and Flaxman (n 15) 1; Casey, Farhangi and Vogl (n 15) 145; see also Céline Castets-Renard, ‘Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making’ (2019) 30 Fordham Intellectual Property, Media & Entertainment Law Journal 91; Thomas D. Grant and Damon J. Wischik, ‘Show Us the Data: Privacy, Explainability, and Why the Law Can't Have Both’ (2020) 88 George Washington Law Review 1350.

22 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76.

23 Gianclaudio Malgieri and Giovanni Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 243.

24 See, e.g. Joshua A. Kroll and others, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633; Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105 Georgetown Law Journal 1147; Andrew D. Selbst and Solon Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87 Fordham Law Review 1085; Ashley Deeks, ‘The Judicial Demand for Explainable Artificial Intelligence’ (2019) 119 Columbia Law Review 1829; Katherine J. Strandburg, ‘Rulemaking and Inscrutable Automated Decision Tools’ (2019) 119 Columbia Law Review 1851.

25 Executive Office of the President, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (May 2016), available online: <https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf> accessed 13 December 2021; Ronan Hamon, Henrik Junklewitz and José Ignacio Sanchez Martin, Robustness and Explainability of Artificial Intelligence, EUR 30040 EN (Publications Office of the European Union 2020) doi:10.2760/57493, available online: <https://publications.jrc.ec.europa.eu/repository/handle/JRC119336> accessed 13 December 2021.

26 See the nearly 200 ethical guidelines collected in the AI Ethics Guidelines Global Inventory <https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/> accessed 13 December 2021.

27 See, for example, the recent European Artificial Intelligence Act proposal (n 8).

28 A few examples include Accenture, Responsible AI and Ethical Framework: ‘Transparency: When complex machine learning systems have been used to make significant decisions, it may be difficult to unpick the causes behind a specific course of action. The clear explanation of machine reasoning is necessary to determine accountability.’ Available online: <https://www.accenture.com/gb-en/company-responsible-ai-robotics> accessed 13 December 2021; Advisory Board on Artificial Intelligence and Human Society, Japan, Report on Artificial Intelligence and Human Society, 23: ‘R&D should be conducted to develop technologies that enable people […] to explain the processes and logics of calculations inside AI technologies.’ Available online: <https://www8.cao.go.jp/cstp/tyousakai/ai/summary/aisociety_en.pdf> accessed 13 December 2021; Artificial Intelligence – Australia’s Ethics Framework: A Discussion Paper, 6: ‘Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.’ Available online: <https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf> accessed 13 December 2021; High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI: ‘Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g. application areas of a system). Technical explainability requires that the decisions made by an AI system can be understood and traced by human beings. Moreover, trade-offs might have to be made between enhancing a system's explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability). Whenever an AI system has a significant impact on people’s lives, it should be possible to demand a suitable explanation of the AI system’s decision-making process. Such explanation should be timely and adapted to the expertise of the stakeholder concerned (e.g. layperson, regulator or researcher). In addition, explanations of the degree to which an AI system influences and shapes the organisational decision-making process, design choices of the system, and the rationale for deploying it, should be available (hence ensuring business model transparency).’ Available online: <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai> accessed 13 December 2021.

29 Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2019) 1 Harvard Data Science Review 2; Luciano Floridi and others, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 687, available online <https://link.springer.com/article/10.1007/s11023-018-9482-5> accessed 15 December 2021.

30 Dwight Bolinger, ‘Semantic Overloading: A Restudy of the Verb Remind’ (1971) 47 Language 522.

31 The recently dominant theory of legal argumentation, coherence theory, rests on this assumption. See e.g. Ronald Dworkin, Taking Rights Seriously (HUP 1977); Ronald Dworkin, Law’s Empire (HUP 1986); Neil MacCormick, ‘Coherence in Legal Justification’ in Alexander Peczenik, Lars Lindahl and Bert Van Roermund (eds), Theory of Legal Science (D. Reidel Publishing 1984); Joseph Raz, ‘The Relevance of Coherence’ (1992) 72 Boston University Law Review 273.

32 According to Dworkin, sometimes good procedures can lead to unfair decisions. In such cases the principle of integrity applies: we must at least be consistent. Dworkin, Law’s Empire (n 31) 176.

33 Peter Moffett and Gregory Moore, ‘The Standard of Care: Legal History and Definitions: The Bad and Good News’ (2011) 12 Western Journal of Emergency Medicine 109.

34 See e.g. John Naughton, ‘To Err is Human – is that Why We Fear Machines that Can Be Made to Err Less?’ The Guardian (14 December 2019), available online: <https://www.theguardian.com/commentisfree/2019/dec/14/err-is-human-why-fear-machines-made-to-err-less-algorithmic-bias> accessed 13 December 2021.

35 Alexander Pope, ‘An Essay on Criticism’, available online <https://www.poetryfoundation.org/articles/69379/an-essay-on-criticism> accessed 13 December 2021.

36 Laetitia A. Renier, Marianne Schmid Mast and Anely Bekbergenova, ‘To Err is Human, Not Algorithmic – Robust Reactions to Erring Algorithms’ (2021) Computers in Human Behavior (article in press) <https://doi.org/10.1016/j.chb.2021.106879>.

37 Lawrence Lessig, Code 2.0 (Basic Books 2006) 124–125.

38 Mireille Hildebrandt, ‘Algorithmic Regulation and the Rule of Law’ (2018) 376 Philosophical Transactions of the Royal Society A 20170355.

39 Lon L. Fuller, ‘Reason and Fiat in Case Law’ (1946) 59 Harvard Law Review 376.

40 Olsen, Slosser and Hildebrandt (n 6) 220.

41 The COMPAS software played an important role in State v. Loomis, 881 N.W.2d 749 (Wis. 2016). For a description and detailed critique of the software, see Jeff Larson and others, ‘How We Analyzed the COMPAS Recidivism Algorithm’ ProPublica (23 May 2016), available online <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.

42 Frank Pasquale, The Black Box Society (HUP 2015) 149; and Frank Pasquale, ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’ (2017) 78 Ohio State Law Journal 1243.

43 Richards and King (n 7).

44 These are mentioned in the GDPR (recital 71). See also Executive Office of the President (of the USA), Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (n 25).

45 E.g. Charles Duhigg, ‘How Companies Learn Your Secrets’ The New York Times (16 February 2012), available online <https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html> accessed 13 December 2021; Michael Mattioli, ‘Disclosing Big Data’ (2014) 99 Minnesota Law Review 535.

46 Federal Trade Commission, ‘FTC Report Provides Recommendations to Business on Growing Use of Big Data’, available online <https://www.ftc.gov/news-events/press-releases/2016/01/ftc-report-provides-recommendations-business-growing-use-big-data> accessed 13 December 2021; Cathy O’Neil, ‘Big-Data Algorithms Are Manipulating Us All’ Wired (10 October 2016), available online <https://www.wired.com/2016/10/big-data-algorithms-manipulating-us/> accessed 13 December 2021.

47 Directive 95/46/EC (n 11) 15(1).

48 AI HLEG Ethics Guidelines (n 28) 21.

49 See e.g. recital 99 of the new draft DSA (Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC): ‘The Commission should be empowered to request access to and explanations relating to, databases and algorithms … ’ and recital 52: ‘recipients of the service should have information on the main parameters used for determining that specific advertising is to be displayed to them, providing meaningful explanations of the logic used to that end.’

50 For a different answer to the ‘what should be explained?’ question, see Hamon and others (n 25) and Bernhard Waltl and Roland Vogl, ‘Explainable Artificial Intelligence – the New Frontier in Legal Informatics’ Jusletter IT (22 February 2018).

51 Selbst and Barocas (n 24).

52 ibid 1094.

53 ibid 1097, and similarly, distinguishing between causal and legal explanation: Olsen, Slosser and Hildebrandt (n 6) 224.

54 Grant and Wischik (n 21) 1419. The authors’ main argument here is to point out the tension within the GDPR between privacy and explainability.

55 Heike Felzmann and others, ‘Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns’ (2019) Big Data & Society 5.

56 A good example of the confusion between substantive principles and explainability is given by Casey and others, who argue that the right to explanation exists in the GDPR and that it is ‘a promising new mechanism for promoting fairness’. Casey and others (n 15) 148.

57 Floridi and Cowls (n 29) 8.

58 Lon L. Fuller, The Morality of Law (Yale University Press 1964).

59 E.g., Amanda Thomas in her blogpost (‘Model Transparency and Explainability’, available online <https://ople.ai/ai-blog/model-transparency-and-explainability/> accessed 20 June 2021) treats explainability as a synonym for interpretability, and in the same vein a course material published by the University of Helsinki (Chapter 4: Should We Know How AI Works, available online <https://ethics-of-ai.mooc.fi/chapter-4/2-what-is-transparency> accessed 10 December 2021). See also Mike Ananny and Kate Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20 New Media & Society 973.

60 E.g., Joana Hois, Dimitra Theofanou-Fuelbier and Alicia-Janin Junk, ‘How to Achieve Explainability and Transparency in Human AI Interaction’ in Constantine Stephanidis (ed.), HCI International 2019 – Posters. HCII 2019. Communications in Computer and Information Science, vol 1033. (Springer 2019).

61 Floridi and Cowls (n 29).

62 Barredo Arrieta and others (n 1) (in a general AI context) talk about ‘transparent models’ and ‘post-hoc explainability techniques for machine learning models’. I use basically the same distinction here.

63 The concepts of interpretability and explicability are treated here as synonyms of explainability, with the caveat that interpretability seems to be used more often by technicians and programmers, though in essentially the same sense as explainability (Chris Olah and others, ‘The Building Blocks of Interpretability’ (2018) Distill, available online: <https://distill.pub/2018/building-blocks/> accessed 13 December 2021). This is quite striking in the material of the High-Level Expert Group (AI HLEG) (n 28): ethical requirements are explained throughout, but at the end of the material, where the details of the requirements of technical robustness and safety are discussed, the term interpretability is used. ‘Explainability’ also seems to emphasise the role of ordinary narratives, which will play an important role later.

64 Floridi and Cowls (n 29).

65 See n 39.

66 Jakobson, ‘On Linguistic Aspects of Translation’ (n 10).

67 Frederick Schauer, ‘Giving Reasons’ (1995) 47 Stanford Law Review 633; Julie Dickson, ‘Interpretation and Coherence in Legal Reasoning’ in Edward N. Zalta (ed), The Stanford Encyclopedia of Philosophy (summer 2021 edition), available online <https://plato.stanford.edu/entries/legal-reas-interpret/> accessed 13 December 2021; and Müller (n 8).

68 Dworkin, Law’s Empire (n 31).

69 Schauer (n 67) 634.

70 The facts, the laws, and the application of laws to facts (facts, laws, and inference) are constant elements of decisions in almost every legal system. Judgments of the European Court of Human Rights, for example, use the following structure: The Facts, Relevant Legal Framework and Practice, The Law, and within the latter: The Parties’ Submissions, The Court’s Assessment. In the same vein, Olsen, Slosser and Hildebrandt (n 6) 227–230.

71 J. Christopher Rideout, ‘Storytelling, Narrative Rationality, and Legal Persuasion’ (2008) 14 Legal Writing: The Journal of the Legal Writing Institute 53.

72 Robert Alexy, A Theory of Legal Argumentation: The Theory of Rational Discourse as Theory of Legal Justification (OUP 1989).

73 In some legal cultures, like the French, however, legal decisions are presented as impersonal decisions.

74 Ronald Dworkin, ‘Law as Interpretation’ (1982) 60 Texas Law Review 527.

75 See e.g. Neil MacCormick and Robert S. Summers (eds), Interpreting Statutes: A Comparative Study (Routledge 1991).

76 ‘They do not just have to prove that something happened, but that what happened is believable. Proof cannot be aimed at certainty, only at (maximum) probability – and the great errors of major lawsuits warn us of this.’ Miklós Szabó, Ars Iuris: A jogdogmatika alapjai [Ars Iuris: The Foundations of Legal Dogmatics] (Bíbor 2005) 257.

77 Wolfgang Fikentscher, Methoden des Rechts IV (J.C.B. Mohr (Paul Siebeck) 1975–1977) 198. Fikentscher adapts Gadamer’s theory of the hermeneutic circle to the application of laws to a particular case.

78 Lawrence Baum, Judges and their Audiences. A Perspective on Judicial Behaviour (Princeton University Press, 2008) 21, 50.

79 Kiel Brennan-Marquez, ‘Plausible Cause: Explanatory Standards in the Age of Powerful Machines’ (2017) 70 Vanderbilt Law Review 1249.

80 Robert M. Cover, ‘The Supreme Court, 1982 Term – Foreword: Nomos and Narrative’ (1983) 97 Harvard Law Review 4.

81 Caryn Devins and others, ‘The Law and Big Data’ (2017) 27 Cornell Journal of Law and Public Policy 357.

82 Hans-Georg Gadamer, Truth and Method (Continuum 2006) 267; and Fikentscher (n 77).

83 Brennan-Marquez (n 79) 1250.

84 Recently, the papers collected in Narrative and Metaphor in the Law have demonstrated how important narratives and metaphors are in the law: Michael Hanne and Robert Weisberg (eds), Narrative and Metaphor in the Law (CUP 2018). In a similar vein: Peter Brooks, ‘Narrative in and of the Law’ in James Phelan and Peter J. Rabinowitz (eds), A Companion to Narrative Theory (Blackwell Publishing 2005).

85 Jakobson (n 10) 233.

86 Umberto Eco, ‘Traduzione e interpretazione’ [Translation and Interpretation] Versus 85–87, 55–100, cited by Nicola Dusi, ‘Intersemiotic Translation: Theories, Problems, Analysis’ (2015) 206 Semiotica 181.

87 Miklós Szabó, ‘Law as Translation’ (2004) 91 Archiv für Rechts- und Sozialphilosophie 65–66.

88 Jakobson (n 10) 236.

89 Liza A. Shay and others, ‘Do Robots Dream of Electric Laws? An Experiment in the Law as Algorithm’ in Ryan Calo, Michael Froomkin and Ian Kerr (eds.), Robot Law (Edward Elgar 2016) 274–305.

90 James Mohun and Alex Roberts, Cracking the Code: Rulemaking for Humans and Machines, OECD Working Papers on Public Governance No. 42 (OECD 2020), available online <https://www.oecd-ilibrary.org/governance/cracking-the-code_3afe6ba5-en> accessed 14 December 2021.

91 ibid 18.

92 James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (HUP 1986) 15.

93 For the description of COMPAS, see n 41.

94 The questionnaire is available online <https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html> accessed 14 December 2021.

95 Jakobson (n 10) 233.

96 J.C. Catford, A Linguistic Theory of Translation (OUP 1965) 49.

97 Eugene A. Nida and Charles Taber, The Theory and Practice of Translation (Brill 2003) 1.

98 Olsen, Slosser and Hildebrandt (n 6) 234.

99 Dusi (n 86) 182.

100 Brennan-Marquez (n 79) 1280.

101 Ananny and Crawford (n 59) 12.

102 For example, it is well known that some tolerance is built into speed-control systems, and exceeding the posted limit by up to 10% is generally tolerated. In that case, the driver should know that at a 60 km/h sign he can actually drive 66 km/h.
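
To illustrate the arithmetic, here is a minimal sketch assuming a simple multiplicative tolerance rule; the note does not specify how such tolerances are actually implemented, and all names here are hypothetical:

    # Illustrative only: a hypothetical 10% enforcement tolerance rule.

    def is_violation(measured_kmh: float, limit_kmh: float,
                     tolerance: float = 0.10) -> bool:
        """Flag a violation only when the measured speed exceeds the
        posted limit plus the built-in enforcement tolerance."""
        return measured_kmh > limit_kmh * (1 + tolerance)

    # At a 60 km/h sign, the effective threshold is 66 km/h:
    assert not is_violation(66, 60)  # tolerated
    assert is_violation(67, 60)      # fined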

103 See Mohun and Roberts (n 90).

104 Olsen, Slosser and Hildebrandt (n 6).

105 ibid 232.
