Research Article

Programming utopia: artificial intelligence, judgement, and the prospect of jurisgenesis

Published online: 30 Jun 2024
 

ABSTRACT

There has been much handwringing about artificial intelligence’s application to law. Much of it regards AI’s potential incompatibility with restrictions on the unauthorized interpretation of law. However, the arguments commonly offered in defence of these restrictions fail to justify curtailing a technology that promises to narrow the ‘justice gap’ and deliver us to a more egalitarian future. Despite this failure, I contend that utopian visions of ‘robot lawyers’ nevertheless cast a dystopian shadow, as the increased use of AI encroaches on the space of human judgement. At stake in this encroachment is the possibility of jurisgenesis, the interpretive activity by which law’s material power and normative meaning are bound. Preserving this possibility, I conclude, requires that discussions of law and AI include not only scholars of law and of machine learning, but also humanists, who concern themselves with the connection between our law and our humanity.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 See, for instance, Isaac Asimov, ‘The Vocabulary of Science Fiction,’ Asimov’s Science Fiction (Doubleday & Company 1981); Alexander Matuška, Karel Čapek: Man Against Destruction (George Allen & Unwin Ltd 1964); Kamila Kinyon, ‘The Phenomenology of Robots: Confrontations with Death in Karel Čapek’s “RUR,”’ (1999) 26 Science Fiction Studies 379; Juliane Strätz, ‘The Ordeal of Labour and the Birth of Robot Fiction,’ (2017) 62 American Studies 633; Matthew Dinan, ‘The Robot Condition: Karel Čapek’s RUR and Hannah Arendt on Labour, Technology, and Innovation,’ (2017) 46 Perspectives on Political Science 108; Juraj Odorčák & Pavlína Bakošová, ‘Robots, Extinction, and Salvation: On Altruism in Human-Posthuman Interactions,’ (2021) 12 Religions 275.

2 Robert Cover, ‘Nomos and Narrative,’ (1983) 97 Harvard Law Review 1, 9.

3 Ibid.

4 Ibid, 4.

5 In this paper, I do not use the term ‘judg(e)ment’ to refer exclusively to the formal decisions of judges in legal procedures, but to refer to the broader human capacity by which particular cases and subjects are brought together with general norms, concepts, and rules. A formal legal judgment may indeed be a species of judgement that predicates and invokes and expresses norms of conduct. However, such judgements take place in a landscape of other predications and normative commitments asserted by human and non-human actors.

6 See Katherine Medianik, ‘Artificially Intelligent Lawyers: The Model Rules of Professional Conduct in Accordance with the New Technological Era,’ (2018) 39 Cardozo Law Review 1497; Ed Walters, ‘The Model Rules of Autonomous Conduct,’ (2019) 35 Georgia State Law Review 1073; & Chris Chambers Goodman, ‘AI/ESQ.: Impacts of AI in Lawyer-Client Relationships,’ (2019) 72 Oklahoma Law Review 149.

7 See Chambers Goodman, ‘AI/ESQ.;’ Paul D. Callister, ‘Law, Artificial Intelligence, and Natural Language Processing: A Funny Thing Happened on the Way to My Search Results,’ (2020) 112 Law Library Journal 161; & Ronald Yu and Gabriela Spina Ali, ‘What’s inside the Black Box? AI Challenges for Lawyers and Researchers,’ (2019) 19 Legal Information Management 2.

8 See Norman W. Spaulding, ‘Is Human Judgment Necessary? Artificial Intelligence, Algorithmic Governance, and the Law,’ in Markus D. Dubber et al (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020), 374.

9 See Mireille Hildebrandt, ‘Law as Information in the Era of Data Driven Agency,’ (2017) 79 The Modern Law Review 1; Mireille Hildebrandt, ‘Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to Statistics,’ (2017) 68 University of Toronto Law Journal 12; Mireille Hildebrandt, ‘Algorithmic Regulation and the Rule of Law,’ (2018) 376 Philosophical Transactions: Mathematical, Physical and Engineering Sciences 1; Katja De Vries, ‘Privacy, Due Process and the Computational Turn: a Parable and First Analysis’ in Mireille Hildebrandt et al (eds), Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology (Routledge 2013); Trevor Bench-Capon, ‘Argument in Artificial Intelligence and Law,’ (1997) 5 Artificial Intelligence and Law 249; Trevor Bench-Capon & Giovanni Sartor, ‘A Model of Legal Reasoning with Cases Incorporating Theories and Values,’ (2003) 150 Artificial Intelligence 97; Laurence E. Diver, Digisprudence: Code as Law Rebooted (Edinburgh University Press 2022); & Mazvita Chirimuuta, ‘Rules, Judgment, and Mechanization,’ (2023) 1 Journal of Cross-Disciplinary Research in Computational Law 1.

10 Chirimuuta, ‘Rules, Judgment, and Mechanization,’ n.b., 7-10.

11 Daniel Martin Katz and others, ‘GPT-4 Passes the Bar Exam’ [2023] Social Science Research Network <https://doi.org/10.2139/ssrn.4389233> accessed 8 August 2023.

12 Joshua Browder (@jbrowder1), ‘Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom.’, X, 25 January 2023, 10:11 AM, https://twitter.com/jbrowder1/status/1618265395986857984?lang=en

13 The Robot Lawyer Resistance with Joshua Browder of DoNotPay (2023) <https://www.youtube.com/watch?v=AmVdYPTdw2c> accessed 8 August 2023.

14 At the time of this writing, these two concerns have converged: a case filed in 2023 alleged that DoNotPay’s robot lawyer was a fraud and did not, in fact, operate on the AI technologies that Browder claimed. (See David B Arato, ‘Lessons from DoNotPay: The Ethical Implications of AI in the Legal Industry’ (Lexicon Legal Content, 27 March 2023) <https://www.lexiconlegalcontent.com/lessons-from-donotpay/> accessed 28 March 2024.) The petitioner does not hesitate to taunt Browder, writing, ‘by all appearances, Respondents are dressing up an old-fashioned document wizard and calling it a “Robot Lawyer.”’ She adds, ‘petitioner does and will consent to any application Respondents make to use their “Robot Lawyer” in these proceedings,’ submitting that ‘a failure to make such an application should weigh heavily in the Court’s evaluation of whether DoNotPay actually has such a product.’ Likewise, reports that ChatGPT-4 aced the UBE have been met with similar criticism (if not legal challenge). In an early-release article, forthcoming in Artificial Intelligence and Law, Eric Martínez argues that ChatGPT’s designers skewed the data in their appraisal and that a more balanced appraisal would place the system’s performance as low as the 48th percentile. (See Eric Martínez, ‘Re-Evaluating GPT-4’s Bar Exam Performance’ (2023) Forthcoming Artificial Intelligence and Law.)

15 For more in-depth technical overviews of current forms of AI, as well as the controversies associated with defining this term, see Brian S Haney, ‘Applied Natural Language Processing for Law Practice’ [2020] Intellectual Property & Technology Forum at Boston College School of Law; Callister, ‘Law, Artificial Intelligence and Natural Language Processing’; Joanna J. Bryson, ‘The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation’ in Markus D. Dubber et al (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020), 1.

16 Harry Surden, ‘Artificial Intelligence and Law: An Overview,’ (2019) 35 Georgia State University Law Review 1305.

17 Ibid, 1308.

18 Ibid.

19 Ibid.

20 Hildebrandt, ‘Algorithmic Regulation and the Rule of Law.’

21 See Marek Sergot and others, ‘The British Nationality Act as a Logic Program’ (1986) 29 Communications of the ACM 370.

22 See Bench-Capon & Sartor, ‘A Model of Legal Reasoning;’ & Diver, Digisprudence, n.b., 80ff.

23 Hildebrandt, ‘Law as Computation in the Era of Artificial Legal Intelligence,’ 27ff.

24 Hildebrandt, ‘Algorithmic Regulation and the Rule of Law,’ 3.

25 Hildebrandt too has discussed COMPAS in this technical context. (See Mireille Hildebrandt, ‘Code-Driven Law: Freezing the Future and Scaling the Past,’ in Simon Deakin et al (eds.), Is Law Computable: Critical Reflections on Law and Artificial Intelligence (Hart Publishing 2020); & Mireille Hildebrandt, ‘The Issue of Bias: The Framing Powers of Machine Learning,’ in Marcello Pelillo et al (eds), Machines We Trust: Perspectives on Dependable AI (MIT Press 2021).)

26 Op. cit., n. 10.

27 Samuel Gibbs, ‘Chatbot Lawyer Overturns 160,000 Parking Tickets in London and New York’ The Guardian (28 June 2016) <https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york> accessed 9 August 2023.

28 ‘Most Popular Features,’ (DoNotPay) <https://donotpay.com/learn/most-popular-features/> accessed 4 January 2024.

29 ‘Access to Justice: Mitigating the Justice Gap,’ (American Bar) <https://www.americanbar.org/groups/litigation/committees/minority-trial-lawyer/practice/2017/access-to-justice-mitigating-justice-gap/> accessed 30 August 2023.

30 ‘The Justice Gap: Executive Summary,’ (Legal Services Corporation) <https://justicegap.lsc.gov/resource/executive-summary/> accessed 4 January 2024. The report adds that 74% of low-income households experienced at least one civil legal problem in 2021, that 39% of these households experienced five or more, and that 20% experienced ten or more. The most common types of problems affecting these households had to do with consumer issues, health care, housing, or income maintenance.

31 See, for instance, Richard A Oppel Jr and Jugal K Patel, ‘One Lawyer, 194 Felony Cases, and No Time’ The New York Times (31 January 2019) <https://www.nytimes.com/interactive/2019/01/31/us/public-defender-case-loads.html> accessed 30 August 2023.

33 Katherine LW Norton, ‘The Middle Ground: A Meaningful Balance between the Benefits and Limitations of Artificial Intelligence to Assist with the Justice Gap,’ (2020) 75 University of Miami Law Review 190.

34 Ibid, 244.

35 Ibid, 236.

36 John O McGinnis & Russell G. Pearce, ‘The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services,’ (2014) 82 Fordham Law Review 3041.

37 I consider a third possible argument in note 63 below.

38 McGinnis & Pearce, ‘The Great Disruption,’ 3065.

39 Ibid.

40 William S Duffey, Jr., The Significant Lawyer: The Pursuit of Purpose and Professionalism (Mercer University Press 2022).

41 ‘James Comey - Address on Race and Law Enforcement’ <https://www.americanrhetoric.com/speeches/jamescomeygeorgetownraceandlaw.htm> accessed 3 October 2023.

42 Chelsea Barabas, ‘Beyond Bias: “Ethical AI” in Criminal Law,’ in Markus D. Dubber et al (eds), The Oxford Handbook of Ethics of AI (Oxford University Press 2020), 739.

43 Ibid, 740-41.

44 Richard Berk et al, ‘Fairness in Criminal Justice Risk Assessments: The State of the Art,’ (2018) 50 Sociological Methods and Research 3.

45 COMPAS is a proprietary technology and the precise details of its development remain trade secrets. However, Rudin et al offer a plausible reconstruction of its inner workings in Cynthia Rudin, Caroline Wang, and Beau Coker, ‘The Age of Secrecy and Unfairness in Recidivism Prediction,’ (2020) 2 Harvard Data Science Review.

46 Barabas, ‘Beyond Bias,’ n.b., 740-751; Barabas points also to the contributions of Sharon Dolovich, David A. Harris, and Seth J. Prins. (See Sharon Dolovich, ‘Exclusion and Control in the Carceral State,’ (2011) 16 Berkeley Journal of Criminal Law 259; David A. Harris, ‘The Reality of Racial Disparity in Criminal Justice: The Significance of Data Collection,’ (2003) 66 Law & Contemporary Problems 71; & Seth J. Prins et al, ‘Can We Avoid Reductionism in Risk Reduction?,’ (2018) 22 Theoretical Criminology 258.)

47 Jeff Larson and others, ‘How We Analyzed the COMPAS Recidivism Algorithm,’ (ProPublica) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm> accessed 3 October 2023. See also Julia Angwin and others, ‘Machine Bias,’ (ProPublica) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 10 October 2023.

48 Ibid.

49 Barabas, ‘Beyond Bias.’

50 Ibid. Barabas deals with these problems at length in her article.

51 Spaulding, ‘Is Human Judgment Necessary,’ 380.

52 This issue came to a head in Loomis v. Wisconsin. (See Anne L. Washington, ‘How to Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate,’ (2018) 17 Colorado Technology Law Journal 131.)

53 Spaulding, ‘Is Human Judgment Necessary,’ 378.

54 Ibid, 395.

55 Laurence Diver’s criticisms of computational legalism underscore this ambiguity’s significance for democratic politics. (See Diver, Digisprudence.)

56 Ibid, 401. Here, Spaulding quotes Wayne Martin, Theories of Judgment: Psychology, Logic, Phenomenology (Cambridge University Press 2006).

57 Although the technologies are not yet so pervasively applied, this danger is particularly exacerbated by generative natural language processing systems that might author judicial opinions of their own. Eugene Volokh has contended that if such systems could produce opinions that are as persuasive as the ones authored by their average human counterparts, then we would have no reason to mistrust a robot judiciary. However, even if Volokh’s imagined robot lawyers and judges were to create outputs that invoked norms and rules and resembled value judgements, they would be no less mindless in doing so than the self-executing code considered here. As such, Spaulding and Cover would still give us strong reason to resist allowing such technologies to make legal decisions. (See Eugene Volokh, ‘Chief Justice Robots,’ (2019) 68 Duke Law Journal 1135; & Ian Kerr and Carissima Mathen, ‘Chief Justice John Roberts Is a Robot,’ (2014) <https://ssrn.com/abstract=3395885>.)

58 Diver, Digisprudence, 79ff.

59 Cover, ‘Nomos and Narrative,’ 11.

60 Ibid, 4.

61 Ibid, 5.

62 Ibid, 12-13.

63 Ibid. Cover’s paideic pattern makes possible a third argument for lawyers’ monopoly on the interpretation of law. We might think of bar associations themselves as paideic communities that steward ‘thick values’ regarding advocacy and the administration of justice. Daniel Markovits considers that this might have been the case prior to the twentieth century. However, he contends, as bar associations become more diverse and less culturally homogenous, the thick values they once shared give way to thinner ‘cosmopolitan’ values insufficient to motivate moral commitment to the profession. It is beyond the scope of this paper to settle this argument. However, resisting McGinnis and Pearce’s arguments may well demand some such counterargument about what bar associations should and must be, if not a description of their present state. (cf. Daniel Markovits, A Modern Legal Ethics: Adversary Advocacy in a Democratic Age (Princeton University Press 2008), n.b., 212-46.)

64 Cover, ‘Nomos and Narrative,’ 13.
