European Conference on Cognitive Ergonomics

Moral judgements of errors by AI systems and humans in civil and criminal law

Pages 1718-1728 | Received 15 Jul 2023, Accepted 04 Nov 2023, Published online: 15 Nov 2023

References

  • Ahn, Michael J., and Yu-Che Chen. 2022. “Digital Transformation toward AI-augmented Public Administration: The Perception of Government Employees and the Willingness to Use AI in Government.” Government Information Quarterly 39 (2): 101664. https://doi.org/10.1016/j.giq.2021.101664.
  • Amoroso, Daniele, and Guglielmo Tamburrini. 2021. “The Human Control Over Autonomous Robotic Systems: What Ethical and Legal Lessons for Judicial Uses of AI?” In New Pathways to Civil Justice in Europe: Challenges of Access to Justice, edited by X. Kramer, A. Biard, J. Hoevenaars, and E. Themeli, 23–42. Cham: Springer. https://doi.org/10.1007/978-3-030-66637-8_2.
  • Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. “Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And it’s Biased Against Blacks”. ProPublica. Accessed April 14, 2023. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  • Barrington, Sarah, and Hany Farid. 2023. “A Comparative Analysis of Human and AI Performance in Forensic Estimation of Physical Attributes.” Scientific Reports 13 (1): 4784. https://doi.org/10.1038/s41598-023-31821-3.
  • Barton, Dominic, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian. 2017. Artificial Intelligence: Implications for China. McKinsey & Company.
  • Basile, Fabio. 2019. “Intelligenza Artificiale e Diritto Penale: Quattro Possibili Percorsi di Indagine.” Diritto Penale e Uomo 10:1–33. https://dirittopenaleuomo.org/wp-content/uploads/2019/09/IA-diritto-penale.pdf.
  • Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research), Vol. 81, edited by Sorelle A. Friedler and Christo Wilson, 77–91. PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html.
  • Candrian, Cindy, and Anne Scherer. 2022. “Rise of the Machines: Delegating Decisions to Autonomous AI.” Computers in Human Behavior 134: 107308. https://doi.org/10.1016/j.chb.2022.107308.
  • Challen, Robert, Joshua Denny, Martin Pitt, Luke Gompels, Tom Edwards, and Krasimira Tsaneva-Atanasova. 2019. “Artificial Intelligence, Bias and Clinical Safety.” BMJ Quality & Safety 28 (3): 231–237. https://doi.org/10.1136/bmjqs-2018-008370.
  • Chander, Anupam. 2016. “The Racist Algorithm.” Michigan Law Review 115:1023. https://repository.law.umich.edu/mlr/vol115/iss6/13.
  • Chugunova, Marina, and Daniela Sele. 2022. “We and it: An Interdisciplinary Review of Experimental Evidence on How Humans Interact with Machines.” Journal of Behavioral and Experimental Economics 99:101897. https://doi.org/10.1016/j.socec.2022.101897.
  • Ciampi, Costantino. 1982. “Intelligenza Artificiale e Sistemi Informativi Giuridici.” Informatica e Diritto 8 (2): 79–91. http://www.ittig.cnr.it/EditoriaServizi/AttivitaEditoriale/InformaticaEDiritto/Ciampi.Ied.2-1982.html.
  • Coeckelbergh, Mark. 2020. “Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.” Science and Engineering Ethics 26 (4): 2051–2068. https://doi.org/10.1007/s11948-019-00146-8.
  • Davenport, Thomas H., and Rajeev Ronanki. 2018. “Artificial Intelligence for the Real World.” Harvard Business Review 96 (1): 108–116.
  • Dressel, Julia, and Hany Farid. 2018. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances 4 (1): eaao5580. https://doi.org/10.1126/sciadv.aao5580.
  • Duan, Yanqing, John S. Edwards, and Yogesh K. Dwivedi. 2019. “Artificial Intelligence for Decision Making in the Era of Big Data – Evolution, Challenges and Research Agenda.” International Journal of Information Management 48:63–71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021.
  • Durante, Massimo. 2007. “Intelligenza Artificiale. Applicazioni giuridiche.” In AA. VV. Digesto Italiano. Terza appendice di aggiornamento della IV edizione. Discipline Privatistiche, Vol. 2, 714–724. Torino: Utet.
  • Feier, Till, Jan Gogoll, and Matthias Uhl. 2021. “Hiding Behind Machines: When Blame is Shifted to Artificial Agents.” arXiv:2101.11465.
  • Feier, Till, Jan Gogoll, and Matthias Uhl. 2022. “Hiding Behind Machines: Artificial Agents May Help to Evade Punishment.” Science and Engineering Ethics 28 (2): 19. https://doi.org/10.1007/s11948-022-00372-7.
  • Fügener, Andreas, Jörn Grahl, Alok Gupta, and Wolfgang Ketter. 2022. “Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path toward Productive Delegation.” Information Systems Research 33 (2): 678–696. https://doi.org/10.2139/ssrn.3368813.
  • Gray, Heather M., Kurt Gray, and Daniel M. Wegner. 2007. “Dimensions of Mind Perception.” Science 315 (5812): 619. https://doi.org/10.1126/science.1134475.
  • Gray, Kurt, Liane Young, and Adam Waytz. 2012. “Mind Perception is the Essence of Morality.” Psychological Inquiry 23 (2): 101–124. https://doi.org/10.1080/1047840X.2012.651387.
  • Grgić-Hlača, Nina, Elissa M. Redmiles, Krishna P. Gummadi, and Adrian Weller. 2018. “Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction.” In Proceedings of the 2018 World Wide Web Conference (WWW ‘18), 903–912. Republic and Canton of Geneva, CHE: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/3178876.3186138.
  • Guidi, Stefano, Enrica Marchigiani, Sergio Roncato, and Oronzo Parlangeli. 2021. “Human Beings and Robots: Are there Any Differences in the Attribution of Punishments for the Same Crimes?” Behaviour & Information Technology 40 (5): 445–453. https://doi.org/10.1080/0144929X.2021.1905879.
  • Hidalgo, César A., Diana Orghian, Jordi Albo Canals, Filipa De Almeida, and Natalia Martin. 2021. How Humans Judge Machines. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/13373.001.0001.
  • Kehl, Danielle L., and Samuel A. Kessler. 2017. “Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing.” Responsive Communities Initiative. Cambridge, MA: Berkman Klein Center for Internet and Society, Harvard Law School. https://cyber.harvard.edu/publications/2017/07/Algorithms.
  • Kerikmäe, Tanel, and Evelin Pärn-Lee. 2021. “Legal Dilemmas of Estonian Artificial Intelligence Strategy: In between of E-society and Global Race.” AI & Society 36 (2): 561–572. https://doi.org/10.1007/s00146-020-01009-8.
  • Kirkpatrick, Keith. 2017. “It’s Not the Algorithm, it’s the Data.” Communications of the ACM 60 (2): 21–23. https://doi.org/10.1145/3022181.
  • Königs, Peter. 2022. “Artificial Intelligence and Responsibility Gaps: What is the Problem?” Ethics and Information Technology 24 (3): 36. https://doi.org/10.1007/s10676-022-09643-0.
  • Langer, Markus, Cornelius J. König, and Maria Papathanasiou. 2019. “Highly Automated Job Interviews: Acceptance under the Influence of Stakes.” International Journal of Selection and Assessment 27 (3): 217–234. https://doi.org/10.1111/ijsa.12246.
  • Levinson, Jesse, Jake Askeland, Jan Becker, Jennifer Dolson, David Held, Soeren Kammel, J. Zico Kolter, et al. 2011. “Towards Fully Autonomous Driving: Systems and Algorithms.” In 2011 IEEE Intelligent Vehicles Symposium (IV), 163–168. IEEE. https://doi.org/10.1109/IVS.2011.5940562.
  • Lima, Gabriel, Nina Grgić-Hlača, and Meeyoung Cha. 2021. “Human Perceptions on Moral Responsibility of AI: A Case Study in AI-assisted Bail Decision-making.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ‘21). Article 235. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445260.
  • Lucatuorto, Pier Luigi M. 2006. “Artificial Intelligence and Law: Judicial Applications of Expert Systems (Intelligenza Artificiale e Diritto: Le Applicazioni Giuridiche dei Sistemi Esperti).” Cyberspace and Law (Ciberspazio e Diritto) 7 (2): 219–242. https://ssrn.com/abstract=1158387.
  • Luciani, Massimo. 2018. “La Decisione Giudiziaria Robotica.” Rivista AIC 3 (3): 22. https://www.rivistaaic.it/images/rivista/pdf/3_2018_Luciani.pdf.
  • Malle, Bertram F., Stuti Thapa Magar, and Matthias Scheutz. 2019. “AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma.” In Robotics and Well-Being, edited by Maria Isabel Aldinhas Ferreira, João Silva Sequeira, Gurvinder Singh Virk, Mohammad Osman Tokhi, and Endre E. Kadar, 111–133. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-12524-0_11.
  • Matthias, Andreas. 2004. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology 6 (3): 175–183. https://doi.org/10.1007/s10676-004-3422-1.
  • Merritt, Tim R., Kian Boon Tan, Christopher Ong, Aswin Thomas, Teong Leong Chuah, and Kevin McGee. 2011. “Are Artificial Team-mates Scapegoats in Computer Games?” In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (Hangzhou, People’s Republic of China) (CSCW ‘11), 685–688. New York, NY: Association for Computing Machinery. https://doi.org/10.1145/1958824.1958945.
  • Munch, Lauritz, Jakob Mainz, and Jens Christian Bjerring. 2023. “The Value of Responsibility Gaps in Algorithmic Decision-making.” Ethics and Information Technology 25 (1): 21. https://doi.org/10.1007/s10676-023-09699-6.
  • Navarro, Jordan, François Osiurak, Sandrine Ha, Guillaume Communay, Eleonore Ferrier-Barbut, Arnaud Coatrine, Vivien Gaujoux, et al. 2022. “Development of the Smart Tools Proneness Questionnaire (STP-Q): An Instrument to Assess the Individual Propensity to Use Smart Tools.” Ergonomics 65 (12): 1639–1658. https://doi.org/10.1080/00140139.2022.2048895.
  • Parlangeli, Oronzo, Maria C. Caratozzolo, and Stefano Guidi. 2014. “Multitasking and Mentalizing Machines: How the Workload can have Influence on the System Comprehension.” In Engineering Psychology and Cognitive Ergonomics, 11th International Conference, EPCE, Held as Part of HCI International, Heraklion, Crete, edited by D. Harris, 50–58. Cham: Springer. https://doi.org/10.1007/978-3-319-07515-0_6.
  • Santoni de Sio, Filippo, and Giulio Mecacci. 2021. “Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address Them.” Philosophy & Technology 34 (4): 1057–1084. https://doi.org/10.1007/s13347-021-00450-x.
  • Shank, Daniel B., Alyssa DeSanti, and Timothy Maninger. 2019. “When are Artificial Intelligence versus Human Agents Faulted for Wrongdoing? Moral Attributions after Individual and Joint Decisions.” Information, Communication & Society 22 (5): 648–663. https://doi.org/10.1080/1369118X.2019.1568515.
  • State v. Loomis. 2017. “Wisconsin Supreme Court Requires Warning before Use of Algorithmic Risk Assessments in Sentencing.” Harvard Law Review, Criminal Law 130 (5): 8. https://harvardlawreview.org/print/vol-130/state-v-loomis/.
  • Surden, Harry. 2019. “Artificial Intelligence and Law: An Overview.” Georgia State University Law Review 35:19–22. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3411869.
  • Susskind, Richard. 2012a. “Technology and the Law.” In A Companion to the Philosophy of Technology, edited by J. Friis, S. Pedersen, and V. Hendricks. London: Wiley-Blackwell. https://philpapers.org/rec/SUSTAT-2.
  • Susskind, Richard. 2012b. “Artificial Intelligence, Expert Systems and Law.” The Denning Law Journal 5:105–116. https://doi.org/10.5750/dlj.v5i1.196.
  • Tischbirek, Alexander. 2020. “Artificial Intelligence and Discrimination: Discriminating Against Discriminatory Systems.” In Regulating Artificial Intelligence, edited by T. Wischmeyer and T. Rademacher, 103–121. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-32361-5_5.
  • Topol, Eric J. 2019. “High-performance Medicine: The Convergence of Human and Artificial Intelligence.” Nature Medicine 25 (1): 44–56. https://doi.org/10.1038/s41591-018-0300-7.
  • Wang, Bingcheng, Pei-Luen Patrick Rau, and Tianyi Yuan. 2023. “Measuring User Competence in Using Artificial Intelligence: Validity and Reliability of Artificial Intelligence Literacy Scale.” Behaviour & Information Technology 42 (9): 1324–1337. https://doi.org/10.1080/0144929X.2022.2072768.
  • Wilson, Abigail, Courtney Stefanik, and Daniel B. Shank. 2022. “How do People Judge the Immorality of Artificial Intelligence versus Humans Committing Moral Wrongs in Real-World Situations?” Computers in Human Behavior Reports 8:100229. https://doi.org/10.1016/j.chbr.2022.100229.
