Responsibility gap or responsibility shift? The attribution of criminal responsibility in human–machine interaction

Pages 1142–1162 | Received 05 Dec 2022, Accepted 23 Jun 2023, Published online: 25 Jul 2023

References

  • Abbott, R. (2020). The reasonable robot: Artificial intelligence and the law. Cambridge University Press.
  • Alonso, E., & Mondragón, E. (2004). Agency, learning and animal-based reinforcement learning. In M. Nickles, M. Rovatsos, & G. Weiss (Eds.), Agents and computational autonomy: Potential, risks, and solutions (pp. 1–6). Springer.
  • Atzmüller, C., & Steiner, P. M. (2010). Experimental vignette studies in survey research. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 6(3), 128–138. https://doi.org/10.1027/1614-2241/a000014
  • Auspurg, K., & Hinz, T. (2015). Quantitative applications in the social sciences: Vol. 175. Factorial survey experiments. Sage.
  • Avila Negri, S. M. C. (2021). Robot as legal person: Electronic personhood in robotics and artificial intelligence. Frontiers in Robotics and AI, 8, Article 789327. https://doi.org/10.3389/frobt.2021.789327
  • Balkin, J. M. (2015). The path of robotics law. California Law Review Circuit, 6, 45–60.
  • Beck, S. (2016). Intelligent agents and criminal law—Negligence, diffusion of liability and electronic personhood. Robotics and Autonomous Systems, 86, 138–143. https://doi.org/10.1016/j.robot.2016.08.028
  • Beck, S. (2017). Google cars, software agents, autonomous weapons systems – New challenges for criminal law? In E. Hilgendorf, & U. Seidel (Eds.), Robotics, autonomics, and the law: Legal issues arising from the AUTONOMICS for Industry 4.0 technology programme of the German Federal Ministry for Economic Affairs and Energy (pp. 227–252). Nomos.
  • Chinen, M. A. (2016). The co-evolution of autonomous machines and legal responsibility. Virginia Journal of Law & Technology, 20, 338–393.
  • Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26(4), 2051–2068. https://doi.org/10.1007/s11948-019-00146-8
  • Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309. https://doi.org/10.1007/s10676-016-9403-3
  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
  • Friedman, B. (1995, May 7–11). “It’s the computer’s fault”: Reasoning about computers as moral agents [Conference session]. Conference Companion on Human Factors in Computing Systems (CHI ’95) (pp. 226–227).
  • Gless, S., Silverman, E., & Weigend, T. (2016). If robots cause harm, who is to blame? Self-driving cars and criminal liability. New Criminal Law Review, 19(3), 412–436. https://doi.org/10.1525/nclr.2016.19.3.412
  • Hallevy, G. (2010). The criminal liability of artificial intelligence entities – From science fiction to legal social control. Akron Intellectual Property Journal, 4(2), 171–201.
  • Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N. G., & Severson, R. L. (2012, March 5–8). Do people hold a humanoid robot morally accountable for the harm it causes? [Conference session]. 7th ACM/IEEE international conference on human-robot interaction (HRI), Boston, MA, United States.
  • Karrer, K., Glaser, C., Clemens, C., & Bruder, C. (2009). Technikaffinität erfassen – der Fragebogen TA-EG [Measuring affinity for technology: The TA-EG questionnaire]. In A. Lichtenstein, C. Stößel, & C. Clemens (Eds.), Der Mensch im Mittelpunkt technischer Systeme: 8. Berliner Werkstatt Mensch-Maschine-Systeme [The human at the center of technical systems: 8th Berlin Workshop on Human-Machine Systems] (pp. 194–201). VDI Verlag GmbH.
  • Killias, M. (2006). The opening and closing of breaches: A theory on crime waves, law creation and crime prevention. European Journal of Criminology, 3(1), 11–31. https://doi.org/10.1177/1477370806059079
  • Kirchkamp, O., & Strobel, C. (2019). Sharing responsibility with a machine. Journal of Behavioral and Experimental Economics, 80(3), 25–33. https://doi.org/10.1016/j.socec.2019.02.010
  • Lima, G., Cha, M., Jeon, C., & Park, K. S. (2021). The conflict between people’s urge to punish AI and legal systems. Frontiers in Robotics and AI, 8, Article 756242. https://doi.org/10.3389/frobt.2021.756242
  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), Article 2053951716679679. https://doi.org/10.1177/2053951716679679
  • Nof, S. Y. (2009). Automation: What it means to us around the world. In S. Y. Nof (Ed.), Springer handbook of automation (pp. 13–52). Springer.
  • Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219. https://doi.org/10.1007/s11948-017-9943-x
  • Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354
  • Rammert, W. (2012). Distributed agency and advanced technology. Or: How to analyze constellations of collective inter-agency. In J. Passoth, B. Peuker, & M. Schillmeier (Eds.), Agency without actors? New approaches to collective action (pp. 89–112). Routledge.
  • Russell, S. J., & Norvig, P. (2014). Artificial intelligence: A modern approach. Pearson Education Limited.
  • Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34(4), 1057–1084. https://doi.org/10.1007/s13347-021-00450-x
  • Santosuosso, A., & Bottalico, B. (2017). Autonomous systems and the law: Why intelligence matters. In E. Hilgendorf, & U. Seidel (Eds.), Robotics, autonomics, and the law: Legal issues arising from the AUTONOMICS for Industry 4.0 technology programme of the German Federal Ministry for Economic Affairs and Energy (pp. 27–58). Nomos.
  • Simmler, M. (2020). Strict liability and the purpose of punishment. New Criminal Law Review, 23(4), 516–564. https://doi.org/10.1525/nclr.2020.23.4.516
  • Simmler, M. (2023). Automation. In P. Caeiro, S. Gless, V. Mitsilegas, M. J. Costa, J. de Snaijer, & G. Theodorakakou (Eds.), Elgar encyclopedia of crime and criminal justice. Edward Elgar Publishing.
  • Simmler, M., & Frischknecht, R. (2020). A taxonomy of human-machine collaboration: Capturing automation and technical autonomy. AI & Society, 36(1), 239–250. https://doi.org/10.1007/s00146-020-01004-z
  • Simmler, M., & Markwalder, N. (2019). Guilty robots? – Rethinking the nature of culpability and legal personhood in an age of artificial intelligence. Criminal Law Forum, 30(1), 1–31. https://doi.org/10.1007/s10609-018-9360-0
  • Strasser, A. (2022). Distributed responsibility in human–machine interactions. AI and Ethics, 2(3), 523–532. https://doi.org/10.1007/s43681-021-00109-5
  • Suhling, S., Löbmann, R., & Greve, W. (2005). Zur Messung von Strafeinstellungen: Argumente für den Einsatz von fiktiven Fallgeschichten [On measuring attitudes toward punishment: Arguments for the use of fictitious case vignettes]. Zeitschrift für Sozialpsychologie, 36(4), 203–213. https://doi.org/10.1024/0044-3514.36.4.203
  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good: An ethical framework will help to harness the potential of AI while keeping humans in control. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991
  • Tigard, D. W. (2021). There is no techno-responsibility gap. Philosophy & Technology, 34(3), 589–607. https://doi.org/10.1007/s13347-020-00414-7
  • Vagia, M., Transeth, A. A., & Fjerdingen, S. A. (2016). A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed? Applied Ergonomics, 53(1), 190–202. https://doi.org/10.1016/j.apergo.2015.09.013