Inquiry: An Interdisciplinary Journal of Philosophy

Research Article

How to deal with risks of AI suffering

Received 28 Feb 2023, Accepted 11 Jul 2023, Published online: 22 Jul 2023

References

  • Agarwal, A., and S. Edelman. 2020. “Functionally Effective Conscious AI Without Suffering.” Journal of Artificial Intelligence and Consciousness 7 (1): 39–50. doi:10.1142/S2705078520300030.
  • Alexander, S. 2022. “Why Not Slow AI Progress?” Astral Codex Ten (Substack newsletter), August 8. https://astralcodexten.substack.com/p/why-not-slow-ai-progress.
  • Alexandrova, A. 2017. A Philosophy for the Science of Well-Being (Vol. 1). Oxford University Press. doi:10.1093/oso/9780199300518.001.0001.
  • Armstrong, S., N. Bostrom, and C. Shulman. 2016. “Racing to the Precipice: A Model of Artificial Intelligence Development.” AI & SOCIETY 31 (2): 201–206. doi:10.1007/s00146-015-0590-y.
  • Bayne, T. 2018. “On the Axiomatic Foundations of the Integrated Information Theory of Consciousness.” Neuroscience of Consciousness 2018 (1), doi:10.1093/nc/niy007.
  • Birch, J. 2017. “Animal Sentience and the Precautionary Principle.” Animal Sentience 2 (16), doi:10.51291/2377-7478.1200.
  • Birch, J. 2022. “The Search for Invertebrate Consciousness.” Noûs 56 (1): 133–153. doi:10.1111/nous.12351.
  • Birch, J., and K. Andrews. 2023. “What Has Feelings?” Aeon. https://aeon.co/essays/to-understand-ai-sentience-first-understand-it-in-animals.
  • Birch, J., and H. Browning. 2021. “Neural Organoids and the Precautionary Principle.” The American Journal of Bioethics 21 (1): 56–58. doi:10.1080/15265161.2020.1845858.
  • Birch, J., S. Ginsburg, and E. Jablonka. 2020. “Unlimited Associative Learning and the Origins of Consciousness: A Primer and Some Predictions.” Biology & Philosophy 35 (6): 56. doi:10.1007/s10539-020-09772-0.
  • Block, N. 1981. “Psychologism and Behaviorism.” The Philosophical Review 90 (1): 5. doi:10.2307/2184371.
  • Bolte, L., T. Vandemeulebroucke, and A. van Wynsberghe. 2022. “From an Ethics of Carefulness to an Ethics of Desirability: Going Beyond Current Ethics Approaches to Sustainable AI.” Sustainability 14 (8): Article 8. doi:10.3390/su14084472.
  • Cave, S., and S. S. ÓhÉigeartaigh. 2018. “An AI Race for Strategic Advantage: Rhetoric and Risks.” Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 36–40. doi:10.1145/3278721.3278780.
  • Chalmers, D. J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
  • Chalmers, D. J. 2023. “Could a Large Language Model Be Conscious?” arXiv:2303.07103. doi:10.48550/arXiv.2303.07103.
  • Chan, K. M. A. 2011. “Ethical Extensionism Under Uncertainty of Sentience: Duties to Non-Human Organisms Without Drawing a Line.” Environmental Values 20 (3): 323–346. doi:10.3197/096327111X13077055165983.
  • Chang, H. 2004. Inventing Temperature: Measurement and Scientific Progress. Oxford University Press. doi:10.1093/0195171276.001.0001.
  • Chappell, R. Y. 2022. “Pandemic Ethics and Status Quo Risk.” Public Health Ethics 15 (1): 64–73. doi:10.1093/phe/phab031.
  • Christiano, T., and S. Bajaj. 2022. “Democracy.” In The Stanford Encyclopedia of Philosophy (Spring 2022), edited by E. N. Zalta. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2022/entries/democracy/.
  • Coelho Mollo, D. 2022. “Intelligent Behaviour.” Erkenntnis, 1–18. doi:10.1007/s10670-022-00552-8.
  • Crump, A., H. Browning, A. Schnell, C. Burn, and J. Birch. 2022. “Sentience in Decapod Crustaceans: A General Framework and Review of the Evidence.” Animal Sentience 7 (32), doi:10.51291/2377-7478.1691.
  • Danaher, J. 2020. “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.” Science and Engineering Ethics 26 (4): 2023–2049. doi:10.1007/s11948-019-00119-x.
  • Danaher, J. Forthcoming. “Moral Uncertainty and Our Relationships with Unknown Minds.” Cambridge Quarterly of Healthcare Ethics. https://philarchive.org/rec/DANMUA-2.
  • Dennett, D. C. 1995. “Animal Consciousness: What Matters and Why?” Social Research: An International Quarterly 62: 691–710.
  • Doerig, A., A. Schurger, and M. H. Herzog. 2021. “Hard Criteria for Empirical Theories of Consciousness.” Cognitive Neuroscience 12 (2): 41–62. doi:10.1080/17588928.2020.1772214.
  • Dung, L. 2022a. “Assessing Tests of Animal Consciousness.” Consciousness and Cognition 105: 103410. doi:10.1016/j.concog.2022.103410.
  • Dung, L. 2022b. “Does Illusionism Imply Skepticism of Animal Consciousness?” Synthese 200 (3): 238. doi:10.1007/s11229-022-03710-1.
  • Dung, L. 2022c. “Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed.” Science and Engineering Ethics 28 (6): 51. doi:10.1007/s11948-022-00408-y.
  • Dung, L. Forthcoming. “Preserving the Normative Significance of Sentience.” Journal of Consciousness Studies.
  • Godfrey-Smith, P. 2016. “Mind, Matter, and Metabolism.” Journal of Philosophy 113 (10): 481–506. doi:10.5840/jphil20161131034.
  • Grace, K. 2022. “Let’s Think About Slowing Down AI.” EA Forum. https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1.
  • Graziano, M. S. A., A. Guterstam, B. J. Bio, and A. I. Wilterson. 2020. “Toward a Standard Model of Consciousness: Reconciling the Attention Schema, Global Workspace, Higher-Order Thought, and Illusionist Theories.” Cognitive Neuropsychology 37 (3–4): 155–172. doi:10.1080/02643294.2019.1670630.
  • Gunkel, D. J. 2018. Robot Rights. The MIT Press. doi:10.7551/mitpress/11444.001.0001.
  • Habermas, J. 1984. The Theory of Communicative Action. Vol. I: Reason and the Rationalization of Society. Beacon.
  • Habermas, J. 1987. The Theory of Communicative Action. Vol. II: Lifeworld and System. Beacon.
  • Herzog, M. H., M. Esfeld, and W. Gerstner. 2007. “Consciousness & the Small Network Argument.” Neural Networks 20 (9): 1054–1056. doi:10.1016/j.neunet.2007.09.001.
  • Irvine, E. 2012. Consciousness as a Scientific Concept: A Philosophy of Science Perspective. Dordrecht: Springer.
  • Kammerer, F. 2019. “The Normative Challenge for Illusionist Views of Consciousness.” Ergo, an Open Access Journal of Philosophy 6. doi:10.3998/ergo.12405314.0006.032.
  • Kammerer, F. 2022. “Ethics Without Sentience. Facing Up to the Probable Insignificance of Phenomenal Consciousness.” Journal of Consciousness Studies 29 (3–4): 180–204. doi:10.53765/20512201.29.3.180.
  • Knutsson, S., and C. Munthe. 2017. “A Virtue of Precaution Regarding the Moral Status of Animals with Uncertain Sentience.” Journal of Agricultural and Environmental Ethics 30 (2): 213–224. doi:10.1007/s10806-017-9662-y.
  • Kriegel, U. 2019. “The Value of Consciousness.” Analysis 79 (3): 503–520. doi:10.1093/analys/anz045.
  • Ladak, A. 2023. “What Would Qualify an Artificial Intelligence for Moral Standing?” AI and Ethics. doi:10.1007/s43681-023-00260-1.
  • Lau, H. 2022. In Consciousness we Trust: The Cognitive Neuroscience of Subjective Experience. Oxford: Oxford University Press.
  • MacAskill, W., K. Bykvist, and T. Ord. 2020. Moral Uncertainty. 1st ed. Oxford University Press. doi:10.1093/oso/9780198722274.001.0001.
  • Mandelbaum, E. 2022. “Everything and More: The Prospects of Whole Brain Emulation.” The Journal of Philosophy 119 (8): 444–459. doi:10.5840/jphil2022119830.
  • Massimini, M., and G. Tononi. 2018. Sizing up Consciousness (Vol. 1). Oxford University Press. doi:10.1093/oso/9780198728443.001.0001.
  • Metzinger, T. 2021. “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology.” Journal of Artificial Intelligence and Consciousness 8 (1): 43–66. doi:10.1142/S270507852150003X.
  • Müller, V. C. 2016. “Autonomous Killer Robots Are Probably Good News.” In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the use of Remotely Controlled Weapons, edited by E. D. Nucci and F. S. de Sio, 67–81. London: Ashgate. https://philarchive.org/rec/MLLAKR.
  • Müller, V. C. 2021. “Is it Time for Robot Rights? Moral Status in Artificial Entities.” Ethics and Information Technology 23 (4): 579–587. doi:10.1007/s10676-021-09596-w.
  • Nagel, T. 1974. “What is It Like to Be a Bat?” The Philosophical Review 83 (4): 435–450. doi:10.2307/2183914.
  • Negro, N. 2020. “Phenomenology-first Versus Third-Person Approaches in the Science of Consciousness: The Case of the Integrated Information Theory and the Unfolding Argument.” Phenomenology and the Cognitive Sciences 19 (5): 979–996. doi:10.1007/s11097-020-09681-3.
  • Niikawa, T., Y. Hayashi, J. Shepherd, and T. Sawai. 2022. “Human Brain Organoids and Consciousness.” Neuroethics 15 (1): 5. doi:10.1007/s12152-022-09483-1.
  • Nordgren, A. 2023. “Pandemics and the Precautionary Principle: An Analysis Taking the Swedish Corona Commission’s Report as a Point of Departure.” Medicine, Health Care and Philosophy 26: 163–173. doi:10.1007/s11019-023-10139-x.
  • Nussbaum, M. C. 2007. Frontiers of Justice: Disability, Nationality, Species Membership. Cambridge, MA: Harvard University Press.
  • Prinz, J. 2003. “Level-Headed Mysterianism and Artificial Experience.” Journal of Consciousness Studies 10 (4–5): 111–132.
  • Saad, B., and A. Bradley. 2022. “Digital Suffering: Why It’s a Problem and How to Prevent It.” Inquiry, 1–36. doi:10.1080/0020174X.2022.2144442.
  • Sandberg, A. 2013. “Feasibility of Whole Brain Emulation.” In Philosophy and Theory of Artificial Intelligence, edited by V. C. Müller, 251–264. Springer. doi:10.1007/978-3-642-31674-6_19.
  • Schukraft, J. 2020. “Comparisons of Capacity for Welfare and Moral Status Across Species.” Rethink Priorities. https://rethinkpriorities.org/publications/comparisons-of-capacity-for-welfare-and-moral-status-across-species.
  • Schwitzgebel, E. 2020. “Is There Something It’s Like to Be a Garden Snail?” Philosophical Topics 48 (1): 39–63. doi:10.5840/philtopics20204813.
  • Schwitzgebel, E., and M. Garza. 2015. “A Defense of the Rights of Artificial Intelligences.” Midwest Studies in Philosophy 39 (1): 98–119. doi:10.1111/misp.12032.
  • Searle, J. 2017. “Biological Naturalism.” In The Blackwell Companion to Consciousness. 1st ed., 327–336, edited by S. Schneider and M. Velmans. Wiley. doi:10.1002/9781119132363.ch23.
  • Seidenfeld, T. 1985. “Calibration, Coherence, and Scoring Rules.” Philosophy of Science 52 (2): 274–294. doi:10.1086/289244.
  • Seth, A. K., and T. Bayne. 2022. “Theories of Consciousness.” Nature Reviews Neuroscience 23 (7): Article 7. doi:10.1038/s41583-022-00587-4.
  • Shevlin, H. 2020. “General Intelligence: An Ecumenical Heuristic for Artificial Consciousness Research?” Journal of Artificial Intelligence and Consciousness, doi:10.17863/CAM.52059.
  • Shevlin, H. 2021a. “How Could We Know When a Robot was a Moral Patient?” Cambridge Quarterly of Healthcare Ethics 30 (3): 459–471. doi:10.1017/S0963180120001012.
  • Shevlin, H. 2021b. “Non-Human Consciousness and the Specificity Problem: A Modest Theoretical Proposal.” Mind & Language 36 (2): 297–314. doi:10.1111/mila.12338.
  • Shriver, A. J. 2020. “The Role of Neuroscience in Precise, Precautionary, and Probabilistic Accounts of Sentience.” In Neuroethics and Nonhuman Animals, edited by L. S. M. Johnson, A. Fenton, and A. Shriver, 221–233. Springer International Publishing. doi:10.1007/978-3-030-31011-0_13.
  • Shulman, C., and N. Bostrom. 2021. “Sharing the World with Digital Minds.” In Rethinking Moral Status, edited by S. Clarke, H. Zohny, and J. Savulescu, 306–326. Oxford University Press. doi:10.1093/oso/9780192894076.003.0018.
  • Singer, P. 2011. Practical Ethics. 3rd ed. Cambridge University Press. doi:10.1017/CBO9780511975950.
  • Steel, D. 2014. Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy. Cambridge University Press. doi:10.1017/CBO9781139939652.
  • Steele, K., and H. O. Stefánsson. 2020. “Decision Theory.” In The Stanford Encyclopedia of Philosophy (Winter 2020), edited by E. N. Zalta. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2020/entries/decision-theory/.
  • Tetlock, P. E., and D. Gardner. 2015. Superforecasting: The Art and Science of Prediction. New York: Crown.
  • Tomasik, B. 2014. “Do Artificial Reinforcement-Learning Agents Matter Morally?” arXiv:1410.8233 [cs]. http://arxiv.org/abs/1410.8233.
  • Tononi, G., and C. Koch. 2015. “Consciousness: Here, There and Everywhere?” Philosophical Transactions of the Royal Society B: Biological Sciences 370 (1668), doi:10.1098/rstb.2014.0167.
  • Tye, M. 2017. Tense Bees and Shell-Shocked Crabs: Are Animals Conscious? Oxford University Press. doi:10.1093/acprof:oso/9780190278014.001.0001.
  • Višak, T. 2022. Capacity for Welfare Across Species. Oxford: Oxford University Press.
  • Wilkinson, H. 2022. “In Defense of Fanaticism.” Ethics 132 (2): 445–477. doi:10.1086/716869.