Social Epistemology
A Journal of Knowledge, Culture and Policy
Volume 38, 2024 - Issue 4
Research Article

AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony

Pages 476–490 | Received 07 Aug 2023, Accepted 06 Feb 2024, Published online: 06 Mar 2024

References

  • Albrecht, J., E. Kitanidis, and A. J. Fetterman. 2022. “Despite ‘Super-Human’ Performance, Current LLMs are Unsuited for Decisions About Ethics and Safety.” ML Safety Workshop, 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv preprint arXiv:2212.06295.
  • Alfano, M., and G. Skorburg. 2018. “Extended Knowledge, the Recognition Heuristic, and Epistemic Injustice.” In Extended Epistemology, edited by J. A. Carter, A. Clark, J. Kallestrup, S. O. Palermos, and D. Pritchard, 239–265. Oxford: Oxford University Press.
  • Alvarado, R. 2023. “AI as an Epistemic Technology.” Science and Engineering Ethics 29 (5). https://doi.org/10.1007/s11948-023-00451-3.
  • Anscombe, G. E. M. 1963. Intention. 2nd ed. Oxford: Blackwell.
  • Baier, A. 1986. “Trust and Antitrust.” Ethics 96 (2): 231–260. https://doi.org/10.1086/292745.
  • Behdadi, D., and C. Munthe. 2020. “A Normative Approach to Artificial Moral Agency.” Minds and Machines 30 (2): 195–218. https://doi.org/10.1007/s11023-020-09525-8.
  • Bendel, O. 2017. “The Synthetization of Human Voices.” AI & Society 34 (1): 83–89. https://doi.org/10.1007/s00146-017-0748-x.
  • Bender, E. M. 2023. “Policy Makers: Please Don’t Fall for the Distractions of #AIhype.” https://medium.com/@emilymenonbender/policy-makers-please-dont-fall-for-the-distractions-of-aihype-e03fa80ddbf1.
  • Bender, E. M., T. Gebru, A. McMillan-Major, and S. Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 3–10, 2021, Virtual Event, Canada, 610–623.
  • Bloor, D. 1999. “Anti-Latour.” Studies in History and Philosophy of Science Part A 30 (1): 81–112. https://doi.org/10.1016/S0039-3681(98)00038-7.
  • Braun, M., H. Bleher, and P. Hummel. 2021. “A Leap of Faith: Is There a Formula for ‘Trustworthy’ AI?” The Hastings Center Report 51 (3): 17–22. https://doi.org/10.1002/hast.1207.
  • Bryson, J. 2022. “One Day, AI Will Seem as Human as Anyone. What Then?” Wired, Accessed June 26, 2022. https://www.wired.com/story/lamda-sentience-psychology-ethics-policy/.
  • Buchanan, B., A. Lohn, and M. Musser. 2021. “Truth, Lies, and Automation: How Language Models Could Change Disinformation.” Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/truth-lies-and-automation.
  • Burrell, J. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 205395171562251. https://doi.org/10.1177/2053951715622512.
  • Center for AI Safety. 2023. “Statement on AI Risk.” https://www.safe.ai/statement-on-ai-risk.
  • Clément, F. 2010. “To Trust or Not to Trust? Children’s Social Epistemology.” Review of Philosophy and Psychology 1 (4): 531–549. https://doi.org/10.1007/s13164-010-0022-3.
  • Coady, C. A. J. 1992. Testimony: A Philosophical Study. Oxford: Oxford University Press.
  • Coeckelbergh, M. 2012. “Can We Trust Robots?” Ethics and Information Technology 14 (1): 53–60. https://doi.org/10.1007/s10676-011-9279-1.
  • Collins, H. M. 2010. “Humans Not Instruments.” Spontaneous Generations: A Journal for the History and Philosophy of Science 4 (1): 138–147. https://doi.org/10.4245/sponge.v4i1.11354.
  • Collins, H. M., and M. Kusch. 1998. The Shape of Actions: What Humans and Machines Can Do. Cambridge, MA: MIT Press.
  • DAIR. 2023. “Statement from the Listed Authors of Stochastic Parrots on the ‘AI Pause’ Letter.” https://www.dair-institute.org/blog/letter-statement-March2023/.
  • Danaher, J. 2020. “Robot Betrayal: A Guide to the Ethics of Robotic Deception.” Ethics and Information Technology 22 (2): 117–128. https://doi.org/10.1007/s10676-019-09520-3.
  • Descartes, R. 1985. “Discourse on the Method.” In The Philosophical Writings of Descartes, Vol. 1, edited by J. Cottingham, R. Stoothoff, and D. Murdoch, 111–151. Cambridge, UK: Cambridge University Press.
  • D-ID. 2023. “Experience the Future of Conversational AI with D-ID.” https://www.d-id.com.
  • Faulkner, P. R. 2011. Knowledge on Trust. Oxford: Oxford University Press.
  • Firt, E. 2023. “Ought We Align the Values of Artificial Moral Agents?” AI and Ethics. https://doi.org/10.1007/s43681-023-00264-x.
  • Floridi, L. 2023. “AI as Agency without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models.” Philosophy & Technology 36 (1): 15. https://doi.org/10.1007/s13347-023-00621-y.
  • Floridi, L., and J. W. Sanders. 2004. “On the Morality of Artificial Agents.” Minds and Machines 14 (3): 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d.
  • Freiman, O. 2014. “Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust.” The International Review of Information Ethics 22: 6–22. https://doi.org/10.29173/irie124.
  • Freiman, O. 2021. The Role of Knowledge in the Formation of Trust in Technologies. PhD Dissertation, Bar-Ilan University.
  • Freiman, O. 2022. “Making Sense of the Conceptual Nonsense ‘Trustworthy AI’.” AI and Ethics 3 (4): 1–10. https://doi.org/10.1007/s43681-022-00241-w.
  • Freiman, O. 2023. “Instrument-Based Beliefs, Testimony-Based Beliefs, and Technology-Based Beliefs: Analysis of Beliefs Acquired from a Conversational AI.” Episteme 1–17. https://doi.org/10.1017/epi.2023.12.
  • Freiman, O., and N. Geslevich Packin. 2022. “Artificial Intelligence Products Cannot Be Moral Agents.” Toronto Star. Accessed August 7, 2022. https://www.thestar.com/opinion/contributors/2022/08/07/artificial-intelligence-products-cannot-be-moral-agents-the-tech-industry-must-be-held-responsible-for-what-it-develops.html.
  • Freiman, O., and B. Miller. 2021. “Can Artificial Entities Assert?” In Oxford Handbook of Assertion, edited by S. C. Goldberg, 415–434. Oxford: Oxford University Press.
  • Fricker, E. 2002. “Trusting Others in the Sciences: A Priori or Empirical Warrant?” Studies in History & Philosophy of Science Part A 33 (2): 373–383. https://doi.org/10.1016/S0039-3681(02)00006-7.
  • Fricker, E. 2015. “How to Make Invidious Distinctions Amongst Reliable Testifiers.” Episteme 12 (2): 173–202. https://doi.org/10.1017/epi.2015.6.
  • Fricker, M. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
  • Frost-Arnold, K. 2019. “Epistemic Injustice and the Challenges of Online Moderation.” Invited Keynote Lecture at Knowledge in a Digital World: Epistemic Injustice in Bias and Other Challenges in the Age of Artificial Intelligence, Canadian Society for Epistemology, Montreal, 14–15 November 2019.
  • Future of Life Institute. 2023. “Pause Giant AI Experiments: An Open Letter.” https://futureoflife.org/open-letter/pause-giant-ai-experiments.
  • Gelfert, A. 2014. A Critical Introduction to Testimony. London: Bloomsbury Publishing.
  • Gelfert, A. 2018. “Testimony.” In Routledge Encyclopedia of Philosophy. London: Taylor and Francis. https://doi.org/10.4324/0123456789-P049-2.
  • Gerken, M. 2022. Scientific Testimony: Its Roles in Science and Society. Oxford: Oxford University Press.
  • Goldberg, S. C. 2012. “Epistemic Extendedness, Testimony, and the Epistemology of Instrument-Based Belief.” Philosophical Explorations: An International Journal for the Philosophy of Mind and Action 15 (2): 181–197. https://doi.org/10.1080/13869795.2012.670719.
  • Goldberg, S. C. 2016. “Epistemically Engineered Environments.” Synthese 197 (7): 2783–2802. https://doi.org/10.1007/s11229-017-1413-0.
  • Google AI Blog. 2018. “Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone.” Accessed May 8, 2018. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html.
  • Graham, P. J. 1997. “What is Testimony?” The Philosophical Quarterly 47 (187): 227–232. https://doi.org/10.1111/1467-9213.00057.
  • Green, C. R. 2008. “Epistemology of Testimony.” In Internet Encyclopedia of Philosophy. https://iep.utm.edu/ep-testi/.
  • Grice, H. P. 1971. “Intention and Uncertainty.” Proceedings of the British Academy 57: 263–279.
  • Hao, K., and A. P. Hernández. 2022. “How the AI Industry Profits from Catastrophe.” MIT Technology Review, Accessed April 20, 2022. https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/.
  • Harrer, S. 2023. “Attention is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine.” eBioMedicine 90: 104512. https://doi.org/10.1016/j.ebiom.2023.104512.
  • Hatherley, J. J. 2020. “Limits of Trust in Medical AI.” Journal of Medical Ethics 46 (7): 478–481. https://doi.org/10.1136/medethics-2019-105935.
  • Hurst, L. 2022. “ChatGPT: Why the Human-Like AI Chatbot Suddenly Has Everyone Talking.” Euronews, Accessed December 14, 2022. https://www.euronews.com/next/2022/12/14/chatgpt-why-the-human-like-ai-chatbot-suddenly-got-everyone-talking.
  • Jones, K. 1996. “Trust as an Affective Attitude.” Ethics 107 (1): 4–25. https://doi.org/10.1086/233694.
  • Kusch, M. 2002. Knowledge by Agreement: The Programme of Communitarian Epistemology. Oxford: Oxford University Press.
  • Lackey, J. 2006. “It Takes Two to Tango: Beyond Reductionism and Non-Reductionism in the Epistemology of Testimony.” In The Epistemology of Testimony, edited by J. Lackey and E. Sosa, 160–189. Oxford: Oxford University Press.
  • Lackey, J. 2008. Learning from Words. Oxford: Oxford University Press.
  • Lackey, J., and E. Sosa, Eds. 2006. The Epistemology of Testimony. Oxford: Oxford University Press.
  • Latour, B. 1988. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.
  • Latour, B., and S. Woolgar. 1986. Laboratory Life. Princeton: Princeton University Press.
  • Lee, K. M. 2008. “Media Equation Theory.” In The International Encyclopedia of Communication, edited by W. Donsbach. Wiley Publishing. https://doi.org/10.1002/9781405186407.wbiecm035.
  • Licklider, J. C. R., and R. W. Taylor. 1968. “The Computer as a Communication Device.” Science and Technology 76 (2): 1–3.
  • Lin, Z., H. Akin, R. Rao, B. Hie, Z. Zhu, W. Lu, N. Smetanin, et al. 2023. “Evolutionary-Scale Prediction of Atomic-Level Protein Structure with a Language Model.” Science 379 (6637): 1123–1130. https://doi.org/10.1126/science.ade2574.
  • Lopez-Lira, A., and Y. Tang. 2023. “Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models.” SSRN Electronic Journal; arXiv preprint arXiv:2304.07619. https://doi.org/10.2139/ssrn.4412788.
  • Metzinger, T. 2019. “Ethics Washing Made in Europe.” Der Tagesspiegel. https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html.
  • Miller, B., and O. Freiman. 2021. “Trust and Distributed Epistemic Labor.” In The Routledge Handbook of Trust and Philosophy, edited by J. Simon, 341–353. New York: Routledge.
  • Miller, B., and I. Record. 2013. “Justified Belief in a Digital Age: On the Epistemic Implications of Secret Internet Technologies.” Episteme 10 (2): 117–134. https://doi.org/10.1017/epi.2013.11.
  • Mökander, J., J. Schuett, H. R. Kirk, and L. Floridi. 2023. “Auditing Large Language Models: A Three-Layered Approach.” AI and Ethics 1–31. https://doi.org/10.1007/s43681-023-00289-2.
  • Mollman, S. 2022. “ChatGPT Gained 1 Million Users in Under a Week. Here’s Why the AI Chatbot is Primed to Disrupt Search as We Know It.” Yahoo! Finance, December 9, 2022. https://finance.yahoo.com/news/chatgpt-gained-1-million-followers-224523258.html.
  • Moon, Y., and C. Nass. 1996. “How ‘Real’ are Computer Personalities? Psychological Responses to Personality Types in Human-Computer Interaction.” Communication Research 23 (6): 651–674. https://doi.org/10.1177/009365096023006002.
  • Müller, V. C. 2020. “Ethics of Artificial Intelligence and Robotics.” In Stanford Encyclopedia of Philosophy, edited by E. Zalta. Stanford University.
  • Neges, K. S. 2018. Instrumentation. A Study in Social Epistemology. PhD Dissertation, University of Vienna.
  • Nickel, P. J. 2013a. “Artificial Speech and Its Authors.” Minds and Machines 23 (4): 489–502. https://doi.org/10.1007/s11023-013-9303-9.
  • Nickel, P. J. 2013b. “Trust in Technological Systems.” In Norms in Technology, edited by M. J. De Vries, S. O. Hansson, and A. W. Meijers, 223–237. Dordrecht: Springer.
  • Nieva, R. 2018. “Exclusive: Google’s Duplex Could Make Assistant the Most Lifelike AI Yet.” CNET News, Accessed May 9, 2018. https://www.cnet.com/news/google-assistant-duplex-at-io-could-become-the-most-lifelike-ai-voice-assistant-yet.
  • Nissenbaum, H. 1996. “Accountability in a Computerized Society.” Science and Engineering Ethics 2 (1): 25–42. https://doi.org/10.1007/BF02639315.
  • OpenAI. 2022. “ChatGPT: Optimizing Language Models for Dialogue.” Accessed November 30, 2022. https://openai.com/blog/chatgpt/.
  • Origgi, G. 2008. Qu’est-ce que la confiance? Paris: Vrin.
  • Pasquale, F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
  • Pitt, J. C. 1983. “The Epistemological Engine.” Philosophica 32 (2): 77–95. https://doi.org/10.21825/philosophica.82574.
  • Pitt, J. C. 2010. “It’s Not About Technology.” Knowledge, Technology & Policy 23 (3–4): 445–454. https://doi.org/10.1007/s12130-010-9125-5.
  • Pritchard, D. 2004. “The Epistemology of Testimony.” Philosophical Issues 14 (1): 326–348. https://doi.org/10.1111/j.1533-6077.2004.00033.x.
  • Ruane, E., A. Birhane, and A. Ventresque. 2019. “Conversational AI: Social and Ethical Considerations.” AICS 2019, 104–115. https://ceur-ws.org/Vol-2563/aics_12.pdf.
  • Salinga, M., and M. Wuttig. 2011. “Phase-Change Memories on a Diet.” Science 332 (6029): 543. https://doi.org/10.1126/science.1204093.
  • Setiya, K. 2018. “Intention.” In The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta. https://plato.stanford.edu/entries/intention.
  • Shanahan, M. 2022. “Talking About Large Language Models.” Communications of the ACM 67 (2): 68–79. https://doi.org/10.1145/3624724.
  • Shapin, S. 1994. A Social History of Truth. Chicago: University of Chicago Press.
  • Shapin, S., and S. Schaffer. 1985. Leviathan and the Air-Pump. Princeton: Princeton University Press.
  • Simon, J. 2015. “Distributed Epistemic Responsibility in a Hyperconnected Era.” In The Onlife Manifesto, edited by L. Floridi, 145–159, Cham: Springer International Publishing.
  • Slota, S. C., K. R. Fleischmann, S. Greenberg, N. Verma, B. Cummings, L. Li, and C. Shenefiel. 2021. “Many Hands Make Many Fingers to Point: Challenges in Creating Accountable AI.” AI & Society 38 (4): 1–13. https://doi.org/10.1007/s00146-021-01302-0.
  • Strasser, A. 2023. “On Pitfalls (And Advantages) of Sophisticated Large Language Models.” In Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends, edited by J. Casas-Roma, S. Caballe, and J. Conesa. Elsevier. https://doi.org/10.48550/arXiv.2303.17511.
  • Tiku, N. 2022. “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” The Washington Post, Accessed June 11, 2022. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.
  • Turing, A. M. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433–460. https://doi.org/10.1093/mind/LIX.236.433.
  • Véliz, C. 2023. “Chatbots Shouldn’t Use Emojis.” Nature 615 (7952): 375. https://doi.org/10.1038/d41586-023-00758-y.
  • Vipra, J., and A. Korinek. 2023. “Market Concentration Implications of Foundation Models: The Invisible Hand of ChatGPT.” Brookings Institution, Accessed September 7, 2023. https://www.brookings.edu/articles/market-concentration-implications-of-foundation-models-the-invisible-hand-of-chatgpt.
  • VoiceBot AI. 2019. Voice Assistant Consumer Adoption Report 2018. https://voicebot.ai/voice-assistant-consumer-adoption-report-2018.
  • Waddell, K. 2019. “Defending Against Audio Deepfakes Before It’s Too Late.” Axios, Accessed April 3, 2019. https://www.axios.com/deepfake-audio-ai-impersonators-f736a8fc-162e-47f0-a582-e5eb8b8262ff.html.
  • Wallach, W., and C. Allen. 2008. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.
  • Wheeler, B. 2020. “Reliabilism and the Testimony of Robots.” Techné: Research in Philosophy and Technology 24 (3): 332–356. https://doi.org/10.5840/techne202049123.
  • Yee, A. K. 2023. “Information Deprivation and Democratic Engagement.” Philosophy of Science 90 (5): 1110–1119. https://doi.org/10.1017/psa.2023.9.
