Social Epistemology
A Journal of Knowledge, Culture and Policy
Research Article

AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony

Received 07 Aug 2023, Accepted 06 Feb 2024, Published online: 06 Mar 2024

ABSTRACT

The ability to interact in a natural language profoundly changes devices’ interfaces and the potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze the linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can give testimony. In section 3, I identify three assumptions underlying the view that rejects conversational AIs (AI-based technologies that converse) as testifiers: conversational AIs (1) lack intentions, (2) cannot be normatively assessed, and (3) cannot constitute an object in trust relations, while humans can. In section 4, I propose the concept ‘AI-testimony’ for analyzing outputs of conversational AIs, suggesting three conditions for technologies to deliver AI-testimony: (1) the content is propositional, (2) it is generated and delivered with no other human directly involved, and (3) the output is perceived as phenomenologically similar to that of a human. I conclude that this concept overcomes the limitations of the anthropocentric concept of testimony, opening future directions of research without associating conversational AIs with human-like agency.

Acknowledgments

This paper is partly based on my dissertation (2021), submitted to the Graduate Program in Science, Technology and Society at Bar-Ilan University. I thank Boaz Miller, Duncan Pritchard and Noah Efron for commenting on an earlier version of this argument. Additionally, I thank two anonymous reviewers for their valuable and critical comments and suggestions. Views and errors are my own.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. This passage was brought to my attention by Nickel (Citation2013a), who in turn credits Salinga and Wuttig (Citation2011) for bringing it to his.

2. Later on, the work of the Strong Programme in the Sociology of Knowledge was influenced by Shapin’s work. Its proponents advocated a theory of knowledge that recognizes testimony as a way for epistemic communities to generate knowledge. This follows from the ability of testimony to transform opinions into knowledge and establish social agreement (Kusch Citation2002; for further sociological context of the concept of testimony, see Neges Citation2018; Freiman and Miller Citation2021). In fact, it might be that philosophical interest in the concept of testimony was the result of influence from sociology and the history of the sciences (Clément Citation2010), from works such as Latour and Woolgar (Citation1986), Latour (Citation1988), Shapin and Schaffer (Citation1985) and Shapin (Citation1994).

3. In a prior work, I (Freiman Citation2014) list different related fields that commonly attribute morality to technological artifacts. For current debate about artificial moral agency, see: Behdadi and Munthe (Citation2020); Müller (Citation2020); Firt (Citation2023).

4. I thank an anonymous referee for raising this point and for mentioning that others lay the groundwork for a view we may one day accept: that of algorithms that have achieved the capacity to be moral.

5. Among the considered views are those of Graham (Citation1997, 227) and Coady (Citation1992, 42), both requiring intention to deliver testimony.

6. Lackey uses the term ‘person’ in a broad sense that includes non-human animals (Lackey Citation2008, fn 13).

7. The proposition ‘the train from Union station will enter platform 4 in 3 minutes’ can preexist as a whole, or can be assembled by an algorithm picking up information from different sources: ‘the train from’ + X + ‘will enter platform’ + Y + ‘in’ + Z + ‘minutes’.
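The template assembly described in note 7 can be sketched in a few lines of code. This is an illustrative example, not from the article; the function name and sources of the values X, Y, Z are hypothetical.

```python
# Minimal sketch of note 7: an algorithm assembling a proposition
# by filling a fixed template with values picked up from different
# sources (here passed in as arguments for simplicity).
def assemble_announcement(origin, platform, minutes):
    """Join the template fragments with the retrieved values X, Y, Z."""
    return (f"the train from {origin} will enter platform "
            f"{platform} in {minutes} minutes")

print(assemble_announcement("Union station", 4, 3))
# prints: the train from Union station will enter platform 4 in 3 minutes
```

The point of the note survives in code form: the full proposition never needs to preexist anywhere; only the fragments and the retrieved values do.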

8. The COvid Stay Informed Bot initiative is based on answers from reputable sources. It is available at https://cosibot.org.

Additional information

Notes on contributors

Ori Freiman

Ori Freiman is a Post-Doctoral Fellow at McMaster University’s Digital Society Lab and The Centre for International Governance Innovation’s Digital Policy Hub.
