Social Epistemology
A Journal of Knowledge, Culture and Policy
Volume 37, 2023 - Issue 5
Research Article

Bots: Some Less-Considered Epistemic Problems

Pages 713-725 | Published online: 19 Sep 2022

ABSTRACT

Posts on social media platforms like Twitter are sometimes the products of deceptively designed bots. These bots can cause obvious epistemic problems, such as tricking human users into believing the contents of misleading posts. However, less-considered epistemic problems involve false bot judgements where a human user mistakes another human user’s post for a bot-post, or where a human user mistakenly believes that bots are the primary vehicles for tokening certain content on social media. This paper takes up three questions concerning false bot judgements: what exactly are their associated epistemic harms, just how harmful are they, and what should we do about them?

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. A good example of an epistemically valuable bot is the recently created ‘Vaccine Hunter Bot’ (@VaxHunterBot), which provides automated information about the availability of COVID-19 vaccines in different regions.

3. It has recently been argued that credibility excesses can yield testimonial injustice (Huda 2019). I do not discuss any such cases here.

4. Epistemic injustices might also occur when one’s questions are not taken sufficiently seriously (Hookway 2010).

5. There may well be cases where speakers suffer “hermeneutical death” at the hands of prejudiced hearers. These are cases where “one’s status as a subject of knowledge and understanding is barely recognized”, to such an extent that one’s voice is “killed” (Medina 2017, 46). In these cases, however, it is unlikely that the speaker is being treated as having no illocutionary intentions whatsoever, such that the speaker is deemed asincere. It is just that these illocutionary intentions are taken with minimal seriousness.

6. In the sense of Strawson (1962).

7. The reality probably involves some combination of conditioning and convincing (Langton and West 1999).

8. One might think that how one’s words are taken is a perlocutionary matter (cf. Jacobson 1995; Kukla 2014, 14–15). However, one might also understand illocution as dependent on uptake – see next note (see also Caponetto 2021; Hornsby and Langton 1998; Hornsby et al. 2009).

9. Kukla identifies a form of epistemic injustice called “discursive injustice” (Kukla 2014). Discursive injustices also involve illocutionary disablement. For example, a woman floor manager working in a factory of 95% male employees might issue an imperative to her male subordinates but be taken as merely making a request. She is thus illocutionarily disabled from commanding, or so it is argued (see McDonald 2021 for criticism of the strong extent to which, on Kukla’s view, the hearer can determine the illocutionary force of a speaker’s utterance). However, at most, one’s illocutionary act is here transformed into a different illocutionary act, whereas the present line of thought sees false bot judgements as eliminating illocution entirely.

10. Thanks to an anonymous reviewer for pressing this line.

11. The analogy is imperfect, at least in part, because audience members would not mistake the actor for a non-human entity such as a bot.

12. This result would be slightly different from standard “content-focused” or “content-based” testimonial injustices as described by Davis (2021) and Dembroff and Whitcomb (Forthcoming), where a speaker’s testimony is dismissed not due to prejudice toward the speaker but due to prejudices associated with the content of what is testified by the speaker. To take a standard example, a white male speaker might be dismissed by a racist hearer for whom the content of the speaker’s testimony is “black woman coded” and thus not worthy, by the hearer’s lights, of serious consideration, despite the hearer’s recognizing that the speaker is not black. In the case I am describing, however, Tom takes the coding of a post’s content as a clue regarding the speaker’s identity, and hence may indeed harbor an identity prejudice toward the (again, misidentified) speaker.

13. If others insincerely assert that Brianna’s post is a bot product, Tom might be duped into a false bot judgement just as readily as if their assertions were sincere. Thanks to an anonymous reviewer for raising this point.

14. Further, potentially blurrier cases abound. To take an anonymous reviewer’s case, a bot might be designed to appear as the account of a black person endorsing conservative positions, and might receive an excess of credibility from politically aligned readers. Users who are hypervigilant about this sort of possibility might erroneously judge that an actual minority user with conservative leanings is just a bot.

15. Byskov claims that all testimonial injustice must involve prejudice toward the speaker (2021, 3), and yet content-based testimonial injustices strike me as counterexamples (see note 12).

16. In the sense of Haslanger (2000).

17. See also Fricker: “someone with a background experience of persistent testimonial injustice may lose confidence in her general intellectual abilities to such an extent that she is genuinely hindered in her educational or other intellectual development” (Fricker 2007, 46).

18. Thanks to an anonymous reviewer for suggesting a worry that prompted this point.

19. Q-posts being posts with the content Q, not posts made by the de facto leader, username ‘Q’, of the insidious “Q-Anon” movement.

20. This being an instance of a more general “information pollution” propaganda strategy that also includes the dissemination of fake news (cf. Lynch 2019).

21. This is paraphrased from an anonymous reviewer.

22. Those who follow Fricker (2007, chapter 3) may also argue that, in general, a stance of default credulity toward social media testimony is unwise, even if active skepticism is melodramatic. However, the details of Fricker’s account of justified belief in testimony are complex and have come under some fire (Goldberg 2010). Therefore, I do not endeavour to examine the implications of her account for my argument here. Thanks to an anonymous reviewer for this point.

23. Levy argues that academic Twitter in particular is an important source of knowledge, so much so that academic users should take others’ posts at “face value”, even given the risk of being duped by, e.g. sock-puppeteers. In the academic Twitter context, “the epistemic benefits that flow from trust are too important, and too easily damaged, for us to risk becoming less trusting” (2022, 286).

24. Naturally, a norm that does not presuppose moral encroachment would require fewer theoretical commitments. Sophisticated criticisms of moral encroachment are also available (see Gardiner 2018, ms.).

25. This being said, one possible benefit is that other users may see and then take seriously one’s response to the bot.

26. Some institutional interventions empower individual users to detect bots. One such feature is Twitter’s “blue check”, displayed beside prominent users’ avatars to indicate that their identities have been verified. This feature protects against both human and bot imposters. However, the blue check system is currently designed only to verify the identities of sufficiently well-known users, whereas bots often masquerade as non-famous users.

29. One that Elon Musk has, for whatever reason, recently exacerbated – see https://www.cnn.com/2022/05/16/tech/elon-musk-twitter-spam-bots-parag/index.html.

30. Thanks are due to Regina Rini, Dylan Ludwig, Jozef Delvaux, and Dennis Papadopoulos for reading and commenting on earlier versions of this paper. Special thanks are due to several anonymous reviewers for their incredibly helpful feedback, all of which has strongly influenced the final version of this project.

Additional information

Notes on contributors

Benjamin Winokur

Benjamin Winokur received his PhD from York University in 2021. For the 2022-2023 academic year he will be Visiting Assistant Professor of Philosophy at Ashoka University. His research interests span epistemology, philosophy of mind, and philosophy of language. Currently, he is working on several projects about self-knowledge, first-person authority, and the social epistemology of the Internet.
