Research Article

AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages

Published online: 14 Sep 2023

Abstract

While fact-checking has received much attention as a tool to fight misinformation online, fact-checking efforts have yielded limited success in combating political misinformation due to partisans’ biased information processing. Fact-checking often becomes less effective, and can even backfire, when fact-checking messages contradict audiences’ political stance. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels on fact-checking messages (human experts vs. AI vs. crowdsourcing vs. human experts-AI hybrid) influence partisans’ processing of those messages. Results showed that the AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages, whereas partisan bias remained evident for the human experts and human experts-AI hybrid source labels.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 A series of analyses of variance (ANOVA) and chi-square tests found no significant demographic differences between conditions (p = .099 for age; p = .522 for gender; p = .417 for income; p = .364 for education; p = .549 for political partisanship; p = .153 for political ideology; p = .493 for frequency of social media use). Thus, randomization was deemed successful.
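For readers who wish to run a comparable randomization check, the following is a minimal sketch in Python. The data file and column names (condition, age, gender, and so on) are hypothetical placeholders, not the authors’ materials or analysis script.

```python
# Minimal sketch of a randomization check across experimental conditions.
# The data file and column names (condition, age, gender, ...) are hypothetical,
# not taken from the paper's materials.
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment_data.csv")  # hypothetical dataset

# Continuous covariates: one-way ANOVA across the four source-label conditions
for var in ["age", "political_ideology", "social_media_use"]:
    groups = [g[var].dropna() for _, g in df.groupby("condition")]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"{var}: F = {f_stat:.2f}, p = {p_val:.3f}")

# Categorical covariates: chi-square test of independence with condition
for var in ["gender", "income", "education", "partisanship"]:
    table = pd.crosstab(df["condition"], df[var])
    chi2, p_val, dof, _ = stats.chi2_contingency(table)
    print(f"{var}: chi2(df={dof}) = {chi2:.2f}, p = {p_val:.3f}")
```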

2 To further explore differences in message credibility across the four fact-checking source labels, a one-way ANOVA and a Bonferroni post hoc test were conducted. The results showed significant differences across the four source labels in shaping message credibility, F(3, 641) = 2.82, p = .038, Cohen’s d = 0.23. Those in the AI condition reported the highest message credibility (M = 3.89, SD = 0.79), followed by the human experts condition (M = 3.86, SD = 0.89) and the human experts-AI hybrid condition (M = 3.84, SD = 0.81). The crowdsourcing condition showed the lowest message credibility (M = 3.66, SD = 0.81). The post hoc test indicated that the AI source label induced significantly higher message credibility than the crowdsourcing source label (p = .042); no significant differences were found among the other source labels.
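The analysis described in this note (an omnibus one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) could be reproduced along the lines of the sketch below. The data frame and column names (condition, message_credibility) are hypothetical assumptions for illustration, not the authors’ code.

```python
# Sketch of a one-way ANOVA on message credibility followed by
# Bonferroni-corrected pairwise t-tests across the four source labels.
# 'condition' and 'message_credibility' are hypothetical column names.
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("experiment_data.csv")  # hypothetical dataset
groups = {
    name: g["message_credibility"].dropna()
    for name, g in df.groupby("condition")
}

# Omnibus test across the four source-label conditions
f_stat, p_val = stats.f_oneway(*groups.values())
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups.values()) - len(groups)
print(f"ANOVA: F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_val:.3f}")

# Bonferroni post hoc: six pairwise comparisons among four conditions,
# so the per-comparison alpha is 0.05 / 6
pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    t_stat, p_pair = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p_pair < alpha_adj else "n.s."
    print(f"{a} vs. {b}: t = {t_stat:.2f}, p = {p_pair:.3f} ({verdict})")
```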
