Research Article

Improving Trust in AI with Mitigating Confirmation Bias: Effects of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI

Taehyun Ha & Sangyeon Kim
Received 17 Jul 2023, Accepted 13 Nov 2023, Published online: 27 Nov 2023
 

Abstract

With advancements in artificial intelligence (AI), explainable AI (XAI) has emerged as a promising tool for enhancing the explainability of complex machine learning models. However, the explanations generated by an XAI system may induce cognitive biases in human users. To address this problem, this study investigates how to mitigate users’ cognitive biases based on their individual characteristics. In the literature review, we identified two factors that can help remedy such biases: 1) debiasing strategies, which have been reported to reduce biases in users’ decision-making through additional information or changes in information delivery, and 2) explanation modality types. To examine the effects of these factors, we conducted an experiment with a 4 (debiasing strategy) × 3 (explanation type) between-subjects design. Participants were exposed to an explainable interface that presented an AI’s outcomes together with explanatory information, and their behavioral and attitudinal responses were collected. Specifically, we statistically examined the effects of textual and visual explanations on users’ trust in and confirmation bias toward AI systems, considering the moderating effects of debiasing methods and watching time. The results demonstrated that textual explanations lead to higher trust in XAI systems than visual explanations. Moreover, we found that textual explanations are particularly beneficial for quick decision-makers evaluating the outputs of AI systems. The results also indicated that confirmation bias can be effectively mitigated by providing users with a priori information. These findings have theoretical and practical implications for designing AI-based decision support systems that generate more trustworthy and equitable explanations.
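The abstract describes a 4 (debiasing strategy) × 3 (explanation type) between-subjects experiment analyzed for main and moderating (interaction) effects on trust. As a minimal, hypothetical sketch of how such a design could be analyzed, not the authors' actual analysis, materials, or data, the following Python snippet simulates trust ratings per cell and fits a two-way ANOVA with the interaction term using statsmodels; all factor labels, column names, and cell sizes are assumptions made for illustration.

    # Illustrative sketch only: simulated ratings for a 4 (debiasing strategy)
    # x 3 (explanation type) between-subjects design, analyzed with a two-way
    # ANOVA. Factor labels, cell sizes, and effect sizes are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    strategies = ["none", "pre_info", "post_info", "delivery_change"]  # assumed labels
    explanations = ["text", "visual", "text_visual"]                   # assumed labels

    rows = []
    for s in strategies:
        for e in explanations:
            for _ in range(30):  # 30 participants per cell (assumption)
                trust = 4.0 + (0.5 if e == "text" else 0.0) + rng.normal(0, 1)
                rows.append({"strategy": s, "explanation": e, "trust": trust})
    df = pd.DataFrame(rows)

    # Main effects of strategy and explanation type, plus their interaction
    # (the moderation term), on the simulated trust ratings.
    model = smf.ols("trust ~ C(strategy) * C(explanation)", data=df).fit()
    print(anova_lm(model, typ=2))

In a design like this, a significant strategy × explanation interaction would indicate that the effect of explanation modality on trust depends on the debiasing strategy, which is the moderation pattern the abstract refers to.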

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by the faculty research fund of Sejong University in 2023 and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00210250).

Notes on contributors

Taehyun Ha

Taehyun Ha is an Assistant Professor in the Department of Data Science at Sejong University. His research focuses on online user behavior, human-AI interaction, and trust formation.

Sangyeon Kim

Sangyeon Kim is a research professor at the Institute of Engineering Research at Korea University. He received a PhD from the Department of Interaction Science at Sungkyunkwan University in 2022. His research interests include human-computer interaction, gestural interaction, accessible computing, and human-centered AI.
