Research Article

Overcome Medical Algorithm Aversion: Conditional Joint Effect of Direct and Indirect Information

Yansong Zhao & Chengli Xiao
Received 03 Jan 2024, Accepted 12 Apr 2024, Published online: 02 May 2024
 

Abstract

The recently ended COVID-19 pandemic and looming global population aging remind us to prepare for a shortage of doctors, which might become an urgent medical crisis in the future. Medical AI could relieve this shortage and sometimes performs better than human doctors; however, people are reluctant to trust medical AI because of algorithm aversion. Although several factors that can reduce algorithm aversion have been identified, they are not effective enough to make medical AI people’s first choice. Therefore, inspired by the direct and indirect information model of trust and the media equation hypothesis, this research explored a new way to reduce aversion to medical AI by highlighting its social attributes. In three between-subjects studies, a medical AI system’s direct information (i.e., transparency and quantitation of the decision-making process, DMP) and indirect information (i.e., social proof) were manipulated. Studies 1 (N = 193) and 2 (N = 429) showed that transparency of the DMP and social proof increased trust in AI but did not affect trust in human doctors. Social proof jointly affected trust in AI with a non-quantitative DMP but not with a quantitative DMP. Study 3 (N = 184) further revealed a joint effect of transparent non-quantitative DMP and near-perfect social proof, which minimized algorithm aversion. These results extend the direct-indirect information model of interpersonal trust, reveal a conditional media equation effect in human-AI trust, and offer practical implications for medical AI interface design.

Authors’ contributions

All authors contributed to designing the research. Yansong Zhao conducted the study and collected and analyzed the data. Chengli Xiao took the lead in writing the manuscript. All authors approved the final manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Ethical statement

The ethics committee of psychology research of Nanjing University approved the study (approval number NJUPSY202302003). Written informed consent was obtained from each participant before the experiments began.

Data availability statement

Data and materials are available upon request.

Additional information

Funding

This study was funded by the National Social Science Fund of China (Grant Number 21BSH045).

Notes on contributors

Yansong Zhao

Yansong Zhao is currently pursuing an M.S. degree in the Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China. His current research interests include human factors in human-robot interaction and people’s social acceptance of artificial intelligence.

Chengli Xiao

Chengli Xiao is currently a professor of Psychology and the associate head of the School of Social and Behavioral Sciences, Nanjing University, Nanjing, China. Her research interests include human factors in human-robot interaction and people’s social acceptance of artificial intelligence.
