Abstract
The recent COVID-19 pandemic and the looming challenge of global population aging remind us that we need to be prepared for a shortage of doctors, which might become an urgent medical crisis in the future. Medical AI could relieve this urgency and sometimes performs better than human doctors; however, people are reluctant to trust medical AI because of algorithm aversion. Although several factors that can minimize algorithm aversion have been identified, they are not effective enough to make medical AI people’s first choice. Therefore, inspired by the direct–indirect information model of trust and the media equation hypothesis, this research explored a new method to minimize aversion to medical AI by highlighting its social attributes. In three between-subjects studies, a medical AI system’s direct information (i.e., the transparency and quantitation of its decision-making process (DMP)) and indirect information (i.e., social proof) were manipulated. Studies 1 (N = 193) and 2 (N = 429) showed that transparency of the DMP and social proof increased trust in AI but did not affect trust in human doctors. Social proof jointly affected trust in AI with a non-quantitative DMP, but not with a quantitative DMP. Study 3 (N = 184) further revealed a joint effect of a transparent non-quantitative DMP and near-perfect social proof, which could minimize algorithm aversion. These results extend the direct–indirect information model of interpersonal trust, reveal a conditional media equation in human–AI trust, and offer practical implications for medical AI interface design.
Authors’ contributions
All authors contributed to designing the research. Yansong Zhao conducted the study and collected and analyzed the data. Chengli Xiao took the lead in writing the manuscript. All authors approved the final manuscript.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Ethical statement
The ethics committee of psychology research of Nanjing University approved the study (approval number NJUPSY202302003). Written informed consent was obtained from each participant before the experiments began.
Data availability statement
Data and materials are available upon request.
Additional information
Funding
Notes on contributors
Yansong Zhao
Yansong Zhao is currently pursuing the M.S. degree with the Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, China. His current research interests include human factors in human–robot interaction and people’s social acceptance of artificial intelligence.
Chengli Xiao
Chengli Xiao is currently a professor of psychology and the associate head of the School of Social and Behavioral Sciences, Nanjing University, Nanjing, China. Her research interests include human factors in human–robot interaction and people’s social acceptance of artificial intelligence.