Abstract
Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users' diverse backgrounds and levels of understanding. In this paper, we investigate whether free-form conversations can enhance users' comprehension of static explanations in image classification, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert about the static explanations, while the other half, the control group, read materials about the static explanations independently. We measure participants' objective and self-reported comprehension, acceptance, and trust of the static explanations. Results show that conversations significantly improve participants' comprehension, acceptance, trust, and collaboration with static explanations, whereas reading the explanations independently does not have these effects and even decreases users' acceptance of explanations. Our findings highlight the importance of customizing model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
Acknowledgments
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not reflect the views of the funding agencies.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Funding
Notes on contributors
Tong Zhang
Tong Zhang is a research assistant and Ph.D. student in the School of Computer Science and Engineering at Nanyang Technological University, Singapore. He received a B.E. degree (2020) in Computer Science and Technology from Shandong University.
X. Jessie Yang
X. Jessie Yang is an Assistant Professor in the Department of Industrial and Operations Engineering at the University of Michigan, Ann Arbor. She obtained a PhD (2014) and an MEng (2009) in Mechanical and Aerospace Engineering (Human Factors), and a BEng (2006) in Electrical and Electronic Engineering, all from Nanyang Technological University.
Boyang Li
Boyang Li is a Nanyang Associate Professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He was previously a Senior Research Scientist at Baidu Research USA, and a Research Scientist and Group Leader at Disney Research. He received his Ph.D. degree (2015) from the Georgia Institute of Technology.