Abstract
As autonomous systems become more complex, humans may have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency erodes trust in autonomy and degrades human-autonomy team performance. In response to this concern, researchers have proposed various methods to enhance autonomy transparency and have evaluated how enhanced transparency affects people’s trust and human-autonomy team performance. However, the majority of prior studies measured trust only at the end of the experiment and averaged behavioral and performance measures across all trials, thus overlooking the temporal dynamics of those variables. As a result, we have little understanding of how autonomy transparency affects trust, dependence, and performance over time. The present study aims to fill this gap by examining such temporal dynamics. We developed a game, Treasure Hunter, in which a human uncovers a map in search of treasures with the help of an intelligent assistant. The intelligent assistant recommends where the human should go next, and the rationale behind each recommendation can be conveyed via a display that explicitly lists the option space (i.e., all possible actions) and the reason why a particular action is the most appropriate in a given context. Results from a human-in-the-loop experiment with 28 participants indicate that conveying the intelligent assistant’s decision-making rationale via the display significantly increases participants’ trust and makes it more calibrated over time. Using the display also leads to higher acceptance of recommendations from the intelligent assistant.
Acknowledgements
We would like to thank Kevin Y. Huang for his assistance in data collection.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Ruikun Luo
Ruikun Luo is a PhD candidate at the Robotics Institute, University of Michigan, Ann Arbor. Prior to joining the University of Michigan, he obtained an MS in Mechanical Engineering from Carnegie Mellon University in 2014 and a BS in Mechanical Engineering and Automation from Tsinghua University, China, in 2012.
Na Du
Na Du is an Assistant Professor in the Department of Informatics and Networked Systems at the University of Pittsburgh. She received her PhD in Industrial and Operations Engineering from the University of Michigan in 2021 and her Bachelor’s degree in Psychology from Zhejiang University in 2016.
X. Jessie Yang
X. Jessie Yang is an Assistant Professor in the Department of Industrial and Operations Engineering and an affiliated faculty member at the Robotics Institute, University of Michigan, Ann Arbor. She obtained her PhD in Mechanical and Aerospace Engineering (Human Factors) from Nanyang Technological University, Singapore, in 2014.