Evaluating Effects of Enhanced Autonomy Transparency on Trust, Dependence, and Human-Autonomy Team Performance over Time

Ruikun Luo, Na Du & X. Jessie Yang
Pages 1962-1971 | Received 31 Mar 2021, Accepted 01 Apr 2022, Published online: 13 Jul 2022
Abstract

As autonomous systems grow more complicated, humans may have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency contributes to a lack of trust in autonomy and to suboptimal team performance. In response, researchers have proposed various methods to enhance autonomy transparency and have evaluated how enhanced transparency affects people’s trust and human-autonomy team performance. However, the majority of prior studies measured trust only at the end of the experiment and averaged behavioral and performance measures across all trials, overlooking the temporal dynamics of those variables. As a result, we have little understanding of how autonomy transparency affects trust, dependence, and performance over time. The present study aims to fill this gap by examining such temporal dynamics. We developed a game, Treasure Hunter, in which a human uncovers a map of treasures with help from an intelligent assistant that recommends where the human should go next. The rationale behind each recommendation can be conveyed through a display that explicitly lists the option space (i.e., all possible actions) and the reason why a particular action is the most appropriate in the given context. Results from a human-in-the-loop experiment with 28 participants indicate that conveying the intelligent assistant’s decision-making rationale via this display significantly increases participants’ trust and makes it better calibrated over time. Using the display also leads to a higher acceptance of the intelligent assistant’s recommendations.

Acknowledgements

We would like to thank Kevin Y. Huang for his assistance in data collection.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work is partially supported by the National Science Foundation under Grant [No. 2045009].

Notes on contributors

Ruikun Luo

Ruikun Luo is a PhD candidate at the Robotics Institute, University of Michigan, Ann Arbor. Prior to joining the University of Michigan, he obtained an M.S. in Mechanical Engineering from Carnegie Mellon University in 2014 and a B.S. in Mechanical Engineering and Automation from Tsinghua University, China, in 2012.

Na Du

Na Du is an Assistant Professor in the Department of Informatics and Networked Systems at the University of Pittsburgh. She received her PhD in Industrial and Operations Engineering from the University of Michigan in 2021 and her Bachelor’s degree in Psychology from Zhejiang University in 2016.

X. Jessie Yang

X. Jessie Yang is an Assistant Professor in the Department of Industrial and Operations Engineering and an affiliated faculty at the Robotics Institute, University of Michigan, Ann Arbor. She obtained a PhD in Mechanical and Aerospace Engineering (Human Factors) from Nanyang Technological University, Singapore in 2014.
