Research Article

Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of xAI on Human-Agent Interaction

Andrew Silva, Mariah Schrum, Erin Hedlund-Botti, Nakul Gopalan & Matthew Gombolay
Pages 1390-1404 | Received 24 Nov 2021, Accepted 11 Jul 2022, Published online: 08 Aug 2022
Abstract

Intelligent agents must be able to communicate intentions and explain their decision-making processes to build trust, foster confidence, and improve human-agent team dynamics. Recognizing this need, academia and industry are rapidly proposing new ideas, methods, and frameworks to aid in the design of more explainable AI. Yet, there remains no standardized metric or experimental protocol for benchmarking new methods, leaving researchers to rely on their own intuition or ad hoc methods for assessing new concepts. In this work, we present the first comprehensive (n = 286) user study testing a wide range of approaches for explainable machine learning, including feature importance, probability scores, decision trees, counterfactual reasoning, natural language explanations, and case-based reasoning, as well as a baseline condition with no explanations. We provide the first large-scale empirical evidence of the effects of explainability on human-agent teaming. Our results will help to guide the future of explainability research by highlighting the benefits of counterfactual explanations and the shortcomings of confidence scores for explainability. We also propose a novel questionnaire to measure explainability with human participants, inspired by relevant prior work and correlated with human-agent teaming metrics.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Our study was approved by the Georgia Institute of Technology IRB under Protocol H20522.

Additional information

Funding

This work was sponsored by MIT Lincoln Laboratory grant 7000437192, NASA Early Career Fellowship grant 80HQTR19NOA01-19ECF-B1, a gift to the Georgia Tech Foundation from Konica Minolta, Inc., and the National Science Foundation [20-604].

Notes on contributors

Andrew Silva

Andrew Silva is a Computer Science PhD candidate at the Georgia Institute of Technology focusing on interactive and explainable methods for machine learning. He graduated with a BS in Computational Media and an MS in Computer Science from Georgia Tech.

Mariah Schrum

Mariah Schrum graduated from Johns Hopkins University with a BS in Biomedical Engineering and received an MS in Computer Science from the Georgia Institute of Technology. She is currently a Robotics PhD student and NSF ARMS fellow at Georgia Tech working with Dr. Matthew Gombolay in the CORE Robotics Lab.

Erin Hedlund-Botti

Erin Hedlund-Botti is a Robotics PhD candidate at the Georgia Institute of Technology researching machine learning and human-robot interaction. She graduated with a BS in Computer Engineering from Bucknell University in 2017, and has previously worked at the Johns Hopkins University Applied Physics Lab.

Nakul Gopalan

Nakul Gopalan is a postdoctoral researcher at Georgia Tech. He completed his PhD at Brown University’s Computer Science department in 2019. His research interests lie at the intersection of language grounding and robot learning. His work has received a best paper award at the RoboNLP workshop at ACL 2017.

Matthew Gombolay

Matthew Gombolay is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. He received a BS in Mechanical Engineering from the Johns Hopkins University in 2011, an S.M. in AeroAstro from MIT in 2013, and a PhD in Autonomous Systems from MIT in 2017.

