
Transparency through Explanations and Justifications in Human-Robot Task-Based Communications

Pages 1739-1752 | Received 09 May 2021, Accepted 07 Jun 2022, Published online: 27 Jul 2022
Abstract

Transparent task-based communication between human instructors and robot instructees requires robots to be able to determine whether a human instruction can and should be carried out, i.e., whether the human is authorized, and whether the robot can and should do it. If the instruction is not appropriate, the robot needs to be able to reject it in a transparent manner by including its reasons for the rejection. In this article, we provide a brief overview of our work on natural language understanding and transparent communication in the Distributed Integrated Affect Reflection Cognition (DIARC) architecture and demonstrate how the robot can perform different inferences based on context to determine whether it should reject a human instruction. Specifically, we discuss four task-based dialogues and show videos of the interactions with fully autonomous robots that are able to reject human commands and provide succinct explanations and justifications for their rejection. The proposed approach can form the basis of further algorithmic developments for adapting the robot’s level of transparency for different interlocutors and contexts.

Acknowledgment

Special thanks to Gordon Briggs, Tom Williams, Evan Krause, and Bradley Oosterveld for their contributions to the DIARC architecture.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Note that this is not the typical trust relation — a human trusting a robot — that has been extensively explored in the human factors, human-computer interaction, and human-robot interaction communities, but rather the reverse relation of a robot trusting a human.

Additional information

Funding

This work was supported in part by AFOSR grant #FA9550-18-1-0465.

Notes on contributors

Matthias Scheutz

Matthias Scheutz is a full professor in computer science at Tufts University and director of the Human-Robot Interaction Laboratory. He has over 400 publications in artificial intelligence, natural language understanding, robotics, and human-robot interaction, with current research focusing on complex ethical robots with instruction-based learning capabilities in open worlds.

Ravenna Thielstrom

Ravenna Thielstrom is a programmer and research staff member in the Human-Robot Interaction Laboratory at Tufts University, whose primary area of research is on dialogue and belief systems. She received her BA from Swarthmore College in computer science and cognitive science.

Mitchell Abrams

Mitchell Abrams is a Ph.D. student in computer science at Tufts University. He works in the Human-Robot Interaction Laboratory with a research focus on natural language understanding and reference resolution. Before Tufts, Mitchell received his BA in linguistics from Binghamton University and his MS in computational linguistics from Georgetown University.
