Abstract
Transparent task-based communication between human instructors and robot instructees requires robots to be able to determine whether a human instruction can and should be carried out, i.e., whether the human is authorized to give it and whether the robot is able and permitted to perform it. If the instruction is not appropriate, the robot needs to be able to reject it transparently by stating its reasons for the rejection. In this article, we provide a brief overview of our work on natural language understanding and transparent communication in the Distributed Integrated Affect Reflection Cognition (DIARC) architecture and demonstrate how the robot can perform different context-based inferences to determine whether it should reject a human instruction. Specifically, we discuss four task-based dialogues and present videos of interactions with fully autonomous robots that are able to reject human commands and provide succinct explanations and justifications for their rejection. The proposed approach can form the basis of further algorithmic developments for adapting the robot's level of transparency to different interlocutors and contexts.
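The rejection mechanism summarized above can be sketched as a sequence of checks, each of which, when it fails, yields a natural-language reason the robot can report. This is a minimal illustration only: the condition names loosely follow the kinds of checks discussed in the DIARC work (authorization, capability, normative permissibility), but every function and field name here is hypothetical and not the actual DIARC API.

```python
# Illustrative sketch of transparent command rejection: each failed check
# produces the reason the robot would state when declining the instruction.
# All names here are hypothetical, not part of the DIARC architecture.
from dataclasses import dataclass

@dataclass
class Instruction:
    speaker: str
    action: str

def evaluate(instr: Instruction, capabilities: set, authorized: set, forbidden: set):
    """Return (accept, reply): reject with an explanation if any check fails."""
    if instr.speaker not in authorized:
        return False, f"I cannot do that because {instr.speaker} is not authorized to command me."
    if instr.action not in capabilities:
        return False, f"I cannot {instr.action} because I do not know how to do it."
    if instr.action in forbidden:
        return False, f"I should not {instr.action} because it would violate a norm."
    return True, "Okay."

# Example: an unauthorized speaker issues a command the robot could otherwise perform.
accepted, reply = evaluate(
    Instruction(speaker="visitor", action="walk forward"),
    capabilities={"walk forward", "turn"},
    authorized={"commander"},
    forbidden=set(),
)
```

Ordering the checks fixes which reason is reported when several conditions fail at once; here authorization is checked first, so the reply above cites the speaker's lack of authority rather than any other deficit.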
Acknowledgment
Special thanks to Gordon Briggs, Tom Williams, Evan Krause, and Bradley Oosterveld for their contributions to the DIARC architecture.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Note that this is not the typical trust relation, in which the human trusts the robot, that has been extensively explored in the human factors, human-computer interaction, and human-robot interaction communities, but rather the reverse relation of a robot trusting a human.
Additional information
Funding
Notes on contributors
Matthias Scheutz
Matthias Scheutz is a full professor in computer science at Tufts University and director of the Human-Robot Interaction Laboratory. He has over 400 publications in artificial intelligence, natural language understanding, robotics, and human-robot interaction, with current research focusing on complex ethical robots with instruction-based learning capabilities in open worlds.
Ravenna Thielstrom
Ravenna Thielstrom is a programmer and research staff member in the Human-Robot Interaction Laboratory at Tufts University, whose primary research area is dialogue and belief systems. She received her BA in computer science and cognitive science from Swarthmore College.
Mitchell Abrams
Mitchell Abrams is a Ph.D. student in computer science at Tufts University. He works in the Human-Robot Interaction Laboratory with a research focus on natural language understanding and reference resolution. Before Tufts, Mitchell received his BA in linguistics from Binghamton University and his MS in computational linguistics from Georgetown University.