Abstract
This article considers the ways that explainable AI can be used to help secure human-interactive robots. To do so, we first acknowledge that robots interact with a variety of people. For example, some people may operate robots that perform tasks in their homes or offices, while other people may be tasked with defending robots from potential attackers. We describe how explainable AI can be used to help the human operators of robots appropriately calibrate the trust they have in their systems, and we demonstrate this through an implementation. We also describe a novel, generalizable human-in-the-loop framework based on control loops that characterizes and explains attacks on robots to a robot defender. We explore the utility of this framework by analyzing its application to the incident management process for robots. The framework also enables a formal definition of explainability, as well as a necessary condition for explainability in robots. The overarching goal of this article is to introduce the application of explainability to the security of robotics as a novel area of research; we therefore also discuss several open research problems uncovered while applying explainable AI to the security of robots.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Antonio Roque
Antonio Roque performed this work as a Research Assistant Professor at Tufts University. He received his PhD in Computer Science from the University of Southern California, where he worked at USC's Institute for Creative Technologies.
Suresh K. Damodaran
Suresh K. Damodaran currently works at MITRE. Suresh researches and applies advances in streaming analytics and machine learning to cybersecurity. He has authored over 10 patents. Suresh received his PhD in Computer Science from the University of Louisiana, and his B.Tech. and M.Tech. degrees from the Indian Institute of Technology, Kharagpur, India.