
Transparency through Explanations and Justifications in Human-Robot Task-Based Communications

Pages 1739–1752 | Received 09 May 2021, Accepted 07 Jun 2022, Published online: 27 Jul 2022

References

  • Allen, J. F., & Perrault, C. R. (1980). Analyzing intention in utterances. Artificial Intelligence, 15(3), 143–178. https://doi.org/10.1016/0004-3702(80)90042-9
  • Arnold, T., Kasenberg, D., & Scheutz, M. (2021). Explaining in time: Meeting interactive standards of explanation for robotic systems. ACM Transactions on Human-Robot Interaction, 10(3), 1–23. https://doi.org/10.1145/3457183
  • Austin, J. L. (1962). How to do things with words. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198245537.001.0001
  • Berzan, C., & Scheutz, M. (2012). What am I doing? Automatic construction of an agent’s state-transition diagram through introspection. In Proceedings of AAMAS 2012. International Foundation for Autonomous Agents and Multiagent Systems.
  • Bicchieri, C. (2006). The grammar of society: The nature and dynamics of social norms. Cambridge University Press.
  • Bonial, C., Donatelli, L., Abrams, M., Lukin, S., Tratz, S., Marge, M., et al. (2020). Dialogue-AMR: Abstract Meaning Representation for dialogue. In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 684–695).
  • Brick, T., & Scheutz, M. (2007, March). Incremental natural language processing for HRI. In Proceedings of the Second ACM IEEE International Conference on Human-Robot Interaction (pp. 263–270).
  • Briggs, G., & Scheutz, M. (2011, June). Facilitating mental modeling in collaborative human-robot interaction through adverbial cues. In Proceedings of the SIGDIAL 2011 Conference (pp. 239–247). Portland, Oregon.
  • Briggs, G., & Scheutz, M. (2012). Investigating the effects of robotic displays of protest and distress. In Proceedings of the 2012 Conference on Social Robotics (pp. 238–247). Springer.
  • Briggs, G., & Scheutz, M. (2013). A hybrid architectural approach to understanding and appropriately generating indirect speech acts. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence (pp. 1213–1219).
  • Briggs, G., & Scheutz, M. (2014). How robots can affect human behavior: Investigating the effects of robotic displays of protest and distress. International Journal of Social Robotics, 6(3), 313–343. https://doi.org/10.1007/s12369-014-0235-1
  • Briggs, G., & Scheutz, M. (2015). “Sorry, I can’t do that”: Developing mechanisms to appropriately reject directives in human-robot interactions. In Proceedings of the 2015 AAAI Fall Symposium on AI and HRI.
  • Briggs, G., & Scheutz, M. (2017). Strategies and mechanisms to enable dialogue agents to respond appropriately to indirect speech acts. In 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).
  • Briggs, G., McConnell, I., & Scheutz, M. (2015). When robots object: Evidence for the utility of verbal, but not necessarily spoken protest. In Proceedings of the 7th International Conference on Social Robotics.
  • Briggs, G., Williams, T., & Scheutz, M. (2017). Enabling robots to understand indirect speech acts in task-based interactions. Journal of Human-Robot Interaction, 6(1), 64–94. https://doi.org/10.5898/JHRI.6.1.Briggs
  • Briggs, G., Williams, T., Jackson, R. B., & Scheutz, M. (2022). Why and how robots should say ‘no’. International Journal of Social Robotics, 14(2), 323–339. https://doi.org/10.1007/s12369-021-00780-y
  • Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://doi.org/10.48550/arXiv.2005.14165
  • Cantrell, R., Potapova, E., Krause, E., Zillich, M., & Scheutz, M. (2012). Incremental referent grounding with NLP-biased visual search. In Proceedings of AAAI 2012 Workshop on Grounding Language for Physical Systems. Indiana University.
  • Cantrell, R., Scheutz, M., Schermerhorn, P., & Wu, X. (2010, March). Robust spoken instruction understanding for HRI. In Proceedings of the 2010 Human-Robot Interaction Conference (pp. 275–282).
  • Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency. (Tech. Rep.). Army Research Laboratory.
  • Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13(2), 259–294. https://doi.org/10.1207/s15516709cog1302_7
  • Dahlbäck, N., Jönsson, A., & Ahrenberg, L. (1993). Wizard of Oz studies: Why and how. Knowledge-Based Systems, 6(4), 258–266. https://doi.org/10.1016/0950-7051(93)90017-N
  • Dzifcak, J., Scheutz, M., Baral, C., & Schermerhorn, P. (2009, May). What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA ’09). Kobe, Japan.
  • Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694. https://doi.org/10.1007/s11023-020-09548-1
  • Frasca, T., Oosterveld, B., Krause, E., & Scheutz, M. (2018). One-shot interaction learning from natural language instruction and demonstration. Advances in Cognitive Systems, 6, 159–176.
  • Frasca, T., Thielstrom, R., Krause, E., & Scheutz, M. (2020). “Can you do this?” Self-assessment dialogues with autonomous robots before, during, and after a mission. In HRI Workshop on Assessing, Explaining, and Conveying Robot Proficiency for Human-Robot Teaming. https://doi.org/10.48550/arXiv.2005.01527
  • Goffman, E. (1967). Interaction ritual: Essays in face-to-face behavior. Aldine Publishing Company.
  • Grice, H. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics 3: Speech acts (pp. 41–58). Elsevier.
  • Janoff-Bulman, R., Sheikh, S., & Hepp, S. (2009). Proscriptive versus prescriptive morality: Two faces of moral regulation. Journal of Personality and Social Psychology, 96(3), 521–537. https://doi.org/10.1037/a0013779
  • Kim, T. J., & Hinds, P. J. (2006). Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In ROMAN 2006 – The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 80–85).
  • Kramer, J., & Scheutz, M. (2007, April). Reflection and reasoning mechanisms for failure detection and recovery in a distributed robotic architecture for complex robots. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (pp. 3699–3704).
  • Krause, E., Schermerhorn, P., & Scheutz, M. (2012). Crossing boundaries: Multi-level introspection in a complex robotic architecture for automatic performance improvements. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence.
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lyons, J. B. (2013). Being transparent about transparency: A model for human-robot interaction. In 2013 AAAI Spring Symposium Series.
  • Malle, B. F., & Scheutz, M. (2014). Moral competence in social robots. In Proceedings of IEEE International Symposium on Ethics in Engineering, Science, and Technology.
  • Malle, B. F., Rosen, E., Chi, V. B., Berg, M., & Haas, P. (2020). A general methodology for teaching norms to social robots. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1395–1402). https://doi.org/10.1109/RO-MAN47096.2020.9223610
  • Milli, S., Hadfield-Menell, D., Dragan, A., & Russell, S. (2017). Should robots be obedient? In International Joint Conference on Artificial Intelligence.
  • Morgenstern, L. (1988). Knowledge preconditions for actions and plans. In Readings in distributed artificial intelligence (pp. 192–199). Elsevier.
  • Sarathy, V., Tsuetaki, A., Roque, A., & Scheutz, M. (2020). Reasoning requirements for indirect speech act interpretation. In Proceedings of COLING 2020: The 28th International Conference on Computational Linguistics.
  • Schermerhorn, P., Kramer, J., Brick, T., Anderson, D., Dingler, A., & Scheutz, M. (2006). DIARC: A testbed for natural human-robot interactions. In Proceedings of AAAI 2006 Mobile Robot Workshop.
  • Scheutz, M. (2014). The need for moral competency in autonomous agent architectures. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence. Springer.
  • Scheutz, M., Briggs, G., Cantrell, R., Krause, E., Williams, T., & Veale, R. (2013). Novel mechanisms for natural human-robot interactions in the DIARC architecture. In Proceedings of AAAI Workshop on Intelligent Robotic Systems. AAAI Press.
  • Scheutz, M., Cantrell, R., & Schermerhorn, P. (2011). Toward humanlike task-based dialogue processing for human robot interaction. AI Magazine, 32(4), 77–84. https://doi.org/10.1609/aimag.v32i4.2381
  • Scheutz, M., Eberhard, K., & Andronache, V. (2004). A parallel, distributed, realtime, robotic model for human reference resolution with visual constraints. Connection Science, 16(3), 145–167. https://doi.org/10.1080/09540090412331314803
  • Scheutz, M., Krause, E., Oosterveld, B., Frasca, T., & Platt, R. (2017). Spoken instruction-based one-shot object and action learning in a cognitive robotic architecture. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems.
  • Scheutz, M., Krause, E., Oosterveld, B., Frasca, T., & Platt, R. (2018). Recursive spoken instruction-based one-shot object and action learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (pp. 5354–5358).
  • Scheutz, M., Schermerhorn, P., Kramer, J., & Anderson, D. (2007). First steps toward natural human-like HRI. Autonomous Robots, 22(4), 411–423. https://doi.org/10.1007/s10514-006-9018-3
  • Scheutz, M., Williams, T., Krause, E., Oosterveld, B., Sarathy, V., & Frasca, T. (2019). An overview of the distributed integrated affect and reflection cognitive DIARC architecture. In M. I. A. Ferreira, J. S. Sequeira, & R. Ventura (Eds.), Cognitive architectures. Springer International Publishing.
  • Searle, J. R. (1969). Speech acts: An essay in the philosophy of language (Vol. 626). Cambridge University Press.
  • Searle, J. R. (1975). Indirect speech acts. In Speech acts (pp. 59–82). Brill.
  • Stalnaker, R. C. (1978). Assertion. In Pragmatics (pp. 315–332). Brill.
  • Sycara, K., & Sukthankar, G. (2006). Literature review of teamwork models (Tech. Rep.). Robotics Institute, Carnegie Mellon University.
  • Talamadupula, K., Briggs, G., Scheutz, M., & Kambhampati, S. (2017). Architectural mechanisms for handling human instructions for open-world mixed-initiative team tasks and goals. Advances in Cognitive Systems, 5, 37–56.
  • Tellex, S., Gopalan, N., Kress-Gazit, H., & Matuszek, C. (2020). Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 3(1), 25–55. https://doi.org/10.1146/annurev-control-101119-071628
  • Thielstrom, R., Roque, A., Chita-Tegmark, M., & Scheutz, M. (2020). Generating explanations of action failures in a cognitive robotic architecture. In Proceedings of NL4XAI: 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence. Association for Computational Linguistics.
  • Traum, D., & Allen, J. F. (1994). Discourse obligations in dialogue processing. arXiv preprint cmp-lg/9407011.
  • Traum, D., Rickel, J., Gratch, J., & Marsella, S. (2003). Negotiation over tasks in hybrid human-agent teams for simulation-based training. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 441–448).
  • Veale, R., Briggs, G., & Scheutz, M. (2013). Linking cognitive tokens to biological signals: Dialogue context improves neural speech recognizer performance. In Proceedings of the 35th Annual Conference of the Cognitive Science Society. Cognitive Science Society.
  • Williams, T., & Scheutz, M. (2015). A domain-independent model of open-world reference resolution. In Proceedings of the 37th Annual Conference of the Cognitive Science Society.
  • Williams, T., Acharya, S., Schreitter, S., & Scheutz, M. (2016). Situated open world reference resolution for human-robot dialogue. In Proceedings of the 11th ACM/IEEE Conference on Human-Robot Interaction.
  • Williams, T., Briggs, G., Oosterveld, B., & Scheutz, M. (2015). Going beyond command-based instructions: Extending robotic natural language interaction capabilities. In Proceedings of AAAI. AAAI Press.
  • Williams, T., Nunez, R. C., Briggs, G., Scheutz, M., Premaratne, K., & Murthi, M. N. (2014). A Dempster-Shafer theoretic approach to understanding indirect speech acts. In Advances in Artificial Intelligence. Springer.
  • Williams, T., Thames, D., Novakoff, J., & Scheutz, M. (2018). “Thank you for sharing that interesting fact!”: Effects of capability and context on indirect speech act use in task-based human-robot dialogue. In Proceedings of the 13th ACM/IEEE International Conference on Human-Robot Interaction.
  • Zhang, Y., Tino, P., Leonardis, A., & Tang, K. (2021). A survey on neural network interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence, 5(5), 726–742. https://doi.org/10.1109/TETCI.2021.3100641
