The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems

Pages 1772–1788 | Received 17 Mar 2021, Accepted 11 Mar 2022, Published online: 15 Jun 2022

References

  • Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. In Advances in Neural Information Processing Systems (pp. 9505–9515). https://doi.org/10.48550/arXiv.1810.03292
  • Amir, D., & Amir, O. (2018). Highlights: Summarizing agent behavior to people. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1168–1176). International Foundation for Autonomous Agents and Multiagent Systems.
  • Anjomshoae, S., Najjar, A., Calvaresi, D., & Främling, K. (2019). Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1078–1088). International Foundation for Autonomous Agents and Multiagent Systems.
  • Bansal, A., Farhadi, A., & Parikh, D. (2014). Towards transparent systems: Semantic characterization of failure modes. In European Conference on Computer Vision (pp. 366–381). Springer, Cham.
  • Bedny, G., & Meister, D. (1999). Theory of activity and situation awareness. International Journal of Cognitive Ergonomics, 3(1), 63–72. https://doi.org/10.1207/s15327566ijce0301_5
  • Billings, D. R., Schaefer, K. E., Chen, J. Y., & Hancock, P. A. (2012). Human-robot interaction: Developing trust in robots. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 109–110).
  • Borgo, R., Cashmore, M., & Magazzeni, D. (2018). Towards providing explanations for AI planner decisions. arXiv preprint arXiv:1810.06338
  • Broekens, J., Harbers, M., Hindriks, K., Van Den Bosch, K., Jonker, C., & Meyer, J.-J. (2010). Do you get it? User-evaluated explainable BDI agents. In German Conference on Multiagent System Technologies (pp. 28–39). Springer.
  • Chakraborti, T., Sreedharan, S., Grover, S., & Kambhampati, S. (2019). Plan explanations as model reconciliation. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 258–266). IEEE.
  • Chen, J. Y. (2011). Individual differences in human-robot interaction in a military multitasking environment. Journal of Cognitive Engineering and Decision Making, 5(1), 83–105. https://doi.org/10.1177/1555343411399070
  • Chen, J. Y., Barnes, M. J., Selkowitz, A. R., & Stowers, K. (2016). Effects of agent transparency on human-autonomy teaming effectiveness [Paper presentation]. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 001838–001843). IEEE. https://doi.org/10.1109/SMC.2016.7844505
  • Chen, J. Y., Barnes, M. J., Wright, J. L., Stowers, K., & Lakhmani, S. G. (2017). Situation awareness-based agent transparency for human-autonomy teaming effectiveness [Paper presentation]. In Micro- and nanotechnology sensors, systems, and applications IX (vol. 10194, pp. 101941V). International Society for Optics and Photonics. https://doi.org/10.1117/12.2263194
  • Chen, J. Y., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
  • Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency. Technical report. Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD.
  • Dannenhauer, D., Floyd, M. W., Molineaux, M., & Aha, D. W. (2018). Learning from exploration: Towards an explainable goal reasoning agent.
  • Doshi-Velez, F., Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608
  • Dragan, A. D., Lee, K. C., & Srinivasa, S. S. (2013). Legibility and predictability of robot motion [Paper presentation]. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 301–308). IEEE. https://doi.org/10.1109/HRI.2013.6483603
  • Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Endsley, M. (1995). Measurement of situation awareness in dynamic systems. Human Factors, 37(1), 65–84. https://doi.org/10.1518/001872095779049499
  • Endsley, M., & Jones, W. (2001). A model of inter- and intrateam situation awareness: Implications for design. In M. McNeese, E. Salas, & M. Endsley (Eds.), New trends in cooperative activities: Understanding system dynamics in complex environments. Santa Monica, CA: Human Factors and Ergonomics Society.
  • Endsley, M. R. (1988). Situation awareness global assessment technique (SAGAT) [Paper presentation]. In Proceedings of the IEEE 1988 National Aerospace and Electronics Conference (pp. 789–795). IEEE. https://doi.org/10.1109/NAECON.1988.195097
  • Endsley, M. R. (2015). Situation awareness misconceptions and misunderstandings. Journal of Cognitive Engineering and Decision Making, 9(1), 4–32. https://doi.org/10.1177/1555343415572631
  • Endsley, M. R. (2017). Direct measurement of situation awareness: Validity and use of SAGAT. In Situational Awareness (pp. 129–156). Routledge.
  • Endsley, M. R. (2019). A systematic review and meta-analysis of direct objective measures of situation awareness: A comparison of SAGAT and SPAM. Human Factors, 63(1), 124–150. https://doi.org/10.1177/0018720819875376
  • Floyd, M. W., & Aha, D. W. (2016). Incorporating transparency during trust-guided behavior adaptation. In International Conference on Case-Based Reasoning (pp. 124–138). Springer.
  • Fox, M., Long, D., & Magazzeni, D. (2017). Explainable planning. arXiv preprint arXiv:1709.10256
  • Gunning, D., & Aha, D. W. (2019). DARPA’s explainable artificial intelligence program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
  • Halpern, J. Y., & Pearl, J. (2005). Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science, 56(4), 843–887. https://doi.org/10.1093/bjps/axi147
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
  • Harbers, M., Bradshaw, J. M., Johnson, M., Feltovich, P., Van Den Bosch, K., & Meyer, J.-J. (2011). Explanation in human-agent teamwork. In International Workshop on Coordination, Organizations, Institutions, and Norms in Agent Systems (pp. 21–37). Springer.
  • Harbers, M., van den Bosch, K., & Meyer, J.-J. (2010). Design and evaluation of explainable BDI agents. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (vol. 2, pp. 125–132). IEEE.
  • Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology (vol. 52, pp. 139–183). Elsevier.
  • Hayes, B., & Shah, J. A. (2017). Improving robot controller transparency through autonomous policy explanation [Paper presentation]. In 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 303–312). IEEE. https://doi.org/10.1145/2909824.3020233
  • Hellström, T., & Bensch, S. (2018). Understandable robots: What, why, and how. Paladyn, Journal of Behavioral Robotics, 9(1), 110–123. https://doi.org/10.1515/pjbr-2018-0009
  • Hoffman, R., Miller, T., Mueller, S. T., Klein, G., & Clancey, W. J. (2018). Explaining explanation, part 4: A deep dive on deep nets. IEEE Intelligent Systems, 33(3), 87–95. https://doi.org/10.1109/MIS.2018.033001421
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608
  • Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., Van Riemsdijk, M. B., & Sierhuis, M. (2014). Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1), 43–69. https://doi.org/10.5898/JHRI.3.1.Johnson
  • Kim, B., Rudin, C., & Shah, J. A. (2014). The Bayesian case model: A generative approach for case-based reasoning and prototype classification. In Advances in Neural Information Processing Systems (pp. 1952–1960). https://doi.org/10.48550/arXiv.1503.01161
  • Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., & Sayres, R. (2017). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). arXiv preprint arXiv:1711.11279
  • Krarup, B., Cashmore, M., Magazzeni, D., & Miller, T. (2019). Model-based contrastive explanations for explainable planning.
  • Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., & Wong, W.-K. (2013). Too much, too little, or just right? Ways explanations impact end users’ mental models. In 2013 IEEE Symposium on Visual Languages and Human Centric Computing (pp. 3–10). IEEE.
  • Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., & Doshi-Velez, F. (2019). An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119–2128).
  • Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490
  • Lomas, M., Chevalier, R., Cross, E. V., Garrett, R. C., Hoare, J., & Kopack, M. (2012). Explaining robot actions. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 187–188). https://doi.org/10.1145/2157689.2157748
  • Marino, D. L., Wickramasinghe, C. S., & Manic, M. (2018). An adversarial approach for explainable ai in intrusion detection systems. In IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society (pp. 3237–3243). IEEE.
  • Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Miller, T., Howe, P., & Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. arXiv preprint arXiv:1712.00547
  • Neerincx, M. A., van der Waa, J., Kaptein, F., & van Diggelen, J. (2018). Using perceptual and cognitive explanations for enhanced human-agent team performance. In International Conference on Engineering Psychology and Cognitive Ergonomics (pp. 204–214). Springer.
  • Ososky, S., Sanders, T., Jentsch, F., Hancock, P., & Chen, J. Y. (2014). Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems. In Unmanned Systems Technology XVI (vol. 9084, pp. 90840E). International Society for Optics and Photonics. https://doi.org/10.1117/12.2050622
  • Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
  • Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2008). Situation awareness, mental workload, and trust in automation: Viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2(2), 140–160. https://doi.org/10.1518/155534308X284417
  • Preece, A., Harborne, D., Braines, D., Tomsett, R., & Chakraborty, S. (2018). Stakeholders in explainable AI. arXiv preprint arXiv:1810.00184
  • Pynadath, D. V., Barnes, M. J., Wang, N., & Chen, J. Y. (2018). Transparency communication for machine learning in human-automation interaction. In Human and machine learning (pp. 75–90). Springer.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM.
  • Ribera, M., & Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. In IUI Workshops.
  • Salmon, P. M., Stanton, N. A., Walker, G. H., Baber, C., Jenkins, D. P., McMaster, R., & Young, M. S. (2008). What really is going on? Review of situation awareness models for individuals and teams. Theoretical Issues in Ergonomics Science, 9(4), 297–323. https://doi.org/10.1080/14639220701561775
  • Sanneman, L., & Shah, J. A. (2020). A situation awareness-based framework for design and evaluation of explainable AI. In International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems (pp. 94–110). Springer.
  • Schaefer, K. E., Billings, D. R., Szalma, J. L., Adams, J. K., Sanders, T. L., Chen, J. Y., & Hancock, P. A. (2014). A meta-analysis of factors influencing the development of trust in automation: Implications for human-robot interaction. Technical report. Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD.
  • Schaefer, K. E., Chen, J. Y., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
  • Schaefer, K. E., Straub, E. R., Chen, J. Y., Putney, J., & Evans, A. W. III, (2017). Communicating intent to develop shared situation awareness and engender trust in human-agent teams. Cognitive Systems Research, 46, 26–39. https://doi.org/10.1016/j.cogsys.2017.02.002
  • Sheh, R., & Monteath, I. (2017). Introspectively assessing failures through explainable artificial intelligence. In IROS Workshop on Introspective Methods for Reliable Autonomy.
  • Sheh, R. K. (2017). Different XAI for different HRI. In 2017 AAAI Fall Symposium Series.
  • Smith, K., & Hancock, P. A. (1995). Situation awareness is adaptive, externally directed consciousness. Human Factors, 37(1), 137–148. https://doi.org/10.1518/001872095779049444
  • Sreedharan, S., Kambhampati, S., et al. (2017). Balancing explicability and explanation in human-aware planning. In 2017 AAAI Fall Symposium Series.
  • Sreedharan, S., Srivastava, S., & Kambhampati, S. (2018). Hierarchical expertise level modeling for user specific contrastive explanations. In The International Joint Conference on Artificial Intelligence (pp. 4829–4836).
  • Sreedharan, S., Srivastava, S., Smith, D., & Kambhampati, S. (2019). Why can’t you do that HAL? Explaining unsolvability of planning tasks. In Proceedings of The International Joint Conference on Artificial Intelligence.
  • Stanton, N. A., Chambers, P. R., & Piggott, J. (2001). Situational awareness and safety. Safety Science, 39(3), 189–204. https://doi.org/10.1016/S0925-7535(01)00010-8
  • Stowers, K., Kasdaglis, N., Newton, O., Lakhmani, S., Wohleber, R., & Chen, J. (2016). Intelligent agent transparency: The design and evaluation of an interface to facilitate human and intelligent agent collaboration. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (vol. 60, pp. 1706–1710). SAGE Publications. https://doi.org/10.1177/1541931213601392
  • Wang, N., Pynadath, D. V., & Hill, S. G. (2016a). The impact of POMDP-generated explanations on trust and performance in human-robot teams. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 997–1005). International Foundation for Autonomous Agents and Multiagent Systems.
  • Wang, N., Pynadath, D. V., & Hill, S. G. (2016b). Trust calibration within a human-robot team: Comparing automatically generated explanations. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 109–116). IEEE. https://doi.org/10.1109/HRI.2016.7451741
  • Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3), 449–455. https://doi.org/10.1518/001872008X288394
  • Wickens, C. D., Helton, W. S., Hollands, J. G., & Banbury, S. (2021). Engineering psychology and human performance. Routledge.
  • Wright, J. L., Chen, J. Y., Barnes, M. J., & Hancock, P. A. (2016). Agent reasoning transparency’s effect on operator workload. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (vol. 60, pp. 249–253). SAGE Publications. https://doi.org/10.1177/1541931213601057