
Human-Centered Explainability for Intelligent Vehicles—A User Study

Pages 3237–3253 | Received 29 Dec 2022, Accepted 09 May 2023, Published online: 28 May 2023

