
Transitioning to Human Interaction with AI Systems: New Challenges and Opportunities for HCI Professionals to Enable Human-Centered AI

Wei Xu, Marvin J. Dainoff, Liezhong Ge & Zaifeng Gao
Pages 494-518 | Received 09 May 2021, Accepted 10 Feb 2022, Published online: 06 Apr 2022

References

  • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, Paper No. 582. https://doi.org/10.1145/3173574.3174156
  • Acuna, D., Ling, H., Kar, A., Fidler, S. (2018). Efficient interactive annotation of segmentation datasets with polygon-RNN++. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 859–868).
  • Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. In CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300233
  • Auernhammer, J. (2020). Human-centered AI: The role of human-centered design research in the development of AI. In S. Boess, M. Cheung, & R. Cain (Eds.), Synergy – DRS International Conference 2020, 11–14 August, held online. https://doi.org/10.21606/drs.2020.282
  • Bakker, S., & Niemantsverdriet, K. (2016). The interaction-attention continuum: Considering various levels of human attention in interaction design. International Journal of Design, 10(2), 1–14.
  • Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779. https://doi.org/10.1016/0005-1098(83)90046-8
  • Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., Horvitz, E. (2019). Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, No. 1, pp. 2–11).
  • Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019). Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 2429–2437. https://doi.org/10.1609/aaai.v33i01.33012429
  • Bathaee, Y. (2018). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31(2), 890–938.
  • Beckers, G., Sijs, J., van Diggelen, J., van Dijk, R. J. E., Bouma, H., Lomme, M., Hommes, R., Hillerström, F., van der Waa, J., van Velsen, A., Mannucci, T., Voogd, J., van Staal, W., Veltman, K., Wessels, P., & Huizing, A. (2019). Intelligent autonomous vehicles with an extendable knowledge base and meaningful human control. In Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III (Vol. 11166, p. 111660C). Int. Soc. for Optics and Photonics. https://doi.org/10.1117/12.2533740
  • Berndt, J. O., Rodermund, S. C., Lorig, F., Timm, I. J. (2017). Modeling user behavior in social media with complex agents. In HUSO 2017: The Third International Conference on Human and Social Analytics, 19–24, IARIA, 2017. ISBN: 978-1-61208-578-4
  • Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349–4357). The MIT Press.
  • Bond, R., Mulvenna, M. D., Wan, H., Finlay, D. D., Wong, A., Koene, A., & Adel, T. (2019). Human centered artificial intelligence: Weaving UX into algorithmic decision making. In RoCHI (pp. 2–9).
  • Brandt, S. L., Lachter, J., Russell, R., & Shively, R. J. (2018). A human-autonomy teaming approach for a flight-following task. In C. Baldwin (Ed.), Advances in neuroergonomics and cognitive engineering, advances in intelligent systems and computing. Springer International Publishing AG.
  • Brezillon, P. (2003). Focusing on context in human-centered computing. IEEE Intelligent Systems, 18(3), 62–66. https://doi.org/10.1109/MIS.2003.1200731
  • Brill, J. C., Cummings, M. L., Evans, A. W.III, Hancock, P. A., Lyons, J. B., & Oden, K. (2018). Navigating the advent of human-machine teaming. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 455–459. https://doi.org/10.1177/1541931218621104
  • Bromham, L., Dinnage, R., & Hua, X. (2016). Interdisciplinary research has consistently lower funding success. Nature, 534(7609), 684–687.
  • Broniatowski, D. A. (2021). Psychological foundations of explainability and interpretability in artificial intelligence (NISTIR 8367). National Institute of Standards and Technology.
  • Brown, B., Bødker, S., & Höök, K. (2017). Does HCI scale? Scale hacking and the relevance of HCI. Interactions, 24(5), 28–33. https://doi.org/10.1145/3125387
  • Bryson, J. J., & Theodorou, A. (2019). How society can maintain human-centric artificial intelligence. In Human-centered digitalization and services (pp. 305–323). Springer.
  • Bryson, J., & Winfield, A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 50(5), 116–119. https://doi.org/10.1109/MC.2017.154
  • Budiu, R., & Laubheimer, P. (2018). Intelligent assistants have poor usability: A user study of Alexa, Google Assistant, and Siri. https://www.nngroup.com
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Vol. 81, pp. 77–91). PMLR.
  • Cooke, N. (2018). 5 ways to help robots work together with people. The Conversation. https://theconversation.com/5-ways-to-help-robots-work-together-with-people-101419
  • Calhoun, G. L., Ruff, H. A., Behymer, K. J., & Frost, E. M. (2018). Human-autonomy teaming interface design considerations for multi-unmanned vehicle control. Theoretical Issues in Ergonomics Science, 19(3), 321–352. https://doi.org/10.1080/1463922X.2017.1315751
  • Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Lawrence Erlbaum Associates.
  • Carter, S., & Nielsen, M. (2017). Using artificial intelligence to augment human intelligence. Distill, 2(12), 12. https://doi.org/10.23915/distill.00009
  • Cerejo, J. (2021). The design process of human-centered AI — Part 2. Bootcamp. https://bootcamp.uxdesign.cc/human-centered-ai-design-process-part-2-empathize-hypothesis-6065db967716
  • Chatzimparmpas, A., Martins, R. M., Jusufi, I., & Kerren, A. (2020). A survey of surveys on the use of visualization for interpreting machine learning models. Information Visualization, 19(3), 207–233. https://doi.org/10.1177/1473871620904671
  • Chittajallu, D. R., Dong, B., & Tunison, P. (2019). XAI-CBIR: Explainable AI system for content-based retrieval of video frames from minimally invasive surgery videos. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019).
  • Chowdhury, R. (2018, Jun 4). Is Explainability Enough? Why We Need Understandable AI. Forbes. Retrieved on Feb 17, 2022, from https://www.forbes.com/sites/rummanchowdhury/2018/06/04/is-explainability-enough-why-we-need-understandable-ai/?sh=6168aa8262f4
  • Clare, A. S., Cummings, M. L., & Repenning, N. P. (2015). Influencing trust for human-automation collaborative scheduling of multiple unmanned vehicles. Human Factors, 57(7), 1208–1218. https://doi.org/10.1177/0018720815587803
  • Correia, A., Paredes, H., Schneider, D., Jameel, S., & Fonseca, B. (2019). Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (pp. 4013–4018). https://doi.org/10.1109/SMC.2019.8914075
  • Crandall, J. W. (2018). Cooperating with machines. Nature Communications, 9(1), 233. https://doi.org/10.1038/s41467-017-02597-8
  • Cummings, M. L., & Clare, A. S. (2015). Holistic modelling for human autonomous system interaction. Theoretical Issues in Ergonomics Science, 16(3), 214–231. https://doi.org/10.1080/1463922X.2014.1003990
  • Cummings, M. L. (2019). Lethal autonomous weapons: Meaningful human control or meaningful human certification? IEEE Technology and Society Magazine, 38(4), 20–26. https://doi.org/10.1109/MTS.2019.2948438
  • Cummings, M. L., & Britton, D. (2020). Regulating safety-critical autonomous systems: Past, present, and future perspectives. In Living with robots (pp. 119–140). Academic Press.
  • de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: the importance of trust repair in human-machine interaction. Ergonomics, 61(10), 1409–1427. https://doi.org/10.1080/00140139.2018.1457725
  • Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15. https://doi.org/10.3389/frobt.2018.00015
  • van den Broek, H., Schraagen, J. M., te Brake, G., & van Diggelen, J. (2017). Approaching full autonomy in the maritime domain: Paradigm choices and human factors challenges. In Proceedings of the MTEC, 26–28 April 2017.
  • Dai, G. Z., & Wang, H. (2004). Physical object icons buttons gesture (PIBG): A new interaction paradigm with pen. In Proceedings of the 8th International Conference on Computer Supported Cooperative Work (pp. 11–20).
  • Dellermann, D., Calma, A., Lipusch, N., Weber, T., Weigel, S., & Ebel, P. (2019). The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Hawaii International Conference on System Sciences (HICSS), Hawaii, USA. https://doi.org/10.24251/HICSS.2019.034
  • Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
  • Demir, M., Cooke, N. J., & Amazeen, P. G. (2018). A conceptual model of team dynamical behaviors and performance in human-autonomy teaming. Cognitive Systems Research, 52, 497–507. https://doi.org/10.1016/j.cogsys.2018.07.029
  • Demir, M., Likens, A. D., Cooke, N. J., Amazeen, P. G., & McNeese, N. J. (2018). Team coordination and effectiveness in human-autonomy teaming. IEEE Transactions on Human-Machine Systems, 49(2), 150–159. https://doi.org/10.1109/THMS.2018.2877482
  • Demir, M., McNeese, N. J., & Cooke, N. J. (2017). Team situation awareness within the context of human-autonomy teaming. Cognitive Systems Research, 46, 3–12. https://doi.org/10.1016/j.cogsys.2016.11.003
  • Donahoe, E. (2018). Human centered AI: Building trust, democracy and human rights by design. An overview of Stanford’s global digital policy incubator and the XPRIZE Foundation’s June 11th Event. Stanford GDPi. https://medium.com/stanfords-gdpi/human-centered-ai-building-trust-democracy-and-human-rights-by-design-2fc14a0b48af
  • Dove, G., Halskov, K., Forlizzi, J., & Zimmerman, J. (2017). UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems – CHI ’17 (pp. 278–288). New York, NY: ACM Press.
  • Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. arXiv 2002.01092.
  • Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
  • Endsley, M. R. (2018). Situation awareness in future autonomous vehicles: Beware of the unexpected. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018), IEA 2018. Springer.
  • Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394. https://doi.org/10.1518/001872095779064555
  • Endsley, M. R., & Jones, D. G. (2012). Designing for situation awareness: An approach to user-centered design (2nd ed.). CRC Press.
  • Epstein, Z. (2018). Closing the AI knowledge gap. arXiv [cs.CY].
  • Fiebrink, R., & Gillies, M. (2018). Introduction to the special issue on human-centered machine learning. ACM Transactions on Interactive Intelligent Systems, 8(2), 1–7. https://doi.org/10.1145/3205942
  • Farooq, U., & Grudin, J. (2016). Human computer integration. Interactions, 23(6), 26–32. https://doi.org/10.1145/3001896
  • Fan, T., Fan, J., Dai, G., Du, Y., & Liu, Z. (2018). Thoughts on human-computer interaction in the age of artificial intelligence. Scientia Sinica Informationis, 48(4), 361–375. https://doi.org/10.1360/N112017-00221
  • Ford, K. M., Hayes, P. J., Glymour, C., & Allen, J. (2015). Cognitive orthoses: Toward human-centered AI. AI Magazine, 36(4), 5–8. https://doi.org/10.1609/aimag.v36i4.2629
  • Fu, X. L., Cai, L., Liu, Y., Jia, J., Chen, W., Yi, Z., Zhao, G., Liu, Y. J., & Wu, C. X. (2014). A computational cognition model of perception, memory, and judgment. Science China Information Sciences, 57(3), 1–15. https://doi.org/10.1007/s11432-013-4911-9
  • Girardin, F., & Lathia, N. (2017). When user experience designers partner with data scientists. In The AAAI Spring Symposium Series Technical Report: Designing the User Experience of Machine Learning Systems. The AAAI Press. https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15364
  • Garlan, D., Siewiorek, D. P., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22–31. https://doi.org/10.1109/MPRV.2002.1012334
  • Gerber, A., Derckx, P., Döppner, D. A., & Schoder, D. (2020). Conceptualization of the human-machine symbiosis–A literature review. In Proceedings of the 53rd Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2020.036
  • Google PAIR. (2019). People + AI guidebook: Designing human-centered AI products. https://pair.withgoogle.com
  • Grudin, J. (2005). Three faces of human-computer interaction. IEEE Annals of the History of Computing, 27(4), 46–62. https://doi.org/10.1109/MAHC.2005.67
  • Gunning, D. (2017). Explainable Artificial Intelligence (XAI) at DARPA. https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
  • Hancock, P. A. (2019). Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics, 62(4), 479–495. https://doi.org/10.1080/00140139.2018.1498136
  • Hawking, S., Musk, E., & Wozniak, S. (2015). Autonomous weapons: An open letter from AI & robotics researchers. Future of Life Institute.
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
  • He, H., Gray, J., Cangelosi, A., Meng, Q., McGinnity, T. M., & Mehnen, J. (2021). The challenges and opportunities of human-centered AI for trustworthy robots and autonomous systems. IEEE Transactions on Cognitive and Developmental Systems. Advance online publication. https://doi.org/10.1109/TCDS.2021.3132282
  • Hoffman, R. R., Cullen, T. M., & Hawley, J. K. (2016). The myths and costs of autonomous weapon systems. Bulletin of the Atomic Scientists, 72(4), 247–255. https://doi.org/10.1080/00963402.2016.1194619
  • Hoffman, R. R., Mueller, S. T., & Klein, G. (2017). Explaining explanation, part 2: Empirical foundations. IEEE Intelligent Systems, 32(4), 78–86. https://doi.org/10.1109/MIS.2017.3121544
  • Hoffman, R. R., Roesler, A., & Moon, B. M. (2004). What is design in the context of human-centered computing? IEEE Intelligent Systems, 19(4), 89–95. https://doi.org/10.1109/MIS.2004.36
  • Hoffman, R. R., Klein, G., Mueller, S. T. (2018). Explaining explanation for “explainable AI”. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 62, No. 1, pp. 197–201). Los Angeles, CA: SAGE Publications.
  • Holmquist, L. E. (2017). Intelligence on tap: Artificial intelligence as a new design material. Interactions, 24(4), 28–33. https://doi.org/10.1145/3085571
  • Hollnagel, E., & Woods, D. D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. CRC Press.
  • Horvitz, E. (2017). AI, people, and society. Science, 357(6346), 7. https://doi.org/10.1126/science.aao2466
  • Howell, W. C. (2001). The HF/E parade: A tale of two models. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 45(1), 1–5. https://doi.org/10.1177/154193120104500101
  • Hurts, K., & de Greef, P. (1994, August). Cognitive ergonomics of multi-agent systems: Observations, principles and research issues. In International Conference on Human-Computer Interaction (pp. 164–180). Springer.
  • IEEE (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (1st ed.). The Institute of Electrical and Electronics Engineers (IEEE), Incorporated.
  • Inkpen, K., Chancellor, S., De Choudhury, M., Veale, M., & Baumer, E. P. (2019, May). Where is the human? Bridging the gap between AI and HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–9). https://doi.org/10.1145/3290607.3299002
  • International Organization for Standardization. (2020). Ergonomics of human-system interaction – Part 810: Robotic, intelligent and autonomous systems (version for review).
  • Jacob, R. J. K., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., & Zigelbaum, J. (2008). Reality-based interaction: A framework for post-WIMP interfaces. In CHI 2008, April 5–10, 2008, Florence, Italy.
  • Jacko, J. A. (Ed.). (2012). Human computer interaction handbook: Fundamentals, evolving technologies, and emerging applications. CRC Press.
  • Jeong, K. A. (2019). Human-system cooperation in automated driving. International Journal of Human-Computer Interaction, 35, 917–918.
  • Johnson, M., & Vera, A. (2019). No AI is an island: The case for teaming intelligence. AI Magazine, 40(1), 16–28. https://doi.org/10.1609/aimag.v40i1.2842
  • Jun, S., Yuming, W., & Cui, H. (2021). An integrated analysis framework of artificial intelligence social impact based on application scenarios. Science of Science and Management of S & T, 42(5), 3.
  • Kaber, D. B. (2018). A conceptual framework of autonomous and automated agents. Theoretical Issues in Ergonomics Science, 19(4), 406–430. https://doi.org/10.1080/1463922X.2017.1363314
  • Kaluarachchi, T., Reis, A., & Nanayakkara, S. (2021). A review of recent deep learning approaches in human-centered machine learning. Sensors, 21(7), 2514. https://doi.org/10.3390/s21072514
  • Kistan, T., Gardi, A., & Sabatini, R. (2018). Machine learning and cognitive ergonomics in air traffic management: Recent developments and considerations for certification. Aerospace, 5(4), 103. https://doi.org/10.3390/aerospace5040103
  • Klein, H. A., Lin, M. H., Miller, N. L., Militello, L. G., Lyons, J. B., & Finkeldey, J. G. (2019). Trust across culture and context. Journal of Cognitive Engineering and Decision Making, 13(1), 10–29. https://doi.org/10.1177/1555343418810936
  • Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91–95. https://doi.org/10.1109/MIS.2004.74
  • Kleppe, M., & Otte, M. (2017). Analysing and understanding news consumption patterns by tracking online user behaviour with a multimodal research design. Digital Scholarship in the Humanities, 32(suppl_2), ii158–ii170. https://doi.org/10.1093/llc/fqx030
  • Kies, J. K., Williges, R. C., & Rosson, M. B. (1998). Coordinating computer‐supported cooperative work: A review of research issues and strategies. Journal of the American Society for Information Science, 49(9), 776–791.
  • Kulesza, T., Burnett, M., Wong, W. K., & Stumpf, S. (2015). Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 126–137). ACM. https://doi.org/10.1145/2678025.2701399
  • Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). Big data. The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203–1205. https://doi.org/10.1126/science.1248506
  • Lau, N., Fridman, L., Borghetti, B. J., & Lee, J. D. (2018). Machine learning and human factors: Status, applications, and future directions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 135–138. https://doi.org/10.1177/1541931218621031
  • Lee, J. D., & Kolodge, K. (2018). Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 1399–1403. https://doi.org/10.1177/1541931218621319
  • Leibo, J. Z. (2018). Psychlab: A psychology laboratory for deep reinforcement learning agents. arXiv [cs.AI].
  • Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11. https://doi.org/10.1109/THFE2.1960.4503259
  • Lieberman, H. (2009). User interface goals, AI opportunities. AI Magazine, 30(4), 16–22. https://doi.org/10.1609/aimag.v30i4.2266
  • Li, F. F. (2018). How to make A.I. that’s good for people. The New York Times. https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html
  • Li, F. F., & Etchemendy, J. (2018). A common goal for the brightest minds from Stanford and beyond: Putting humanity at the center of AI.
  • Liu, Y., Wang, Y., Bian, Y., Ren, L., & Xuan, Y. (2018). A psychological model of human-computer cooperation for the era of artificial intelligence. Scientia Sinica Informationis, 48(4), 376–389. https://doi.org/10.1360/N112017-00225
  • Lyons, J. B., Mahoney, S., Wynne, K. T., Roebke, M. A. (2018). Viewing machines as teammates: A qualitative study. In 2018 AAAI Spring Symposium Series.
  • Madni, A. M., & Madni, C. C. (2018). Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems, 6(4), 44, 1–17. https://doi.org/10.3390/systems6040044
  • Martelaro, N., & Ju, W. (2017). WoZ way: Enabling real-time remote interaction prototyping & observation in on-road vehicles. In ACM Conference on Computer-Supported Cooperative Work and Social Computing, February 25–March 1, 2017.
  • McGregor, S. (2021). AI incident database. Retrieved July 12, 2021, from https://incidentdatabase.ai/
  • Miller, T., Howe, P., Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum. https://arxiv.org/pdf/1712.00547.pdf
  • Mittelstadt, B. (2019). AI ethics–too principled to fail? arXiv 1906.06668.
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. https://doi.org/10.1038/nature14236
  • Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432–440. https://doi.org/10.1016/j.chb.2017.02.067
  • Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., & Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. arXiv 1902.01876.
  • Mumaw, R. J., Boonman, D., Griffin, J., & Xu, W. (2000). Training and design approaches for enhancing automation awareness (Boeing Document D6-82577), December 2000.
  • Mumaw, R., Sarter, N., Wickens, C., Kimball, S., Nikolic, M., Marsh, R., & Xu, X. (2000). Analysis of pilot monitoring and performance on highly automated flight decks (NASA Final Project Report: NAS2-99074). NASA Ames Research Center.
  • Nagao, K. (2019). Symbiosis between humans and artificial intelligence. In Artificial Intelligence Accelerates Human Learning (pp. 135–151). Springer.
  • Navarro, J. (2019). A state of science on highly automated driving. Theoretical Issues in Ergonomics Science, 20(3), 366–396. https://doi.org/10.1080/1463922X.2018.1439544
  • NHTSA. (2018). Automated vehicles for safety. National Highway Traffic Safety Administration (NHTSA) Report. https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety
  • NTSB. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, May 7, 2016 (Accident report). National Transportation Safety Board (NTSB).
  • Neuhauser, L., & Kreps, G. L. (2011). Participatory design and artificial intelligence: Strategies to improve health communication for diverse audiences. In AAAI Spring Symposium (pp. 49–52). Association for the Advancement of Artificial Intelligence.
  • Oliveira, J. D., Couto, J. C., Paixão-Cortes, V. S. M., & Bordini, R. H. (2022). Improving the design of ambient intelligence systems: Guidelines based on a systematic review. International Journal of Human–Computer Interaction, 38(1), 19–27. https://doi.org/10.1080/10447318.2021.1926114
  • Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. (2014). Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3), 476–488. https://doi.org/10.1177/0018720813501549
  • O’Neill, T., McNeese, N., Barron, A., & Schelble, B. (2020). Human-autonomy teaming: A review and analysis of the empirical literature. Human Factors. https://doi.org/10.1177/0018720820960865
  • Ötting, S. K. (2020). Artificial intelligence as colleague and supervisor: Successful and fair interactions between intelligent technologies and employees at work [Dissertation]. University of Bielefeld.
  • Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.1002/hbe2.117
  • Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354
  • Pásztor, D. (2018). AI UX: 7 principles of designing good AI products. UXStudio https://uxstudioteam.com/ux-blog/ai-ux/
  • Prada, R., & Paiva, A. (2014). Human-agent interaction: Challenges for bringing humans and agents together. In Proc. of the 3rd Int. Workshop on Human-Agent Interaction Design and Models (HAIDM 2014) at the 13th Int. Conf. on Agent and Multi-Agent Systems (AAMAS 2014), 1–10.
  • PricewaterhouseCoopers (2018). Explainable AI: Driving Business Value through greater understanding. https://www.pwc.co.uk/audit-assurance/assets/explainable-ai.pdf
  • Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM.
  • Rogers, Y., & Marshall, P. (2017). Research in the Wild. Morgan & Claypool Publishers. https://doi.org/10.2200/S00764ED1V01Y201703HCI037
  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114. https://doi.org/10.1609/aimag.v36i4.2577
  • Society of Automotive Engineers. (2018). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. Recommended Practice J3016 (revised 2018-06).
  • Salmon, P. M. (2019). The horse has bolted! Why human factors and ergonomics has to catch up with autonomous vehicles (and other advanced forms of automation): Commentary on Hancock (2019) “Some pitfalls in the promises of automated and autonomous vehicles”. Ergonomics, 62(4), 502–504. https://doi.org/10.1080/00140139.2018.1563333
  • Salmon, P. M., Hancock, P., & Carden, A. W. (2019). To protect us from the risks of advanced artificial intelligence, we need to act now. The Conversation. The Conversation Media Group.
  • Salvi, C., Bricolo, E., Kounios, J., Bowden, E., & Beeman, M. (2016). Insight solutions are correct more often than analytic solutions. Thinking & Reasoning, 22(4), 443–460. https://doi.org/10.1080/13546783.2016.1141798
  • Sarter, N. B., & Woods, D. D. (1995). How in the world did we ever get into that mode: Mode error and awareness in supervisory control. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1), 5–19. https://doi.org/10.1518/001872095779049516
  • Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators (Technical report). Massachusetts Institute of Technology, Man-Machine Systems Laboratory.
  • Shively, R. J., Lachter, J., Brandt, S. L., Matessa, M., Battiste, V., & Johnson, W. W. (2017). Why human-autonomy teaming? In International conference on applied human factors and ergonomics (pp. 3–11). Springer.
  • Shneiderman, B. (2020a). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  • Shneiderman, B. (2020b). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12, 109–124. https://doi.org/10.17705/1thci.00131
  • Shneiderman, B. (2020c). Design lessons from AI’s two grand goals: Human emulation and useful applications. IEEE Transactions on Technology and Society, 1(2), 73–82. https://doi.org/10.1109/TTS.2020.2992669
  • Shneiderman, B. (2020d). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 1–31. https://doi.org/10.1145/3419764
  • Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Grand challenges for HCI researchers. Interactions, 23(5), 24–25.
  • Spencer, J., Poggi, J., & Gheerawo, R. (2018). Designing out stereotypes in artificial intelligence: Involving users in the personality design of a digital assistant. In Proceedings of The 4th EAI International Conference on Smart Objects and Technologies for Social Good (pp. 130–135).
  • Sperrle, F., El‐Assady, M., Guo, G., Borgo, R., Chau, D. H., Endert, A., & Keim, D. (2021, June). A survey of human-centered evaluations in human-centered machine learning. Computer Graphics Forum, 40(3), 543–567. https://doi.org/10.1111/cgf.14329
  • Stephanidis, C., Salvendy, G., Antona, M., Chen, J. Y., Dong, J., Duffy, V. G., & Zhou, J. (2019). Seven HCI grand challenges. International Journal of Human-Computer Interaction, 35(14), 1229–1269. https://doi.org/10.1080/10447318.2019.1619259
  • Strauch, B. (2017). Ironies of automation: Still unresolved after all these years. IEEE Transactions on Human-Machine Systems, 99, 1–15. https://doi.org/10.1109/THMS.2017.2732506
  • Streitz, N. (2007). From human–computer interaction to human–environment interaction: Ambient intelligence and the disappearing computer. In Proceedings of the 9th ERCIM Workshop on User Interfaces for All (pp. 3–13). Springer. https://doi.org/10.1007/978-3-540-71025-7_1
  • Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026
  • van Allen, P. (2018). Prototyping ways of prototyping AI. Interactions, 25(6), 46–51. https://doi.org/10.1145/3274566
  • van Diggelen, J., Barnhoorn, J., Post, R., Sijs, J., van der Stap, N., & van der Waa, J. (2021). Delegation in human-machine teaming: Progress, challenges and prospects. In Intelligent human systems integration. Springer.
  • Wang, F. C. (2019). Intelligent control. China Science and Technology Press.
  • Wang, W., & Siau, K. (2018). Artificial intelligence: A study on governance, policies, and regulations. In MWAIS 2018 Proceedings (Vol. 40).
  • Watson, D. P., & Scheidt, D. H. (2005). Autonomous systems. Johns Hopkins APL Technical Digest, 26(4), 368–376.
  • Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101(12), 197–209. https://doi.org/10.1016/j.chb.2019.07.027
  • Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2015). Engineering psychology and human performance. Psychology Press.
  • Wickens, C. D., & Kessel, C. (1979). The effects of participatory mode and task workload on the detection of dynamic system failures. IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 24–34. https://doi.org/10.1109/TSMC.1979.4310070
  • Winograd, T. (2006). Shifting viewpoints: Artificial intelligence and human–computer interaction. Artificial Intelligence, 170(18), 1256–1258. https://doi.org/10.1016/j.artint.2006.10.011
  • Wooldridge, M. (2009). An introduction to multiagent systems. John Wiley & Sons.
  • Xu, W. (2004). Status and challenges: Human factors in developing modern civil flight decks. Journal of Ergonomics, 10(4), 53–56.
  • Xu, W. (2005). Recent trend of research and applications on human-computer interaction. Journal of Ergonomics, 11(4), 37–40.
  • Xu, W. (2007). Identifying problems and generating recommendations for enhancing complex systems: Applying the abstraction hierarchy framework as an analytical tool. Human Factors, 49(6), 975–994. https://doi.org/10.1518/001872007X249857
  • Xu, W. (2018). User-centered design (III): Methods for user experience and innovative design in the intelligent era. Chinese Journal of Applied Psychology, 25(1), 3–17.
  • Xu, W. (2019a). Toward human-centered AI: A perspective from human-computer interaction. Interactions, 26(4), 42–46. https://doi.org/10.1145/3328485
  • Xu, W. (2019b). User-centered design (IV): Human-centered artificial intelligence. Chinese Journal of Applied Psychology, 25(4), 291–305.
  • Xu, W. (2020). User-centered design (V): From automation to the autonomy and autonomous vehicles in the intelligence era. Chinese Journal of Applied Psychology, 26(2), 108–129.
  • Xu, W. (2021). From automation to autonomy and autonomous vehicles: Challenges and opportunities for human-computer interaction. Interactions, 28(1), 48–53. https://doi.org/10.1145/3434580
  • Xu, W., & Ge, L. (2018). New trends in human factors. Advances in Psychological Science, 26(9), 1521. https://doi.org/10.3724/SP.J.1042.2018.01521
  • Xu, W., & Ge, L. (2020). Engineering psychology in the era of artificial intelligence. Advances in Psychological Science, 28(9), 1409–1425. https://doi.org/10.3724/SP.J.1042.2020.01409
  • Xu, W., Furie, D., Mahabhaleshwar, M., Suresh, B., & Chouhan, H. (2019). Applications of an interaction, process, integration and intelligence (IPII) design approach for ergonomics solutions. Ergonomics, 62(7), 954–980. https://doi.org/10.1080/00140139.2019.1588996
  • Yang, F., Huang, Z., Scholtz, J., & Arendt, D. L. (2020). How do visual explanations foster end users' appropriate trust in machine learning? In Proceedings of the 25th ACM International Conference on Intelligent User Interfaces (pp. 189–201). https://doi.org/10.1145/3377325.3377480
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. In CHI Conference on Human Factors in Computing Systems (CHI ’20), April 25–30, 2020, Honolulu, HI, USA (p. 12). New York, NY: ACM. https://doi.org/10.1145/3313831.3376301
  • Yang, Q., Sciuto, A., Zimmerman, J., Forlizzi, J., & Steinfeld, A. (2018). Investigating how experienced UX designers effectively work with machine learning. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18) (pp. 585–596). New York, NY: ACM. https://doi.org/10.1145/3196709.3196730
  • Yang, Q. (2018). Machine learning as a UX design material: How can we imagine beyond automation, recommenders, and reminders? In Conference: 2018 AAAI Spring Symposium Series: User Experience of Artificial Intelligence, March 2018, Palo Alto, CA.
  • Yampolskiy, R. V. (2019). Predicting future AI failures from historic examples. Foresight, 21(1), 138–152. https://doi.org/10.1108/FS-04-2018-0034
  • Zanzotto, F. M. (2019). Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64, 243–252. https://doi.org/10.1613/jair.1.11345
  • Zhang, X. L., Lyu, F., & Cheng, S. W. (2018). Interaction paradigm in intelligent systems. Scientia Sinica Informationis, 48(4), 406–418. https://doi.org/10.1360/N112017-00217
  • Zhang, X., Khalili, M. M., & Liu, M. (2020). Long-term impacts of fair machine learning. Ergonomics in Design: The Quarterly of Human Factors Applications, 28(3), 7–11. https://doi.org/10.1177/1064804619884160
  • Zheng, N., Liu, Z., Ren, P., Ma, Y., Chen, S., Yu, S., Xue, J., Chen, B., & Wang, F. (2017). Hybrid-augmented intelligence: Collaboration and cognition. Frontiers of Information Technology & Electronic Engineering, 18(2), 153–179. https://doi.org/10.1631/FITEE.1700053
  • Zhou, J., & Chen, F. (Eds.). (2018). 2D transparency space—Bring domain users and machine learning experts together. In Human and machine learning: Visible, explainable, trustworthy and transparent (Kindle ed.). Springer International Publishing.
