A Situation Awareness Perspective on Human-AI Interaction: Tensions and Opportunities

Pages 1789-1806 | Received 12 Nov 2021, Accepted 21 Jun 2022, Published online: 13 Jul 2022
