(Over)Trusting AI Recommendations: How System and Person Variables Affect Dimensions of Complacency

Received 01 Jul 2023, Accepted 29 Dec 2023, Published online: 22 Jan 2024

References

  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6(1), 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Al-Emran, M., & Granić, A. (2021). Is it still valid or outdated? A bibliometric analysis of the Technology Acceptance Model and its applications from 2010 to 2020. In M. Al-Emran & K. Shaalan (Eds.), Recent Advances in Technology Acceptance Models and Theories (pp. 1–12). Springer International Publishing. https://doi.org/10.1007/978-3-030-64987-6_1
  • Amin, M., Rezaei, S., & Abolghasemi, M. (2014). User satisfaction with mobile websites: The impact of perceived usefulness (PU), perceived ease of use (PEOU) and trust. Nankai Business Review International, 5(3), 258–274. https://doi.org/10.1108/NBRI-01-2014-0005
  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Anaraky, R. G., Knijnenburg, B. P., & Risius, M. (2020). Exacerbating mindless compliance: The danger of justifications during privacy decision making in the context of Facebook applications. AIS Transactions on Human-Computer Interaction, 12(2), 70–95. https://doi.org/10.17705/1thci.00129
  • Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 1–16. Advance online publication. https://doi.org/10.1080/10447318.2022.2138826
  • Bahner, J. E. (2008). Übersteigertes Vertrauen in Automation: Der Einfluss von Fehlererfahrungen auf Complacency und Automation Bias [Overtrust in automation: The impact of failure experience on complacency and automation bias] [Dissertation]. Technische Universität Berlin. https://doi.org/10.14279/depositonce-1990
  • Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding is key: An analysis of factors pertaining to trust in a real-world automation system. Human Factors, 60(4), 477–495. https://doi.org/10.1177/0018720818761256
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58(1), 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Bernstein, E. S. (2017). Making transparency transparent: The evolution of observation in management theory. Academy of Management Annals, 11(1), 217–266. https://doi.org/10.5465/annals.2014.0076
  • Brown, T. A. (2006). Confirmatory factor analysis for applied research. The Guilford Press.
  • Brown, T. A., & Moore, M. T. (2012). Confirmatory factor analysis. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 361–379). The Guilford Press.
  • Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–21. https://doi.org/10.1145/3449287
  • Bussone, A., Stumpf, S., & O’Sullivan, D. (2015). The role of explanations on trust and reliance in clinical decision support systems. In International Conference on Healthcare Informatics (pp. 160–169). IEEE. https://doi.org/10.1109/ICHI.2015.26
  • Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131. https://doi.org/10.1037/0022-3514.42.1.116
  • Carmines, E. G., & McIver, J. P. (1981). Analyzing models with unobserved variables: Analysis of covariance structures. In G. W. Bohrnstedt & E. F. Borgatta (Eds.), Social measurement: Current issues (pp. 65–115). Sage Publications, Inc.
  • Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
  • Chen, Y.-F., & Lan, Y.-C. (2016). An empirical study of the factors affecting mobile shopping in Taiwan. International Journal of Technology and Human Interaction, 10(1), 19–30. https://doi.org/10.4018/ijthi.2014010102
  • Choung, H., David, P., & Ross, A. (2021). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates. https://doi.org/10.4324/9780203771587
  • Cohen, J. (1992). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98–101. https://doi.org/10.1111/1467-8721.ep10768783
  • Dang, J., King, K. M., & Inzlicht, M. (2020). Why are self-report and behavioral measures weakly correlated? Trends in Cognitive Sciences, 24(4), 267–269. https://doi.org/10.1016/j.tics.2020.01.007
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
  • Dillon, A. (2001). User acceptance of information technology. In W. Karwowski (Ed.), Encyclopedia of human factors and ergonomics (1st ed., Vol. 1, pp. 1–11). Taylor and Francis.
  • Dunn, N., Dingus, T., & Soccolich, S. (2019). Understanding the impact of technology: Do advanced driver assistance and semi-automated vehicle systems lead to improper driving behavior? [Technical Report] (pp. 1–103). AAA Foundation for Traffic Safety. https://trid.trb.org/view/1673569
  • Eiband, M., Buschek, D., Kremer, A., & Hussmann, H. (2019). The impact of placebic explanations on trust in intelligent systems. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–6). ACM. https://doi.org/10.1145/3290607.3312787
  • European Commission. (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  • Fallon, C. K., & Blaha, L. M. (2018). Improving automation transparency: Addressing some of machine learning’s unique challenges. In D. D. Schmorrow & C. M. Fidopiastis (Eds.), Augmented cognition: Intelligent technologies (Vol. 10915, pp. 245–254). Springer International Publishing. https://doi.org/10.1007/978-3-319-91470-1_21
  • Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Gupta, M. (2023). Explainable Artificial Intelligence (XAI): Understanding and future perspectives. In A. E. Hassanien, D. Gupta, A. K. Singh, & A. Garg (Eds.), Explainable edge AI: A futuristic computing perspective (Vol. 1072, pp. 19–33). Springer. https://doi.org/10.1007/978-3-031-18292-1_2
  • Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). Guilford Press.
  • Hayes, A. F., & Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But…. Communication Methods and Measures, 14(1), 1–24. https://doi.org/10.1080/19312458.2020.1718629
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Holden, R. J., & Karsh, B.-T. (2010). The Technology Acceptance Model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159–172. https://doi.org/10.1016/j.jbi.2009.07.002
  • Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655. https://doi.org/10.1257/000282802762024700
  • Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: The System Causability Scale (SCS). Künstliche Intelligenz, 34(2), 193–198. https://doi.org/10.1007/s13218-020-00636-z
  • Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Langer, E. J., Blank, A., & Chanowitz, B. (1978). The mindlessness of ostensibly thoughtful action: The role of “placebic” information in interpersonal interaction. Journal of Personality and Social Psychology, 36(6), 635–642. https://doi.org/10.1037/0022-3514.36.6.635
  • Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review, 9(2), 1–16. https://doi.org/10.14763/2020.2.1469
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lee, S., Moon, W.-K., Lee, J.-G., & Sundar, S. S. (2023). When the machine learns from users, is it helping or snooping? Computers in Human Behavior, 138, 107427. https://doi.org/10.1016/j.chb.2022.107427
  • Lopes, P., Silva, E., Braga, C., Oliveira, T., & Rosado, L. (2022). XAI systems evaluation: A review of human and computer-centred methods. Applied Sciences, 12(19), 9423. https://doi.org/10.3390/app12199423
  • Macmillan, N. A. (1993). Signal detection theory as data analysis method and psychological decision model. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues (pp. 21–57). Lawrence Erlbaum Associates.
  • Manzey, D., & Bahner, J. E. (2005). Vertrauen in Automation als Aspekt der Verlässlichkeit von Mensch-Maschine-Systemen [Trust in automation as an aspect of the reliability of human-machine systems]. In K. Karrer, B. Gauss, & C. Steffens (Eds.), Beiträge zur Mensch-Maschine-Systemtechnik aus Forschung und Praxis – Festschrift für Klaus-Peter Timpe (1st ed., pp. 93–109). Springer.
  • Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from 1986 to 2013. Universal Access in the Information Society, 14(1), 81–95. https://doi.org/10.1007/s10209-014-0348-1
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
  • Merritt, S. M., Ako-Brew, A., Bryant, W. J., Staley, A., McKenna, M., Leone, A., & Shirase, L. (2019). Automation-induced complacency potential: Development and validation of a new scale. Frontiers in Psychology, 10(1), 225. https://doi.org/10.3389/fpsyg.2019.00225
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Molina, M. D., & Sundar, S. S. (2022). When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication, 27(4), 1–12. https://doi.org/10.1093/jcmc/zmac010
  • Moray, N. (2003). Monitoring, complacency, scepticism and eutactic behaviour. International Journal of Industrial Ergonomics, 31(3), 175–178. https://doi.org/10.1016/S0169-8141(02)00194-4
  • Noetel, M., Griffith, S., Delaney, O., Harris, N. R., Sanders, T., Parker, P., del Pozo Cruz, B., & Lonsdale, C. (2021). Multimedia design for learning: An overview of reviews with meta-meta-analysis. Review of Educational Research, 92(3), 413–454. https://doi.org/10.3102/00346543211052329
  • Parasuraman, R., & Manzey, D. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
  • Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
  • Prinzel, L. J., DeVries, H., Freeman, F. G., & Mikulka, P. (2001). Examination of automation-induced complacency and individual difference variates [Technical Memorandum No. TM-2001-211413]. National Aeronautics and Space Administration Langley Research Center.
  • Putnam, V., & Conati, C. (2019). Exploring the need for Explainable Artificial Intelligence (XAI) in Intelligent Tutoring Systems (ITS). In C. Trattner, D. Parra, & N. Riche (Eds.), Joint Proceedings of the ACM IUI 2019 Workshops (Vol. 2327). http://ceur-ws.org/Vol-2327/IUI19WS-ExSS2019-19.pdf
  • Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31–38. https://doi.org/10.1038/s41591-021-01614-0
  • Rey, G. D. (2012). A review of research and a meta-analysis of the seductive detail effect. Educational Research Review, 7(3), 216–237. https://doi.org/10.1016/j.edurev.2012.05.003
  • Ribera, M., & Lapedriza García, À. (2019). Can we do better explanations? A proposal of user-centered explainable AI. In C. Trattner, D. Parra, & N. Riche (Eds.), Joint Proceedings of the ACM IUI 2019 Workshops co-located with the 24th ACM Conference on Intelligent User Interfaces (ACM IUI 2019), Los Angeles, USA. ACM. http://hdl.handle.net/10609/99643
  • Schaffer, J., O’Donovan, J., Michaelis, J., Raglin, A., & Höllerer, T. (2019). I can do better than your AI: Expertise and explanations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 240–251). ACM. https://doi.org/10.1145/3301275.3302308
  • Schwesig, R., Brich, I., Buder, J., Huff, M., & Said, N. (2023). Using Artificial Intelligence (AI)? Risk and opportunity perception of AI predict people’s willingness to use AI. Journal of Risk Research, 26(10), 1053–1084. https://doi.org/10.1080/13669877.2023.2249927
  • Sheridan, T., & Parasuraman, R. (2005). Human-automation interaction. Reviews of Human Factors and Ergonomics, 1(1), 89–129. https://doi.org/10.1518/155723405783703082
  • Shin, D., Zhong, B., & Biocca, F. A. (2020). Beyond user experience: What constitutes algorithmic experiences? International Journal of Information Management, 52(1), 102061. https://doi.org/10.1016/j.ijinfomgt.2019.102061
  • Singh, I. L., Molloy, R., & Parasuraman, R. (1993a). Automation-induced “complacency”: Development of the Complacency-Potential Rating Scale. The International Journal of Aviation Psychology, 3(2), 111–122. https://doi.org/10.1207/s15327108ijap0302_2
  • Singh, I. L., Molloy, R., & Parasuraman, R. (1993b). Individual differences in monitoring failures of automation. The Journal of General Psychology, 120(3), 357–373. https://doi.org/10.1080/00221309.1993.9711153
  • Speith, T. (2022). A review of taxonomies of Explainable Artificial Intelligence (XAI) methods. In 2022 5th ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), Seoul, Republic of Korea (pp. 2239–2250). Association for Computing Machinery. https://doi.org/10.1145/3531146.3534639
  • Srinivasan, R., & Chander, A. (2020). Explanation perspectives from the cognitive sciences — A survey. In C. Bessiere (Ed.), Proceedings of the 29th International Joint Conference on Artificial Intelligence (pp. 4812–4818). International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2020/670
  • Sweller, J., Ayres, P., & Kalyuga, S. (2011). The split-attention effect. In Cognitive load theory. Explorations in the learning sciences, instructional systems and performance technologies (1st ed., Vol. 1, pp. 111–128). Springer.
  • Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440–463. https://doi.org/10.1037/a0018963
  • Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
  • Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
  • Vorm, E. S., & Combs, D. J. Y. (2022). Integrating transparency, trust, and acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM). International Journal of Human–Computer Interaction, 38(18-20), 1828–1845. https://doi.org/10.1080/10447318.2022.2070107
  • Wang, A. Y., & Rhemtulla, M. (2021). Power analysis for parameter estimation in structural equation modeling: A discussion and tutorial. Advances in Methods and Practices in Psychological Science, 4(1), 251524592091825. https://doi.org/10.1177/2515245920918253
  • Wason, P. C., & Evans, J. S. B. T. (1974). Dual processes in reasoning? Cognition, 3(2), 141–154. https://doi.org/10.1016/0010-0277(74)90017-1
  • Wiener, E. L. (1985). Cockpit automation: In need of a philosophy. SAE Transactions, 94(6), 952–958.
  • Wright, J. L., Chen, J. Y. C., Baker, M. J., & Hancock, P. A. (2020). Agent reasoning transparency: The influence of information level on automation-induced complacency. IEEE Transactions on Human-Machine Systems, 50(3), 254–263. https://doi.org/10.1109/THMS.2019.2925717
  • Wu, I.-L., & Chen, J.-L. (2005). An extension of trust and TAM model with TPB in the initial adoption of on-line tax: An empirical study. International Journal of Human-Computer Studies, 62(6), 784–808. https://doi.org/10.1016/j.ijhcs.2005.03.003
  • Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., & Zhang, W. (2019). The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98(1), 207–220. https://doi.org/10.1016/j.trc.2018.11.018