Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM)

Pages 1828–1845 | Received 01 Apr 2021, Accepted 18 Apr 2022, Published online: 20 May 2022

References

  • Arrieta, A. B., Díaz-Rodríguez, N., Ser, J. D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Avati, A., Jung, K., Harman, S., Downing, L., Ng, A., & Shah, N. H. (2017). Improving palliative care with deep learning [Paper presentation]. 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (pp. 311–316). IEEE. https://doi.org/10.1109/bibm.2017.8217669
  • Bae, J., Ventocilla, E., Riveiro, M., Helldin, T., & Falkman, G. (2017). Evaluating multi-attributes on cause and effect relationship visualization [Paper presentation]. Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications – IVAPP (pp. 64–74). https://doi.org/10.5220/0006102300640074
  • Bernstein, E. (2017). Making transparency transparent: The evolution of observation in management theory. Academy of Management Annals, 11(1), 217–266. https://doi.org/10.5465/annals.2014.0076
  • Bitzer, T., Wiener, M., & Morana, S. (2021). Algorithmic transparency and contact-tracing apps: An empirical investigation. Presented at the Twenty-Seventh Americas Conference on Information Systems, Montreal.
  • Blasch, E., Sung, J., Nguyen, T., Daniel, C. P., & Mason, A. P. (2019). Artificial intelligence strategies for national security and safety standards. Presented at the AAAI Fall Symposium Series, Arlington, VA.
  • Blume, L. E., & Easley, D. (2016). Rationality. In The new Palgrave dictionary of economics (pp. 1–13). Palgrave Macmillan. https://doi.org/10.1057/978-1-349-95121-5_2138-1
  • Bowen, J., Winckler, M., & Vanderdonckt, J. (2020). A glimpse into the past, present, and future of engineering interactive computing systems. Proceedings of the ACM on Human-Computer Interaction, 4(EICS), 1–32. https://doi.org/10.1145/3394973
  • Chen, J. Y. C., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency (Report No. ARL-TR-6905). U.S. Army Research Laboratory.
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
  • Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–820. https://doi.org/10.1080/21670811.2016.1208053
  • Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-computer interaction (3rd ed.). Pearson Prentice Hall.
  • DNI (2015). Intelligence community directive 203 (ICD 203). https://www.dni.gov/files/documents/ICD/ICD%20203%20Analytic%20Standards.pdf
  • Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015, April). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153–162). https://doi.org/10.1145/2702123.2702556
  • EU (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497–530. https://doi.org/10.2307/249487
  • Harrigan, M., Feddema, K., Wang, S., Harrigan, P., & Diot, E. (2021). How trust leads to online purchase intention founded in perceived usefulness and peer communication. Journal of Consumer Behaviour, 20(5), 1297–1312. https://doi.org/10.1002/cb.1936
  • Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations [Paper presentation]. Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW ’00) (pp. 241–250). https://doi.org/10.1145/358916.358995
  • Hoff, K., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Hoffman, R. (2017). A taxonomy of emergent trusting in the human-machine relationship. In Cognitive systems engineering: The future for a changing world (1st ed., pp. 137–164). CRC Press. https://doi.org/10.1201/9781315572529-8
  • Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: The System Causability Scale (SCS): Comparing human and machine explanations. Künstliche Intelligenz, 34(2), 193–198. https://doi.org/10.1007/s13218-020-00636-z
  • Horrigan, J. B. (2017). How people approach facts and information. Pew Research Center.
  • IBM (2020). What’s next for AI – building trust. https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html#section2
  • Jung, J., Park, E., Moon, J., & Lee, W. S. (2021). Exploration of sharing accommodation platform Airbnb using an extended technology acceptance model. Sustainability, 13(3), 1185. https://doi.org/10.3390/su13031185
  • Kim, W., Kim, N., Lyons, J. B., & Nam, C. S. (2020). Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modeling approach. Applied Ergonomics, 85, 103056. https://doi.org/10.1016/j.apergo.2020.103056
  • Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91–95. https://doi.org/10.1109/MIS.2004.74
  • Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2014). Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM), 9, 269–275. https://doi.org/10.1007/s12008-014-0227-2
  • Lee, J. D., & Kolodge, K. (2020). Exploring trust in self-driving vehicles through text analysis. Human Factors: The Journal of the Human Factors and Ergonomics Society, 62(2), 260–277. https://doi.org/10.1177/0018720819872672
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lipton, Z. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231
  • Lyons, J. (2013). Being transparent about transparency: A model for human-robot interaction. In Trust and autonomous systems: Papers from the 2013 AAAI Spring Symposium.
  • Lyons, J., Koltai, K., Ho, N., Johnson, W., Smith, D., & Shively, R. J. (2016). Engineering trust in complex automated systems. Ergonomics in Design: The Quarterly of Human Factors Applications, 24(1), 13–17. https://doi.org/10.1177/1064804615611272
  • Lyons, J. B., Vo, T., Wynne, K. T., Mahoney, S., Nam, C. S., & Gallimore, D. (2020). Trusting autonomous security robots: The role of reliability and stated social intent. Human Factors, 63(4), 603–618. https://doi.org/10.1177/0018720820901629
  • Ma, R. H. Y., Morris, A., Herriotts, P., & Birrell, S. (2021). Investigating what level of visual information inspires trust in a user of a highly automated vehicle. Applied Ergonomics, 90, 103272. https://doi.org/10.1016/j.apergo.2020.103272
  • Malle, B. F., & Ullman, D. (2021). A multidimensional conception and measure of human-robot trust. In Trust in human-robot interaction (pp. 3–25). Elsevier. https://doi.org/10.1016/b978-0-12-819472-0.00001-0
  • Marwick, A., & Boyd, D. (2011). To see and be seen: Celebrity practice on Twitter. Convergence: The International Journal of Research into New Media Technologies, 17(2), 139–158. https://doi.org/10.1177/1354856510394539
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
  • Meske, C., & Bunde, E. (2020). Transparency and trust in Human-AI-Interaction: The role of model-agnostic explanations in computer vision-based decision support. In H. Degen, & L. Reinerman-Jones (Eds.), Artificial intelligence in HCI. HCII 2020. Lecture notes in computer science (LNISA, Vol. 12217). Springer. https://doi.org/10.1007/978-3-030-50334-5_4
  • Mühlbacher, T., Piringer, H., Gratzl, S., Sedlmair, M., & Streit, M. (2014). Opening the black box: Strategies for increased user involvement in existing algorithm implementations. IEEE Transactions on Visualization and Computer Graphics, 20(12), 1643–1652. https://doi.org/10.1109/TVCG.2014.2346578
  • Mumaw, R. J. (2017). Analysis of alerting system failures in commercial aviation accidents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1), 110–114. https://doi.org/10.1177/1541931213601493
  • Mumaw, R. J., Roth, E. M., Vicente, K. J., & Burns, C. M. (2000). There is more to monitoring a nuclear power plant than meets the eye. Human Factors, 42(1), 36–55. https://doi.org/10.1518/001872000779656651
  • Ososky, S., Sanders, T., Jentsch, F., Hancock, P., & Chen, J. Y. C. (2014). Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems. Presented at SPIE Defense + Security. SPIE. https://doi.org/10.1117/12.2050622
  • Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics. Part A, Systems and Humans: A Publication of the IEEE Systems, Man, and Cybernetics Society, 30(3), 286–297. https://doi.org/10.1109/3468.844354
  • Patterson-Hann, V., & Watson, P. (2022). The precursors of acceptance for a prosumer-led transition to a future smart grid. Technology Analysis & Strategic Management, 34(3), 307–315. https://doi.org/10.1080/09537325.2021.1896698
  • Pharmer, J. (2004). An investigation into providing feedback to users of decision support [Doctoral dissertation]. Electronic Theses and Dissertations (p. 224). https://stars.library.ucf.edu/etd/22
  • Poon, A. I. F., & Sung, J. J. Y. (2021). Opening the black box of AI-Medicine. Journal of Gastroenterology and Hepatology, 36(3), 581–584. https://doi.org/10.1111/jgh.15384
  • Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y
  • Ren, F., & Bao, Y. (2020). A review on human-computer interaction and intelligent robots. International Journal of Information Technology & Decision Making, 19(01), 5–47. https://doi.org/10.1142/S0219622019300052
  • Riveiro, M., Helldin, T., Falkman, G., & Lebram, M. (2014). Effects of visualizing uncertainty on decision-making in a target identification scenario. Computers & Graphics, 41, 84–98. https://doi.org/10.1016/j.cag.2014.02.006
  • Rogers, Y., Sharp, H., & Preece, J. (2015). Interaction design: Beyond human-computer interaction (4th ed.). Wiley Publishing.
  • Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349–363. https://doi.org/10.1037/0033-2909.100.3.349
  • Schwartz, B. (2004). The paradox of choice: Why more is less. HarperCollins Publishers.
  • Sen, S., Geyer, W., Freyne, J., Castells, P., Amatriain, X., & Basilico, J. (2016). Past, present, and future of recommender systems: An industry perspective [Paper presentation]. Proceedings of the 10th ACM Conference on Recommender Systems (pp. 211–214). https://doi.org/10.1145/2959100.2959144
  • Siegrist, M. (2021). Trust and risk perception: A critical review of the literature. Risk Analysis: An Official Publication of the Society for Risk Analysis, 41(3), 480–490. https://doi.org/10.1111/risa.13325
  • Starke, S. D., & Baber, C. (2020). The effect of known decision support reliability on outcome quality and visual information foraging in joint decision making. Applied Ergonomics, 86, 103102. https://doi.org/10.1016/j.apergo.2020.103102
  • Swearingen, K., & Sinha, R. (2001). Beyond algorithms: An HCI perspective on recommender systems. In ACM SIGIR 2001 Workshop on Recommender Systems.
  • Van, H. N., Pham, L., Williamson, S., Chan, C.-Y., Thang, T. D., & Nam, V. X. (2021). Explaining intention to use mobile banking: Integrating perceived risk and trust into the technology acceptance model. International Journal of Applied Decision Sciences, 14(1), 55–80. https://doi.org/10.1504/IJADS.2021.112933
  • Venkatesh, V. (1999). Creation of favorable user perceptions: Exploring the role of intrinsic motivation. MIS Quarterly, 23(2), 239–260. https://doi.org/10.2307/249753
  • Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
  • Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
  • Viégas, F. B., Golder, S., & Donath, J. (2006). Visualizing email content: Portraying relationships from conversational histories [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’06 (pp. 979–988). https://doi.org/10.1145/1124772.1124919
  • Vorm, E. S., & Miller, A. D. (2020). Modeling user information needs to enable successful human-machine teams: Designing transparency for autonomous systems. In D. Schmorrow & C. Fidopiastis (Eds.), Augmented cognition. Human cognition and behavior. HCII 2020. Lecture notes in computer science (LNCS, Vol. 12197). Springer, Cham. https://doi.org/10.1007/978-3-030-50439-7_31
  • Wang, F.-Y., Carley, K. M., Zeng, D., & Mao, W. (2007). Social computing: From social informatics to social intelligence. IEEE Intelligent Systems, 22(2), 79–83. https://doi.org/10.1109/MIS.2007.41
  • Watts, S., & Stenner, P. (2012). Doing Q methodological research. Sage Publishing.
  • Yang, X. J., Unhelkar, V. V., Li, K., & Shah, J. A. (2017). Evaluating effects of user experience and system transparency on trust in automation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 408–416). https://doi.org/10.1145/2909824.3020230
