
Trust in AI and Its Role in the Acceptance of AI Technologies

Pages 1727–1739 | Received 30 Jun 2021, Accepted 21 Feb 2022, Published online: 20 Apr 2022

References

  • AI HLEG (2019). Ethics guidelines for trustworthy AI. European Commission. https://data.europa.eu/doi/10.2759/346720
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Beldad, A. D., & Hegner, S. M. (2018). Expanding the technology acceptance model with the inclusion of trust, social influence, and health valuation to determine the predictors of German users’ willingness to continue using a fitness app: A structural equation modeling approach. International Journal of Human–Computer Interaction, 34(9), 882–893. https://doi.org/10.1080/10447318.2017.1403220
  • Calhoun, C. S., Bobko, P., Gallimore, J. J., & Lyons, J. B. (2019). Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. Journal of Trust Research, 9(1), 28–46. https://doi.org/10.1080/21515581.2019.1579730
  • Choi, J. K., & Ji, Y. G. (2015). Investigating the importance of trust on adopting an autonomous vehicle. International Journal of Human–Computer Interaction, 31(10), 692–702. https://doi.org/10.1080/10447318.2015.1070549
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
  • de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331–349. https://doi.org/10.1037/xap0000092
  • Edelman (2021). Edelman trust barometer 2021 (Annual Edelman Trust Barometer, p. 58). https://www.edelman.com/sites/g/files/aatuss191/files/2021-03/2021%20Edelman%20Trust%20Barometer.pdf
  • Erebak, S., & Turgut, T. (2019). Caregivers’ attitudes toward potential robot coworkers in elder care. Cognition, Technology & Work, 21(2), 327–336. https://doi.org/10.1007/s10111-018-0512-0
  • Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers. Computers in Human Behavior, 61, 633–655. https://doi.org/10.1016/j.chb.2016.03.057
  • Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
  • Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607. https://doi.org/10.1016/j.chb.2020.106607
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945. https://doi.org/10.1177/2053951719897945
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  • Kim, J. B. (2012). An empirical study on consumer first purchase intention in online shopping: Integrating initial trust and TAM. Electronic Commerce Research, 12(2), 125–150. https://doi.org/10.1007/s10660-012-9089-5
  • Kim, K., Boelling, L., Haesler, S., Bailenson, J., Bruder, G., & Welch, G. F. (2018). Does a digital assistant need a body? The influence of visual embodiment and social behavior on the perception of intelligent virtual agents in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 105–114). https://doi.org/10.1109/ISMAR.2018.00039
  • Krafft, P. M., Young, M., Katell, M., Huang, K., & Bugingo, G. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 72–78). https://doi.org/10.1145/3375627.3375835
  • Lankton, N., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 880–918. https://doi.org/10.17705/1jais.00411
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Lee, M. K., & Rich, K. (2021). Who is included in human perceptions of AI?: Trust and perceived fairness around healthcare AI and cultural mistrust. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM. https://doi.org/10.1145/3411764.3445570
  • Lockey, S., Gillespie, N., Holm, D., & Someh, I. A. (2021). A review of trust in artificial intelligence: Challenges, vulnerabilities and future directions. In Proceedings of the 54th Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2021.664
  • Madhavan, P., & Wiegmann, D. A. (2007). Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Human Factors, 49(5), 773–785. https://doi.org/10.1518/001872007X230154
  • Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review from 1986 to 2013. Universal Access in the Information Society, 14(1), 81–95. https://doi.org/10.1007/s10209-014-0348-1
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
  • McLean, G., & Osei-Frimpong, K. (2019). Hey Alexa … examine the variables influencing the use of artificial intelligent in-home voice assistants. Computers in Human Behavior, 99, 28–37. https://doi.org/10.1016/j.chb.2019.05.009
  • OECD (2019). Artificial intelligence in society. OECD Publishing. https://doi.org/10.1787/eedfee77-en
  • Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places (1st paperback ed.). CSLI Publications.
  • Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81–96. https://doi.org/10.1080/10447318.2020.1807710
  • Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02
  • Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404. https://doi.org/10.5465/amr.1998.926617
  • Russell, S. J., Norvig, P., Davis, E., & Edwards, D. (2016). Artificial intelligence: A modern approach (3rd ed., Global ed.). Pearson.
  • Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094
  • Shin, D. (2020a). How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior, 109, 106344. https://doi.org/10.1016/j.chb.2020.106344
  • Shin, D. (2020b). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565. https://doi.org/10.1080/08838151.2020.1843357
  • Shin, D. (2020c). Expanding the role of trust in the experience of algorithmic journalism: User sensemaking of algorithmic heuristics in Korean users. Journalism Practice. Advance online publication. https://doi.org/10.1080/17512786.2020.1841018
  • Shin, D. (2021a). Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. Journal of Information Science. Advance online publication. https://doi.org/10.1177/0165551520985495
  • Shin, D. (2021b). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  • Shin, D. (2021c). The perception of humanness in conversational journalism: An algorithmic information-processing perspective. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444821993801
  • Shin, D. (2021d). Why does explainability matter in news analytic systems? Proposing explainable analytic journalism. Journalism Studies, 22(8), 1047–1065. https://doi.org/10.1080/1461670X.2021.1916984
  • Shin, D. (2022). How do people judge the credibility of algorithmic sources? AI & Society, 37, 81–96. https://doi.org/10.1007/s00146-021-01158-4
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
  • Sledgianowski, D., & Kulviwat, S. (2009). Using social networking sites: The effects of playfulness, critical mass and trust in a hedonic context. The Journal of Computer Information Systems, 49(4), 74–83. https://doi.org/10.1080/08874417.2009.11645342
  • Söllner, M., Hoffmann, A., & Leimeister, J. M. (2016). Why different trust relationships matter for information systems users. European Journal of Information Systems, 25(3), 274–287. https://doi.org/10.1057/ejis.2015.17
  • Suh, B., & Han, I. (2002). Effect of trust on customer acceptance of Internet banking. Electronic Commerce Research and Applications, 1(3–4), 247–263. https://doi.org/10.1016/S1567-4223(02)00017-0
  • Sundar, S. S., Jung, E. H., Waddell, T. F., & Kim, K. J. (2017). Cheery companions or serious assistants? Role and demeanor congruity as predictors of robot attraction and use intentions among senior citizens. International Journal of Human-Computer Studies, 97, 88–97. https://doi.org/10.1016/j.ijhcs.2016.08.006
  • Terzopoulos, G., & Satratzemi, M. (2020). Voice assistants and smart speakers in everyday life and in education. Informatics in Education, 19(3), 473–490. https://doi.org/10.15388/infedu.2020.21
  • Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2), 447–464. https://doi.org/10.1007/s12525-020-00441-4
  • Tung, F., Chang, S., & Chou, C. (2008). An extension of trust and TAM model with IDT in the adoption of the electronic logistics information system in HIS in the medical industry. International Journal of Medical Informatics, 77(5), 324–335. https://doi.org/10.1016/j.ijmedinf.2007.06.006
  • Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
  • Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
  • Watson, D. (2019). The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines, 29(3), 417–440. https://doi.org/10.1007/s11023-019-09506-6
  • Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
  • Wu, I.-L., & Chen, J.-L. (2005). An extension of trust and TAM model with TPB in the initial adoption of on-line tax: An empirical study. International Journal of Human-Computer Studies, 62(6), 784–808. https://doi.org/10.1016/j.ijhcs.2005.03.003
  • Wu, J., & Liu, D. (2007). The effects of trust and enjoyment on intention to play online games. Journal of Electronic Commerce Research, 8(2), 128–140.
  • Wu, K., Zhao, Y., Zhu, Q., Tan, X., & Zheng, H. (2011). A meta-analysis of the impact of trust on technology acceptance model: Investigation of moderating influence of subject and context type. International Journal of Information Management, 31(6), 572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004
