Trust in AI and Its Role in the Acceptance of AI Technologies

Hyesun Choung, Prabu David & Arun Ross
Pages 1727-1739 | Received 30 Jun 2021, Accepted 21 Feb 2022, Published online: 20 Apr 2022
Abstract

As AI-enhanced technologies become common in a variety of domains, there is an increasing need to define and examine the trust that users place in such technologies. Given the progress in the development of AI, a correspondingly sophisticated understanding of trust in the technology is required. This paper addresses this need by explaining the role of trust in the intention to use AI technologies. Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students. A path analysis confirmed that trust had a significant effect on the intention to use AI, which operated through perceived usefulness and participants’ attitude toward voice assistants. In Study 2, using data from a representative sample of the U.S. population, different dimensions of trust were examined using exploratory factor analysis, which yielded two dimensions: human-like trust and functionality trust. The results of the path analyses from Study 1 were replicated in Study 2, confirming the indirect effect of trust and the effects of perceived usefulness, ease of use, and attitude on intention to use. Further, both dimensions of trust shared a similar pattern of effects within the model, with functionality-related trust exhibiting a greater total impact on usage intention than human-like trust. Overall, the role of trust in the acceptance of AI technologies was significant across both studies. This research contributes to the advancement and application of the TAM in AI-related applications and offers a multidimensional measure of trust that can be utilized in the future study of trustworthy AI.

Disclosure statement

The authors declare that there is no conflict of interest.

Data availability statement

The data that support the findings of this study are available from the corresponding author, HC, upon request.

Additional information

Notes on contributors

Hyesun Choung

Hyesun Choung is a postdoctoral Research Associate at Michigan State University in the College of Communication Arts & Sciences. She is a media effects researcher interested in social implications of emerging information technologies. Her recent research includes the automation of journalism and ethical implications of automated decision-making.

Prabu David

Prabu David is the dean of the College of Communication Arts & Sciences at Michigan State University. He is a communication researcher who studies media and cognition. His recent research includes multitasking, mobile health, problematic use of social media, and trustworthy and ethical AI.

Arun Ross

Arun Ross is the Cillag Endowed Chair in Engineering and a Professor in the Department of Computer Science and Engineering at Michigan State University. He also serves as the Site Director of the NSF Center for Identification Technology Research (CITeR). His research interests include Machine Learning, Computer Vision, and Biometrics.
