
Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM)

E. S. Vorm & David J. Y. Combs
Pages 1828-1845 | Received 01 Apr 2021, Accepted 18 Apr 2022, Published online: 20 May 2022
 

Abstract

Intelligent systems, such as technologies related to artificial intelligence, robotics, and machine learning, open new insights into data and expand the concept of work in myriad domains. These technologies, while potentially useful, face high barriers to widespread adoption and acceptance by industries and citizens alike. The complexity and multi-dimensionality inherent in intelligent systems often render traditional validation efforts (e.g., traceability analysis) impossible. In addition, contexts where predictions or computer-generated recommendations have real-world consequences, such as medical prognosis, financial investing, or military applications, introduce new risks and a host of moral and ethical concerns that can further hinder the widespread adoption of intelligent systems. Naturally, such reluctance by would-be users limits the potential of intelligent systems to solve real-world problems. This article reviews the challenges to technology acceptance through the lens of system transparency and user trust, and extends the Technology Acceptance Model (TAM) structure with issues germane to intelligent systems. We examine several prospective transparency frameworks that could be adopted and used by Human-Computer Interaction (HCI) practitioners involved in systems development. Our intention is to assist practitioners in the design of more transparent systems, with a specific eye towards enhancing trust and acceptance in intelligent systems. Further, as a result of our review, we suggest that the well-known TAM should be expanded in the context of intelligent systems to include trust and transparency as key elements of the model. Finally, we conclude with a research agenda that could yield empirical evidence showing how transparency might enhance acceptance and use of intelligent systems.


Correction Statement

This article was originally published with errors, which have now been corrected in the online version. Please see the Correction (http://dx.doi.org/10.1080/10447318.2022.2082649).

Acknowledgements

We would like to acknowledge Dr. Terry Fong, Chief Roboticist at NASA Ames, for his valuable insights and assistance in theorizing these ideas; Dr. Glen Henshaw of the US Naval Research Laboratory’s Space Sciences Division for his useful and thoughtful feedback; and Dr. Chris Wickens, Professor Emeritus at the University of Illinois, for his invaluable insight and guidance.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 e.g., Orsosky et al. (2014).

2 These procedures are often thought of as cousins of one another; see https://www.qualtrics.com/blog/an-introduction-to-maxdiff/

3 For ease of depiction, we do not present each potential item to test nor the proposed moderating variables noted in Venkatesh & Bala (2008).

Additional information

Notes on contributors

E. S. Vorm

E. S. Vorm is a cognitive systems engineer with a PhD in Human-Computer Interaction from Indiana University. As the deputy director for the Laboratory for Autonomous Systems Research at the US Naval Research Laboratory in Washington, DC, he works to improve human-machine collaboration through the deliberate design of intelligent systems.

David J. Y. Combs

David J. Y. Combs is an experimental social psychologist with a PhD from the University of Kentucky. He has over 10 years of experience across the government, non-profit, and private sectors. He specializes in leveraging social and behavioral science models to make the world a better place.
