Abstract
Trust plays an essential role in the interaction between humans and artificial intelligence (AI). To promote trust in AI, information about the AI's performance should be communicated effectively to users. Accordingly, this paper investigates how information about AI performance should be presented, focusing on message framing and the ownership of decisions. A 2 (ownership: no ownership vs. ownership) × 3 (message framing: no information vs. negative information vs. positive information) between-subjects experiment was conducted (N = 120). Participants were asked to choose items to help them survive in the desert, supported by an AI decision aid. The results showed that participants without decision ownership perceived higher trust than those with decision ownership. In addition, trust was higher when participants were given no performance information than when they were given such information. The results underscore the importance of carefully designing communication about AI performance. The implications of this study are discussed.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 The 15 prearranged items, in order of importance, were: cosmetic mirror, plastic raincoat, a quart of water per person, flashlight, pocket knife, red and white parachute, .45 caliber pistol, a pair of sunglasses, compress kit with gauze, compass, sectional air map of the area, a book entitled Edible Animals of the Desert, 2 quarts of 180-proof vodka, 100 salt tablets, and 1 quart of rubbing alcohol.
2 In the human-computer interaction domain, trust and credibility are largely undistinguished from one another and are used interchangeably (Fogg & Tseng, 1999). Fogg and Tseng (1999) pointed out that the notions of trust and credibility with respect to computers should be separated. In addition, Schroeder et al. (2021) distinguished credibility, the perceived quality of someone or something, from trust, which has been characterized as an attitude toward someone or something accompanied by a belief that an agent will assist an individual in achieving their purpose. Fogg and Tseng (1999) acknowledge that trust and credibility are interrelated and difficult to distinguish. For instance, the dimensions of trust and credibility appear to have much in common. Early definitions of credibility characterize it as a mixture of competence, trustworthiness, and goodwill (McCroskey & Teven, 1999). This view of credibility is similar to the factors affecting trust, which include ability, integrity, and benevolence (Mayer et al., 1995), or trust in automation, which is based on performance, process, and purpose (Lee & See, 2004). Thus, in this study, credibility is considered to be a cognitive aspect of trust.
Additional information
Funding
Notes on contributors
Taenyun Kim
Taenyun Kim is a researcher at the Interaction Science Research Center, Sungkyunkwan University. He received an M.S. in Interaction Science and a double major of a B.A. in Psychology and a B.S. in Informatics at Sungkyunkwan University. His research interests include trust in autonomous agents and human cognitive bias in automation systems.
Hayeon Song
Hayeon Song is a Professor in the Department of Interaction Science at Sungkyunkwan University, South Korea. She received a Ph.D. in Communication from the University of Southern California. Her primary research interest is investigating ways to use new media as a persuasive and educational vehicle for health promotion.