ABSTRACT
This study conducted an experiment to test how the level of blame differs between an artificial intelligence (AI) driver and a human driver, drawing on attribution theory and the computers are social actors (CASA) paradigm. It used a 2 (human vs. AI driver) x 2 (victim survived vs. victim died) x 2 (female vs. male driver) design. After reading a given scenario, participants (N = 284) were asked to assign a level of responsibility to the driver. Participants blamed the driver more when it was an AI than when it was a human. A higher level of blame was also assigned when the outcome was more severe. However, driver gender had no significant effect on blame attribution. These results indicate that the tendency to blame AI stems from the perception of dissimilarity, and that the severity of outcomes influences the level of blame. Implications of the findings for applications and theory are discussed.
Disclosure of potential conflict of interest
No potential conflict of interest was reported by the authors.
Additional information
Notes on contributors
Joo-Wha Hong
Joo-Wha Hong is a researcher in the Annenberg School for Communication and Journalism at the University of Southern California. His work focuses on human-machine communication, particularly how people perceive artificial intelligence and interact with AI agents.
Yunwen Wang
Yunwen Wang is a researcher in the Annenberg School for Communication and Journalism at the University of Southern California. Her work focuses on emerging technology adoption, user experience, and human-computer interaction in relation to persuasion, health, and wellbeing.
Paulina Lanz
Paulina Lanz is a scholar in the Annenberg School for Communication and Journalism at the University of Southern California. From a cultural and media studies perspective, her work examines materiality as an archival mechanism for storytelling through spatial-temporal remembrance.