Abstract
An online experiment investigated how perceived collision algorithm type (selfish vs. utilitarian) and social approval of the algorithm (weak vs. strong) jointly affect individuals' attitudes toward automated vehicles (AVs). The results revealed a discrepancy between what individuals consider socially desirable and what they trust or would use. Although participants regarded AVs with utilitarian collision algorithms as more ethical and socially beneficial, they personally preferred AVs with selfish algorithms: they trusted selfish AVs more, reported greater intention to use them, and were more willing to pay a premium for them. Participants also evaluated AVs as more ethical and socially beneficial when the algorithms received strong social approval. For utilitarian algorithms, however, strong social approval did not increase trust or behavioral intention to use AVs; only strong social approval of selfish algorithms increased participants' trust and intention to use them.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Yeon Kyoung Joo
Yeon Kyoung Joo is an associate professor in the Department of Digital Media at Myongji University, Seoul, Korea. She is mainly interested in changing users’ behaviors and attitudes by designing user-centered technology. Her research interests include users’ acceptance of autonomous vehicles, the use of just-in-time information, and VR experiences.
Banya Kim
Banya Kim is a researcher at the Institute of Communication Research at Seoul National University, Seoul, Korea. She is interested in how individuals and society perceive and accept new technologies. Her research interests also include psychological factors affecting users' attitudes and behaviors.