Abstract
Artificial intelligence (AI), a branch of computer science based upon algorithms that can analyze data and make decisions autonomously, is becoming increasingly prevalent in the technology that powers modern society. Relatively little research has examined how humans modify their judgments in response to their interactions with AI. Our research explores how people respond to different types of risk management advice received from AI vs. a human expert in two contexts where AI is commonly deployed: medicine and finance. Through online studies with representative samples of Americans, we first find that participants generally prefer to receive medical and financial risk management advice from humans over AI. In two follow-up studies, we presented participants with a hypothetical medical or financial risk and asked them to make an initial decision—to address the risk immediately or to wait for more information—and to rate their confidence in this decision. Next, participants were informed that either a human expert or AI had analyzed their case and recommended either immediate risk management action or a wait-and-see approach. Participants then made a final decision using the same response scale as before. We compared participants’ initial and final decisions, examining the extent to which participants updated their decisions upon receiving the recommendation as a function of the recommendation itself and its source. We find that participants updated their decisions to a greater degree in response to recommendations from human experts as compared to AI, but the magnitude of this effect differed by context.
Supplemental data for this article are available online at https://doi.org/10.1080/13669877.2021.1958047.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 See Artificial Intelligence: Healthcare’s New Nervous System at https://www.accenture.com/_acnmedia/pdf-49/accenture-health-artificial-intelligence.pdf
2 Previous research has examined the willingness of individuals to accept advice from a non-human system versus a human source, but has used inconsistent terminology, labeling the non-human system either as an “algorithm” (Dietvorst, Simmons, and Massey Citation2015; Logg, Minson, and Moore Citation2019) or as “AI” (Longoni, Bonezzi, and Morewedge Citation2019). We choose to label the non-human system as “AI” in our research because we believe this is how algorithmic systems typically are, and will be, presented to the general public.
3 Both studies were approved by the Health Sciences and Behavioral Sciences Institutional Review Board (protocol number HUM00162568) at the University of Michigan.