
Algorithm aversion? On the influence of advice accuracy on trust in algorithmic advice

Pages 77-97 | Received 24 Jan 2022, Accepted 24 Apr 2022, Published online: 06 May 2022

References

  • Armstrong, J.S. (1980). The Seer-Sucker Theory: The value of experts in forecasting. Technology Review, 82(7), 16–24.
  • Bhattacherjee, A., & Premkumar, G. (2004). Understanding changes in belief and attitude toward information technology usage: A theoretical model and longitudinal test. MIS Quarterly, 28(2), 229–254. https://doi.org/10.2307/25148634
  • Bigman, Y.E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
  • Bonaccio, S., & Dalal, R.S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101(2), 127–151. https://doi.org/10.1016/j.obhdp.2006.07.001
  • Byström, K., & Järvelin, K. (1995). Task complexity affects information seeking and use. Information Processing & Management, 31(2), 191–213. https://doi.org/10.1016/0306-4573(95)80035-R
  • Castelo, N., Bos, M.W., & Lehman, D.R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
  • Dawes, R.M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2), 95–106. https://doi.org/10.1037/h0037613
  • Deutsch, M. (1958). Trust and suspicion. Journal of Conflict Resolution, 2(4), 265–279. https://doi.org/10.1177/002200275800200401
  • Dietvorst, B.J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314. https://doi.org/10.1177/0956797620948841
  • Dietvorst, B.J., Simmons, J.P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
  • Dietvorst, B.J., Simmons, J.P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
  • Dijkstra, J.J. (1999). User agreement with incorrect expert system advice. Behaviour & Information Technology, 18(6), 399–411. https://doi.org/10.1080/014492999118832
  • Dzindolet, M.T., Peterson, S.A., Pomranky, R.A., Pierce, L.G., & Beck, H.P. (2003). The role of trust in automation reliance. International Journal of Human-computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Dzindolet, M.T., Pierce, L.G., Beck, H.P., & Dawe, L.A. (2002). The perceived utility of human and automated aids in a visual detection task. Human Factors: The Journal of the Human Factors and Ergonomics Society, 44(1), 79–94. https://doi.org/10.1518/0018720024494856
  • Einhorn, H.J. (1986). Accepting error to make less error. Journal of Personality Assessment, 50(3), 387–395. https://doi.org/10.1207/s15327752jpa5003_8
  • Fildes, R., Goodwin, P., Lawrence, M., & Nikolopoulos, K. (2009). Effective forecasting and judgmental adjustments: An empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting, 25(1), 3–23. https://doi.org/10.1016/j.ijforecast.2008.11.010
  • Fishbein, M., & Ajzen, I. (1977). Belief, attitude, intention, and behavior: An introduction to theory and research. Philosophy and Rhetoric, 10(2), 177–188.
  • Gefen, D., & Straub, D. (2003). Managing user trust in B2C e-services. e-Service Journal, 2(2), 7–24. https://doi.org/10.2979/esj.2003.2.2.7
  • Goodyear, K., Parasuraman, R., Chernyak, S., Madhavan, P., Deshpande, G., & Krueger, F. (2016). Advice taking from humans and machines: An fMRI and effective connectivity study. Frontiers in Human Neuroscience, 10, 542. https://doi.org/10.3389/fnhum.2016.00542
  • Highhouse, S. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology, 1(3), 333–342. https://doi.org/10.1111/j.1754-9434.2008.00058.x
  • Hoff, K.A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Jian, J.-Y., Bisantz, A.M., Drury, C.G., & Llinas, J. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
  • Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the 28th European Conference on Information Systems (ECIS), June 15-17, 2020, An Online AIS Conference (pp. 168). https://aisel.aisnet.org/ecis2020_rp/168
  • Komiak, S.Y.X., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, 30(4), 941–960. https://doi.org/10.2307/25148760
  • Komiak, S., Wang, W., & Benbasat, I. (2005). Comparing customer trust in virtual salespersons with customer trust in human salespersons. Proceedings of the 38th Annual Hawaii International Conference on System Sciences, 6 Jan, 2005, Big Island, HI, USA (pp. 175a–175a). IEEE. https://doi.org/10.1109/HICSS.2005.154
  • Kramer, R.M. (2010). Trust barriers in cross-cultural negotiations: Psychological analysis. In M. N. K. Saunders, D. Skinner, G. Dietz, N. Gillespie, & R. J. Lewicki (Eds.), Organizational trust: A cultural perspective (pp. 182). Cambridge University Press.
  • Lankton, N., McKnight, D.H., & Thatcher, J.B. (2014). Incorporating trust-in-technology into Expectation Disconfirmation Theory. The Journal of Strategic Information Systems, 23(2), 128–145. https://doi.org/10.1016/j.jsis.2013.09.001
  • Lee, M.K.O., & Turban, E. (2001). A trust model for consumer internet shopping. International Journal of Electronic Commerce, 6(1), 75–91. https://doi.org/10.1080/10864415.2001.11044227
  • Lewicki, R.J., & Bunker, B.B. (1995). Trust in relationships: A model of development and decline. Jossey-Bass/Wiley.
  • Lewicki, R.J., Tomlinson, E.C., & Gillespie, N. (2006). Models of interpersonal trust development: Theoretical approaches, empirical evidence, and future directions. Journal of Management, 32(6), 991–1022. https://doi.org/10.1177/0149206306294405
  • Li, X., Hess, T.J., & Valacich, J.S. (2008). Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 17(1), 39–71. https://doi.org/10.1016/j.jsis.2008.01.001
  • Logg, J.M., Minson, J.A., & Moore, D.A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  • Longoni, C., Bonezzi, A., & Morewedge, C.K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
  • Madhavan, P., & Wiegmann, D.A. (2007). Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 49(5), 773–785. https://doi.org/10.1518/001872007X230154
  • Madsen, M., & Gregor, S. (2000). Measuring human-computer trust. Proceedings of the 11th Australasian Conference on Information Systems, Brisbane, Australia (pp. 6–8).
  • Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • McKnight, D.H., & Chervany, N.L. (2001). Trust and distrust definitions: One bite at a time. Trust in Cyber-societies (pp. 27–54). Berlin, Heidelberg: Springer.
  • McKnight, D.H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-Commerce: An integrative typology. Information Systems Research, 13(3), 334–359. https://doi.org/10.1287/isre.13.3.334.81
  • McKnight, D.H., Cummings, L.L., & Chervany, N.L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473–490. https://doi.org/10.5465/amr.1998.926622
  • Meehl, P.E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis, MN: University of Minnesota Press.
  • Obermaier, R., & Müller, F. (2008). Management accounting research in the lab – method and applications. Zeitschrift für Planung & Unternehmenssteuerung, 19(3), 325–351. https://doi.org/10.1007/s00187-008-0056-1
  • Önkal, D., Goodwin, P., Thomson, M., Gönül, S., & Pollock, A. (2009). The relative influence of advice from human experts and statistical methods on forecast adjustments. Journal of Behavioral Decision Making, 22(4), 390–409. https://doi.org/10.1002/bdm.637
  • Ord, K., Fildes, R.A., & Kourentzes, N. (2017). Principles of business forecasting. New York, NY: Wessex Press.
  • Pavlou, P.A., & Gefen, D. (2004). Building effective online marketplaces with institution-based trust. Information Systems Research, 15(1), 37–59. https://doi.org/10.1287/isre.1040.0015
  • Prahl, A., & Van Swol, L.M. (2017). Understanding algorithm aversion: When is advice from automation discounted? Journal of Forecasting, 36(6), 691–702. https://doi.org/10.1002/for.2464
  • Renier, L.A., Schmid Mast, M., & Bekbergenova, A. (2021). To err is human, not algorithmic – Robust reactions to erring algorithms. Computers in Human Behavior, 124, 106879. https://doi.org/10.1016/j.chb.2021.106879
  • Ring, P.S. (1996). Fragile and resilient trust and their roles in economic exchange. Business & Society, 35(2), 148–175. https://doi.org/10.1177/000765039603500202
  • Rousseau, D.M., Sitkin, S.B., Burt, R.S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404. https://doi.org/10.5465/amr.1998.926617
  • Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094
  • Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.
  • Soll, J.B., & Larrick, R.P. (2009). Strategies for revising judgment: How (and how well) people use others’ opinions. Journal of Experimental Psychology. Learning, Memory, and Cognition, 35(3), 780–805. https://doi.org/10.1037/a0015145
  • Xie, H., David, A., Mamun, M.R.A., Prybutok, V.R., & Sidorova, A. (2022). The formation of initial trust by potential passengers of self-driving taxis. Journal of Decision Systems, 1–30. https://doi.org/10.1080/12460125.2021.2023258
  • Yu, K., Berkovsky, S., Taib, R., Zhou, J., & Chen, F. (2019). Do I trust my machine teammate? An investigation from perception to decision. Proceedings of the 24th International Conference on Intelligent User Interfaces, March 2019, Marina del Rey, California (pp. 460–468). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3301275.3302277