Research Article

Improving Trust in AI with Mitigating Confirmation Bias: Effects of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI

Received 17 Jul 2023, Accepted 13 Nov 2023, Published online: 27 Nov 2023

References

  • Ariely, D., & Zakay, D. (2001). A timely account of the role of duration in decision making. Acta Psychologica, 108(2), 187–207. https://doi.org/10.1016/S0001-6918(01)00034-8
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Bertrand, A., Belloum, R., Eagan, J. R., & Maxwell, W. (2022). How cognitive biases affect XAI-assisted decision-making: A systematic review [Paper presentation]. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
  • Buçinca, Z., Lin, P., Gajos, K. Z., & Glassman, E. L. (2020). Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems [Paper presentation]. Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy. https://doi.org/10.1145/3377325.3377498
  • Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–21. https://doi.org/10.1145/3449287
  • Cook, M. B., & Smallman, H. S. (2007). Visual evidence landscapes: Reducing bias in collaborative intelligence analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 51(4), 303–307. https://doi.org/10.1177/154193120705100433
  • Cook, M. B., & Smallman, H. S. (2008). Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors, 50(5), 745–754. https://doi.org/10.1518/001872008X354183
  • Green, B., & Chen, Y. (2019). The principles and limits of algorithm-in-the-loop decision making. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–24. https://doi.org/10.1145/3359152
  • Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
  • Guo, S., Du, F., Malik, S., Koh, E., Kim, S., Liu, Z., Kim, D., Zha, H., & Cao, N. (2019). Visualizing uncertainty and alternatives in event sequence predictions [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, United Kingdom. https://doi.org/10.1145/3290605.3300803
  • Ha, T., Kim, S., Seo, D., & Lee, S. (2020). Effects of explanation types and perceived risk on trust in autonomous vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 73, 271–280. https://doi.org/10.1016/j.trf.2020.06.021
  • Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press.
  • Hoffman, R. R., & Klein, G. (2017). Explaining explanation, part 1: Theoretical foundations. IEEE Intelligent Systems, 32(3), 68–73. https://doi.org/10.1109/MIS.2017.54
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Kim, D. (2005). Cognition-based versus affect-based trust determinants in e-commerce: Cross-cultural comparison study [Paper presentation]. Proceedings of the International Conference on Information Systems.
  • Klayman, J. (1995). Varieties of confirmation bias. Psychology of Learning and Motivation, 32, 385–418. https://doi.org/10.1016/S0079-7421(08)60315-1
  • Konovalov, A., & Krajbich, I. (2019). Revealed strength of preference: Inference from response times. Judgment and Decision Making, 14(4), 381–394. https://doi.org/10.1017/S1930297500006082
  • Kox, E. S., Kerstholt, J. H., Hueting, T. F., & de Vries, P. W. (2021). Trust repair in human-agent teams: The effectiveness of explanations and expressing regret. Autonomous Agents and Multi-Agent Systems, 35(2), 30. https://doi.org/10.1007/s10458-021-09515-9
  • Lai, V., & Tan, C. (2019). On human predictions with explanations and predictions of machine learning models: A case study on deception detection [Paper presentation]. Proceedings of the Conference on Fairness, Accountability, and Transparency.
  • LeClerc, J., & Joslyn, S. (2015). The cry wolf effect and weather-related decision making. Risk Analysis, 35(3), 385–395. https://doi.org/10.1111/risa.12336
  • Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA. https://doi.org/10.1145/3313831.3376590
  • Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems (NIPS). Curran Associates, Inc.
  • Marett, K., & Adams, G. (2006). The role of decision support in alleviating the familiarity bias [Paper presentation]. Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), Kauai, HI, USA. https://doi.org/10.1109/HICSS.2006.480
  • McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. https://doi.org/10.2307/256727
  • McGuirl, J. M., & Sarter, N. B. (2006). Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Human Factors, 48(4), 656–665. https://doi.org/10.1518/001872006779166334
  • Moro, S., Laureano, R., & Cortez, P. (2011). Using data mining for bank direct marketing: An application of the CRISP-DM methodology [Paper presentation]. Proceedings of the European Simulation and Modelling Conference (ESM’2011), Guimarães, Portugal (pp. 117–121).
  • Moro, S., Rita, P., & Cortez, P. (2012). Bank marketing. UCI Machine Learning Repository. https://doi.org/10.24432/C5K306
  • Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2023). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941. https://doi.org/10.1016/j.ijhcs.2022.102941
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
  • Park, J. S., Barber, R., Kirlik, A., & Karahalios, K. (2019). A slow algorithm improves users’ assessments of the algorithm’s accuracy. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–15. https://doi.org/10.1145/3359204
  • Ramos, M. H., Van Andel, S. J., & Pappenberger, F. (2013). Do probabilistic forecasts lead to better decisions? Hydrology and Earth System Sciences, 17(6), 2219–2232. https://doi.org/10.5194/hess-17-2219-2013
  • Schaffer, J., O'Donovan, J., Michaelis, J., Raglin, A., & Höllerer, T. (2019). I can do better than your AI: Expertise and explanations [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces.
  • Springer, A., & Whittaker, S. (2019). Progressive disclosure: Empirically motivated approaches to designing effective transparency [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces.
  • Szymanski, M., Millecamp, M., & Verbert, K. (2021). Visual, textual or hybrid: The effect of user expertise on different explanations [Paper presentation]. 26th International Conference on Intelligent User Interfaces, College Station, TX, USA. https://doi.org/10.1145/3397481.3450662
  • Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300831
  • Yang, X. J., Schemanske, C., & Searle, C. (2023). Toward quantifying trust dynamics: How people adjust their trust after moment-to-moment interaction with automation. Human Factors, 65(5), 862–878. https://doi.org/10.1177/00187208211034716
