
The role of domain expertise in trusting and following explainable AI decision support systems

Pages 110-138 | Received 28 Oct 2020, Accepted 19 Jul 2021, Published online: 11 Aug 2021

References

  • Acciarini, C., Brunetta, F., & Boccardelli, P. (2020). Cognitive biases and decision-making strategies in times of change: A systematic literature review. Management Decision, 59(3), 638–652. https://doi.org/10.1108/MD-07-2019-1006
  • Ågerfalk, P. (2020). Artificial intelligence as digital agency. European Journal of Information Systems, 29(1), 1–8. https://doi.org/10.1080/0960085X.2020.1721947
  • Agrawal, A. (2018). Prediction machines: The simple economics of artificial intelligence. Harvard Business Review Press.
  • Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J. Beckmann (Eds.), Action control (pp. 11–39). Springer, Heidelberg.
  • Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
  • Althöfer, I. (1990). An incremental negamax algorithm. Artificial Intelligence, 43(1), 57–65. https://doi.org/10.1016/0004-3702(90)90070-G
  • Anaraky, R.G., Knijnenburg, B.P., & Risius, M. (2020). Exacerbating mindless compliance: The danger of justifications during privacy decision making in the context of Facebook applications. AIS Transactions on Human-Computer Interaction, 12(2), 70–95. https://doi.org/10.17705/1thci.00129
  • Becker, J.-M., Klein, K., & Wetzels, M. (2012). Hierarchical latent variable models in PLS-SEM: Guidelines for using reflective-formative type models. Long Range Planning, 45(5–6), 359–394. https://doi.org/10.1016/j.lrp.2012.10.001
  • Becker, J.-M., Ringle, C.M., & Sarstedt, M. (2018). Estimating moderating effects in PLS-SEM and PLSc-SEM: Interaction term generation*data treatment. Journal of Applied Structural Equation Modeling, 2(2), 1–21. https://doi.org/10.47263/JASEM.2(2)01
  • Biran, O., & Cotton, C. (2017) Explanation and justification in machine learning: A survey. In: XAI workshop at the 26th International Joint Conference on Artificial Intelligence, pp 8–13. Melbourne, Australia: AAAI Press.
  • Biran, O., & McKeown, K. (2017) Human-Centric justification of machine learning predictions. In: 26th International Joint Conference on Artificial Intelligence, pp 1461–1467. Melbourne, Australia: AAAI Press.
  • Bollen, K.A., & Diamantopoulos, A. (2017). In defense of causal-formative indicators: A minority report. Psychological Methods, 22(3), 581–596. https://doi.org/10.1037/met0000056
  • Bowes, S.M., Ammirati, R.J., Costello, T.H., Basterfield, C., & Lilienfeld, S.O. (2020). Cognitive biases, heuristics, and logical fallacies in clinical practice: A brief field guide for practicing clinicians and supervisors. Professional Psychology, Research and Practice, 51(5), 435–445. https://doi.org/10.1037/pro0000309
  • Bussone, A., Stumpf, S., & O’Sullivan, D. (2015) The role of explanations on trust and reliance in clinical decision support systems. In: International Conference on Healthcare Informatics, pp 160–169. Dallas, United States of America: IEEE. https://doi.org/10.1109/ICHI.2015.26
  • CEGT Team (2020) CEGT-Ratinglist. Accessed 07 Jan 2020. http://www.cegt.net/40_4_Ratinglist/40_4_BestVersion/rangliste.html
  • ChessBase (2020) Fritz 17 - The giant PC chess program, now with Fat Fritz. Accessed 13 Oct 2020. https://shop.chessbase.com/en/products/fritz_17
  • Cooper, A. (1999). The inmates are running the asylum. Sams, Indianapolis.
  • CPW Team (2018) Simplified evaluation function. Accessed 13 Jan 2020. https://www.chessprogramming.org/index.php?title=Simplified_Evaluation_Function&oldid=2101
  • Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
  • Dietz, G., Den Hartog, D.N., & Sanders, K. (2006). Measuring trust inside organisations. Personnel Review, 35(5), 557–588. https://doi.org/10.1108/00483480610682299
  • Dietz, G., & Gillespie, N. (2011). Building and restoring organisational trust. Institute of Business Ethics.
  • Dijkstra, T., & Henseler, J. (2015a). Consistent and asymptotically normal PLS estimators for linear structural equations. Computational Statistics & Data Analysis, 81, 10–23. https://doi.org/10.1016/j.csda.2014.07.008
  • Dijkstra, T., & Henseler, J. (2015b). Consistent partial least squares path modeling. MIS Quarterly, 39(2), 297–316. https://doi.org/10.25300/MISQ/2015/39.2.02
  • Doney, P., Cannon, J., & Mullen, M. (1998). Understanding the influence of national culture on the development of trust. Academy of Management Review, 23(3), 601–620. https://doi.org/10.5465/amr.1998.926629
  • Du, S., & Xie, C. (2020). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129. https://doi.org/10.1016/j.jbusres.2020.08.024
  • Fishbein, M., & Ajzen, I. (1980). Belief, attitude, intention and behaviour. An introduction to theory and research. Addison-Wesley, Reading.
  • Fornell, C., & Larcker, D. (1981). Structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(3), 382–388. https://doi.org/10.1177/002224378101800313
  • Fosso Wamba, S., Bawack, R.E., Guthrie, C., Queiroz, M.M., & Carillo, K.D.A. (2020). Are we preparing for a good AI society? A bibliometric review and research agenda. Technological Forecasting and Social Change, 164, 120482. https://doi.org/10.1016/j.techfore.2020.120482
  • Franke, G., & Sarstedt, M. (2019). Heuristics versus statistics in discriminant validity testing: A comparison of four procedures. Internet Research, 29(3), 430–447. https://doi.org/10.1108/IntR-12-2017-0515
  • Freedman, R., Borg, J.S., Sinnott-Armstrong, W., Dickerson, J.P., & Conitzer, V. (2020). Adapting a kidney exchange algorithm to align with human values. Artificial Intelligence, 283, 103261. https://doi.org/10.1016/j.artint.2020.103261
  • Gefen, D., Benbasat, I., & Pavlou, P. (2008). A research agenda for trust in online environments. Journal of Management Information Systems, 24(4), 275–286. https://doi.org/10.2753/MIS0742-1222240411
  • Gefen, D., Karahanna, E., & Straub, D.W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
  • Grace, J.B., & Bollen, K.A. (2008). Representing general theoretical concepts in structural equation models: The role of composite variables. Environmental and Ecological Statistics, 15(2), 191–213. https://doi.org/10.1007/s10651-007-0047-7
  • Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497–530. https://doi.org/10.2307/249487
  • Hafizoğlu, F.M., & Sen, S. (2019). Understanding the influences of past experience on trust in human-agent teamwork. ACM Transactions on Internet Technology, 19(4), 1–22. https://doi.org/10.1145/3324300
  • Hair, J.F., Hollingsworth, C., Randolph, A., & Chong, A.Y.L. (2017). An updated and expanded assessment of PLS-SEM in information systems research. Industrial Management & Data Systems, 117(3), 442–458. https://doi.org/10.1108/IMDS-04-2016-0130
  • Hair, J.F., Risher, J.J., Sarstedt, M., & Ringle, C.M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2–24. https://doi.org/10.1108/EBR-11-2018-0203
  • Hair, J.F., Sarstedt, M., Ringle, C.M., & Gudergan, S.P. (2018). Advanced issues in partial least squares structural equation modeling. Sage.
  • Hair, J.F. (2020). Next-generation prediction metrics for composite-based PLS-SEM. Industrial Management & Data Systems, 121(1), 5–11. https://doi.org/10.1108/IMDS-08-2020-0505
  • Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust - The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
  • Henseler, J., Ringle, C., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
  • Herlocker, J., Konstan, J., & Riedl, J. (2000) Explaining collaborative filtering recommendations. In: Conference on Computer Supported Cooperative Work, pp 241–250. Pennsylvania, United States of America: Association for Computing Machinery.
  • Hilton, D. (1996). Mental models and causal explanation: Judgements of probable cause and explanatory relevance. Thinking & Reasoning, 2(4), 273–308. https://doi.org/10.1080/135467896394447
  • Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: The System Causability Scale (SCS): Comparing human and machine explanations. Künstliche Intelligenz, 34(2), 193–198. https://doi.org/10.1007/s13218-020-00636-z
  • Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4). https://doi.org/10.1002/widm.1312
  • Hong, W., Chan, F.K.Y., Thong, J.Y.L., Chasalow, L.C., & Dhillon, G. (2014). A framework and guidelines for context-specific theorizing in information systems research. Information Systems Research, 25(1), 111–136. https://doi.org/10.1287/isre.2013.0501
  • Kim, G., Shin, B., & Lee, H.G. (2009). Understanding dynamics between initial trust and usage intentions of mobile banking. Information Systems Journal, 19(3), 283–311. https://doi.org/10.1111/j.1365-2575.2007.00269.x
  • Lamy, J.-B., Sekar, B., Guezennec, G., Bouaud, J., & Séroussi, B. (2019). Explainable artificial intelligence for breast cancer: A visual case-based reasoning approach. Artificial Intelligence in Medicine, 94, 42–53. https://doi.org/10.1016/j.artmed.2019.01.001
  • Li, X., Hess, T., & Valacich, J. (2008). Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 17(1), 39–71. https://doi.org/10.1016/j.jsis.2008.01.001
  • Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology, 55(3), 232–257. https://doi.org/10.1016/j.cogpsych.2006.09.006
  • Lundberg, S., & Lee, S.-I. (2017) A unified approach to interpreting model predictions. In: 31st Conference on Neural Information Processing Systems, pp 4765–4774. Long Beach, United States of America: Curran Associates Inc.
  • Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon’s mechanical turk. Behavior Research Methods, 44(1), 1–23. https://doi.org/10.3758/s13428-011-0124-6
  • Mayer, R., Davis, J., & Schoorman, D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
  • McKnight, H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-Commerce: An integrative typology. Information Systems Research, 13(3), 334–359. https://doi.org/10.1287/isre.13.3.334.81
  • McKnight, H., Cummings, L., & Chervany, N. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473–490. https://doi.org/10.5465/amr.1998.926622
  • Mesbah, N., Tauchert, C., Olt, C.M., et al. (2019) Promoting trust in AI-based expert systems. In: 25th Americas Conference on Information Systems.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Miller, T., Howe, P., & Sonnenberg, L. (2017) Explainable AI: Beware of inmates running the asylum. arXiv preprint arXiv:1712.00547
  • Mishra, J., & Morrissey, M. (1990). Trust in Employee/Employer relationships: A survey of West Michigan managers. Public Personnel Management, 19(4), 443–486. https://doi.org/10.1177/009102609001900408
  • Newell, A., & Simon, H. (1972). Human problem solving. Pearson Education.
  • Norberg, P., Horne, D., & Horne, D. (2007). The privacy paradox: Personal information disclosure intentions versus behaviors. Journal of Consumer Affairs, 41(1), 100–126. https://doi.org/10.1111/j.1745-6606.2006.00070.x
  • NRZ (2017) Die Angst vor neuer Technik ist so alt wie die Menschheit [The fear of new technology is as old as humanity]. https://www.nrz.de/wochenende/die-angst-vor-neuer-technik-ist-so-alt-wie-die-menschheit-id209190935.html. Accessed 03 Mar 2020
  • Pavlou, P.A., & Gefen, D. (2004). Building effective online marketplaces with institution-based trust. Information Systems Research, 15(1), 37–59. https://doi.org/10.1287/isre.1040.0015
  • Power, D. (2002). Decision support systems. Concepts and resources for managers. Quorum Books, Westport.
  • Pu, P., & Chen, L. (2006) Trust building with explanation interfaces. In: 11th International Conference on Intelligent User Interfaces, pp 93–100. Sydney, Australia: Association for Computing Machinery.
  • Rahwan, Z., Yoeli, E., & Fasolo, B. (2019). Heterogeneity in banker culture and its influence on dishonesty. Nature, 575(7782), 345–349. https://doi.org/10.1038/s41586-019-1741-y
  • Rai, A., Constantinides, P., & Sarker, S. (2018). Editor’s comments: Next-generation digital platforms: Toward human–AI hybrids. MIS Quarterly, 43(1), iii–x.
  • Reuters (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Accessed 03 Mar 2020. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
  • Ribeiro, M.T., Singh, S., & Guestrin, C. (2016) “Why should I trust you?”: Explaining the predictions of any classifier. In: 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 1135–1144. San Francisco, United States of America: Association for Computing Machinery.
  • Ringle, C., Wende, S., & Becker, J.-M. (2015) SmartPLS 3. http://www.smartpls.com
  • Ringle, C.M., Sarstedt, M., & Straub, D.W. (2012). Editor’s comments: A critical look at the use of PLS-SEM in “MIS Quarterly”. MIS Quarterly, 36(1), iii–xiv. https://doi.org/10.2307/41410402
  • Robnik-Sikonja, M., & Kononenko, I. (2008). Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering, 20(5), 589–600. https://doi.org/10.1109/TKDE.2007.190734
  • Roese, N., & Vohs, K. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411–426. https://doi.org/10.1177/1745691612454303
  • Roldán, J.L., & Sánchez-Franco, M.J. (2012). Variance-based structural equation modeling. In M. Mora, O. Gelman, A.L. Steenkamp, & M. Raisinghani (Eds.), Research methodologies, innovations and philosophies in software systems engineering and information systems (pp. 193–221). IGI Global, Hershey.
  • Rudin, C., Waltz, D., Anderson, R., Boulanger, A., Salleb-Aouissi, A., Chow, M., Dutta, H., Gross, P.N., Huang, B., Ierome, S., Isaac, D.F., Kressner, A., Passonneau, R.J., Radeva, A., & Wu, L. (2012). Machine learning for the New York City power grid. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(2), 328–345. https://doi.org/10.1109/TPAMI.2011.108
  • Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Pearson Education.
  • SAE International (2018) SAE International releases updated visual chart for its “levels of driving automation” standard for self-driving vehicles. https://www.sae.org/news/press-room/2018/12/sae-international-releases-updated-visual-chart-for-its-%E2%80%9Clevels-of-driving-automation%E2%80%9D-standard-for-self-driving-vehicles. Accessed 05 Apr 2020
  • Sarstedt, M., Hair, J.F., Cheah, J.-H., Becker, J.-M., & Ringle, C.M. (2019). How to specify, estimate, and validate higher-order constructs in PLS-SEM. Australasian Marketing Journal, 27(3), 197–211. https://doi.org/10.1016/j.ausmj.2019.05.003
  • Sarstedt, M., Hair, J.F., Ringle, C.M., Thiele, K.O., & Gudergan, S.P. (2016). Estimation issues with PLS and CBSEM: Where the bias lies! Journal of Business Research, 69(10), 3998–4010. https://doi.org/10.1016/j.jbusres.2016.06.007
  • Saxena, N.A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D.C., & Liu, Y. (2020). How do fairness definitions fare? Testing public attitudes towards three algorithmic definitions of fairness in loan allocations. Artificial Intelligence, 283, 103238. https://doi.org/10.1016/j.artint.2020.103238
  • Schmidt, P., & Biessmann, F. (2019) Quantifying interpretability and trust in machine learning systems. arXiv preprint arXiv:1901.08558
  • Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094
  • Sheeran, P., & Webb, T. (2016). The Intention-Behavior gap. Social and Personality Psychology Compass, 10(9), 503–518. https://doi.org/10.1111/spc3.12265
  • Shmueli, G., Ray, S., Velasquez Estrada, J.M., & Chatla, S.B. (2016). The elephant in the room: Predictive performance of PLS models. Journal of Business Research, 69(10), 4552–4564. https://doi.org/10.1016/j.jbusres.2016.03.049
  • Shmueli, G., Sarstedt, M., Hair, J.F., Cheah, J.-H., Ting, H., Vaithilingam, S., & Ringle, C.M. (2019). Predictive model assessment in PLS-SEM: Guidelines for using PLSpredict. European Journal of Marketing, 53(11), 2322–2347. https://doi.org/10.1108/EJM-02-2019-0189
  • Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.
  • Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815
  • Sniehotta, F., Scholz, U., & Schwarzer, R. (2005). Bridging the intention–behaviour gap: Planning, self-efficacy, and action control in the adoption and maintenance of physical exercise. Psychology & Health, 20(2), 143–160. https://doi.org/10.1080/08870440512331317670
  • Söllner, M., Benbasat, I., Gefen, D., Leimeister, J.M., & Pavlou, P.A. (2016). Trust. MIS Quarterly Research Curations.
  • Stahl, B.C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Laulhé Shaelou, S., Patel, A., Ryan, M., & Wright, D. (2021). Artificial intelligence for human flourishing – Beyond principles for machine learning. Journal of Business Research, 124, 374–388. https://doi.org/10.1016/j.jbusres.2020.11.030
  • Stanford Encyclopedia of Philosophy (2017) Aristotle’s Logic. https://plato.stanford.edu/entries/aristotle-logic/. Accessed 03 Oct 2020
  • Staw, B. (1996). The escalation of commitment: An update and appraisal. In Z. Shapira (Ed.), Organizational decision making (pp. 191–215). Cambridge University Press.
  • Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12(3), 435–502. https://doi.org/10.1017/S0140525X00057046
  • The Telegraph (2018) Chinese businesswoman accused of jaywalking after AI camera spots her face on an advert. https://www.telegraph.co.uk/technology/2018/11/25/chinese-businesswoman-accused-jaywalking-ai-camera-spots-face/. Accessed 03 Mar 2020
  • Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C.G., & van Moorsel, A. (2020) The relationship between trust in AI and trustworthy machine learning technologies. In: Conference on Fairness, Accountability, and Transparency, pp 272–283. Barcelona, Spain: Association for Computing Machinery. https://doi.org/10.1145/3351095.3372834
  • Urbach, N., & Ahlemann, F. (2010). Structural equation modeling in information systems research using partial least squares. Journal of Information Technology Theory and Application, 11(2), 5–40.
  • Van Der Maas, H., & Wagenmakers, E.-J. (2005). A psychometric analysis of chess expertise. American Journal of Psychology, 118(1), 29–60.
  • Velloso, M. (2018) Difference between machine learning and AI. Accessed 30 Mar 2020. https://twitter.com/matvelloso/status/1065778379612282885
  • Wang, P. (2008) What do you mean by “AI”? In: 1st Conference on Artificial General Intelligence, pp 362–373. Memphis, United States of America: IOS Press.
  • Wang, W., & Benbasat, I. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 72–101. https://doi.org/10.17705/1jais.00065
  • Wang, W., & Benbasat, I. (2008). Attributions of trust in decision support technologies: A study of recommendation agents for E-Commerce. Journal of Management Information Systems, 24(4), 249–273. https://doi.org/10.2753/MIS0742-1222240410
  • Washington Examiner (1997) Be afraid. https://www.washingtonexaminer.com/weekly-standard/be-afraid-9802. Accessed 10 Jun 2020
  • Yan, Z., Kantola, R., & Zhang, P. (2011) A research model for human-computer trust interaction. In: 10th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, pp 274–281. Changsha, China: IEEE Computer Society.
  • Yu, Z., Du, H., Yi, F., Wang, Z., & Guo, B. (2019). Ten scientific problems in human behavior understanding. CCF Transactions on Pervasive Computing and Interaction, 1(1), 3–9. https://doi.org/10.1007/s42486-018-00003-w
