
A social evaluation of the perceived goodness of explainability in machine learning

Pages 29-50 | Received 22 Nov 2020, Accepted 24 Jun 2021, Published online: 25 Jul 2021

References

  • Cui, X., Lee, J. M., & Hsieh, J. (2019). An integrative 3C evaluation framework for Explainable Artificial Intelligence. Americas Conference on Information Systems, Cancun, Mexico.
  • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, Canada, ACM.
  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Adhikari, A., Tax, D. M., Satta, R., & Faeth, M. (2019). LEAFAGE: Example-based and Feature importance-based Explanations for Black-box ML models. International Conference on Fuzzy Systems (FUZZ-IEEE), Lafayette, USA, IEEE.
  • Akay, M. F. (2009). Support vector machines combined with feature selection for breast cancer diagnosis. Expert Systems with Applications, 36(2), 3240–3247. https://doi.org/10.1016/j.eswa.2008.01.009
  • Alpaydin, E. (2020). Introduction to machine learning. MIT press.
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S.T., Bennett, P.N., Inkpen, K.M., Teevan, J., et al (2019). Guidelines for human-AI interaction. Conference on Human Factors in Computing Systems, Glasgow, Scotland, ACM.
  • Angelov, P., & Soares, E. (2019). Towards explainable deep neural networks (xDNN). Neural Networks, 130, 185–194. https://doi.org/10.1016/j.neunet.2020.07.010
  • Antaki, C., & Leudar, I. (1992). Explaining in conversation: Towards an argument model. European Journal of Social Psychology, 22(2), 181–194. https://doi.org/10.1002/ejsp.2420220206
  • Arntzen, F. (1993). Psychologie der Zeugenaussage: System der Glaubwürdigkeitsmerkmale. Beck.
  • Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Baird, A., & Maruping, L. M. (2021). The next generation of research on is use: A theoretical framework of delegation to and from agentic is artifacts. MIS Quarterly, 45(1), 1. https://doi.org/10.25300/MISQ/2021/15882
  • Barreno, M., Nelson, B., Joseph, A. D., & Tygar, J. D. (2010). The security of machine learning. Machine Learning, 81(2), 121–148. https://doi.org/10.1007/s10994-010-5188-5
  • Bentele, G., & Seidenglanz, R. (2005). Vertrauen und Glaubwürdigkeit. Begriffe, Ansätze, Forschungsübersicht und praktische Relevanz. Verlag für Sozialwissenschaften.
  • Bilgic, M., & Mooney, R. J. (2005). Explaining recommendations: Satisfaction vs. promotion. Beyond Personalization Workshop, IUI.
  • Bini, S. A. (2018). Artificial intelligence, machine learning, deep learning, and cognitive computing: What do these terms mean and how will they impact health care? The Journal of Arthroplasty, 33(8), 2358–2361. https://doi.org/10.1016/j.arth.2018.02.067
  • Bishop, C. M. (2006). Pattern recognition and machine learning. Springer Science+ Business Media.
  • Boone, H. N., & Boone, D. A. (2012). Analyzing Likert data. Journal of Extension, 50(2), 1–5. http://www.joe.org/joe/2012april/tt2p.shtm
  • Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239. https://doi.org/10.1002/bdm.2155
  • Cai, C. J., Jongejan, J., & Holbrook, J. (2019). The effects of example-based explanations in a machine learning interface. 24th International Conference on Intelligent User Interfaces, Los Angeles, USA, ACM.
  • Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832. https://doi.org/10.3390/electronics8080832
  • Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
  • Cawsey, A. (1993). Planning interactive explanations. International Journal of Man-Machine Studies, 38(2), 169–199. https://doi.org/10.1006/imms.1993.1009
  • Civerchia, F., Bocchino, S., Salvadori, C., Rossi, E., Maggiani, L., & Petracca, M. (2017). Industrial internet of things monitoring solution for advanced predictive maintenance applications. Journal of Industrial Information Integration, 7, 4–12. https://doi.org/10.1016/j.jii.2017.02.003
  • Cortez, P. (2009). Wine quality data set. Viticulture Commission of the Vinho Verde Region (CVRVV) and University of Minho, Guimarães, Portugal. Retrieved 15.07.2020 from archive.ics.uci.edu/ml/datasets/wine+quality
  • Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., Aroyo, L., Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455. https://doi.org/10.1007/s11257-008-9051-3
  • Crollen, V., & Seron, X. (2012). Over-estimation in numerosity estimation tasks: More than an attentional bias? Acta psychologica, 140(3), 246–251. https://doi.org/10.1016/j.actpsy.2012.05.003
  • Dam, H. K., Tran, T., & Ghose, A. (2018). Explainable software analytics. Proceedings of the 40th International Conference on Software Engineering, Gothenburg, Sweden, IEEE/ACM.
  • Darlington, K. (2013). Aspects of intelligent systems explanation. Universal Journal of Control and Automation, 1(2), 40–51. https://doi.org/10.13189/ujca.2013.010204
  • Das, T., & Teng, B. S. (1999). Cognitive biases and strategic decision processes: An integrative perspective. Journal of Management Studies, 36(6), 757–778. https://doi.org/10.1111/1467-6486.00157
  • Dasgupta, P. (2000). Trust as a commodity. Trust: Making and Breaking Cooperative Relations, 4, 49–72.
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
  • Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
  • Duval, A. (2019). Explainable Artificial Intelligence (XAI). MA4K9 Scholarly Report, Mathematics Institute, The University of Warwick, pp. 1–53.
  • Farris, H. H., & Revlin, R. (1989). Sensible reasoning in two tasks: Rule discovery and hypothesis evaluation. Memory & Cognition, 17(2), 221–232. https://doi.org/10.3758/BF03197071
  • Featherman, M. S., & Pavlou, P. A. (2003). Predicting e-services adoption: A perceived risk facets perspective. International Journal of Human-computer Studies, 59(4), 451–474. https://doi.org/10.1016/S1071-5819(03)00111-3
  • Früh, W. (1994). Realitätsvermittlung durch Massenmedien: Die permanente Transformation der Wirklichkeit. Westdeutscher Verlag.
  • Fürnkranz, J., Kliegr, T., & Paulheim, H. (2020). On cognitive preferences and the plausibility of rule-based models. Machine Learning, 109(4), 853–898. https://doi.org/10.1007/s10994-019-05856-5
  • Futia, G., & Vetrò, A. (2020). On the integration of knowledge graphs into deep learning models for a more comprehensible AI—three challenges for future research. Information, 11(2), 122–132. https://doi.org/10.3390/info11020122
  • García, S., Luengo, J., & Herrera, F. (2015). Data preprocessing in data mining. Springer.
  • Gelman, S. A., & Markman, E. M. (1986). Categories and induction in young children. Cognition, 23(3), 183–209.
  • Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An approach to evaluating interpretability of machine learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 80–89), Turin, Italy, IEEE.
  • Gozna, L. F., Vrij, A., & Bull, R. (2001). The impact of individual differences on perceptions of lying in everyday life and in a high stake situation. Personality and Individual Differences, 31(7), 1203–1216. https://doi.org/10.1016/S0191-8869(00)00219-1
  • Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497–530. https://doi.org/10.2307/249487
  • Grice, H. P. (1975). Logic and conversation. In Speech acts (pp. 41–58). Brill.
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 93–138. https://doi.org/10.1145/3236009
  • Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
  • Habermas, J. (1984). The theory of communicative action (Vol. 1) (T. McCarthy, Trans.). Beacon Press.
  • Hayes, B., & Shah, J. A. (2017). Improving robot controller transparency through autonomous policy explanation. 12th International Conference on Human-Robot Interaction (HRI), Vienna, Austria, ACM/IEEE.
  • Hempel, C. G., & Oppenheim, P. (1948). Studies in the Logic of Explanation. Philosophy of Science, 15(2), 135–175. https://doi.org/10.1086/286983
  • Herm, L.-V., Wanner, J., Seubert, F., & Janiesch, C. (2021). I don’t get it, but it seems valid! The connection between explainability and comprehensibility in (X)AI research. European Conference on Information Systems, Marrakech, Morocco, ACM.
  • Hilton, D. (1990). Conversational processes and causal explanation. Psychological Bulletin, 107(1), 65. https://doi.org/10.1037/0033-2909.107.1.65
  • Hilton, D. (1996). Mental models and causal explanation: Judgements of probable cause and explanatory relevance. Thinking & Reasoning, 2(4), 273–308. https://doi.org/10.1080/135467896394447
  • Hilton, D. (2017). Social attribution and explanation. In M. R. Waldmann (Ed.), The Oxford handbook of causal reasoning (pp. 645–674). Oxford University Press.
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv Preprint. https://arxiv.org/abs/1812.04608
  • Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv Preprint. https://arxiv.org/abs/1712.09923
  • Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312
  • Höök, K. (2000). Steps to take before intelligent user interfaces become real. Interacting with Computers, 12(4), 409–426. https://doi.org/10.1016/S0953-5438(99)00006-5
  • Hutson, M. (2020). Core progress in AI has stalled in some fields. American Association for the Advancement of Science.
  • James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning - with Applications in R (Vol. 1). Springer.
  • Janiesch, C., Zschech, P., & Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets. https://doi.org/10.1007/s12525-021-00475-2
  • Kaufman, S., Rosset, S., Perlich, C., & Stitelman, O. (2012). Leakage in data mining: Formulation, detection, and avoidance. ACM Transactions on Knowledge Discovery from Data (TKDD), 6(4), 15–36. https://doi.org/10.1145/2382577.2382579
  • Kelley, H. H. (1967). Attribution theory in social psychology. University of Nebraska Press.
  • Kenny, D. A. (1994). Interpersonal perception: A social relations analysis. Guilford Press.
  • Kim, T. W. (2018). Explainable artificial intelligence (XAI), the goodness criteria and the grasp-ability test. arXiv Preprint. https://arxiv.org/abs/1810.09598
  • Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. CHI Conference on Human Factors in Computing Systems, San Jose, California, SIGCHI.
  • Kourou, K., Exarchos, T. P., Exarchos, K. P., Karamouzis, M. V., & Fotiadis, D. I. (2015). Machine learning applications in cancer prognosis and prediction. Computational and Structural Biotechnology Journal, 13, 8–17. https://doi.org/10.1016/j.csbj.2014.11.005
  • Krause, J., Dasgupta, A., Swartz, J., Aphinyanaphongs, Y., & Bertini, E. (2017). A workflow for visual diagnostics of binary classifiers using instance-level explanations. Conference on Visual Analytics Science and Technology (VAST), Phoenix, USA.
  • Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Oahu, Hawaii, USA, ACM.
  • Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems. SIGCHI Conference on Human Factors in Computing Systems, Boston, USA.
  • Lipton, Z. C. (2016).
  • Loewenstein, G. (1994). The psychology of curiosity: A review and reinterpretation. Psychological Bulletin, 116(1), 75. https://doi.org/10.1037/0033-2909.116.1.75
  • Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  • Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences, 10(10), 464–470. https://doi.org/10.1016/j.tics.2006.08.004
  • Long, J. S., & Freese, J. (2006). Regression models for categorical dependent variables using Stata (Vol. 7). Stata press
  • Lu, J., Lee, D. D., Kim, T. W., & Danks, D. (2020). Good explanation for algorithmic transparency. Available at SSRN: https://doi.org/10.2139/ssrn.3503603
  • Luo, Y., Tseng, H.-H., Cui, S., Wei, L., Ten Haken, R. K., & El Naqa, I. (2019). Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling. BJR|Open, 1(1), 20190021. https://www.birpublications.org/doi/full/10.1259/bjro.20190021
  • Maheswaran, D., & Chaiken, S. (1991). Promoting systematic processing in low-motivation settings: Effect of incongruent information on processing and judgment. Journal of Personality and Social Psychology, 61(1), 13. https://doi.org/10.1037/0022-3514.61.1.13
  • Marshall, M. (1988). Iris data set. NASA. Retrieved 01.05.2020 from https://archive.ics.uci.edu/ml/datasets/iris
  • Martin, K., Liret, A., Wiratunga, N., Owusu, G., & Kern, M. (2019). Developing a catalogue of explainability methods to support expert and non-expert users. International Conference on Innovative Techniques and Applications of Artificial Intelligence, Cambridge, United Kingdom, Springer.
  • McCann, M., & Johnston, A. (2008). SECOM data set. Retrieved 05.05.2020 from archive.ics.uci.edu/ml/datasets/secom
  • McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6
  • McKinney, V., Yoon, K., & Zahedi, F. M. (2002). The measurement of web-customer satisfaction: An expectation and disconfirmation approach. Information Systems Research, 13(3), 296–315. https://doi.org/10.1287/isre.13.3.296.76
  • McKnight, D. H., Cummings, L. L., & Chervany, N. L. (1998). Initial trust formation in new organizational relationships. Academy of Management Review, 23(3), 473–490. https://doi.org/10.5465/amr.1998.926622
  • Mercado, J. E., Rupp, M. A., Chen, J. Y., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human–agent teaming for Multi-UxV management. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Miller, T., Howe, P., & Sonenberg, L. (2017). Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. arXiv Preprint. https://arxiv.org/abs/1712.00547
  • Ming, Y., Qu, H., & Bertini, E. (2018). Rulematrix: Visualizing and understanding classifiers with rules. IEEE Transactions on Visualization and Computer Graphics, 25(1), 342–352. https://doi.org/10.1109/TVCG.2018.2864812
  • Mohseni, S., & Ragan, E. (2018). A human-grounded evaluation benchmark for local explanations of machine learning. arXiv Preprint, arXiv:1801.05075.
  • Mohseni, S., Zarei, N., & Ragan, E. (2018). A survey of evaluation methods and measures for interpretable machine learning. arXiv Preprint. https://arxiv.org/abs/1811.11839
  • Mohseni, S., Zarei, N., & Ragan, E. (2019). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems, 1(1). https://www.cise.ufl.edu/~eragan/papers/Mohseni_2020_XAI_survey.pdf
  • Moore, D. S., & Kirkland, S. (2007). The basic practice of statistics (Vol. 2). WH Freeman New York.
  • Morocho-Cayamcela, M. E., Lee, H., & Lim, W. (2019). Machine learning for 5G/B5G mobile and wireless communications: Potential, limitations, and future directions. IEEE Access, 7, 137184–137206. https://doi.org/10.1109/ACCESS.2019.2942390
  • Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., & Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. arXiv Preprint. https://arxiv.org/abs/1902.01876
  • Nanayakkara, S., Fogarty, S., Tremeer, M., Ross, K., Richards, B., Bergmeir, C., et al. (2018). Characterising risk of in-hospital mortality following cardiac arrest using machine learning: A retrospective international registry study. PLoS Medicine, 15(11), e1002709. https://doi.org/10.1371/journal.pmed.1002709
  • Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S., & Doshi-Velez, F. (2018). How do humans understand explanations from machine learning systems? an evaluation of the human-interpretability of explanation. arXiv Preprint, https://arxiv.org/abs/1802.00682
  • Nawratil, U. (2013). Glaubwürdigkeit in der sozialen Kommunikation. Springer-Verlag.
  • Nicolaou, A. I., & McKnight, D. H. (2006). Perceived information quality in data exchanges: Effects on risk, trust, and intention to use. Information Systems Research, 17(4), 332–351. https://doi.org/10.1287/isre.1060.0103
  • Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459. https://doi.org/10.1007/s11023-019-09502-w
  • Patel, K., Fogarty, J., Landay, J. A., & Harrison, B. (2008). Investigating statistical machine learning as a tool for software development. SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy.
  • Pavlou, P. A., Liang, H., & Xue, Y. (2007). Understanding and mitigating uncertainty in online exchange relationships: A principal-agent perspective. MIS Quarterly, 31(1), 105–136. https://doi.org/10.2307/25148783
  • Power, D. J. (2008). Decision support systems: A historical overview. In Handbook on decision support systems 1 (pp. 121–140). Springer.
  • Pratt, J. W., & Zeckhauser, R. (1991). Principals and agents: The structure of business. Harvard Business School Press, Boston, Massachusetts.
  • Pu, P., & Chen, L. (2006). Trust building with explanation interfaces. 11th International Conference on Intelligent User Interfaces, Sydney, Australia, ACM.
  • Pu, P., Chen, L., & Hu, R. (2011). A user-centric evaluation framework for recommender systems. 5th Conference on Recommender Systems, Chicago, USA, ACM.
  • Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (2009). Dataset shift in machine learning. The MIT Press.
  • Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5
  • Read, S. J., & Marcus-Newhall, A. (1993). Explanatory coherence in social explanations: A parallel distributed processing account. Journal of Personality and Social Psychology, 65(3), 429. https://doi.org/10.1037/0022-3514.65.3.429
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should i trust you?: Explaining the predictions of any classifier. 22nd SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, USA.
  • Rohweder, J. P., Kasten, G., Malzahn, D., Piro, A., & Schmid, J. (2008). Informationsqualität—Definitionen, Dimensionen und Begriffe. In Daten- und Informationsqualität (pp. 25–45). Springer.
  • Rosenfeld, A., & Richardson, A. (2019). Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems, 33(6), 673–705. https://doi.org/10.1007/s10458-019-09408-y
  • Rotter, J. B. (1980). Interpersonal trust, trustworthiness, and gullibility. American Psychologist, 35(1), 1. https://doi.org/10.1037/0003-066X.35.1.1
  • Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26(5), 521–562. https://doi.org/10.1207/s15516709cog2605_1
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Saintilan, P., & Schreiber, D. (2017). Managing organizations in the creative economy: Organizational behaviour for the cultural sector. Routledge.
  • Salleh, M. N. M., Talpur, N., & Hussain, K. (2017). Adaptive neuro-fuzzy inference system: Overview, strengths, limitations, and solutions. International Conference on Data Mining and Big Data, Fukuoka, Japan, LNCS.
  • Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv Preprint. https://arxiv.org/abs/1708.08296
  • Saxena, A., & Goebel, K. (2008). Turbofan engine degradation simulation data set. In NASA Ames Prognostics Data Repository. NASA Ames Research Center, Moffett Field, CA. http://ti.arc.nasa.gov/project/prognostic-data-repository
  • Schaffer, J., O’Donovan, J., Michaelis, J., Raglin, A., & Höllerer, T. (2019). I can do better than your AI: Expertise and explanations. Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Rey, Los Angeles, USA.
  • Scherer, M. J., & Glueckauf, R. (2005). Assessing the benefits of assistive technologies for activities and participation. Rehabilitation Psychology, 50(2), 132. https://doi.org/10.1037/0090-5550.50.2.132
  • Schmidt, P., & Biessmann, F. (2019). Quantifying interpretability and trust in machine learning systems. arXiv Preprint. https://arxiv.org/abs/1901.08558
  • Schneider, J., & Handali, J. (2019). Personalized explanation in machine learning: A conceptualization. arXiv Preprint. https://arxiv.org/abs/1901.00770
  • Sheridan, T. B., & Hennessy, R. T. (1984). Research and modeling of supervisory control behavior. Report of a workshop. National Research Council Washington D.C., Committee on Human Factors.
  • Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489. https://doi.org/10.1038/nature16961
  • Soll, J. B., & Klayman, J. (2004). Overconfidence in interval estimates. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 299. https://doi.org/10.1037/0278-7393.30.2.299
  • Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12(3), 435–502. https://doi.org/10.1017/S0140525X00057046
  • Tintarev, N., & Masthoff, J. (2012). Evaluating the effectiveness of explanations for recommender systems. User Modeling and User-Adapted Interaction, 22(4–5), 399–439. https://doi.org/10.1007/s11257-011-9117-5
  • Wang, R. Y., & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5–33. https://doi.org/10.1080/07421222.1996.11518099
  • Wanner, J., Herm, L.-V., Heinrich, K., & Janiesch, C. (2021). Stop ordering machine learning algorithms by their explainability! An empirical investigation of the tradeoff between performance and explainability. 20th IFIP Conference on e-Business, e-Services, and e-Society (I3E), Galway, Ireland, Springer.
  • Wanner, J., Popp, L., Fuchs, K., Heinrich, K., Herm, L.-V., & Janiesch, C. (2021). Adoption barriers of AI: A context-specific acceptance model for industrial maintenance. 29th European Conference on Information Systems, Marrakech, Morocco, ACM.
  • Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2019). “Do you trust me?” Increasing user-trust by integrating virtual agents in explainable AI interaction design. 19th International Conference on Intelligent Virtual Agents, Paris, France, ACM.
  • Weld, D. S., & Bansal, G. (2019). The challenge of crafting intelligible intelligence. Communications of the ACM, 62(6), 70–79. https://doi.org/10.1145/3282486
  • Wirth, W., & Rössler, P. (1999). Methodologische und konzeptionelle Aspekte der Glaubwürdigkeitsforschung (pp. 47–66). Deutscher Universitätsverlag.
  • Wolf, C. T., & Ringland, K. E. (2020). Designing accessible, explainable AI (XAI) experiences. ACM SIGACCESS Accessibility and Computing, (125), 1. https://doi.org/10.1145/3386296.3386302
  • Wolling, J. (2004). Qualitätserwartungen, Qualitätswahrnehmungen und die Nutzung von Fernsehserien. Publizistik, 49(2), 171–193. https://doi.org/10.1007/s11616-004-0035-y
  • Yang, Y. J., & Bang, C. S. (2019). Application of artificial intelligence in gastroenterology. World Journal of Gastroenterology, 25(14), 1666. https://doi.org/10.3748/wjg.v25.i14.1666
  • Zhang, Q., Yang, L. T., Chen, Z., & Li, P. (2018). A survey on deep learning for big data. Information Fusion, 42, 146–157. https://doi.org/10.1016/j.inffus.2017.10.006
