References
- Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, Montréal, Canada, 1–18. https://doi.org/10.1145/3173574.3174156
- Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
- Alt, R. (2018). Electronic markets and current general research. Electronic Markets, 28(2), 123–128. https://doi.org/10.1007/s12525-018-0299-0
- Arnold, V., Clark, N., Collier, P. A., Leech, S. A., & Sutton, S. G. (2006). The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions. MIS Quarterly, 30(1), 79. https://doi.org/10.2307/25148718
- Arnold, V., & Sutton, S. G. (1998). The theory of technology dominance: Understanding the impact of intelligent decision aids on decision makers' judgments. Advances in Accounting Behavioral Research, 1(3), 175–194.
- Berente, N., Gu, B., Recker, J., & Santhanam, R. (2019). Call for papers MISQ special issue on managing AI. MIS Quarterly, 1–5. https://misq.org/skin/frontend/default/misq/pdf/CurrentCalls/ManagingAI.pdf
- Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Machine Learning Research, 81, 1–11. https://arxiv.org/abs/1712.03586
- Blair, A., & Saffidine, A. (2019). AI surpasses humans at six-player poker. Science, 365(6456), 864–865. https://doi.org/10.1126/science.aay7774
- Böhmann, T., Leimeister, J. M., & Möslein, K. (2014). Service-systems-engineering. Wirtschaftsinformatik, 56(2), 83–90. https://doi.org/10.1007/s11576-014-0406-6
- Brewster, F. W., II. (2002). Using tactical decision exercises to study tactics. Military Review, 82(6), 3–9. https://www.semanticscholar.org/paper/Using-Tactical-Decision-Exercises-to-Study-Tactics-Brewster/acc2892aa434e4a743e7638f769b06be0ee0639d
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. 1–101.
- Bruun, E. P. G., & Duka, A. (2018). Artificial intelligence, jobs and the future of work: Racing with the machines. Basic Income Studies, 13(2), 1–15. https://doi.org/10.1515/bis-2018-0018
- Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230
- Chandrasekaran, B., Tanner, M. C., & Josephson, J. R. (1989). Explaining control strategies in problem solving. IEEE Expert, 4(1), 9–15. https://doi.org/10.1109/64.21896
- European Commission, High-Level Expert Group on AI. (2019). Ethics guidelines for trustworthy AI. Retrieved June 5, 2020, from https://ai.bsa.org/wp-content/uploads/2019/09/AIHLEG_EthicsGuidelinesforTrustworthyAI-ENpdf.pdf
- Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secretai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
- Dewey, M., & Wilkens, U. (2019). The bionic radiologist: Avoiding blurry pictures and providing greater insights. Npj Digital Medicine, 2(1), 65. https://doi.org/10.1038/s41746-019-0142-9
- Dhaliwal, J. S., & Benbasat, I. (1996). The use and effects of knowledge-based system explanations: Theoretical foundations and a framework for empirical evaluation. Information Systems Research, 7(3), 342–362. https://doi.org/10.1287/isre.7.3.342
- Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint, 1–13. https://arxiv.org/abs/1702.08608
- Eiras-Franco, C., Guijarro-Berdiñas, B., Alonso-Betanzos, A., & Bahamonde, A. (2019). A scalable decision-tree-based method to explain interactions in dyadic data. Decision Support Systems, 127, 113141. https://doi.org/10.1016/j.dss.2019.113141
- Elbanna, A., Dwivedi, Y., Bunker, D., & Wastell, D. (2020). The search for smartness in working, living and organising: Beyond the 'technomagic'. Information Systems Frontiers, 22(2), 275–280. https://doi.org/10.1007/s10796-020-10013-8
- Fagan, L. M., Shortliffe, E. H., & Buchanan, B. G. (1980). Computer-based medical decision making: From MYCIN to VM. Automedica.
- Fernandez, A., Herrera, F., Cordon, O., Jose Del Jesus, M., & Marcelloni, F. (2019). Evolutionary fuzzy systems for explainable artificial intelligence: Why, when, what for, and where to? IEEE Computational Intelligence Magazine, 14(1), 69–81. https://doi.org/10.1109/MCI.2018.2881645
- Friedman, C. P., Elstein, A. S., Wolf, F. M., Murphy, G. C., Franz, T. M., Heckerling, P. S., Fine, P. L., Miller, T. M., & Abraham, V. (1999). Enhancement of clinicians' diagnostic reasoning by computer-based consultation. JAMA, 282(19), 1851–1856. https://doi.org/10.1001/jama.282.19.1851
- Fürnkranz, J., Kliegr, T., & Paulheim, H. (2020). On cognitive preferences and the plausibility of rule-based models. Machine Learning, 109, 853–898. https://doi.org/10.1007/s10994-019-05856-5
- Garlick, B. (2017). Flying smarter: AI & machine learning in aviation autopilot systems. Stanford University.
- Giboney, J. S., Brown, S. A., Lowry, P. B., & Nunamaker, J. F. (2015). User acceptance of knowledge-based system recommendations: Explanations, arguments, and fit. Decision Support Systems, 72, 1–10. https://doi.org/10.1016/j.dss.2015.02.005
- Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89. https://arxiv.org/abs/1806.00069
- Goddard, K., Roudsari, A., & Wyatt, J. C. (2011). Automation bias - a hidden issue for clinical decision support system use. Studies in Health Technology and Informatics, 164, 17–22. https://doi.org/10.3233/978-1-60750-709-3-17
- Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association: JAMIA, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089
- Gönül, M. S., Önkal, D., & Lawrence, M. (2006). The effects of structural characteristics of explanations on use of a DSS. Decision Support Systems, 42(3), 1481–1493. https://doi.org/10.1016/j.dss.2005.12.003
- Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a right to explanation. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741
- Gregor, S., & Yu, X. (2002). Exploring the explanatory capabilities of intelligent system technologies. In V. Dimitrov & V. Korotkich (Eds.), Fuzzy Logic (pp. 288–300). Physica-Verlag HD.
- Gregor, S., & Benbasat, I. (1999). Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly, 23(4), 497–530. https://doi.org/10.2307/249487
- Grigorescu, S., Trasnea, B., Cocias, T., & Macesanu, G. (2020). A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3), 362–386. https://doi.org/10.1002/rob.21918
- Grudin, J. (2019). 'AI summers' do not take jobs. Communications of the ACM, 59(2), 8–9.
- Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1–42. https://doi.org/10.1145/3236009
- Gunning, D. (2017). Explainable artificial intelligence (XAI). DARPA Program Update November. https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
- Haugeland, J. (1985). Artificial intelligence: The very idea. MIT Press.
- Herse, S., Vitale, J., Tonkin, M., Ebrahimian, D., Ojha, S., Johnston, B., Judge, W., & Williams, M. A. (2018). Do you trust me, blindly? Factors influencing trust towards a robot recommender system. RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication. https://doi.org/10.1109/ROMAN.2018.8525581
- Hohman, F., Kahng, M., Pienta, R., & Chau, D. H. (2019). Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics, 25(8), 2674–2693. https://doi.org/10.1109/TVCG.2018.2843369
- Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9(4), 1–13. https://doi.org/10.1002/widm.1312
- Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: The global landscape of ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
- Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
- Kasnakoglu, B. T. (2016). Antecedents and consequences of co-creation in credence-based service contexts. The Service Industries Journal, 36(1–2), 1–20. https://doi.org/10.1080/02642069.2016.1138472
- Kistan, T., Gardi, A., & Sabatini, R. (2018). Machine learning and cognitive ergonomics in air traffic management: Recent developments and considerations for certification. Aerospace, 5(4), 103. https://doi.org/10.3390/aerospace5040103
- Kühl, N., Lobana, J., & Meske, C. (2019). Do you comply with AI? - Personalized explanations of learning algorithms and their impact on employees’ compliance behavior. 40th International Conference on Information Systems (ICIS), 1–6. https://arxiv.org/pdf/2002.08777.pdf
- Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 4066–4076). Curran Associates, Inc.
- Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., & Müller, K.-R. (2019). Unmasking clever hans predictors and assessing what machines really learn. Nature Communications, 10(1096), 1–8. https://doi.org/10.1038/s41467-019-08987-4
- Li, M., & Gregor, S. (2011). Outcomes of effective explanations: Empowering citizens through online advice. Decision Support Systems, 52(1), 119–132. https://doi.org/10.1016/j.dss.2011.06.001
- Lynch, J., & Schuler, D. (1990). Consumer evaluation of the quality of hospital services from an economics of information perspective. Journal of Health Care Marketing, 10(2), 16–22. https://pubmed.ncbi.nlm.nih.gov/10105192/
- Malhotra, A., Melville, N. P., & Watson, R. T. (2013). Spurring impactful research on information systems for environmental sustainability. MIS Quarterly, 37(4), 1265–1274.
- Mao, J.-Y., & Benbasat, I. (2000). The use of explanations in knowledge-based systems: Cognitive perspectives and a process-tracing analysis. Journal of Management Information Systems, 17(2), 153–179. https://doi.org/10.1080/07421222.2000.11045646
- Martens, D., & Provost, F. (2014). Explaining data-driven document classifications. MIS Quarterly, 38(1), 73–99. https://doi.org/10.25300/MISQ/2014/38.1.04
- Matzner, M., Büttgen, M., Demirkan, H., Spohrer, J., Alter, S., Fritzsche, A., Ng, I. C. L., Jonas, J. M., Martinez, V., Möslein, K. M., & Neely, A. (2018). Digital transformation in service management. Journal of Service Management Research, 2(2), 3–21. https://doi.org/10.15358/2511-8676-2018-2-3
- McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. https://doi.org/10.1609/aimag.v27i4.1904
- McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. C., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., & Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6
- Mikalef, P., Popovic, A., Lundström, J. E., & Conboy, K. (2020). Special issue call for papers: Dark side of analytics and AI. The European Journal of Information Systems. https://www.journalconferencejob.com/ejis-dark-side-of-analytics-and-ai
- Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
- Narla, A., Kuprel, B., Sarin, K., Novoa, R., & Ko, J. (2018). Automated classification of skin lesions: From pixels to practice. Journal of Investigative Dermatology, 138(10), 2108–2110. https://doi.org/10.1016/j.jid.2018.06.175
- Papamichail, K. N., & French, S. (2005). Design and evaluation of an intelligent decision support system for nuclear emergencies. Decision Support Systems, 41(1), 84–111. https://doi.org/10.1016/j.dss.2004.04.014
- Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing bias in artificial intelligence in health care. JAMA, 322(24), 2377–2378. https://doi.org/10.1001/jama.2019.18058
- Radaelli, L., de Montjoye, Y.-A., Singh, V. K., & Pentland, A. P. (2015). Unique in the shopping mall: On the reidentifiability of credit card metadata. Science, 347(6221), 536–539. https://doi.org/10.1126/science.1256297
- Rahman, M., Yu, X., & Srinivasan, B. (1999). A neural networks based approach for fast mining characteristic rules. In N. Foo (Ed.), Advanced topics in artificial intelligence. AI'99. Lecture Notes in Computer Science, vol. 1747 (pp. 36–47). Springer. https://doi.org/10.1007/3-540-46695-9_4
- Rai, A., Constantinides, P., & Sarker, S. (2019). Editor's comments: Next-generation digital platforms: Toward human-AI hybrids. MIS Quarterly, 43(1), 3–4.
- Ras, G., van Gerven, M., & Haselager, P. (2018). Explanation methods in deep learning: Users, values, concerns and challenges. In H. J. Escalante, S. Escalera, I. Guyon, X. Baró, Y. Güçlütürk, U. Güçlü, & M. van Gerven (Eds.), Explainable and interpretable models in computer vision and machine learning. The Springer series on challenges in machine learning (pp. 19–36). Springer.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
- Schneider, J., & Handali, J. (2019). Personalized explanation in machine learning: A conceptualization. 27th European Conference on Information Systems (ECIS 2019), 1–17. https://arxiv.org/pdf/1901.00770.pdf
- Schneider, J., Handali, J., Vlachos, M., & Meske, C. (2020). Deceptive AI explanations: Creation and detection. arXiv preprint arXiv:2001.07641. https://arxiv.org/pdf/2001.07641.pdf
- Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756
- Shirer, M., & Daquila, M. (2019). Worldwide spending on artificial intelligence systems will be nearly $98 billion in 2023, according to new IDC spending guide. International Data Corporation (IDC).
- Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359. https://doi.org/10.1038/nature24270
- Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828–841. https://doi.org/10.1109/TEVC.2019.2890858
- Sutton, S. G., Arnold, V., & Holt, M. (2018). How much automation is too much? Keeping the human relevant in knowledge work. Journal of Emerging Technologies in Accounting, 15(2), 15–25. https://doi.org/10.2308/jeta-52311
- Swartout, W. R., & Smoliar, S. W. (1987). On making expert systems more like experts. Expert Systems, 4(3), 196–208. https://doi.org/10.1111/j.1468-0394.1987.tb00143.x
- European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Off. J. Eur. Union L119; pp. 1–88).
- van Lent, M., Fisher, W., & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence, 900–907. https://www.aaai.org/Papers/IAAI/2004/IAAI04-019.pdf
- Vidgen, R., Shaw, S., & Grant, D. B. (2017). Management challenges in creating value from business analytics. European Journal of Operational Research, 261(2), 626–639. https://doi.org/10.1016/j.ejor.2017.02.023
- Wang, H., Li, C., Gu, B., & Min, W. (2019). Does AI-based credit scoring improve financial inclusion? Evidence from online payday lending. In Proceedings of the 40th International Conference on Information Systems, Paper ID 3418, Munich, Germany, pp. 1–9.
- Watson, H. (2017). Preparing for the cognitive generation of decision support. MIS Quarterly Executive, 16(2), 153–169. https://www.semanticscholar.org/paper/Preparing-for-the-Cognitive-Generation-of-Decision-Watson/766825192ccec1419564c9882a857339cc4e9a44
- Wood, S., & Schulman, K. (2019). The doctor-of-the-future is in: Patient responses to disruptive health-care innovations. Journal of the Association for Consumer Research, 4(3), 231–243. https://doi.org/10.1086/704106
- Xiao, L., Shen, X.-L., Cheng, X., Mou, J., & Zarifis, A. (2020). Call for Papers - The Dark Sides of AI. Electronic Markets. https://www.springer.com/journal/12525/updates/17695144
- Yampolskiy, R. V. (2019). Predicting future AI failures from historic examples. Foresight, 21(1), 138–152. https://doi.org/10.1108/FS-04-2018-0034
- Ye, L. R., & Johnson, P. E. (1995). The impact of explanation facilities on user acceptance of expert systems advice. MIS Quarterly, 19(2), 157–172. https://doi.org/10.2307/249686
- Zolbanin, H. M., Delen, D., Crosby, D., & Wright, D. (2019). A predictive analytics-based decision support system for drug courts. Information Systems Frontiers, 22, 1–20. https://doi.org/10.1007/s10796-019-09934-w
- Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5