Explainable AI: The Effect of Contradictory Decisions and Explanations on Users’ Acceptance of AI Systems

Pages 1807–1826 | Received 31 Oct 2021, Accepted 06 Sep 2022, Published online: 14 Oct 2022

References

  • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.
  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Agrawal, A., Gans, J., Goldfarb, A. (2016, October 7). Managing the machines. Data Science Association. Retrieved October 16, 2020, from http://www.datascienceassn.org/sites/default/files/Managing%20the%20Machines%20-%20AI%20is%20Making%20Prediction%20Cheap,%20Posing%20New%20Challenges%20for%20Managers.pdf
  • Agrawal, A., Gans, J., Goldfarb, A. (2017). What to expect from artificial intelligence. MIT Sloan Management Review. Retrieved October 16, 2020, from https://static1.squarespace.com/static/578cf5ace58c62ac649ec9ce/t/589a5c99440243b575aaedaa/1486511270947/What+to+Expect+From+Artificial+Intelligence.pdf
  • Agustina, L., Meyliana, M., & Tin, S. T. S. (2017). Assessing accounting students’ performance in “cognitive misfit” condition. Journal of Business & Retail Management Research, 11(4), 131–139. https://doi.org/10.24052/JBRMR/V11IS04/AASPICMC
  • Ahmad, S. N., & Laroche, M. (2017). Analyzing electronic word of mouth: A social commerce construct. International Journal of Information Management, 37(3), 202–213. https://doi.org/10.1016/j.ijinfomgt.2016.08.004
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction [Paper presentation]. Proceedings of the Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300233
  • Araújo, J., & Pestana, G. (2017). A framework for social well-being and skills management at the workplace. International Journal of Information Management, 37(6), 718–725. https://doi.org/10.1016/j.ijinfomgt.2017.07.009
  • Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
  • Arnold, V., Clark, N., Collier, P. A., Leech, S. A., & Sutton, S. G. (2006). The differential use and effect of knowledge-based system explanations in novice and expert judgment decisions. MIS Quarterly, 30(1), 79–97. https://doi.org/10.2307/25148718
  • Asiegbu, I. F., Powei, D. M., & Iruka, C. H. (2012). Consumer attitude: Some reflections on its concept, trilogy, relationship with consumer behavior, and marketing implications. European Journal of Business and Management, 4(13), 38–50.
  • Batra, D., & Antony, S. R. (2001). Consulting support during conceptual database design in the presence of redundancy in requirements specifications: An empirical study. International Journal of Human-Computer Studies, 54(1), 25–51. https://doi.org/10.1006/ijhc.2000.0406
  • Beaudry, A., & Pinsonneault, A. (2010). The other side of acceptance: Studying the direct and indirect effects of emotions on information technology use. MIS Quarterly, 34(4), 689–710. https://doi.org/10.2307/25750701
  • Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351–370. https://doi.org/10.2307/3250921
  • Bierhoff, H. W., & Petermann, F. (2013). Forschungsmethoden der Psychologie. Hogrefe.
  • Bouakkaz, M., Ouinten, Y., Loudcher, S., & Strekalova, Y. (2017). Textual aggregation approaches in OLAP context: A survey. International Journal of Information Management, 37(6), 684–692. https://doi.org/10.1016/j.ijinfomgt.2017.06.005
  • Brandtzaeg, P. B., Følstad, A. (2017). Why people use chatbots [Paper presentation]. Proceedings of the International Conference on Internet Science.
  • Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.
  • Byrne, R. M. (2019). Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning [Paper presentation]. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence.
  • Capra, M. C. (2004). Mood-driven behavior in strategic interactions. American Economic Review, 94(2), 367–372. https://doi.org/10.1257/0002828041301885
  • Chan, D. (1996). Cognitive misfit of problem-solving style at work: A facet of person-organization fit. Organizational Behavior and Human Decision Processes, 68(3), 194–207. https://doi.org/10.1006/obhd.1996.0099
  • Chea, S., Luo, M. M. (2007). Cognition, emotion, satisfaction, and post-adoption behaviors of e-service customers [Paper presentation]. Proceedings of the Hawaii International Conference on System Sciences.
  • Chi, N. W., Chang, H. T., & Huang, H. L. (2015). Can personality traits and daily positive mood buffer the harmful effects of daily negative mood on task performance and service sabotage? A self-control perspective. Organizational Behavior and Human Decision Processes, 131, 1–15. https://doi.org/10.1016/j.obhdp.2015.07.005
  • Choi, I. Y., Oh, M. G., Kim, J. K., & Ryu, Y. U. (2016). Collaborative filtering with facial expressions for online video recommendation. International Journal of Information Management, 36(3), 397–402. https://doi.org/10.1016/j.ijinfomgt.2016.01.005
  • Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018). Notes from the AI frontier: Insights from hundreds of use cases. McKinsey Global Institute. Retrieved October 16, 2020, from http://governance40.com/wp-content/uploads/2018/12/MGI_Notes-from-AI-Frontier_Discussion-paper.pdf
  • Chung, W. (2014). BizPro: Extracting and categorizing business intelligence factors from textual news articles. International Journal of Information Management, 34(2), 272–284. https://doi.org/10.1016/j.ijinfomgt.2014.01.001
  • Chuttur, M. Y. (2009). Overview of the technology acceptance model: Origins, developments and future directions. Working Papers on Information Systems, 9(37), 1–24. http://sprouts.aisnet.org/9-37
  • Cook, M., Heshmat, S. (2019). The symbiosis of humans and machines: Planning for our AI-augmented future. Cognizant. Retrieved October 11, 2020, from https://www.cognizant.com/whitepapers/planning-for-our-ai-augmented-future-codex4744.pdf
  • Courtney, H., Kirkland, J., & Viguerie, P. (1997). Strategy under uncertainty. Harvard Business Review, 75(6), 67–79.
  • Craig, A. D., Reiman, E. M., Evans, A. C., & Bushnell, M. C. (1996). Functional imaging of an illusion of pain. Nature, 384(6606), 258–260.
  • Critchley, H. D. (2005). Neural mechanisms of autonomic, affective and cognitive integration. The Journal of Comparative Neurology, 493(1), 154–166. https://doi.org/10.1002/cne.20749
  • Dalal, N. P., & Kasper, G. M. (1994). The design of joint cognitive systems: The effect of cognitive coupling on performance. International Journal of Human-Computer Studies, 40(4), 677–702. https://doi.org/10.1006/ijhc.1994.1031
  • Dennis, A. R., & Carte, T. A. (1998). Using geographical information systems for decision making: Extending cognitive fit theory to map-based presentations. Information Systems Research, 9(2), 194–203. https://doi.org/10.1287/isre.9.2.194
  • Devine, P. G., Tauer, J. M., Barron, K. E., Elliot, A. J., & Vance, K. M. (1999). Moving beyond attitude change in the study of dissonance-related processes. In E. Harmon-Jones & J. Mills (Eds.), Science conference series. Cognitive dissonance: Progress on a pivotal theory in social psychology (pp. 297–323). American Psychological Association.
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
  • Doyle, J. R. (2012). Survey of time preference, delay discounting models. Judgment and Decision Making, 8(2), 116–135. https://doi.org/10.2139/ssrn.1685861
  • Dugdale, J. (1996). A cooperative problem-solver for investment management. International Journal of Information Management, 16(2), 133–147. https://doi.org/10.1016/0268-4012(95)00074-7
  • Edwards, J. S., Duan, Y., & Robins, P. C. (2000). An analysis of expert systems for business decision making at different levels and in different roles. European Journal of Information Systems, 9(1), 36–46. https://doi.org/10.1057/palgrave.ejis.3000344
  • Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. In C. Stephanidis, M. Kurosu, H. Degen, & L. Reinerman-Jones (Eds.), HCI international 2020 - Late breaking papers: Multimodality and intelligence. HCII 2020. Lecture notes in computer science (pp. 449–466). Springer. https://doi.org/10.1007/978-3-030-60117-1_33
  • Elliot, A. J., & Devine, P. G. (1994). On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort. Journal of Personality and Social Psychology, 67(3), 382–394. https://doi.org/10.1037/0022-3514.67.3.382
  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
  • Festinger, L., & Carlsmith, J. M. (1959). Cognitive consequences of forced compliance. The Journal of Abnormal and Social Psychology, 58(2), 203–210. https://doi.org/10.1037/h0041593
  • Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13(1), 1–17. https://doi.org/10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S
  • Fischer, G. (2001). User modeling in human–computer interaction. User Modeling and User-Adapted Interaction, 11(1/2), 65–86. https://doi.org/10.1023/A:1011145532042
  • Frias-Martinez, E., Magoulas, G., Chen, S., & Macredie, R. (2006). Automated user modeling for personalized digital libraries. International Journal of Information Management, 26(3), 234–248. https://doi.org/10.1016/j.ijinfomgt.2006.02.006
  • Gedikli, F., Jannach, D., & Ge, M. (2014). How should I explain? A comparison of different explanation types for recommender systems. International Journal of Human-Computer Studies, 72(4), 367–382. https://doi.org/10.1016/j.ijhcs.2013.12.007
  • George, J. M. (1991). State or trait: Effects of positive mood on prosocial behaviors at work. Journal of Applied Psychology, 76(2), 299–307. https://doi.org/10.1037/0021-9010.76.2.299
  • Giboney, J. S., Brown, S. A., Lowry, P. B., & Nunamaker, J. F. Jr. (2015). User acceptance of knowledge-based system recommendations: Explanations, arguments, and fit. Decision Support Systems, 72, 1–10. https://doi.org/10.1016/j.dss.2015.02.005
  • Glass, G. V. (1966). Testing homogeneity of variances. American Educational Research Journal, 3(3), 187–190. https://doi.org/10.3102/00028312003003187
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2019). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009
  • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
  • Guszcza, J., Lewis, H., Evans-Greenwood, P. (2017). Cognitive collaboration: Why humans and computers think better together. Deloitte University Press. Retrieved October 17, 2020, from https://www2.deloitte.com/us/en/insights/deloitte-review/issue-20/augmented-intelligence-human-computer-collaboration.html
  • Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Random House.
  • Harmon-Jones, E. (2000). Cognitive dissonance and experienced negative affect: Evidence that dissonance increases experienced negative affect even in the absence of aversive consequences. Personality and Social Psychology Bulletin, 26(12), 1490–1501. https://doi.org/10.1177/01461672002612004
  • Hasan, B. (2010). Exploring gender differences in online shopping attitude. Computers in Human Behavior, 26(4), 597–601. https://doi.org/10.1016/j.chb.2009.12.012
  • Hoffman, R. R., Klein, G., & Mueller, S. T. (2018). Explaining explanation for “explainable AI”. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 197–201. https://doi.org/10.1177/1541931218621047
  • Hohman, F., Kahng, M., Pienta, R., & Chau, D. H. (2019). Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics, 25(8), 2674–2693. https://doi.org/10.1109/TVCG.2018.2843369
  • Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Mueller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312
  • Howell, D. (2002). Statistical methods for psychology (5th ed.). Duxbury Press.
  • Huang, Z., Chen, H., Guo, F., Xu, J. J., Wu, S., & Chen, W. H. (2006). Expertise visualization: An implementation and study based on cognitive fit theory. Decision Support Systems, 42(3), 1539–1557. https://doi.org/10.1016/j.dss.2006.01.006
  • Hwang, M. I. (1994). Decision making under time pressure: A model for information systems research. Information & Management, 27(4), 197–203. https://doi.org/10.1016/0378-7206(94)90048-5
  • Im, I., Hong, S., & Kang, M. S. (2011). An international comparison of technology adoption: Testing the UTAUT model. Information & Management, 48(1), 1–8. https://doi.org/10.1016/j.im.2010.09.001
  • Ingram, J., Maciejewski, G., & Hand, C. J. (2020). Changes in diet, sleep, and physical activity are associated with differences in negative mood during COVID-19 lockdown. Frontiers in Psychology, 11, 2328. https://doi.org/10.3389/fpsyg.2020.588604
  • Jain, V. (2014). 3D model of attitude. International Journal of Advanced Research in Management and Social Sciences, 3(3), 1–12.
  • Janssen, C. P., Donker, S. F., Brumby, D. P., & Kun, A. L. (2019). History and future of human-automation interaction. International Journal of Human-Computer Studies, 131, 99–107. https://doi.org/10.1016/j.ijhcs.2019.05.006
  • Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007
  • Jordan, P. J., Lawrence, S. A., & Troth, A. C. (2006). The impact of negative mood on team performance. Journal of Management & Organization, 12(2), 131–145. https://doi.org/10.1017/S1833367200004077
  • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185
  • Kao, J. H., Chan, T. C., Lai, F., Lin, B. C., Sun, W. Z., Chang, K. W., Leu, F. Y., & Lin, J. W. (2017). Spatial analysis and data mining techniques for identifying risk factors of out-of-hospital cardiac arrest. International Journal of Information Management, 37(1), 1528–1538. https://doi.org/10.1016/j.ijinfomgt.2016.04.008
  • Keane, M. T., Kenny, E. M., Delaney, E., & Smyth, B. (2021). If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual XAI techniques. arXiv. https://arxiv.org/abs/2103.01035
  • Kerly, A., Bull, S. (2006). The potential for chatbots in negotiated learner modelling: A wizard-of-oz study [Paper presentation]. Proceedings of the International Conference on Intelligent Tutoring Systems.
  • Kim, J. K., Kim, H. K., Oh, H. Y., & Ryu, Y. U. (2010). A group recommendation system for online communities. International Journal of Information Management, 30(3), 212–219. https://doi.org/10.1016/j.ijinfomgt.2009.09.006
  • Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface [Paper presentation]. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.
  • Klein, G. (2015). A naturalistic decision making perspective on studying intuitive decision making. Journal of Applied Research in Memory and Cognition, 4(3), 164–168. https://doi.org/10.1016/j.jarmac.2015.07.001
  • Klemmer, S. R., Sinha, A. K., Chen, J., Landay, J. A., Aboobaker, N., & Wang, A. (2000). Suede: A wizard of Oz prototyping tool for speech user interfaces [Paper presentation]. Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology. https://doi.org/10.1145/354401.354406
  • Kolbjørnsrud, V., Amico, R., Thomas, R. J. (2016). The promise of artificial intelligence: Redefining management in the workforce of the future. Accenture Institute for High Performance Business. Retrieved October 2, 2020, from https://www.accenture.com/_acnmedia/PDF-19/AI_in_Management_Report.PDF
  • Lamberti, D. M., & Wallace, W. A. (1990). Intelligent interface design: An empirical assessment of knowledge presentation in expert systems. MIS Quarterly, 14(3), 279–311. https://doi.org/10.2307/248891
  • Laumer, S., Maier, C., Weitzel, T., Eckhardt, A. (2012). The implementation of large-scale information systems in small and medium-sized enterprises – A case study of work- and health-related consequences [Paper presentation]. Proceedings of the Hawaii International Conference on System Sciences.
  • Lebib, F. Z., Mellah, H., & Drias, H. (2017). Enhancing information source selection using a genetic algorithm and social tagging. International Journal of Information Management, 37(6), 741–749. https://doi.org/10.1016/j.ijinfomgt.2017.07.011
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668. https://doi.org/10.1177/2053951718756684
  • Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S. (2015). Emotion and decision making. Annual Review of Psychology, 66(1), 799–823. https://doi.org/10.1146/annurev-psych-010213-115043
  • Licklider, J. C. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11. https://doi.org/10.1109/THFE2.1960.4503259
  • Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231
  • MacFarland, T. W., & Yates, J. M. (2016). Introduction to nonparametric statistics for the biological sciences using R. Springer.
  • Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., & Söllner, M. (2019). AI-based digital assistants. Business & Information Systems Engineering, 61(4), 535–544. https://doi.org/10.1007/s12599-019-00600-8
  • Makanyeza, C. (2014). Measuring consumer attitude towards imported poultry meat products in a developing market: An assessment of reliability, validity and dimensionality of the tri-component attitude model. Mediterranean Journal of Social Sciences, 5(20), 874–881. https://doi.org/10.5901/mjss.2014.v5n20p874
  • Martinie, M. A., Milland, L., & Olive, T. (2013). Some theoretical considerations on attitude, arousal and affect during cognitive dissonance. Social and Personality Psychology Compass, 7(9), 680–688. https://doi.org/10.1111/spc3.12051
  • Martinie, M. A., Olive, T., Milland, L., Joule, R. V., & Capa, R. L. (2013). Evidence that dissonance arousal is initially undifferentiated and only later labeled as negative. Journal of Experimental Social Psychology, 49(4), 767–770. https://doi.org/10.1016/j.jesp.2013.03.003
  • Matz, D. C., & Wood, W. (2005). Cognitive dissonance in groups: The consequences of disagreement. Journal of Personality and Social Psychology, 88(1), 22–37. https://doi.org/10.1037/0022-3514.88.1.22
  • McAfee, A., & Brynjolfsson, E. (2016). Human work in the robotic future: Policy for the age of automation. Foreign Affairs, 95(4), 139–150.
  • Miron, A. M., & Brehm, J. W. (2006). Reactance theory-40 years later. Zeitschrift Für Sozialpsychologie, 37(1), 9–18. https://doi.org/10.1024/0044-3514.37.1.9
  • Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
  • Mueller, S. T., Hoffman, R. R., Clancey, W., Emrey, A., Klein, G. (2019). Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI. Retrieved October 11, 2020, from https://arxiv.org/abs/1902.01876
  • Nesterkin, D. A. (2013). Organizational change and psychological reactance. Journal of Organizational Change Management, 26(3), 573–594. https://doi.org/10.1108/09534811311328588
  • Nissen, M. E., & Sengupta, K. (2006). Incorporating software agents into supply chains: Experimental investigation with a procurement task. MIS Quarterly, 30(1), 145–166. https://doi.org/10.2307/25148721
  • Ogiela, L., & Ogiela, M. R. (2014). Cognitive systems for intelligent business information management in cognitive economy. International Journal of Information Management, 34(6), 751–760. https://doi.org/10.1016/j.ijinfomgt.2014.08.001
  • Pavey, L., & Sparks, P. (2009). Reactance, autonomy and paths to persuasion: Examining perceptions of threats to freedom and informational value. Motivation and Emotion, 33(3), 277–290. https://doi.org/10.1007/s11031-009-9137-1
  • Pentland, A., Daggett, M., Hurley, M. (2019, March). Human-AI decision systems. MIT Connection Science. Retrieved September 13, 2020, from https://connection.mit.edu/sites/default/files/publication-pdfs/Human-AIDecisionSystems.pdf
  • Pieters, W. (2011). Explanation and trust: What to tell the user in security and AI? Ethics and Information Technology, 13(1), 53–64. https://doi.org/10.1007/s10676-010-9253-3
  • Pisz, I., & Łapuńka, I. (2017). Logistics project planning under conditions of risk and uncertainty. Transport Economics and Logistics, 68(1), 89–102. https://doi.org/10.5604/01.3001.0010.5325
  • Putri, F. F., Hovav, A. (2014). Employees compliance with BYOD security policy: Insights from reactance, organizational justice, and protection motivation theory [Paper presentation]. Proceedings of the European Conference on Information System.
  • Pyszczynski, T., Greenberg, J., Solomon, S., Sideris, J., & Stubing, M. J. (1993). Emotional expression and the reduction of motivated cognitive bias: Evidence from cognitive dissonance and distancing from victims’ paradigms. Journal of Personality and Social Psychology, 64(2), 177–186. https://doi.org/10.1037/0022-3514.64.2.177
  • Ragini, J. R., Anand, P. R., & Bhaskar, V. (2018). Big data analytics for disaster response and recovery through sentiment analysis. International Journal of Information Management, 42, 13–24. https://doi.org/10.1016/j.ijinfomgt.2018.05.004
  • Rai, A., Constantinides, P., & Sarker, S. (2019). Editor’s comments: Next-generation digital platforms: Toward human–AI hybrids. MIS Quarterly, 43(1), iii–ix.
  • Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5
  • Ramírez-Noriega, A., Juárez-Ramírez, R., & Martínez-Ramírez, Y. (2017). Evaluation module based on Bayesian networks to Intelligent Tutoring Systems. International Journal of Information Management, 37(1), 1488–1498. https://doi.org/10.1016/j.ijinfomgt.2016.05.007
  • Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review, 59(1), 1–17.
  • Rauniar, R., Rawski, G., Yang, J., & Johnson, B. (2014). Technology acceptance model (TAM) and social media usage: An empirical study on Facebook. Journal of Enterprise Information Management, 27(1), 6–30. https://doi.org/10.1108/JEIM-04-2012-0011
  • Read, W., Robertson, N., & McQuilken, L. (2011). A novel romance: The technology acceptance model with emotional attachment. Australasian Marketing Journal, 19(4), 223–229. https://doi.org/10.1016/j.ausmj.2011.07.004
  • Rekik, R., Kallel, I., Casillas, J., & Alimi, A. M. (2018). Assessing web sites quality: A systematic literature review by text and association rules mining. International Journal of Information Management, 38(1), 201–216. https://doi.org/10.1016/j.ijinfomgt.2017.06.007
  • Rhodewalt, F., & Comer, R. (1979). Induced-compliance attitude change: Once more with feeling. Journal of Experimental Social Psychology, 15(1), 35–47. https://doi.org/10.1016/0022-1031(79)90016-7
  • Rietz, T., Benke, I., Maedche, A. (2019). The impact of anthropomorphic and functional chatbot design features in enterprise collaboration systems on user acceptance [Paper presentation]. Proceedings of the 2019 Conference on Wirtschaftsinformatik.
  • Rosenfeld, A., & Richardson, A. (2019). Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems, 33(6), 673–705. https://doi.org/10.1007/s10458-019-09408-y
  • Rosenthal, R. (1968). An application of the Kolmogorov-Smirnov test for normality with estimated mean and variance. Psychological Reports, 22(2), 570. https://doi.org/10.2466/pr0.1968.22.2.570
  • Roth, E. M., Bennett, K. B., & Woods, D. D. (1987). Human interaction with an “intelligent” machine. International Journal of Man-Machine Studies, 27(5–6), 479–525. https://doi.org/10.1016/S0020-7373(87)80012-3
  • Rushkoff, D. (2019). Team human. Norton & Company.
  • Rzepka, C., Berger, B. (2018). User interaction with AI-enabled systems: A systematic review of IS research [Paper presentation]. Proceedings of the International Conference of Information Systems.
  • Samek, W., Binder, A., Montavon, G., Lapuschkin, S., & Muller, K. (2017). Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11), 2660–2673. https://doi.org/10.1109/TNNLS.2016.2599820
  • Scheutz, M., DeLoach, S. A., & Adams, J. A. (2017). A framework for developing and using shared mental models in human-agent teams. Journal of Cognitive Engineering and Decision Making, 11(3), 203–224. https://doi.org/10.1177/1555343416682891
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization [Paper presentation]. Proceedings of the IEEE International Conference on Computer Vision.
  • Sharp, J. H. (2006). Development, extension, and application: A review of the technology acceptance model. Director, 7(9), 3–11.
  • Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. https://doi.org/10.1016/j.jbusres.2020.04.030
  • Shin, D., & Biocca, F. (2018). Exploring immersive experience in journalism what makes people empathize with and embody immersive journalism? New Media & Society, 20(8), 2800–2823. https://doi.org/10.1177/1461444817733133
  • Shin, D. (2019). Toward fair, accountable, and transparent algorithms: Case studies on algorithm initiatives in Korea and China. Javnost: The Public, 26(3), 1–17. https://doi.org/10.1080/13183222.2019.1589249
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
  • Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  • Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Behavioral decision theory. Annual Review of Psychology, 28(1), 1–39. https://doi.org/10.1146/annurev.ps.28.020177.000245
  • Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telematics and Informatics, 47, 101324. https://doi.org/10.1016/j.tele.2019.101324
  • Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026
  • Tang, K. Y., & Hsiao, C. H. (2016). The literature development of technology acceptance model. International Journal of Conceptions on Management and Social Sciences, 4(1), 1–4.
  • Tintarev, N., & Masthoff, J. (2012). Evaluating the effectiveness of explanations for recommender systems. User Modeling and User-Adapted Interaction, 22(4–5), 399–439. https://doi.org/10.1007/s11257-011-9117-5
  • Tjoa, E., & Guan, C. (2021). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813. https://doi.org/10.1109/TNNLS.2020.3027314
  • Tsekouras, D., Li, T. (2015). The dual role of perceived effort in personalized recommendations [Paper presentation]. Proceedings of the European Conference on Information System.
  • Tseng, S. S., Chen, H. C., Hu, L. L., & Lin, Y. T. (2017). CBR-based negotiation RBAC model for enhancing ubiquitous resources management. International Journal of Information Management, 37(1), 1539–1550. https://doi.org/10.1016/j.ijinfomgt.2016.05.009
  • van den Bosch, K., & Bronkhorst, A. (2018). Human-AI cooperation to benefit military decision making. NATO. Retrieved January 3, 2021, from https://www.karelvandenbosch.nl/documents/2018_Bosch_etal_NATO-IST160_Human-AI_Cooperation_in_Military_Decision_Making.pdf
  • Vessey, I. (1991). Cognitive fit: A theory‐based analysis of the graphs versus tables literature. Decision Sciences, 22(2), 219–240. https://doi.org/10.1111/j.1540-5915.1991.tb00344.x
  • Vinson, N. G., Molyneaux, H., & Martin, J. D. (2019). Explanations in artificial intelligence decision making: A user acceptance perspective. In P. Isaias & K. Blashki (Eds.), Handbook of research on human-computer interfaces and new modes of interactivity (pp. 96–117). IGI Global.
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
  • Wang, Y. Y., & Wang, Y. S. (2022). Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interactive Learning Environments, 30(4), 619–634. https://doi.org/10.1080/10494820.2019.1674887
  • Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140. https://doi.org/10.1080/17470216008416717
  • Wason, P. C. (1968). On the failure to eliminate hypotheses: A second look. In P. C. Wason & P. N. Johnson-Laird (Eds.), Thinking and reasoning (pp. 165–174). Penguin.
  • Williams, M. D., Rana, N. P., & Dwivedi, Y. K. (2015). The unified theory of acceptance and use of technology (UTAUT): A literature review. Journal of Enterprise Information Management, 28(3), 443–488. https://doi.org/10.1108/JEIM-09-2014-0088
  • Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123.
  • Woolson, R. F. (2007). Wilcoxon signed-rank test. Wiley Encyclopedia of Clinical Trials, 1–3. https://doi.org/10.1002/9780471462422.eoct979
  • World Economic Forum. (2016). The global risks report 2016 (11th ed.). World Economic Forum. Retrieved March 2, 2021, from https://www.weforum.org/reports/the-global-risks-report-2016
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining whether, why, and how human-AI interaction is uniquely difficult to design [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376301
  • Yaqoob, I., Hashem, I. A. T., Gani, A., Mokhtar, S., Ahmed, E., Anuar, N. B., & Vasilakos, A. V. (2016). Big data: From beginning to future. International Journal of Information Management, 36(6), 1231–1247. https://doi.org/10.1016/j.ijinfomgt.2016.07.009
  • Ye, L. R., & Johnson, P. E. (1995). The impact of explanation facilities on user acceptance of expert systems advice. MIS Quarterly, 19(2), 157–172. https://doi.org/10.2307/249686
  • Zanna, M. P., & Cooper, J. (1974). Dissonance and the pill: An attribution approach to studying the arousal properties of dissonance. Journal of Personality and Social Psychology, 29(5), 703–709. https://doi.org/10.1037/h0036651
  • Zhang, B., & Sundar, S. S. (2019). Proactive vs. reactive personalization: Can customization of privacy enhance user experience? International Journal of Human-Computer Studies, 128, 86–99. https://doi.org/10.1016/j.ijhcs.2019.03.002
  • Zhao, Y., Tang, L. C., Darlington, M. J., Austin, S. A., & Culley, S. J. (2008). High value information in engineering organisations. International Journal of Information Management, 28(4), 246–258. https://doi.org/10.1016/j.ijinfomgt.2007.09.007
