Research Article

Who Made That Decision and Why? Users’ Perceptions of Human Versus AI Decision-Making and the Power of Explainable-AI

Received 14 Dec 2023, Accepted 24 Apr 2024, Published online: 20 May 2024
