Fairness Perceptions of Artificial Intelligence: A Review and Path Forward

Pages 4-23 | Received 15 Aug 2022, Accepted 02 May 2023, Published online: 26 May 2023

References

  • Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399–416. https://doi.org/10.1111/ijsa.12306
  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Adams, J. S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 2, pp. 267–299). Academic Press.
  • Ahnert, G., Smirnov, I., Lemmerich, F., Wagner, C., & Strohmaier, M. (2021, June). The FairCeptron: A framework for measuring human perceptions of algorithmic fairness. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 401–403). ACM. https://doi.org/10.1145/3450614.3463291
  • Alan, A., Costanza, E., Fischer, J., Ramchurn, S. D., Rodden, T., & Jennings, N. R. (2014). A field study of human-agent interaction for electricity tariff switching. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014) (pp. 965–972).
  • Alarie, B., Niblett, A., & Yoon, A. H. (2018). How artificial intelligence will affect the practice of law. University of Toronto Law Journal, 68(Suppl. 1), 106–124. https://doi.org/10.3138/utlj.2017-0052
  • Albach, M., & Wright, J. R. (2021). The role of accuracy in algorithmic process fairness across multiple domains. In Proceedings of the 22nd ACM Conference on Economics and Computation (pp. 29–49). ACM. https://doi.org/10.1145/3465456.3467620
  • Ambrose, M. L., & Cropanzano, R. (2003). A longitudinal analysis of organizational fairness: An examination of reactions to tenure and promotion decisions. The Journal of Applied Psychology, 88(2), 266–275. https://doi.org/10.1037/0021-9010.88.2.266
  • Antifakos, S., Kern, N., Schiele, B., & Schwaninger, A. (2005). Towards improving trust in context-aware systems by displaying system confidence. In Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices & Services, MobileHCI ’05 (pp. 9–14). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/1085777.1085780
  • Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58(C), 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Bai, B., Dai, H., Zhang, D., Zhang, F., & Hu, H. (2021). The impacts of algorithmic work assignment on fairness perceptions and productivity. In Academy of Management Proceedings (Vol. 2021, No. 1, p. 12335). Academy of Management. https://doi.org/10.5465/AMBPP.2021.175
  • Bankins, S., Formosa, P., Griep, Y., & Richards, D. (2022). AI decision making with dignity? Contrasting workers’ justice perceptions of human and AI decision making in a human resource management context. Information Systems Frontiers, 24(3), 857–875. https://doi.org/10.1007/s10796-021-10223-8
  • Banks, J. (2021). Good robots, bad robots: Morally valenced behavior effects on perceived mind, morality, and trust. International Journal of Social Robotics, 13(8), 2021–2038. https://doi.org/10.1007/s12369-020-00692-3
  • Bansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., Ribeiro, M. T., & Weld, D. (2021). Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–16). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445717
  • Barberis, N., & Thaler, R. (2003). A survey of behavioral finance. In G. M. Constantinides, M. Harris, & R. M. Stulz (Eds.), Handbook of the economics of finance (Vol. 1, pp. 1053–1128). Elsevier.
  • Barlas, P., Kleanthous, S., Kyriakou, K., & Otterbacher, J. (2019, June). What makes an image tagger fair? In Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization (pp. 95–103). ACM. https://doi.org/10.1145/3320435.3320442
  • Barnett, A., Savic, M., Pienaar, K., Carter, A., Warren, N., Sandral, E., Manning, V., & Lubman, D. I. (2021). Enacting ‘more-than-human’ care: Clients’ and counsellors’ views on the multiple affordances of chatbots in alcohol and other drug counselling. The International Journal on Drug Policy, 94(3), 102910. https://doi.org/10.1016/j.drugpo.2020.102910
  • Barsky, A., & Kaplan, S. A. (2007). If you feel bad, it’s unfair: A quantitative synthesis of affect and organizational justice perceptions. The Journal of Applied Psychology, 92(1), 286–295. https://doi.org/10.1037/0021-9010.92.1.286
  • Ben Mimoun, M. S., Poncin, I., & Garnier, M. (2017). Animated conversational agents and e-consumer productivity: The roles of agents and individual characteristics. Information and Management, 54(5), 545–559. https://doi.org/10.1016/j.im.2016.11.008
  • Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4), 9–21. https://doi.org/10.2139/ssrn.3741983
  • Bies, R. J., & Moag, J. F. (1986). Interactional justice: Communication criteria of fairness. In R. J. Lewicki, B. H. Sheppard, & M. H. Bazerman (Eds.), Research on negotiations in organizations (Vol. 1, pp. 43–55). JAI Press.
  • Binns, R. (2020, January). On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 514–524). ACM. https://doi.org/10.1145/3351095.3372864
  • Binns, R. (2022a). Analogies and disanalogies between machine-driven and human-driven legal judgement. Journal of Cross-Disciplinary Research in Computational Law, 1(1). https://journalcrcl.org/crcl/article/view/5
  • Binns, R. (2022b). Human judgment in algorithmic loops: Individual justice and automated decision‐making. Regulation & Governance, 16(1), 197–211. https://doi.org/10.1111/rego.12358
  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM.
  • Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., & Walker, K. (2020). Fairlearn: A toolkit for assessing and improving fairness in AI [Microsoft Technical Report No. MSR-TR-2020-32].
  • Brockner, J., De Cremer, D., van Dijke, M., De Schutter, L., Holtz, B., & Van Hiel, A. (2021). Factors affecting supervisors’ enactment of interpersonal fairness: The interactive relationship between their managers’ informational fairness and supervisors’ sense of power. Journal of Organizational Behavior, 42(6), 800–813. https://doi.org/10.1002/job.2466
  • Brockner, J., Konovsky, M., Cooper-Schneider, R., Folger, R., Martin, C., & Bies, R. J. (1994). Interactive effects of procedural justice and outcome negativity on victims and survivors of job loss. Academy of Management Journal, 37(2), 397–409. https://doi.org/10.2307/256835
  • Brown, A., Chouldechova, A., Putnam-Hornstein, E., Tobin, A., & Vaithianathan, R. (2019, May). Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decision-making in child welfare services. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12). ACM.
  • Butcher, J., & Beridze, I. (2019). What is the state of artificial intelligence governance globally? The RUSI Journal, 164(5–6), 88–96. https://doi.org/10.1080/03071847.2019.1694260
  • Camerer, C. (1999). Behavioral economics: Reunifying psychology and economics. Proceedings of the National Academy of Sciences, 96(19), 10575–10577. https://doi.org/10.1073/pnas.96.19.10575
  • Castelvecchi, D. (2016). Can we open the black box of AI? Nature, 538(7623), 20–23. https://doi.org/10.1038/538020a
  • Chang, M. L., Trafton, G., McCurry, J. M., & Thomaz, A. L. (2021, August). Unfair! Perceptions of fairness in human-robot teams. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 905–912). IEEE. https://doi.org/10.1109/RO-MAN50785.2021.9515428
  • Charisi, V., Imai, T., Rinta, T., Nakhayenze, J. M., & Gomez, R. (2021, June). Exploring the concept of fairness in everyday, imaginary and robot scenarios: A cross-cultural study with children in Japan and Uganda [Paper presentation]. Interaction Design and Children (pp. 532–536). https://doi.org/10.1145/3459990.3465184
  • Chen, B. M., Stremitzer, A., & Tobia, K. (2021). Having your day in robot court. UCLA School of Law, Public Law Research Paper No. 21-20.
  • Cheng, H.-F., Stapleton, L., Wang, R., Bullock, P., Chouldechova, A., Wu, Z. S. S., & Zhu, H. (2021). Soliciting stakeholders’ fairness notions in child maltreatment predictive systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–17). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445308
  • Choi, S., Mattila, A. S., & Bolton, L. E. (2021). To err is human (-oid): How do consumers react to robot service failure and recovery? Journal of Service Research, 24(3), 354–371. https://doi.org/10.1177/1094670520978798
  • Chouldechova, A., Benavides-Prado, D., Fialko, O., & Vaithianathan, R. (2018). A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions [Paper presentation]. Conference on Fairness, Accountability and Transparency (pp. 134–148). PMLR.
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. The Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
  • Colquitt, J. A., & Rodell, J. B. (2015). Measuring justice and fairness. In R. S. Cropanzano & M. L. Ambrose (Eds.), The Oxford handbook of justice in the workplace (p. 187). Oxford University Press.
  • Colquitt, J. A., Hill, E. T., & De Cremer, D. (2023). Forever focused on fairness: 75 years of organizational justice in personnel psychology. Personnel Psychology, 76(2), 413–435. https://doi.org/10.1111/peps.12556
  • Colquitt, J. A., Scott, B. A., Rodell, J. B., Long, D. M., Zapata, C. P., Conlon, D. E., & Wesson, M. J. (2013). Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives. The Journal of Applied Psychology, 98(2), 199–236. https://doi.org/10.1037/a0031757
  • Cook, K. S., & Hegtvedt, K. A. (1983). Distributive justice, equity, and equality. Annual Review of Sociology, 9(1), 217–241. https://doi.org/10.1146/annurev.so.09.080183.001245
  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Part F1296 (pp. 797–806). ACM. https://doi.org/10.1145/3097983.3098095
  • Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
  • Cowgill, B. (2018). Bias and productivity in humans and algorithms: Theory and evidence from resume screening (pp. 29). Columbia Business School, Columbia University.
  • Cropanzano, R., Rupp, D. E., Mohler, C. J., & Schminke, M. (2001). Three roads to organizational justice. In G. R. Ferris (Ed.), Research in personnel and human resources management (Vol. 20, pp. 1–123). Elsevier Science/JAI Press. https://doi.org/10.1016/S0742-7301(01)20001-2
  • Dailey, R. C., & Kirk, D. J. (1992). Distributive and procedural justice as antecedents of job dissatisfaction and intent to turnover. Human Relations, 45(3), 305–317. https://doi.org/10.1177/001872679204500306
  • Daly, J. P., & Tripp, T. M. (1996). Is outcome fairness used to make procedural fairness judgments when procedural information is inaccessible? Social Justice Research, 9(4), 327–349. https://doi.org/10.1007/BF02196989
  • De Cremer, D. (2007). Emotional effects of distributive justice as a function of autocratic leader behavior. Journal of Applied Social Psychology, 37(6), 1385–1404. https://doi.org/10.1111/j.1559-1816.2007.00217.x
  • De Cremer, D. (2020). What does building a fair AI really entail? Harvard Business Review.
  • De Cremer, D., & Chun, J. (2021). Algorithmic evaluation and its unfairness: The centrality of respect and the lack thereof [Unpublished manuscript].
  • De Cremer, D., & De Schutter, L. (2021). How to use algorithmic decision-making to promote inclusiveness in organizations. AI and Ethics, 1(4), 563–567. https://doi.org/10.1007/s43681-021-00073-0
  • De Cremer, D., & Kasparov, G. (2021). AI should augment human intelligence, not replace it. Harvard Business Review.
  • De Cremer, D., & McGuire, J. (2022). Human–algorithm collaboration works best if humans lead (because it is fair!). Social Justice Research, 35(1), 33–55. https://doi.org/10.1007/s11211-021-00382-z
  • De Cremer, D., & Tyler, T. R. (2005). Managing group behavior: The interplay between procedural justice, sense of self, and cooperation. Advances in Experimental Social Psychology, 37, 151–218. https://doi.org/10.1016/S0065-2601(05)37003-1
  • De Cremer, D., Brockner, J., Fishman, A., van Dijke, M., van Olffen, W., & Mayer, D. M. (2010). When do procedural fairness and outcome fairness interact to influence employees’ work attitudes and behaviors? The moderating effect of uncertainty. The Journal of Applied Psychology, 95(2), 291–304. https://doi.org/10.1037/a0017866
  • De Cremer, D., Narayanan, D., Deppeler, A., Nagpal, M., & McGuire, J. (2022). The road to a human-centred digital society: Opportunities, challenges and responsibilities for humans in the age of machines. AI and Ethics, 2(4), 579–583. https://doi.org/10.1007/s43681-021-00116-6
  • De Cremer, D., Zeelenberg, M., & Murnighan, J. K. (2013). Social psychology and economics. Psychology Press.
  • de Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making. AI & Society, 35(4), 917–926. https://doi.org/10.1007/s00146-020-00960-w
  • Deutsch, M. (1975). Equity, equality, and need: What determines which value will be used as the basis of distributive justice? Journal of Social Issues, 31(3), 137–149. https://doi.org/10.1111/j.1540-4560.1975.tb01000.x
  • DeVito, M. A., Gergle, D., & Birnholtz, J. (2017). ‘Algorithms ruin everything’: #RIPTwitter, folk theories, and resistance to algorithmic change in social media. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17 (pp. 3163–3174). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3025453.3025659
  • Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398–415. https://doi.org/10.1080/21670811.2014.976411
  • Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053
  • Dietvorst, B. J., & Bartels, D. M. (2022). Consumers object to algorithms making morally relevant tradeoffs because of algorithms’ consequentialist decision strategies. Journal of Consumer Psychology, 32(3), 406–424. https://doi.org/10.1002/jcpy.1266
  • Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314. https://doi.org/10.1177/0956797620948841
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
  • Dietz, J., Robinson, S. L., Folger, R., Baron, R. A., & Schulz, M. (2003). The impact of community violence and an organization’s procedural justice climate on workplace aggression. Academy of Management Journal, 46(3), 317–326. https://doi.org/10.2307/30040625
  • Dineen, B. R., Noe, R. A., & Wang, C. (2004). Perceived fairness of web‐based applicant screening procedures: Weighing the rules of justice and the role of individual differences. Human Resource Management, 43(2–3), 127–145. https://doi.org/10.1002/hrm.20011
  • Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K., & Dugan, C. (2019, March). Explaining models: An empirical study of how explanations impact fairness judgment. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 275–285). ACM.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18–84. https://doi.org/10.31228/osf.io/97upg
  • Elahi, M., Abdollahpouri, H., Mansoury, M., & Torkamaan, H. (2021, June). Beyond algorithmic fairness in recommender systems. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 41–46). ACM. https://doi.org/10.1145/3450614.3461685
  • Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3290605.3300724
  • Ferraro, A., Serra, X., & Bauer, C. (2021, August). What is fair? Exploring the artists’ perspective on the fairness of music streaming platforms. In IFIP Conference on Human-Computer Interaction (pp. 562–584). Springer.
  • Firestone, C. (2020). Performance vs. competence in human–machine comparisons. Proceedings of the National Academy of Sciences, 117(43), 26562–26571. https://doi.org/10.1073/pnas.1905334117
  • Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication.
  • Fleisher, W. (2021, July). What’s fair about individual fairness? In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 480–490). ACM. https://doi.org/10.1145/3461702.3462621
  • Fleiß, J., Bäck, E., & Thalmann, S. (2020). Explainability and the intention to use AI-based conversational agents. In Proceedings of the First International Workshop on Explainable and Interpretable Machine Learning (XI-ML 2020).
  • Folger, N., Brosi, P., Stumpf-Wollersheim, J., & Welpe, I. M. (2022). Applicant reactions to digital selection methods: A signaling perspective on innovativeness and procedural justice. Journal of Business and Psychology, 37(4), 735–757. https://doi.org/10.1007/s10869-021-09770-3
  • Formosa, P., Rogers, W., Griep, Y., Bankins, S., & Richards, D. (2022). Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Computers in Human Behavior, 133, 107296. https://doi.org/10.1016/j.chb.2022.107296
  • Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered organization: Technology isn’t the biggest challenge, culture is. Harvard Business Review, 97(4), 62–74. https://hbr.org/2019/07/building-the-ai-powered-organization
  • Gamez, P., Shank, D. B., Arnold, C., & North, M. (2020). Artificial virtue: The machine question and perceptions of moral character in artificial moral agents. AI & Society, 35(4), 795–809. https://doi.org/10.1007/s00146-020-00977-1
  • Gino, F., & Pisano, G. (2008). Toward a theory of behavioral operations. Manufacturing & Service Operations Management, 10(4), 676–691. https://doi.org/10.1287/msom.1070.0205
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Goldfarb, A., & Lindsay, J. (2020). Artificial intelligence in war: Human judgment as an organizational strength and a strategic liability. Brookings Institution.
  • Goldman, E. (2005). Search engine bias and the demise of search engine utopianism. Yale Journal of Law & Technology, 8, 188. http://hdl.handle.net/20.500.13051/7858
  • Gonçalves, J., Weber, I., Masullo, G. M., Torres da Silva, M., & Hofhuis, J. (2021). Common sense or censorship: How algorithmic moderators and message type influence perceptions of online content deletion. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211032310
  • Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741
  • Green, B. (2020, January). The false promise of risk assessments: Epistemic reform and the limits of fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 594–606). ACM.
  • Green, B. (2022). Escaping the impossibility of fairness: From formal to substantive algorithmic fairness. Philosophy & Technology, 35(4), 90. https://doi.org/10.1007/s13347-022-00584-6
  • Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In FAT* 2019 – Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (pp. 90–99). ACM. https://doi.org/10.1145/3287560.3287563
  • Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16(2), 399–432. https://doi.org/10.1177/014920639001600208
  • Greenberg, J. (1993). Stealing in the name of justice: Informational and interpersonal moderators of theft reactions to underpayment inequity. Organizational Behavior and Human Decision Processes, 54(1), 81–103. https://doi.org/10.1006/obhd.1993.1004
  • Greenberg, J. (2002). Advances in organizational justice. Stanford University Press.
  • Greenberg, J., & Cropanzano, R. (1993). The social side of fairness: Interpersonal and informational classes of organizational justice. In R. Cropanzano (Ed.), Justice in the workplace: Approaching fairness in human resource management. Lawrence Erlbaum Associates.
  • Grgić-Hlača, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018a). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. In Proceedings of the 2018 World Wide Web Conference (pp. 903–912). ACM. https://doi.org/10.1145/3178876.3186138
  • Grgić-Hlača, N., Weller, A., & Redmiles, E. M. (2020). Dimensions of diversity in human perceptions of algorithmic fairness. arXiv preprint arXiv:2005.00808.
  • Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2018b, April). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1). https://doi.org/10.1609/aaai.v32i1.11296
  • Grudin, J. (1996). The organizational contexts of development and use. ACM Computing Surveys, 28(1), 169–171. https://doi.org/10.1145/234313.234384
  • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
  • Gupta, M., Parra, C. M., & Dennehy, D. (2021). Questioning racial and gender bias in AI-based recommendations: Do espoused national cultural values matter? Information Systems Frontiers, 24(5), 1465–1481. https://doi.org/10.1007/s10796-021-10156-2
  • Hakami, E., & Hernández Leo, D. (2020). How are learning analytics considering the societal values of fairness, accountability, transparency and human well-being? A literature review. In A. Martínez-Monés, A. Álvarez, M. Caeiro-Rodríguez, & Y. Dimitriadis (Eds.), LASI-SPAIN 2020: Learning analytics summer institute Spain 2020: Learning analytics. Time for adoption? (pp. 121–141). CEUR.
  • Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human–robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
  • Hannan, J., Chen, H. Y. W., & Joseph, K. (2021, July). Who gets what, according to whom? An analysis of fairness perceptions in service allocation. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 555–565). ACM. https://doi.org/10.1145/3461702.3462568
  • Harrison, G., Hanson, J., Jacinto, C., Ramirez, J., & Ur, B. (2020). An empirical study on the perceived fairness of realistic, imperfect machine learning models. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 392–402). ACM. https://doi.org/10.1145/3351095.3372831
  • Hase, P., & Bansal, M. (2020). Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5540–5552). https://doi.org/10.18653/v1/2020.acl-main.491
  • Helberger, N., Araujo, T., & de Vreese, C. H. (2020). Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review, 39, 105456. https://doi.org/10.1016/j.clsr.2020.105456
  • Hobson, Z., Yesberg, J. A., Bradford, B., & Jackson, J. (2021). Artificial fairness? Trust in algorithmic police decision-making. Journal of Experimental Criminology, 19(1), 165–189.
  • Höddinghaus, M., Sondern, D., & Hertel, G. (2021). The automation of leadership functions: Would people trust decision algorithms? Computers in Human Behavior, 116, 106635. https://doi.org/10.1016/j.chb.2020.106635
  • Holstein, K., Wortman Vaughan, J., Daumé, H., III, Dudik, M., & Wallach, H. (2019, May). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM.
  • Htun, N. N., Lecluse, E., & Verbert, K. (2021, April). Perception of fairness in group music recommender systems. In 26th International Conference on Intelligent User Interfaces (pp. 302–306). ACM. https://doi.org/10.1145/3397481.3450642
  • Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178(4), 977–1007. https://doi.org/10.1007/s10551-022-05049-6
  • Jarrahi, M. H., Newlands, G., Lee, M. K., Wolf, C. T., Kinder, E., & Sutherland, W. (2021). Algorithmic management in a work context. Big Data & Society, 8(2), 20539517211020332. https://doi.org/10.1177/20539517211020332
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  • Jones, D. A., & Skarlicki, D. P. (2013). How perceptions of fairness can change: A dynamic model of organizational justice. Organizational Psychology Review, 3(2), 138–160. https://doi.org/10.1177/2041386612461665
  • Kaibel, C., Koch-Bayram, I., Biemann, T., & Mühlenbock, M. (2019). Applicant perceptions of hiring algorithms: Uniqueness and discrimination experiences as moderators. Academy of Management Annual Meeting Proceedings, 2019(1), 18172. https://doi.org/10.5465/AMBPP.2019.210
  • Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic folk theories and identity: How TikTok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–44. https://doi.org/10.1145/3476046
  • Kasinidou, M., Kleanthous, S., Barlas, P., & Otterbacher, J. (2021). I agree with the decision, but they didn’t deserve this: Future developers’ perception of fairness in algorithmic decisions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 690–700). ACM. https://doi.org/10.1145/3442188.3445931
  • Kasirzadeh, A. (2022). Algorithmic fairness and structural injustice: Insights from feminist political philosophy. arXiv preprint arXiv:2206.00945. https://doi.org/10.1145/3514094.3534188
  • Kaur, H., Nori, H., Jenkins, S., Caruana, R., Wallach, H., & Wortman Vaughan, J. (2020). Interpreting interpretability: Understanding data scientists’ use of interpretability tools for machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM.
  • Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
  • Kieslich, K., Keller, B., & Starke, C. (2022). Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data & Society, 9(1), 205395172210929. https://doi.org/10.1177/20539517221092956
  • Kim, T. Y., & Leung, K. (2007). Forming and reacting to overall fairness: A cross-cultural comparison. Organizational Behavior and Human Decision Processes, 104(1), 83–95. https://doi.org/10.1016/j.obhdp.2007.01.004
  • Kleanthous, S., Kasinidou, M., Barlas, P., & Otterbacher, J. (2022). Perception of fairness in algorithmic decisions: Future developers’ perspective. Patterns, 3(1), 100380. https://doi.org/10.1016/j.patter.2021.100380
  • Köbis, N., & Mossink, L. D. (2021). Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry. Computers in Human Behavior, 114, 106553. https://doi.org/10.1016/j.chb.2020.106553
  • Konradt, U., Garbers, Y., Erdogan, B., & Bauer, T. (2016). Patterns of change in fairness perceptions during the hiring process. International Journal of Selection and Assessment, 24(3), 246–259. https://doi.org/10.1111/ijsa.12144
  • Kushwaha, A. K., Pharswan, R., & Kar, A. K. (2021). Always trust the advice of AI in difficulties? Perceptions around AI in decision making. In Conference on e-Business, e-Services and e-Society (pp. 132–143). Springer.
  • Kuutti, K., & Bannon, L. J. (2014, April). The turn to practice in HCI: Towards a research agenda. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 3543–3552). ACM.
  • Lai, M. C., Brian, M., & Mamzer, M. F. (2020). Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. Journal of Translational Medicine, 18(1), 1–13. https://doi.org/10.1186/s12967-019-02204-y
  • Langer, M., & Landers, R. N. (2021). The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior, 123(4), 106878. https://doi.org/10.1016/j.chb.2021.106878
  • Langer, M., Baum, K., König, C. J., Hähne, V., Oster, D., & Speith, T. (2021). Spare me the details: How the type of information about automated interviews influences applicant reactions. International Journal of Selection and Assessment, 29(2), 154–169. https://doi.org/10.1111/ijsa.12325
  • Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment, 27(3), 217–234. https://doi.org/10.1111/ijsa.12246
  • Langer, M., König, C. J., Back, C., & Hemsing, V. (2022). Trust in Artificial Intelligence: Comparing trust processes between human and automated trustees in light of unfair bias. Journal of Business and Psychology, 38(2), 1–16. https://doi.org/10.1007/s10869-022-09829-9
  • Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society, 1–37. https://datasociety.net/library/governing-artificial-intelligence/
  • Le Bui, M., & Noble, S. U. (2020). We’re missing a moral framework of justice in artificial intelligence. In Dubber, M. D., Pasquale, F., & Das, S. (Eds.), The Oxford handbook of ethics of AI, (pp. 163–179). Oxford University Press.
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Lee, M. K., & Baykal, S. (2017, February). Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 1035–1048). ACM.
  • Lee, M. K., & Rich, K. (2021, May). Who is included in human perceptions of AI?: Trust and perceived fairness around healthcare AI and cultural mistrust. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–14). https://doi.org/10.1145/3411764.3445570
  • Lee, M. K., Jain, A., Cha, H. J., Ojha, S., & Kusbit, D. (2019a). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–26. https://doi.org/10.1145/3359284
  • Lee, M. K., Kusbit, D., Kahng, A., Kim, J. T., Yuan, X., Chan, A., & See, D. (2019b). WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–35. https://doi.org/10.1145/3359283
  • Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x
  • Leventhal, G. S. (1976). The distribution of rewards and resources in groups and organizations. In L. Berkowitz & E. Walster (Eds.), Advances in experimental social psychology (Vol. 9, pp. 91–131). Academic Press.
  • Li, L., Lassiter, T., Oh, J., & Lee, M. K. (2021, July). Algorithmic hiring in practice: Recruiter and HR professional’s perspectives on AI use in hiring. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 166–176). ACM.
  • Lima, G., Grgić-Hlača, N., & Cha, M. (2021, May). Human perceptions on moral responsibility of AI: A case study in AI-assisted bail decision-making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–17). ACM. https://doi.org/10.1145/3411764.3445260
  • Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. Springer Science & Business Media.
  • Litoiu, A., Ullman, D., Kim, J., & Scassellati, B. (2015, March). Evidence that robots trigger a cheating detector in humans. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 165–172). https://doi.org/10.1145/2696454.2696456
  • Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  • Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
  • Lyons, H., Velloso, E., & Miller, T. (2021). Fair and Responsible AI: A focus on the ability to contest. arXiv preprint arXiv:2102.10787
  • Marcinkowski, F., Kieslich, K., Starke, C., & Lünich, M. (2020, January). Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 122–130). ACM.
  • Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4), 835–850. https://doi.org/10.1007/s10551-018-3921-3
  • Masrour, F., Tan, P. N., & Esfahanian, A. H. (2020, November). Fairness perception from a network-centric perspective. In 2020 IEEE International Conference on Data Mining (ICDM) (pp. 1178–1183). IEEE. https://doi.org/10.1109/ICDM50108.2020.00145
  • McGuire, J., & De Cremer, D. (2022). Algorithms, leadership, and morality: Why a mere human effect drives the preference for human over algorithmic leadership. AI and Ethics. Advance online publication. https://doi.org/10.1007/s43681-022-00192-2
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607
  • Miller, S. M., & Keiser, L. R. (2021). Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory, 31(1), 150–165. https://doi.org/10.1093/jopart/muaa019
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267(C), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Mirowska, A., & Mesnet, L. (2022). Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews. Human Resource Management Journal, 32(2), 364–383. https://doi.org/10.1111/1748-8583.12393
  • Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8(1), 141–163. https://doi.org/10.1146/annurev-statistics-042720-125902
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Morse, L., Teodorescu, M. H. M., Awwad, Y., & Kane, G. C. (2022). Do the ends justify the means? Variation in the distributive and procedural fairness of machine learning algorithms. Journal of Business Ethics, 181(4), 1083–1095. https://doi.org/10.1007/s10551-021-04939-5
  • Nagtegaal, R. (2021). The impact of using algorithms for managerial decisions on public employees’ procedural justice. Government Information Quarterly, 38(1), 101536. https://doi.org/10.1016/j.giq.2020.101536
  • Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008
  • Noble, S. M., Foster, L. L., & Craig, S. B. (2021). The procedural and interpersonal justice of automated application and resume screening. International Journal of Selection and Assessment, 29(2), 139–153. https://doi.org/10.1111/ijsa.12320
  • Noble, S. U. (2013). Google search: Hyper-visibility as a means of rendering black women and girls invisible. InVisible Culture, 19. https://doi.org/10.47761/494a02f6.50883fff
  • Nørskov, S., Damholdt, M. F., Ulhøi, J. P., Jensen, M. B., Ess, C., & Seibt, J. (2020). Applicant fairness perceptions of a robot-mediated job interview: A video vignette-based experimental survey. Frontiers in Robotics and AI, 7, 586263. https://doi.org/10.3389/frobt.2020.586263
  • Ogunniye, G., Legastelois, B., Rovatsos, M., Dowthwaite, L., Portillo, V., Perez Vallejos, E., Zhao, J., & Jirotka, M. (2021). Understanding user perceptions of trustworthiness in E-recruitment systems. IEEE Internet Computing, 25(6), 23–32. https://doi.org/10.1109/MIC.2021.3115670
  • Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human–machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27–39. https://doi.org/10.1016/j.chb.2018.07.022
  • Park, H., Ahn, D., Hosanagar, K., & Lee, J. (2021, May). Human-AI interaction in human resource management: Understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–15). ACM. https://doi.org/10.1145/3411764.3445304
  • Pierson, E. (2017). Demographics and discussion influence views on algorithmic fairness. arXiv preprint arXiv:1712.09124
  • Rader, E., Cotter, K., & Cho, J. (2018). Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–13). ACM. https://doi.org/10.1145/3173574.3173677
  • Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the facebook news feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI’15 (pp. 173–82). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702174
  • Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
  • Rajkomar, A., Hardt, M., Howell, M. D., Corrado, G., & Chin, M. H. (2018). Ensuring fairness in machine learning to advance health equity. Annals of Internal Medicine, 169(12), 866–872. https://doi.org/10.7326/M18-1990
  • Renier, L. A., Mast, M. S., & Bekbergenova, A. (2021). To err is human, not algorithmic–Robust reactions to erring algorithms. Computers in Human Behavior, 124(C), 106879. https://doi.org/10.1016/j.chb.2021.106879
  • Robert, L. P., Pierce, C., Marquis, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Human–Computer Interaction, 35(5–6), 545–575. https://doi.org/10.1080/07370024.2020.1735391
  • Rolland, F., & Steiner, D. D. (2007). Test‐taker reactions to the selection process: Effects of outcome favorability, explanations, and voice on fairness perceptions. Journal of Applied Social Psychology, 37(12), 2800–2826. https://doi.org/10.1111/j.1559-1816.2007.00282.x
  • Saha, D., Schumann, C., Mcelfresh, D., Dickerson, J., Mazurek, M., & Tschantz, M. (2020, November). Measuring non-expert comprehension of machine learning fairness metrics. In International Conference on Machine Learning (pp. 8377–8387). PMLR.
  • Saxena, N. A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D. C., & Liu, Y. (2020). How do fairness definitions fare? Testing public attitudes towards three algorithmic definitions of fairness in loan allocations. Artificial Intelligence, 283(5), 103238. https://doi.org/10.1016/j.artint.2020.103238
  • Schadenberg, B. R., Reidsma, D., Heylen, D. K., & Evers, V. (2021). “I see what you did there” understanding people’s social perception of a robot and its predictability. ACM Transactions on Human-Robot Interaction, 10(3), 1–28. https://doi.org/10.1145/3461534
  • Schick, J., & Fischer, S. (2021). Dear computer on my desk, which candidate fits best? An assessment of candidates’ perception of assessment quality when using AI in personnel selection. Frontiers in Psychology, 12, 739711. https://doi.org/10.3389/fpsyg.2021.739711
  • Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., & Wallach, D. (2021). What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122(4), 106837. https://doi.org/10.1016/j.chb.2021.106837
  • Schoeffer, J., & Kuehl, N. (2021, October). Appropriate fairness perceptions? On the effectiveness of explanations in enabling people to assess the fairness of automated decision systems. In Companion Publication of the 2021 Conference on Computer Supported Cooperative Work and Social Computing (pp. 153–157). ACM. https://doi.org/10.1145/3462204.3481742
  • Schoeffer, J., Machowski, Y., & Kuehl, N. (2021). A study on fairness and trust perceptions in automated decision making. arXiv preprint arXiv:2103.04757
  • Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency (pp. 59–68). ACM. https://doi.org/10.1145/3287560.3287598
  • Shandilya, A., Dash, A., Chakraborty, A., Ghosh, K., & Ghosh, S. (2021). Fairness for whom? Understanding the reader’s perception of fairness in text summarization. arXiv preprint arXiv:2101.12406
  • Shapiro, D. L., Buttner, E. H., & Barry, B. (1994). Explanations: What factors enhance their perceived adequacy? Organizational Behavior and Human Decision Processes, 58(3), 346–368. https://doi.org/10.1006/obhd.1994.1041
  • Shin, D. (2020). User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. Journal of Broadcasting & Electronic Media, 64(4), 541–565. https://doi.org/10.1080/08838151.2020.1843357
  • Shin, D. (2022). How do people judge the credibility of algorithmic sources? AI & Society, 37(1), 81–96. https://doi.org/10.1007/s00146-021-01158-4
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98(1), 277–284. https://doi.org/10.1016/j.chb.2019.04.019
  • Shin, D., Zaid, B., & Ibahrine, M. (2020, November). Algorithm appreciation: algorithmic performance, developmental processes, and user interactions. In 2020 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI) (pp. 1–5). IEEE. https://doi.org/10.1109/CCCI49893.2020.9256470
  • Short, E., Hart, J., Vu, M., & Scassellati, B. (2010, March). No fair!! An interaction with a cheating robot. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 219–226). IEEE. https://doi.org/10.1109/HRI.2010.5453193
  • Shulner-Tal, A., Kuflik, T., & Kliger, D. (2022). Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics and Information Technology, 24(1), 1–13. https://doi.org/10.1007/s10676-022-09623-4
  • Simshaw, D. (2018). Ethical issues in robo-lawyering: The need for guidance on developing and using artificial intelligence in the practice of law. Hastings Law Journal, 70(1), 173. https://repository.uclawsf.edu/hastings_law_journal/vol70/iss1/4
  • Skewes, J., Amodio, D. M., & Seibt, J. (2019). Social robotics and the modulation of social perception and bias. Philosophical Transactions of the Royal Society of London Series B, 374(1771), 20180037. https://doi.org/10.1098/rstb.2018.0037
  • Skinner, Z., Brown, S., & Walsh, G. (2020, April). Children of color’s perceptions of fairness in AI: An exploration of equitable and inclusive co-design. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–8). ACM. https://doi.org/10.1145/3334480.3382901
  • Smith, J., Sonboli, N., Fiesler, C., & Burke, R. (2020). Exploring user opinions of fairness in recommender systems. arXiv preprint arXiv:2003.06461.
  • Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
  • Sonboli, N., Smith, J. J., Cabral Berenfus, F., Burke, R., & Fiesler, C. (2021, June). Fairness and transparency in recommendation: The users’ perspective. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 274–279). ACM. https://doi.org/10.1145/3450613.3456835
  • Srivastava, M., Heidari, H., & Krause, A. (2019, July). Mathematical notions vs. human perception of fairness: A descriptive approach to fairness for machine learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 2459–2468). ACM.
  • Stai, B., Heller, N., McSweeney, S., Rickman, J., Blake, P., Vasdev, R., Edgerton, Z., Tejpaul, R., Peterson, M., Rosenberg, J., Kalapara, A., Regmi, S., Papanikolopoulos, N., & Weight, C. (2020). Public perceptions of artificial intelligence and robotics in medicine. Journal of Endourology, 34(10), 1041–1048. https://doi.org/10.1089/end.2020.0137
  • Stapleton, L., Lee, M. H., Qing, D., Wright, M., Chouldechova, A., Holstein, K., Wu, Z. S., & Zhu, H. (2022). Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders. In 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1162–1177). ACM. https://doi.org/10.1145/3531146.3533177
  • Starke, C., Baleis, J., Keller, B., & Marcinkowski, F. (2021). Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. arXiv preprint arXiv:2103.12016
  • Stellmach, H., & Lindner, F. (2019). Perception of an uncertain ethical reasoning robot. i-com, 18(1), 79–91. https://doi.org/10.1515/icom-2019-0002
  • Streicher, B., Jonas, E., Maier, G. W., Frey, D., & Spießberger, A. (2012). Procedural fairness and creativity: Does voice maintain people’s creative vein over time? Creativity Research Journal, 24(4), 358–363. https://doi.org/10.1080/10400419.2012.730334
  • Suen, H. Y., Chen, M. Y. C., & Lu, S. H. (2019). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93–101. https://doi.org/10.1016/j.chb.2019.04.012
  • Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2), 205630512110088. https://doi.org/10.1177/20563051211008828
  • Telkamp, J. B., & Anderson, M. H. (2022). The Implications of diverse human moral foundations for assessing the ethicality of artificial intelligence. Journal of Business Ethics, 178(4), 961–976. https://doi.org/10.1007/s10551-022-05057-6
  • Thibaut, J., & Walker, L. (1978). A theory of procedure. California Law Review, 66(3), 541. https://doi.org/10.2307/3480099
  • Tomaino, G., Abdulhalim, H., Kireyev, P., & Wertenbroch, K. (2020). Denied by an (unexplainable) algorithm: Teleological explanations for algorithmic decisions enhance customer satisfaction (SSRN Scholarly Paper ID 3683754). Social Science Research Network. https://doi.org/10.2139/ssrn.3683754
  • Törnblom, K. Y., & Vermunt, R. (1999). An integrative perspective on social justice: Distributive and procedural fairness evaluations of positive and negative outcome allocations. Social Justice Research, 12(1), 39–64. https://doi.org/10.1023/A:1023226307252
  • Torraco, R. J. (2005). Writing integrative literature reviews: Guidelines and examples. Human Resource Development Review, 4(3), 356–367. https://doi.org/10.1177/1534484305278283
  • Tulk, S., & Wiese, E. (2018). Trust and approachability mediate social decision making in human-robot interaction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 704–708. https://doi.org/10.1177/1541931218621160
  • Tyler, T. R., & Bies, R. J. (2015). Beyond formal procedures: The interpersonal context of procedural justice. In Applied social psychology and organizational settings (pp. 77–98). Psychology Press.
  • Vaccaro, K., Sandvig, C., & Karahalios, K. (2020). “At the end of the day Facebook does what it wants”: How users experience contesting algorithmic content moderation. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–22. https://doi.org/10.1145/3415238
  • Vaccaro, M., & Waldo, J. (2019). The effects of mixing machine learning and human judgment. Communications of the ACM, 62(11), 104–110. https://doi.org/10.1145/3359338
  • Van Bavel, J. J., & Pereira, A. (2018). The partisan brain: An identity-based model of political belief. Trends in Cognitive Sciences, 22(3), 213–224. https://doi.org/10.1016/j.tics.2018.01.004
  • van Berkel, N., Goncalves, J., Hettiachchi, D., Wijenayake, S., Kelly, R. M., & Kostakos, V. (2019). Crowdsourcing perceptions of fair predictors for machine learning: A recidivism case study. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–21. https://doi.org/10.1145/3359130
  • van Berkel, N., Goncalves, J., Russo, D., Hosio, S., & Skov, M. B. (2021). Effect of information presentation on fairness perceptions of machine learning predictors. In Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3411764.3445365
  • van Berkel, N., Sarsenbayeva, Z., & Goncalves, J. (2023). The methodology of studying fairness perceptions in Artificial Intelligence: Contrasting CHI and FAccT. International Journal of Human-Computer Studies, 170(C), 102954. https://doi.org/10.1016/j.ijhcs.2022.102954
  • van Berkel, N., Tag, B., Goncalves, J., & Hosio, S. (2022). Human-centred artificial intelligence: A contextual morality perspective. Behaviour & Information Technology, 41(3), 502–518. https://doi.org/10.1080/0144929X.2020.1818828
  • Van den Bos, K., Wilke, H. A., & Lind, E. A. (1998). When do we need procedural fairness? The role of trust in authority. Journal of Personality and Social Psychology, 75(6), 1449–1458. https://doi.org/10.1037/0022-3514.75.6.1449
  • Verma, S., & Rubin, J. (2018). Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (Fairware) (pp. 1–7). IEEE. https://doi.org/10.1145/3194770.3194776
  • Vimalkumar, M., Gupta, A., Sharma, D., & Dwivedi, Y. (2021). Understanding the effect that task complexity has on automation potential and opacity: Implications for algorithmic fairness. AIS Transactions on Human-Computer Interaction, 13(1), 104–129. https://doi.org/10.17705/1thci.00144
  • Walker, K., & Croak, M. (2021). An update on our progress in responsible AI innovation. Google Blog. https://blog.google/technology/ai/update-our-progress-responsible-ai-innovation/
  • Wang, A. J. (2018). Procedural justice and risk-assessment algorithms. SSRN. https://ssrn.com/abstract=3170136 or https://doi.org/10.2139/ssrn.3170136
  • Wang, R., Harper, F. M., & Zhu, H. (2020, April). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM.
  • Wang, X., & Yin, M. (2021). Are explanations helpful? A comparative study of the effects of explanations in AI-assisted decision-making. In 26th International Conference on Intelligent User Interfaces (pp. 318–28). ACM. https://doi.org/10.1145/3397481.3450650
  • Wangmo, T., Lipps, M., Kressig, R. W., & Ienca, M. (2019). Ethical concerns with the use of intelligent assistive technology: Findings from a qualitative study with professional stakeholders. BMC Medical Ethics, 20(1), 1–11. https://doi.org/10.1186/s12910-019-0437-z
  • Werth, R. (2019). Risk and punishment: The recent history and uncertain future of actuarial, algorithmic, and “evidence‐based” penal techniques. Sociology Compass, 13(2), e12659. https://doi.org/10.1111/soc4.12659
  • Wiener, M., Cram, W., & Benlian, A. (2021). Algorithmic control and gig workers: A legitimacy perspective of Uber drivers. European Journal of Information Systems. Advance online publication. https://doi.org/10.1080/0960085X.2021.1977729
  • Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114–123. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
  • Wojcieszak, M., Thakur, A., Ferreira Gonçalves, J. F., Casas, A., Menchen-Trevino, E., & Boon, M. (2021). Can AI enhance people’s support for online moderation and their openness to dissimilar political views? Journal of Computer-Mediated Communication, 26(4), 223–243. https://doi.org/10.1093/jcmc/zmab006
  • Wonseok, J., Woo, K. Y., & Yeonheung, K. (2021). Who made the decisions: Human or robot umpires? The effects of anthropomorphism on perceptions toward robot umpires. Telematics and Informatics, 64, 101695. https://doi.org/10.1016/j.tele.2021.101695
  • Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018, April). A qualitative exploration of perceptions of algorithmic fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14). ACM. https://doi.org/10.1145/3173574.3174230
  • Yalcin, G., Lim, S., Puntoni, S., & van Osselaer, S. M. (2022). Thumbs up or down: Consumer reactions to decisions by algorithms versus humans. Journal of Marketing Research, 59(4), 696–717. https://doi.org/10.1177/00222437211070016
  • Yang, S. J. H., Ogata, H., Matsui, T., & Chen, N.-S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, 100008. https://doi.org/10.1016/j.caeai.2021.100008
  • Zahedi, Z., Sengupta, S., & Kambhampati, S. (2020). ‘Why didn’t you allocate this task to them?’ Negotiation-aware task allocation and contrastive explanation generation. arXiv preprint arXiv:2002.01640
  • Zhang, L., & Yencha, C. (2022). Examining perceptions towards hiring algorithms. Technology in Society, 68(C), 101848. https://doi.org/10.1016/j.techsoc.2021.101848
  • Zhang, P., Nah, F. F. H., & Preece, J. (2004). Guest editorial: HCI studies in management information systems. Behaviour & Information Technology, 23(3), 147–151. https://doi.org/10.1080/01449290410001669905
  • Zhou, J., Verma, S., Mittal, M., & Chen, F. (2021). Understanding relations between perception of fairness and trust in algorithmic decision making. In 2021 8th International Conference on Behavioral and Social Computing (BESC) (pp. 1–5). IEEE. https://doi.org/10.1109/BESC53957.2021.9635182
