Editorial

AI Fairness in Action: A Human-Computer Perspective on AI Fairness in Organizations and Society

Pages 1-3 | Received 14 Oct 2023, Accepted 17 Oct 2023, Published online: 25 Oct 2023

Artificial intelligence (AI) systems are being increasingly adopted by society, governments, and organizations in various decision-making contexts. For example, organizations use AI systems to decide whether applicants can be considered for a job, whether bonuses and other rewards should be allocated, or whether promotions and further training need to be invested in. In fact, as AI is seen as an important catalyst of economic growth, organizations today seem to know no boundaries in their AI adoption efforts, making employees and society more dependent on, and thus also more vulnerable to, the decisions made by or in partnership with AI (De Cremer, in press). It is therefore no surprise that the people affected by AI decisions have legitimate concerns about whether AI systems are employed fairly and will lead to favorable outcomes. Accordingly, concerns about fairness with respect to AI participation in decision-making are of critical importance today (Kellogg et al., Citation2020).

Violations of fairness in organizations, and in society more broadly, elicit negative reactions from people, as unfair acts and decisions are often biased, unclear, and disrespectful (Colquitt, Citation2001). Unfair decision-making is costly for organizations as it can lead them to discriminate against marginalized groups, miss opportunities for identifying talent, and instill sentiments of disappointment and resistance, leading to significant economic costs for both the organization and society. It is therefore an important task for companies and society alike to address biases in decision-making and to invest significantly in efforts to promote fair decision-making. For example, a study by Stanford University (Walsh, Citation2020) revealed that at least “25% of growth in U.S. GDP between 1960 and 2010 could be attributed to greater gender and racial balance in the workplace”. These findings strongly indicate that, with AI positioned as a main driver of future economic activity, paying attention to fairness in our organizations is a necessity if we want to achieve positive outcomes for society.

So, how can we promote AI fairness? The predominant approach is based on the belief that fairness is increased by making technical improvements to the AI models themselves (cf., Chouldechova & Roth, Citation2018; Corbett-Davies et al., Citation2017; del Barrio et al., Citation2020; Green & Chen, Citation2019). Much effort, for instance, has been spent on assessing whether algorithms are trained on biased data, and how such datasets might be ‘debiased’. Another common approach is to improve how algorithmic models are designed by, for instance, encoding pre-defined mathematical definitions of fairness. A key problem with this approach, however, lies in its assumption that people are perfectly rational and, as such, will always view optimally designed AI models as fair. This rationalist view of AI fairness remains widely accepted by practitioners and scholars across disciplines.
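To make concrete what ‘encoding a pre-defined mathematical definition of fairness’ typically means in this technical tradition, the sketch below computes one commonly discussed metric, the demographic parity gap (the difference in favorable-outcome rates across groups). It is a minimal illustration only: the metric choice, function name, and data are our own assumptions for exposition and are not drawn from any of the works cited above.

```python
# Illustrative sketch (hypothetical names and data): demographic parity compares
# the rate of favorable decisions across groups; a gap of 0 would satisfy this
# particular pre-defined definition of fairness.
from typing import Sequence


def demographic_parity_gap(decisions: Sequence[int], groups: Sequence[str]) -> float:
    """Difference between the highest and lowest favorable-decision rates across groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return max(rates.values()) - min(rates.values())


# Hypothetical hiring decisions (1 = shortlisted) for applicants from two groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

As the discussion that follows makes clear, satisfying such a metric does not by itself guarantee that the people affected will perceive the resulting decisions as fair.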

By contrast, relatively little attention thus far has been placed on the subjective nature of fairness. As social scientists have argued for decades, fairness is in the eye of the beholder and as such, subjective factors – inherent to the irrational nature of humans – color what is seen as fair or not when an AI system decides (De Cremer, Citation2020). As a matter of fact, this subjective nature of fairness perceptions highlights an important point: fairness is a social construct. What people perceive to be fair – whether it is a decision made by a human or an AI – follows from how humans perceive, interpret, and interact with reality.

AI systems are, however, incapable of extending due consideration and relational care to this human tendency – instead, as noted, they can only rely on quantifiable and often reductionistic metrics (cf. Binns et al., Citation2018). When we focus excessively on fairness as a designed feature of AI, we fail to appreciate what fairness looks like ‘in action’: as something that is routinely experienced and perceived by the people who employ and interact with AI systems.

As fairness is acutely subjective in nature, and it is those subjective experiences that determine behavioral outcomes, it follows that the use of AI systems in decision-making raises important concerns about the fairness of processes, treatment, and outcomes (De Cremer, Citation2020). Therefore, we need to situate the psychology of how humans perceive fairness into the realm of algorithmic decision-making. Such an effort requires an integrative approach to better understand how we can examine AI fairness, what its meaning is, its impact, and ultimately its position in the development, design, and use of intelligent machines at work. This approach can, for example, help organizations to adopt AI in ways that foster greater fairness perceptions among employees, which in turn will promote the willingness of the workforce to work and experiment with AI (De Cremer, in press).

To this end, we argue that two important omissions from the literature are: (a) a conceptual foundation that situates research on the perceptual dimensions of AI fairness in contrast with the predominant rational approach that omits subjective factors, and (b) an integrative overview of empirical research on the fairness perceptions of those who use and are affected by AI systems in our society and organizations. In this special issue, we aim to fill these voids by presenting theoretical and empirical work that contributes to our understanding of what AI fairness encapsulates when taking human perceptions and behavior into account, and of the potential mechanisms and boundary conditions governing these effects.

1. This special issue

Below, we discuss the papers that appear in this special issue on AI fairness in action. These papers explore the human-computer connection, where humans and computers co-exist, to promote a better understanding of AI fairness and of how these insights can improve the fairness of algorithms – in terms of how they are perceived, implemented, and used.

The first paper, by Narayanan, Nagpal, McGuire, Schweitzer, and De Cremer, adds to the literature by presenting an integrative review of how the literature on fairness perceptions, considered in tandem with the use of AI, can help us better understand the position of AI in work and social contexts (e.g., where to, and where not to, use AI in the loop with humans). Specifically, by categorizing existing empirical literature on perceptions of AI fairness along four dimensions – distributive, procedural, interpersonal, and informational – Narayanan and colleagues propose a multi-level framework for furthering research on AI fairness in ways that remain sensitive to the acutely subjective nature of fairness.

The second paper, by Cratsley and Fast, questions whether the mere act of creating an AI system may positively bias people in their assessments of how fair that system is. This paper explores the novel phenomenon of the “inventor’s bias,” which refers to the propensity for inventors to be overly optimistic about the positive features of their creations. In a series of experiments, the authors find that, relative to other stakeholders, people adopting the role of inventor of an underperforming, objectively unfair AI system reported greater perceived fairness of the system and a greater desire for their company to continue using it. These assessments were driven by identification with the system: inventors were biased because their creations became part of their own sense of self. These results carry important implications for how subjective factors of the designers of AI systems may contribute to the development and implementation of unfair systems.

The third paper, by De Schutter and De Cremer, zooms in on how algorithms can help leaders and businesses make fair decisions in the case of ethical dilemmas. In dilemma situations, a range of options can be considered, and deciding which ones are fair requires exploring and imagining what would happen under alternative conditions. Such situations thus invite counterfactual thinking, which AI advisors do not yet handle very well. In addition, if counterfactuals are pre-programmed, they are limited because the psychological perception and reality of counterfactuals are not accounted for. To remedy this, the authors introduce fairness theory, which focuses on the psychology of “would”, “could”, and “should” counterfactuals in arriving at the decision of whether an option is fair or not. Employing these insights, they identify key counterfactual scenarios (i.e., what-if scenarios) that programmers can incorporate in their AI advisor development process to arrive at more sophisticated decisions that are fair. They conclude that counterfactual modelling can lead to more nuanced AI recommendations and prompt leaders to consider counterfactual scenarios that algorithms do not model.

The fourth paper, by Efendić, Van de Calseyde, Bahník, and Vranka, examines how those who solicit advice from algorithms are viewed by others. People are turning more and more to algorithms for guidance. However, how does seeking guidance from an algorithm, compared to a human, influence others’ perceptions of the seeker’s intentions? Across five studies, Efendić and colleagues observed that when individuals seek advice from algorithms, observers often assume that the algorithm’s primary objective is also the advice seeker’s main goal. This narrows down the perceived reasons for the seeker’s actions, overshadowing other potential motivations. On the other hand, when advice comes from humans, observers consider a wider range of motives. As a result, the perceived objectives of those seeking advice, such as fairness, profit-driven motives, and altruism, vary based on whether they consulted an algorithm or a human. This variation stems partly from the distinct expectations people hold about the data that algorithms and humans use for recommendations. These findings are significant for understanding the subjective nature of how fairness perceptions are formed of those who take advice from algorithms.

The fifth paper, by Dolata and Schwabe, presents the notion that society is undergoing an important change in terms of how perceptions of fairness are constructed, with algorithms playing a major role in determining those notions of fairness. To demonstrate this socio-algorithmic construction of fairness, the authors analyzed online discourse surrounding the price surge implemented by ride-hailing companies following the Brooklyn Subway Shooting in New York City in April 2022. Specifically, they looked at comments, posts and tweets on Twitter and Reddit about the fairness and justifiability of the price surge. The controversy surrounding the price surge brought to light the fact that the autonomous decisions made by ride-hailing algorithms, even if not explicitly addressed in online discourse, strongly impacted notions and assessments of fairness. This demonstrates a meaningful change in the way that notions of fairness are constructed, from merely social to socio-algorithmic. The authors also note that this demonstrates a novel distribution of moral responsibility between the various social and technological entities that are part of the construction of fairness perceptions. Importantly, this paper also highlights the challenges associated with algorithmically determined pricing strategies.

The final paper, by Katsaros, Kim, and Tyler, draws on procedural justice models from legal theory to explore pathways for encouraging end-users to play an active role in the self-governance of online platforms. Specifically, they review existing empirical literature on user experiences of procedural justice on online platforms, with a focus on understanding how end-users’ expectations of having voice, receiving explanations, and being treated with respect come into conflict with extant AI-based content-moderation techniques used on these platforms. In so doing, they offer suggestions for embedding the necessary antecedents of procedural justice into the design and implementation of algorithmic content-moderation efforts on platforms. Only when content moderation is perceived as procedurally just, the authors argue, does it become possible to encourage users to take personal responsibility for following rules and to participate actively in the self-governance of platforms.

2. Conclusion

The papers in this special issue help to demonstrate that AI fairness is largely a function of the interactive effect between the opportunities that AI brings to the table and the way human end-users accommodate and collaborate with AI over time, which in turn shapes their fairness perceptions (cf. De Cremer & Kasparov, Citation2022). This insight has important implications for how organizations and societies at large will need to approach the adoption and implementation of AI to ensure that its use is perceived as fair by end-users. We hope that future research will take stock of the theoretical insights and empirical demonstrations contained within this issue and use them as a basis from which to further understand the promotion of fair adoption, implementation, and use of AI.

David De Cremer, Jack McGuire, and Shane Schweitzer
D’Amore-McKim School of Business, Northeastern University, Boston, MA, USA
[email protected]

Devesh Narayanan
Department of Management Science and Engineering, Stanford University, Stanford, CA, USA

Mahak Nagpal
Opus College of Business, University of St. Thomas, St Paul, MN, USA

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

David De Cremer

David De Cremer is the Dunton Family Dean and Professor of Management & Technology at the D’Amore-McKim School of Business, Northeastern University. He is a former Provost Chair at the National University of Singapore, the KPMG endowed professor at Cambridge University, and founder of the Centre on AI Technology for Humankind in Singapore.

Devesh Narayanan

Devesh Narayanan is a PhD student in Management Science and Engineering at Stanford University. His research concerns the normative, relational, and behavioral underpinnings of popular calls for “ethical”, “trustworthy”, and “human-centered” AI. He holds an M.A. in Philosophy and B.Eng. in Mechanical Engineering, both from the National University of Singapore.

Mahak Nagpal

Mahak Nagpal is an Assistant Professor of Ethics and Business Law at the Opus College of Business, University of St. Thomas. Broadly, her research considers ethical perspectives related to human-AI interaction in the workplace. She received her Ph.D. in Organization Management from Rutgers Business School.

Jack McGuire

Jack McGuire is a Post-doctoral Research Associate at the D’Amore McKim School of Business, Northeastern University. His research examines the psychological consequences of artificial intelligence and its increasing application in the workplace. He received his Ph.D. in Management & Organization from the NUS Business School.

Shane Schweitzer

Shane Schweitzer is an Assistant Professor of Management and Organizational Development at the D’Amore McKim School of Business at Northeastern University. His current research concerns perceptions of advanced, humanlike technologies.

References

  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14).
  • Chouldechova, A., & Roth, A. (2018). The frontiers of fairness in machine learning. arXiv Preprint arXiv:1810.08810.
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. The Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). https://doi.org/10.1145/3097983.3098095
  • De Cremer, D. (2020, November). What does building a fair AI really entail? Harvard Business Review. https://hbr.org/2020/09/what-does-building-a-fair-ai-really-entail
  • De Cremer, D. (in press). The AI-savvy leader: 9 ways to take back control and make AI work. Harvard Business Review Press.
  • De Cremer, D., & Kasparov, G. (2022). The ethical AI—paradox: Why better technology needs more and not less human responsibility. AI and Ethics, 2(1), 1–4. https://doi.org/10.1007/s43681-021-00075-y
  • del Barrio, E., Gordaliza, P., & Loubes, J. M. (2020). Review of mathematical frameworks for fairness in machine learning. arXiv Preprint arXiv:2005.13755.
  • Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT* 2019) (pp. 90–99). https://doi.org/10.1145/3287560.3287563
  • Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174
  • Walsh, D. (2020). Is workplace equality the economy’s hidden engine? Insights by Stanford Business. https://www.gsb.stanford.edu/insights/workplace-equality-economys-hidden-engine
