Fairness Perceptions of Artificial Intelligence: A Review and Path Forward

Pages 4-23 | Received 15 Aug 2022, Accepted 02 May 2023, Published online: 26 May 2023

Abstract

A key insight from research on organizational justice is that fairness is in the eye of the beholder. With increasing discussions – especially among computer scientists and policymakers – about the potential biases and unfairness of decisions made by Artificial Intelligence (AI) systems, there is a critical need to consider how decision-subjects perceive the fairness of AI-led decision-making. Drawing upon theoretical and empirical perspectives on perceived fairness in organizational justice scholarship, this review categorizes and analyzes perceptions of AI fairness as they impact the effective implementation of AI in workplaces and beyond. Specifically, we review existing empirical research on AI fairness according to distinct dimensions of perceived fairness – distributive, procedural, interpersonal, and informational – with a focus on its potential to inform organizational decision-making. In doing so, we provide new insights and offer directions for future interdisciplinary research in this burgeoning field.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 In what follows, we use the term “decision-subject” to refer to those who are affected by AI-augmented decision-making, both (a) within organizations (e.g., employees), and (b) outside organizations (e.g., existing or potential customers). During our literature review process, we attempted to distinguish between the two groups, but found no noteworthy differences in their fairness perceptions of AI-augmented decision-making, and as such, we did not press this distinction in our write-up. Our thanks to an anonymous reviewer for pointing out the need for this clarification.

2 Importantly, organizational justice scholars have been relatively slow to attend to issues of perceived AI fairness in their research, despite being especially well-placed to do so (cf. De Cremer, Citation2020; Robert et al., Citation2020). As such, by demonstrating the close interconnections between research in organizational justice and AI fairness, we hope that our paper will inspire organizational scholars to better engage with existing scholarship on this topic – especially in the field of human-computer interaction – and in turn, to employ their disciplinary expertise in further advancing scholarship on perceived AI fairness.

3 Integrative reviews, in general, aim at “[reviewing], [critiquing] and [synthesizing] representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated” (Torraco, Citation2005). Such reviews are often useful for analyzing new, emerging topics – to generate initial and/or preliminary conceptualizations, and to combine perspectives from different research traditions (Snyder, Citation2019). As such, this method appears especially well-suited for reviewing the emerging interdisciplinary topic of perceived AI fairness.

4 We use the term ‘organizational justice’ to refer to a specific sub-field of organizational scholarship focused on understanding how justice and fairness are perceived by organizational actors. This field has thus far largely treated ‘justice’ and ‘fairness’ as equivalent concepts (cf. Colquitt & Rodell, Citation2015; Greenberg, Citation2002). However, recent scholarship has argued that, while related, justice and fairness are distinct (Costanza-Chock, Citation2020; Green, Citation2022; Kasirzadeh, Citation2022; Le Bui & Noble, Citation2020). For the sake of conceptual clarity, throughout this paper, we employ the term ‘[organizational] justice’ to refer to the specific field of scholarship and its associated concepts, and use the term ‘fairness’ in all other cases.

5 Another important reason why, in our view, Colquitt’s framework is well-suited to the purposes of our review pertains to the interdisciplinary nature of organizational justice scholarship. Empirical research on how decision-subjects perceive the fairness of AI systems is conducted across a variety of disciplinary traditions with their unique perspectives and theoretical assumptions – psychology, HCI, organizational studies, etc. – and this diversity of perspectives is, in our view, an important reason why the field appears to be currently thriving. As such, it seems unproductive to organize our literature review in ways that reify these disciplinary boundaries: by, for example, sorting our papers according to field, or by using a framework that unduly privileges contributions from one field over all others. Colquitt’s framework, given its general focus on categorizing the different types of fairness perceptions from the point of view of the decision-subject, retains a broad applicability, and therefore seems especially appropriate for the purposes of our review. Our thanks to an anonymous reviewer for raising this important point.

6 This recently-published paper was not part of our original review corpus, but was instead added later during the peer review process. We thank the anonymous reviewer who brought this paper to our attention.

7 As other recent reviewers of this literature have also observed (cf. van Berkel et al., Citation2023), it is surprising to note that several extant studies do not report, in sufficient detail, the exact demographic make-up and positionality of their participants. This lack of detail makes it difficult for researchers to fully evaluate the state of current literature, and to identify critical gaps for future research. As such, we echo van Berkel et al.’s call for researchers working on this topic to carefully “report details on recruitment strategy, including compensation and recruitment source (e.g., students, crowdworkers), as well as demographic factors of the participant sample (e.g., location, age distribution)” in their studies (p. 11).

Additional information

Funding

The first and fourth authors of this paper were supported by a research project grant from the National University of Singapore (NUS) Centre for Trusted Internet and Community (Grant Number: CTIC-RP-20-06) awarded to the last author.

Notes on contributors

Devesh Narayanan

Devesh Narayanan is a research assistant at the NUS Centre on AI Technology for Humankind. His research concerns the normative-theoretical and behavioural underpinnings of popular calls for “ethical”, “trustworthy”, and “human-centered” AI. He holds an M.A. in Philosophy and B.Eng. in Mechanical Engineering, both from the National University of Singapore.

Mahak Nagpal

Mahak Nagpal is a Postdoctoral Fellow at the Centre on AI Technology for Humankind, National University of Singapore (NUS) Business School. Broadly, her research considers ethical perspectives related to human-AI interaction in the workplace. She received her Ph.D. in Organization Management from Rutgers Business School.

Jack McGuire

Jack McGuire is a PhD candidate in the Department of Management & Organisation at NUS Business School. Prior to this, he was the Experimental Lab Manager of the Cambridge Experimental and Behavioural Economics Group (CEBEG) at Judge Business School, University of Cambridge.

Shane Schweitzer

Shane Schweitzer is a postdoctoral research associate in the Centre for Trusted Internet and Community and the Centre on AI Technology for Humankind at National University of Singapore. His current research concerns perceptions of advanced, humanlike technologies.

David De Cremer

David De Cremer is a Provost’s Chair and Professor of Management and Organization at the NUS Business School, and the Director of the Centre on AI Technology for Humankind. Before moving to NUS, he was the KPMG Endowed Professor of Management Studies at the Judge Business School, Cambridge University.
