Literature Reviews

Algorithmic bias: review, synthesis, and future research directions

Pages 388-409 | Received 25 Apr 2020, Accepted 29 Apr 2021, Published online: 06 Jun 2021

ABSTRACT

As firms are moving towards data-driven decision making, they are facing an emerging problem, namely, algorithmic bias. Accordingly, algorithmic systems can yield socially-biased outcomes, thereby compounding inequalities in the workplace and in society. This paper reviews, summarises, and synthesises the current literature related to algorithmic bias and makes recommendations for future information systems research. Our literature analysis shows that most studies have conceptually discussed the ethical, legal, and design implications of algorithmic bias, whereas only a limited number have empirically examined them. Moreover, the mechanisms through which technology-driven biases translate into decisions and behaviours have been largely overlooked. Based on the reviewed papers and drawing on theories such as the stimulus-organism-response theory and organisational justice theory, we identify and explicate eight important theoretical concepts and develop a research model depicting the relations between those concepts. The model proposes that algorithmic bias can affect fairness perceptions and technology-related behaviours such as machine-generated recommendation acceptance, algorithm appreciation, and system adoption. The model also proposes that contextual dimensions (i.e., individual, task, technology, organisational, and environmental) can influence the perceptual and behavioural manifestations of algorithmic bias. These propositions highlight the significant gap in the literature and provide a roadmap for future studies.

1. Introduction

Firms are increasingly adopting data analytics, big data technologies, and artificial intelligence (AI) to transform and improve their key operations and organisational decision making (Chen et al., Citation2012; Mikalef et al., Citation2018). According to the International Data Corporation, the Western European market for big data and analytics software reached USD 14.6 billion in 2018; this market is projected to grow at a five-year compound annual growth rate of 8.2% (Wells & Spinoni, Citation2019). A survey of financial institutions conducted in 2018 in the United States (US) also showed that all 200 participating institutions used some form of machine-learning (ML) technology and that 91% of the largest banks used deep-learning algorithms to make data-driven decisions (PYMNTS, Citation2018). Specifically, algorithmic systems are used to automate decision-making processes or assist human decision making by providing decision makers with algorithmically generated information such as results of classification (e.g., high-risk versus low-risk borrowers) and predictive analysis (e.g., credit default risk) along with the consequent recommendations (e.g., granting a loan to an applicant).

Despite the enormous value that AI and analytics technologies offer, their black-box algorithms may pose ethical risks at different levels of organisations and society (Martin, Citation2019; Someh et al., Citation2019). A major ethical concern is that AI algorithms that promise to enhance the accuracy and effectiveness of organisational decisions are likely to replicate and reinforce the social biases that exist in society (O’neil, Citation2016). Accordingly, algorithmic processes that are used to automate or assist decision making about people may produce discriminatory results that violate the norms of justice and equality and that adversely impact particular people or communities in the workplace or society. This phenomenon is referred to as algorithmic bias and occurs when the outputs of an algorithm benefit or disadvantage certain individuals or groups more than others without a justified reason for such unequal impacts. While computational scientists have developed mathematical techniques to detect and mitigate biases in algorithms, Information Systems (IS) researchers have largely fallen behind in addressing the behavioural, organisational, and social implications, antecedents, and consequences of this issue (Someh et al., Citation2019). This study addresses this gap by conducting an extensive literature review and proposing a theoretical model.

Algorithmic bias could lead to misinformed decisions and negative consequences for individuals, organisations, and society. Individual consequences include customers paying higher prices than usual and occupational inequality affecting minorities in the workplace. Organisational effects involve violating equal opportunity policies, creating an unethical climate, high employee turnover, and high customer churn rates due to algorithmic discrimination and dissatisfaction. Societal-level effects include an increased wealth gap between historically disadvantaged groups and others. For example, the algorithm used by Apple Card for credit limit decisions has been accused of giving far lower credit limits to women than to their spouses, even when the wife had a higher credit score (Thorbecke, Citation2019). Also, Amazon, where men hold around 75 percent of the firm’s managerial positions, stopped using algorithmic systems for making recruitment decisions after discovering gender bias in those systems (Hamilton, Citation2018). Regulators and lawmakers have taken some steps to combat analytics-driven bias issues. For example, the European Union’s General Data Protection Regulation (GDPR) and the US’s Future of AI Act of 2017 impose restrictions on data processing and AI-powered business practices, hoping to promote algorithmic accountability and reduce algorithmic bias (Domanski, Citation2019; N. Lee, Citation2018b). Nonetheless, given the complexity of organisational decision-making ecosystems, it is not clear whether these policies and legal instruments are sufficient to mitigate biases in algorithmic practices. Moreover, we do not yet know how interactions between individuals and algorithmic systems can shape, trigger, or prevent data-driven biases in organisational decision making.

IS research can play a pivotal role in unpacking this phenomenon because algorithmic bias is a socio-technical construct that involves technology and human actors (Gal et al., Citation2017; Ransbotham et al., Citation2016; Wong, Citation2019). Users of algorithmic systems interpret algorithmic outputs based on a myriad of factors including personal prejudices, transparency of algorithmic processes, and organisational rules and policies (Silva & Kenney, Citation2019). Accordingly, users can accept or reject the recommendations generated by the machine in contexts such as hiring, pricing, lending, and criminal sentencing. Perceptions associated with algorithmic bias can also drive people to continue or discontinue using an algorithm or analytics tool. Understanding how algorithmic bias affects user behaviours and decisions can help mitigate it more effectively. Accordingly, this study reviews the literature that has addressed this phenomenon from a socio-behavioural perspective to understand what themes of research have emerged in the past decade and what socio-technical, behavioural, and organisational aspects can be further examined by IS researchers.

A small number of systematic literature reviews related to algorithmic bias have been conducted and published recently (Appendix 1), but they have limitations. First, their scope is either too broad, covering areas such as data science ethics (Saltz & Dewar, Citation2019) and algorithmic accountability (Wieringa, Citation2020), or too narrow, focusing on AI for employee management (Robert et al., Citation2020) or facial analysis systems (Khalil et al., Citation2020). The broad reviews did not capture the nuances of algorithmic decision making and the behavioural consequences of algorithmic bias, whereas the narrow reviews did not provide conceptualisations and explanations that are applicable across decision contexts. Second, their target audiences are mainly design science researchers and practitioners (Favaretto et al., Citation2019; Robert et al., Citation2020; Saltz & Dewar, Citation2019). Given the social implications and consequences of algorithmic bias, there is also a critical need to understand and guide the behaviours of algorithmic system users. Third, most of the existing reviews are not theory-grounded (Favaretto et al., Citation2019; Khalil et al., Citation2020; Saltz & Dewar, Citation2019), and the ones that have adopted theories (Robert et al., Citation2020; Wieringa, Citation2020) have not developed a theoretical model to synthesise the current research and direct future studies. This is a crucial issue because theory is needed to guide behavioural research. This paper addresses these shortcomings and aims to enhance understanding of algorithmic bias, provide theoretical insights into this phenomenon, and direct IS research in this area.

This study provides several major contributions to research and practice. First, we inductively identify seven research themes and use them to classify and summarise the current literature related to algorithmic bias. Drawing on the results of the reviewed papers along with relevant theories including the stimulus-organism-response theory (Mehrabian & Russell, Citation1974), the contextual factors framework (Petter et al., Citation2013), and the organisational justice theory (Colquitt & Rodell, Citation2015), we extract eight important and theoretically relevant concepts. These concepts, along with their definitions and characterisations, inform future research in the areas of algorithmic bias and data-driven decision making. Based on the identified concepts, the reviewed papers, and the underpinning theories, we develop a theoretical model involving seven novel propositions that offer valuable insights into the behavioural consequences of algorithmic bias. The propositions theorise that biases that emerge in algorithm outputs can influence user behaviours through perceived fairness and that these relations are moderated by various contextual factors. The propositions cover relations that have been discussed in the reviewed papers conceptually or in a limited empirical way as well as relations that have not yet been addressed in those studies. Hence, the theoretical model illuminates the critical research gaps and potentials for future IS research. From a practical perspective, the model developed in this study, along with the results of future studies built on it, can help AI and analytics practitioners and stakeholders, including managers, business analysts, and policymakers, mitigate biases in data-driven technologies, procedures, and decisions, which will ultimately foster diversity, equity, and inclusion.

The organisation of the paper is as follows. First, an overview of the notion of algorithmic bias is provided and the processes through which it develops are explained. Second, the research method and data analysis procedure are presented. Third, the results of the thematic literature review, including the seven research themes, are explained. Fourth, the eight theoretical concepts are described. Fifth, a theoretical model that involves the seven propositions is presented. Finally, the paper concludes by discussing the findings, contributions, and limitations of our research.

2. Background

Algorithmic bias is a socio-technical phenomenon (Favaretto et al., Citation2019). Its social aspect comprises the biases that have long existed in society affecting certain groups such as underprivileged and marginalised communities, whereas its technical facet involves the manifestation of social biases in algorithms’ outcomes. Social bias in non-algorithmic contexts pertains to being against or in favour of individuals or groups based on their social identities such as race, gender, and nationality (Fiske, Citation1998). Social biases encompass stereotypes, prejudices, and discrimination (Fiske, Citation1998; N. Lee, Citation2018b). Stereotypes are over-generalised beliefs about specific categories of people (Fiske et al., Citation2002). For instance, “women are not good at managerial positions” is a stereotype that has long put female candidates at a disadvantage for executive positions (Bauer, Citation2015). Prejudice is an attitude or feeling towards a social group and its members without just grounds and sufficient evidence (Fiske, Citation1998). Racism, for instance, may be used to justify the belief that specific racial categories are superior or inferior to others (Swim et al., Citation1995). Discrimination consists of biased actions against a group of people in contexts such as hiring and criminal sentencing (Fiske, Citation1998).

Social biases may be integrated into algorithms at different stages of algorithmic operations (Domanski, Citation2019). If an algorithm’s input (i.e., training dataset) is contaminated with the social biases that exist in an organisation or society, those biases may emerge in the output of the algorithm. For example, in the loan approval process, a risk analysis algorithm may be trained using a dataset that reflects disproportionately higher rejection rates for female applicants without relevant reasons such as differences in credit scores or late payment history when compared to male applicants. The algorithm will then assume that gender is a relevant factor in assessing credit risks. Consequently, the algorithm will replicate and, worse, reinforce the gender bias that might have previously led to biased loan decisions (Fuster et al., Citation2018). Incomplete, unrepresentative, and poorly selected input data may also lead to algorithmic bias (Favaretto et al., Citation2019). A talent analytics system may unfairly recommend male candidates over equally-qualified female candidates for executive positions because females have not historically had the opportunity to show their potential in those positions and hence, they are not well represented in the training dataset (Gal et al., Citation2017).
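
To make this mechanism concrete, the following sketch (written for illustration; the data, variable names, and coefficients are entirely hypothetical and not drawn from the reviewed studies) trains a simple classifier on synthetic loan data in which historical approvals were skewed against female applicants with the same credit scores. The fitted model assigns a clearly non-zero weight to gender, replicating the bias embedded in its training labels.

```python
# Illustrative only: synthetic loan data in which historical approval labels are
# biased against one gender independently of credit score, so a model trained on
# those labels learns gender as a "risk" signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)           # 0 = male, 1 = female (protected attribute)
credit_score = rng.normal(650, 50, n)    # the only legitimate predictor in this toy setup

# Historical approvals: driven by credit score, but female applicants were rejected
# more often at the same score (the embedded social bias).
logit = 0.02 * (credit_score - 650) - 1.0 * gender
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([credit_score, gender])
model = LogisticRegression().fit(X, approved)

print("weight on credit score:", round(model.coef_[0][0], 3))
print("weight on gender:      ", round(model.coef_[0][1], 3))  # negative: the bias is replicated
```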

Algorithm design in terms of the choice of features, their weights, and objective functions may also be responsible for biased algorithmic outputs (Favaretto et al., Citation2019; Silva & Kenney, Citation2019). Using features that involve socially-sensitive attributes (e.g., gender and race) or their proxies (e.g., geographic location for race), if not adjusted properly, may lead to biased outcomes. Moreover, objective functions in optimisation models may maximise utility or minimise costs at the expense of injecting social biases into algorithmic practices and outcomes. For instance, research shows that algorithm-delivered ads promoting job opportunities in the science, technology, engineering, and mathematics fields tend to be displayed to men more frequently than to women simply because the algorithm’s objective is to optimise the advertising costs (Lambrecht & Tucker, Citation2019). Thus, if showing ads to women on job posting platforms is costlier than presenting the same ads to men, gender-based discrimination emerges on those platforms.

Biased algorithm outputs resulting from faulty inputs or model characteristics, if fed back into the algorithm for tuning purposes, may reinforce the existing biases in the system (Silva & Kenney, Citation2019; Williams et al., Citation2018). For instance, the predictive crime-mapping tools that are used to algorithmically aid police allocation across a city to prevent crime in hotspot areas may involve bias-reinforcing loops (Babuta & Oswald, Citation2019; Ensign et al., Citation2018; O’neil, Citation2016). The reason is that the historical data that are fed into the algorithm include the crime incidents reported by police officers; thus, the more police officers patrol in a neighbourhood, the more likely it is that incidents are discovered in the neighbourhood. Higher numbers of reported incidents lead to dispatching more police officers to an area, triggering a feedback loop in the police allocation system (Ensign et al., Citation2018). Unfortunately, this issue primarily affects areas with high poverty rates, which are also home to significant numbers of underprivileged populations and racial minorities (O’neil, Citation2016). Consequently, algorithmic biases, coupled with potential systemic biases targeting specific racial groups, may synergistically fuel racial injustice in society.
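
The feedback loop can be illustrated with a toy simulation, which is our own simplification of the dynamics described by Ensign et al. (Citation2018) and uses invented numbers: two areas have identical true crime rates, patrols are allocated in proportion to previously recorded incidents, and more patrols record more incidents, so a tiny initial gap in the records keeps widening.

```python
# Illustrative only: a growing recording gap between two areas with identical true
# crime rates, driven purely by allocating patrols based on past records.
true_crime_rate = {"A": 100, "B": 100}   # identical underlying incidents per period
recorded = {"A": 11, "B": 10}            # a one-incident difference in the initial records
patrols_total = 20

for period in range(30):
    total = recorded["A"] + recorded["B"]
    share = {area: recorded[area] / total for area in recorded}   # allocation rule
    for area in recorded:
        patrols = patrols_total * share[area]
        newly_recorded = min(true_crime_rate[area], int(patrols * 8))  # each patrol records up to 8 incidents
        recorded[area] += newly_recorded

print(recorded)  # area A's recorded count pulls further ahead of area B's every period
```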

Prior studies in the areas of computer science and data science have proposed a variety of mathematical strategies to mitigate algorithmic biases at the pre-processing, in-processing, and post-processing phases of data analytics pipelines (Favaretto et al., Citation2019; Haas, Citation2019; Shrestha & Yang, Citation2019). Pre-processing mechanisms are used to debias training datasets before feeding them into learning algorithms. In-processing techniques involve adding terms known as regularisers to algorithms to penalise them for biased outcomes. Post-processing methods focus on recalibrating algorithmic outputs to enhance equality across social groups. Such computational procedures, however, are not necessarily adequate for addressing biases in data-driven decisions, as they do not capture the social, behavioural, and organisational aspects of algorithmic bias (Favaretto et al., Citation2019; Wong, Citation2019). Those aspects could amplify or help reduce the effects of this phenomenon on human perceptions, decision making, and actions. For instance, a recruiter with implicit prejudices against a social group may use the biased outputs of an algorithm to justify an unfair decision. Conversely, an ethical recruiter may recognise the biases in a talent analytics system and stop using it as it is perceived to be unfair, unreliable, and socially harmful. Given the socio-technical nature of biases in algorithms, it is crucial to understand how social processes and contextual factors impact the adoption and use of biased information and algorithmic technologies. Accordingly, the main goal of this paper is to enhance understanding of the current state of research in this area and theoretically direct future organisational and behavioural studies in the IS domain.
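
As a concrete, deliberately simplified example of the pre-processing strategy, the sketch below implements instance reweighing: each training example is weighted so that, after weighting, the protected attribute and the outcome label are statistically independent. This is an illustration written for this review rather than a method proposed in the cited papers, and it assumes that every (group, label) combination is present in the data.

```python
# Illustrative pre-processing step: reweigh training instances so that the protected
# attribute and the label are independent in the weighted data.
import numpy as np

def reweigh(protected, label):
    """Return one weight per instance; assumes every (group, label) cell is non-empty."""
    protected, label = np.asarray(protected), np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for a in np.unique(protected):
        for y in np.unique(label):
            cell = (protected == a) & (label == y)
            expected = (protected == a).mean() * (label == y).mean()  # P(A=a) * P(Y=y)
            observed = cell.mean()                                    # P(A=a, Y=y)
            weights[cell] = expected / observed
    return weights

# Toy data in which positive outcomes are skewed towards group 0.
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(np.round(reweigh(protected, label), 2))
# Under-represented cells (e.g., group 1 with a positive label) receive weights above 1;
# the weights can then be passed to most learners through a sample_weight argument.
```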

3. Methodology

We employed a well-established systematic review methodology to analyse and synthesise the existing literature on algorithmic bias and propose a theoretical model accordingly (Mikalef et al., Citation2018; Webster & Watson, Citation2002). The first stage involved identifying potentially relevant articles. To ensure complete coverage, we used a range of databases including Elsevier’s Scopus, IEEE Xplore Digital Library, Web of Science, ACM, ABI/INFORM, EBSCO, JSTOR, and ScienceDirect to conduct our search. We retrieved peer-reviewed articles published in scholarly journals and conference proceedings during the ten-year period of 2010–2019. We selected this period because algorithmic bias is an emerging concept that has gained attention in the industry and academia only in the past few years (Robert et al., Citation2020). Moreover, the results of the papers that were published more than a decade ago were based on the analytics technologies available at that time, possibly making those results less applicable today (Saltz & Dewar, Citation2019). We also included papers published in the proceedings of the International Conference on Information Systems (ICIS) (2010–2019) because this major IS conference offers mini-tracks on AI and fairness; therefore, its papers could be relevant and insightful. Furthermore, we searched the Social Science Research Network (SSRN) website, which hosts papers that are not generally peer-reviewed but have the potential to make significant theoretical and practical impacts. Finally, to broaden the search and find more articles for consideration, we reviewed the citations of all the identified articles.

Our search terms included “algorithmic bias”, “algorithmic fairness”, “analytics bias”, “analytics fairness”, “AI bias”, and “AI fairness”. Although this study mainly focused on the implications of algorithmic bias in the realm of IS, we felt that excluding relevant research published in non-IS outlets would negatively impact the completeness of the results, particularly because of the multi-faceted nature of biases in intelligent systems. Hence, we retrieved all papers with themes related to technical, social, behavioural, organisational, and ethical aspects of algorithmic bias. In this stage, we also removed duplicates across databases and other sources including the SSRN and ICIS proceedings. This resulted in a total of 227 articles in the dataset.

The next stage involved screening the articles’ titles, keywords, and abstracts to exclude publications that were conceptually or contextually irrelevant. An article was considered relevant if 1) it mainly focused on the notion of algorithmic bias and 2) its conceptualisation was consistent with the socio-technical definition, which focuses on social biases integrated into algorithms that may lead to discriminatory outcomes in firms and society (Domanski, Citation2019). This definition has been widely used in the IS, computer science, data science, and law literature. For example, an article that used the phrase “algorithmic bias” only in its list of keywords did not meet the first requirement. Also, an article that investigated the role of bias in algorithmic prediction of soil absorption did not meet the second criterion. As part of the exclusion procedure, we removed abstracts, panel reports, posters, and doctoral consortium proposals from the list of papers, as they did not provide major insights. As a result, 146 articles remained in the dataset. Next, we removed the highly technical papers from the analysis. Those papers primarily focused on developing mathematical solutions for bias detection and mitigation in AI models. They provided computational proofs or tested their proposed techniques using hypothetical or real-world datasets to demonstrate the effectiveness of their bias-aware ML techniques. These topics, however, were beyond the scope of this study. The final dataset consisted of 56 papers. The inclusion and exclusion criteria are summarised in Figure 1.

Figure 1. Stages of the article selection process.

In the thematic analysis phase, the two researchers independently and inductively analysed the 56 non-technical papers in terms of the main theme of their content. Next, the two researchers met to compare and combine their coding outcomes. In most cases, the researchers agreed on the theme of content associated with the papers (e.g., legal aspects). In some cases, the wording of the theme codes was different between the two researchers (e.g., ethical versus moral); however, the codes were still semantically consistent. There were also a few cases in which the two researchers used different theme codes for a paper (e.g., ethical versus socio-technical). The research team discussed their coding rationale to refine their results and achieve a consensus on using a theme code that better represented the goals, directions, and outcomes of each paper. During this process, several content themes were merged (e.g., ethical, social, and philosophical considerations) to keep the final structure representative yet parsimonious. In the end, seven themes of content were identified and used to label the 56 papers.

After thematically analysing the reviewed literature, the two researchers theorised algorithmic bias and its behavioural effects. Corley and Gioia (Citation2011) define theory as “a statement of concepts and their interrelationships that shows how and/or why a phenomenon occurs” (p. 12). Accordingly, the two researchers collaboratively extracted eight relevant and important theoretical concepts from the reviewed studies. Next, based on the results of the reviewed studies and other supporting literature, a theoretical model including seven proposed relations was developed.

4. Results

This section presents the results including a summary of the seven identified research themes followed by some reflections on the results.

4.1. Thematic summary of the literature

Overview and taxonomies: This research theme introduces the notion of bias in algorithmic systems, provides anecdotal evidence, and explains the sources, consequences, and mitigation strategies in a range of domains from employment and people analytics (Gal et al., Citation2017) and healthcare (Shaw et al., Citation2019) to education (Williams et al., Citation2018), credit markets (Fuster et al., Citation2018), and criminal justice (Mulligan et al., Citation2019). These papers collectively warn about the detrimental effects of biases in algorithms and call for more empirical research from social and socio-technical perspectives to illuminate different aspects of this emerging threat (Ransbotham et al., Citation2016; Shrestha & Yang, Citation2019; Silva & Kenney, Citation2019).

Ethical, social, and philosophical considerations: The papers in this category argue that algorithmic bias is essentially a social matter; thus, the ethical implications and social consequences of this phenomenon should be at the centre of examining and addressing it. Accordingly, ethical principles (Coates & Martin, Citation2019), design standards (Koene, Citation2017; Koene et al., Citation2018), and codes of conduct (Katyal, Citation2019; Loi et al., Citation2019) should guide firms in private and public sectors to ensure algorithmic accountability (Wong, Citation2019), avoid unjustified and unintended algorithmic discrimination (Koene, Citation2017), enhance equity (Huq, Citation2018), and protect civil rights (Katyal, Citation2019).

Legal and regulatory implications: Several studies discuss the affordances and limitations of laws and regulatory approaches in addressing biases in algorithmic systems. Accordingly, a combination of anti-discrimination and data protection laws should be used to enforce unbiased and accurate data processing and algorithmic decision making (Barocas & Selbst, Citation2016; Domanski, Citation2019; Hacker, Citation2018). For instance, GDPR and similar regulations that include right to explanation provisions allow individuals to demand an explanation for the output of an algorithm and exercise their rights to contest automated and machine-assisted decisions (Almada, Citation2019; Garcia, Citation2016). While some researchers cast doubt on the effectiveness of these legal instruments (Edwards & Veale, Citation2017), others assert that the existing regulatory mechanisms can hold organisations and decision-makers accountable and facilitate protecting human rights, freedom, and legitimate interests (Domanski, Citation2019; Kroll et al., Citation2016).

Socio-technical design: These studies have mainly addressed the role of socio-technical design principles in identifying and combating biases in algorithmic systems. They propose that user-centred approaches (Rantavuo, Citation2019) and inter-disciplinary discourses (Draude et al., Citation2019) rooted in social and technical domains are required to address the issue of algorithmic bias holistically. For instance, setting design objectives that are in line with not only business goals, but also long-term wellbeing of people can help reduce biases in algorithmic systems (Rantavuo, Citation2019). Furthermore, socio-technical features such as algorithmic transparency in the forms of explainability and auditability can improve users’ trust in and experience with algorithmic systems (Ebrahimi & Hassanein, Citation2019; Lysaght et al., Citation2019; Springer & Whittaker, Citation2019).

Concerns, perceptions, and needs: Several studies have conducted exploratory analyses to understand people’s concerns, judgements, and needs associated with biased algorithmic decisions and their consequences in society. The results have consistently confirmed that people from affected communities (Brown et al., Citation2019; Woodruff et al., Citation2018), young individuals (Perez Vallejos et al., Citation2017), and experts from academia, government, and industry (Webb et al., Citation2018) are concerned about the use of biased algorithmic systems. Moreover, analytics practitioners including ML developers and product managers call for technical, procedural, and managerial support to be able to address biases in algorithms (Holstein et al., Citation2019; Veale et al., Citation2018).

Antecedents of fairness perceptions: The papers in this category examine perceptions associated with algorithmic bias to understand how social and technical factors influence individuals’ judgements of fairness in data-driven systems. Accordingly, the technical components of algorithms including their internal processes (Grgić-Hlača et al., Citation2018) and outcomes (Lee et al., Citation2017; Saxena et al., Citation2019) as well as the explanation styles employed to make algorithmic processes and outcomes transparent and interpretable to users and stakeholders (Binns et al., Citation2018; Dodge et al., Citation2019) can shape fairness perceptions about algorithmic systems. Moreover, individual characteristics (Dodge et al., Citation2019; Lee et al., Citation2017) and task types (Binns et al., Citation2018; Saxena et al., Citation2019) determine whether an algorithm is judged as fair.

Impacts of machine advice on decisions: Two studies investigated the factors that could influence machine-assisted human decisions. The results showed that in the context of face recognition, people’s assessments of face attributes in terms of age and beauty were significantly influenced by the outputs of an AI application (Rhue, Citation2019). In contrast, in the realm of recidivism risk assessment, receiving machine advice regarding recidivism risks had only a small effect on individuals’ predictions of a defendant’s risk and that effect was skewed towards predicting no recidivism (Grgić-Hlača et al., Citation2019). Hence, contextual factors can change the perceptual and behavioural effects of biases in data-driven decision making.

4.2. Some reflections and the next steps

The thematic analysis results collectively revealed that the majority of the non-technical work on algorithmic bias has been conceptual and that most of the empirical studies have mainly aimed to acknowledge people’s concerns and stakeholders’ needs in handling algorithmic bias. A few studies have attempted to identify causal links between theoretical constructs to understand the mechanisms through which biased AI and analytics systems can influence individuals’ perceptions and decisions. Some of those studies, however, have reported contradictory results regarding effective explanation styles (Binns et al., Citation2018; Dodge et al., Citation2019) and impacts of machine advice on decisions (Grgić-Hlača et al., Citation2019; Rhue, Citation2019), highlighting the need for additional research to better understand the behavioural implications and consequences of algorithmic bias. To this end, we use the reviewed literature to identify and elaborate on the relevant theoretical concepts and build a theory-driven research agenda.

5. Important theoretical concepts from the literature

The results of the thematic analysis demonstrated that algorithmic bias has been approached in prior studies in two major ways. Some studies have viewed it from an objective perspective and offered one or more statistical metrics to measure it (Shrestha & Yang, Citation2019; Verma & Rubin, Citation2018). Others have focused on perceived fairness, representing the human judgements associated with biases in algorithms (M. Lee, Citation2018a; Woodruff et al., Citation2018). Moreover, Ebrahimi and Hassanein (Citation2019) argue that the ultimate behavioural outcome of biases in algorithms is accepting or rejecting a machine-generated recommendation. Such recommendations could impact human assessments and predictions (Rhue, Citation2019). Hence, we considered algorithmic bias (quantified), perceived fairness, and recommendation acceptance as three relevant theoretical concepts derived from the literature. To strengthen the theoretical underpinnings of these three concepts and develop the relations between them robustly, we drew on the stimulus-organism-response theory (Mehrabian & Russell, Citation1974). This theory contends that environmental stimuli influence individuals’ internal (psychological) states, which in turn lead to behavioural responses (Mehrabian & Russell, Citation1974). This three-stage theory has been employed in the extant literature to explain people’s decisions, intentions, and actions in contexts such as online shopping (Shen & Khalifa, Citation2012). In the realm of algorithmic decision making, the stimulus involves quantitatively-assessed algorithmic bias (Shrestha & Yang, Citation2019). The organism includes perceived fairness (Robert et al., Citation2020). The response involves behavioural reactions to algorithmic bias such as recommendation acceptance (or rejection) (Ebrahimi & Hassanein, Citation2019).

Furthermore, several of the reviewed papers have noted that contextual factors ranging from policies, regulations, and standards (Barocas & Selbst, Citation2016; Domanski, Citation2019) to task characteristics (Barlas et al., Citation2019) and algorithmic technology characteristics (Binns et al., Citation2018; Lee, Jain et al., Citation2019) could change the way algorithmic bias leads to behavioural responses. For example, Lee et al. (Citation2017) suggest that fairness perceptions of algorithms vary based on tasks and cultures as well as stakeholders’ personal beliefs and organisational philosophies. To categorise the influential contextual variables and to theorise their impacts on fairness perceptions and behavioural responses, we adapted a contextual factors framework established in the IS literature (Petter et al., Citation2013). Accordingly, individual characteristics, task characteristics, project characteristics, organisational characteristics, and social or environmental characteristics are the key contextual factors in the area of IS success. We only replaced project characteristics with technology characteristics because technological affordances of data-driven systems such as transparency, auditability, and control are more relevant than factors like IT project management skills in the space of algorithmic bias. Using technology characteristics along with organisational and environmental characteristics among the contextual factors is also consistent with the technology-organisation-environment framework that is widely adopted in the IS literature (Picoto et al., Citation2014).

In summary, we identified eight major theoretical concepts. The following section presents a detailed description of the concepts including their definitions, operationalisations, extension potentials, and theoretical relevance.

5.1. Algorithmic bias

As discussed earlier, the notion of algorithmic bias has its roots in social phenomena such as discrimination, unfairness, and social injustice (Williams et al., Citation2018). Algorithmic bias is therefore a social concept with definitions that vary across social systems and philosophical paradigms (Binns, Citation2018; Danks & London, Citation2017; Saxena et al., Citation2019; Shrestha & Yang, Citation2019). For instance, according to egalitarian doctrines, people should be treated equally, particularly with respect to social, political, and economic affairs, for a social system to be considered just and unbiased (Binns, Citation2018). However, there is a debate about what should be equalised (Wong, Citation2019). Some philosophers propose that the ultimate goal of justice in society is to equalise benefits and burdens, where benefits include welfare (i.e., pleasure or preference satisfaction), resources (i.e., income and assets), and capabilities (i.e., the ability and resources necessary to accomplish tasks). Based on this approach, algorithmic bias appears when an algorithm distributes benefits and burdens unequally among different individuals or groups. Other philosophers believe that inequalities in welfare, resources, and capabilities are acceptable as long as they result from people’s free choices and informed risk taking, not from their inherent characteristics, talents, or luck (Binns, Citation2018). Accordingly, an algorithm is considered biased if it distributes benefits and burdens unequally and the unequal distribution is due to differences in individuals’ inherent characteristics, talents, or luck. John Rawls, a prominent moral and political philosopher, argues that in a fair social system, individuals’ prospects for success in the pursuit of jobs and other social positions should depend on their native abilities and willingness to cultivate those abilities rather than their social class or background; nonetheless, providing differential benefits to underprivileged groups is morally justifiable (Rawls, Citation2001). According to Rawls’s conception of justice, it is acceptable for an algorithm to disproportionately benefit underprivileged communities and this may not be considered a case of algorithmic bias. In summary, when algorithmic bias is conceptualised and measured based on a philosophical paradigm, depending on the paradigm’s definition of equality and justice, algorithmic bias may be formulated differently (Lee et al., Citation2017). In the same vein, bias conceptions may vary across religions, cultures, organisations, and legal systems (Danks & London, Citation2017; Saxena et al., Citation2019).

Despite the differences in the definitions of algorithmic bias due to variations in the philosophical, legal, and social perspectives towards justice, all definitions have two major aspects in common: 1) a deviation from an equality principle emerges in the outputs of a biased algorithmic system, and 2) the deviation occurs systematically and repeatably, not randomly. Therefore, an overarching definition that we propose for algorithmic bias is a systematic deviation from equality that emerges in the outputs of an algorithm. Consistent with this definition, a range of objective metrics have been used in the literature to quantify and statistically measure biases in algorithms (Verma & Rubin, Citation2018). These metrics generally assume that the algorithm’s (and decision maker’s) goal is to maximise accuracy subject to specific constraints that promote equality and justice.

Bias metrics are categorised into individual-level and group-level metrics (Haas, Citation2019). Individual-level metrics are used to ensure that people who have similar qualifications with respect to a task receive similar outcomes (Lee, Jain et al., Citation2019). This requires that similarity between individuals in the context of a specific task be defined objectively and measured accurately. Group-level metrics aim to ensure that algorithmic outcomes do not disproportionately and negatively affect particular groups. A set of group-level metrics have been proposed in the literature. Each metric can be used to achieve specific equality objectives. Using a loan application example, Tables 1 and 2 present a set of equality objectives and metrics that can be used to assess algorithmic bias. The true positive, true negative, false positive, and false negative values in Table 1 are associated with the number of cases out of a pool of loan applicants from each gender that are correctly (i.e., true) or incorrectly (i.e., false) predicted to default (i.e., negative outcome) or not to default (i.e., positive outcome).

Table 1. Confusion matrix for risk assessment and loan application decisions, adapted from Verma and Rubin (Citation2018)

Table 2. A non-exhaustive list of equality objectives and corresponding metrics and constraints, adapted from Shrestha and Yang (Citation2019) and Verma and Rubin (Citation2018)

As indicated in Table 2, demographic parity, also referred to as statistical parity, denotes that the algorithmic outcome (i.e., default prediction) should be independent of irrelevant characteristics including protected attributes (i.e., gender) to be considered unbiased (N. Lee, Citation2018b). Protected attributes are traits or characteristics that, by law, may not be used as the basis for decisions (Dodge et al., Citation2019). Demographic parity aims to ensure that different groups, including disadvantaged (females) and advantaged (males) groups, receive a positive outcome (i.e., loan) at an equal rate. In other words, in the case of loan applications, the percentage of the male applicants who receive the loan should be equal to the percentage of the female applicants who get the loan. Predictive parity requires that the precision or positive predictive values be equal or very close (based on a pre-determined threshold) for different groups (i.e., males and females). Error rate balance requires that the model’s inaccurate predictions do not disproportionately affect a group (i.e., females). The equalised odds metric requires that different groups (i.e., males and females) be equally likely 1) to be correctly classified as low-risk when they are actually low-risk and 2) to be incorrectly classified as low-risk when they are actually high-risk. The first part of the equalised odds conditions (i.e., balanced true positive rates) is also known as equal opportunity.
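
For illustration, these group-level metrics can be computed directly from per-group confusion-matrix counts. The sketch below uses invented numbers for the loan example (a "positive" prediction means the applicant is predicted not to default and receives the loan); in this particular example the two groups have similar positive predictive values but very different selection and error rates, so whether the algorithm is flagged as biased depends on which equality objective is chosen.

```python
# Illustrative group-level bias metrics computed from hypothetical per-group counts.
def rates(tp, fp, tn, fn):
    return {
        "selection_rate": (tp + fp) / (tp + fp + tn + fn),  # compared under demographic parity
        "ppv":            tp / (tp + fp),                    # compared under predictive parity
        "fpr":            fp / (fp + tn),                    # compared under error rate balance / equalised odds
        "fnr":            fn / (fn + tp),                    # compared under error rate balance
        "tpr":            tp / (tp + fn),                    # compared under equal opportunity
    }

males   = rates(tp=400, fp=100, tn=300, fn=200)
females = rates(tp=150, fp=30, tn=500, fn=320)

for metric in males:
    print(f"{metric:>14}: males={males[metric]:.2f}  females={females[metric]:.2f}  "
          f"gap={abs(males[metric] - females[metric]):.2f}")
# A large gap on the chosen metric would be flagged as algorithmic bias under the
# corresponding equality objective.
```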

These equality objectives and bias metrics, however, are not universally applicable and their implications vary across domains. In the context of criminal justice, including sentencing and policing, error rate balance may be more suitable, whereas equality in hiring and promotion processes may be measured in terms of demographic parity or equal opportunity (Binns, Citation2018; Chouldechova, Citation2017). Furthermore, different stakeholders may prioritise equality objectives differently. In the loan application example, the financial institution’s objective could be predictive parity and balanced error rates, particularly the false positive rate, because granting loans to high-risk individuals is inefficient and financially harmful to the institution. In contrast, loan applicants may view equality mainly in terms of equalised odds and equal opportunity. Lawmakers and social activists who promote equality and inclusion in society may emphasise demographic parity (N. Lee, Citation2018b). Nonetheless, these objectives could be conflicting and it may be impossible to achieve them all at the same time (Shrestha & Yang, Citation2019; Wong, Citation2019). For example, if an algorithm satisfies the predictive parity condition but the actual percentages of high-risk males and females (i.e., prevalence) are different, it may not be mathematically possible to achieve equal false positive rates and equal false negative rates between males and females (Chouldechova, Citation2017). Hence, choosing an appropriate metric is a context-dependent decision and, in many cases, a political challenge (Wong, Citation2019). However, regardless of which metric is used, algorithmic bias can be coded as a binary variable (i.e., biased or not biased), as an ordinal variable with more than two possible values (i.e., different levels of bias based on pre-determined thresholds), or as a continuous variable (i.e., different degrees of bias on a continuum) (Shrestha & Yang, Citation2019).
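
A sketch of the algebra behind this incompatibility, following Chouldechova (Citation2017), is given below; with the positive class defined consistently across the quantities, the three measures are linked through a group's prevalence.

```latex
% Let p denote a group's prevalence (its share of actual positives), PPV the positive
% predictive value, FPR the false positive rate, and FNR the false negative rate.
% Rearranging the definitions of these quantities yields the identity
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \bigl(1-\mathrm{FNR}\bigr).
\]
% If two groups have equal PPV (predictive parity) and equal FNR but different
% prevalence p, the right-hand side necessarily differs between them, so their
% false positive rates cannot also be equal.
```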

5.2. Perceived fairness

Perceived fairness refers to the extent to which an algorithm is perceived to be fair (Lee et al., Citation2017; Saxena et al., Citation2019; Woodruff et al., Citation2018). Prior studies have highlighted the applicability of the organisational justice theory in conceptualising perceived fairness in algorithmic ecosystems (Binns et al., Citation2018; Lee, Jain et al., Citation2019; Robert et al., Citation2020). According to the organisational justice theory, from an employee’s point of view, justice “reflects the degree to which one’s company or top management is perceived to act consistently, equitably, respectfully, and truthfully in decision contexts” (Colquitt & Rodell, Citation2015). Unlike philosophical paradigms that take a prescriptive approach to ascertain what is objectively right or wrong, organisational justice frameworks tend to take a descriptive approach to characterise the subjective nature of justice in organisations. Organisational justice has been conceptualised as a multi-dimensional construct. Two of the major components of justice in organisational, non-algorithmic settings are distributive justice and procedural justice (Greenberg, Citation1990). Distributive justice pertains to the perceived fairness of decision outcomes, whereas procedural justice refers to the perceived fairness of the processes that lead to decision outcomes (McFarlin & Sweeney, Citation1992). More specifically, distributive justice is fostered if decision outcomes are in line with allocation norms such as equality, equity (i.e., contribution-based allocation), and need-based allocation of benefits and harms (Colquitt, Citation2001; Colquitt & Rodell, Citation2015). Procedural justice is fostered if decision-making processes adhere to fair process criteria including consistency, accuracy, ethicality, and representativeness (Colquitt & Rodell, Citation2015).

In the realm of algorithmic decision making, we use the term fairness instead of justice to keep the terminology consistent with the extant literature (Robert et al., Citation2020) and to distinguish between algorithmic bias, as a computationally-measured construct, and perceived fairness, as a subjective construct. Distributive fairness denotes whether people perceive the outcomes of an algorithm or algorithmic decision as being fair, whereas procedural fairness conceptualises fairness perceptions with respect to internal processes of data-driven systems such as the rules and logics incorporated into algorithms (Binns et al., Citation2018; Grgić-Hlača et al., Citation2018; Robert et al., Citation2020). These two aspects of fairness in algorithmic systems are distinct yet interrelated as people may perceive identical outcomes as being fair or unfair depending on the algorithmic processes that have led to those outcomes (Lee, Jain et al., Citation2019).

In algorithmic contexts, distributive fairness can be operationalised at the individual and group levels. At the individual level, distributive fairness denotes whether an algorithmic outcome associated with two or more individuals is perceived to be fair. In the context of food donation and distribution, Lee et al. (Citation2017) found that some people viewed fairness as efficiency (i.e., least costly decision), some conceptualised it as equity (i.e., merit-based or need-based resource allocation), and others perceived it as equality (i.e., allocating resources to individuals equally regardless of their circumstances and efficiency of allocation decisions). In another study, Saxena et al. (Citation2019) measured the perceived fairness of different loan decisions that were experimentally made for two loan applicants with different repayment rates and from different races. At the group level, fairness perceptions can be assessed with respect to the outcomes of an algorithm for different social groups such as females and males (Ebrahimi & Hassanein, Citation2019). Perceptions about whether the outcomes of an immigration algorithm have affected people from different countries or racial groups fairly can be considered a group-level, distributive fairness perception (Heaven, Citation2020; Kroll et al., Citation2016).

Procedural fairness can be operationalised in terms of judgements corresponding to algorithmic processes concerning features and their weights. For example, in criminal risk assessment, using variables such as family, social circle, and neighbourhood that are correlated with race may be objectionable and perceived to be unfair (Binns, Citation2018). Grgić-Hlača et al. (Citation2018) operationalised fairness in terms of perceptions related to the latent properties of the features (i.e., variables) used in an algorithm such as relevance, privacy, and volitionality. Relevance denotes whether the feature is perceived to be relevant to the decision-making situation. Privacy refers to whether the feature is perceived to be reliant on privacy-sensitive information. Volitionality indicates the extent to which the feature is perceived to reflect people’s own will, not luck or other unchosen circumstances. Perceived consistency or parity of treatment is another feature property that can be considered a component of procedural fairness (Brown et al., Citation2019).

5.3. Behavioural responses

Behavioural responses have been primarily defined as the acceptance of machine-generated advice in algorithmic decision making (Ebrahimi & Hassanein, Citation2019; Grgić-Hlača et al., Citation2019). For example, in the US, judges use the COMPAS tool to help them decide whether to grant an individual bail. The tool generates recidivism risk scores indicating the defendant’s risk of committing another crime before being tried. The extent to which a judge accepts the machine-generated advice is considered to be the judge’s behavioural response to that advice. Similarly, the recommendations put forth by talent analytics systems regarding hiring, promotion, and firing decisions can be accepted or rejected by decision makers, representing a binary operationalisation of information acceptance (Ebrahimi & Hassanein, Citation2019; Gal et al., Citation2017).

Behavioural responses can also be defined in terms of algorithm aversion or appreciation. The concept of avoiding algorithms after learning that they are imperfect is referred to as algorithm aversion (Dietvorst et al., Citation2018). In other words, preferring a human agent over an algorithm to complete a task is considered algorithm aversion (Jussupow et al., Citation2020). The opposite of algorithm aversion is algorithm appreciation, which pertains to positive attitudes and behaviours towards an algorithm. Although the notions of algorithm aversion and algorithm appreciation were not explicitly mentioned in the papers reviewed in this study, their results revealed differences between fairness perceptions towards algorithms and human agents in decision-making tasks (Barlas et al., Citation2019; M. Lee, Citation2018a). Hence, we consider algorithm appreciation as a behavioural response in our model.

In line with the IS research that has long highlighted the importance of understanding technology adoption intentions and behaviours (Venkatesh et al., Citation2012), we also include algorithmic system adoption as a behavioural response in the model. An algorithmic system is composed of not only one or more algorithms as its computational engine, but also elements such as user interfaces and interactive features (Lee, Jain et al., Citation2019). Accordingly, one may appreciate an algorithm but not adopt the algorithmic system for long-term use. Similarly, a manager may adopt a data analytics system for decision making but not necessarily accept all the recommendations generated by the algorithm due to task-specific and other contextual factors. Thus, we consider algorithmic recommendation acceptance, algorithm appreciation, and algorithmic system adoption as three dimensions of behavioural responses.

5.4. Individual characteristics

Individual characteristics include attitudes, beliefs, and socio-demographic characteristics that are relevant in the context of an algorithmic decision. Individuals in this context refer to the users of data analytics systems including business analysts, managers, and other organisational decision makers. Attitudes and beliefs involve prejudicial tendencies and stereotypes that people may hold about specific social groups and their members (N. Lee, Citation2018b). Moral identity, defined as one’s sense of morality and moral values, is a relevant factor that has not been discussed in the algorithmic bias literature but can be impactful in algorithmic decision making (Blasi et al., Citation1994). This construct is relevant mainly because algorithmic bias is formulated with respect to moral values and ethical standards; therefore, people’s moral beliefs may set a baseline for evaluating biases in algorithms (Ebrahimi & Hassanein, Citation2019; Yapo & Weiss, Citation2018). Individual characteristics may also be related to technology. For example, the propensity to trust in technology, including AI and analytics, can shape people’s perceptions and behaviours regarding algorithmic biases (Dodge et al., Citation2019; Glikson & Woolley, Citation2020; M. Lee, Citation2018a; Mcknight et al., Citation2011; Rossi, Citation2019; Springer & Whittaker, Citation2019). Many people believe that algorithms are fairer than humans, whereas others believe that algorithms lack human intuition (Barlas et al., Citation2019; Lee et al., Citation2017). Accordingly, propensity to trust in technology can play a major role in algorithmic decision making.

5.5. Task characteristics

Task characteristics refer to the features relevant to a specific task (Ghasemaghaei & Hassanein, Citation2019). In general, tasks are activities that support a firm, and technologies are used to augment the completion of firm tasks (Petter et al., Citation2013). For example, algorithms are used to make or aid low-impact and high-impact decisions about data subjects. Consistent with the GDPR provisions, if decisions made through the use of algorithms do not have any binding impact on data subjects and do not deprive them of their legitimate rights, the decisions are considered low-impact (Brkan, Citation2019). For instance, recommending an entertainment product for purchase and displaying product ads to web users are among low-impact tasks. However, if the decisions made using algorithms impact individuals’ rights in contexts such as hiring, credit approval, and criminal sentencing, the decisions are more sensitive because they have high impacts on individuals’ lives. Tasks may also be categorised based on whether they require mechanical skills or human skills (M. Lee, Citation2018a). Predicting risk in stock prices or luggage screening may involve mechanical skills, whereas predicting a job applicant’s success or a student’s performance in college may need more human judgements (M. Lee, Citation2018a). Another relevant characteristic of algorithmic tasks is the type of data subjects, that is, the people about whom decisions are made. Disadvantaged and advantaged populations are two major types of data subjects that are commonly distinguished in the studies related to social biases and algorithms. Biases against these two types of people could have different implications for decision makers, organisations, and society. In some contexts, treating disadvantaged groups favourably may not even be considered a case of social bias (Rawls, Citation2001). Types of data subjects, however, are not limited to advantaged and disadvantaged groups. Socially powerful rivals such as religious, political, and ethnic groups in multinational states (including countries in Eastern Europe and Africa) may also exhibit discriminatory practices against each other. Accordingly, algorithms may replicate such biases and, depending on the social power and other characteristics of the rival groups, users’ perceptions of and reactions to algorithmic bias may vary.

5.6. Technology characteristics

Technology characteristics in the realm of algorithmic systems pertain to the socio-technical and interactive features provided through user interfaces, which can enhance the usability of those systems and enable users to detect biases and react to them accordingly (Binns et al., Citation2018; Dodge et al., Citation2019; Veale et al., Citation2018). Algorithmic transparency in the form of machine-generated explanations about the processes and outcomes of an algorithm is one of the prominent algorithmic technology characteristics (Binns et al., Citation2018; Dodge et al., Citation2019; Rader et al., Citation2018; Springer & Whittaker, Citation2019). Transparency, also known as interpretability, can be measured as a binary (non-transparent versus transparent) or continuous variable (Lee, Jain et al., Citation2019), or as a categorical variable representing different explanation styles. For example, Binns et al. (Citation2018) and Dodge et al. (Citation2019) operationalised transparency in terms of four explanation styles including input influence (i.e., relative influence of each variable on a decision), sensitivity (i.e., amount of change in a variable needed for the decision to change accordingly), case-based (i.e., providing examples to justify the decisions), and demographic (i.e., presenting aggregate outcome statistics for each demographic class). The notion of transparency could also be extended to providing users with ground truth or bias measures to enable them to assess the biases integrated into an algorithm more objectively (Ebrahimi & Hassanein, Citation2019; Grgić-Hlača et al., Citation2019).
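
As an illustration of the input influence style, the sketch below reports how much each feature of a hypothetical linear scoring model pushed a single applicant's score up or down relative to an average applicant; all feature names, weights, and values are invented for this example.

```python
# Illustrative "input influence" explanation for a linear scoring model:
# contribution of each feature = weight * (applicant's value - average value).
weights   = {"credit_score": 0.02, "income": 0.00001, "late_payments": -0.4, "gender_female": -0.8}
average   = {"credit_score": 650,  "income": 55_000,  "late_payments": 1,    "gender_female": 0.5}
applicant = {"credit_score": 700,  "income": 48_000,  "late_payments": 0,    "gender_female": 1}

contributions = {f: weights[f] * (applicant[f] - average[f]) for f in weights}

for feature, influence in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature:>14}: {influence:+.2f}")
# A sizeable negative contribution attached to a protected attribute (here, gender_female)
# makes the bias visible to the user, which is the purpose of this explanation style.
```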

User control and auditability are other relevant technology characteristics in AI applications and data analytics systems (Sandvig et al., Citation2014; Springer & Whittaker, Citation2019). User control denotes the degree to which an analytical system provides individuals with the ability to influence decisions or algorithm outcomes (i.e., outcome control) and the processes underlying those outcomes (i.e., process control). Outcome control emphasises users’ rights and ability to reject algorithmic outcomes, whereas process control revolves around taking part in selecting input data and influencing algorithms’ rules and logics (Lee, Jain et al., Citation2019). Auditability refers to algorithm diagnosis functionalities embedded into intelligent systems that provide users and third parties with practical mechanisms to experimentally assess biases in algorithms (Lee, Kusbit et al., Citation2019; Springer & Whittaker, Citation2019).

5.7. Organisational characteristics

Organisational characteristics refer to the organisational policies, norms, rules, and standards that directly or indirectly impact the use of algorithmic technologies and data-driven decision making (Petter et al., Citation2013). For example, Wong (Citation2019) explains that decisions on selecting bias metrics and trade-offs between equality and algorithmic performance are, in fact, a political matter in an institution because they involve competing values. Therefore, algorithmic accountability policies should account for organisational politics and require algorithmic decision making to be public, reasonable, and revisable. Publicity fosters a balance between algorithm-driven equality and other social values. Reasonableness potentially makes algorithmic decisions acceptable to those who are most adversely affected. Revisability promotes the establishment of dispute-resolution mechanisms in the process of implementing and using algorithmic systems. Organisational characteristics also involve bias governance frameworks that are developed based on ethical principles and best practices and that encompass technical and organisational aspects of algorithmic biases in a firm (Coates & Martin, Citation2019).

Ethical design standards that can improve algorithmic accountability and help avoid unintended and unjustified algorithm-fuelled discrimination are also among relevant organisational characteristics (Kroll et al., Citation2016). For instance, the IEEE P7003 Standard for Algorithmic Bias Considerations, which has been developed as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, aims to promote equality and fairness (Koene, Citation2017; Koene et al., Citation2018). Similarly, the code of ethics developed within the Swiss Alliance for Data-Intensive Services induces firms to make fairer data-driven decisions (Loi et al., Citation2019).

5.8. Environmental characteristics

Environmental characteristics are generally the factors that characterise the external organisational environment, including a broad normative framework of laws, customs, and assumptions that exist in society (Arrighetti et al., Citation1997). Anti-discrimination laws, data protection acts, and algorithmic accountability regulations are among the relevant legal instruments in the context of algorithmic decision making (Barocas & Selbst, Citation2016; Domanski, Citation2019). The anti-discrimination laws include legislation that aims to legally prevent organisations from making decisions about individuals on the basis of protected attributes such as age, gender, race, and religion (Hacker, Citation2018; Huq, Citation2018). The data protection acts and regulations include the GDPR, the Future of AI Act of 2017, and the California Consumer Privacy Act of 2018, which contain provisions to foster algorithm-driven fairness (Domanski, Citation2019; N. Lee, Citation2018b). The algorithmic accountability regulations aim to hold developers and companies using algorithmic systems responsible for the results of those systems (Domanski, Citation2019; Kroll et al., Citation2016; Lysaght et al., Citation2019). One regulatory approach to enforcing accountability is to penalise private corporations for ethically-challenged algorithmic outcomes (Domanski, Citation2019). Another approach is to require companies to make their algorithm design and outcomes transparent, auditable to third parties, and explainable to individuals impacted by algorithmic decisions (Ajunwa, Citation2020; Domanski, Citation2019; Rossi, Citation2019).

While prior studies have addressed the roles of legal instruments in mitigating algorithmic bias, less attention has been paid to non-regulatory environmental factors such as social norms and culture. Social norms include the unwritten rules of behaviour that are considered acceptable in a group or society and guide people’s actions (Dickinger et al., Citation2008). Similarly, cultural values are the core principles and ideals that a community is built upon (Schwartz, Citation1999). In the algorithmic decision-making space, relevant and applicable social norms and cultural values involve the fairness conceptions, attitudes, and behaviours that are deemed normal and acceptable in society.

6. Theoretical model and research agenda

The eight theoretical concepts explained in the previous section and the relations among them are shown in Figure 2. While some of the relations proposed in the model have been conceptually discussed or partially examined before, other relations have not been addressed in prior studies. This section explains the seven propositions and future research directions. Table 3 summarises the propositions, including the derived and new relations, and highlights important gaps and opportunities for future studies.

Table 3. Derived, extrapolated, and new relations proposed in this study

Figure 2. Theoretical model.


Research has shown that an algorithm’s outputs and process characteristics impact fairness perceptions associated with the algorithm. Lee et al. (Citation2017) found that people value equitable and equal distribution of resources (e.g., donations). Accordingly, any allocation mechanism that violates equity and equality principles could be judged as unfair. For example, in line with the notion of demographic parity, if the proportion of people from each social group who receive a benefit is not equal across all groups, the algorithm that has led to this decision may be perceived as unfair. In the context of loan decisions, Saxena et al. (Citation2019) reported that people viewed ratio-based lending (i.e., splitting the money between the two candidates in proportion to their abilities to pay back their loans) as a fair method for allocating loans. This merit-based mechanism implies that any two loan applicants with the same payback rates should be treated equally regardless of their socio-demographic characteristics such as gender and race. This is consistent with the equality objectives in algorithmic systems such as equality of opportunity and predictive rate parity. Hence, an algorithm whose outcome is biased according to one of these metrics will likely be perceived as unfair; however, the strength of this relation may vary based on which metric is used. We propose:

Proposition 1: Algorithmic bias negatively influences perceived fairness.
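
To illustrate how the outcome-based bias metrics referenced above (demographic parity, equality of opportunity, and predictive rate parity) can be operationalised, the following minimal sketch computes them for a small set of synthetic lending decisions; the groups, repayment outcomes, and approval decisions are fabricated solely for illustration.

```python
import numpy as np

# Synthetic lending data for two groups (illustration only).
# y_true: actual repayment (1 = repaid); y_pred: the algorithm's approval decision.
group  = np.array(["A"] * 6 + ["B"] * 6)
y_true = np.array([1, 1, 0, 1, 0, 1,   1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1,   1, 0, 0, 0, 0, 1])

def rates(g):
    mask = group == g
    approved = y_pred[mask] == 1
    approval_rate = approved.mean()                        # demographic parity: P(approved | group)
    tpr = (y_pred[mask][y_true[mask] == 1] == 1).mean()    # equality of opportunity: P(approved | repaid, group)
    ppv = (y_true[mask][approved] == 1).mean()             # predictive rate parity: P(repaid | approved, group)
    return approval_rate, tpr, ppv

for g in ("A", "B"):
    ar, tpr, ppv = rates(g)
    print(f"Group {g}: approval rate = {ar:.2f}, TPR = {tpr:.2f}, precision = {ppv:.2f}")
```

In this toy example, the two groups have identical precision but different approval rates and true positive rates, so the same set of decisions can appear unbiased under one metric and biased under another; this is consistent with the argument that the strength of the relation in Proposition 1 may depend on the metric used.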

People believe that the use of algorithmic systems in companies should accord with moral values and ethical principles (Woodruff et al., Citation2018). Thus, if users of an algorithm realise that its outcomes or the logics incorporated into it are unfair, they likely consider courses of action to neutralise the unethical effects of the algorithm (Grgić-Hlača et al., Citation2019; M. Lee, Citation2018a; Yapo & Weiss, Citation2018). These courses of action or behavioural responses include refraining from accepting the algorithmic recommendation (Ebrahimi & Hassanein, Citation2019; Grgić-Hlača et al., Citation2019), engaging in algorithm aversion (Dietvorst et al., Citation2018), and refusing to adopt an algorithmic system (Lee, Jain et al., Citation2019). For example, Jussupow et al. (Citation2020) argue that many people believe that algorithms are perfect. Nonetheless, if they realise that an algorithm is imperfect (i.e., generates unfair outcomes), their expectations are disconfirmed and they blame and punish the algorithm by not using it. This is in line with the traditional organisational justice literature, which has shown that if managerial and organisational decisions are perceived to be unfair, the affected employees may show anger and resentment, which may lead to retaliatory actions against the organisation (Skarlicki & Folger, Citation1997). We propose:

Proposition 2: Perceived fairness positively influences recommendation acceptance, algorithm appreciation, and system adoption.

Extant literature has traditionally shown that stereotypical beliefs and prejudicial attitudes against underrepresented populations such as women (Rudman & Kilianski, Citation2000) and immigrants (Pereira et al., Citation2010) lead to discriminatory behaviours. This can be extended to algorithmic domains. For instance, a manager who believes men are more successful in executive positions may not choose an algorithmically-recommended and highly-qualified female candidate over a male candidate and may stop appreciating and using the system that constantly recommends women for executive positions. On the other hand, the manager may judge a biased algorithmic recommendation as fair. This effect can be explained by confirmation bias theory, which suggests that people tend to have a positive attitude towards information and technologies that help confirm their existing beliefs (Jussupow et al., Citation2020; Kahneman, Citation2011). Accordingly, prejudiced people may use the biased outputs of an algorithm to justify their prejudices and stereotypical beliefs when making decisions. As another possibly influential factor, moral identity serves as a self-regulator that motivates people to engage in moral actions (He et al., 2014). Hence, moral identity is expected to drive the identification of social biases in algorithms, leading to morally acceptable behaviours (e.g., not accepting discriminatory machine advice).

Regarding the technology-oriented individual characteristics, Rossi (Citation2019) highlights the fact that trust in AI systems can enhance their adoption and effective use. M. Lee (Citation2018a) noted that many people did not generally trust algorithms for employee evaluation decisions and therefore would not support their use. Accordingly, we argue that the propensity to trust in technology, particularly AI and analytics technology, likely results in higher confidence in algorithmic outputs and thereby leads to heightened fairness perceptions and willingness to accept an algorithmic recommendation or adopt a system even if the algorithm’s outputs are biased. We propose:

Proposition 3: Individual characteristics moderate the relations among algorithmic bias, perceived fairness, and behavioural responses.

M. Lee (Citation2018a) found that people generally view human decisions as fairer than algorithmic decisions on managerial tasks that require human skills, such as hiring and work evaluation. The reason for this is that people may believe that algorithms do not have the human intuition and experience to make fair judgements and decisions. In the context of dating apps and profile descriptions based on user images, Barlas et al. (Citation2019) revealed that individuals generally judge machine-generated descriptions as being fairer than those generated by humans. Thus, the type of task (mechanical versus human-like) influences perceived fairness. Also, we propose that regardless of fairness perceptions, the recommendations generated by an algorithm are more likely to be accepted if the type of task is mechanical because it requires lower levels of human skill and intuition. Hence, the type of task impacts the relation between perceived fairness and recommendation acceptance.

In addition, we argue that decision sensitivity and the task’s impacts on people’s lives can determine how people perceive fairness in algorithms and, accordingly, whether or not they accept recommendations, appreciate algorithms, and adopt data-driven technologies (Brkan, Citation2019). For high-impact tasks, people are expected to be more concerned about biases in algorithmic decisions, whereas for low-impact tasks, individuals’ sensitivity to biases is likely lower and therefore they are less likely to react to unfair algorithmic processes and outcomes.

Regarding types of data subjects, making unfair decisions against advantaged groups may be perceived to be less unethical than doing so against historically disadvantaged groups. For example, if a higher education admission tool discriminates against females, users are more likely to notice it and react to it compared with when the tool predominantly selects female applicants. Among disadvantaged groups, some may also be deemed to need more protection against discrimination in specific societies and countries depending on culture, social norms, and regulations (N. Lee, Citation2018b). We propose:

Proposition 4: Task characteristics moderate the relations among algorithmic bias, perceived fairness, and behavioural responses.

Lack of transparency is a major concern associated with using algorithmic tools (Lysaght et al., Citation2019). Springer and Whittaker (Citation2019) suggest that algorithmic transparency can improve users’ trust in and experience with algorithmic systems because it facilitates the interpretation of algorithm outputs and the formation of mental models associated with system operations. In other words, transparency enables people to reliably judge process characteristics, which would otherwise be opaque to them (Lee, Jain et al., Citation2019). Binns et al. (Citation2018) found that the case-based explanation style (i.e., providing examples to justify the decisions) had the most significant impact on perceived justice associated with an algorithmic decision, whereas Dodge et al. (Citation2019) reported that sensitivity-based explanations (i.e., amount of change in a variable needed for the decision to change accordingly) were more effective than case-based explanations at making biases transparent and empowering people to assess justice in algorithms. Ebrahimi and Hassanein (Citation2019) discussed that algorithmic transparency helps decrease users’ acceptance of discriminatory recommendations provided by analytics systems. Lee, Jain et al. (Citation2019) showed that transparency had mixed effects on fairness perceptions. By enabling people to understand equalities in resource allocation, transparency increased perceived fairness, whereas by allowing people to recognise uneven distributions and differences, transparency decreased perceived fairness. Accordingly, while transparency does not guarantee fairness in procedures, it allows people to identify biases in the outcomes of an algorithm.

User control and auditability are also among the user-centred design features that can influence the relations among algorithmic bias, fairness perceptions, and decisions in algorithmic environments. Research in non-algorithmic settings has demonstrated that having more control over a decision process enhances the perceived fairness of the results (Lind et al., Citation1983). In algorithmic environments, Amershi et al. (Citation2014) promote the idea of deploying interactive ML systems to allow individuals to train algorithms by providing demonstrations and examples. In the context of recommender systems, Harper et al. (Citation2015) found that the results of user-tuned recommender algorithms were evaluated more positively than recommender systems that did not provide users with any control. Among the reviewed papers in this study, Lee, Jain et al. (Citation2019) found that outcome control can improve fairness perceptions by enabling people to identify the limitations of algorithm outputs and accordingly adjust the decisions to make them fairer. Moreover, auditing an algorithm allows users and third-parties to assess biases in it, leading to higher levels of confidence and trust in its results (Kim, Citation2017; Rossi, Citation2019; Springer & Whittaker, Citation2019). Hence, users of an auditable algorithmic system are more likely to detect biases and react to them by refusing to accept the advice generated by the system, refusing to adopt the system, and engaging in algorithm aversion. We propose:

Proposition 5: Algorithmic technology characteristics moderate the relations among algorithmic bias, perceived fairness, and behavioural responses.

Organisational policies regarding algorithmic decision making, including publicity and reasonableness, can induce decision makers to be more cautious about biases in algorithms (Wong, Citation2019). Similarly, diversity, equity, and inclusion policies and rules in organisations can foster ethicality, driving decision makers to identify unfairness in algorithmic processes and outcomes and engage in fair behavioural responses (e.g., biased algorithm aversion). Setting ethical codes and standards to guide employees in using algorithmic systems and actively communicating those values in business practices can also shape an organisational climate characterised by ethicality, leading to more responsible algorithmic decision making (Lee, Kusbit et al., Citation2019; Loi et al., Citation2019). This is in accordance with business ethics studies, which have demonstrated that firms that establish policies and procedures with moral consequences can prevent deviant workplace behaviours (Bulutlar & Öz, Citation2009).

Additionally, research suggests that user-centred principles and standards focusing on human-computer interaction can help mitigate injustice in algorithms (Rantavuo, Citation2019). For example, raising awareness in a firm about the complexities and issues associated with algorithmic bias likely encourages designers and users of AI applications and analytics tools to not only consider business goals, but also take into account the well-being of the subjects of algorithmic decisions (Koene, Citation2017; Koene et al., Citation2018; Loi et al., Citation2019; Rantavuo, Citation2019). Therefore, if users of an algorithm perceive that it provides unfair results, depending on the organisation’s characteristics (e.g., policies and standards), they are likely to reject the algorithm-generated recommendations and refuse to adopt the algorithmic system. We propose:

Proposition 6: Organisational characteristics moderate the relations among algorithmic bias, perceived fairness, and behavioural responses.

Anti-discrimination laws, data protection acts, and algorithmic accountability regulations can change individuals’ behaviours by altering the conditions under which decisions are made in ways that favour healthier and more prosocial behaviours (Kinzig et al., Citation2013). Data subjects are able to exercise their rights related to data access, restriction of processing, and decision explanation under these acts and regulations (Almada, Citation2019; Garcia, Citation2016). Hence, organisational users of intelligent systems are more likely to care about and identify biases in algorithms and make morally and legally justifiable algorithmic decisions compared with situations in which these legal pressures are absent.

Furthermore, policy and legal instruments influence social norms; accordingly, individuals’ behaviours, preferences, and values become self-reinforcing even in the absence of such instruments (Kinzig et al., Citation2013). Regulations and laws can serve to direct or generate social norms by signalling to the members of a community which issues are important in society. A society’s culture can also play a role in shaping individuals’ perceptions and behaviours concerning ethical dilemmas. For example, prior research suggests that high-power-distance and masculine cultures promote and possibly legitimise the unequal distribution of power between men and women (Glick, Citation2006). Therefore, people in those cultures are less likely to question gender inequality or view it as socially or morally unacceptable (Dohi & Fooladi, Citation2008). Accordingly, if individuals perceive environmental pressures (e.g., social norms, acts, regulations) or are culturally induced or trained to make unbiased decisions, they may not accept or adopt algorithmic systems that generate biased outcomes. We propose:

Proposition 7: Environmental characteristics moderate the relations among algorithmic bias, perceived fairness, and behavioural responses.

6.1. Directions for future research

The proposed model in Figure 2 highlights the directions for future research. Specifically, researchers can make important contributions to the development of knowledge in the algorithmic bias arena by empirically testing the propositions developed in this study. As presented in Table 3, we classified the proposed relations into two categories: 1) the ones that were derived or extrapolated from the conceptual discussions or empirical evidence provided in the reviewed papers and 2) the novel relations that were neither conceptually nor empirically addressed in the reviewed studies. While the proposed relations in the second category need special attention and empirical verification in future IS studies, it is also necessary to test the relations in the first group because most of them were either discussed only conceptually in the reviewed papers (e.g., the impact of perceived fairness on recommendation acceptance) or tested in a limited way (e.g., the impact of task type on the effect of algorithmic bias on perceived fairness). In particular, researchers should employ experimental or observational methods to understand which dispositional and situational factors can directly or interactively influence user behaviours in response to a lack of justice and equality in algorithmic outcomes.

Researchers can address gaps in the extant literature by examining the mechanisms through which algorithmic bias impacts individuals’ behavioural responses towards algorithmic outcomes (propositions 1 and 2). In addition, our proposed research model addresses a gap in the literature by highlighting the most critical contextual factors that enable researchers to understand how and under what circumstances algorithmic bias impacts individuals’ behavioural responses towards algorithmic outcomes (propositions 3, 4, 5, 6, and 7). For example, researchers can use theories grounded in behavioural economics (e.g., confirmation bias theory) to understand whether or how human biases lead people to accept biased information generated by an algorithm in the presence or absence of organisational policies and legislation related to AI outcomes (Arnott & Gao, Citation2019; Kahneman, Citation2011) (propositions 3, 6, and 7). Moreover, drawing on the anchoring bias theory (i.e., depending too heavily on the first piece of information when making decisions) along with the dual process and dual system theories (i.e., people’s use of two sets of decision-making processes), IS researchers can unravel how algorithm outputs can drive users with different characteristics (e.g., need for cognition) to consciously or subconsciously detect and react to algorithmic bias (propositions 1, 2, and 3). This research line can be implemented as part of the explainable AI research stream, which emphasises the impact of transparency and interpretability of ML models on trust in technology, system adoption, and information acceptance (Dodge et al., Citation2019) (proposition 5). Human-computer interaction scholars and user experience researchers can also conduct impactful research by examining how algorithmic system characteristics such as information visualisation and interactive features can facilitate auditability and outcome control, inform fairness judgements, and trigger behavioural responses to biases in different types of algorithmic tasks (propositions 1, 4, and 5).

The propositions developed in this study, albeit focused mainly on user behaviours, can also be used to examine system developers’ fairness perceptions and behavioural responses to algorithmic bias (propositions 1 and 2). Behavioural responses in this case would involve employing bias detection and mitigation approaches during the pre-processing, in-processing, and post-processing phases of data analytics pipelines and the teamwork surrounding those phases. Accordingly, the effects of organisational policies and practices regarding ethical system design can be assessed in terms of enabling and motivating data engineers, ML developers, and user experience designers with different characteristics to mitigate biases in algorithms proactively and properly (propositions 3 and 6). Drawing on the results of such behavioural studies, design science researchers can also make crucial contributions to this research line by innovatively designing, constructing, analysing, and evaluating artefacts such as methodologies that can enable algorithmic system developers to implement ethics-by-design approaches (Veale et al., Citation2018) (proposition 5).
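
As a concrete example of what such a pre-processing intervention might look like, the sketch below applies reweighing, a commonly used pre-processing technique that is not drawn from the reviewed papers: it assigns instance weights so that group membership and the favourable label become statistically independent in the training data. The data and the simulated bias pattern are entirely synthetic.

```python
import numpy as np

# Synthetic training data: a binary protected attribute and a binary favourable label.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)
# Simulate historical bias: group 1 receives the favourable label less often.
label = (rng.random(1000) < np.where(group == 1, 0.3, 0.6)).astype(int)

# Reweighing: weight(g, y) = P(group = g) * P(label = y) / P(group = g, label = y),
# so that group and label are independent in the weighted data.
weights = np.empty(len(group), dtype=float)
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (label == y)
        weights[cell] = (group == g).mean() * (label == y).mean() / cell.mean()

# Check: the weighted favourable-outcome rate is equalised across groups.
for g in (0, 1):
    m = group == g
    print(f"Group {g}: raw rate = {label[m].mean():.2f}, "
          f"weighted rate = {np.average(label[m], weights=weights[m]):.2f}")
```

In-processing approaches would instead add fairness constraints or penalties to the learning objective, while post-processing approaches would adjust decision thresholds or outputs after training.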

7. Discussion and conclusion

Data analytics and AI technologies, although providing immense opportunities for firms to improve their decision-making performance, may lead to algorithm-driven biases against individuals. Such discriminatory consequences of using data-oriented technologies may add to the already existing prejudices in the workplace and society such as biases based on gender (Rudman & Kilianski, Citation2000), race (Huq, Citation2018), and nationality (Pereira et al., Citation2010). Our literature analysis showed that despite the growing importance of assessing algorithmic bias in the public and private sectors, most of the non-technical studies so far have tried to draw attention to this issue from a conceptual standpoint without much empirical effort to shed light on its behavioural, organisational, and social impacts. Our research model and propositions developed based on the literature and theoretical frameworks suggest that technology by itself does not result in discrimination and injustice, but it can trigger individuals’ perceptions, leading to either acceptance or rejection of biased information and technology. Moreover, the proposed links between objective biases in algorithmic systems, fairness perceptions, and behavioural responses can be affected by contextual factors such as users’ socially-biased beliefs and attitudes, types of tasks, and organisational policies and rules that can induce or reduce the disruptive role of algorithmic bias.

From a theoretical perspective, a major contribution of this study is that it enhances understanding of the notion of algorithmic bias by providing a thematic classification of the current literature, systematically identifying and defining important theoretical concepts based on the literature, and synthesising the theoretical relations that have been conceptually or empirically addressed in previous studies (Table 3). Another theoretical contribution of this study is that it contextualises different behavioural and organisational theories, including the stimulus-organism-response theory, contextual factors framework, and justice theory, in the realm of algorithmic systems and makes a theoretical connection between algorithmic bias and a series of socio-behavioural constructs. Unlike prior literature review studies related to algorithmic bias that focused on specific contexts (Khalil et al., Citation2020; Robert et al., Citation2020), developed design agendas for data science practitioners (Robert et al., Citation2020; Saltz & Dewar, Citation2019), or provided little theoretical insight into the notion of algorithmic bias and its behavioural consequences (Favaretto et al., Citation2019; Khalil et al., Citation2020), this study adopted a more theoretical approach, which makes the results useful for behavioural and organisational researchers.

From a practical standpoint, this paper suggests that business analysts, organisational decision-makers, and policymakers should be mindful of algorithmic bias and its potentially negative consequences in social and organisational settings. The theoretical model developed in this study helps practitioners make more informed, evidence-based management decisions in algorithmic settings. Further, our model contends that developers of ML algorithms and AI applications should not only use computational techniques to mitigate biases, but also augment their systems with transparency, auditability, and control features to empower users to play an active role in bias detection and mitigation (Lee, Jain et al., Citation2019). Additionally, developers should integrate appropriate steps into the system development lifecycle (e.g., requirements gathering and usability testing processes) to understand how individuals in specific contexts perceive and react to algorithmic biases that possibly emerge in those systems and, accordingly, how developers should adjust system specifications to ultimately address biases in algorithmic decisions.

We reviewed 56 relevant papers in this study, most of which were conceptual papers (Appendix 2). Thus, IS researchers can extend this literature analysis in the future by including additional empirical studies that will likely be published in the years to come. Moreover, this study and the model developed as part of it focus on biases in decision support algorithms, whereas future research can examine an extended scope of algorithmic bias encompassing instances and consequences of biases in other intelligent systems such as self-driving vehicles (Danks & London, Citation2017) and online recommender systems (Lin et al., Citation2019). Researchers in future studies can also investigate how different forms and levels of human intervention in making algorithmic decisions about people can help prevent or reduce the impact of algorithmic bias on those decisions. We invite IS researchers to empirically test our propositions to extend understanding of the complex and multi-faceted phenomenon of algorithmic bias, potentially helping address it more holistically and effectively.

Disclosure of potential conflicts of interest

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online.

Notes

1. To further validate the thematic analysis results, an independent coder was employed to label the papers using the seven theme codes. The interrater reliability measured using Cohen’s kappa statistic showed a substantial agreement between the raters (κ = 0.75), confirming the robustness of our coding approach and outcomes (Landis & Koch, Citation1977).
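
For reference, Cohen’s kappa compares the observed proportion of coder agreement with the agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the proportion of papers assigned the same theme by both coders and $p_e$ is the chance agreement implied by each coder’s marginal theme frequencies; values between 0.61 and 0.80 are conventionally interpreted as substantial agreement (Landis & Koch, Citation1977).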

References

  • Ajunwa, I. (2020). The paradox of automation as anti-bias intervention. Cardozo L. Review, 41(5), 1671. https://cardozolawreview.com/the-paradox-of-automation-as-anti-bias-intervention/
  • Almada, M. (2019). Human intervention in automated decision-making: Toward the construction of contestable systems. Proceedings of the seventeenth international conference on artificial intelligence and law, Montreal, QC, Canada.
  • Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513
  • Arnott, D., & Gao, S. (2019). Behavioral economics for decision support systems researchers. Decision Support Systems, 122, 113063. https://doi.org/10.1016/j.dss.2019.05.003
  • Arrighetti, A., Bachmann, R., & Deakin, S. (1997). Contract law, social norms and inter-firm cooperation. Cambridge Journal of Economics, 21(2), 171–195. https://doi.org/10.1093/oxfordjournals.cje.a013665
  • Babuta, A., & Oswald, M. (2019). Data analytics and algorithmic bias in policing. Royal United Services Institute for Defence and Security Studies.
  • Barlas, P., Kleanthous, S., Kyriakou, K., & Otterbacher, J. (2019). What makes an image tagger fair? Proceedings of the 27th ACM conference on user modeling, adaptation and personalization, Larnaca, Cyprus.
  • Barocas, S., & Selbst, A. D. (2016). Big Data’s disparate impact. Social Science Research Network, 62. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899#
  • Bauer, N. M. (2015). Emotional, sensitive, and unfit for office? Gender stereotype activation and support female candidates. Political Psychology, 36(6), 691–708. https://doi.org/10.1111/pops.12186
  • Binns, R. (2018). What can political philosophy teach us about algorithmic fairness? IEEE Security & Privacy, 16(3), 73–80. https://doi.org/10.1109/MSP.2018.2701147
  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s reducing a human being to a percentage’ perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Blasi, A., Kurtines, W., & Gewirtz, J. (1994). Moral identity: Its role in moral functioning. Fundamental Research in Moral Development, 2, 168–179.
  • Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91–121. https://doi.org/10.1093/ijlit/eay017
  • Brown, A., Chouldechova, A., Putnam-Hornstein, E., Tobin, A., & Vaithianathan, R. (2019). Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decision-making in child welfare services. Proceedings of the 2019 CHI conference on human factors in computing systems, Glasgow, UK.
  • Bulutlar, F., & Öz, E. Ü. (2009). The effects of ethical climates on bullying behaviour in the workplace. Journal of Business Ethics, 86(3), 273–295. https://doi.org/10.1007/s10551-008-9847-4
  • Chen, H., Chiang, R. H., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 4. https://doi.org/10.2307/41703503
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
  • Coates, D., & Martin, A. (2019). An instrument to evaluate the maturity of bias governance capability in artificial intelligence projects. IBM Journal of Research and Development, 63(4/5), 7: 1–7: 15. https://doi.org/10.1147/JRD.2019.2915062
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386. https://doi.org/10.1037/0021-9010.86.3.386
  • Colquitt, J. A., & Rodell, J. B. (2015). Measuring justice and fairness. In The Oxford handbook of justice in the workplace (Vol. 1, pp. 187–202).  Oxford University Press.
  • Corley, K. G., & Gioia, D. A. (2011). Building theory about theory building: What constitutes a theoretical contribution? Academy of Management Review, 36(1), 12–32. https://doi.org/10.5465/amr.2009.0486
  • Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. IJCAI.
  • Dickinger, A., Arami, M., & Meyer, D. (2008). The role of perceived enjoyment and social norm in the adoption of technology with network externalities. European Journal of Information Systems, 17(1), 4–11. https://doi.org/10.1057/palgrave.ejis.3000726
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
  • Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K., & Dugan, C. (2019). Explaining models: An empirical study of how explanations impact fairness judgment. Proceedings of the 24th international conference on intelligent user interfaces. Marina del Ray, CA, USA.
  • Dohi, I., & Fooladi, M. M. (2008). Individualism as a solution for gender equality in Japanese society in contrast to the social structure in the United States. Forum on Public Policy.
  • Domanski, R. (2019). The AI pandorica: linking ethically-challenged technical outputs to prospective policy approaches. Proceedings of the 20th annual international conference on digital government research, Dubai, UAE.
  • Draude, C., Klumbyte, G., Lücking, P., & Treusch, P. (2019). Situated algorithms: A sociotechnical systemic approach to bias. Online Information Review, 44(2), 325–342. https://doi.org/10.1108/OIR-10-2018-0332
  • Ebrahimi, S., & Hassanein, K. (2019). Empowering users to detect data analytics discriminatory recommendations. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a ‘Right to an Explanation’ Is probably not the remedy you are looking for. Social Science Research Network, 67. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855
  • Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Conference on fairness, accountability and transparency, New York City, NY, USA.
  • Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big Data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 12. https://doi.org/10.1186/s40537-019-0177-4
  • Fiske, S. T. (1998). Stereotyping, prejudice, and discrimination. The Handbook of Social Psychology, 2(4), 357–411. McGraw-Hill. https://psycnet.apa.org/record/1998-07091-025
  • Fiske, S. T., Cuddy, A. J., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878. https://doi.org/10.1037/0022-3514.82.6.878
  • Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2018). Predictably unequal? The effects of machine learning on credit markets. Social Science Research Network, 94. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3072038
  • Gal, U., Jensen, T. B., & Stein, M.-K. (2017). People analytics in the age of big data: An agenda for IS research. ICIS 2017: Transforming Society with Digital Innovation. Proceedings of the 38th International Conference on Information Systems, Seoul, South Korea.
  • Garcia, M. (2016). Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal, 33(4), 111–117. https://doi.org/10.1215/07402775-3813015
  • Ghasemaghaei, M., & Hassanein, K. (2019). Dynamic model of online information quality perceptions and impacts: A literature review. Behaviour & Information Technology, 38(3), 302–317. https://doi.org/10.1080/0144929X.2018.1531928
  • Glick, P. (2006). Ambivalent sexism, power distance, and gender inequality across cultures. In S. Guimond (Ed.), Social Comparison and Social Psychology: Understanding Cognition, Intergroup Relations, and Culture, 283. Cambridge University Press https://doi.org/10.1017/CBO9780511584329.015
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of Management, 16(2), 399–432. https://doi.org/10.1177/014920639001600208
  • Grgić-Hlača, N., Engel, C., & Gummadi, K. P. (2019). Human decision making with machine assistance: An experiment on bailing and jailing. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–25. https://doi.org/10.1145/3359280
  • Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. Proceedings of the 2018 World Wide Web conference.
  • Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2018). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. Thirty-Second AAAI conference on artificial intelligence, Lyon, France.
  • Haas, C. (2019). The price of fairness: A framework to explore trade-offs in algorithmic fairness. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3164973
  • Hamilton, I. A. (2018). Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women. Business Insider. Retrieved November 11 from https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter–2018–10
  • Harper, F. M., Xu, F., Kaur, H., Condiff, K., Chang, S., & Terveen, L. (2015). Putting users in control of their recommendations. Proceedings of the 9th ACM conference on recommender systems, Vienna, Austria.
  • Heaven, W. D. (2020, August 5). The UK is dropping an immigration algorithm that critics say is racist. MIT Technology Review. https://www.technologyreview.com/2020/08/05/1006034/the-uk-is-dropping-an-immigration-algorithm-that-critics-say-is-racist/
  • Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI conference on human factors in computing systems, Glasgow, UK.
  • Huq, A. Z. (2018). Racial equity in algorithmic criminal justice. Duke LJ, 68(6), 1043. https://scholarship.law.duke.edu/dlj/vol68/iss6/1/
  • Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Proceedings of the 28th European Conference on Information Systems, virtual.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA Law Review, 66, 54. https://www.uclalawreview.org/private-accountability-age-algorithm/
  • Khalil, A., Ahmed, S. G., Khattak, A. M., & Al-Qirim, N. (2020). Investigating bias in facial analysis systems: A systematic review. IEEE Access, 8, 130751–130761. https://doi.org/10.1109/ACCESS.2020.3006051
  • Kim, P. T. (2017). Auditing algorithms for discrimination. U. Pa. Law Review Online, 166, 189. https://www.pennlawreview.com/2017/12/12/auditing-algorithms-for-discrimination/
  • Kinzig, A. P., Ehrlich, P. R., Alston, L. J., Arrow, K., Barrett, S., Buchman, T. G., Daily, G. C., Levin, B., Levin, S., & Oppenheimer, M. (2013). Social norms and global environmental challenges: The complex interaction of behaviors, values, and policy. BioScience, 63(3), 164–175. https://doi.org/10.1525/bio.2013.63.3.5
  • Koene, A. (2017). Algorithmic bias: Addressing growing concerns [leading edge]. IEEE Technology and Society Magazine, 36(2), 31–32. https://doi.org/10.1109/MTS.2017.2697080
  • Koene, A., Dowthwaite, L., & Seth, S. (2018). IEEE P7003™ standard for algorithmic bias considerations: Work in progress paper. Proceedings of the international workshop on software fairness, Gothenburg, Sweden.
  • Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. U. Pa. L. Review, 165(3), 633. https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/
  • Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966–2981. https://doi.org/10.1287/mnsc.2018.3093
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
  • Lee, M. (2018a). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Lee, M., Jain, A., Cha, H., Ojha, S., & Kusbit, D. (2019). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–26.
  • Lee, M., Kim, J., & Lizarondo, L. (2017). A human-centered approach to algorithmic services: Considerations for fair and motivating smart community service management that allocates donations to non-profit organizations. Proceedings of the 2017 CHI conference on human factors in computing systems, Denver, CO, USA.
  • Lee, M., Kusbit, D., Kahng, A., Kim, J., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., & Psomas, A. (2019). WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), Article 181, 1–35. https://doi.org/10.1145/3359283
  • Lee, N. (2018b). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society 16, 252–260. https://doi.org/10.1108/JICES-06-2018-0056
  • Lin, K., Sonboli, N., Mobasher, B., & Burke, R. (2019). Crank up the volume: Preference bias amplification in collaborative recommendation. RMSE workshop (in conjunction with the 13th ACM conference on Recommender Systems (RecSys)), Copenhagen, Denmark.
  • Lind, E. A., Lissak, R. I., & Conlon, D. E. (1983). Decision control and process control effects on procedural fairness judgments 1. Journal of Applied Social Psychology, 13(4), 338–350. https://doi.org/10.1111/j.1559-1816.1983.tb01744.x
  • Loi, M., Heitz, C., Ferrario, A., Schmid, A., & Christen, M. (2019). Towards an ethical code for data-based business. 2019 6th Swiss Conference on Data Science (SDS), Bern, Switzerland.
  • Lysaght, T., Lim, H. Y., Xafis, V., & Ngiam, K. Y. (2019). AI-assisted decision-making in healthcare. Asian Bioethics Review, 11(3), 299–314. https://doi.org/10.1007/s41649-019-00096-0
  • Martin, K. (2019). Designing ethical algorithms. MIS Quarterly Executive, 18(2), Article 5. https://aisel.aisnet.org/misqe/vol18/iss2/5/
  • McFarlin, D. B., & Sweeney, P. D. (1992). Distributive and procedural justice as predictors of satisfaction with personal and organizational outcomes. Academy of Management Journal, 35(3), 626–637. https://doi.org/10.2307/256489
  • Mcknight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 12. https://doi.org/10.1145/1985347.1985353
  • Mehrabian, A., & Russell, J. A. (1974). An approach to environmental psychology. The MIT Press.
  • Mikalef, P., Pappas, I. O., Krogstie, J., & Giannakos, M. (2018). Big data analytics capabilities: A systematic literature review and research agenda. Information Systems and e-Business Management, 16(3), 547–578. https://doi.org/10.1007/s10257-017-0362-y
  • Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This thing called fairness: disciplinary confusion realizing a value in technology. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–36. https://doi.org/10.1145/3359221
  • O’neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
  • Pereira, C., Vala, J., & Costa‐Lopes, R. (2010). From prejudice to discrimination: The legitimizing role of perceived threat in discrimination against immigrants. European Journal of Social Psychology, 40(7), 1231–1250. https://doi.org/10.1002/ejsp.718
  • Perez Vallejos, E., Koene, A., Portillo, V., Dowthwaite, L., & Cano, M. (2017). Young people’s policy recommendations on algorithm fairness. Proceedings of the 2017 ACM on Web Science Conference, New York City, NY, USA.
  • Petter, S., DeLone, W., & McLean, E. R. (2013). Information systems success: The quest for the independent variables. Journal of Management Information Systems, 29(4), 7–62. https://doi.org/10.2753/MIS0742-1222290401
  • Picoto, W. N., Bélanger, F., & Palma-dos-reis, A. (2014). An organizational perspective on m-business: Usage factors and value determination. European Journal of Information Systems, 23(5), 571–592. https://doi.org/10.1057/ejis.2014.15
  • PYMNTS. (2018). In the age of algorithms, will banks ever graduate to true AI? New York City, NY, USA. Retrieved April 5 from https://www.pymnts.com/news/artificial-intelligence/2018/bank-technology-true-ai-machine-learning/
  • Rader, E., Cotter, K., & Cho, J. (2018). Explanations as mechanisms for supporting algorithmic transparency. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Ransbotham, S., Fichman, R. G., Gopal, R., & Gupta, A. (2016). Special section introduction—Ubiquitous IT and digital vulnerabilities. Information Systems Research, 27(4), 834–847. https://doi.org/10.1287/isre.2016.0683
  • Rantavuo, H. (2019). Designing for intelligence: User-centred design in the age of algorithms. Proceedings of the 5th International ACM in-cooperation HCI and UX conference, Jakarta Surabaya, Bali, Indonesia.
  • Rawls, J. (2001). Justice as fairness: A restatement. Harvard University Press.
  • Rhue, L. (2019). Beauty’s in the AI of the beholder: How AI anchors subjective and objective predictions. Proceedings of the 40th International Conference on Information Systems, Munich, Germany.
  • Robert, L., Pierce, C., Morris, L., Kim, S., & Alahmad, R. (2020). Designing fair AI for managing employees in organizations: A review. Human-Computer Interaction, 35(5–6), 545–575. https://doi.org/10.1080/07370024.2020.1735391
  • Rossi, F. (2019). Building trust in artificial intelligence. Journal of International Affairs, 72(1), 127–134. https://www.jstor.org/stable/26588348
  • Rudman, L. A., & Kilianski, S. E. (2000). Implicit and explicit attitudes toward female authority. Personality & Social Psychology Bulletin, 26(11), 1315–1328. https://doi.org/10.1177/0146167200263001
  • Saltz, J. S., & Dewar, N. (2019). Data science ethical considerations: A systematic literature review and proposed project framework. Ethics and Information Technology, 21(3), 197–208. https://doi.org/10.1007/s10676-019-09502-5
  • Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Presented to Data and Discrimination: Converting Critical Concerns into Productive Inquiry, A Preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA, USA. https://www.semanticscholar.org/paper/Auditing-Algorithms-%3A-Research-Methods-for-on-Sandvig-Hamilton/b7227cbd34766655dea10d0437ab10df3a127396
  • Saxena, N.-A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D. C., & Liu, Y. (2019). How do fairness definitions fare? Examining public attitudes towards algorithmic definitions of fairness. Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, Honolulu, HI, USA.
  • Schwartz, S. H. (1999). A theory of cultural values and some implications for work. Applied Psychology: An International Review, 48(1), 23–47. https://doi.org/10.1111/j.1464-0597.1999.tb00047.x
  • Shaw, J., Rudzicz, F., Jamieson, T., & Goldfarb, A. (2019). Artificial intelligence and the implementation challenge. Journal of Medical Internet Research, 21(7), e13659. https://doi.org/10.2196/13659
  • Shen, K. N., & Khalifa, M. (2012). System design effects on online impulse buying. Internet Research, 22(4), 396–425. https://doi.org/10.1108/10662241211250962
  • Shrestha, Y. R., & Yang, Y. (2019). Fairness in algorithmic decision-making: Applications in multi-winner voting, machine learning, and recommender systems. Algorithms, 12(9), 199. https://doi.org/10.3390/a12090199
  • Silva, S., & Kenney, M. (2019). Algorithms, platforms, and ethnic bias. Communications of the ACM, 62(11), 37–39. https://doi.org/10.1145/3318157
  • Skarlicki, D. P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive, procedural, and interactional justice. Journal of Applied Psychology, 82(3), 434. https://doi.org/10.1037/0021-9010.82.3.434
  • Someh, I., Davern, M., Breidbach, C. F., & Shanks, G. (2019). Ethical issues in big data analytics: A stakeholder perspective. Communications of the Association for Information Systems, 44(1), 34. https://doi.org/10.17705/1CAIS.04434
  • Springer, A., & Whittaker, S. (2019). Making transparency clear: The dual importance of explainability and auditability. IUI Workshops.
  • Swim, J. K., Aikin, K. J., Hall, W. S., & Hunter, B. A. (1995). Sexism and racism: Old-fashioned and modern prejudices. Journal of Personality and Social Psychology, 68(2), 199. https://doi.org/10.1037/0022-3514.68.2.199
  • Thorbecke, C. (2019). New York probing Apple Card for alleged gender discrimination after viral tweet. ABC News. Retrieved February 22, 2020, from https://abcnews.go.com/US/york-probing-apple-card-alleged-gender-discrimination-viral/story?id=66910300
  • Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
  • Verma, S., & Rubin, J. (2018). Fairness definitions explained. 2018 IEEE/ACM international workshop on software fairness (FairWare), Gothenburg, Sweden.
  • Webb, H., Koene, A., Patel, M., & Vallejos, E. P. (2018). Multi-stakeholder dialogue for policy recommendations on algorithmic fairness. Proceedings of the 9th international conference on social media and society.
  • Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, 26(2). https://www.jstor.org/stable/4132319.
  • Wells, D., & Spinoni, E. (2019). Western Europe Big Data and analytics software forecast, 2018–2023. International Data Corporation. Retrieved April 5 from https://www.idc.com/getdoc.jsp?containerId=EUR145601519
  • Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. Proceedings of the 2020 conference on fairness, accountability, and transparency, Barcelona, Spain.
  • Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8, 78–115. https://doi.org/10.5325/jinfopoli.8.2018.0078
  • Wong, P.-H. (2019). Democratizing algorithmic fairness. Philosophy & Technology, 33, 225–244. http://doi.org/10.1007/s13347-019-00355-w
  • Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018). A qualitative exploration of perceptions of algorithmic fairness. Proceedings of the 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada.
  • Yapo, A., & Weiss, J. (2018). Ethical implications of bias in machine learning. Proceedings of the Hawaii International Conference on System Sciences, Waikoloa, HI, USA.
