Guest Editorial

Thinking responsibly about responsible AI and ‘the dark side’ of AI


ABSTRACT

Artificial Intelligence (AI) has been argued to offer a myriad of improvements in how we work and live. The notion of AI comprises a wide-ranging set of technologies that allow individuals and organizations to integrate and analyze data and use that insight to improve or automate decision-making. While most attention has been placed on the positive outcomes companies realize through the adoption and use of AI, there is growing concern around the negative and unintended consequences of such technologies. In this special issue, we called for research papers that help us explore the dark side of AI use. By adopting a dark side lens, we aimed to expand our understanding of how AI should be implemented in practice and how to minimize or avoid negative outcomes. In this editorial, we build on the notion of responsible AI to highlight the different ways in which AI can potentially produce unintended consequences, as well as to suggest alternative paths future IS research can follow to improve our knowledge about how to mitigate such occurrences. We further expand on dark side theorizing in order to uncover hidden assumptions of the current literature and to propose other prominent themes that can guide future IS research on AI adoption and use.

1. Introduction

Technological progress is often measured by increased speed and efficiency and by how it can make humans and society faster, better, stronger, and happier (Conboy, Citation2019). As a result, information systems (IS) research is dominated by studies that focus on technology’s revolutionary or positive power. In more recent years, however, the IS field has begun to turn its attention to the complex and often alarming ways in which the use of IT affects organisational and social life – the dark side of technology and its use. There is a growing, fractious tension between our technological capabilities and the human social structures within which the technologies reside. This tension questions the true extent to which technology makes us faster, better, stronger, and happier. Then there are the side effects – technology’s role in making us more unhealthy, sadder, and exhausted (Conboy, Citation2019; Tarafdar et al., Citation2015). The contemporary popular press is strewn with examples of technology overuse and addiction (Tarafdar et al., Citation2020; Turel & Ferguson, Citation2020), technostress (Tarafdar et al., Citation2015), security and privacy concerns (D’Arcy et al., Citation2014; Goel et al., Citation2017), fake news (Talwar et al., Citation2019), and the immediate, global reach and potency of modern weaponry (Small & Jollands, Citation2006). In particular, artificial intelligence (AI) has attracted attention regarding the dark side implications it can potentially give rise to. Notable examples include the potential to increase bias and inequality (Akter et al., Citation2021; Luengo-Oroz et al., Citation2021), a lack of transparency (Haibe-Kains et al., Citation2020), and reduced human agency and freedom (Floridi & Cowls, Citation2019).

Such adverse outcomes of AI use, along with the threat of loss of control or autonomy to superior AI entities, have sparked an ongoing debate about the need to establish a set of principles to effectively govern AI (Barredo Arrieta et al., Citation2020; Fjeld et al., Citation2020). In the past several years, every organisation connected to technology policy has proposed a set of guiding principles around AI (Google, Citation2021; IBM, Citation2021; OECD, Citation2021). As a result, there is a growing interest in the notion of responsible AI as a set of propositions or normative declarations about how AI generally ought to be developed, deployed, and governed (Theodorou & Dignum, Citation2020). The principles that jointly comprise responsible AI are suggested under the logic that they will prevent or minimise the intended and unintended negative effects that AI will introduce into everyday life (Wang et al., Citation2020). Such concerns, and the subsequent focus on establishing responsible AI principles, have largely emerged from prominent cases where negative effects have been noted. These “AI gone wrong” cases help highlight important intricacies and considerations in designing and deploying AI (Desouza et al., Citation2020). However, utilising such a dark-side lens pre-emptively can enable researchers and practitioners to envision the potential negative or unintended effects of AI before they happen and to develop appropriate policies and safeguards (Tarafdar et al., Citation2013).

While the current discourse of AI research has predominantly focused on the potential value and positive impacts of AI (Davenport & Ronanki, Citation2018), it is perhaps ironic that the field has yet to adopt a dark side metaphor to examine issues that have traditionally been overlooked, ignored, or suppressed (Linstead et al., Citation2014). The purpose of this special issue is to open up this debate and spark a discussion around the dark side of AI in order to critically assess its negative and unintended consequences. Issues typically marginalised as outliers, abnormal, or deviant are rarely investigated rigorously. However, examining such occurrences can allow us to deploy novel technological innovations such as AI in a much safer, more effective, and more responsible way (Shneiderman, Citation2020). Thus, responsible AI principles go hand-in-hand with a thorough understanding of AI through a dark-side lens, as they can be informed by negative or unintended outcomes of AI and operate pre-emptively against their appearance (Ågerfalk et al., CitationForthcoming; Clarke, Citation2019). In addition, the dark-side lens can allow us to envision circumstances or scenarios that have previously been ignored by mainstream research, opening up a more comprehensive and nuanced understanding of the AI phenomenon (Salo et al., Citation2018).

In this editorial, we adopt a definition of AI that does not emphasise a specific technology but instead follows an integrative approach to the key underlying aspects that characterise the AI phenomenon. Specifically, we adopt the definition of Mikalef and Gupta (Citation2021) and define AI as “the ability of a system to identify, interpret, make inferences, and learn from data to achieve predetermined organisational and societal goals”. We start by discussing the phenomenon of responsible AI in the next section, its origins, and the current discussion around the notion. The notion of responsible AI is then decomposed and used as an exemplar of potentially damaging issues that can emerge from AI use and as a means of setting up an agenda for future research. Within this agenda, we highlight where the papers of this special issue have contributed and how the dark-side lens that they have adopted helps us advance our knowledge within the IS domain. We then challenge some key assumptions around the AI phenomenon by utilising a dark-side lens as a way of questioning the dominant paradigm around AI and suggesting a series of research questions. Finally, we conclude with some thoughts on the future of AI within the IS domain and some key takeaways from our experience in managing this special issue.

2. What is responsible AI?

The growing interest in AI has also been accompanied by a plethora of cases where negative or unintended consequences have emerged (Fuchs, Citation2018). A rather infamous example was the Twitter-based chatbot “Tay”, released by Microsoft in 2016. In less than 24 hours, the chatbot, originally designed to engage in “conversational understanding”, was taken down after users taught it to use misogynist, racist, and politically incorrect phrases (Neff & Nagy, Citation2016). Cases such as these have prompted policy-makers, researchers, and practitioners to think about how the development and use of AI can follow “responsible” principles (Barredo Arrieta et al., Citation2020). AI, therefore, introduces new ethical, legal, and governance challenges – including but not limited to – unintended discrimination, biased outcomes, and issues related to customers’ awareness and knowledge about how AI is involved in decision outcomes (Singapore_Government, Citation2021). While earlier studies have looked at isolated factors relating to responsible principles, such as the elimination of bias (Brighton & Gigerenzer, Citation2015), explainability of AI outcomes (Gunning et al., Citation2019), or safety and security (Srivastava et al., Citation2017), the last few years have seen a move towards a more holistic understanding of what constitutes responsible AI (Dignum, Citation2019). A recent report published by the Berkman Klein Center for Internet & Society at Harvard University documented 38 such initiatives from different entities and organisations (Fjeld et al., Citation2020).

Based on the growing consensus on responsible AI, there is an underlying agreement that it constitutes a set of principles that ensure ethical, transparent, and accountable use of AI technologies consistent with user expectations, organisational values, and societal laws and norms (Accenture, Citation2021). In this regard, the notion of responsible AI captures a diverse set of requirements that need to be met throughout a system’s entire life cycle (European Commission, Citation2019). As a testament to its importance, responsible AI has even gained traction at the policy-making level, with several countries now defining the fundamental principles that underlie responsible AI and that organisations, private and public, need to follow (Jobin et al., Citation2019). For instance, the AI readiness index, which measures the degree to which countries are implementing AI technologies, now includes a new sub-index quantifying the degree to which responsible AI principles are adopted (IDRC, Citation2020). At the same time, there is also a movement towards responsible AI certification from independent agencies (RAII, Citation2021), as well as best-practice advocacy from large tech companies such as Google (Google, Citation2022). At the European level, the European Commission recently tasked the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group, with developing an integrative framework for responsible and trustworthy AI (European Commission, Citation2019). Through an open consultation process, the expert group developed a guideline that describes the key components that responsible AI includes. Grounded in these principles, the guidelines then provide seven essential requirements that AI systems should meet for responsible and trustworthy AI to be realised, with an eighth requirement of adherence to laws and regulations (European Commission, Citation2019).

While there are several studies concerning the responsible governance of AI applications at the societal and regulatory level (Buiten, Citation2019; Erdélyi & Goldsmith, Citation2018), empirical studies at the organisational, business unit, and individual level remain scarce. Despite this lack of empirical research within these contexts, the existing body of work has been able to identify and describe the key dimensions that comprise responsible AI, with work stemming from the perspectives of various stakeholders, including private companies (Benjamins et al., Citation2019; Google, Citation2021), academic research (Clarke, Citation2019; Kumar et al., Citation2019; Thiebes et al., Citation2021), consultancy firms (Accenture, Citation2021; KPMG, Citation2020; PwC, Citation2021), institutions (European Commission, Citation2019; Library of Congress, Citation2019; Singapore_Government, Citation2021), non-profit organisations (IEEE, Citation2021), and even intergovernmental forums and organisations (OECD, Citation2021; Twomey & Martin, Citation2020). These dimensions of responsible AI are complementary in nature and are argued to be crucial in minimising any negative or unintended consequences of AI use. Table 1 presents a short description of each dimension with some indicative references.

Table 1. Descriptions of responsible AI dimensions with indicative references

3. Looking forward: a research agenda

Ensuring that harmful or unintended consequences are minimised or do not occur during the lifespan of AI projects requires a comprehensive understanding of the role of responsible principles during the design, implementation, and maintenance of AI applications. The debate about which aspects of AI systems should be emphasised when developing and using them often follows the latest headlines where adverse effects of AI are noted. Some prominent cases include the mishap of Facebook mis-tagging individuals as primates (Wiggers, Citation2021), Google’s biased hiring AI application which disadvantaged women and Asian candidates (Weston, Citation2021), and Amazon’s hiring AI application that also ingrained biases against women (Jeffrey, Citation2018). Such cases set a precedent for paying careful attention to how data is tagged and how tagging can be done to avoid bias, as well as to how the training of AI algorithms might influence outcomes. As a result, fairness has recently emerged as a key concern for managers and data scientists (Osoba & Welser, Citation2017).

Nevertheless, if we are to take a holistic view of AI and the potential unintended consequences it can introduce, it is important to examine how each principle for responsible use can lead to negative effects if left unaccounted for. This agenda highlights some key issues that emerge due to a lack of attention to these principles. Examining how negative effects emerge and how they unfold during the design and deployment of AI applications can aid research and practice in understanding how value from such investments emanates. Furthermore, it is crucial that IS research effectively translates these high-level principles into concrete practices that shape AI governance. Table 2 presents some example issues that emerge due to non-consideration of key principles and how research can explore these issues. We do not aim to create an exhaustive list of topics that researchers could focus on through this research agenda. Instead, we emphasise how adopting a dark side perspective can open up new avenues for thinking about phenomena and potential solutions.

Table 2. Examples of issues and emerging research questions

3.1. Fairness

Fairness and bias have featured prominently in contemporary literature on AI. In this special issue alone, for example, Kordzadeh and Ghasemaghaei (Citation2021) and Marjanovic et al. (Citation2021) cite examples of algorithmic bias and injustice, their implications, and the challenges and steps needed in the design of AI to increase fairness and reduce bias. This research to date is undoubtedly welcome.

However, our conceptualisation of the dark side and responsible AI raises new questions to be addressed and perhaps calls for an adjustment in existing research approaches. For example, dark side research tends to be topic-led, so bias is usually studied only where bias is the dominant focus of the study. We suggest that bias become an almost assumed part of every study, even if it only forms a relatively small part of that study. The reason for this is that so many papers claim to demonstrate a new value-adding and sometimes transformative application of AI. We argue that before any claim about a new AI technology or new application of technology can be persuasive, the study must discuss and ideally test for any limitations in terms of fairness.
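As a minimal illustration of what such a routine fairness test might look like, the sketch below computes the rate of favourable AI decisions per protected group and the gap between groups (a simple demographic parity check). The data, group labels, and the single-metric choice are hypothetical; a real audit would combine several fairness metrics with statistical testing.

```python
# Minimal sketch of a group-fairness check (hypothetical data and metric choice).
# It compares the rate of favourable AI decisions across protected groups,
# a simple demographic parity check that could accompany any claimed application.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the share of favourable decisions for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = favourable decision) and group membership
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
parity_gap = max(rates.values()) - min(rates.values())
print(rates)                                   # {'a': 0.75, 'b': 0.25}
print(f"demographic parity gap = {parity_gap:.2f}")
```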

Second, many studies of fairness and bias tend to focus on one particular decision, e.g., one phase of HR recruitment, one round of budget allocation, or one set of medical examinations. This illustrates a point discussed below, namely the assumption that dark side phenomena are timeless. We argue that fairness and bias be evaluated across determinable periods and that the temporal dimension of the fairness or bias be considered. For example, over a 30-year career, if a member of a minority group suffers a small degree of AI-induced bias at a late stage in their career, losing out on a bonus or promotion in the last one or two years, the overall impact on their cumulative earnings may be relatively small. However, suppose that bias relates to a promotion or job retention in the first year or two. In that case, it may have a detrimental impact on future career development if, for example, they are not afforded training or finances at this formative stage of their career. Years three to thirty of that career may never materialise.
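The compounding nature of this temporal argument can be made concrete with a small back-of-the-envelope simulation. The sketch below uses entirely hypothetical figures (starting salary, raise rate, and the assumption that a biased decision simply withholds that year’s raise) to compare the cumulative earnings lost when the same setback lands in year 2 versus year 29 of a 30-year career.

```python
# Hypothetical illustration: how the timing of a single AI-induced setback
# changes its cumulative effect over a 30-year career. All figures are invented.

def career_earnings(years=30, start_salary=40_000, raise_rate=0.03, biased_years=frozenset()):
    """Total earnings over a career; a 'biased' year withholds that year's raise."""
    salary, total = float(start_salary), 0.0
    for year in range(1, years + 1):
        total += salary
        if year not in biased_years:   # raise withheld in a biased year
            salary *= 1 + raise_rate
    return total

baseline = career_earnings()
late_setback = career_earnings(biased_years={29})   # bias late in the career
early_setback = career_earnings(biased_years={2})   # bias early in the career

print(f"loss from a setback in year 29: {baseline - late_setback:>10,.0f}")
print(f"loss from a setback in year 2:  {baseline - early_setback:>10,.0f}")
```

Under these assumptions the early setback costs far more (here, on the order of twenty times as much) because every subsequent year’s salary is lower; the point is not the specific numbers but that fairness assessments which ignore timing will systematically understate early-career harm.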

3.2. Transparency

Transparency is a concept that regularly appears in contemporary conversations around AI, particularly with respect to enabling more clarity about AI outcomes and the procedures around responsibility and accountability (Barredo Arrieta et al., Citation2020; Dignum, Citation2019). In this special issue, the study of Giermindl et al. (Citation2021) highlights that one of the risks around AI is that it can impair transparency and accountability as decisions become more opaque and automated, and thus distanced from a human agent. Such phenomena have placed a strong requirement on explainable AI (Barredo Arrieta et al., Citation2020), whereby the outcomes and the data used to derive a decision from AI must be documentable in a format that can be interpreted by different stakeholders (Gunning et al., Citation2019).

Nevertheless, explainable AI comes with a whole set of challenges, including the difficulty of conveying, in a structured way, how complex algorithmic networks derive their outcomes, and the challenge posed by algorithms that continuously change as they are retrained on updated data (Barredo Arrieta et al., Citation2020). At the same time, explainable AI at the user level is subject to an extensive list of contingencies that need to be considered, such as the type of AI outcome at hand, the context, the timeliness, and the criticality of the AI decision. Furthermore, the direct communication of a machine’s output to a human recipient is subject to human interpretation. Therefore, an exciting area for future research is to understand how humans perceive different forms and formats of AI explanations and to what extent these fulfil their requirements (Thomson & Schoenherr, Citation2020).

Apart from explainable AI, transparency in AI also incorporates aspects such as traceability and communication. This poses a significant research challenge in defining AI governance practices that assign accountability anchors at different parts of AI projects so that it can be identified why AI decisions are erroneous. Enabling such documentation throughout the process also supports the requirement of auditability. Thus, an interesting area for future study is understanding how traceability requirements shape the formation of AI governance practices within different contexts.

3.3. Accountability

Accountability has been extensively discussed in the literature on AI, particularly in cases where outcomes are highly critical, e.g., healthcare (Lebovitz et al., Citation2021), and in debates on the unclear accountability of autonomous vehicles, which refer to the justification of intentions, motives, and rationalities. Especially in light of the reported dark side consequences of AI, accountability principles need to inform the design and deployment of AI systems. In this special issue, the study of Rana et al. (Citation2021) encourages managers to avoid inappropriate outcomes of AI-integrated business analytics (AI-BA) systems. They call for beneficial, explicable, and transparent systems. Highlighting procedural measures for holistic design and deployment and for the accountability of the underlying AI systems used in organisations, they discuss how an organisation can assign accountability anchors to AI-integrated business analytics systems. Giermindl et al. (Citation2021) further show how people analytics can impair transparency and accountability.

While these articles demonstrate the importance of, and some options for, improving the accountability of AI systems, there is a need for IS research to delve further into how accountability, as a key principle driving AI technologies, influences the development, evaluation, and use of applications. For instance, it is important to understand how key stakeholders involved throughout the development of AI systems are responsible and accountable for adverse outcomes. Understanding this distinction and identifying how accountability changes work practices and the distribution of responsibilities within business and IT teams are essential for examining its impacts on AI deployments. It also raises the additional requirement of setting best-practice approaches to ensure that AI systems and their development process are auditable.

Finally, the question of accountability is also inextricably tied to the types of testing performed to ensure that AI applications do not cause harm. When developing and testing AI applications, one of the key questions is how to demonstrate, at different development phases, that AI applications fulfil a required threshold of safety, and how that threshold should be assessed.

3.4. Robustness and safety

Extending the previous point on accountability, the importance of developing robust and safe AI applications has been heavily discussed over the last years (Gehr et al., Citation2018; Russell et al., Citation2015; Seshia et al., Citation2016). Such concerns revolve around both intentional and unintentional issues that can emerge during the design and use of systems. Nevertheless, now that AI applications are becoming ubiquitous, how these key concerns translate into functional requirements, from domain experts down to the algorithms, requires deeper understanding.

One key issue when developing AI applications concerns the decisional dilemmas such applications need to address and how they align with the ethical principles we adhere to as a society (Daly et al., Citation2021). There is, of course, significant variation across cultures as to what is ethically acceptable and what is not, so the design and implementation of AI technologies must take into account safety and security while following appropriate ethical directions. Studies could therefore examine how ethical dilemmas translate into different design approaches for AI technologies and identify optimal ways of designing based on the various contingencies of end-users and contexts (Leidner & Tona, Citation2021). In other words, values are likely to change in importance according to the context (Kazim & Koshiyama, Citation2021). Thus, the practical instantiation of ethical values and principles will involve trade-offs that require further understanding.

Apart from unintentional concerns regarding the robustness and safety of AI, there are also important considerations regarding how to develop and implement security policies around AI. Such security policies span several levels and stages of AI deployment, and their implementation is likely to be subject to different aspects of the socio-technical context. This means that it is important to examine the environment in which such policies are designed and deployed and identify how humans interpret and act on security policies. Finally, such security policies must incorporate preventive measures for cases of data poisoning and ensure that AI models are not leaked but can be used securely to demonstrate the reproducibility of outcomes and decisions.

3.5. Data governance

The principle of data governance concerns the entire information value chain in AI projects, from the creation and collection of data to its end use, whether by human agents acting on AI insight or in automated AI-based decisions. In this special issue, the study of Rinta-Kahila et al. (Citation2021) underscores the importance of establishing appropriate governance practices in AI projects to avoid unintended consequences. Specifically, the authors build on a case study of the Australian government’s “Robodebt” program, which was designed to calculate and collect welfare overpayment debts from citizens automatically but ended up causing severe distress to citizens and welfare agency staff. In another study in this special issue, Aaen et al. (Citation2021) follow a longitudinal case study in the Danish healthcare sector, tracing the emergence, expansion, and eventual collapse of a large-scale data analytics project. The authors highlight the need for governance practices that keep the dynamics of the functional, stakeholder, and data subsystems in balance and support project success.

Apart from the results presented in the studies mentioned above, data governance in the age of AI also imposes additional requirements around privacy and data protection, protecting against unauthorised access to data, and ensuring that data is of high quality and integrity. These aspects of data governance create ample opportunities for further research. For instance, we still have rudimentary knowledge of how data should be treated to ensure that socially constructed biases, inaccuracies, errors, and mistakes are not propagated. At the same time, it is important for organisations developing AI not to sacrifice the quality of outcomes by over-cleaning data. A related problem concerns the biased tagging of data used to train AI. While early studies have highlighted the risks and adverse effects of such tagging (Lloyd, Citation2018), it is interesting to examine how practices around data tagging are changing and what approaches organisations take to minimise biased AI outcomes.

In the study of Cheng et al. (Citation2021) featured in this special issue, the authors approach the issue of perceived risks in AI-based applications. They show that perceptions of risk can dampen the perceived benefits of AI applications. However, risk perceptions are reduced when organisations establish safeguards against risks and privacy concerns and communicate these to users. Studies such as this one raise the question of how organisations should develop appropriate data governance practices and how they should advertise and share these practices with users.

3.6. Laws and regulations

More responsible AI can be fostered through laws and regulations that highlight and address dark side issues. In the comprehensive literature analysis of Kordzadeh and Ghasemaghaei (Citation2021), most papers on algorithmic bias discussed the ethical, legal, and design implications conceptually, while few provided an empirical examination of their claims. In this special issue, we find such empirical evidence on the dark side aspects of AI and big data analytics in organisations, governments, and society.

Marjanovic et al. (Citation2021) sketch out the notion of algorithmic justice. They argue that automated decision-making in society, and automated welfare decisions in particular, is ingrained with systemic algorithmic injustices. They emphasise the need for an intricate balance between leveraging the potential of AI and ensuring that individuals, particularly vulnerable groups, are not harmed by its effects. Legislators and policy-makers need to ensure that legal frameworks do not encode injustice for individuals and thereby create systematic abuse. The article shows how automated decisions can be ingrained with systemic algorithmic injustice that produces dire consequences for welfare recipients. Building on their illuminating case, the authors also call for justice in automated decision-making, particularly in relation to societal welfare.

In light of market disruptions and the predicted consequences of job automation, calls have been made for laws and regulations of AI at national and global levels. In this special issue, Aaen et al. (Citation2021) present a data ecosystem model that calls for a more balanced attitude towards data as a strategic asset. Via an intriguing narrative, we learn how a once legally compliant project, approved for collecting data on four diseases, was re-used and leveraged for broader purposes. The ever-increasing data, often depicted as a non-consumable and non-perishable resource for strategic use, together with extended subsystem functionality and heterogeneous stakeholder goals, instead made the project implode. Subsequently, the whole system was cancelled, including the initially legally compliant functions.

While laws and regulations are undoubtedly important in protecting different stakeholders from negative or unintended consequences of AI, they can also impose hurdles when restrictions are defined without appropriate ways of dealing with the resulting requirements. Hence, policy-makers need to understand how laws and regulations are perceived by the bodies required to comply with them in order to improve adherence. Where there is too much vagueness, private and public bodies might breach the corresponding laws or regulations, or they may even refrain from adopting or using AI for fear of legal consequences.

3.7. Human oversight

In terms of human oversight, very pertinent questions arise around how we achieve a balance between AI autonomy and ensuring effective and responsible outcomes. Put differently, we need to ensure that humans retain autonomy while using AI to the most effective extent possible. This topic has given rise to the notion of conjoined agency and to the question of how to balance the strengths of humans and machines in symbiotic relationships (Fügener et al., Citation2021; Murray et al., Citation2020).

In addition, our conceptualisation of the dark side questions the current mode of human oversight, in which AI conducts day-to-day decisions and actions while humans maintain oversight by checking for strange, deviant, or abnormal occurrences. Our first suggestion is that real damage occurs when a problem is not an exception but is embedded in many or all AI decisions and actions. By only maintaining oversight of the abnormal, one may ignore AI-induced problems and errors that affect many or all. In addition, given that we have shown that deviant and abnormal decisions and actions may not necessarily be a bad thing, it is vital to ensure that processes for human oversight are not automatically designed to eliminate or reduce deviant AI decisions, which is the default position in most studies and applications of AI.

Also, as discussed below, dark side research usually assumes that the dark side being sought is known. If we extend this to processes for human oversight, it follows that oversight needs to include both known issues, such as bias or specified technical errors, and scanning for unknown and unprecedented AI decisions and actions that may have negative impacts. In this special issue, Rinta-Kahila et al. (Citation2021), in their discussion of unintended algorithmic consequences, provide an illustrative case of this and highlight the need for more research in this area.

3.8. Societal well-being

The requirement that AI be developed to promote societal well-being has been highlighted particularly by government and regulatory agencies. In this special issue, the paper of Rinta-Kahila et al. (Citation2021) builds on a case study that documents the destructive effects of algorithmic decision-making in public services, which caused distress to citizens and employees as well as financial damage to the welfare agency.

Building on such dark side examples and considering the principle that AI should be developed with societal well-being in mind allows us to examine where adverse effects stem from, what mechanisms sustain or constrain them, and how they can be minimised. In most prominent examples of AI causing severe societal harm, there was typically no intent to cause harm. Therefore, it is important to understand what socio-technical structures and dynamics facilitate such negative outcomes of AI and where more weight needs to be placed to minimise such occurrences. Furthermore, with AI replacing many human and manual tasks, it is necessary to understand what new opportunities emerge for the affected workforce and how workers should be re-trained to fit the AI era.

In addition, there is a growing discussion regarding the monitoring and control of online digital platforms, where AI-based content curation can promote polarisation (Parra et al., Citation2021). An important area IS research can contribute to is the role of regulation of digital platforms, how it should be implemented, and what consequences it has for the online communities using such platforms. Recent cases showing that AI algorithms promote higher engagement and polarisation in online communities raise the question of how AI-based recommendations can be used without promoting fake news or presenting polarising content.

Finally, with sustainability being a key concern for many government agencies and private organisations, a central question is how AI can be leveraged to support circular economy strategies (Haftor et al., Citation2021). While there have been many proposals on potential applications, there is a need to explore such approaches in a more systematic way that also considers the footprint of AI technologies in terms of energy usage and resource utilisation.

4. Rethinking the concept of “the dark side”

As part of this special issue, we reviewed key IS literature that refers to the dark side and found an increasing number of papers dealing with dark side issues, but few that specifically address the concept of the dark side itself. This contrasts with studies in other fields (e.g., Linstead et al., Citation2014). To further thinking on this important topic in the IS domain, we adopted a critical approach to this exercise (c.f. Alvesson & Sandberg, Citation2011; Myers & Klein, Citation2011), questioning the core assumptions embedded in studies examining the dark side of technology. We now present some of these assumptions that underpin the use of the dark side concept in IS.

4.1. The dark side is characterised by the abnormal, the deviant, the outlier

Existing dark side research in IS tends to examine how AI and analytics can lead to exceptional, strange cases. For example, using AI to detect illness may miss rare manifestations of a disease, or a deviant one-in-a-thousand user may exploit the AI for their own unusual, perverse gain. This emphasis on deviancy is logical and understandable. When one thinks of dark, one often thinks of that small, dark corner hidden out of sight within which something sinister lurks. This concept underpins conceptualisations of “dark” in other fields. For example, when the field of organisational behaviour (OB) first began to look systematically at the dark side in its context (e.g., bullying, theft, etc.), the first qualifying criterion for something to be considered dark was that it was abnormal or deviant (Linstead et al., Citation2014). However, we argue that a dark, dystopian side of technology lies not only in its ability to create deviancy and exceptions but also in its ability to force Kafka-esque compliance and conformity (e.g., McCabe, Citation2014). We argue that the self-learning aspects of AI, in particular, can create a self-fulfilling myopia of what is acceptable and what is not, to the extent that diverse, unusual, and potentially transformative ideas and solutions are omitted.

4.2. The dark side is known

The dark side is a term usually used to refer to specific topics in the IS field, e.g., technology overuse and addiction (Tarafdar et al., Citation2020; Turel & Ferguson, Citation2020), technostress (Tarafdar et al., Citation2015), security and privacy concerns (D’Arcy et al., Citation2014; Goel et al., Citation2017), and fake news (Talwar et al., Citation2019). This is also true in the context of analytics and AI, where the dark side term refers to bias, inequality, etc. However, we argue that almost all topics in AI and analytics can, in principle, be the source of dark side issues. Many dark aspects of AI and analytics may be ignored or understudied simply because they do not neatly fit the labels that contemporary researchers associate with the dark side concept.

4.3. The dark side is timeless

An analysis of the literature on the dark side shows there is often an assumption that time is fixed, or at least that the passing of time does not alter how “dark” an issue is. Once something is labelled as dark, e.g., stress, dysfunctional behaviour, or inequality, it is unlikely to be reclassified. However, logic would dictate that something dark in one instance may suddenly be reclassified. For example, it is no coincidence that AI is scrutinised regarding bias and inequality (Akter et al., Citation2021; Luengo-Oroz et al., Citation2021) given the relatively recent rise of the #MeToo movement and general awareness of gender and race discrimination; yet the discriminatory aspects of technology have been there for many years. Time also plays an immediate, key role at the very point that AI or analytics are used, and it is this timing that may determine whether said analytics or AI is, in fact, dark (Conboy et al., Citation2020; Trocin et al., Citation2021). For example, we often discuss whether the use of AI for medical diagnosis is or is not ethical and is or is not potentially dark. However, the number of seconds or minutes the surgeon spends assessing the AI recommendation or spot-checking the various AI decisions must in some way affect the ethicality or darkness of its use. Similarly, whether the surgeon is at the start or end of a 24-hour work shift when screening or calibrating the algorithm must also have some bearing. However, these temporal aspects are rarely studied. Table 3 provides a list of existing and updated dark side assumptions, along with examples of emerging research questions.

Table 3. Existing and updated dark side assumptions with examples of emerging research questions

5. Closing thoughts

In this special issue, we have taken a contrarian view of the AI phenomenon to problematise the field and think through a different perspective on the implications and consequences of introducing AI into work and everyday life. The papers featured in this special issue adopt a dark-side lens to inform a more nuanced understanding of how AI is used in practice, its negative or unintended implications, and why and how these emerge. Doing so allows us to focus on the cases where AI has gone wrong and to pre-emptively identify cases where AI does not conform with the set requirements. In providing a way forward, we have used the emerging notion of responsible AI as a set of fundamental principles that are important when developing AI applications. We build on the current discourse around responsible AI and develop a set of research questions relevant to the IS domain. While our agenda is by no means exhaustive, it serves as a starting point in thinking about responsible AI more holistically. In doing so, we also highlight the importance of utilising a dark-side lens for theorising and problematising existing recommendations. In addition, the dark-side lens serves as a means of rethinking some hidden assumptions around responsible AI. We argue that while these key principles are important in developing trustworthy and safe AI applications, they are also quite high-level and abstract and do not provide much guidance for practitioners regarding the realities of deploying AI in practice. Our ambition through this special issue and editorial is to encourage future researchers to adopt a more critical approach in IS research on the current debate about responsible AI, which will hopefully result in studies with deep theorising and strong practical implications.

Acknowledgments

This work was supported by the Science Foundation Ireland grant 13/RC/2094_P2, Slovenian Research Agency (research core funding No. P5-0410), and the Wallenberg Foundations, WASP-HS BioMe MMW2019.0112 (2020-2024).

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Aaen, J., Nielsen, J. A., & Carugati, A. (2021). The dark side of data ecosystems: A longitudinal study of the DAMD project. European Journal of Information Systems, 1–25. https://doi.org/10.1080/0960085X.2021.1947753
  • Accenture. (2021). Artificial intelligence services. https://www.accenture.com/us-en/services/ai-artificial-intelligence-index
  • Ågerfalk, P., Conboy, K., Crowston, K., Eriksson Lundström, J., Jarvenpaa, S., Mikalef, P., & Ram, S. (Forthcoming). Artificial intelligence in information systems: State of the art and research roadmap. Communications of the Association for Information Systems.
  • Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387. https://doi.org/10.1016/j.ijinfomgt.2021.102387
  • Alvesson, M., & Sandberg, J. (2011). Generating research questions through problematization. Academy of Management Review, 36(2), 247–271. https://doi.org/10.5465/amr.2009.0188
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Benjamins, R., Barbado, A., & Sierra, D. (2019). Responsible AI by design in practice. arXiv preprint arXiv:1909.12838. https://arxiv.org/abs/1909.12838
  • Brighton, H., & Gigerenzer, G. (2015). The bias bias. Journal of Business Research, 68(8), 1772–1784. https://doi.org/10.1016/j.jbusres.2015.01.061
  • Buiten, M. C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41–59. https://doi.org/10.1017/err.2019.8
  • Cheng, X., Su, L., Luo, X., Benitez, J., & Cai, S. (2021). The good, the bad, and the ugly: Impact of analytics and artificial intelligence-enabled personal information collection on privacy and participation in ridesharing. European Journal of Information Systems, 1–25. https://doi.org/10.1080/0960085X.2020.1869508
  • Clarke, R. (2019). Principles and business processes for responsible AI. Computer Law & Security Review, 35(4), 410–422. https://doi.org/10.1016/j.clsr.2019.04.007
  • European Commission. (2019). Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  • European Commission. (2021). Proposal for a regulation of the european parliament and of the council: Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN
  • Conboy, K., Dennehy, D., & O’Connor, M. (2020). ‘Big time’: An examination of temporal complexity and business value in analytics. Information & Management, 57(1), 103077. https://doi.org/10.1016/j.im.2018.05.010
  • Conboy, K. (2019). Being promethean. European Journal of Information Systems, 28(2), 119–125. https://doi.org/10.1080/0960085X.2019.1586189
  • Library of Congress. (2019). China: AI governance principles released. https://www.loc.gov/item/global-legal-monitor/2019-09-09/china-ai-governance-principles-released/
  • D’Arcy, J., Herath, T., & Shoss, M. K. (2014). Understanding employee responses to stressful information security requirements: A coping perspective. Journal of Management Information Systems, 31(2), 285–318. https://doi.org/10.2753/MIS0742-1222310210
  • Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., & Wang, W. W. (2021). AI, governance and ethics. In A. Reichman, A. Simoncini, G. De Gregorio, G. Sartor, H.-W. Micklitz, & O. Pollicino (Eds.), Constitutional challenges in the algorithmic society, 182–201. Cambridge University Press.
  • Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116. https://hbr.org/webinar/2018/02/artificial-intelligence-for-the-real-world
  • Desouza, K. C., Dawson, G. S., & Chenok, D. (2020). Designing, developing, and deploying artificial intelligence systems: Lessons from and for the public sector. Business Horizons, 63(2), 205–213. https://doi.org/10.1016/j.bushor.2019.11.004
  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer Nature.
  • Erdélyi, O. J., & Goldsmith, J. (2018). Regulating artificial intelligence: Proposal for a global solution [Paper presentation]. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA, USA.
  • Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society.
  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
  • Fuchs, D. J. (2018). The dangers of human-like bias in machine-learning algorithms. Missouri S&T’s Peer to Peer, 2(1). https://scholarsmine.mst.edu/peer2peer/vol2/iss1/1
  • Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2021). Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI. MIS Quarterly, 45(3), 1527–1556. https://doi.org/10.25300/misq/2021/16553
  • Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., & Vechev, M. (2018, May 20–24). AI2: Safety and robustness certification of neural networks with abstract interpretation [Paper presentation]. 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, USA: IEEE.
  • Giermindl, L. M., Strich, F., Christ, O., Leicht-Deobald, U., & Redzepi, A. (2021). The dark sides of people analytics: Reviewing the perils for organisations and employees. European Journal of Information Systems, 1–26. https://doi.org/10.1080/0960085X.2021.1927213
  • Goel, S., Williams, K., & Dincelli, E. (2017). Got phished? Internet security and human vulnerability. Journal of the Association for Information Systems, 18(1), 22–44. https://doi.org/10.17705/1jais.00447
  • Google. (2021). Responsible AI practices. https://ai.google/responsibilities/responsible-ai-practices/
  • Google. (2022). Responsible AI practices. https://ai.google/responsibilities/responsible-ai-practices/
  • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37). https://doi.org/10.1126/scirobotics.aay7120
  • Haftor, D. M., Costa Climent, R., & Eriksson Lundström, J. (2021). How machine learning activates data network effects in business models: Theory advancement through an industrial case of promoting ecological sustainability. Journal of Business Research, 131, 196–205. https://doi.org/10.1016/j.jbusres.2021.04.015
  • Haibe-Kains, B., Adam, G. A., Hosny, A., Khodakarami, F., Shraddha, T., Kusko, R., Sansone, S.-A., Tong, W., Wolfinger, R. D., Mason, C. E., Jones, W., Dopazo, J., Furlanello, C., Waldron, L., Wang, B., McIntosh, C., Goldenberg, A., Kundaje, A., Greene, C. S., … Aerts, H. J. W. L. (2020). Transparency and reproducibility in artificial intelligence. Nature, 586(7829), E14–E16. https://doi.org/10.1038/s41586-020-2766-y
  • IBM. (2021). AI ethics. https://www.ibm.com/artificial-intelligence/ethics
  • IDRC. (2020). 2020 government AI readiness index: Governments must prioritise responsible AI use. https://www.idrc.ca/en/news/2020-government-ai-readiness-index-governments-must-prioritise-responsible-ai-use
  • IEEE. (2021). IEEE - Advancing technology for humanity. https://www.ieee.org/
  • Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance: Organizing data for trustworthy artificial intelligence. Government Information Quarterly, 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493
  • Jeffrey, D. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  • Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
  • Kampf, C. E., & Fashakin, O. K. (2021). The social responsibility of AI: A framework for considering ethics and DEI. In D. Pompper (Ed.), Public relations for social responsibility, 121–133. Emerald Publishing Limited.
  • Kazim, E., & Koshiyama, A. (2021). The interrelation between data and AI ethics in the context of impact assessments. AI and Ethics, 1(3), 219–225. https://doi.org/10.1007/s43681-020-00029-w
  • Kim, B., Park, J., & Suh, J. (2020). Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decision Support Systems, 134, 113302. https://doi.org/10.1016/j.dss.2020.113302
  • Kordzadeh, N., & Ghasemaghaei, M. (2021). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 1–22. https://doi.org/10.1080/0960085X.2021.1927212
  • KPMG. (2020). Artificial intelligence. https://home.kpmg/xx/en/home/insights/2018/07/our-artificial-intelligence-capabilities.html
  • Kumar, V., Rajan, B., Venkatesan, R., & Lecinski, J. (2019). Understanding the role of artificial intelligence in personalized engagement marketing. California Management Review, 61(4), 135–155. https://doi.org/10.1177/0008125619859317
  • Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI ground truth really true? The dangers of training and evaluating ai tools based on experts’ know-what. MIS Quarterly, 45(3), 1501–1526. https://doi.org/10.25300/misq/2021/16564
  • Leidner, D. E., & Tona, O. (2021). The CARE theory of dignity amid personal data digitalization. MIS Quarterly, 45(1), 343–370. https://doi.org/10.25300/misq/2021/15941
  • Linstead, S., Maréchal, G., & Griffin, R. W. (2014). Theorizing and researching the dark side of organization. Organization Studies, 35(2), 165–188. https://doi.org/10.1177/0170840613515402
  • Lloyd, K. (2018). Bias amplification in artificial intelligence systems. arXiv:1809.07842. https://ui.adsabs.harvard.edu/abs/2018arXiv180907842L
  • Luengo-Oroz, M., Bullock, J., Pham, K. H., Lam, C. S. N., & Luccioni, A. (2021). From artificial intelligence bias to inequality in the time of COVID-19. IEEE Technology and Society Magazine, 40(1), 71–79. https://doi.org/10.1109/MTS.2021.3056282
  • Marjanovic, O., Cecez-Kecmanovic, D., & Vidgen, R. (2021). Theorising algorithmic justice. European Journal of Information Systems, 1–19. https://doi.org/10.1080/0960085X.2021.1934130
  • McCabe, D. (2014). Light in the darkness? Managers in the back office of a Kafkaesque bank. Organization Studies, 35(2), 255–278. https://doi.org/10.1177/0170840613511928
  • Mikalef, P., & Gupta, M. (2021). Artificial intelligence capability: Conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance. Information & Management, 58(3), 103434. https://doi.org/10.1016/j.im.2021.103434
  • Murray, A., Rhymer, J., & Sirmon, D. G. (2020). Humans and technology: Forms of conjoined agency in organizations. Academy of Management Review, 46(3), 552–571. https://doi.org/10.5465/amr.2019.0186
  • Myers, M. D., & Klein, H. K. (2011). A set of principles for conducting critical research in information systems. MIS Quarterly, 35(1), 17–36. https://doi.org/10.2307/23043487
  • Neff, G., & Nagy, P. (2016). Automation, algorithms, and politics: Talking to Bots: Symbiotic agency and the case of Tay. International Journal of Communication, 10(17), 4915–4931. https://ijoc.org/index.php/ijoc/article/view/6277/1804
  • Nishant, R., Kennedy, M., & Corbett, J. (2020). Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104
  • OECD. (2021). Artificial intelligence. https://www.oecd.org/going-digital/ai/principles/
  • Osoba, O. A., & Welser, W. I. V. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. RAND Corporation.
  • Parra, C. M., Gupta, M., & Mikalef, P. (2021). Information and communication technologies (ICT)-enabled severe moral communities and how the (Covid19) pandemic might bring new ones. International Journal of Information Management, 57, 102271. https://doi.org/10.1016/j.ijinfomgt.2020.102271
  • PwC. (2021). Artificial intelligence everywhere. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence.html
  • RAII. (2021). Responsible artificial intelligence. https://www.responsible.ai/
  • Rana, N. P., Chatterjee, S., Dwivedi, Y. K., & Akter, S. (2021). Understanding dark side of artificial intelligence (AI) integrated business analytics: Assessing firm’s operational inefficiency and competitiveness. European Journal of Information Systems, 1–24. https://doi.org/10.1080/0960085X.2021.1955628
  • Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Gregor, S. (2021). Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. European Journal of Information Systems, 1–26. https://doi.org/10.1080/0960085X.2021.1960905
  • Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114. https://doi.org/10.1609/aimag.v36i4.2577
  • Salo, J., Mäntymäki, M., & Islam, A. K. M. N. (2018). The dark side of social media – And introduction to the special issue: The dark side of social media. Internet Research, 28(5), 1166–1168. https://doi.org/10.1108/IntR-10-2018-442
  • Seshia, S. A., Sadigh, D., & Sastry, S. S. (2016). Towards verified artificial intelligence. arXiv preprint arXiv:1606.08514. https://arxiv.org/abs/1606.08514
  • Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), Article 26. https://doi.org/10.1145/3419764
  • Singapore_Government. (2021). AI Singapore. https://www.nrf.gov.sg/programmes
  • Small, B., & Jollands, N. (2006). Technology and ecological economics: Promethean technology, pandorian potential. Ecological Economics, 56(3), 343–358. https://doi.org/10.1016/j.ecolecon.2005.09.013
  • Srivastava, S., Bisht, A., & Narayan, N. (2017, January 12–13). Safety and security in smart cities using artificial intelligence — A review [Paper presentation]. 2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence. Noida, India: IEEE.
  • Talwar, S., Dhir, A., Kaur, P., Zafar, N., & Alrasheedy, M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. Journal of Retailing and Consumer Services, 51, 72–82. https://doi.org/10.1016/j.jretconser.2019.05.026
  • Tarafdar, M., Gupta, A., & Turel, O. (2013). The dark side of information technology use. Information Systems Journal, 23(3), 269–275. https://doi.org/10.1111/isj.12015
  • Tarafdar, M., Maier, C., Laumer, S., & Weitzel, T. (2020). Explaining the link between technostress and technology addiction for social networking sites: A study of distraction as a coping behavior. Information Systems Journal, 30(1), 96–124. https://doi.org/10.1111/isj.12253
  • Tarafdar, M., Pullins, E. B., & Ragu-Nathan, T. S. (2015). Technostress: Negative effect on performance and possible mitigations. Information Systems Journal, 25(2), 103–132. https://doi.org/10.1111/isj.12042
  • Theodorou, A., & Dignum, V. (2020). Towards ethical and socio-legal governance in AI. Nature Machine Intelligence, 2(1), 10–12. https://doi.org/10.1038/s42256-019-0136-y
  • Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31(2), 447–464. https://doi.org/10.1007/s12525-020-00441-4
  • Thomson, R., & Schoenherr, J. R. (2020). Knowledge-to-information translation training (KITT): An adaptive approach to explainable artificial intelligence [Paper presentation]. Adaptive Instructional Systems. Cham: Springer.
  • Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2021). Responsible AI for digital health: A synthesis and a research Agenda. Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10146-4
  • Turel, O., & Ferguson, C. (2020). Excessive use of technology: Can tech providers be the culprits? Communications of the ACM, 64(1), 42–44. https://doi.org/10.1145/3392664
  • Twomey, P., & Martin, K. (2020). A step to implementing the G20 principles on artificial intelligence: Ensuring data aggregators and AI firms operate in the interests of data subjects. G20 Insights. G20 Summit. https://www.g20-insights.org/policy_briefs/a-step-to-implementing-the-g20-principles-on-artificial-intelligence-ensuring-data-aggregators-and-ai-firms-operate-in-the-interests-of-data-subjects/
  • Wang, Y., Xiong, M., & Olya, H. (2020). Toward an understanding of responsible artificial intelligence practices. In: Bui, T.X., (ed.) Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS 2020), Maui, Hawaii, USA, 4962–4971.
  • Weston, S. (2021). Google to settle hiring bias accusations for $3.8 million. IT Pro. https://www.itpro.co.uk/business-strategy/careers-training/358491/google-to-settle-pay-gap-and-hiring-bias-accusations-for
  • Wiggers, K. (2021). Bias persists in face detection systems from Amazon, Microsoft, and Google. The Machine. https://venturebeat.com/2021/09/03/bias-persists-in-face-detection-systems-from-amazon-microsoft-and-google/
