ABSTRACT
Systems based on Artificial Intelligence (AI) are increasingly normalized as part of work, leisure, and governance in contemporary societies. Although ethics in AI has received significant attention, it remains unclear where the burden of responsibility lies. Through twenty-one interviews with AI practitioners in Australia, this research seeks to understand how ethical attributions figure into the professional imagination. As institutionally embedded technical experts, AI practitioners act as connective tissue linking the range of actors that come in contact with, and have effects upon, AI products and services. Findings highlight that practitioners distribute ethical responsibility across a range of actors and factors, reserving a portion of responsibility for themselves, albeit a constrained one. Amid imbalances of decision-making power and technical expertise, practitioners position themselves as mediators between powerful bodies that set parameters for production; users who engage with products once they leave the proverbial workbench; and AI systems that evolve and develop beyond practitioner control. By distributing responsibility throughout complex sociotechnical networks, practitioners preclude simple attributions of accountability for the social effects of AI. This indicates that AI ethics are not the purview of any singular player but instead derive from collectivities that require critical guidance and oversight at all stages of conception, production, distribution, and use.
Acknowledgement
We would like to thank Siobhan Dodds and Hannah Gregory for their contributions to this work.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Will Orr is a postgraduate student at the Australian National University. He studies sociology and data analytics.
Jenny L. Davis is a Senior Lecturer in the School of Sociology at the Australian National University.
Correction statement
This article has been republished with minor changes. These changes do not impact the academic content of the article.
Notes
1 What is (or is not) ethical is the topic of an entire subfield of philosophy; those debates are outside of our concern here. Rather, we are interested in how AI practitioners understand the ethical landscape and their own role within it.
2 On this point see the extensive literature on technological affordances (e.g. Davis & Chouinard, Citation2016; Evans, Pearce, Vitak, & Treem, Citation2017; Gibson, Citation2014; Nagy & Neff, Citation2015; Norman, Citation1988).
3 Although our sample is disproportionately male and white, it surpasses the 16% of women who constitute Australia’s STEM-qualified population, and coincides with the ratio of women employed in STEM fields (27%) (Office of the Chief Scientist, Citation2016). We found no available race data for STEM in Australia, but the predominant whiteness of the field has been well documented in the United States (Funk & Parker, Citation2018).
4 Using existing theories as a starting point necessarily creates interpretive bounds, thus potentially limiting the full spectrum of readings that researchers might apply to a set of data. We believe this limitation of the abductive approach is outweighed by the rigorous grounding provided by established and well-tested theoretical frameworks.