ABSTRACT
Critics now articulate their worries about the technologies, social practices and mythologies that comprise Artificial Intelligence (AI) in many domains. In this paper, we investigate the intersection of two domains of criticism: identity and scientific knowledge. On one hand, critics of AI in public policy emphasise its potential to discriminate on the basis of identity. On the other hand, critics of AI in scientific realms worry about how it may reorient or disorient research practices and the progression of scientific inquiry. We link the two sets of concerns—around identity and around knowledge—through a series of case studies. In our case studies, about autism and homosexuality, AI figures as part of scientific attempts to find, and fix, forms of identity. Our case studies are instructive: they show that when AI is deployed in scientific research about identity and personality, it can naturalise and reinforce biases. The identity-based and epistemic concerns about AI are not distinct. When AI is seen as a source of truth and scientific knowledge, it may lend public legitimacy to harmful ideas about identity.
Acknowledgments
Our thanks go, first and foremost, to each other. We are additionally grateful to Nikki Stevens, Claire and Margaret Hopkins, Adam Hyland, and our anonymous reviewers and editors for their ongoing support.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 In fairness to these researchers, glossing over the complex history of autism as a concept is a practice autism researchers have long engaged in, frequently preferring (as most scientists prefer) a linear history of new, improved truths inexorably overtaking old falsehoods (Verhoeff 2013; Hollin 2014).
2 As of writing, the paper has received 305 citations in under two years, along with coverage in the New York Times, The Guardian, The Economist and the Financial Times.
3 The authors later claimed their true motivation in writing and publishing this work was to demonstrate the dangers here. Why this required them to explain, in great detail, how to build a ‘gay face’ detector, is unclear. We should be profoundly grateful that they did not, for example, decide to contribute to nuclear disarmament, since they would presumably have done so by designing, building, and detonating a hydrogen bomb before publishing the schematics online – just to ensure everyone really understood the dangers.
Additional information
Notes on contributors
Os Keyes
Os Keyes is a PhD candidate at the University of Washington, where they study the interplay of identity, infrastructure and (counter)power. They are a frequent media commentator and public scholar, and an inaugural winner of the Ada Lovelace Fellowship.
Zoë Hitzig
Zoë Hitzig is a PhD candidate in economics at Harvard University. Her graduate research has been supported by fellowships from the Edmond J. Safra Center for Ethics at Harvard, Microsoft Research and the Forethought Foundation.
Mwenza Blell
Mwenza Blell is currently a Rutherford Fellow affiliated to Health Data Research UK, a Newcastle University Academic Track Fellow, and a Grant Researcher at Tampere University. A biosocial anthropologist, her research draws from ethnography to examine intransigent and often invisible structures of injustice.