Perspectives

Fitting the description: historical and sociotechnical elements of facial recognition and anti-black surveillance

Pages 74-83 | Received 20 Jul 2020, Accepted 28 Sep 2020, Published online: 18 Oct 2020

ABSTRACT

It is increasingly evident that if researchers and policymakers want to meaningfully develop an understanding of responsible innovation, we must first ask whether some sociotechnical systems should be developed at all. Here I argue that systems like facial recognition, predictive policing, and biometrics are predicated on myriad human prejudicial biases and assumptions which must be named and interrogated prior to any innovation. Further, the notions of individual responsibility inherent in discussions of technological ethics and fairness overburden marginalized peoples with a demand to prove the reality of their marginalization. Instead, we should focus on equity and justice, valuing the experiential knowledge of marginalized peoples and optimally positioning them to enact deep, lasting change. My position aligns with those in Science, Technology, and Society (STS) which center diverse and situated knowledges, and is articulated alongside calls to consider wider sociocultural concerns such as justice and equality within science and engineering.

Introduction

At this point in the twenty-first century, widespread civil unrest and inequality are being caused, driven, and exacerbated by numerous factors such as climate change, a global pandemic, and persistent racialized oppression, all of which feed into one another and are amplified by various technological realities. Among the various struggles in the world today, one which has been gaining increasing prominence is the anti-Black racism embedded in the history of policing in the United States, and the ways that facial recognition surveillance technologies can exacerbate this. Through an examination of the origins of entities such as US policing, photographic technologies, the bodily perception of certain types of people, and the datafication of those perceptions, we will confront the implications of surveillance technologies and question what it means that these technologies are so thoroughly rooted in systems of oppression.

The history of policing in turn becomes a conversation about how we continue to understand and perceive the bodies of certain types of people as being ‘made’ for incarceration or punishment, and how sociotechnical systems make these bodies continually available for this process. If tools such as facial recognition, biometrics, and other algorithmic systems are inextricably bound up with a history of racialized inequality, marginalization, and even violence, then might this not mean that there simply is no way to responsibly innovate on such technoscientific systems? And, if so, what ought we to do about that? Tracing the history of these circumstances, and working with tools from STS, race and gender studies, disability studies, and even prison abolition movements, I argue that some technologies are inherently unjust. Further, I demonstrate that many contemporary technologies are part of a lineage of technologies embedded with the prejudicial assumptions born from the unjust societies in which they are built. Ultimately, it is only by naming and confronting the very foundations of these injustices that we will be able to recognize and abolish them as they are expressed through and within modern sociotechnical systems.

Enslavement, carcerality, and dehumanization

In many ways, innovations in the fields of algorithmic machine learning applications such as predictive policing are simply extensions of the model of policing in the United States, a model predicated upon the idea of retrieving lost or stolen property and punishing the perpetrators. It just so happens that, up until the latter half of the nineteenth century, the property and the perpetrators in question were the same people: Enslaved Africans who managed to escape their captivity were hunted down and recaptured, often with the added penalty of physical mutilation to make future escape more difficult. A great deal of research exists on the history of slave patrols and how those patrols were both advanced by and constituted of various American white supremacist elements, such as the Ku Klux Klan, from the 1800s through to the present day (Potter 2013; Kappeler 2014; Hansen 2019). Even in the twenty-first century, white supremacist organizations ingratiate themselves into various levels of US law enforcement and the military to gain access to training, weaponry, and the social and legal authority to enact their views on others. Even discounting avowed white supremacists, psychological researchers have demonstrated that Black children – and Black people in general – are almost always perceived as older and more imposing than white people of similar ages, heights, and builds, a fact which quite obviously has vast ramifications for Black people’s encounters with police (Goff et al. 2014).

In the case of Black children, this can also result in white respondents having a hyper-sexualized perception of girls, and an increased threat response toward boys. And this perception of Black people as more often violent, more often imposing, more often older than they actually are is something that pervades not just our popular culture and social media, but also the training procedures for how police are meant to respond in any situation in which a Black person might be a threat. And so we must ask: what happens when police officers or other agents of sociopolitical authority who must search for or even just converse with Black people already have in mind images and perceptions of violence? When the beliefs that these ‘others’ are less than human, that they are more violent and criminal, and that they should be treated with fear and suspicion are put into wide, systemic dissemination, an ever greater number of people will be influenced by those beliefs – and their behavior, as a result, might easily turn deadly.

Two of the most well-known incidents among a far too extensive history of Black people being killed by the police are the 2014 cases of Michael Brown and Tamir Rice. Darren Wilson, the 6’4″, 210 lbs. trained police officer who killed the 6’4″, 292 lbs. 18-year-old Brown, said of their altercation, ‘ … I felt like a five year old holding on to Hulk Hogan’ (Sanburn 2014). Wilson also famously described Michael Brown as a ‘demon,’ ‘not human.’ Twelve-year-old Tamir Rice was shot to death by Cleveland police officers who opened fire within less than two seconds of arriving on the scene; they later said that they ‘did not know it was a kid’ (Izadi and Holley 2014; Palmer 2017). They claimed that the report was of a Black man with a weapon in the park, and that when they got there, they did not see a child (Palmer 2017). In her book The Cultural Politics of Emotion, Sara Ahmed wrote that,

some bodies are ‘in an instant,’ judged as suspicious or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than a social agreement that that body is dangerous. (Ahmed 2004)

And so what do we do when the social agreement about that body is not just pervasive, but also algorithmized and automated? We must consider the historical foundations and contemporary implications of turning certain kinds of bodies – and the judgments about them – into data out of which whole systems of technology are built.

Datafication and black bodies

There is a long history of measuring Black bodies, turning those measurements into data, and then building systems of values, beliefs, and predictions off of that data, and it extends all the way back to the weighing and biological inspection of enslaved Africans who were kidnapped and transported across the Atlantic, and forward from there to eighteenth- and nineteenth-century innovations such as physiognomy, phrenology, and spirometry – the measurement of breath (Braun 2014; Stein 2015; Seiberth, Yoshioka, and Smith 2017). On phrenological charts comparing different skull shapes and types, we find a clear intention to draw similarities between African skull shapes and those of nonhuman apes. The mistreatment and dehumanization of Black women such as Saartjie Baartman, often called ‘The Hottentot Venus,’ also played an integral role in how Black bodies were rendered as somehow fundamentally different or other than white bodies (even though the definition of who counts as white has changed multiple times down through history) (Washington 2006). Baartman was among the people on whom many early studies into gynecology were performed, entirely without her consent, for testing tools such as speculums and all manner of gynecological equipment; she received very little acknowledgment until the late twentieth century. Other medical measurements were specifically used to render Black masculinity as threatening, violent, and overly sexual; and this was reflected not just in the medicalization of Black bodies, but also in popular culture representations of Blackness such as Birth of a Nation. These beliefs about the nature of Black people are foundational within Western cultural values.

Over hundreds of years, the outcomes of these experiments and measurements have become scientific and mathematical data, data which have in turn been fed into equations and algorithms and used to train and operate multiple technoscientific systems (Hoffman et al. 2016). In 2018, a demonstration of Amazon’s Rekognition facial imaging system made false identification matches for 28 different members of Congress, linking their images to pictures found in mugshot databases (Snow 2018). This kind of facial recognition output is not isolated, and it is in fact predicated on a clearly explicable set of events: first, the majority of facial recognition systems that exist are trained on mugshot databases. Mugshot databases are disproportionately populated with digitized images of Black and brown individuals as a result of the disproportionate overpolicing and criminalization of Black and brown communities (Garvie, Bedoya, and Frankle 2016). Further, digital cameras and imaging systems see Black and brown faces – and particularly the faces of Black and brown women – less well (Buolamwini and Gebru 2018). All of which combines to mean that the facial recognition technology which is widely deployed by law enforcement and its partners is drastically less capable of distinguishing between faces of the type on which it is most likely to be used (Garvie, Bedoya, and Frankle 2016). In her 2018 book Algorithms of Oppression, Safiya Noble discusses how stereotypical perceptions of Black people, especially women and girls, are rendered in American culture through the lens of Google’s search and advertising metrics, which return search results that reflect and reinforce those same stereotypes (Noble 2018). In effect, racist, sexist, and otherwise prejudiced assumptions have been encoded into a whole spectrum of high- and low-tech developments, from systems as complex as modern-day artificial intelligence all the way back to tools as seemingly simple as pinhole cameras.

Analog to digital: photography and surveillance

The racialized and racist history of photographic technology stretches back almost to its inception. Photographic plates and chemical processing technologies were initially and primarily used by wealthier white people, and even as their availability spread for the sake of historical recordkeeping and posterity, the majority of the images being recorded were of white people with their possessions and families, people who would want their images to be clear and detailed (Roth 2009). The way this was achieved was through brute contrast, whereby the darker portions of the image were rendered indistinguishable. In fact, it was not until photographic techniques were developed for the sake of recording images of furniture with darker surface textures such as wood grain, and of dark horse coats, that photographic images could be reliably used to clearly render dark-skinned humans (Roth 2009; Wittkower 2016). Beginning in 1954, Kodak used a tool known as the Shirley Card to aid in the refinement of their film; literally just an image of a white woman named Shirley, the Shirley Card was used to perform ‘white balancing’ adjustments to make sure a white person’s skin would blend nicely with whatever colors were in the image (del Barco 2014). Shirley’s skin tone was supposed to be the ‘normal’ tone on which the camera was meant to focus, ensuring that her features could be clearly perceived. The fact that all of Kodak’s film production and chemical processing techniques were specifically formulated to ensure clear images of white people meant that images of Black people had to be specially tended to, and new techniques had to be developed by filmmakers like Spike Lee and Jean-Luc Godard, who specifically intended to produce images primarily of Black people (del Barco 2014).

Most recently, the HBO series Insecure has had to develop new digital camera technologies and lighting techniques to ensure that its majority-Black cast could be clearly seen by people watching HBO on high-definition televisions at home (Harding 2017). But all of these elements of how and for whom analog photographic technology came to be created were choices, and all of these choices have been translated into digital photographic technologies. In 2009, there was an incident in which Hewlett-Packard’s then-new motion-sensitive digital camera failed to follow Black faces (Rose 2010). If a white face was present, the camera would follow that person around perfectly, but if a Black face was present, the camera stayed stationary, regardless of how that Black person moved. Similarly, many face-detection features on digital cameras have triggered ‘blink detection’ warnings when taking pictures of Asian people who were simply smiling. And all of this happens because digital cameras have been built on a series of choices made from the 1950s onward, and those choices have replicated and iterated upon many of the same prejudices inherent in analog photography, just in a digital form.

Just as we can trace the historical progression and implications of all the technosystems we have discussed thus far, we can think in much the same way about the history of surveillance and its widespread implications. While many researchers and end users have sought to technologically innovate in order to solve many of the problems of analog and digital photography, they have still replicated many of the same underlying assumptions about what kind of person will or even ‘should’ be photographed, and in what circumstances. In the 1960s and ’70s, the FBI’s Counterintelligence Program, or COINTELPRO, was tasked with monitoring and, when possible, subverting elements of the Black Civil Rights movement, from the Rev. Dr Martin Luther King, Jr. to the Black Panther Party (Churchill and Wall 1990). The high-level authorization and enactment of this surveillance program hinged upon the idea that these people – Black activists protesting and agitating for their rights – needed to have tabs kept on them in the first place, a belief which stems directly from the sociohistorical understanding that Black people are somehow inherently violent and ‘lesser.’ This is the same set of beliefs which has led to the preponderance of stop-and-frisk measures, predictive policing, and surveillance systems that have been developed and deployed throughout the twentieth and twenty-first centuries.

The Georgetown Center on Privacy and Technology’s 2016 study ‘The Perpetual Line-Up’ found that surveillance and facial recognition systems as they are currently deployed reinforce and replicate all of these assumptions about who needs to be surveilled, who needs to be watched, and who needs to be monitored (Garvie, Bedoya, and Frankle 2016). These assumptions are being algorithmized and automated into a system which renders – and depends upon the perception of – Black and brown bodies as those which need to be watched. Again: digital camera systems see Black people less well than white people, and while many have claimed to make headway in correcting that, such an eventuality does not change the racist and otherwise discriminatory carceral history of these technologies. In 2018, it was revealed that IBM had used the NYPD’s pervasive surveillance of the city of New York to develop and train a system which allowed users to search for people by skin color (Joseph and Lipp 2018). But, again, the system operates precisely because it sees Black people less well, in essence creating a multi-billion-dollar algorithmic machine learning system which lumps all Black people together and says that they ‘fit the description,’ a phrase used by racist cops everywhere to justify the harassment of Black and brown individuals (Barlow and Barlow 2002; Brunson and Miller 2006; Smith, Allen, and Danley 2007; English et al. 2017).

What comes next?

The fact that automated facial recognition systems work less well on the communities on which they will most often be used does not mean that we can somehow solve this problem by making sure that the training data we use is ‘more diverse.’ Merely putting more Black faces in the training data will not change the fact that, at base, these systems themselves will most often be deployed within a framework of racialized and gendered carceral justice. To this end, scholars such as Simone Browne and Ruha Benjamin have argued that we must fundamentally question whether we should be using these systems at all (Browne 2015; Benjamin 2019). Advocates for responsible innovation within machine learning and other fields often argue that the best path forward is to democratize the process of disseminating and using knowledge, such that researchers may learn from each other’s mistakes (Brundage 2016; Stilgoe 2018). However, this perspective does not adequately grapple with the fact that there are cases in which the accumulated paradigmatic weight around certain technosocial systems is such that no model of innovation within the field can overcome it. That is, when prejudicial values and beliefs are baked into the foundational assumptions about which types of innovation, into which technologies, are funded or authorized by various governments, and for what purposes, then no amount of innovation within the scope of those foundations will be able to escape those assumptions. As Audre Lorde famously put it, ‘The master’s tools will never dismantle the master’s house’ (Lorde [1984] 2007).

And while the foregoing perspectives have primarily focused on gendered and anti-Black racist biases, it should not be assumed that these are the only places where prejudicial perspectives make their way into these systems. Many automated vehicle imaging systems are unable to properly categorize people in wheelchairs or on crutches as pedestrians, and several researchers have claimed to be able to use facial recognition and biometrics to determine and predict everything from homosexuality to criminal behavior – ignoring the fact that in some parts of the world these are perceived to be the same thing, and are punishable by death (Wu and Zhang 2016; Hao 2018; Farivar 2018; Quach 2019). The solution to this is not to measure more Black, disabled, and LGBTQIA+ (see Note 1) people and turn them into training data for these systems. Such an approach would only lead to said Black, disabled, and queer people continuing to be subject to and subjugated by the systems trained on this data, because those systems and applications specifically exist for the purposes of oppression and subjugation (The Takeaway 2020). Instead, we need to think in a drastically different way about how and why we build algorithmic machine learning applications like facial recognition in the first place. And, in some small ways, many have started to do exactly that.

Within the space of one week in June of 2020, following weeks of protests in the wake of the separate incidents of Ahmaud Arbery, Breonna Taylor, and George Floyd being unjustly killed by police or police-affiliated individuals, IBM, Amazon, and Microsoft all divested themselves of facial recognition technology in various ways. Amazon and Microsoft each proclaimed they would stop selling these systems to police or in support of ‘policing actions’ (Amazon 2020; Statt 2020). IBM was first and the most stringent, saying that they would cease to research, make use of, or sell facial recognition solutions, and that they would not even provide service support for any outside facial recognition vendor (Krishna 2020). But less than a week later, the American Civil Liberties Union (ACLU), an organization focused on constitutional protections for Americans, revealed that as recently as 2017–2019, Microsoft had sought to sell facial recognition to the US Drug Enforcement Administration (ACLU 2020). Previously, Microsoft had been tied to policing actions in the Gaza blockade, in service of Israel’s occupation of Palestine (Solon 2019). Indeed, even IBM’s complete divestment reads more like atonement for their complicity in developing systems of vast surveillance and categorization which were in part used to perpetrate the Holocaust (Black 2012). We must continue to press these companies with the question, ‘What do you count as policing?’ And, knowing as we do that these technologies were so often developed out of stereotypical or otherwise biased data, and thus map least well onto the marginalized communities of color on which they will likely be used, we must be deeply critical of the idea that the solution to this problem is somehow an increased proliferation of that bias, or greater ‘diversity and inclusion.’

As others have pointed out, the concept of responsible innovation has limitations, from a failure to grapple with resource scarcity, to its lack of focus on sustainment and maintenance in favor of innovating the new, to the need to clearly explicate what is good about innovation itself, or even what we mean by ‘responsibility’ (Blok and Lemmens 2015; de Hoop, Pols, and Romijn 2016; de Saille and Medvecky 2016; Ludwig and Macnaghten 2020). But further than this, we must recognize that some technologies have had systemic prejudicial values baked into every aspect of their commission, design, and construction. Such technologies are unjust at their core, and they cannot be responsibly innovated upon or explored without a fundamental reckoning with, and reconfiguration of, the unjust societies in and out of which they are built. We must articulate, grapple with, and dismantle the roots and branches of these fundamental injustices, or they will simply recur in ever shinier and newer technological packages.

Acknowledgements

The author thanks the editors and reviewers for their comments and suggestions, which helped to increase the clarity and precision of the argument presented herein.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Note on contributor

Damien Patrick Williams is a PhD candidate in the Department of Science, Technology, and Society at Virginia Tech. His research areas include AI, algorithms, and bias, specifically exploring how human values become embedded in technological and nontechnological systems.

Notes

1 ‘Lesbian, Gay, Bisexual, Transgender, Queer/Questioning, Intersex, Asexual + More’; cf. Anjali Sareen Nowakowski, ‘What Does The + In LGBTQ+ Stand For?’ Elite Daily, June 18, 2017. https://www.elitedaily.com/life/culture/what-is-plus-in-lgbtq/1986910.
