
Copies without an original: the performativity of biometric bordering technologies

Pages 79-97 | Received 10 Aug 2022, Accepted 19 Feb 2023, Published online: 30 Dec 2023

ABSTRACT

We analyze two examples of biometrics in civil registration and migration contexts (the German Federal Office for Migration and Refugees’ voice biometry system and the UK HMPO passport photo checker tool) to show how, rather than “recognizing” a person, biometrics create a field of intelligibility within which the shifting positionalities of bodies are “stabilized” and deemed recognizable. We show how the obfuscation of this process has had violent racializing and gendering effects on the bodies of AI participants. We present our performative approach as a strategic intervention at the intersections of AI ethics and biometrics.

Introduction

In 2017, Germany’s Federal Office for Migration and Refugees (BAMF) began to use a voice biometric system to validate asylum claims. Biometrics, commonly defined as the use of physical or behavioral human characteristics to digitally identify a person,Footnote1 form part of BAMF’s “Integrated Identity Management” (IIM) pilot project, which was introduced after a record number of asylum seekers arrived in Germany between 2015 and 2016. It also includes scope for BAMF to surveil an applicant’s mobile phone for further information without their consent.Footnote2 BAMF claims that IIM will improve how they deal with lost identity documents, the duplication of asylum seeker files, uncertainty about the applicant’s registered country of origin, and the inefficient registering of asylum applications by digitizing asylum procedures.Footnote3 They state that their language and dialect recognition technology will offer additional information to the caseworkers who assess each applicant’s story about their country of origin and their travel history. In other words, in their estimation, it helps caseworkers verify their suspicions about an applicant’s stated country of origin.Footnote4 BAMF believes that their system can “recognize” an accent, and that this “allows conclusions to be drawn” that either validate or dismiss the asylum seeker’s account of where they are from.Footnote5 The system is perceived as functional, but it exacerbates racist border control practices.

IIM is shared with several unnamed European countries, which exchange their analyses of migrants’ speech recordings in order to “optimally identify applicants.”Footnote6 While BAMF claims that this international data sharing will improve “transparency and security” in the asylum procedure, BAMF has been condemned by German civil rights associations for its lack of transparency about the algorithms, training data, and origin of the tool, and for its refusal to answer freedom of information (FOI) requests with the justification that transparency would potentially impede the tool’s effectiveness.Footnote7 Since 2009, the German Federal Government has spent more than €1.8 million on combating FOI requests. BAMF’s emphasis on digitization to speed up the process of accepting and rejecting migrants suggests that quick decisions made by accent recognition technology can help bypass lengthier and costlier verification processes.Footnote8 However, an investigation by a migration non-profit concluded that asylum decisions cannot be based on a tool that only gives its results in the form of a percentage probability.Footnote9

Across disciplines and domains focusing on the ethics of biometrics, critiques and concerns are often expressed in the language of STEM, focusing on the “errors” these systems make and the risks they pose, including inaccurate authentication, identity theft, data security, and breach of privacy.Footnote10 At the industry level, the language of “bias” is often used to identify how biometrics tend to disproportionately affect individuals and communities from already marginalized groups through misidentification and/or exclusion. Both types of criticism often opt for functional fixes, including the isolation of a bias that can subsequently be extracted in order to return the system to a purportedly original state of neutrality.Footnote11 Scholars in the humanities, on the other hand, tend to bypass practical solutions in favor of deeper examinations of the racializing capabilities of AI-powered biometrics, such as analyses of the surveillance/biometrics crossover,Footnote12 political critiques of a variety of techniques and contexts including face recognition and Automated Gender Recognition, and investigations of algorithmic governance at the “biometric border.”Footnote13

While we value these contributions and the way they interrogate STEM perspectives, we aim to integrate and develop them by reformulating biometric dis/functionality. Where conversations around the extraction of bias often consider it possible to “neutralize” technology by optimizing functionality, we argue that functionality is always already entangled with dynamics of power. We contribute to scholarship that dismantles the conflation of “ethical” or “good” AI and error-free AI, since the isolation of bias has often proved impossible, and biometrics’ harmful effects do not necessarily manifest as fixable errors. We build on scholarship that demonstrates the importance of attending to biometrics’ failures as revelatory of racist and sexist presuppositions behind this technology. This work often gestures towards biometrics’ performative production of bodies without using the term performativity – notably when analyzing the “materialization” of national borders and other bordering practices such as immigration court proceedings or sponsorship roles.Footnote14 We apply theories of performativity from feminist and gender studies, contributing to recent debates on the performativity of biometrics, in order to address examples of biometrics deployed by state immigration and domestic security departments with greater specificity.

Our findings illuminate how racial norms are enforced by a process of iterative citation. We argue that effective ethical approaches to AI biometrics must highlight how systems performatively and citationally produce the gendered and racialized bodies that they are believed to observe and identify, whether or not the participant is enrolled, validated, rejected or rendered illegible.Footnote15 While face recognition algorithms can be perceived as “failing” Black women when a passport photo is not successfully recognized, a biometric identification system which appears to be functional may still fail an asylum seeker by overwriting their lived experience and contradicting their claims. In our previous work, rather than taking race and gender as pre-existent, we used performativity to demonstrate how AI affects the materiality of our bodies.Footnote16 We take up Butler’s argument in Bodies That Matter to demonstrate that bodies emerge through social norms rather than precede them, and that technology contributes to this process by mediating between bodies and norms to make those norms produce material effects.

To reinforce this point, we draw on Karen Barad’s argument that scientific observation performatively constitutes the observed world. Barad demonstrates that an observed phenomenon does not pre-exist the observer. Instead, the techno-human observational apparatus constitutes the phenomenon. Barad names this process “intra-action,” claiming that the observer and the observed co-emerge as a provisional stabilization, or an “agential cut.” Crucially, Barad does not renounce the idea of objectivity, which, for them, is the accountability of the knower to the marks that they leave on bodies.Footnote17 We argue that this form of accountability is also an ethical proposition. When a techno-human knower (for example, an AI system) co-emerges with a techno-human known (for example, an illegible racialized body), the questions we must ask are: Who/what exactly is made (in)visible, (un)intelligible, and by whom/what? Who/what is configured as (ab)normal? Who/what is stabilized in a way that causes harm? What marks are left by AI on bodies, and how can it give an account of them?

“Agential cut” is a wider-reaching concept: it calls into question the very possibility of the existence of subjects and asks for greater responsibility on the part of all agencies involved. At the same time, the citational aspects of feminist performativity are important because they show how racist and sexist assumptions are entangled with AI systems and cannot be identified as belonging to any specific part of these systems (for example, the datasets used to train them). Ultimately, a performative understanding of AI makes it possible to reorient development and deployment practices towards doing justice to the communities that are most impacted by its use.Footnote18 We contribute to the current discussion on the performativity of biometrics by incorporating an analysis of the citational aspects of performativity and by showing how biometrics cites gender and race norms that ultimately refer to copies that have no original. In what follows, we first introduce biometrics as a performative apparatus, in that it produces the very features it claims to observe by always recognizing them within a normative frame. Second, we explain how the Derridean and Butlerian notion of normative citationality, combined with Karen Barad’s agential realism, sheds light on biometrics’ essentializing power while revealing how biometric authentication processes consist of the iterative production of gendered and racialized bodies. Thus, we show how gender and race remain unattainable ideals or myths that serve the interests of bordering practices. In reality, bodies are constituted through their interactions with technology and never fully realize the gendered or racial norm. We debunk the myth that a perfectly recognizable body exists in order to demonstrate that biometric borders cannot objectively establish who belongs to a national space and who does not.

We turn to the two examples mentioned above of institutions responsible for immigration and domestic security in the UK and Germany that deploy technologies both at and beyond national frontiers. These state departments cause the border to appear and reappear in other places and situations where citizenship is at stake. It is of crucial importance that we understand how the body presents itself before institutions of immigration and citizenship, when those same institutions can only identify bodies through the racist and sexist norms of state control. This process of identification is therefore always mediated by the technological apparatuses that reiterate and sediment norms within the bodies of those who are forced to comply with state processes in the hope of social and state validation.

Normative citationality in biometrics: biometrics as the selection and production of characteristics

The collection of biometric data is a matter of detecting light, sound waves or thermal differences through a variety of sensors, and of translating analog entities into digital form. The bodily surface, or the interface between the measuring device and the measured object, shifts depending on the level at which we position our observation. Furthermore, very early in this process of detection, it becomes necessary to train the system to know what to look for, and this is done through the computation of past instances of detection. The explainability of facial and voice recognition technologies is further complicated by the fact that machine learning models “operate on features, such as pixel values, that do not correspond to high-level conceptualizations that humans easily understand.”Footnote19 Given the complexity of this measurement relationship, the association between biometrics and authentication must be further problematized.
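A minimal illustration of the quoted point, assuming only numpy and values of our own invention: once digitized, a facial scan enters a model as a vector of pixel intensities, none of which corresponds to a human-level concept.

```python
import numpy as np

# Illustrative only: a grayscale facial scan, once digitized, is nothing
# but a grid of light intensities in the range 0-255.
scan = np.random.randint(0, 256, size=(112, 112), dtype=np.uint8)

# What a machine learning model "sees" is a flat vector of 12,544 pixel
# values. No single number corresponds to a concept such as "mouth" or
# "eye"; any such concept exists, if at all, only as a diffuse pattern
# inside the learned model.
features = scan.astype(np.float32).flatten() / 255.0
print(features.shape)  # (12544,)
```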

Authentication is about successfully executing a procedure – namely, successfully presenting two or more pieces of evidence to an authentication device. This evidence can be something the user uniquely knows, such as a password; something they uniquely have, such as an ID card; or something they uniquely are, as is the case with biometric authentication. This is why biometrics have the capacity to essentialize their participants. Biometric identifiers are typically categorized as either physiological characteristics – ultimately related to the shape of the body and its supposedly inherent, universal yet unique attributes, such as fingerprints, facial features, and the voice – or behavioral characteristics, such as keystroke dynamics.Footnote20 The very language of biometrics is suggestive of essentialization, which is exemplified by the use of the term “identifiers” to indicate reliance on distinctive characteristics. But what exactly is being authenticated through the use of biometrics? Even more importantly, what enables the complex process of the translation of chromatic and thermal variations on a bodily surface (a skin-deep measurement) to be taken, first, as the synecdochic representation of the whole and, second, as expressive of the essence of an individual? And how can outputs from voice authentication technologies trained to recognize dialects be positioned as proof that a given individual truly comes from a war zone and is genuinely in need of asylum?
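The three-factor taxonomy described at the opening of this paragraph can be sketched as follows; the names and the two-factor rule are our illustrative assumptions, not any vendor’s implementation.

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something the user uniquely knows"  # e.g. a password
    POSSESSION = "something the user uniquely has"   # e.g. an ID card
    INHERENCE = "something the user uniquely is"     # e.g. a fingerprint

def authenticates(presented: set) -> bool:
    # Procedurally, authentication succeeds when two or more distinct
    # pieces of evidence are presented; only INHERENCE claims to verify
    # what the body essentially "is".
    return len(presented) >= 2

print(authenticates({Factor.KNOWLEDGE, Factor.INHERENCE}))  # True
```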

As we have argued elsewhere, a process of normative citationality is at work in AI and is even foundational to the functioning of neural networks, resulting in biometric systems whose recognition processes are mediated by the “citation” of social norms.Footnote21 Here we build on that argument, as well as on scholars who have analyzed biometrics along similar lines, to provide a theoretical frame for our subsequent discussion of specific examples.Footnote22

PuglieseFootnote23 provides a perceptive reading of biometrics as systems that work through a racist citational logic. Once biometric authentication technologies have been implemented in a given context, the participants whose identity is meant to be verified must complete an initial enrollment step. They are required to supply their biometric data (for instance, a facial scan or a fingerprint) so that it can be converted by the system into an enrollment template, that is, an algorithmic encoding of what the system considers the subject’s distinctive features to be. These data are then stored and used to authenticate the participant’s identity at every future interaction with the system. Crucially, every time the user presents themselves for biometric authentication, the system generates a verification template that is matched against the enrollment template. However, enrollment and verification templates can never be identical. This apparently counterintuitive logic can be explained at a technical level by the fact that every biometric acquisition (that is, every finger placement, or facial scan, or voice recording) generates a slightly different template because of minute differences in distance, lighting, pressure or other contextual variables (for example, the same finger, placed on the same scanner multiple times, generates a different template with each placement).
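A schematic sketch of this matching logic in Python; the feature vector, noise level, and threshold are all illustrative assumptions rather than any vendor’s parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def acquire_template(noise: float = 0.05) -> np.ndarray:
    # Stand-in for feature extraction: each acquisition of the "same"
    # finger or face yields a slightly different vector because of
    # lighting, pressure, distance, and other contextual variables.
    trait = np.sin(np.linspace(0, 3, 128))    # hypothetical bodily trait
    return trait + rng.normal(0, noise, 128)  # acquisition-time noise

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 would mean a bit-for-bit identical template.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

enrollment = acquire_template()    # stored at the initial enrollment step
verification = acquire_template()  # generated at a later access attempt

score = similarity(enrollment, verification)
print(f"match score: {score:.4f}")     # high, but never exactly 1.0
print("authenticated:", score > 0.95)  # a threshold decision, not an identity
```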

Pugliese further frames this process in the context of Derrida’s discussion of a person’s signature as the impossible repetition of what is substantially a singular enrollment, that is, the original signature. In other words, framing biometric authentication as repetition with a difference “graphically exemplifies the deconstructive movement of iteration, as a movement always already inscribed with alterity in every new instance of repetition.”Footnote24

For Derrida, a sign that cannot be cited is not a sign.Footnote25 Similarly, an identity, or a biometric signature (as common discourse goes), that cannot be falsified is not a signature at all. In fact, an identical match between verification and enrollment templates is viewed as the sign of fraud – namely, of someone having stolen someone else’s enrollment template and using it to illegitimately gain access to the system. This is why Pugliese perceptively notices that digital spoofs and frauds haunt biometric systems from within.
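The fraud logic described here can be appended to the sketch above (which defines similarity() and the two templates); the equality test, the threshold, and the verdict strings are our illustrative assumptions.

```python
import numpy as np

def authenticate(enrollment: np.ndarray, verification: np.ndarray,
                 threshold: float = 0.95) -> str:
    if np.array_equal(enrollment, verification):
        # A bit-for-bit identical template reads as a replayed or stolen
        # copy, not a living presentation: the system expects every
        # legitimate repetition to differ from the "original."
        return "rejected: suspected spoof"
    score = similarity(enrollment, verification)
    return "accepted" if score > threshold else "rejected: no match"

print(authenticate(enrollment, verification))  # accepted
print(authenticate(enrollment, enrollment))    # identical copy reads as fraud
```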

In Derridean terms, authentication is nothing but “an iterable movement that must be differentially marked” every time a subject presents itself for verification by providing biometric data to an acquisition device.Footnote26 In this sense, Pugliese argues, the logic of authentication is aporetic because it can only identify a subject by “recognizing” a slightly different copy of an unattainable “authentic” self. Thus, authentication literalizes Derrida’s deconstruction of the metaphysics of presence, which is also a deconstruction of the Western idea of the subject as univocally self-same. Here, we want to point out how the citational understanding of AI corresponds to Butler’s argument that gender is a copy without an original. It is impossible to identify an original bodily indicator of gender (or the perfect embodiment of a gender ideal) in the same way that it is impossible to identify an original biometric signature; even the biometric enrollment template, which is positioned as the original, is always already a copy, a contingent algorithmic re-inscription of something (a supposedly essential and unique characteristic of the body) that in fact cannot be established once and for all.

Importantly, after pointing out the iterative citationality of AI, Pugliese focuses on the so-called failure to enroll (FTE) to show the racist origins of biometric citationality. FTE is a technical term used in the biometric industry to describe the fact that certain bodies do not lend themselves to easy extraction or acquisition of features by the biometric system. For example, “certain ethnic and demographic populations are more prone to high FTE rates than others,” as is the case for users of Pacific Rim/Asian descent, especially women, who are deemed to have “faint fingerprint ridges.”Footnote27 In this case, racism manifests as the biometric system’s failure to read even the very first presentation of the subject trying to inscribe themselves in the system. Thus, racial norms are always already cited and inscribed in the system, for instance in the particular settings of biometric acquisition technologies (cameras, fingerprint acquisition devices, etc.) that are tailored to a certain (white) ethnic group.

Failure to enroll and face recognition at His Majesty’s Passport Office

Emphasizing the performative and citational nature of AI allows us to rethink the notion of recognition while inviting a different answer to responsibility and harm. Our second example of why this is important is an incident from October 2020, when Black British student Elaine Owusu attempted to use the online passport photo checker installed by His Majesty's Passport Office (HMPO). HMPO had entered into a £2 million contract with a private software service provider, Dataminr, with the aim of remedying its “frankly shambolic state” following a decade of underinvestment, understaffing, unprecedented passport issue backlogs, and the short-term recruitment of agency staff lacking in proper training.Footnote28 In our previous work, we have discussed how, around the same time, Dataminr was also contributing to racist predictive policing at BLM protests by New York and Minneapolis Police Departments.Footnote29 Dataminr was to play a key role in HMPO’s new Digital Application Processing technologies, introduced to remove aspects of passport photo identity confirmation that were confusing to the public.Footnote30 As we have demonstrated elsewhere, it is not uncommon for companies to turn to AI systems as a cost-cutting measure or to alleviate pressure on the workforce.Footnote31

Despite significant investment in technologies designed to circumvent such problems, when Owusu uploaded her photo onto the website it was rejected by the automatic system even though it met all the criteria for a correct upload, including image size, background color, and the absence of objects or other people. The software, which uses biometrics to map facial features from a photograph, offered the following error message: “It looks like your mouth is open.” Her mouth was closed. A subsequent BBC investigation revealed that the faces of Black women were twice as likely to be “read” incorrectly by the software as those of white men,Footnote32 with each incorrect reading exposing users to a harmful reminder of racialized norms around appearance. Inioluwa Deborah Raji, a Mozilla Fellow and researcher with the Algorithmic Justice League, later claimed that the Home Office knew that the system worked better for white men but believed that its functionality was sufficient.Footnote33 HMPO’s response was to “update” their software, but they admitted they had still not deployed the new version almost a year later.Footnote34 HMPO’s reply indicates that the only problem they perceived was that the original software was plagued by a functionality issue that could be resolved through an update. In the meantime, human examiners were being trained to assess photos that had been rejected by the automatic system but flagged with a case note explaining why they should be accepted.Footnote35 By July 2022, HMPO were aware that their automation required significant manual oversight and necessitated further employee training.

This case of Failure to Enroll (FTE) makes visible how the process of recognition presupposes a normative understanding of the human body. The norm is constantly being produced through these many iterations of a body – each slightly different from the previous one – which means that even the frameworks of recognizability created by AI are unstable. The “British citizen” emerges through AI-mediated gendered racialization,Footnote36 where AI defines and prompts the illegibility of Black womanhood, creating bodies that cannot be read by software that polices British citizenship. The experience of repeatedly instructing a would-be passport holder that there is a problem with her face should be of utmost concern to HMPO, not least because of the affective response it provoked: Owusu expressed distress at a system which “wasn't built for me” and evidenced the technologization of “systemic racism.”Footnote37 As noted by the End Everyday Racism project, systemic racism produces cumulative messages of exclusion which have both physical and psychological effects.Footnote38 Therefore, the differential exclusion prompted by bordering practices that extend beyond national frontiers not only reveals that the bodies of Black women are made to contradict their citizenship status but also that the system operates through these exclusions. Racialized and gendered bodies, as well as these bodies’ citizenship statuses, are therefore explicitly produced with technology rather than identified by it.

Owusu caught the system in the act of stabilizing a racialized gender identity using algorithmic knowledge that represented Black female faces as partially unreadable. As a consequence, she was repeatedly denied the possibility of requesting documentation that validated her citizenship because the norms of British citizenship rendered her face illegible. Importantly, the AI did not materialize Owusu’s face as completely unreadable. “Your mouth appears to be open” clearly shows how Owusu’s face poses a problem for the (biometric) system, which enacts an anti-Black racial stereotype. The technologically-mediated visual erasure and misrepresentation of Black women’s faces occurs when a face is recognized but its features are not. Owusu’s face materializes algorithmically as a numeric representation of a face with an open mouth; a mouth that, from the point of view of the human participant, can never be closed. The result is an unactionable request: “Please close your mouth,” which denotes how Owusu’s face emerges as a “quasi-face” in the technological system.

FrabettiFootnote39 argues that technical “quasi-malfunctions” are revelatory of the political and ethical decisions made by and with technology. A technological system is at its most illuminating when it is quasi-broken – that is, when it works in an unexpected way that cannot be clearly categorized as a malfunction. Establishing whether this unexpected behavior is an error that needs to be fixed or a welcome machinic innovation that must be incorporated into the technology is not a technical decision but an ethical and political one. It determines what kind of technology we want to live with, and it opens up different futures. In Derridean terms, a quasi-malfunction is a “point of opacity”Footnote40 where the conceptual system underlying a specific technology deconstructs (or undoes) itself, thus revealing its untenable presuppositions. The quasi-malfunction that disproves the system’s logic is also a key tenet of Butler’s theory of performativity, which exposes how non-conforming gender presentations demonstrate the impossibility of the gender binary by exceeding or subverting its rules.

The quasi-recognition of quasi-faces is particularly important because contemporary AI (typically, ML) works precisely by reincorporating errors made by neural networks. Errors are understood by data scientists as an important part of the development of AI, a technology that “learns” by making mistakes.Footnote41 However, a quasi-malfunction defies this logic and causes an interruption in the process of reincorporating errors and developing the algorithm. If AI is characterized by the reduction of data to single actionable points (for example, the rejection of a person’s passport application because their uploaded photo does not conform to certain norms), quasi-failures interrupt this actionability. Quasi-failures are not actionable: a mouth that is not open cannot be closed. Here the system cannot reject the non-conforming subject but can still display their non-conformity, therefore leading to repeated attempts to resubmit the passport photo. Owusu is rendered as a quasi-citizen, not quite accepted into documented citizenship and political existence. This quasi-subject confuses the algorithm and the people actioning its outputs. It also makes apparent that an ethical and political decision must be made, thus exposing the power dynamics at play in technology. Quasi-malfunctions disrupt techno-solutionism – that is, the attempted use of technology to resolve social, ethical or political problems, which in the process often unwittingly prompts new complications.
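A toy gradient-descent loop (illustrative, not any deployed system) makes the error-reincorporation logic concrete: the mis-prediction is precisely what updates the model, whereas a quasi-failure never registers as a loss to be minimized.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))              # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels
w = np.zeros(2)                            # model parameters

for step in range(200):
    # Forward pass: probabilistic predictions, never categorical verdicts.
    p = 1 / (1 + np.exp(-(X @ w)))
    # The error (prediction minus label) is not discarded; it is the very
    # signal that updates the parameters. Learning IS error reincorporation.
    w -= 0.5 * (X.T @ (p - y)) / len(y)
```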

The quasi-error also makes apparent the conflict between how ML-based biometrics operate and how they are deployed industrially and commercially. While the technology returns a probabilistic result (for example, a 75% chance that an image is a face, or a 1% chance that it is a mouth), when deployed by an immigration authority these estimations are applied as if they were certain (or, in Derridean terms, as if they were a perfect copy of the original). So, while supervised learning deals in quasi-determinations, the institutions that deploy it claim that it establishes certainty. In the case of quasi-errors, this discrepancy between a probabilistic technology (for which every recognition is a quasi-recognition) and a commercial imperative (for which every recognition equals certainty) becomes apparent. In the case of Owusu, this tension is made visible precisely because the technological output is not actionable, and therefore leads to an impasse. Certain malfunctions reveal more than others about the technology because they indicate instances where recognition does not work but is not registered as a system error.
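A minimal sketch of this collapse from quasi-determination to certainty; the 75% figure echoes the example above, and everything else is assumed for illustration.

```python
def model_output(image) -> float:
    # Stand-in for a classifier: every "recognition" is a probability,
    # i.e., a quasi-determination. The value here is purely illustrative.
    return 0.75

def deployed_decision(image, threshold: float = 0.5) -> str:
    p = model_output(image)
    # The institution's actionable output discards the uncertainty: a 75%
    # estimate and a 99% estimate collapse into the same categorical verdict.
    return "face detected" if p >= threshold else "rejected"

print(deployed_decision(None))  # "face detected" -- the 25% of doubt vanishes
```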

Although HMPO may frame this as an unusual occurrence that can be resolved by requesting that a human operator check the passport photo, quasi-errors still reveal the tension between the internal logic of the system and its deployment. This “undoing” of the citizen is a disempowering move that consigns them to quasi-invisibility. Therefore, these instances of quasi-subjectification, quasi-legibility, quasi-recognition, and quasi-representation are the essential starting points for new possibilities of political action and an ethical rethinking of AI’s functionality. In fact, in the same way that instances of quasi-functioning technology shed light on how technology works and offer us an opportunity to gain a different kind of insight into technological functionality, the quasi-subjects constituted by AI (in the form of quasi-faces, quasi-bodies, quasi-fingerprints) shed light on how AI normatively and citationally constitutes subjects and offer us an opportunity to rethink attempts to “improve” systems in order to better serve marginalized communities. HMPO cannot remedy the quasi-error through a technical fix that aims to return the system to a state of neutrality. The system is incapable of seeing all faces in the same way because of its inevitable citation of racialized facial norms. Therefore, HMPO must confront how its systems are always embroiled in systemic gendered racism, and how this affects their performative production of the subjects and biometric features they seek to identify. It should respond by interrogating how its practices and policies more broadly contribute to racist and gendered configurations of citizenship, and how these factor into the way its technologies function.

Immigration and accent recognition at the German Federal Office for Migration and Refugees

The instance of quasi-recognition outlined above exposes how ideologies of race and racism citationally produce bodies through high-stakes “verification” technologies whose non-functionality cannot be easily challenged. BAMF claims that its “speech biometrics” software can “recognize the (major) dialect spoken by the applicant.”Footnote42 Non-conformity between the voice of the asylum-seeking participant and the system’s normative assumptions makes itself visible as a rejected asylum application. In this case, the “functionality” of the system remains intact and, unlike HMPO’s tool, goes unquestioned as long as its outputs remain actionable. The system is perceived as “working” insofar as it validates or discredits the applicant’s claimed place of origin based on their way of speaking, and no claims can be made against its functionality by the disenfranchised applicant; they can only take legal action later and appeal their rejection. We cannot exclude the possibility that BAMF’s voice recognition, like all other biometric systems, returns uncertain results in the form of quasi-recognitions or quasi-rejections. It might, for example, “recognize” a Jordanian accent from an applicant who, while having been raised in Jordan, married a Syrian, lived in Syria for 20 years, and is now forced to flee from the war.Footnote43 In this case, the recognition is only partial, obscuring the aspects of the applicant’s story that are necessary for them to be granted asylum. As Dylan Mulvin notes, the messiness of reality has always been subject to technological containment through norms of whiteness, able-bodiedness, and purity that are foundational to Western standards and infrastructure.Footnote44

Dialect or accent recognition in computer audition can be done in a number of ways, but commonly, features are extracted from speech and then modeled mathematically to represent a sound spectrum.Footnote45 This method has proved popular because of its perceived ability to “produce minimal data without losing any important information.”Footnote46 In other words, whatever information the system retains is deemed to be the important information. Because there are so many variables in how people speak, various techniques are employed to “normalize” variations that might affect the system’s perception of a person’s accent, including “frequency warping.”Footnote47 By extracting material components of the voice – the harmonic content of a sound, including pitch and timbre – the voice is stabilized by transforming multiple frequencies into a single frequency for data analysis. This listening machine accounts for objectivity by mathematically parsing a voice, but in doing so, it ascribes interpretive value to its readings. These values are subsequently ascribed to the speaker, who takes on an identity as a valid or fraudulent asylum seeker.
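Since BAMF has not disclosed its pipeline, the following is only a generic sketch of the kind of mel-warped feature extraction described in this literature, using the open-source librosa library; the file name is hypothetical.

```python
import librosa

# Hypothetical recording; BAMF's actual pipeline is undisclosed, so this
# only sketches the feature extraction commonly used in accent recognition.
y, sr = librosa.load("asylum_interview.wav", sr=16000)

# Mel-frequency cepstral coefficients: the frequency axis is "warped" onto
# a perceptual (mel) scale and each frame of the spectrum is compressed
# into a handful of coefficients -- the stabilization step described above.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)

# Averaging over time collapses a shifting, lived voice into one fixed
# vector, from which a dialect classifier would compute its percentage.
voice_vector = mfcc.mean(axis=1)  # shape: (13,)
print(voice_vector.shape)
```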

Our performative approach helps to clarify BAMF’s attempt to legitimate an applicant’s claim based on the essentialization of a person’s “racial origins.” While FTE racializes applicants through its conceptions of what a citizen looks like, BAMF’s voice biometry tool essentializes race in the voice. This occurs as part of a procedure which, through its complexity, obscures the basic assumptions on which this application of voice biometry rests. Criticisms of the project in the media highlight that a program cannot recognize certain dialects accurately, because people may develop regional, familial or social language variants within their dialect (for instance, “youth language”), or may speak under psychological stress, either of which may make it more difficult to match their language with certain regions of origin.Footnote48 Moreover, dialects are often used across national borders, particularly Arabic dialects, which would make it even more difficult to match a language with a nationality.Footnote49

Critiques are often interpreted as suggestions to improve and correct the system to better recognize a variety of dialects, as with the HMPO tool. However, we suggest that such improvements are futile if practitioners fail to understand that the systems do not “recognize” the body. Instead, they produce the body according to norms governing immigration and border control. These norms govern the voice biometry system such that it could fail to recognize a person’s asylum claims and needs if they do not correspond with a “degree of certainty” to what the system views as an adequate match between voice and a geographical location.Footnote50 As Paul Gilroy has argued and Simone Browne has elaborated in the context of biometrics,Footnote51 making race anatomical has its roots in white supremacy. Both Browne and Oliveira emphasize the significance of the “moment of measuring”Footnote52 in the production of the measured participant. In this way, they gesture towards Barad’s conception of the onto-epistemological “cut” that occurs during the technologically-mediated observation of bodies. As long as the system produces outputs, the reality that the system purports to make visible cannot be contested unless we more effectively communicate the contingency of how bodies and descriptive statements are produced. As Oliveira concludes: “Processes like these evince how sound is instrumentalized to act as a disciplinary mechanism, and how biometry is fundamentally a performative gesture: it seeks to pinpoint that which it has set itself to reveal.”Footnote53 Voice biometry’s referentiality and connectivity with other racist mechanisms of border control would suggest that it is co-performative. Oliveira explains that the voice in the system always exists in relation to – and is “calibrated within” – a set of normative assumptions that “in effect, convey white supremacist modes of seeing and listening.”Footnote54 This process of calibration is central to how accent recognition functions because, as Iván Chaar López has pointed out in his work on US-immigration history, each constitutive component of border control is only legible in reference to the other.Footnote55 These technologies and infrastructures are co-performative: their “worlding” ability is the effect of an accordance with other systems that seek to produce objective facts about the body. However, accent recognition is an especially significant added element of this integrated system because of AI’s track record of scaling racist modes of observation.

Oliveira interrogates BAMF’s functionalities and uses by applying his technical understanding of what happens when a voice encounters biometrics.Footnote56 He claims that the system’s decisions constitute,

incomplete assessments of the ambiguity and contingency of prosody, pronunciation, inflection, and timbre. These systems of voice biometry seek to normalize vocal traits, as well as to establish an alleged quantifiable ‘truth’ to how a voice might convey language – a physical but also cultural and social phenomenon.Footnote57

By reproducing the biometric voice within its own interpretative framework, machine audition determines what it listens to.

Derrida provides perhaps the most explicit exposé of the performative and co-constitutive relationship between citizenship, “les sans papiers” [the undocumented], and paper technologies of verification.Footnote58 Crucially, he does not distinguish between the materiality and discursivity of citizenship documentation and the bodies which they reference:

I would like to show that this fundamental or basic chain of the ‘base’ (support, substratum, matter, virtuality, power) cannot possibly be dissociated, in what we call ‘paper,’ from the apparently antinomic chain of the act, the formality of ‘acts,’ and the force of law, which are all just as constitutive.Footnote59

Paper is the traditional technology of inscription for the powerful, a transcendental arrangement of authority, virtuality, the law, and its acts. Writing in 2001, Derrida recognizes “new powers of information expropriation” which were already being used “to render visible, perceptible, or audible; extract bodies; to expose everything on the outside.”Footnote60 These technologies of “simulacra and simulation” create rather than process information.Footnote61

Indeed, an analysis of the performativity of BAMF’s voice biometry reveals that its ability to create is more significant than its professed processing or extraction capabilities. As it renders politically-mediated information audible, it fleshes out national domestic policy in the bodies of asylum applicants. As a multi-dimensional surface of inscription, it shares the essential characteristics of paper: “corporality, extension in space, the capacity to receive impressions, and so on.”Footnote62 For Derrida, while paper documentation contributes to a similar mechanism of governance, “What is new is the change of tempo and, once again, a technical stage in the externalization.”Footnote63 This process of externalization means that “paper became the place of the self’s appropriation of itself, then of becoming a subject in the law.”Footnote64 In other words, the law is embodied in technologies of citizenship and documentation. If the body of the paper and the citizen are one, then it follows that the undocumented asylum seeker is “unmattered.” BAMF’s technology is, in Derrida’s terms, a “structuring prosthesis,” an extension of the body of the law which makes material inscriptions on the body of the asylum seeker.Footnote65

We therefore term this technology an apparatus of “white hearing,” because it listens to asylum seekers’ testimonies through norms of racialized citizenship. The voice of the asylum seeker is always made to “matter” – as either valid or invalid – in relation to the governing entities on behalf of whom the technology is built and deployed. Zakiyyah Iman Jackson and Alexander Weheliye suggest that to be seen and unseen through the eyes of white infrastructure is also to “do” migration under regimes of anti-Blackness.Footnote66 For Weheliye, Blackness in Western judicial systems is a “non-legal state of exception in the domain of modern humanity,” which, in invoking the judicial system differently, enables the abandonment of and violence towards particular subjects as modus operandi.Footnote67 Just as Weheliye indicates that Blackness is the structural assemblage that produces the human, accent recognition does not denote the essential otherness of a particular subject. In fact, as Jackson observes, “the question of violence might not necessarily hinge on whether you are included or excluded but how you are targeted by an order, and what function you serve for the order’s legitimation and reproduction.”Footnote68 The system performatively racializes the citizens it observes, which means that their bodies materially re-emerge within these systems.

Deployment

The significance of this is that these are worlding technologies calibrated to racist norms – they do not neutrally observe citizens so much as violently racialize them. Therefore, as Radhika Mongia has argued, border control legitimates itself through violent migration practices that rely on a “yoking together” of “nation” and “state” on the terrain of race.Footnote69 Accent recognition deployed by BAMF should not be seen as a neutral source of information about applicants but rather as a way of obfuscating racist decision-making through the alleged objectivity of technological support systems. In further work, this might be usefully read alongside the policing of sexual and gender borders under the guise of security in cases where trans people experience rejection and harassment during border control procedures. As Toby Beauchamp and others have noted, the threat of concealed weapons is often rendered analogous to the concealed sex/gender of a trans person in airport security, who must be “outed” and surveilled to maintain public safety.Footnote70 Indeed, accent recognition rests on a similar presumption of a migrant disguising their true identity, an impersonation that reflects disguised intentions for passing through European borders.Footnote71 In airport contexts, such transparency is presented as the primary mode of protecting citizens from unwanted intruders; however, as Sally Spalding notes, some bodies are deemed more transparent than others.Footnote72 These decisions around whose bodies are authentic precede the technology, having been registered in deployment contexts through non-technological processes.

Deployment context, therefore, factors heavily in the implications of the performative production of a voice which either corresponds to or diverges from a given place of origin. The stabilizations that occur when a system makes a body visible as a valid asylum seeker are the product of their development and deployment context, as well as an expression of national interests, border control, xenophobia, and foreign policy. Multiple agents, at once social and technological, are involved in the process of making phenomena intelligible. It is important to consider the spaces within which biometrics are made intelligible. As Stuart Hall has emphasized, “the question of discourse and the framework of intelligibility is about how people give meaning to those things and how they become meaningful, not whether they exist or not.”Footnote73 The question of how race or the experience of asylum seekers gains meaning in AI and in the context within which it is deployed must become the focus of ethical interrogation, supplanting the question of whether or not a person’s asylum claim is valid.

How, we should ask, do nationalist and populist rhetorics about migrants also define the fields of (un)intelligibility into which an asylum seeker’s voice enters? Sara Ahmed’s appraisal of the suspicion with which asylum seekers are assessed can be useful in thinking through how deployment context impacts the performative production of the biometrically parsed voice of the migrant-applicant. She notes that, “The suspicion of asylum seekers as not really being asylum seekers is thus part of a more general suspicion of those deemed strangers, as not really being from here, whose presence is framed as loitering with intent.”Footnote74 The statistics on the rejection and acceptance of the applicants who underwent authentication on BAMF’s system suggest that the context of suspicion ensures that acceptance rates are low. For instance, in 2015, two thirds of the people who were subjected to this technology of suspicion had their claimed country of origin rejected.Footnote75 Our performative reading of these systems’ functionality suggests that it is not the condition of being an asylum seeker that is inherently suspicious; instead, it is the technology that fosters suspicious observation. As musicologists and sound studies experts have pointed out, because the act of listening is involved in not only the reception but also the production of the voice, there can never be an innocent description of sound.Footnote76 Not even thought is isolatable from sound except through abstraction.Footnote77 Accent recognition therefore listens not to the vocal source but to itself, citing border control practices through both technological and non-technological forms of listening. Like the HMPO tool, which cannot recognize faces without citing the norms of border control and citizenship, it cannot hear migrants “outside” of its own normative presuppositions.

If there is no objective way of processing and experiencing sound, how then does the BAMF software’s field of intelligibility make some voices align with the place they profess to originate from and not others? As the work of Oliveira, Nina Eidsheim, Jonathan Sterne, Jennifer Stoever, and Kathy Meizel emphasizes, the spoken voice and the listening ear materially produce, transmit, and receive complex information; therefore, the voice cannot stand in for the subject. In the case of the BAMF software, the voice enters its material-discursive space, where it is imbued with another set of political norms and domestic policy imperatives. “Listening” is the reconfiguration and translation of voice and software into data points that can be interpreted by BAMF authorities, who, as part of the process of interpreting that data, engage in a second act of listening. In other words, the ear of the state produces an embodied idea of what “native” pronunciation should sound like. Authentication further relies on the assumption of implicit connections between voices and accents, where both are also taken as proxies for place, place as a proxy for birthplace, and the relationship between voice and place as a proxy for the legitimacy of asylum claims. This process continually cites the German and European political climate that establishes which nationalities of migrants are more likely to be granted asylum, for example, Syrians.Footnote78 Pronunciation (whether or not it conforms to parameters that the system has previously extrapolated from a range of dialect-inflected speech samples) is the Derridean authentic signature of the asylum seeker. However, a signature is always already inauthentic and always already a citation. Therefore, the accent of the authentic asylum seeker is an aporetic construct. There is no originary asylum seeker with a “genuine” accent that “authentically” links them to a certain geographic area and a certain life journey.

Both HMPO’s passport checker and BAMF’s voice recognition constitute bodies performatively as potential citizens, and they do so by iteratively citing a norm that is made to matter as essential, when in fact it is an aporia, a copy of a copy. HMPO’s “quasi-recognition” of a “quasi-face” illustrates how racial norms can become visible via an unexpected event in technology. However, there needs to be some political (rather than technical) intervention for this unexpected behavior to become an opportunity to change the biometric tool. Otherwise, the quasi-face could somehow be ignored and the passport applicant simply delegitimated. BAMF’s speech recognition exemplifies the essentialization of a racial norm through a process that synthesizes the aporetic “authentic” voice of the original legitimate migrant. Again, there have been numerous legal actions against asylum claim rejections and some of them have been successful. A major political intervention is needed for the essentializing capacity of biometrics to be contested and for technology to change.

Conclusion

By arguing that biometric systems are by nature performative, we explore how, in states of both “functionality” and “quasi-functionality,” they racialize participants. Our performative and citational approaches enable us to make this intervention without essentializing participants. Instead, we mobilize theories of gender performativity to show that the participant does not pre-exist a system which subsequently observes them but is co-constituted through the act of technologically-mediated observation and citation of norms. We argue that those who escape the norm, for example Black women who are British citizens or asylum seekers whose voice and provenance are deemed erroneous, become non-subjects. This happens because the system cites its own construction of the ideal, which in the case of the “essentialized” legitimate asylum seeker is synecdochically represented by a deconstructed spectrum of sound. In keeping with gender studies, we question the idea that there is a biologized truth in either a voice or a face (a true gender, race, or identity) calculated by biometrics, and which immigration and domestic security contexts subsequently use to (mis)represent a participant. A performative reading of biometrics therefore unpacks how gender, race, and identity are performed together within a techno-human system. Since no critique of algorithms is possible from the outside – because we are all already implicated in algorithms – an important political intervention is to make visible how AI performatively shifts power dynamics according to the norms of, for example, border control. Deployment context, therefore, literally affects how the bodies targeted are “mattered” – how they materialize.

Quasi-failures make power dynamics surface in a manner that is rarely visible to the wider public. They offer momentary insight into racializing processes at work in systems that are not perceived as failing, such as the BAMF voice biometry tool. We therefore argue that norms and harms do not always manifest as failure, contrary to the connection implied by circumstantial associations between algorithmic “error” and “bias.” We demonstrate instead that the system’s purported functionality disguises how it performatively constitutes and essentializes the acceptable characteristics of an asylum seeker’s voice. In this case, we demonstrate how even when there is no immediately observable “error” in the system, its single actionable output cannot possibly be expressive of lived experience. Instead, place of origin is extracted into one single output that, even when it is successful – that is, when it returns the output that the asylum claim is legitimate – operates via a process of (vocal) manipulation and reduction.

Voice biometrics systems used in immigration contexts must be understood as performative by their participants and operators. Immigration agencies – which we argue are indistinguishable from the technology itself – view these tools as resources that are separate from their own operations and that function objectively, rather than as co-constitutive of the migration or citizenship process itself. If a federal department and its ethos are part of the AI’s operational ecosystem, we might ask whether it is then possible to create a tool that does not understate its implication in the outputs it proffers. In this case, a system’s “confidence threshold” for the match between dialect and location may be irrelevant in ascertaining whether the system is “good enough” to be used. This is a particular problem given that if a human respects the tool’s decision, then the doubt inherent in the system will be obscured, a doubt that is itself the product of national suspicion towards asylum seekers. If policymakers do not reckon with these complex issues, systems will continue to be misunderstood and potentially abused by stakeholders, and “improvements” will be ineffectual because racism is anchored within socio-technical configurations of citizenship.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by Christina Gaw.

Notes

1 Maria Korolov, “What is biometrics? 10 physical and behavioral identifiers that can be used for authentication,” CSO Online, February 12, 2019, https://www.csoonline.com/article/3339565/what-is-biometrics-and-why-collecting-biometric-data-is-risky.html.

2 Bundesamt für Migration und Flüchtlinge (BAMF), “The Personal Interview,” 2018, https://www.bamf.de/EN/Themen/AsylFluechtlingsschutz/AblaufAsylverfahrens/Anhoerung/anhoerung-node.html; Federal Office for Migration and Refugees, “Migration, Integration, Asylum,” Home-Affairs, 2017, https://home-affairs.ec.europa.eu/system/files/2020-09/11a_germany_amr2017_part2_en.pdf.

3 Migration Data Portal, “AI-enabled identification management of the German Federal Office for Migration and Refugees (BAMF),” 2022, https://www.migrationdataportal.org/data-innovation-59.

4 Julian Tangermann, “Documenting and Establishing Identity in the Migration Process: Challenges and Practices in the German Context Focussed study by the German National Contact Point for the European Migration Network (EMN),” BAMF (2017): 50; https://www.bamf.de/SharedDocs/Anlagen/EN/EMN/Studien/wp76-emn-identitaetssicherung-feststellung.pdf?__blob=publicationFile&v=16; Migration Data Portal, “AI-Enabled Identification.”

5 BAMF, “The Personal Interview.”

6 BAMF, “Digitalising the Asylum Procedure,” 2020, https://www.bamf.de/EN/Themen/Digitalisierung/DigitalesAsylverfahren/digitalesasylverfahren-node.html; Migration Data Portal, “AI-enabled identification management.”

7 Gesellschaft für Freiheitsrechte, “Race, borders, and digital technology,” May 15, 2020, https://www.ohchr.org/sites/default/files/Documents/Issues/Racism/SR/RaceBordersDigitalTechnologies/Gesellschaft_fur_Freiheitsrechte.pdf.

8 BAMF, “Digitalising the asylum procedure.”

9 Jessica Bither and Astrid Ziebart, “AI, digital identities, biometrics, blockchain: A primer on the use of technology in migration management,” GMFUS, June 2020, https://www.gmfus.org/sites/default/files/Bither%20%20Ziebarth%20%202020%20-%20technology%20in%20migration%20management%20primer%202.pdf.

10 A typical example of the STEM approach is Apple’s Face ID, which uses a TrueDepth camera and an anti-spoofing neural network for security purposes; by matching against depth information, Face ID prevents a digital device from being unlocked by a 2D image of the user’s face or a mask resembling it. Attempts to address bias in functional terms include the Mozilla Foundation’s interdisciplinary approach to “responsible AI,” which brings together technicians and coders alongside activists and artists at its annual Mozilla Festival (Mozilla Foundation, “Mozilla’s Approach to Trustworthy Artificial Intelligence (AI),” October 23, 2019, https://foundation.mozilla.org/en/blog/mozillas-approach-to-trustworthy-artificial-intelligence-ai/), as well as open access toolkits such as Fairlearn and IBM Research’s AI Fairness 360, which purport to help developers mitigate fairness issues in AI (Fairlearn, n.d., https://fairlearn.org/; AI Fairness 360, “IBM Research Trusted AI,” n.d., https://aif360.mybluemix.net). Individual computer scientist proponents of functional fixes to racist systems include Joy Buolamwini, with her Gender Shades project, and Kate Crawford, with her exposé of the “error rate” of facial recognition. Theorists such as Os Keyes have critiqued the focus on making “improvements” to the data or accuracy of systems, which they believe are ingrained in deep structural discrimination (Os Keyes, “Counting the Countless,” Real Life, April 8, 2019, https://reallifemag.com/counting-the-countless/).

11 Jude Browne, Eleanor Drage, and Kerry Mackereth, The Politics of Ethical AI and AI-Generated Harm: A Feminist Empirical Study (forthcoming).

12 Simone Browne, Dark Matters: On the Surveillance of Blackness (Durham: Duke University Press, 2015); Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity, 2019).

13 Claudio Celis Bueno, “The Face Revisited: Using Deleuze and Guattari to Explore the Politics of Algorithmic Face Recognition,” Theory, Culture & Society 37, no. 1 (2020): 73–91; Keyes, “Counting the Countless”; Louise Amoore, “Biometric Borders: Governing Mobilities in the War on Terror,” Political Geography 25 (2006): 336–351.

14 Shoshana Magnet, When Biometrics Fail: Gender, Race and the Technology of Identity (Durham: Duke University Press, 2011); Martin French and Gavin J.D. Smith, “Surveillance and Embodiment: Dispositifs of Capture,” Body & Society 22, no. 3 (2016): 3–27; Louise Amoore, Georgios Glouftsios, and Stephan Scheel, “An Inquiry into the Digitisation of Border and Migration Management: Performativity, Contestation and Heterogeneous Engineering,” Third World Quarterly 42, no. 1 (2022): 123–140; Pedro Oliveira, “‘Das hätte nicht passieren dürfen’: Re-narrating border vocalities and machine listening calibration,” Spheres – Journal for Digital Cultures 5, https://spheres-journal.org/wp-content/uploads/spheres-5_Oliveira.pdf.

15 We follow Priya Goswami in preferring the term “participant” to “user.” The term registers the contributions that a person makes to the functioning of a given technology, for example through the use of their data (Priya Goswami, “Priya Goswami on Feminist App Design,” The Good Robot Podcast, https://podcasts.apple.com/gb/podcast/priya-goswami-on-feminist-app-design/id1570237963?i=1000523813215).

16 Eleanor Drage and Federica Frabetti, “AI that Matters: A Feminist Approach to the Study of Intelligent Machines,” in Feminist AI: Critical Perspectives on Data, Algorithms and Intelligent Machines (Oxford: Oxford University Press, 2023).

17 Karen Barad, “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter,” Signs 28, no. 3 (2003): 801–831.

18 For Barad, performativity is not so much a matter of citationality as of the mutual co-constitution of the known and the knower, a finding that Amoore (2020) has applied to AI. However, Amoore’s analysis would benefit from Butler’s insight into the citation of norms to account for how AI’s citational practices racialize and gender its participants. We have discussed the possibilities of staging a conversation between Butler and Barad in the context of neural networks (Drage and Frabetti, 2023).

19 Stan Z. Li and Anil K. Jain, eds., Handbook of Face Recognition (Berlin: Springer-Verlag, 2011).

20 Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, and Rory Sayres, “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors,” Proceedings of the 35th International Conference on Machine Learning 80 (2018): 2668–2677.

21 Drage and Frabetti, “AI that Matters.”

22 For a critique of the distinction between information about the body and the body itself, see K. Ball, M. Di Domenico, and D. Nunan, “Big Data Surveillance and the Body-subject,” Body & Society 22, no. 2 (2016): 58–81; and Irma van der Ploeg, “Biometrics and the Body as Information: Normative Issues of the Socio-technical Coding of the Body,” in Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination, ed. David Lyon (London: Routledge, 2003).

23 Joseph Pugliese, “In Silico Race and the Heteronomy of Biometric Proxies: Biometrics in the Context of Civilian Life, Border Security and Counter-Terrorism Laws,” Australian Feminist Law Journal 23, no. 1 (2005): 3; Oliveira, “‘Das hätte nicht passieren dürfen.’”

24 Pugliese, “In Silico Race,” 3.

25 Jacques Derrida, Limited Inc. (Evanston, IL: Northwestern University Press, 1988).

26 Derrida, Limited Inc.

27 Joseph Pugliese, Biometrics: Bodies, Technologies, Biopolitics (London and New York: Routledge, 2010), 3.

28 Hansard, “HM Passport Office Backlog,” UK Parliament, June 14, 2022, https://hansard.parliament.uk/commons/2022-06-14/debates/526271D8-AFD5-4E77-97E8-E4F348EB3F57/HMPassportOfficeBacklog.

29 Eleanor Drage and Federica Frabetti, “The Performativity of AI-powered Event Detection: How AI Creates a Racialized Protest and Why Looking for Bias is Not a Solution,” Science, Technology, & Human Values (2023), https://doi.org/10.1177/01622439231164660.

30 Mark Prince and Clare Watson, “Applying for your passport online,” Home Office Digital, Data and Technology, February 13, 2019, https://hodigital.blog.gov.uk/2019/02/13/applying-for-your-passport-online/.

31 Eleanor Drage and Kerry Mackereth, “Does AI Debias Recruitment? Race, Gender, and AI’s ‘Eradication of Difference,’” Philosophy & Technology 35, no. 4 (2022): 1–25, https://link.springer.com/article/10.1007/s13347-022-00543-1.

32 Maryam Ahmed, “UK Passport Photo Checker Shows Bias Against Dark-skinned Women,” BBC News, October 8, 2020, https://www.bbc.com/news/technology-54349538.

33 Ahmed, “UK Passport Photo Checker.”

36 Mary Bosworth, Inside Immigration Detention (Oxford: Oxford University Press, 2014); Gracie Mae Bradley and Luke de Noronha, Against Borders: The Case for Abolition (London: Verso, 2022); Victoria Canning, Gendered Harm and Structural Violence in the British Asylum System (London: Routledge, 2017); Harriet Gray and Anja K. Franck, “Refugees as/at Risk: The Gendered and Racialized Underpinnings of Securitization in British Media Narratives,” Security Dialogue 50, no. 3 (2019): 275–291, doi:10.1177/0967010619830590.

37 Ahmed, “UK Passport Photo Checker.”

39 Federica Frabetti, Software Theory: A Cultural and Philosophical Study (London and New York: Routledge, 2014).

40 Jacques Derrida, “Structure, Sign and Play in the Discourse of the Human Sciences,” in Writing and Difference (London and New York: Routledge, 1980).

41 Louise Amoore, “Doubt and the Algorithm: On the Partial Accounts of Machine Learning,” Theory, Culture & Society 36, no. 6 (2019): 147–169.

42 BAMF, “The Personal Interview.”

43 Ben Knight, “Germany ‘failed to use language recognition tech on refugees,’” Deutsche Welle, May 26, 2017, https://www.dw.com/en/germany-failed-to-use-language-recognition-tech-on-refugees/a-39001280.

44 Dylan Mulvin, Proxies: The Cultural Work of Standing In (Cambridge, MA: MIT Press, 2021).

45 Andi Sunyoto and Dwi Sari Widyowaty, “Accent Recognition by Native Language Using Mel-Frequency Cepstral Coefficient and K-Nearest Neighbor,” 3rd International Conference on Information and Communications Technology (ICOIACT) (2020): 314–318.

46 Sunyoto and Widyowaty, “Accent Recognition,” 315.

47 Alexandros Potamianos and Richard Rose, “On Combining Frequency Warping and Spectral Shaping in HMM Based Speech Recognition,” IEEE (1997): 2.
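
To make concrete the family of techniques cited in notes 45–47, the Python sketch below shows in outline how an MFCC-plus-k-nearest-neighbor accent classifier works: each recording is reduced to a Mel-frequency cepstral coefficient vector, and an unlabelled sample is assigned the accent of its nearest labelled neighbor. This is a minimal sketch assuming the librosa and scikit-learn libraries; the labels and stand-in waveforms are hypothetical, and it does not reproduce Sunyoto and Widyowaty’s implementation or BAMF’s system.

import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(y: np.ndarray, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Summarize a waveform as its mean MFCC vector, collapsing the time axis."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Stand-in waveforms; in practice these would be labelled speech recordings.
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000).astype(np.float32) for _ in range(4)]
labels = ["accent_a", "accent_a", "accent_b", "accent_b"]

X = np.stack([mfcc_features(y) for y in recordings])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

# An unlabelled sample is assigned the label of its nearest neighbor in MFCC space.
query = rng.standard_normal(16000).astype(np.float32)
print(clf.predict([mfcc_features(query)]))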

48 Julian Tangermann, “Documenting and Establishing Identity.”

49 Anna Biselli and Lisa Beckmann, “Invading Refugees’ Phones: Digital Forms of Migration Control in Germany and Europe,” Gesellschaft für Freiheitsrechte, 2020, https://freiheitsrechte.org/home/wp-content/uploads/2020/02/Study_Invading-Refugees-Phones_Digital-Forms-of-Migration-Control.pdf.

50 Maarten Bolhuis and Joris van Wijk, “Case Management, Identity Controls and Screening on National Security and 1F Exclusion: A Comparative Study on Syrian Asylum Seekers in Five European Countries,” Norwegian Directorate of Immigration (UDI), 2018, https://www.udi.no/globalassets/global/forskning-fou_i/beskyttelse/case_management_identity_controls.pdf; for more on how geography is commonly tied to accent, language and identity, see Michelle Pfeifer, “The Native Ear,” in Thinking with an Accent: Toward a New Object, Method, and Practice, ed. Pooja Rangan et al. (Berkeley: University of California Press, 2023): 192–207.

51 Simone Browne, Dark Matters, 108; Paul Gilroy, Against Race: Imagining Political Culture Beyond the Color Line (Cambridge, MA: Harvard University Press, 2000), 46.

52 Oliveira, “‘Das hätte nicht passieren dürfen.’”

53 Oliveira, “‘Das hätte nicht passieren dürfen.’”

54 Oliveira, “‘Das hätte nicht passieren dürfen.’”

55 Iván Chaar López, “Alien Data: Immigration and Regimes of Connectivity in the United States,” Critical Ethnic Studies 6, no. 2 (2020): 13.

56 Oliveira, “‘Das hätte nicht passieren dürfen.’”

57 Oliveira, “‘Das hätte nicht passieren dürfen.’”

58 Jacques Derrida, Paper Machine (Stanford, CA: Stanford University Press, 2005 [2001]), 54.

59 Derrida, Paper Machine, 54.

60 Derrida, Paper Machine, 57.

61 Derrida, Paper Machine, 57.

62 Derrida, Paper Machine, 52.

63 Derrida, Paper Machine, 56.

64 Derrida, Paper Machine, 56.

65 Derrida, Paper Machine, 56.

66 Zakiyyah Iman Jackson and Lauren Wilcox, “Black Feminism at the End of the World: An Interview with Zakiyyah Iman Jackson,” International Politics Reviews (2022).

67 Alexander Weheliye, Habeas Viscus: Racializing Assemblages, Biopolitics, and Black Feminist Theories of the Human (Durham: Duke University Press, 2014), 86–87.

68 Jackson and Wilcox, “Black Feminism.”

69 Radhika Mongia, “Race, Nationality, Mobility: A History of the Passport,” Public Culture 11, no. 3 (1999): 553.

70 Toby Beauchamp, Going Stealth: Transgender Politics and U.S. Surveillance Practices (Durham: Duke University Press, 2019), 10.

71 Paisley Currah and Tara Mulqueen, “Securitizing Gender: Identity, Biometrics, and Transgender Bodies at the Airport,” Social Research 78, no. 2 (2011).

72 Sally J. Spalding, “Airport Outings: The Coalitional Possibilities of Affective Rupture,” Women's Studies in Communication 39, no. 4 (2016): 460–480.

73 Stuart Hall, “Representation and the Media,” transcript, Media Education Foundation, 1997, https://www.mediaed.org/transcripts/Stuart-Hall-Representation-and-the-Media-Transcript.pdf.

74 Sara Ahmed, Living a Feminist Life (Durham, NC: Duke University Press, 2016).

76 Nina Sun Eidsheim, The Race of Sound: Listening, Timbre, and Vocality in African American Music (Durham: Duke University Press, 2019), 9; Jennifer Lynn Stoever, The Sonic Color Line: Race and the Cultural Politics of Listening (New York: NYU Press, 2016), 32; Nina Sun Eidsheim, “Rewriting Algorithms for Just Recognition,” in Thinking with an Accent: Toward a New Object, Method, and Practice, ed. Pooja Rangan et al. (Berkeley: University of California Press, 2023): 134–150; Pfeifer, “The Native Ear,” 192–207; Pooja Rangan et al., “Introduction,” in Thinking with an Accent: Toward a New Object, Method, and Practice, ed. Pooja Rangan et al. (Berkeley: University of California Press, 2023): 1–20; Jonathan Sterne, The Audible Past: Cultural Origins of Sound Reproduction (Durham: Duke University Press, 2003), 13.

77 Ferdinand de Saussure, Course in General Linguistics, trans. Roy Harris (London: Duckworth, 1983), 11.

78 In 2019, over half of the 41,094 Syrian applicants were granted asylum in Germany (22,705), as opposed to under a third of the 15,348 Iraqi applicants (4,639), the second-largest group by country of origin (AIDA, 2020). Half of Germany’s refugees are Syrian.