
Bias in Facial Recognition Technologies Used by Law Enforcement: Understanding the Causes and Searching for a Way Out


ABSTRACT

In defining facial recognition technologies (FRT) as artificial intelligence, we highlight the objectivity of machine-assisted decision-making. This creates a false impression that the results produced by these technologies are free from the mistakes our eyes or minds often make and from the stereotypes and prejudices we find difficult to overcome. However, AI is neither an entirely detached technology nor a completely objective algorithm; it is a set of codes written by humans, and it therefore follows the rules humans put into it. These rules often turn out to be directly or indirectly biased, or to nurture inequalities that our societies continue to uphold. Furthermore, the use of FRT depends on human discretion: it is deployed and operated by humans, which introduces further potential for human bias. This paper focuses on the challenges arising from the fact that FRT used by law enforcement authorities can be affected by racial and other biases and prejudices, or may be deployed in a manner that gives rise to bias and discrimination. It discusses how bias can be introduced into FRT software and how it may manifest during the use of FRT, resulting in unwanted side effects that negatively affect certain population groups. The paper then considers whether and how these challenges can be overcome, focusing on data and social perspectives.

Disclosure statement

The author reports there are no competing interests to declare.

Funding details

This work was supported by the Research Council of Lithuania (LMTLT) under Grant ‘Government Use of Facial Recognition Technologies: Legal Challenges and Solutions’ (FaceAI), agreement number S-MIP-21-38.

Notes

1 Before FRT were created, law enforcement authorities used data from surveillance cameras. The main difference is that face detection and matching are now performed automatically by a specific AI technology, saving considerable time. Desara Dushi, ‘The Use of Facial Recognition Technology in EU Law Enforcement: Fundamental Rights Implications’ [2020] Policy Briefs 2020 – South East Europe <https://repository.gchumanrights.org/server/api/core/bitstreams/51d86ab3-1cb5-45f6-b141-64c06dcef5d8/content>.

2 Discrimination is defined as a situation where one person is treated less favourably than another is, has been, or would be treated in a comparable situation, and this difference in treatment occurs on the basis of perceived or real personal characteristics (Article 2 of Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, OJ L 180, 19.7.2000, p. 22–26; and Article 2 of Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation, OJ L 303, 2.12.2000, p. 16–22). Discrimination can be direct or indirect. Direct discrimination is easier to detect, while indirect discrimination, sometimes referred to as covert discrimination, is harder to identify and prove. Indirect discrimination is a difference in treatment caused by the disproportionately prejudicial effects of a general policy or measure which, though couched in neutral terms, discriminates against a group. DH and Others v the Czech Republic [2007] ECtHR [GC] 57325/00.

3 However, many AI systems and their algorithms open the door for further discussion as to which groups and characteristics should be protected. On the legal status of algorithmic groups see Sandra Wachter, ‘The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law’ (2022) 97 Tulane Law Review 149.

4 See further: Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI’ (2021) 41 Computer Law & Security Review 105567; Natalia Criado and Jose M Such, ‘Digital Discrimination’ in Natalia Criado and Jose M Such, Algorithmic Regulation (Oxford University Press 2019); Raphaele Xenidis and Linda Senden, ‘EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination’, General Principles of EU Law and the EU Digital Order (Wolters Kluwer 2020).

5 To name just a few recent academic papers: Lois James, ‘The Stability of Implicit Racial Bias in Police Officers’ (2018) 21 Police Quarterly 30; Dean Knox, Will Lowe and Jonathan Mummolo, ‘Administrative Records Mask Racially Biased Policing’ (2020) 114 American Political Science Review 619; Kristina Murphy and others, ‘Police Bias, Social Identity, and Minority Groups: A Social Psychological Understanding of Cooperation with Police’ (2018) 35 Justice Quarterly 1105; P Jeffrey Brantingham, Matthew Valasik and George O Mohler, ‘Does Predictive Policing Lead to Biased Arrests? Results From a Randomized Controlled Trial’ (2018) 5 Statistics and Public Policy 1.

7 For instance, one example from the education sector is a grading algorithm developed in England in 2020 that caused discrimination against students attending schools in less wealthy areas. In response to the COVID-19 pandemic, Ofqual, the governing body responsible for overseeing qualifications, exams, and tests in England, developed a grading standardization algorithm to address concerns about grade inflation and to moderate the teacher-predicted grades of A-level and GCSE qualifications after the cancellation of examinations. Due to the data used by the algorithm, however, bias arose because, for example, schools’ performance in each subject over the previous years was taken into account. Figures from Scotland revealed that the reduction in the pass rate for the Scottish Higher exam among students from the most deprived backgrounds was 15.2 percentage points, while their wealthier counterparts experienced a comparatively smaller decrease of 6.9 percentage points. See ‘A-Levels and GCSEs: How Did the Exam Algorithm Work?’ BBC News (17 August 2020) <https://www.bbc.com/news/explainers-53807730> accessed 9 June 2023. The algorithm was designed to look at the past, find patterns, and suggest a grade; because of this design, however, it had bias embedded in it and was ‘bias-preserving’: students from low-performing schools (often with more students from deprived backgrounds) were predicted to perform worse (high-performing students at low-performing schools had their marks capped by the prior performance of other students from earlier years), thus perpetuating social inequalities. On ‘bias-preserving’ vs ‘bias-transforming’ algorithms see Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law’ (2021) 123 West Virginia Law Review 735.
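To make the ‘bias-preserving’ capping mechanism concrete, the following is a deliberately simplified toy sketch rather than a reconstruction of Ofqual’s actual model: a teacher-predicted grade is capped by the historical grade distribution of the student’s school, so a strong student at a historically low-performing school is pulled down regardless of their own attainment. The percentile cap and the numbers are illustrative assumptions.

```python
import numpy as np

def capped_grade(teacher_prediction: float,
                 school_history: list[float],
                 percentile: float = 90.0) -> float:
    """Toy 'bias-preserving' rule: no student may exceed roughly the 90th
    percentile of grades achieved at their school in earlier years,
    whatever their own predicted attainment."""
    cap = float(np.percentile(school_history, percentile))
    return min(teacher_prediction, cap)

# The same high-achieving student (predicted 95) at two different schools:
low_performing_school = [45, 50, 55, 60, 62, 65, 70]
high_performing_school = [70, 75, 80, 85, 88, 92, 96]

print(capped_grade(95, low_performing_school))   # 67.0 – pulled far below the prediction
print(capped_grade(95, high_performing_school))  # 93.6 – barely affected
```

Because the cap is learned from past outcomes, such a rule reproduces whatever inequalities those outcomes already contain, which is what ‘bias-preserving’ means in the Wachter, Mittelstadt and Russell terminology cited above.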

8 In 2019, an Austrian employment agency ran an algorithm to assess the employability of job seekers (and whether to support them in job searches). Mirroring historical social injustices, the algorithm gave some groups, including women, older people, and people with disabilities, a negative weight. In 2020, the Austrian data protection authority declared the system illegal. ‘Austria’s Employment Agency Rolls out Discriminatory Algorithm, Sees No Problem’ (AlgorithmWatch) <https://algorithmwatch.org/en/austrias-employment-agency-ams-rolls-out-discriminatory-algorithm/> accessed 12 June 2023.

9 On algorithmic bias in the credit domain see, e.g., Ana Cristina Bicharra Garcia, Marcio Gomes Pinto Garcia and Roberto Rigobon, ‘Algorithmic Discrimination in the Credit Domain: What Do We Know about It?’ [2023] AI & SOCIETY.

10 To reach policymakers and experts in relevant public institutions, official written requests were sent to the institutions, and in some cases the experts were contacted directly. To select academics, several authors of papers relevant to the area were contacted. Snowball sampling (in which respondents suggest other respondents) was also used. Before the interviews, the research participants were informed of the purpose and procedure of the research and of the measures ensuring their anonymity, and signed an informed consent form. To ensure consistency in data collection, the interviews were based on uniform interview guidelines developed by the researchers of the project. The identities of respondents are anonymized in this paper.

11 For more information see the project’s website: https://teise.org/en/lti-veikla/projektines-veiklos/veidai/.

12 At the control post, a person scans their passport and a live picture is captured by a camera. The FRT then compares the two facial images, estimating the likelihood that both show the same person. In the EU, the Entry/Exit System Regulation introduced facial images as biometric identifiers and provided for the use of FRT for verification purposes. See Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing an Entry/Exit System (EES) to register entry and exit data and refusal of entry data of third-country nationals crossing the external borders of the Member States and determining the conditions for access to the EES for law enforcement purposes, and amending the Convention implementing the Schengen Agreement and Regulations (EC) No 767/2008 and (EU) No 1077/2011 2017 (OJ L).
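For illustration, here is a minimal sketch of the 1:1 verification step this note describes, assuming both facial images have already been converted into fixed-length embedding vectors by a face recognition model; the embedding size, the cosine similarity measure, and the 0.6 threshold are illustrative assumptions rather than details of any deployed border-control system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(passport_embedding: np.ndarray,
           live_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """1:1 verification: do the passport photo and the live capture likely
    show the same person? The threshold is an operating choice, not a
    property of the algorithm, and it trades false accepts against
    false rejects."""
    return cosine_similarity(passport_embedding, live_embedding) >= threshold

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
passport = rng.normal(size=512)
live = passport + rng.normal(scale=0.1, size=512)  # slightly perturbed 'live' capture
print(verify(passport, live))  # True for near-identical embeddings
```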

13 FRA, ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement’ <https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf>.

14 Nicholas Fletcher, ‘Lincs Police to Trial New CCTV Tech That Can Tell If You’re in a Mood’ (LincolnshireLive, 17 August 2020) <https://www.lincolnshirelive.co.uk/news/local-news/new-lincolnshire-police-cctv-technology-4431274> accessed 21 November 2022; ARTICLE 19, ‘Emotion Recognition Technology Report’ (ARTICLE 19, 2021) <https://www.article19.org/emotion-recognition-technology-report/> accessed 22 November 2022.

15 Yilun Wang and Michal Kosinski, ‘Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images’ (2018) 114 Journal of Personality and Social Psychology 246.

16 Marcus Smith, Monique Mann and Gregor Urbas, Biometrics, Crime and Security (Taylor & Francis 2018) <https://researchprofiles.canberra.edu.au/en/publications/biometrics-crime-and-security> accessed 17 July 2020, Chapter 1.

17 Many facial recognition databases are drawn from established administrative databases, such as driver’s licence photograph databases. ibid., Chapter 4.

18 FRA, ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement’ (n 13).

19 For an overview of European countries using FRT in law enforcement see ‘TELEFI Project “Summary Report”’ <https://www.telefi-project.eu/telefi-project/results>. Case law in this area has recently started to emerge. For instance, in 2022 the French Conseil d’Etat recognised that police officers may use facial recognition software to effectively pursue criminal investigations. For discussion of this case see Theodore Christakis and Alexandre Lodie, ‘The French Supreme Administrative Court Finds the Use of Facial Recognition by Law Enforcement Agencies to Support Criminal Investigations “Strictly Necessary” and Proportional’ (10 July 2022) <https://papers.ssrn.com/abstract=4321643> accessed 12 June 2023.

20 Facial recognition cameras work by scanning and measuring the faces of passers-by. The data is recorded and each person is assigned a uniquely identifiable numerical code. If this data is successfully matched against an image on the ‘watch list’, meaning the system ‘recognizes’ the person, the facial recognition system issues an alert.
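A minimal sketch of the watch-list matching loop described above, again assuming precomputed face embeddings; the function names, the similarity measure, and the alert threshold are illustrative assumptions, not details of any deployed live facial recognition system.

```python
import numpy as np

def best_watchlist_match(probe: np.ndarray,
                         watchlist: dict[str, np.ndarray],
                         threshold: float = 0.7):
    """1:N identification: compare a passer-by's embedding against every
    watch-list entry and raise an alert only if the best score clears the
    threshold. Returns (identity, score) on a match, or (None, score)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_id, best_score = None, -1.0
    for identity, reference in watchlist.items():
        score = cos(probe, reference)
        if score > best_score:
            best_id, best_score = identity, score

    if best_score >= threshold:
        return best_id, best_score   # system 'recognizes' the person -> alert
    return None, best_score          # below threshold -> no alert
```

Every alert is therefore only the highest-scoring candidate above a chosen threshold; lowering the threshold produces more alerts and more false matches, which is one of the points at which demographic differences in error rates translate into unequal burdens on particular groups.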

21 See further, e.g.: Giuseppe Mobilio, ‘Your Face Is Not New to Me – Regulating the Surveillance Power of Facial Recognition Technologies’ (2023) 12 Internet Policy Review; Ben Bradford and others, ‘Live Facial Recognition: Trust and Legitimacy as Predictors of Public Support for Police Use of New Technology’ [2020] The British Journal of Criminology azaa032; Pete Fussey and Daragh Murray, ‘Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology’ (University of Essex Human Rights Centre 2019) <http://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-2.pdf>.

22 See, e.g., Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (1st edition, Crown 2016); David Leslie, ‘Understanding Bias in Facial Recognition Technologies’ (2020) <https://doi.org/10.5281/zenodo.4050457> accessed 11 February 2022.

23 Fabio Bacchini and Ludovica Lorusso, ‘Race, Again: How Face Recognition Technology Reinforces Racial Discrimination’ (2019) 17 Journal of Information, Communication and Ethics in Society 321; Wachter, Mittelstadt and Russell (n 7).

24 In EU law, Article 21 of the Charter of Fundamental Rights prohibits any discrimination based on grounds such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age, or sexual orientation. These grounds of discrimination, as discussed below, appear in the context of FRT. ‘Charter of Fundamental Rights of the European Union’.

25 FRA, ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement’ (n 13).

26 Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Proceedings of Machine Learning Research (2018) <http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf> accessed 17 June 2020.

27 Patrick Grother, Mei Ngan and Kayee Hanaoka, ‘Face Recognition Vendor Test Part 3: Demographic Effects’ (National Institute of Standards and Technology 2019) NIST IR 8280 <https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf> accessed 28 October 2022.

28 One of the first reported and most famous wrongful arrests due to FRT misidentification was the arrest of Robert Williams in January 2020. See, e.g., Clare Garvie, ‘The Untold Number of People Implicated in Crimes They Didn’t Commit Because of Face Recognition | News & Commentary’ (American Civil Liberties Union, 24 June 2020) <https://www.aclu.org/news/privacy-technology/the-untold-number-of-people-implicated-in-crimes-they-didnt-commit-because-of-face-recognition> accessed 28 November 2022. See also Sidney Perkowitz, ‘The Bias in the Machine: Facial Recognition Technology and Racial Disparities’ [2021] MIT Case Studies in Social and Ethical Responsibilities of Computing.

29 Lizzy O’Leary, ‘How Facial Recognition Tech Made Its Way to the Battlefield in Ukraine’ (Slate.com, 2022) <https://slate.com/technology/2022/04/facial-recognition-ukraine-clearview-ai.html> accessed 20 May 2022.

30 Salem Hamed Abdurrahim, Salina Abdul Samad and Aqilah Baseri Huddin, ‘Review on the Effects of Age, Gender, and Race Demographics on Automatic Face Recognition’ (2018) 34 The Visual Computer 1617. See also Lindsey Barrett, ‘Ban Facial Recognition Technologies For Children - And For Everyone Else’ (2020) 26 Boston University Journal of Science and Technology Law 223.

31 ‘Facial Recognition Creates Risks for Trans Individuals, Others’ (GovTech, 16 July 2021) <https://www.govtech.com/products/facial-recognition-creates-risks-for-trans-individuals-others> accessed 10 November 2022; ‘Facial Recognition Software Has a Gender Problem’ (CU Boulder Today, 8 October 2019) <https://www.colorado.edu/today/2019/10/08/facial-recognition-software-has-gender-problem> accessed 10 November 2022.

32 Sheri Byrne-Haber CPACC, ‘Disability and AI Bias’ (Medium, 11 July 2019) <https://sheribyrnehaber.medium.com/disability-and-ai-bias-cced271bd533> accessed 10 November 2022.

33 Clare Garvie, Alvaro Bedoya and Jonathan Frankle, ‘Unregulated Police Face Recognition in America’ (The Perpetual Line-Up, Georgetown Law Center on Privacy & Technology, 2016) <https://www.perpetuallineup.org/> accessed 10 November 2022.

34 See, e.g., Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (New York University Press 2020).

35 Songül Tolan and others, ‘Why Machine Learning May Lead to Unfairness: Evidence from Risk Assessment for Juvenile Justice in Catalonia’, Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (ACM 2019) <https://dl.acm.org/doi/10.1145/3322640.3326705> accessed 20 January 2022.

36 ‘Fair Trials’ Tool Shows How “Predictive Policing” Discriminates and Unjustly Criminalises People’ (Fair Trials, 30 January 2023) <https://www.fairtrials.org/articles/news/fair-trials-tool-shows-how-predictive-policing-discriminates-and-unjustly-criminalises-people/> accessed 13 June 2023.

37 FRA, Bias in Algorithms: Artificial Intelligence and Discrimination (Publications Office 2022) <https://data.europa.eu/doi/10.2811/25847> accessed 9 June 2023.

38 See also Brantingham, Valasik and Mohler (n 5).

39 Article 2 of the Equality Framework Directive and of the Race and Ethnicity Equality Directive (n 2); Article 21 of the ‘Charter of Fundamental Rights of the European Union’ (n 24).

40 In law enforcement, particular attention is paid to discrimination on the basis of race, colour, language, religion, citizenship or national or ethnic origin. See, e.g., European Commission against Racism and Intolerance (ECRI), ‘ECRI General Policy Recommendation N° 11 On Combating Racism and Racial Discrimination in Policing’ <https://rm.coe.int/ecri-general-policy-recommendation-no-11-on-combating-racism-and-racia/16808b5adf>. For a discussion of the case law of the ECtHR, see Bea Streicher, ‘Tackling Racial Profiling: Reflections on Recent Case Law of the European Court of Human Rights’ (Strasbourg Observers, 16 December 2022) <https://strasbourgobservers.com/2022/12/16/tackling-racial-profiling-reflections-on-recent-case-law-of-the-european-court-of-human-rights/> accessed 19 September 2023.

41 Fussey and Murray (n 21) p 40. See also Bacchini and Lorusso (n 23).

42 For a brief overview see European Commission, Directorate-General for Justice and Consumers, and European network of legal experts in gender equality and non-discrimination, ‘Algorithmic Discrimination in Europe: Challenges and Opportunities for Gender Equality and Non-Discrimination Law’ (Publications Office 2021).

43 FRA, ‘Under Watchful Eyes: Biometrics, EU IT Systems and Fundamental Rights’ <https://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-biometrics-fundamental-rights-eu_en.pdf>.

44 Mutale Nkonde, ‘Automated Anti-Blackness: Facial Recognition in Brooklyn, New York’ (2019) 20 7.

45 Interview with [R1EN], ‘AI Practitioner and Academic from Australia’ (2 November 2021).

46 It is interesting to note that a diversity audit of Google and Facebook’s AI research teams performed in 2019 found they had one and zero Black members respectively. Nkonde (n 44).

47 See, e.g., Christian A Meissner and John C Brigham, ‘Thirty Years of Investigating the Own-Race Bias in Memory for Faces: A Meta-Analytic Review’ (2001) 7 Psychology, Public Policy, and Law 3; Kathleen L Hourihan, Aaron S Benjamin and Xiping Liu, ‘A Cross-Race Effect in Metamemory: Predictions of Face Recognition Are More Accurate for Members of Our Own Race’ (2012) 1 Journal of Applied Research in Memory and Cognition 158.

48 Interview with [R4EN], ‘Academic from Australia’ (11 November 2021).

49 Interview with [R1EN] (n 45).

50 Researchers have suggested various methods to address possible data bias. See ‘Method Predicts Bias in Face Recognition Models Using Unlabeled Data’ (Amazon Science, 8 November 2022) <https://www.amazon.science/blog/method-predicts-bias-in-face-recognition-models-using-unlabeled-data> accessed 28 November 2022; Jean-Rémy Conti and others, ‘Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model’, Proceedings of the 39th International Conference on Machine Learning (PMLR 2022) <https://proceedings.mlr.press/v162/conti22a.html> accessed 28 November 2022.

51 FRA, ‘Facial Recognition Technology: Fundamental Rights Considerations in the Context of Law Enforcement’ (n 13); Buolamwini and Gebru (n 26).

52 There is a strong interconnection between machine learning and bias, and the term ‘inductive bias’ has been used in this regard. The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs for inputs it has not encountered. Tom M Mitchell, ‘The Need for Biases in Learning Generalizations’ [1980] Rutgers University, New Brunswick, NJ.
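As a toy illustration of this concept (not drawn from Mitchell’s paper), a learner whose inductive bias is the assumption of linearity can only generalize along a straight line, however the unseen inputs actually behave; the data and model choice below are illustrative assumptions.

```python
import numpy as np

# Training data actually generated by a quadratic relationship.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = x_train ** 2

# A learner whose inductive bias is 'the relationship is linear':
# fitting a straight line is the only hypothesis it can express.
slope, intercept = np.polyfit(x_train, y_train, deg=1)

x_new = 10.0                      # an input the learner has not encountered
prediction = slope * x_new + intercept
print(prediction)                 # 38.0, far from the true value of 100
```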

53 See Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women’ in Kirsten Martin (ed), Ethics of Data and Analytics: Concepts and Cases (Auerbach Publications 2022).

54 Interview with [R2EN], ‘European NGO Activist’ (2 November 2021).

55 Interview with [R8EN], ‘European NGO Activist’ (22 November 2021).

56 Fussey and Murray (n 21).

57 Interview with [R2EN] (n 54).

58 See, for example, Frank Edwards, Hedwig Lee and Michael Esposito, ‘Risk of Being Killed by Police Use of Force in the United States by Age, Race–Ethnicity, and Sex’ (2019) 116 Proceedings of the National Academy of Sciences 16793; Radley Balko, ‘Opinion. There’s Overwhelming Evidence That the Criminal-Justice System Is Racist. Here’s the Proof.’ (Washington Post, 2020) <https://www.washingtonpost.com/news/opinions/wp/2018/09/18/theres-overwhelming-evidence-that-the-criminal-justice-system-is-racist-heres-the-proof/>.

59 Garvie, Bedoya and Frankle (n 33).

60 Buolamwini and Gebru (n 26).

61 Emma Pierson and others, ‘A Large-Scale Analysis of Racial Disparities in Police Stops across the United States’ (2020) 4 Nature Human Behaviour 736.

62 Australian Law Reform Commission, ‘Over-Representation’ (ALRC, 2018) <https://www.alrc.gov.au/publication/pathways-to-justice-inquiry-into-the-incarceration-rate-of-aboriginal-and-torres-strait-islander-peoples-alrc-report-133/3-incidence/over-representation/> accessed 23 November 2022; Amnesty International, ‘The Overrepresentation Problem: First Nations Kids Are 26 Times More Likely to Be Incarcerated than Their Classmates’ (Amnesty International Australia, 8 September 2022) <https://www.amnesty.org.au/overrepresentation-explainer-first-nations-kids-are-26-times-more-likely-to-be-incarcerated/> accessed 23 November 2022.

63 Fair Trials, ‘Disparities and Discrimination in the European Union’s Criminal Legal Systems’ (Fair Trials, 2021) <https://www.fairtrials.org/articles/publications/disparities-and-discrimination-in-the-european-unions-criminal-legal-systems/> accessed 23 November 2022.

64 Vikram Dodd, ‘Met Police to Use Facial Recognition Software at Notting Hill Carnival’ The Guardian (5 August 2017) <https://www.theguardian.com/uk-news/2017/aug/05/met-police-facial-recognition-software-notting-hill-carnival> accessed 15 November 2022.

65 Interview with [R2EN] (n 54).

66 This paper focuses on the need to mitigate bias, so the analysis is limited to this aspect. On the overall need to regulate FRT in law enforcement see Vera Lúcia Raposo, ‘The Use of Facial Recognition Technology by Law Enforcement in Europe: A Non-Orwellian Draft Proposal’ [2022] European Journal on Criminal Policy and Research.

67 For EU guidelines focusing on data protection issues, see European Data Protection Board, ‘Guidelines 05/2022 on the Use of Facial Recognition Technology in the Area of Law Enforcement’ <https://edpb.europa.eu/system/files/2022-05/edpb-guidelines_202205_frtlawenforcement_en_1.pdf>. See also § 2.06 (Police Use of Algorithms and Profiles) in The American Law Institute, ‘Principles of the Law, Policing’ <https://www.policingprinciples.org/> accessed 14 June 2023.

68 Interview with [R6EN], ‘Academic from Germany’ (15 November 2021).

69 Daniel E Ho, Emily Black, Maneesh Agrawala and Fei-Fei Li, ‘How Regulators Can Get Facial Recognition Technology Right’ (Brookings, 17 November 2020) <https://www.brookings.edu/techstream/how-regulators-can-get-facial-recognition-technology-right/> accessed 28 October 2022. It should also be noted that training data collection and validation tests must be implemented in ways that respect human rights and ethics.

70 ‘A Policy Framework for Responsible Limits on Facial Recognition Use Case: Law Enforcement Investigations’ <https://www3.weforum.org/docs/WEF_A_Policy_Framework_for_Responsible_Limits_on_Facial_Recognition_2021.pdf>.

71 A bias impact statement template could be used for this purpose. See, e.g., Nicol Turner Lee, Paul Resnick and Genie Barton, ‘Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms’ (Brookings, 22 May 2019) <https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/> accessed 30 November 2022.

72 In this regard see also ibid.

73 R (on the application of Bridges) v Chief Constable of South Wales [2019] EWHC 2341 (Admin); [2020] 1 WLR 672; [2019] 9 WLUK 9 (DC). In this case, the claimant, Edward Bridges, challenged the legality of the South Wales Police Force’s use of automated facial recognition. Bridges claimed to have been caught on camera without warning and argued that the use of such technology was contrary to human rights law and data protection legislation and could give rise to discrimination.

74 For a detailed analysis of the case see Monika Zalnieriute, ‘Burning Bridges: The Automated Facial Recognition Technology and Public Space Surveillance in the Modern State’ (2021) 22 Columbia Science and Technology Law Review 284.

75 Directive 2014/24/EU of the European Parliament and of the Council of 26 February 2014 on public procurement and repealing Directive 2004/18/EC. OJ L 94, 28.3.2014, p. 65–242 2014.

76 FRA, ‘Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights’ (2019).

77 Zalnieriute (n 74).

78 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA. OJ L 119, 4.5.2016, p. 89–131 2016.

79 Article 5(1)(d) of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final, 2021.

80 This is also related to the way FRT work: whether used for authentication or identification, FRT do not provide a final result but instead suggest a probability that two faces (facial images) correspond to the same person.

81 Automated decision-making is also generally prohibited by Article 11 of the EU Law Enforcement Directive (n 78).

82 See, e.g., Rosamunde Van Brakel, ‘How to Watch the Watchers? Democratic Oversight of Algorithmic Police Surveillance in Belgium’ (2021) 19 Surveillance & Society 228.

83 See also the European Data Protection Board Guidelines, which call for training on data protection. They note that such training needs to explain the capabilities and limitations of the technology so that users can understand the circumstances in which it is necessary to apply it and the cases in which it can be inaccurate. European Data Protection Board (n 67).

Additional information

Notes on contributors

Agnė Limantė

Agnė Limantė is a chief researcher at the Law Institute of the Lithuanian Centre for Social Sciences. She received an MA in EU law from King’s College London (awarded the Prize for Best MA Dissertation in EU Law) and a PhD from Vilnius University, Lithuania. Dr Limantė is an expert in human rights and has published over 40 papers, including articles in national and international journals and book chapters. She also has extensive experience working in international teams and conducting comparative research. She was a team member of the project ‘Government Use of Facial Recognition Technologies: Legal Challenges and Solutions’ (FaceAI), funded by the Research Council of Lithuania and implemented in 2021–2023.
