
Legal contestation of artificial intelligence-related decision-making in the United Kingdom: reflections for policy

Pages 251-285 | Received 13 Jul 2021, Accepted 25 Oct 2021, Published online: 24 Nov 2021

ABSTRACT

This paper considers legal contestation in the UK as a source of useful reflections for AI policy. The government has published a ‘National AI Strategy’, but it is unclear how effective this will be given doubts about levels of public trust. One key concern is the UK’s apparent ‘side-lining’ of the law. A series of events was convened to investigate critical legal perspectives on the issues, culminating in an expert workshop addressing five sectors. Participants discussed AI in the context of wider trends towards automated decision-making (ADM). A recent proliferation in legal actions is expected to continue. The discussions illuminated the various ways in which individual examples connect systematically to developments in governance and broader ‘AI-related decision-making’, particularly due to chronic problems with transparency and awareness. This provides fresh and current insight into the perspectives of key groups advancing criticisms relevant to policy in this area. Policymakers’ neglect of the law and legal processes is contributing to quality issues in recent practical ADM implementation in the UK. Strong signals are now required to switch back from the vicious cycle of increasing mistrust to an approach capable of generating public trust. Suggestions are summarised for consideration by policymakers.

Acknowledgements

Ethics clearance was obtained through King’s College London research ethics systems (MRA-18/19-13307). The authors would like to thank all of the participants for their engagement in the discussions. Note that the participants in the workshop held a wide range of views; this paper does not reflect individual contributions, and any conclusions, statements and observations in the paper do not necessarily reflect the unanimous view of all the participants. Special thanks to Professor Frank Pasquale, Swee Leng Harris and Leigham Strachan.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 A-levels are subject-based educational qualifications typically conferred on school leavers aged 18 in parts of the UK (England and Wales, Northern Ireland). In 2020, Covid-19 ‘social distancing measures meant that the so-called “A-level” exams could not go ahead as planned. … In the absence of actual exams, Ofqual decided to estimate the A-level grades using an algorithm … [leading to a] massive public outcry and protest’ (Kolkman Citation2020).

2 CDEI is a UK government expert body enabling the trustworthy use of data and AI. It was set up in 2018, with a mandate to ‘help to maximise the benefits of data and Artificial Intelligence (AI) for our society and economy’. CDEI has been ‘tasked by the Government to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies’, including AI (CDEI Citation2018).

3 The Office of Qualifications and Examinations Regulation (Ofqual) is a non-ministerial government department that regulates qualifications, exams and tests in England.

4 These considerations have become more directly relevant to active policy formulation following the announcement of major data protection reforms by the UK Department for Culture, Media and Sport (DCMS) in September 2021 (DCMS Citation2021a). The consultation, ongoing at the time of writing, specifically included AI-related proposals advanced as measures to ‘build trustworthy … systems’ (notably Section 1.5, especially Paragraphs 81–113, Section 2.2, especially Paragraphs 165–9 and 174–7, and Section 4.4). Amongst other things, the UK government is considering abolishing the GDPR right to explanation altogether, following up on the recommendations of a ‘taskforce’ formed to ‘seize new opportunities from Brexit with its newfound regulatory freedom’ (TIGRR Citation2021). See the Conclusion for further analysis.

5 ‘Between 2000 and 2014, the [UK] Post Office prosecuted 736 sub-postmasters and sub-postmistresses – an average of one a week – based on information from a recently installed computer system called Horizon. Some went to prison following convictions for false accounting and theft, many were financially ruined and have described being shunned by their communities. Some have since died. After 20 years, campaigners won a legal battle to have their cases reconsidered, after claiming that the computer system was flawed’ (BBC Citation2021c).

6 The ICO is the ‘UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals’ (ICO Citation2021b).

7 ‘In July 2017, following reports concerning the use of Google DeepMind’s Streams application at the Royal Free NHS [National Health Service] Foundation Trust [(a large hospital in London)], the ICO announced that the processing of personal data within Streams was not compliant with the Data Protection Act 1998 – the relevant data protection law at the time’ (ICO Citation2019). Remedial actions ordered by the ICO were focused on the hospital, not the tech firm; and it was immediately clear that they were unlikely to act as much of a deterrent to future misuses of data (other than from a public relations point of view) (Shah Citation2017).

8 ‘The public sector equality duty is a duty on public authorities [in the UK] to consider or think about how their policies or decisions affect people who are protected under the Equality Act’ (Citizens Advice Citation2021).

9 The Home Office (or ‘Home Department’) is the UK’s ‘lead government department for immigration and passports, drugs policy, crime, fire, counter-terrorism and police’ (gov.uk website). Since 2015, the Home Office ‘has used a traffic-light [“streaming tool”] system to grade every entry visa application to the UK … . Once assigned by the algorithm, this rating plays a major role in determining the outcome of the visa application. [It was] argued this was racial discrimination and breached the Equality Act 2010’ (JCWI Citation2020).

10 ‘In 2014 a BBC Panorama documentary drew attention to fraud in the UK visa system, including cheating in English language tests. The Home Office revoked visas where there was evidence of cheating, but its decisions have come under renewed public and Parliamentary scrutiny … It was not possible for the Department to directly check the accuracy of ETS [Educational Testing Service] classification [which was the approach used by the commercial testing provider, including using voice recognition, to establish whether candidates cheated or not] … Individuals matched to invalid or questionable TOEIC certificates [have] won 3,600 First-tier appeals’ (NAO Citation2019).

11 Noting, for example, that the NAO’s recent reflections on 25 years of ‘digital change’ more generally highlighted that ‘underperformance can often be the result of programmes not being sufficiently thought through before key decisions on technology solutions are made’ (NAO Citation2021a).

12 We are also grateful to a reviewer of this paper for the observation that ADM has enjoyed notoriety in popular culture (for example, in the phrase ‘Computer says no’) for longer than the view that it is ‘early days’ for relevant legal contestation might suggest.

13 Equally, some might interpret the proposal to merge biometrics and surveillance camera oversight into the ICO (Q5.8.1-2) as a further, more formal discouragement of proactive regulation (see Discussion above).

Additional information

Funding

King's College London work for this paper was supported by the EPSRC under the Trust in Human-Machine Partnership (THuMP) project (EP/R033722/1). University of Essex work for this paper was supported by the Economic and Social Research Council under the Human Rights and Information Technology in the Era of Big Data (HRBDT) project (ES/M010236/1).