Research Article

Unveiling algorithmic power: exploring the impact of automated systems on disabled people’s engagement with social services

Received 01 Feb 2023, Accepted 29 Jun 2023, Published online: 13 Jul 2023

Abstract

This article examines the impact of algorithmic systems on disabled people’s interactions with social services, focusing on a case study of algorithmic decision-making in Australia’s National Disability Insurance Scheme (NDIS). Through interviews and document analysis, we explore future visions and concerns related to increased automation in NDIS planning and eligibility determinations. We show that while individuals may not fully comprehend the inner workings of algorithmic systems, they develop their own understandings and judgments, shedding light on how power operates through the datafication of disability. The article highlights the significance of addressing epistemic justice concerns, urging a reevaluation of dominant modes of understanding and assessing disability through algorithmic categorisation, while advocating for more nuanced approaches that acknowledge disability’s embeddedness in social relations. The findings have implications for the future use of algorithmic decision-making in the NDIS and disability welfare provision more broadly.

Points of Interest

  • Computer-based tools, or algorithms, are increasingly being used to assess who is eligible for public disability programs.

  • This research investigates disabled people’s concerns about the use of these tools. It uses a case study from Australia, where algorithmic assessments were recently proposed and trialled by the government.

  • Concerns about technology’s impact on resource allocation in disability support regimes include insensitivity to individual lived realities and reduction of disability to a score of bodily functionality.

  • To avoid harm to disabled people, the research recommends an approach to assessment that better addresses contextual factors and lived experiences of disability.

Introduction

As algorithmic systems increasingly shape many aspects of social life, they are also transforming the social services and institutions many disabled people rely upon.

We use the term “algorithmic system” to describe a computer-based structure that employs predetermined instructions (algorithms) derived from logical rules and mathematical calculations to analyse data. The results of this analysis can assist, influence, or even substitute human decisions. The use of algorithmic systems in disability provisioning has the potential to enhance accessibility, improve resource allocation, and ensure equitable distribution of support. However, it has also raised concerns about overreliance on automated systems that conceal biases and discriminatory practices through seemingly benign methods of social classification, risk evaluation, and prediction, with significant implications for how resources are distributed and services are made accessible to people with disability (Dencik et al. Citation2019; Dencik et al. Citation2018; Whittaker et al. Citation2019; Bennett and Keyes Citation2019).

The growing body of literature on algorithmic injustice presents a compelling argument for scrutinising the intended goals, underlying logics and values embedded in algorithmic systems (Birhane and Cummins Citation2019; Dencik et al. Citation2022; Redden, Dencik, and Warne Citation2020). The systems implemented by governments are considered among the most consequential, due to their role in facilitating access to vital services that directly impact people’s well-being and future prospects (Eubanks Citation2019; Saxena et al. Citation2021). These decisions carry high stakes. Yet, influential as these systems are, they remain largely hidden from public scrutiny because the average person lacks access to the government infrastructures in which they are embedded. This lack of transparency amplifies concerns about the possible harms of algorithmic decision-making (ADM), particularly for marginalised and vulnerable populations.

As algorithmic systems continue to play a growing role in mediating interactions with social services, their presence in those interactions is becoming more noticeable and consequential. Bucher notes that “algorithms make themselves known even though they often remain more or less elusive” (Bucher Citation2018: 94). While we may not be able to study algorithms directly, we can study the contexts and processes through which they are “put to work” (Lomborg and Kapsch Citation2020). We can also examine the ways people experience the impact of algorithms, their thoughts, feelings and knowledge of algorithmic processes, and how their encounters with algorithms shape their perceptions of power, fairness and justice in areas of high stakes decision-making (Lomborg and Kapsch Citation2020). Indeed, the exploration of how algorithmic tools are reshaping public domains, often perpetuating historical inequalities, has become an expanding field of interest for critical algorithm and data justice scholars (Birhane and Cummins Citation2019; McQuillan Citation2021; Singh Citation2021; Redden, Dencik, and Warne Citation2020).

In the critical discourse surrounding this topic, disability has largely been marginalised and overlooked. Over the last decade or so, disability studies has witnessed numerous significant contributions focusing on disability, technology, and new media (Alper Citation2017; Ellcessor Citation2016; Goggin Citation2018; Ellis and Kent Citation2017), including work foregrounding the importance of artificial intelligence (AI) and big data for assistive technology (Wangmo et al. Citation2019), accessibility (Ellcessor Citation2016; Hamraie Citation2013) and digital inclusion (Goggin, Ellis, and Hawkins Citation2019; Tsatsou Citation2021). While these works provide valuable insights into issues of accessibility, inclusive and participatory design, and critical perspectives on technology, they have not yet addressed the widespread consequences of algorithmic decision-making for people with disability (Goggin and Soldatić Citation2022). In a recent report, the United Nations Human Rights Council made a telling remark about how disability is typically discussed in relation to technology. Often, when the two come together, “one… naturally think[s] of accessibility”, the core concern being “making… new technology accessible to, and useable by, persons with disabilities” (Quinn Citation2021: 10-11). This limited viewpoint often neglects more expansive conversations about disability justice, encompassing topics such as rights, justice, oppression, extractivism and exploitation (Banner Citation2019) – topics frequently explored in fields of cultural and media theory, critical data studies, and related disciplines. Consequently, while racism, sexism, and class are routinely considered within the context of these broader issues, the recognition of disablism and ableism remains lacking.

This article aims to advance the discussion by exploring how people with disability experience and make sense of the algorithmic systems that mediate their interactions with social services. We explore this through a case study of algorithmic decision-making in Australia’s National Disability Insurance Scheme (NDIS). This case study centres around a recent proposal by the Australian government to modify the assessment process for determining eligibility for disability support funding. The proposed change involved using a computer algorithm to analyse and categorise individuals based on data obtained through a new independent assessment process. In the past, these assessments were typically conducted by doctors, but under the new system they would be performed by government-contracted "independent" healthcare professionals (National Disability Insurance Agency, hereafter, NDIA Citation2021). The data collected through the in-person assessments would be converted into a numerical score and combined with other data points to determine both eligibility for NDIS access and the allocation of funding for the participant’s support services. Public concerns about the proposal materialised in a nationally significant disability-led campaign against independent assessments. Eventually, the government backed down from its reforms, citing the “concern and fear” that arose from the expectation of increased automation (Australian Government Citation2021a: 2). Although it considered these fears unwarranted, the government ultimately failed to gain the political and public support it needed to pass the legislative changes.

Our analysis of this case study takes up a series of related questions: what future visions are evoked by the increased automation of NDIS planning and eligibility determination? What do these visions reveal about the current and anticipated dangers of ADM within social services generally and disability provisioning in particular? What are the perceptions of people with disability regarding the impact of algorithmic systems on their interactions with the NDIS? Furthermore, what are the current and anticipated social justice implications associated with the implementation of these systems in this specific context? By employing a methodological approach involving interviews and analysis of relevant documents, our study found that even if they do not fully comprehend the inner workings of algorithmic systems, people develop their own understandings, perceptions and “imaginaries” of algorithms (Bucher Citation2017). Through this process, they form judgments that provide insights into how power is seen to operate through algorithms to datafy disability in particular ways.

A key concern for people with disability and their representative organisations relates to the ways in which algorithmic systems of categorisation privilege stable, hegemonic ways of knowing, measuring and living with disability. We suggest that this is a form of epistemic injustice highlighting the need for more nuanced regimes of legibility that move beyond the medical to incorporate the embeddedness of disability in social relations and structures. By legibility we refer to the ways in which the administrative state represents the people, places and relations it seeks to govern, “bringing them into being as data points of information, labelling them and categorizing them in such a fashion that they may ‘be centrally recorded and monitored’” (Scott 1998:2, quoted in Logan Citation2017: 5). This process involves using technology to sort human bodies. We argue this creates a specific type of epistemic injustice wherein the individual’s own claims or knowledge of their body are undermined by interpretations of reality based on data. In its original formulation, epistemic injustice refers to wrongs done to people in their capacity as knowers – having their knowledge discredited or dismissed because of prejudices against the group to which they belong (Fricker Citation2007). Epistemic justice as we apply the concept refers to the legitimacy afforded to the situated knowledge of people with lived experience of disability.

The paper begins with a discussion of the datafication of social welfare organisation in general and the human rights concerns this has raised, before turning to focus on disability. We then take the NDIS and the attempted introduction of independent assessments as an empirical case study. Through qualitative analysis of key informant interviews and documentary sources from a parliamentary inquiry into independent assessments, we identify three distinct themes that articulate social, ethical and political concerns: (i) the datafication of disability, and the potential (ii) epistemic and (iii) distributive injustice resulting from increased automation. Finally, we consider the implications of these findings on the future use of ADM in the NDIS and disability welfare provision more broadly.

Social protection in the digital age

Governments have long utilised algorithmic decision support tools to organise and streamline decision-making processes, leading to a gradual bureaucratisation and standardisation of various aspects of social provision, including disability benefits and services (van Toorn Citationforthcoming). The combination of recent technological advances in computing power and the rapid expansion of big data has amplified this trend. As a result, many areas of disability provisioning have undergone a significant transformation, with a heightened dependence on algorithmic systems. Resources are increasingly allocated through algorithmic systems of profiling, classification and risk prediction that use digital data to classify citizens, determine their eligibility for assistance, and predict and surveil their behaviours (Alston Citation2019; Mann Citation2020). Among policy makers and researchers, there is growing interest in these practices of “citizen scoring” (Dencik et al. Citation2018; Redden, Dencik, and Warne Citation2020): “the use of data analytics in government for the purposes of categorisation, assessment and prediction at both individual and population level” (Dencik et al. Citation2019: 3). Citizen scoring takes insights and techniques developed in the private sector and applies these to public sector settings, with the aim of better understanding people’s needs, allocating resources more efficiently, and aligning social service delivery with ideals of fairness, consistency and personalisation.

But, as highlighted earlier, a growing body of research shows that such data-driven approaches have the potential to cause significant harm. Apart from widening the potential for heightened surveillance through data (Mann Citation2020; Bielefeld, Harb, and Henne Citation2021), citizen scoring risks reducing people to datafied versions of themselves (McQuillan Citation2021), deepening social inequalities and entrenching moral hierarchies of deservingness (Eubanks Citation2019). Fundamentally it is incapable of accounting for the diversity and complexity of lived experience, as we will see later in the case of independent assessments.

Human rights concerns

The world’s foremost human rights bodies have voiced concerns about emerging digital technologies driven by big data. A recent report by the Australian Human Rights Commission (Citation2021), for example, examined the human rights implications of “AI informed administrative decision-making”. It concluded that governments’ use of ADM systems raises legal, privacy and security concerns, alongside fundamental questions about fairness, equity and public confidence in government administration. Such systems, it suggested, can threaten human rights, for example by impeding access to social entitlements and thereby denying people an adequate standard of living, by enacting and reinforcing systemic discrimination, and by withholding effective legal remedies when rights are violated. Interestingly, in this report disability was a major theme in considering rights to access new technologies, and in discussions of technology-enabled discrimination more broadly. However, the report paid little attention to the infringements on disability rights caused by algorithmic or automated decision-making systems. Despite disability being a protected attribute under Australia’s anti-discrimination laws, the report failed to address disability considerations in discussions surrounding automated decision-making in areas such as social security, criminal justice, and immigration. This omission is concerning, especially considering the heightened potential for intersecting discrimination based on race, gender, class and dis/ableism in these domains (Newman-Griffis et al. Citation2022). A subsequent report by the United Nations Human Rights Council stated that “[w]hile there is a growing awareness of the broad human rights challenges that these new technologies can pose, a more focused debate on the specific challenges of such technology to the rights of persons with disabilities is urgently needed” (Quinn Citation2021: 1).

One of the strongest human rights critiques of the “digital welfare state” has come from the United Nation’s Special Rapporteur on extreme poverty and human rights, Philip Alston. In his 2019 report to the Human Rights Council, Alston emphasised how digital transformation undermines the fundamental values that once underpinned the welfare state. He argued that “the use of algorithms and artificial intelligence have, in at least some respects, facilitated a move towards a bureaucratic process and away from one premised on the right to social security or the right to social protection” (Alston Citation2019: 14). When these technologies are used to assess eligibility, he noted, the activity “can easily be transformed into an electronic question and answer process that almost inevitably puts already vulnerable individuals at even greater disadvantage” (Alston Citation2019: 15). Alston and others note the politically-driven nature of these developments (Redden, Dencik, and Warne Citation2020; Gillingham and Graham Citation2016; Schou and Hjelholt Citation2019), suggesting that automation strengthens efforts to restrict access to welfare, reduce spending on social programs, and enforce discipline and behavioural change among welfare recipients. These are guiding imperatives for neoliberal governments worldwide and inevitably end up defining the very problems for which technological solutions are sought. They influence the development of algorithmic tools and processes, as well as the social and political contexts in which these tools are deployed (Newman-Griffis et al. Citation2022).

Disability

Given this, it is not surprising that neoliberal welfare states have moved so readily to adopt algorithmic tools of disability need and eligibility assessment. By 2020, 41 states in the United States used one or more such tools (Center for Democracy & Technology Citation2020). The United Kingdom has a long history of welfare “reform” involving various technologies of disability classification and social sorting (Grover and Soldatić Citation2013; Roulstone Citation2015), and in the last two decades or so has moved to outsource much of this work to the IT industry (van Toorn Citationforthcoming; Grover Citation2014).

While these countries use different technologies to certify, quantify and classify disability for administrative purposes, they do so according to a common logic. Algorithmic systems generally strive for standardisation and “maximum calculability” in the ways they seek to code disability in computer legible forms (Henman and Dean Citation2010: 78). To make the concept of disability operable for a computer requires it to be “transcoded into discrete, processable elements” (Cheney-Lipold Citation2017: 53). As Sheila Jasanoff puts it, “the messy reality of people’s personal attributes and behaviours [need to be converted] into the objective, tractable language of numbers” (Jasanoff Citation2004: 27). But disability is a fluid, contested and nuanced concept that does not lend itself to easy quantification. From social attitudes, structures and power differentials, to unaccommodating environments, to differences in sex, gender, sexuality, culture and class, the mediating factors that shape individual experiences of disability are rarely captured by algorithmic calculations in any meaningful way. As we discuss below, for many disabled people this systematic erasure of lived experience is one of the most troubling and discriminatory aspects of algorithmic social sorting.

Defining disability

There is a rich history of critical disability theory on who counts as disabled, how that is decided, and why it is important. As Deborah Stone argued in her 1984 book, The Disabled State, defining the disability category and controlling its boundaries is a techno-political problem. By tracing the ways in which states expand and contract the disability category in response to social and fiscal pressures, Stone’s historical analysis challenged the medical model, which defines disability as “a property of the individual body” (Siebers Citation2008: 25), showing how disability – and the criteria used to determine disability status – function politically to resolve the problem of who should qualify for social aid. These coded categories can be thought of as a sorting process, “invisible doors that permit access to or exclude” people from systems of social protection (Lyon Citation2003: 13). Traditionally, this process relied on standardised tools such as rule-based decision trees, mathematical formulas, and statistical benchmarks. However, more recently, it has been digitalised, with computer algorithms encoding categories of disability and deservingness (van Toorn, forthcoming). As benign as they may seem, these tools of algorithmic classification and scoring therefore perform a vital function for states seeking “either to expand, legitimate and fund disability welfare or to define and shrink the disability category to make it much harder to access” (Roulstone Citation2015: 675; Soldatić Citation2019; Soldatić and Fitts Citation2020).

While there is now a growing body of critical analysis in response to the rise of ADM in social services, as highlighted above, there has been little exploration of disability welfare provisioning as a site of algorithmic social sorting. There is therefore a need for work that centres disability in these broader critical debates. Further empirical analysis is needed to understand how, and with what effects, algorithmic systems of welfare state decision-making encode specific conceptualisations of disability. ADM represents a significant lacuna in critical disability scholarship on the welfare state. This is mirrored in critical disability studies more broadly, where, as Goggin and Soldatić argue, the “consideration of ADM, disability, and digital inclusion is an urgently needed research endeavour” (Citation2022: 386). Any such endeavour needs to be attuned to the broader power dynamics and political economies of disability and technology to fully grasp both the current drivers of ADM and its likely future trajectories (Goggin Citation2018).

In the rest of this paper, we explore the algorithmic social sorting of disabled bodies by state decision-making infrastructures. Our findings suggest that disability is made legible through the use of proxy concepts from health and medicine, which has a number of political and ethical implications. For instance, the concept of functional capacity, while amenable to easy computation, reveals very little about the social dynamics of disablement. We show that this approach to the quantification of embodied difference overlooks and misrepresents the realities of disabled lives, while obscuring issues of social injustice and inequality. It also points to issues of epistemic injustice arising from the ongoing privileging of biomedical technoscientific approaches over alternative knowledge systems, including social and relational models of disability.

Background to the case study of algorithmic NDIS assessments

Prior to describing our methodological approach, it is important to provide some contextual detail about our case study. The NDIS is a publicly funded, national scheme through which Australian citizens and permanent residents can access disability support services, provided they meet the age and disability requirements. While the scheme is not means tested, individuals must have significant and permanent disability to be eligible. Only around 1 in 10 people with disability meet the eligibility threshold: the rest are expected to draw on informal care arrangements and other community-based support systems. Demand outstrips supply: in some parts of Australia, for example, there are no publicly subsidised home care services for the 90% of people with disability who do not qualify for NDIS funding (Disability Advocacy NSW Citation2018). Hence, determinations of NDIS eligibility are high stakes, life altering decisions.

When the concept of independent assessments utilising an algorithm-based resource allocation tool was initially put forward, both the NDIA and the Federal government provided various justifications for the change. In the existing process, applicants were evaluated by their treating specialist or doctor, whose medical assessment of the “severity” of disability was converted into a rating. This rating served as one of the data points used by the NDIA in decision-making regarding eligibility and support planning (Australian National Audit Office Citation2020). Ostensibly, a central aim for the new proposal was to enhance fairness. The government claimed that using private assessors contracted by the government for a standardised assessment process would enhance funding consistency and reduce regional disparities in the allocation of NDIS resources (Australian Government Citation2021). Similar claims about fairness were made in relation to human bias in decision-making. Treating doctors and allied health professionals were described as prone to exaggerating their clients’ disabilities to increase their chances of obtaining funding (NDIA Citation2020). The then Minister for the NDIS claimed that independent assessments would reduce the potential for “sympathy bias” in a scheme that was too reliant on “individual public servants’ judgement and their natural empathy” (Australian Government Citation2021b: 32). Outsourcing and automating assessments was an attempt to bring fiscal discipline and efficiency to the process and to address grievances related to equity and fairness.

The proposed “independent” assessment process meant applicants would still have to meet the disability threshold, but the test would be carried out in-person by private contractors using standardised tools in the form of questionnaires. The tools themselves had been developed internationally for use in clinical contexts, to measure the impact of a person’s disability on their functioning. For example, one of the tools, the World Health Organisation Disability Assessment Schedule (WHODAS), was developed as an instrument “to produce standardized disability levels and profiles… [a]pplicable across cultures, in all adult populations” (World Health Organisation Citation2022). During a WHODAS assessment, a person is scored based on their answers to a generic questionnaire covering six “domains of functioning” (cognition, mobility, self-care, getting along, life activities, participation). Within each domain the different items are scored and summed, with the final figure taken to represent the “severity” of a person’s disability on a scale of “none” to “extreme” (World Health Organisation Citation2022). It is important to note that these tools were not originally designed for the task of optimising resource allocation; they were appropriated by the NDIA for that purpose under the independent assessment program, without seeking input from end users. Assessment scores were to be digitalised and the data fed into a computer program, along with other demographic data, to classify the person into one of 400 categories or profiles. These profiles grouped statistically similar NDIS participants based on age, disability type, and functional capacity score (NDIA Citation2021). Through this process the algorithm would determine whether NDIS access criteria for eligibility were met and, if so, the level of funding an applicant would receive.
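To make the mechanics of this pipeline concrete, the sketch below reconstructs it in Python in a deliberately simplified form. Everything specific in it (the scoring cut-offs, age bands, profile labels and the idea of a budget lookup table) is our own illustrative assumption; the NDIA never published the actual instruments’ weightings, the composition of the 400 profiles, or the algorithm itself.

```python
# Illustrative sketch only: a minimal reconstruction of the assessment-to-budget
# pipeline described above. All names, cut-offs, bands and profile definitions
# are hypothetical; the NDIA's actual tools and algorithm were not made public.

from dataclasses import dataclass
from typing import Dict, List

# The six WHODAS "domains of functioning" referred to in the text.
WHODAS_DOMAINS = [
    "cognition", "mobility", "self_care",
    "getting_along", "life_activities", "participation",
]

@dataclass
class Applicant:
    age: int
    disability_type: str
    domain_item_scores: Dict[str, List[int]]  # domain -> questionnaire item scores

def functional_capacity_score(applicant: Applicant) -> int:
    """Sum item scores within each domain, then across domains (simple-sum scoring)."""
    return sum(sum(items) for items in applicant.domain_item_scores.values())

def severity_band(score: int) -> str:
    """Map a summed score onto a 'none' to 'extreme' band (cut-offs invented here)."""
    if score < 10:
        return "none"
    elif score < 25:
        return "mild"
    elif score < 45:
        return "moderate"
    elif score < 70:
        return "severe"
    return "extreme"

def assign_profile(applicant: Applicant, score: int) -> str:
    """Place the applicant into one of the profiles grouping 'statistically similar'
    participants by age band, disability type and functional capacity score."""
    age = applicant.age
    age_band = "0-6" if age < 7 else "7-17" if age < 18 else "18-64" if age < 65 else "65+"
    return f"{age_band}|{applicant.disability_type}|{severity_band(score)}"

def typical_budget(profile: str, reference_budgets: Dict[str, float]) -> float:
    """Look up the 'typical plan budget' attached to the profile (figures hypothetical)."""
    return reference_budgets.get(profile, 0.0)
```

Even in this stripped-down form, every step (which items are asked, how they are summed, where the severity cut-offs fall, how profiles are defined, and what budget attaches to each) rests on a judgment rather than a purely scientific procedure, a point to which witnesses at the inquiry repeatedly returned.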

Methodological approach

We began our analysis by exploring the current views held by a range of civil society actors on the algorithmic social sorting of people with disability. Our study drew on two main sources of information: (i) interviews with key informants from the disability activist and advocacy movement and NDIA officials who had been involved in the independent assessment trials; (ii) public submissions and Hansard transcripts of evidence given by witnesses to a 2021 parliamentary inquiry into independent assessments. We were particularly interested in comments relating to informants’ perceptions of fairness and existing or anticipated injustices. The inquiry, led by eight federal members of parliament constituting the Joint Standing Committee on the NDIS, held eight public hearings from April to August 2021 and collected 376 submissions (Australian Government Citation2021). Witnesses included government officials from within the NDIA and the Department of Social Services, civil society representatives speaking on behalf of Disabled People’s Organisations and other disability, First Nations, migrant and legal advocacy groups, and individual people with disability and family members speaking in a private capacity. Given the lack of published information about the specifics of the assessment process (e.g. the exact tools that would be used, the composition of the 400 profiles, and the nature of the sorting algorithm), testimony from public officials was an important source of evidence for our analysis. The Hansard transcripts also provided access to the opinions of disabled people and their representative organisations on both operational aspects of the proposed algorithmic system and its anticipated socio-political implications.

Eight key informant interviews were conducted between January and July 2022. The interviewees comprised five representatives from disability organisations, two current or former staff members of the NDIA, and one person who was both a disability advocate and a former NDIA staffer. Ethics clearance was obtained from the University of New South Wales in November 2021 [HC210810]. An initial scoping exercise identified key informants who had unique insight into the independent assessment model, either as government “insiders” involved in its development and trial phases or as advocates representing the concerns of people with disability via Disabled People’s Organisations. All interviews were conducted online via Zoom by author A, using an interview guide with a set of prompts designed to facilitate a semi-structured, exploratory discussion. The guide contained questions about the initial conceptualisation, design and purpose of independent assessments, the benefits and risks of algorithmic approaches to assessment, and the disability movement’s response to the proposal. With the consent of the participants, the interviews were recorded and transcribed verbatim.

Our analysis of the interview, Hansard transcript and public submission data was an iterative process resembling Ritchie, Spencer, and O’Connor’s (Citation2003) method of framework analysis. Key themes were identified inductively from the data to create an analytical coding framework. The evidence for these themes was then charted onto a framework matrix to understand the relationship between themes and subthemes, and to organise data extracts, which we use selectively in the text to illustrate our main points of discussion. All excerpts have been de-identified using a coding system containing high-level, general information about their professional context.
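The charting step described above can be pictured as assembling a matrix with themes as rows and data sources as columns. The sketch below is purely illustrative: the themes, source labels and extracts are invented placeholders rather than study data, and the matrix is a simplified stand-in for the framework matrix used in the analysis.

```python
# Minimal sketch of the charting step in framework analysis, assuming transcripts
# have already been coded. All entries below are placeholders, not study data.

import pandas as pd

coded_extracts = [
    {"source": "Disability advocate (interview)", "theme": "datafying disability",
     "extract": "de-identified extract ..."},
    {"source": "NDIA representative (interview)", "theme": "promise of objectivity",
     "extract": "de-identified extract ..."},
    {"source": "Inquiry witness (Hansard)", "theme": "epistemic injustice",
     "extract": "de-identified extract ..."},
]

df = pd.DataFrame(coded_extracts)

# The framework matrix: themes as rows, sources as columns, extracts in the cells.
framework_matrix = df.pivot_table(
    index="theme", columns="source", values="extract",
    aggfunc=lambda extracts: " | ".join(extracts),
)
print(framework_matrix)
```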

Findings

During our study it became increasingly clear that people’s thinking about ADM – whether as it currently operates, or how it might operate in the future – takes the form of a social imaginary; that is, a collectively held vision of a technological future, shaped by the shared experiences, concerns, and aspirations of individuals within a given society or community (Lehtiniemi and Ruckenstein Citation2019; Bucher Citation2017). We anticipated this for the discussion of the future of algorithmic decision-making; what we realised in the course of the analysis is that the invisibility of ADM means that for most people, their discourse on the present of ADM is an imaginary as well.

During the public inquiry into independent assessments and in our interviews with key informants, advocates and critics speculated on the possible benefits and risks of an algorithmic approach. More favourable imaginaries highlighted its simplicity. An NDIA official, for example, described how the assessment would proceed in a series of steps: “So functional capacity in. …again there’s an assessment for that… algorithm runs. Spits out what the budget should be” (NDIA senior representative #1). Here, the algorithm holds the promise of transforming decision-making for the better by reducing complexity and enhancing objectivity. The presumption was “it would have been better. It would have been simpler. We wouldn’t have to talk to people every year. We’d add facts to it. It would be objective” (NDIA senior representative #1).

Nevertheless, civil society organisations appearing at the public inquiry – including disability, carer and migrant advocacy groups, Aboriginal and Torres Strait Islander organisations, disability service providers, community legal centres, and Disabled People’s Organisations – were unanimous in their opposition to independent assessments. Among the most prominent critics was the former inaugural chair of the NDIS and architect of the scheme, Bruce Bonyhady, who stated that:

there is a suggestion that these tools are being put together scientifically. They’re not. In fact, what is happening is that there is a whole series of judgements. Which tools should be used? … how [should] the tools be weighted. … You then have to decide what questions we’re going to ask… These are all judgements. There’s nothing scientific in this… nothing objective (Australian Government Citation2021c: 2-3).

A further critique made by both Bonyhady and several civil society groups was that scientific objectivity was not merely an impossible goal, but was also something potentially more insidious: a cover for harmful practices of social sorting that bestow or deny opportunities to people based on flawed algorithmic representations of disability.

It was Bonyhady who first proposed the term roboplanning as an alternative framing of independent assessments. In his testimony at the inquiry, Bonyhady explained “The reference to ‘roboplanning’ is because it’s exactly analogous with robodebt [an automated debt recovery system in Australia that wrongly issued debt notices to welfare recipients based on flawed income averaging, leading to widespread criticism and legal challenges]. This is the application of mathematical formulas in ways that they should never be used” (Australian Government Citation2021c: 7). The term gained currency within disability and policy circles and circulated more widely in the mainstream media and on social media, reappearing in hashtags such as #RoboNDIS. While in opposition, the then shadow NDIS Minister, Bill Shorten, accused the government of leading “a mad rush to trade a whole vital public service for a human-free Robo system” that was “more concerned with data-points than people” (Shorten Citation2021, n.p.).

Eventually, roboplanning became a common reference point from which people drew their own interpretations. Other speculative accounts of roboplanning expressed similar anxieties about shifting the locus of control from people to technology. As the chair of the NDIS committee put it, “In some ways, this sort of move—moving away from the voices of people with disability directly impacting what’s in their current [NDIS] plan to the computers working it out based on a formula—is a radical change” (Australian Government Citation2021d: 8). Roboplanning was seen as a harbinger of what was to come, should AI continue to transform the machinery of government in ways that decreased democratic control. In a submission to the inquiry, a critic speculated that:

in a not too distant future scenario, a participant would be served a robo-plan, arising from an Independent Assessment, with the robo-plan services transacted using blockchain programmable “smart money”. Far from participant choice and control… no transparency to the robo-plan algorithms or rules… (Johnson Citation2021: 12).

She continued with a warning that, “[w]hether by intention or inadvertence, this is a dangerous future emerging without governance or ethics. … Algorithm generated robo-plans arising from the Independent Assessments are the first step” (Johnson Citation2021: 13).

Within these general concerns we identified three specific recurring themes or critiques. The first relates to the capacity of algorithmic categorisation to capture the lived experience of disability: datafying disability. The second relates to the limited understandings of disability encoded in algorithmic systems, which mean they fail to capture the social-relational dynamics of disablement and the ways in which disability intersects with other markers of embodied difference: a form of epistemic injustice. The third concerns the material effects of this partial representation; that is, how algorithmic representations of disability function politically to mediate the distribution of resources and power in society: distributive injustice.

Datafying disability

A common criticism of algorithmic systems is that in transforming the world into computer legible formats (data), they diminish its complexity (Cheney-Lipold Citation2017; McQuillan Citation2021). This process of datafication necessarily involves the reduction of phenomena into abstract entities, including categories that help make sense of data. When individuals and their data are grouped in this way, something is lost: human experience is rendered into clusters of data points, its richness stripped away for the purposes of generalisability, prediction, and decision-making.

In the case of roboplanning, data from the disability assessments were to be processed algorithmically so that post-assessment, individuals would be automatically placed into one of 400 categories, which would in turn determine their funding allocation. According to the NDIA, the categories would be differentiated by a combination of “disability type, age and a range of other factors that enable us to work out what a typical plan budget should be” (Australian Government Citation2021b: 27). Given the paucity of publicly available information, it is difficult to know exactly which variables were to be used and how they were to be weighted. Insights about the process can be obtained from the experiences of around 4270 individuals who voluntarily underwent independent assessments as part of two pilot programs conducted by the NDIA from 2019 to 2021 (NDIA Citation2021a). The Disabled People’s Organisations that participated in our study reported on the experiences of their members who had participated in these pilot programs. These experiences offer a glimpse of how independent assessments were intended to operate and the methods used to datafy disability.

DPO representatives criticised the assessment process, stating that it failed to adequately capture the nuance and diversity of lived experiences of disability. Many perceived it as principally concerned with the measurement of functional impairment, with social and environmental factors receiving relatively less consideration. Relaying the views of members with vision impairment, one DPO representative stated:

our members that had been involved in the trials weren’t particularly happy. …some of the methods [used] to try and identify the functional impact of their disability [were] not really nuanced enough to be aligned to what their experiences of blindness were (Disability advocate #2).

Part of the justification for the new assessment tools was that they were “disability agnostic” (that is, universally applicable across all disabilities) and would “work equally well for every person in every situation” (Australian Government Citation2021c: 3). These claims about the universality of the instruments were criticised for subsuming people of different cultural and ethnic backgrounds, and with different disabilities, under a single classification rubric.

Roboplanning was also seen to both project and construct a certain normative account of disability that privileged some aspects and ways of living with disability while neglecting others. First Nations representatives, for example, spoke of “culturally inappropriate measures” of disability that were “designed in a non-Aboriginal community and [did] not translate at all” to the contexts in which Aboriginal and Torres Strait Islander people live, particularly in remote areas:

If people here [in outback Western Australia] want fresh fruit and vegetables, they grow it. They pick it from their garden. They go and collect bush foods. One of the [assessment] items is: do they wash the fruit? That’s just culturally inappropriate because they’re not buying it from a supermarket. They ask if they take their shoes off when they go into the house, but most kids from zero to 12 don’t wear shoes. How does a parent answer that—’I don’t know; they don’t wear shoes’? (Australian Government Citation2021e: 12).

Aside from these cultural nuances, roboplanning was also perceived as indifferent to embodied experiences of disability that differ along gender, sexuality, and class lines. One interviewee, for instance, noted that issues of safety and bodily autonomy were not appropriately recognised in the case of women with disability:

Particularly what we see is where it’s intersectional, like say safety as a woman travelling in an environment and then safety as someone with disability, that that intersectional cross is not captured in that sort of automated process (Systemic advocate #2)

Others drew attention to the unique patterns of disadvantage and difference that affect subgroups of people living with disability and/or chronic illness:

what we’re missing here is the intersectional disadvantage happening for people and the compounding elements that are often seen with people with a disability—if they have chronic health conditions at the same time, for example—and how that can be captured (Australian Government Citation2021d: 3)

A witness to the inquiry, 18-year-old Gi, highlighted their transgender status as an intersectional stigma that bears strongly on their experience of disability. For Gi, resisting stigma meant fighting for the right not to be “frozen in time at any given moment” according to fixed understandings of gender, sexuality, or disability (Australian Government Citation2021c: 17). But algorithmic systems, depending as they do on categorisation, inevitably do freeze a life experience in time. This denies a person’s unique context and identity that for many people are characterised by change and fluidity:

…how the NDIA thinks they can complete an assessment without acknowledging a person’s contextual environment. Will it be acknowledged that I am transgender and that my support needs will differ because of this? (Australian Government Citation2021c: 17-18).

Other statements point to the ontological problem of inferring the nature and impacts of a person’s disability from insufficiently nuanced, decontextualised data:

…if the questions are wrong, they’re not nuanced enough, if the artificial intelligence isn’t intelligent enough, if you don’t know the questions and the context within which they’re … and you put in a whole lot of [data], and then once you filter it through the algorithm and all the rest of it, you are some distance away from the reality of the experience of the human being (Disability advocate #5).

Stakeholders are also concerned that the tools do not adequately account for the context of the person’s life. What level of community inclusion do they experience? How do their relationships with their family enable their goals and objectives? What are their relationships like? How is their ability to get a job influenced by that contextual factor? While there are questions that would go to some of those details, the richness of the person’s community context is not captured by the tool (Australian Government Citation2021e: 17).

These comments call into question the capacity of algorithmic processing to meaningfully incorporate important contextual information that allows funded supports to be tailored to individual circumstances. They note the difficulties that current computational methods have in attending to unique contexts, needs, and intersectional identities. This problem arises where disability interacts with other embodied subjectivities with the result that disablement is structured and conditioned by ongoing traumas of colonisation and racism; by the varied manifestations of misogyny, transphobia and homophobia; and by the collective harms of the exploitative labour relations and anti-welfare ideologies of neoliberal capitalism. Scoring algorithms do not measure disability’s intricate entanglement with these interlocking power structures.

At an individual level the presence of multiple disabilities, fluctuating disabilities, and invisible, rare, or poorly defined conditions for which there are no reliable measures, can also easily confound a roboplanning-type model of assessment.

…where [a person has] got multiple disabilities… how they interact is quite unique for that particular individual. Even with something like deaf blindness, where you’ve got an interaction between the hearing impairment and the vision impairment, that looks quite different for two people with deaf blindness. So a sort of aggregate or a stereotype has been applied to that person that’s not necessarily capturing everything about their experience (Disability advocate #2).

The ability of almost 500,000 people to be easily categorised into 400 separate boxes is a concern (Australian Government Citation2021d: 3).

Epistemic injustice in algorithmic systems

Against a conventional narrative of continual progress through technological innovation, the metaphor of roboplanning expressed an alternative vision: technology generating a deeply untrustworthy data infrastructure that neither recognises nor responds to the realities of disabled lives. It also contained the seeds of a wider critique of algorithmic disability classification. It was considered problematic not just that people were being algorithmically classified, but also that the classification itself reflected hegemonic beliefs regarding the nature of disability. For those potentially subject to roboplanning, this was a matter of epistemic injustice (Fricker Citation2007). As media scholar Wendy Chun (Citation2021) reminds us, algorithmic systems exert power over individuals by defining the very terms on which they are clustered, sorted and labelled. Often, “our data is assigned categorical meaning without our direct participation, knowledge or acquiescence” (Cheney-Lipold Citation2017: 5). A distinctly algorithmic form of epistemic injustice occurs when meaning is assigned to data in ways that marginalise or exclude the perspectives of the knower, denying them the right to be an equal epistemic agent in making sense of their own realities (McQuillan Citation2021).

Roboplanning can be viewed as epistemically unjust in that it reflects and produces representations of disability contrary to disabled people’s own understandings. One prominent framework shaping contemporary thought on disability is the so-called social model of disability which originally developed out of the British disability rights movement. The social model and later variants reject the idea that disability is solely a matter of the biological dysfunction of the body. According to the original model, disability is the social oppression experienced by people with impairments, manifesting in social and environmental barriers that exclude them from participating equally in social, economic and cultural life (Hasler Citation1993; Oliver Citation1996). Later social-relational models see disability more broadly as the result of the interaction between a non-standard body and a disabling environment. Irrespective of detail, a relational view of disability makes functional capacity an inadequate proxy for disability, as it is too narrow in scope, concerned primarily with the quantification of factors “contained within the body of the person” (Disability advocate #6). Although one of the main data collection instruments, the WHODAS, defines disability as “the product of an interaction between attributes of an individual and features of the person’s physical, social and attitudinal environment” (Citation2022, n.p.), there is nevertheless an overriding focus on functional capacity, which was seen by many in the disability community as problematic:

By the very nature of a functional capacity assessment, even when they’re done in the best possible way, you are assessing the functional capacity of the individual. You are not taking into account environmental contexts; 50 per cent of the social model [of disability] is an understanding of the interaction of the impairment and the overall context (Australian Government Citation2021f: 8).

By reducing disability to a matter of functional capacity, roboplanning was seen to give only a partial account of the dynamics of disablement. The production of a score indicating high, medium or low functioning, based purely on assessment data and diagnostic information, belied the complex ways in which socio-political and cultural influences intersect with the materiality of the body to produce experiences of disability, including marginalisation and oppression. As one interviewee put it, the assessment itself would only cover

the tiny, tiny little territory of functional capability. We’re not even talking about relationships, public attitudes. … [An] algorithm can’t compensate for that social complexity (Disability advocate #5).

A global indicator of functioning used in this context is both inaccurate and misleading because it does not adequately capture the barriers to social, economic and community participation which contribute to disability (Australian Government Citation2021e: 30).

…we are concerned that the tools proposed do not adequately take into account the person’s familial and community context. Psychosocial disability and the ability to live a good life is relational by nature (Australian Government Citation2021e: 15).

Algorithmic social sorting based on functional capacity alone misrepresents the very nature of disability by removing it from the context(s) that produce it. Further, algorithmic portrayals of disability as an isolable phenomenon have a de facto monopoly on legitimate forms of recognition, at the expense of alternative knowledge systems. When disability is made legible as a matter of individual dysfunction, people have no way of claiming or receiving recognition of their disability in ways that are sensitive to context. As we have seen, versions of the social model form an alternative framework, which in attempting to account for aspects of life outside of medical diagnosis and functional capacity, lends authority to situated knowledge based on lived experience of disability.

Aside from the social model, there are other forms of situated knowledge that, in the case of roboplanning, reveal the limits of algorithmic recognition. From a First Nations perspective, for example, the attempt to datafy disability is itself a colonial project, since the very concepts and methods that enable disability datafication are products of Western medicine, science, and technology. One witness to the inquiry noted that “in traditional language we have no word… for ‘disability’; it’s always been an accepted part of the human experience” (Australian Government Citation2021b: 8). According to one witness, this is ultimately “about Aboriginal people fitting into a very Western way of looking at disability” (Australian Government Citation2021e: 12-13).

Similar concerns were raised in relation to culturally diverse people with disability, including those from migrant backgrounds. Witnesses to the inquiry described how dominant discourses and Western norms of disability had to be “translated” for people who were not culturally accustomed to talking about their disability in assessment-legible ways. A representative of a Tasmanian migrant resource centre explained that

[For the people we support] the concept of disability is not the same cultural concept as in Australia, where people are used to expressing their functional impairments in order to get the services they need. [The NDIA] don’t really have an understanding about looking at disability through a cultural lens… to them, disability is a one-size-fits-all (Australian Government Citation2021f: 9).

These concerns were echoed by many other witnesses for whom roboplanning represented a “highly westernised, white Anglo-Saxon approach to mandatory independent assessments” (Australian Government Citation2021g: 25).

Algorithmic abstraction and redistributive politics

Both the evidence submitted to the Parliamentary inquiry and our interviews indicate that roboplanning was also held to have important distributive effects in determining access to the NDIS, and the level and types of support provided to eligible beneficiaries. For many interviewees and witnesses, this is where the algorithmic misrecognition of disability mattered most – its capacity to distort the fair allocation of NDIS resources. Other authors have detailed the significance of algorithmic misrecognition as a feature of “datafied life” online, where algorithmic identities are imagined to sit “as a layer atop” our embodied social relations that exist “outside of data” (Cheney-Lipold Citation2017: 158). But in the case of roboplanning, algorithmic interpretations of people and their disability have a much more direct bearing on material and lived realities, through NDIS funding decisions. Rather than “sitting atop” and being separate from lived experience, these algorithmic constructions would shape the very content of everyday life for those people whose disability would be subject to a robo assessment. A concern articulated by many witnesses to the inquiry was that in giving only a partial and distorted picture of a person’s disability, roboplanning would systematically misrepresent their need for support and/or the nature of the support, potentially underestimating their funding entitlement. For many it would mean that “my supports end up being wrong… because [the score] doesn’t reflect the true nature of my disability” (Australian Government Citation2021c: 17). In his written submission to the inquiry, Bruce Bonyhady framed the problem as ultimately one of accurate and fair resource allocation:

Other questions [in the assessment] which are being piloted, such as “Can you dress yourself independently?”, frequently cannot be answered with a simple “yes” or “no”. Without further information and context, simplistic responses risk misleading or inaccurate scores. This jeopardises the validity of the entire assessment – and the resource allocation which will follow it (Bonyhady Citation2021: 12-13).

Bonyhady’s key insight here is that the algorithmic harms of (mis)representation invariably produce harms of (mis)allocation. This is especially true in contexts of welfare state decision-making, where algorithms offer and foreclose different possibilities for ameliorating the impacts of social disadvantage through redistributive mechanisms. When algorithmic processes produce a picture of disability that is drastically misaligned with lived experience, the harm is material as well as representational: funding is withheld, or support plans fail to meet true need. As we have shown, for marginalised groups such as First Nations peoples, migrant communities, and people with multiple, complex, or invisible disabilities, the potential for misrepresentation is particularly high, and misallocation an even more likely prospect.

Concluding discussion

This article has explored the visions, expectations, and fears evoked by the automation of NDIS disability assessments. The impetus for this research came from the strong public backlash against the algorithmic approach proposed by the NDIA. As one of our interviewees remarked, “this was the most intense and the most people had responded and participated since the campaign to get the NDIS in the first place – that’s how angry, afraid, scared, anxious, worried, people really were by what was being proposed” (Disability advocate #1). The sentiment expressed here underscores the importance of this research and of understanding what lay behind the unprecedented public response. Our analysis suggests disabled people, their families, and disability sector professionals are deeply concerned about the ways technology shapes resource allocation decisions within public disability support regimes. For them, roboplanning evokes visions of a future NDIS that is both insensitive to individual lived realities and highly individualistic in its reduction of disability to a score of bodily functionality. While this vision is in some respects future-oriented, it is also strongly shaped by understandings of current approaches to assessment, which already contain elements of roboplanning in the way disability is sorted, classified and defined based on diagnostic and other medical data (van Toorn et al. Citation2022).

When it comes to disability, the injustices of algorithmic social sorting are representational, material and epistemic. In our study, independent assessments operated as a veiled sorting mechanism – a means of quantifying disability and transforming it from an embodied experience to a mathematical relation represented by a score. For AI researcher Ranjit Singh, representation is a matter of being seen by state administrative infrastructures: “what data is collected”, he argues, and “how data categories are combined… deeply shape how an infrastructure sees people” (Singh Citation2021, n.p.). Moreover, visibility and legibility are not all or nothing phenomena. They operate on a spectrum of resolution, such that some aspects of human experience are brought into sharp focus, while others are distorted or obscured by a lack of representative data (Singh Citation2021).

Computational approaches to assessing disability render some aspects of disability more visible than others. Disabled critics of independent assessments held that while their body and functioning would be highly visible and scrutable, important social, relational and contextual details would be seen in low resolution or not at all. The social-relational aspects of disablement are certainly difficult to capture in the “objective, tractable language of numbers” (Jasanoff Citation2004: 27). While disability is fluid, nuanced and context dependent, algorithmic systems deal only with discrete, static, essential categories (Lu, Kay, and McKee Citation2022). But where the policy aim is to restrict access to disability support funding, algorithmic systems of recognition are incentivised to not “see” these aspects, because recognising them potentially expands the disability category to include more people and more facets of experience. For example, a person whose brain injury is ranked as “mild” according to the WHODAS classification instrument, but whose impairment is made worse by a life of poverty, gendered violence, or traumatic exposure to the carceral system, places a greater financial obligation on the state if their disability is recognised to include these mediating factors. Thus, in algorithmic representations of disability, these forms of social suffering appear in very low resolution, as do common experiences of disabling social and physical environments. Where this is the case, it is not necessarily a design flaw. Rather, the datafication of disability is influenced to a greater or lesser degree by the political project of delimiting the disability category.

While the literature on ‘sociotechnical imaginaries’ has tended to focus on the processes through which governments present a novel technology to the public, it is also clear that different publics then generate their own visions of how the technology will work in practice. Our findings highlight the way that public imaginaries of a technology such as ADM are both shaped by and reveal specific concerns of those who are, or will be, affected – in this case, people with disability. In the two years since the government abandoned independent assessments, further details have emerged regarding the existing use of automated tools by the NDIA for planning purposes and the resulting harms, including budget reductions and inadequate plans (Johnson Citation2022). At the time of writing, a campaign led by a coalition of disability activists is underway to bring attention to the “harms from the robo machinery [that] continue in plain sight” (https://robondis.org/home/). It is important to note that many of our participants offered alternative visions: what might be called counter-imaginaries, including perspectives that emphasised the importance of enhancing citizen voices and involvement, that in various ways attempt to resist or reframe the prevailing, expert narrative. This is an area that calls for further exploration, in order to move the conversation beyond the critique expressed in the imaginary of roboplanning towards a more nuanced understanding of the use of ADM in the future lives of people with disability.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the ARC Centre of Excellence for Automated Decision-Making and Society (grant number CE200100005).

References