Research Article

From Robodebt to responsible AI: sociotechnical imaginaries of AI in Australia

Received 11 Apr 2024, Accepted 11 Apr 2024, Published online: 06 Jun 2024

ABSTRACT

This paper examines Australia’s recent AI governance efforts through the lens of sociotechnical imaginaries. Using the example of Robodebt, it demonstrates how a more holistic and contextual examination of AI governance can help shed light on the social impacts and responsibilities associated with AI technologies. It argues that, despite the recent discursive shift to ‘safe and responsible AI’, a sociotechnical imaginary of AI as ‘economic good’ has been a persistent undercurrent in the past two governments’ efforts at AI governance. Understanding how such sociotechnical imaginaries are embedded in AI governance can help us better predict how these governance efforts will impact society.

Introduction

In June 2023, the Australian Labor Government unveiled its vision for Safe and Responsible AI in Australia (Department of Industry, Science and Resources, 2023), initiating the latest project in nearly a decade of Australian efforts to engage with and govern AI. These efforts range from early ambitious plans to establish Australia as a global leader in AI to more recent cautious steps towards a national AI regulatory framework. Neither ambition has yet been realised, with each encountering tensions between visions of what AI could be and the practical reality of achieving those visions. This paper argues that, despite this lack of success, these efforts are significant not just as historical records of government policy, budgeting, and resource allocation, but also because of the sociotechnical imaginaries they reflect. Sociotechnical imaginaries represent our ‘visions of desirable futures’ as enabled by science and technology (Jasanoff, 2015a, p. 4). Examining these imaginaries is important because they reveal how abstract ideas about technologically enabled futures have real and material impacts on people’s everyday lives.

AI governance, particularly at the level of nation-states, is a complicated space. As the capabilities and affordances of AI technologies have developed rapidly in recent years, debates around how to regulate and govern them have intensified. The push for more ethical development and management of AI technologies has been accompanied by a strong desire to ensure that AI technologies are governed in ways that promote economic growth and do not stifle innovation (Bareis & Katzenbach, 2022; Katzenbach, 2021). None of these efforts occur in a vacuum. The purpose of this article is to demonstrate how a more holistic and contextual examination of AI governance can help shed light on the sociotechnical imaginaries driving such governance initiatives. These imaginaries shape the design, implementation, availability, and accessibility of AI technologies, including who benefits from their capabilities, how, and at what cost.

This article begins by introducing the sociotechnical imaginaries framework that guides the analysis, and explains how this framework can contribute to studies of AI governance. Viewed through the sociotechnical imaginaries lens, AI governance extends beyond the formal documentation and processes issued by governmental bodies; it also encompasses the broader actions, events, and debates that contextualise the development, availability, and use of AI technologies. I demonstrate how such context can give us insight into government efforts to shape Australia’s AI future by tracing the sociotechnical imaginaries of AI articulated by two recent Australian governments. While it is impossible to examine every relevant event within the scope of this paper, one example in particular stands out: Australia’s Online Compliance Intervention (OCI) system, referred to in this paper by its better-known colloquial name, ‘Robodebt’. By exploring the AI imaginaries across these governments, I demonstrate how the broader perspective enabled by the sociotechnical imaginaries framework can yield a more nuanced understanding of the trajectory of AI governance in this country.

Sociotechnical imaginaries

Sociotechnical imaginaries, as advanced by Sheila Jasanoff and Sang-Hyun Kim (2009, 2015), can be understood as ‘collectively held, institutionally stabilised, and publicly performed visions of desirable futures’ (Jasanoff, 2015a, p. 4). These imaginaries of the future lay the foundations for the design, development, deployment, marketing, and regulation processes that shape access to and adoption of technologies. The significance of the sociotechnical imaginaries framework is that it takes seriously the sometimes nebulous and abstract ways in which technology’s role in society is imagined, and then draws links between these imaginings and the practical manifestations that attempt to bring them into being.

There are three points worth noting here. First, sociotechnical imaginaries are ‘visions of desirable futures’ that reveal ideas of ‘how life ought, or ought not, to be lived’ (Jasanoff, 2015a, p. 4). As such, the work of tracing sociotechnical imaginaries engages not only with desirable futures but also with ideas of what is deemed undesirable and to be avoided (Jasanoff, 2015b, p. 325). Second, while sociotechnical imaginaries articulate visions of the future, they are nonetheless grounded in the past and present experiences of society, and can therefore both reveal and entrench existing social inequalities (Sartori & Theodorou, 2022). Finally, sociotechnical imaginaries are not just the products of powerful elites; they emerge out of the myths, legends, and stories that circulate certain ideas, shared values, and normative expectations in society. In other words, the experiences of everyday people are very much relevant to understanding the imaginaries currently circulating in society. Sociotechnical imaginaries move through four phases: origin, where the imaginaries emerge; embedding, where they become established in society; resistance, where counter-imaginaries clash and engage with each other; and finally, extension, where the imaginary evolves, spreads, and becomes more broadly entrenched (Jasanoff, 2015b). Before exploring how these four phases have operated in the Australian context, I first want to explain the significance of sociotechnical imaginaries for studies of AI governance.

AI imaginaries and governance

The role that imagination plays in relation to a range of technologies has received significant attention (Flichy, 2007; Goggin, 2015; Lupton, 2017; Mansell, 2012). It has particularly captured attention in studies relating to AI (Bareis & Katzenbach, 2022; Hoff, 2023; Lazaro & Rizzi, 2023; Paltieli, 2022; Wang, Downey, & Yang, 2023), including AI policies and governance (Hassan, 2020; Mager & Katzenbach, 2021; Sartori & Bocca, 2023). This is largely due to the way that AI functions as a powerful cultural and technological myth (Natale & Ballatore, 2020), revealing our broader fascination with the idea of a ‘thinking machine’ that can perform tasks on par with humans. While these imaginaries often do not align with the actual capabilities of AI technologies, they are nonetheless powerful because they shape the purposes for which AI is designed, how AI is accepted and adopted by the public, and how AI is regulated and governed. A sociotechnical imaginaries perspective helps us move beyond a study of the formal channels and outputs of AI governance to engage with the less tangible yet tenacious visions, hopes, desires, and fears surrounding AI technologies, and with how, by permeating processes of governance, these imaginaries are made material. It is for this reason that this paper includes a discussion of Robodebt as part of its analysis of Australian AI governance, despite the Robodebt scheme not constituting a formal part of the government’s AI governance efforts.

Australia’s AI governance

Australia serves as a particularly interesting case study for this examination of AI imaginaries as embedded within governance. Despite a relatively connected digital economy and a digitally literate population, Australia is still seen as something of a laggard in global rankings of digital competitiveness (IMD, 2022). Since the late 2010s, Australian governments have tried to improve this digital competitiveness, promising to invest in and build AI capacity and to position Australia as a world leader in the future global digital economy (Commonwealth of Australia, 2021b). A sociotechnical imaginaries lens enables us to examine how these intentions to build and manage Australia’s AI capacity privilege certain aspects of society and the economy over others. This examination tracks the evolution of AI imaginaries in the Australian context across the two most recent governments, representing Australia’s two dominant political parties. The initial efforts of the Liberal-National Coalition Government (2015–2022) were primarily concerned with positioning Australia as a world leader in AI, focusing on investing in AI technologies and building AI capacity for economic growth. In contrast, the current Labor Government, which succeeded the Coalition in 2022, has been more cautious in its approach, concentrating on the regulation of AI risk. Both governments’ efforts to frame and govern AI in Australia also need to be read against the backdrop of the Robodebt scheme, which took place while the Coalition Government was in power, as an example of government use of automated decision-making technology and its subsequent impacts on sociotechnical imaginaries of AI and governance.

Imaginaries of AI as economic good

Work to develop a formal government-led approach to AI in Australia largely began under the aegis of the Liberal-National Coalition Government. Under its oversight, key AI-related projects were intended to firmly entrench Australia as a ‘global leader’ in AI (Commonwealth of Australia, 2021a). These included: the development of a national AI Ethics Framework (Dawson et al., 2019; Department of Industry, Science and Resources, 2019); the AI Action Plan (Commonwealth of Australia, 2021a; Department of Industry, Science, Energy and Resources, 2020) as part of a broader Digital Economy Strategy 2030 (Commonwealth of Australia, 2021b); and the establishment of a National AI Centre under the umbrella of the CSIRO, the national scientific research organisation.

These initiatives demonstrate the government’s dominant sociotechnical imaginary of AI as an economic good. The term ‘economic good’ refers here not only to the framing of AI technology as a resource for business and industry, but also to its perceived capacity to boost the national economy. The language used to frame both the AI Action Plan and the National AI Centre reveals the Coalition Government’s construction of AI as a resource to boost Australia’s economy and make it a world leader in the AI space. The AI Action Plan outlined Australia’s strategy for an ‘AI-enabled economy’, emphasising the need to ‘lift our competitive capabilities, enable industry-wide transformation and secure Australia’s future prosperity by unlocking local jobs and economic growth’ (Commonwealth of Australia, 2021a, p. 5). This imaginary traces its origin to the ‘jobs and growth’ mantra that had early on become indelibly associated with the Coalition Government (AAP, 2016). The National AI Centre further embedded this jobs-and-growth focus, promising to commercialise AI research, ‘drive business adoption of AI technologies’ and ‘support the development of new AI solutions with a commercial application’ (Commonwealth of Australia, 2021b, p. 65). The language framing these initiatives implies that investment in AI technologies and their commercial capacity will, as a natural consequence, position Australia as a world leader in AI.

While the AI Action Plan and the National AI Centre both frame AI as an economic good, the AI Ethics Framework represents a more ethics-oriented effort to regulate and govern AI. The framework establishes eight principles to guide the development and use of AI in Australia: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability (Department of Industry, Science and Resources, 2019). In contrast to the dominant imaginary of AI as economic good, what emerges through the AI Ethics Framework can be interpreted as reflecting imaginaries of social good. However, there is little evidence that this imaginary of ethical AI and social good became solidly embedded within this period. The AI Ethics Framework is entirely voluntary, lacking any regulatory oversight or enforceability, with a flexibility intended to accommodate the needs of business and industry. As such, it functions as little more than a performative exercise in ‘ethics washing’ (Wagner, 2019). In this way, the AI Ethics Framework can still be seen to service the Coalition Government’s dominant sociotechnical imaginary of AI as an economic good, as it is designed to promote the adoption of AI technologies within the commercial sector without stifling innovation.

This dominant sociotechnical imaginary of AI as a resource to be harnessed for the good of the economy is significant. Not only did it align with the Coalition Government’s broader mantra of ‘jobs and growth’, but it privileged AI’s capacity to serve the national economy over a more nuanced engagement with AI’s potential for social good. This oversight is notable given that two projects which informed the initial shape of the AI Action Plan specifically highlighted AI’s capacity for such social good. The first was a report by the CSIRO’s data and digital specialist unit, Data61, which explored the ways that AI could benefit an ageing population, the health sector, disability support, cities and infrastructure, and environmental management (Hajkowicz et al., 2019). The second was the Human Rights and Technology project by the Australian Human Rights Commission (AHRC, 2021) which, acknowledging the potential for AI to cause great harm, advocated a human rights approach to AI technologies to ensure these harms were mitigated. The AHRC placed great emphasis on digital and social inclusion, arguing the need to ensure AI technologies were inherently accessible so as to avoid further marginalising already vulnerable members of society (AHRC, 2021). Both projects emphasised the application of AI beyond the economy and its potential benefits to everyday lived experience across society.

The sociotechnical imaginary of AI as economic good dominated the Coalition Government’s approach to the extent that it shifted attention away from a more complex recognition of the impacts, both desirable and undesirable, that AI can have on people and society. It is therefore significant that, within this same context, the government was also employing automated technologies in the public sector, in what became known as the Robodebt scheme. While automated technologies are not the same as AI (which aims to simulate human-like capacities), they are an important precursor to it, and their use in the public sector reflected the government’s broader desire to transition to an AI-enabled economy. While Robodebt was not directly related to the formal AI initiatives proposed by the Coalition Government, the scheme nonetheless had a deep impact on sociotechnical imaginaries of AI in Australia.

Robodebt and imaginaries of efficiency and accuracy

‘Robodebt’ refers to the Online Compliance Intervention (OCI) programme launched by the Australian Government in 2015 (Royal Commission into the Robodebt Scheme, 2023). The OCI programme used automated technology to look for discrepancies between the income individuals reported to Centrelink (the government agency responsible for welfare payments) and the income assessed by the Australian Taxation Office, in order to identify welfare overpayments and facilitate debt recovery. Recovering overpaid welfare debts was not a new practice; what was different about the Robodebt scheme was the use of automated technology to drastically scale up the search for discrepancies. These changes were intended to increase the efficiency of the debt recovery process and were projected to recover $1.7 billion in payments (Whiteford, 2021). Confidence in the efficiency and accuracy of this technology was so strong that human oversight of the process was also reduced (Rinta-Kahila, Someh, Gillespie, Indulska, & Gregor, 2022).
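
At the heart of the scheme’s errors, as documented by the Royal Commission, was ‘income averaging’: an annual income figure from tax records was smoothed evenly across the year’s 26 fortnights and compared against the income a person had actually reported each fortnight. The sketch below is a simplified, hypothetical illustration of that logic, not the OCI’s actual code (which is not public); all names and figures are invented for illustration, and the real scheme recalculated benefit entitlements from the averaged figures rather than summing raw differences as done here. It shows how a person with intermittent income who reported accurately could still be issued a ‘debt’.

```python
# Hypothetical sketch of Robodebt-style 'income averaging' (illustrative only;
# function names and dollar figures are invented, not drawn from the scheme).

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Annual income treated as if it were earned evenly across the year."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def raise_debt(reported_fortnightly: list[float], annual_ato_income: float) -> float:
    """Schematic proxy for the debt calculation: any fortnight where the
    averaged figure exceeds the reported figure is treated as under-reported
    income, and hence as a welfare overpayment to be recovered."""
    assumed = averaged_fortnightly_income(annual_ato_income)
    return sum(max(assumed - reported, 0.0) for reported in reported_fortnightly)

# A casual worker earns $13,000 over five fortnights, reports it accurately,
# and earns nothing for the remaining 21 fortnights while receiving welfare.
reports = [2600.0] * 5 + [0.0] * 21
print(raise_debt(reports, annual_ato_income=sum(reports)))
# Averaging assumes $500 was earned every fortnight, so the 21 accurately
# reported zero-income fortnights are each flagged as under-reported income.
```

For income earned evenly across the year, the discrepancy vanishes; it is precisely uneven, casualised earning patterns, common among welfare recipients, that the averaging assumption penalises.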

Despite the promises of efficiency, accuracy, and objectively neutral assessment, errors in the way the technology calculated debt resulted in a significant number of debt notices being wrongfully issued, causing deep distress to those affected (Whiteford, 2021). The lack of human oversight, which would previously have reconciled such discrepancies, exacerbated the problem. The reduction in human involvement also meant that debt notice recipients struggled to communicate with Centrelink to explain or seek clarity regarding their notices (Braithwaite, 2020). The government’s faith in the accuracy of the automated decision-making technology shifted the burden of proof onto welfare recipients to demonstrate that their debts had been calculated inaccurately, and many no longer had the capacity or the records to prove the error (Nikidehaghani, Andrew, & Cortese, 2023). This burden was particularly onerous given that welfare recipients are already vulnerable members of society, often suffering intersecting social disadvantages and digital exclusion (Park & Humphry, 2019). Attracting heavy criticism, the Robodebt scheme was finally halted in 2020, by which time it had significantly weakened public trust in the government and in democratic governance (Braithwaite, 2020). This experience likely also shaped how the public viewed AI technologies, with a 2020 report finding that Australians had notably low levels of trust in AI (Lockey, Gillespie, & Curtis, 2020). The Royal Commission into the Robodebt Scheme subsequently recommended legislative reform to clarify and make more transparent the use of automated decision-making, and the establishment of a body to audit and monitor the future use of such technologies (Royal Commission into the Robodebt Scheme, 2023).

Robodebt is one example of how the ongoing shift towards AI and automated decision-making technology in the public sector can intensify existing power asymmetries and deepen existing social inequalities (Kuziemski & Misuraca, 2020; Park & Humphry, 2019; Sartori & Theodorou, 2022). Even seemingly benign uses of AI, such as overhauling service provision and enhancing existing governmental processes, can serve to consolidate power and control over citizens through the automation of decision-making, the use of algorithmic technologies that shape people’s lives, and the datafication of individuals (Reutter, 2022). Park and Humphry (2019) raise the important point that the digitisation of public services, as seen in Robodebt, enforces a ‘passive inclusion’ into a datafied society in which already vulnerable individuals have little agency or control over what data is collected about them, how it is stored and accessed, or how it is fed into technologies that make decisions influencing their lives. As Robodebt demonstrates, the dominant sociotechnical imaginary of economic good that shaped the Coalition Government’s approach to AI also permeated its practice of governing, whereby economic efficiencies were prioritised above the wellbeing of vulnerable citizens. These events contributed to weakened trust in both government and AI technologies.

Imaginaries of ‘safe and responsible’ AI

After a change in government following the 2022 Australian federal election, the new Labor Government distanced itself from the previous government’s initiatives (Brookes, 2023). Notably, the new federal Minister for Industry and Science, Ed Husic, repeatedly pointed to the Robodebt scheme as a cautionary example of what to avoid in charting the way forward with AI (National Press Club of Australia, 2023; Smith, 2023). Acknowledging Australia’s low public trust in AI (Lockey, Gillespie, & Curtis, 2020), and with Robodebt still very much in the public memory, the Labor Government has signalled its desire for a ‘safe and responsible’ approach to governing AI (Department of Industry, Science and Resources, 2023, 2024). The release of the Safe and Responsible AI in Australia discussion paper in 2023 launched a public consultation process to help shape AI regulation in Australia (Department of Industry, Science and Resources, 2023).

The discussion paper attracted significant public interest, receiving more than 500 submissions from a wide range of sectors including civil society, academia, industry bodies, the healthcare sector, legal firms, and advocacy groups. The volume and scope of these submissions indicated the strength of public interest in the topic. Low public trust in AI featured as a common concern, as did doubts about the capability of Australia’s existing legal frameworks to manage current and future AI risks (Davis, Farthing, & Santow, 2023; Department of Industry, Science and Resources, 2024; Husic, 2023; Weatherall, 2023). Following this public consultation, the government released a brief interim response in January 2024 proposing a risk-based approach to governing AI, incorporating testing, transparency, and accountability mechanisms in conjunction with robust safety guardrails (Department of Industry, Science and Resources, 2024).

While work to develop the government’s final response is still in progress, two things are worth noting at this stage. First, an awareness that public mistrust ‘acts as a handbrake on business adoption, and public acceptance’ of AI is driving the government’s overall cautious approach (Department of Industry, Science and Resources, 2024, p. 4). In addition to the proposed risk-based regulatory approach, this concern about public trust is evident in Minister Husic’s emphasis on helping businesses ‘make more informed decisions about using AI’ and on showing the government’s commitment to ‘being an exemplar user of AI’ (Husic, 2023). There is a clear desire here to replace public mistrust of AI with confidence that AI is ‘safe and responsible’ in the current government’s hands.

However, this cautious approach also risks leaving Australia vulnerable. The ongoing lack of any clear formal AI regulatory framework positions Australia as a policy laggard on the global stage, and at the time of writing there is no timeline for when the framework will be finalised. Edward Santow, the former Human Rights Commissioner and co-author of the AHRC’s Human Rights and Technology report (AHRC, 2021), suggests that such ‘extreme policy lethargy’ places Australia and its people at risk of harm and disruption with no clear legal framework for protection (Santow, Davis, & Farthing, 2023). As the capabilities of AI technologies continue to evolve, so too do the Australian public’s concerns about the associated risks, including fears of job losses, cyber-attacks, critical infrastructure failure, deepfakes, and the spread of misinformation (Saeri, Noetel, & Graham, 2024). So long as there is no clear framework for how AI will be regulated in Australia, the imaginary of ‘safe and responsible’ AI risks being undone by the very caution it advocates.

Second, despite its efforts to distance itself from the Coalition Government and to portray its predecessor’s approach to AI as risky and ill-considered, the current Labor Government continues to extend the sociotechnical imaginary of AI as an economic good. Responsibility for developing a national AI governance framework remains firmly housed within the Minister for Industry’s portfolio, and AI falls under the purview of the Department of Industry, Science and Resources. The opening lines of the interim Safe and Responsible AI response proudly state that AI can add ‘$170 billion to $600 billion a year to Australia’s GDP by 2030’ (Department of Industry, Science and Resources, 2024, p. 4). Recent initiatives to address AI trust have been directed towards business and industry (Business.gov.au, 2024) rather than towards enhancing public AI literacy and education (Kao et al., 2023). These examples demonstrate that AI continues to be inextricably intertwined with industry, productivity, and economic growth in this country. Indeed, this continuation suggests that rather than trying to replace or negate the economic good imaginary, the Labor Government is extending it: redirecting it away from undesirable previous associations of risk and harm, and towards more desirable visions of a ‘safe and responsible’ AI future under its stewardship.

Conclusion

Examining Australia’s AI governance journey through the lens of sociotechnical imaginaries helps us consider not just what governments say and do about AI governance, but also the wider context surrounding those statements, actions, and events. This paper suggests that the imaginary of AI as ‘economic good’ was the predominant driver of the Coalition Government’s AI efforts, aligning with its broader focus on ‘jobs and growth’. While this imaginary is most clearly manifested in Coalition initiatives such as the AI Action Plan and the National AI Centre, its influence can also be seen in the voluntary nature of the AI Ethics Framework. It is also evident in the Robodebt scheme, where the misuse of automated technologies weakened public trust in these technologies and in the government. The succeeding Labor Government has been at some pains to distance itself from this mistrust of AI technologies and government, winding back many of its predecessor’s AI initiatives and adopting ‘safe and responsible AI’ as its own mantra. However, progress towards this ‘safe and responsible’ AI regulatory framework is slow, creating its own risks in the process. AI technologies continue to evolve, becoming more powerful and accessible; so long as Australia lacks a clear legal and regulatory AI framework, the Australian public is left with inadequate protection against possible harm or misuse.

Further, despite the focus on ‘safe and responsible AI’, the Labor Government has continued to extend the sociotechnical imaginary of AI as economic good, as is evident in the framing of its AI discussion and in the way its funding and resources have been channelled. The value of the sociotechnical imaginaries framework is that it helps reveal the ongoing tenacity of this imaginary. Recognising its persistence shows that ultimately, despite surface-level differences in rhetoric and approach, both governments are oriented towards the same goal of harnessing AI as a resource to boost Australia’s economy. The longer this economic good imaginary is perpetuated, the more deeply it becomes embedded within the social consciousness, and the harder it becomes to resist or challenge.

As yet there is no indication that this imaginary of economic good is being dislodged from its dominant position. But it is not yet all-encompassing, as demonstrated by the sheer volume and range of submissions received in the consultation process. This is a topic very much in the public interest, and there is still room for alternative imaginaries, such as that of AI as social good, to offer different perspectives on AI futures. Australia’s AI governance framework is still being finalised, and in keeping with its ‘safe and responsible’ approach, the Labor Government has signalled its willingness to learn and adapt in order to get the framework right. Consequently, the ‘safe and responsible AI’ process will warrant further scrutiny and analysis as it evolves.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References