Research Article

Artificial intelligence and complex sustainability policy problems: translating promise into practice

Received 16 Aug 2023, Accepted 15 Mar 2024, Published online: 07 May 2024

Abstract

Addressing sustainability policy challenges requires tools that can navigate complexity for better policy processes and outcomes. Attention on Artificial Intelligence (AI) tools and expectations for their use by governments have dramatically increased over the past decade. We conducted a narrative review of academic and grey literature to investigate how AI tools are being used and adapted for policy and public sector decision-making. We found that academics, governments, and consultants expressed positive expectations about AI, arguing that AI could or should be used to address a wide range of policy challenges. However, there is much less evidence of how public decision makers are actually using AI tools or detailed insight into the outcomes of use. From our findings we draw four lessons for translating the promise of AI into practice: 1) Document and evaluate AI’s application to sustainability policy problems in the real-world; 2) Focus on existing and mature AI technologies, not speculative promises or external pressures; 3) Start with the problem to be solved, not the technology to be applied; and 4) Anticipate and adapt to the complexity of sustainability policy problems.

1. Introduction

Interest in how Artificial Intelligence (AI) could benefit policy and decision-making has grown rapidly over the past decade (Valle-Cruz, García-Contreras, and Muñoz-Chávez Citation2022). Debates about how AI should (or should not) be developed and applied are highly salient, with academic publications about AI doubling in the period 2012–2022 (Zhang et al. Citation2022). Governments are creating national AI strategies to describe their intentions for development, use, and regulation of AI (Oxford Insights Citation2022). Nonacademic experts, advisors, and consulting firms are disseminating narratives and setting societal expectations for the use of AI in public policy.

Public awareness of the promises and perils of AI is also rapidly increasing. The past few years have seen exponential growth in the accessibility of these technologies in daily life (e.g. generative AI such as ChatGPT, personalized recommendations for social media and shopping, and smart home and voice assistants), as well as scandals about unethical uses of AI by governments and companies. Examples of the latter include bias and racialization in the development and use of Facial Recognition Technologies by law enforcement in the United Kingdom and the United States of America (Radiya-Dixit Citation2022; Turner Lee and Chin Citation2022). Another is the use of AI tools to wrongly accuse welfare recipients of claiming fraudulent payments and to automate demands for repayment; the “Robodebt” scandal in Australia (Whiteford Citation2023) and the System Risk Indication system (SyRI) in the Netherlands (Ó Fathaigh and Appelman Citation2020) are both examples of this that resulted in successful legal proceedings against the respective governments. These cases illustrate what can go wrong when technologies are applied to complex real-world scenarios without a clear understanding of the limits of the technology, a transparent and unambiguous definition of the problem the technology is being applied to, or certainty about the desired outcome. Relatedly, surveys of public attitudes have found that AI-related failures in government contribute to negative evaluations of government in general (Schiff, Schiff, and Pierson Citation2022).

All this discussion and excitement gives the impression that AI tools are already widely used by governments in their efforts to address policy problems. We examined the literature to find evidence of this use, and to determine what this evidence, or lack thereof, can tell us about translating the promise of AI into practice.

2. The current research

Our study sought to identify cases in the published literature that shed light on current government and public policy use of AI. In order to focus our literature search, we wanted to examine examples of AI being applied to sustainability policy challenges. Such challenges are defined by their complexity and interconnected nature and are therefore a potentially fruitful area for the application of AI technologies (Goralski and Tan Citation2020; Vinuesa et al. Citation2020). Consistent with the United Nations’ Sustainable Development Goals (SDGs), we take a broad view of sustainability policy challenges to include not only those relating to environmental concerns, but the broader suite of concerns which make up the five SDG pillars of people, prosperity, planet, peace, and partnership (United Nations General Assembly Citation2015).

We undertook a narrative review of scholarly and grey literature to examine how AI is (or is not) actually being developed and used to address sustainability policy problems for and in government.

Our research was guided by three questions:

  1. What can literature (academic and grey) tell us about fitting AI technology(ies) to complex sustainability policy problems?

  2. Where are the gaps in this body of literature and what can they tell us about the current state of AI applications in this sphere?

  3. What actions can be taken to translate the promise of AI into practice?

Through our review we found a consistent gap between the promise and reality of AI use for addressing sustainability policy problems across three perspectives (academic developer/theorist, government/policymaker, advisor/consultant). Academic and industry AI developers, policy theorists, governments, and advisors and consultants often express positive expectations for AI, arguing that AI could or should be used to address sustainability policy challenges. However, we found a lack of detailed information about how (and if) AI is being used in government settings, and the nature and extent of real world impacts of this use.

Our findings, informed through the lenses of the sociology of expectations and sustainability transitions studies, culminate in four lessons for AI developers and policymakers interested in applying AI to sustainability policy problems:

  1. Document and evaluate AI’s application to sustainability policy problems in the real-world

  2. Focus on existing and mature AI technologies, not speculative promises or external pressures

  3. Start with the problem to be solved, not the technology to be applied

  4. Anticipate and adapt to the complexity of sustainability policy problems

3. Methods

Approaching our literature review, we sought to identify examples of how artificial intelligence technologies have been applied to sustainability policy problems in real-world settings. Starting with a traditional search-string approach (Note 1), we discovered many articles that spoke to the potential of AI in sustainability policy spaces. However, the dearth of articles that included relevant real-world case studies told us we needed to be more creative in our approach. Consequently, we included forward and backward searching of papers that mentioned potentially relevant cases, finding five articles in total that included use cases evaluating AI applications for sustainability problems in real-world policy contexts (Ackermann et al. Citation2018; Benčina Citation2007; Bolton, Raven, and Mintrom Citation2021; Hartmann and Wenzelburger Citation2021; Ye et al. Citation2019). We then turned to grey literature to account for the variety of voices beyond academia speaking to this rapidly changing space.

We searched grey literature (e.g. national strategies, reports) pulled from government websites, and the websites of research centers monitoring the uptake of AI in policy settings (i.e. AI-WATCH), as well as consultancy firms surveying and providing advice on government and public sector uptake. Both searches were conducted in October 2022.

We included articles and reports that discussed artificial intelligence and sustainability policy. We used the following broad definitions in an attempt to capture a range of perspectives:

  • Artificial intelligence: We included articles and reports that used the terms "artificial intelligence," "AI," and "machine learning," without critiquing the extent that the technologies described or used are technically consistent with particular definitions of AI.

  • Sustainability policy: We included articles and reports that used the terms "public policy" and “sustainability” and/or "sustainable development." This contemporary, holistic perspective on “sustainability” acknowledges how environmental, social, and economic systems and values are interdependent, and how policy interventions must address this interdependence. For that reason, we sought to include reports that discussed any public government decisions about any of the domains within the United Nations Sustainable Development Goals (SDGs) (United Nations General Assembly Citation2015).

We excluded articles that discussed the application of technology in sustainability policy that was not explicitly described as using artificial intelligence (e.g. digital transformation, digital government). We excluded articles that discussed the application of artificial intelligence to non-policy domains and decisions, such as in corporate Environmental, Social and Governance (ESG).

Once we had collected the literature, we conducted an integrative thematic analysis of the results of the review to identify the discourses and narratives that academic, policy, and other actors in sustainability policy ecosystems use in the context of artificial intelligence for sustainability policy. Given the small number of cases that fit our selection criteria, and the large number of differing actors involved in discussions about the potential for AI in sustainability policy spaces, we included summaries of a broad range of literature to provide insight into the expectations and promises surrounding AI and sustainability policy problems. Narrative review and thematic analysis of these literatures allow for a creative approach to lesson drawing where there are few exact matches to the initial search criteria, but unintended findings offer useful insights (Greenhalgh, Thorne, and Malterud Citation2018).

We have presented the findings from this analysis organized by actor perspective: (1) Academic AI Developer and Policy Theorists; (2) Government and Policymakers; (3) Advisors and Consultants.

3.1. Understanding heightened expectations for AI in public policy

For our narrative review, we build on van Noordt and Misuraca’s (Citation2022) exploration of AI’s impact on core governance functions, Zuiderwijk, Chen, and Salem’s (Citation2021) systematic review of the implications of AI for public governance, and Kankanhalli, Charalabidis, and Mellouli’s (Citation2019) investigation of sophisticated AI applications in the Internet of Things for government agencies to examine the relationship between expectations and realities of AI in governance and sustainability problems. The interplay between the discourses and expectations set by various actors – including academia, government, private sector advisors, and consultants – and the actual implementation of AI in public policy must be understood in order to achieve positive societal outcomes.

The persistent gap between expectations about the potential for AI in sustainability policy and the reality of AI use in these settings can be understood through a sociology of expectations lens, where expectations about what a new technology could or should do are discussed, argued over, and speculated upon (Brown and Michael Citation2003). Collective expectations can "guide activities, provide structure and legitimation, attract interest and foster investment” (Borup et al. Citation2006, 285–286). With increases in computing power, access to data, and innovations in architecture and algorithms, the progress of AI technologies is rapid and accelerating (Stein-Perlman, Weinstein-Raun, and Grace Citation2022). But strong expectations about novel technologies, such as AI, can lead to simplified stories about their value and how widely they are being implemented and used, especially in policy (Brown and Michael Citation2003). These stories often ignore the technical and social complexities of implementation (Borup et al. Citation2006) and “the burden of longer-term failure usually falls on other kinds of community (investors, patients, public policy makers)” (Brown and Michael Citation2003, 13).

Sustainability transitions research provides another useful lens to understand how AI technologies may co-evolve with societal expectations, regulations, practices, and behaviors to create (or fail to create) more sustainable futures (Markard, Raven, and Truffer Citation2012; Köhler et al. Citation2019). The extent to which AI can be effectively developed and deployed to advance the UN Sustainable Development Goals (SDGs) is determined not only by its technological capabilities, such as the sophistication of models and access to data, but also by perceptions of its effectiveness, public acceptability, appropriate regulation, and “fit” with existing policy processes and objectives (Dwivedi et al. Citation2021).

These insights from the sociology of expectations and sustainability transitions studies tell us how understanding the dynamic relationship between technical and social processes could lead to better long-term outcomes. This includes ensuring the complexity of real-world contexts is accounted for when designing or applying novel technologies, explicitly acknowledging technical limitations, and incorporating flexibility and evaluation when integrating novel technologies into policy settings.

4. Results

4.1. Academic developer and theorist perspectives

Our search identified articles and reports from AI developers, AI theorists, and policy theorists that explored the multiple ways AI could be used in policymaking for sustainable development, as well as how AI should be developed and implemented to improve policy processes and outcomes generally. The summary below indicates common threads across this diverse literature, while the lack of articles referring to real world case studies highlights a clear gap between expectations about the possibilities for AI in sustainability policy contexts and available records of actual AI use and outcomes.

There is an ever-growing number of published articles that describe the development and early testing of a specific AI tool for sustainability policy, albeit rarely in partnership with governments or applied in the real world. Topics include forecasting and prediction to determine policy options in cases of resource scarcity (Mehryar et al. Citation2019); modeling and prediction for migration (Metsker, Trofimov, and Kopanitsa Citation2021), environmental issues, and terrorist attacks (Basuchoudhary and Bang Citation2018); sentiment analysis of public attitudes using Machine Learning (ML) techniques (Isabelle, Han, and Westerlund Citation2022); poverty identification via satellite imagery using ML (Alsharkawi et al. Citation2021); and numerous applications for AI technologies in health care settings (e.g. Ashrafian and Darzi Citation2018; Ruggeri et al. Citation2020). These academic publications confirm there is immense interest from AI developers and AI researchers in the possibilities for AI in the sustainability policy space. Yet, given these articles describe tools that were developed without policymaker input or validation, they cannot provide insight into what happens when AI is actually applied to complex, real-world scenarios.

Similarly, articles that provide a high-level overview of the possibilities for AI to facilitate achievement of the SDGs generally predict positive outcomes (e.g. Khamis et al. Citation2019a, Citation2019b). But they also warn of potentially negative effects if AI is not applied carefully in these contexts. For instance, Vinuesa et al. (Citation2020) found AI techniques could positively influence 134 of 169 (∼80%) agreed SDG indicators, but may make 59 of 169 (∼35%) more difficult to achieve, arguing that the longer-term effects of AI once implemented are uncertain and that any use of AI should include civil society in its implementation. Goralski and Tan (Citation2020) determined AI could advance sustainable development, while also positing AI could make some of the SDGs more difficult to achieve by causing job losses, leading to poor decisions based on inaccurate or oversimplified AI-powered analysis, or increasing inequality as a result of uneven access to these technologies globally.

Existing literature reviews looking at the application of AI tools in other policy contexts have found a lack of evidence about AI use and outcomes. A systematic review by Sousa et al. (Citation2019) identified 59 articles that discussed AI use in the public sector, but few of the included studies reported on actual use, focusing instead on suggested uses of AI tools. Reviewing 78 papers with the intention of discovering current trends in AI for the public sector, Valle-Cruz et al. (Citation2019) found all were normative and exploratory in focus. The authors concluded AI could benefit the public sector across each of the phases in the policy process and be applied in a huge variety of issue contexts, but noted further empirical research should be conducted. A later review by Valle-Cruz, García-Contreras, and Muñoz-Chávez (Citation2022) identified 37 articles that investigated factors influencing AI adoption and use in government, and the benefits and problems of integrating AI into government decision-making. However, our own investigation of these 37 articles identified only three where the AI developer or theorist actually interacted with governments, meaning these articles also illustrate the “could” and the “should” of AI in public policy, rather than contributing to our knowledge of what is actually happening in this space. This lack of evidence or detail on government use of AI is echoed in a review by Pi (Citation2021), who investigated how Machine Learning (ML), a subset of AI, is being used by governments. Despite identifying many technical papers that refer to applications in government, Pi found very few actual use cases. Instead, most articles examined potential, rather than actual or current, issues with applying AI in the public sector (Note 2).

Our own narrative review of AI in the sustainability policy space elicited similar findings. Through our search of peer-reviewed articles, we identified only five containing cases where AI was being developed or used to address public sustainability problems in conjunction with, or commissioned by, a government. These included a case study on the use of Bayesian Networks (Note 3) in Port Phillip Bay (Melbourne, Australia), which were successfully employed to provide insight into system interdependencies and identify policy options to address unsustainable environmental outcomes (Bolton, Raven, and Mintrom Citation2021); the use of Fuzzy Logic techniques (Note 4) to increase cooperation and transparency among government officials in Slovenian municipalities (Benčina Citation2007); an ML tool successfully employed to save city resources by efficiently and effectively remotely determining potential cases of landlord abuse in New York City (Ye et al. Citation2019); another ML tool used as an early intervention system, predicting when police officers in US police departments were likely to have an “adverse incident,” i.e. to use excessive force (Ackermann et al. Citation2018); and an AI-facilitated risk assessment tool used in the criminal justice sector in Wisconsin (USA) to determine pretrial decisions and post-trial management by allocating offenders a risk score (Hartmann and Wenzelburger Citation2021).
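To make Note 3 concrete, the sketch below shows, under our own illustrative assumptions, the kind of probabilistic reasoning a Bayesian network supports: a deliberately toy two-variable model linking nutrient load to algal blooms, queried both predictively and diagnostically. The variables, structure, and probabilities are invented for illustration and are not the model used by Bolton, Raven, and Mintrom (Citation2021).

```python
# Minimal hand-rolled Bayesian network sketch (no external libraries): a toy
# two-node model of the kind of system-interdependency reasoning described in
# the Port Phillip Bay case. All numbers are invented for illustration.

# Prior over the parent node: P(nutrient load)
p_load = {"high": 0.3, "low": 0.7}

# Conditional probability table for the child node: P(algal bloom | nutrient load)
p_bloom_given_load = {
    "high": {"bloom": 0.8, "no_bloom": 0.2},
    "low": {"bloom": 0.1, "no_bloom": 0.9},
}

def p_bloom() -> float:
    """Predictive query: marginal probability of a bloom, summing over the parent."""
    return sum(p_load[s] * p_bloom_given_load[s]["bloom"] for s in p_load)

def p_load_given_bloom(state: str) -> float:
    """Diagnostic query via Bayes' rule: P(load = state | a bloom is observed)."""
    return p_load[state] * p_bloom_given_load[state]["bloom"] / p_bloom()

if __name__ == "__main__":
    print(f"P(bloom) = {p_bloom():.2f}")                               # 0.31
    print(f"P(high load | bloom) = {p_load_given_bloom('high'):.2f}")  # 0.77
```

Real applications replace these hand-coded tables with many interdependent variables whose probabilities are elicited from stakeholders or learned from data, which is what makes such models useful for exploring policy options.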

These cases of actual AI tool use were important because they identified issues that emerged only when these tools were deployed in complex real-world scenarios. Beyond the could and should of applying AI to sustainability policy problems, these examples illustrated the ethical, technical, and social challenges of implementation.

For example, Ackermann et al. (Citation2018) found deployment of their ML tool for predicting “adverse incidents” in real-life policing contexts required police departments to address new problems, such as technical implementation (accuracy), governance of the system, cost of the system, and trust in the system. In the criminal justice context, Hartmann and Wenzelburger (Citation2021) found that deploying an ML tool to create a risk score for offenders influenced legal decision-making, because the tool was seen as generating useful “evidence” in an uncertain and high-risk scenario. Elected court officials and other legal professionals using the tool therefore perceived that it helped them make the “right” decision and would defer to the scores generated by the AI. Other studies of the ML tool described by Hartmann and Wenzelburger have found that it produces higher false-positive rates for African American offenders than for Caucasian Americans, meaning the tool is unreliable and potentially discriminatory and suggesting that this deference could result in biased outcomes (Mehrabi et al. Citation2022).
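As an illustration of how such disparities are typically diagnosed, the sketch below computes false-positive rates separately for two groups on a handful of synthetic records. The records, group labels, and threshold are invented for illustration; this is not the data or scoring model of the tool discussed above.

```python
# Illustrative group-wise false-positive rate check of the kind surveyed by
# Mehrabi et al. (2022). All records and the threshold below are synthetic.

from collections import defaultdict

records = [
    # (group, risk_score, reoffended)
    ("A", 8, False), ("A", 7, False), ("A", 9, True),  ("A", 4, False),
    ("B", 3, False), ("B", 8, True),  ("B", 4, False), ("B", 6, False),
]
THRESHOLD = 5  # scores above this are treated as "high risk"

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, score, reoffended in records:
    if not reoffended:                 # only actual negatives can yield false positives
        counts[group]["negatives"] += 1
        if score > THRESHOLD:          # flagged high risk despite not reoffending
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    print(f"Group {group}: false-positive rate = {c['fp'] / c['negatives']:.2f}")
# Group A: 0.67, Group B: 0.33 in this toy data; a persistent gap of this kind
# is what makes deference to the score problematic.
```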

Difficulties finding government partners to test these tools with, and a lack of progress from pilot to implemented program, also point to the technical and social challenges in applying AI to sustainability policy problems. While Benčina recorded positive potential for the Fuzzy Logic tool for Slovenian government officials back in 2007, a follow-up article in 2011 reports a trial of similar techniques but notes they were unable to finish one of the case studies or convince any others to participate in the project trial (no indication is given for why this was) (Benčina Citation2011). Bolton, Raven, and Mintrom (Citation2021) were able to successfully complete a pilot study, confirming that the use of Bayesian Networks gave stakeholders a clearer path to tackling complex policy decisions. But, critically, they were unable to confirm whether this AI-enabled pathway led to different policy outcomes. Like the other articles we reviewed, Bolton et al.’s findings confirm we are only just beginning to tease apart the potential of AI from the actual outcomes of AI use in practical policy contexts.

As noted, through our review we anticipated finding articles that provided insight into government use of and interest in AI, especially in the realm of sustainability policy. However, the identified literature provided very few detailed examples of how AI tools are being used by governments or in the context of sustainability policy problems. Insights from the sociology of expectations literature and socio-technical transitions studies tell us how important the broader socio-political context is when assessing the potential impact of AI technologies. The limited case studies we identified illustrate how these complexities play out in practice. These technologies not only affect those they are used on, but also those who are using them. And, in turn, our continuing use of technologies shapes them in ways perhaps unintended by their original designers. Without real-world case studies, these less predictable factors and their outcomes cannot be turned into lessons for others in order to scale up benefits and avoid mistakes.

4.2. Government and policymaker perspectives

Given the lack of published academic literature drawing from use cases of AI tools in sustainability policy or government more generally, we searched for examples of government applications of AI in the grey literature, including national AI strategies and plans.

More than 60 countries around the world have published dedicated AI strategies and many more have comparable policy documents or are in the process of developing them (Oxford Insights Citation2022). An investigation of the national AI plans and strategies from Australia (Australian Government Department of Industry, Science, Energy and Resources Citation2021), the UK (HM Government Citation2021) and English-language strategies from countries within the European Union (Note 5) shows a gap between promise and reality. The main intent of each plan or strategy is to set positive expectations for the use of AI to improve economic and social outcomes for the country. These documents tell a story of high-level excitement about the potential for AI to increase productivity, improve societal wellbeing, and address environmental concerns, but tend to omit detailed descriptions of how AI is or will be used to achieve these outcomes.

The European Commission maintains a dataset, AI-WATCH, that seeks to identify trends in how AI is being developed and used by governments across the EU and in the UK (European Commission, Joint Research Centre Citation2021). A closer investigation of the projects included in the dataset shows that descriptions of the AI tools lack detail, including on whether the tools are in use or were/are effective in addressing the identified public need. Indeed, many of the included resource links (e.g. URLs) are inactive.
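The claim about inactive links can be checked programmatically. The sketch below is a minimal example of how link rot in such a dataset could be detected, assuming a hypothetical CSV export (ai_watch_projects.csv) with a resource_url column; the actual AI-WATCH schema may differ.

```python
# Minimal link-rot check for a hypothetical CSV export of an AI project dataset.
# The file name and column name are assumptions for illustration only.

import csv
import requests

DATASET_CSV = "ai_watch_projects.csv"  # hypothetical local export
URL_COLUMN = "resource_url"            # hypothetical column name

def check_links(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = (row.get(URL_COLUMN) or "").strip()
            if not url:
                continue
            try:
                # HEAD keeps the check lightweight; some servers only answer GET.
                status = requests.head(url, allow_redirects=True, timeout=10).status_code
            except requests.RequestException:
                status = None
            if status is None or status >= 400:
                print(f"Inactive or unreachable: {url} (status={status})")

if __name__ == "__main__":
    check_links(DATASET_CSV)
```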

While this list tells us governments are starting to use AI techniques in some instances, where further information is available most applications appear to be very recent and/or at the pilot stage. There is consequently little evaluative information on how these tools operate once in real-world policy contexts, or on whether they are effective.

4.3. Advisor and consultant perspectives

The market for AI-related solutions for government work is multi-layered and includes many different actors, each with different motivations. These actors go beyond those “producing AI” (i.e. AI researchers, data scientists) and “consuming AI” (i.e. government). Various intermediaries act as educators, knowledge brokers, advisors/consultants, advocates, and institutional and technological integrators. Expert intermediaries include nonacademic analysts and commentators working in think tanks, technology and consulting firms, and advisory bodies (independent or semi-independent from governments). Their research, insights, and recommendations for change are rarely communicated via peer-reviewed academic journals, but through other channels such as reports, public seminars, and advice to government.

Reports from consultancy groups such as the Centre for Public Impact (Citation2017) and the Deloitte AI Institute for Government and Deloitte Center for Government Insights (Citation2021) survey and illustrate the state of play, benefits, and constraints of the adoption and deployment of AI tools at scale by government. Their findings reiterate the point that the work of actually commissioning, designing, and implementing AI in the public sector is still in its infancy, meaning evaluating such deployments is difficult. Deloitte note this gap between the current and desired state of AI capabilities seems to be recognized by government leaders, and argue there is an over-dominance of pilots and a risk of being stuck in “pilot purgatory” (2021). It is an important reminder that within governments there is likely to be an uneven distribution of sophistication, capability, willingness, or general “readiness” for AI tools. In their annual cross-country reports on government AI readiness, Oxford Insights lament that “the pace of change in AI capabilities has not been matched by the response of governments” (2022). They call for a rapid increase in the regulatory sphere of AI governance, a focus on public sector capacity building, and “keeping up with global developments and learning what their peers are working on.” However, it is worth noting that the report itself contains very few examples of actual government use cases. Where they are mentioned, links to additional information provide few details regarding what these projects are and whether they are still active.

5. Lessons for translating the promise of AI into practice

Through our review we found widespread positive expectations for using AI to address policy problems - including in the sustainability realm. These expectations were present in literature from academic AI developers and policy theorists, nonacademic advisors and consultants, and governments themselves. However, our review also identified a lack of information on how governments might actually be using AI, and offered few insights into the impact of AI use in the real world. From the sociology of expectations, we know real-world use is likely to be far more complicated than stories about the potential for AI describe (Brown and Michael Citation2003). From sustainability transitions studies we know that purely “technological fixes” to sustainability problems are unlikely to have a lasting effect and may also have unintended consequences - given the complexity of the systems they will be embedded into (Farla et al. Citation2012). In this section, we synthesize the findings of our literature review into four “lessons” for AI developers and policymakers who want to translate the promise of AI technologies for sustainability policy problems into practice.

5.1. Lesson 1: document and evaluate AI’s application to sustainability policy problems in the real-world

More detailed public case studies that evaluate the actual use of AI in government settings are needed to validate the benefits and highlight challenges of applying AI to sustainability policy problems. Our finding that there is little evidence of studies of this type in the sustainability policy space is echoed through literature reviews that examine other aspects of AI use in government and policy settings (Sousa et al. Citation2019; Valle-Cruz et al. Citation2019; Pi Citation2021). The grey literature further confirms this point, with government documents agreeing there is a need to monitor AI applications within their own nations and learn lessons from other countries (HM Government Citation2021). Reports from advisors and consultants also insist on the importance of sharing and learning lessons from attempts to implement AI in public decision-making (Oxford Insights Citation2022). The few instances we identified where detailed use cases were evaluated (Ackermann et al. Citation2018; Bolton, Raven, and Mintrom Citation2021; Hartmann and Wenzelburger Citation2021), and scandals such as Robodebt in Australia and SyRI in the Netherlands, illustrate the complexities of embedding these technologies in real-world contexts and emphasize the need for evaluated public case studies.

5.2. Lesson 2: focus on existing and mature AI technologies, not speculative promises or external pressures

The rapid and accelerating pace of development in AI is accompanied by future-focused narratives and expectations of what AI “could” or “should” do for public decision-making and sustainability policy problems. Our review found that AI researchers, developers and consultancy firms are advocating for the use of AI in government, and governments are in turn advocating for greater public sector use, while encouraging AI development and use across the private sector. These pressures combine to create an international sense of urgency around AI. Yet in all of these instances, there appears to be little focus on the current state of AI tools. A clear understanding of the limitations of existing AI tools, as well as appropriate training on how to use them, will enable governments, policy and decision makers to separate the hype about future benefits of AI development and use from actual benefits available now.

5.3. Lesson 3: start with the problem to be solved, not the technology to be applied

Governments across the world continue to emphasize the positive potential of AI to address or even “solve” economic, environmental and social problems. Although these same governments also raise concerns about possible negative effects of AI and the need to control these technologies and their use, there remains a general sense that not rapidly implementing AI into their processes could mean “missing out” on benefits or “falling behind” competing countries (Digital Transformation Agency Citation2023; Sunak Citation2023; The White House Citation2023). However, this emphasis on competition can mean discussions about complex policy issues focus on finding novel AI solutions, rather than unpacking the underlying policy problem and objectives (Brown and Michael Citation2003).

If AI developers and policymakers want to see advanced data science techniques used in a constructive way with positive end results, they must start by defining the policy problem and then look to see which, if any, forms of AI are suitable. This approach is consistent with the principle of “technological-neutrality,” based on the idea of being open to finding the “best” overall solution for the community, regardless of what technology is used (Briglauer, Stocker, and Whalley Citation2020). Just as no single policy lever or regulatory approach will be the “best” tool for addressing every problem, or every aspect of a complex problem, no single type of technology will always be the answer.

5.4. Lesson 4: anticipate and adapt to the complexity of sustainability policy problems

Successful development and implementation of AI technologies is dependent on non-technological policy contexts. For sustainability policy problems, the heterogeneous and dynamic operating environments in which technologies will be embedded mean that adaptation to context is mandatory, not optional. As referenced in Lesson 3, there is a common concern across these literatures that governments are not moving fast enough to implement and apply AI technologies. Governments, policy and decision makers in the public sector must not fall into the trap of thinking these technologies are a quick fix for complex sustainability problems. We argue discussion, lesson sharing, and continuous evaluation, rather than speed and competition, should be the core considerations moving forward.

6. Limitations and future research

The aim of our review was to understand if and how AI is being applied to sustainability policy problems in conjunction with, or commissioned through government. We wanted to gain insight into how these tools are being applied, and what impacts they have. We were also interested in the overarching narratives surrounding the use of AI in the realm of sustainability policy, and policy or government settings more broadly. Given this wide scope, and our inclusion of grey literature, our focus here was not to undertake a systematic review, but to examine the landscape and identify crucial gaps between the promise of AI for sustainability policy and the reality and/or effects of the use of AI in these complex settings.

Having established these broad narratives about the “could” and the “should” of AI for sustainability problems as they appear across three perspectives, future research could undertake more in-depth analysis by focusing on specific aspects of sustainability policy. Further, adding to findings from Sousa et al. (Citation2019), Valle-Cruz, García-Contreras, and Muñoz-Chávez (Citation2022), and Pi (Citation2021), we call for empirical research that provides insight into how AI technologies actually affect sustainability problems in government and public policy settings, with recourse to real world case studies.

7. Conclusion

There is no doubt that both academic and nonacademic interest in AI, and in AI for the public sector, has exploded in the past decade. Like Sousa et al. (Citation2019) and Valle-Cruz, García-Contreras, and Muñoz-Chávez (Citation2022), we observed a steady increase in peer-reviewed literature on AI and the public sector and on AI applied to sustainability policy problems. Academic articles from AI developers and theorists illustrate the sheer range of potential applications for AI techniques across many policy problems. Given the breadth of the UN Sustainable Development Goals, this also illustrates enormous potential for delivering on the goal of sustainable development.

Yet, we also found the majority of these articles speak to just that: the potential of AI. Few articles shed light on how AI is actually being developed or used to address public problems in conjunction with, or as commissioned by, government. The articles we identified that did shed such light demonstrate the need for well-evaluated and documented trials to ensure perverse outcomes do not ensue.

Government interest in the use of AI is confirmed through the proliferation of dedicated national AI strategies and similar documentation. It is also highly likely that many governments are already using AI techniques across many policy problems. However, outside of positive but vague claims about collaborations and AI leadership, or public scandals when government use of AI has negative effects, it is difficult to draw any conclusions about the effectiveness of AI techniques.

We see this lack of real-world research as a core roadblock to realizing the promise of AI to advance sustainable futures. Our review shows there is apparent agreement on the enormous potential for AI techniques in public policy to facilitate social, economic, and environmental good. Yet, there is also a broad appreciation that AI, without careful consideration, could be harmful and exacerbate the same problems it has been integrated to help solve. In order to reap the imagined benefits and address the potential issues of AI in the public sector we need more evaluations and public reports of real-world case studies, an appreciation of the current state of AI and its limitations, the ability to fit AI tools to actual policy problems, and a system of implementation that is reflexive and flexible.

Acknowledgements

We gratefully acknowledge the contributions of Professor Peter Stuckey and Dr Ilankaikone Senthooran to an earlier phase of this project.

Disclosure statement

No conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Monash University Faculty of IT under the Sustainability Seed Funding Grant.

Notes

1 Our peer-reviewed article search was a search of Google Scholar in October 2022 using the following search-term combinations: “Public Policy” AND (“Artificial Intelligence” OR “AI”); (“Artificial Intelligence” OR “AI”) AND “Sustainable Development”; “Machine Learning” AND “Public Policy”. We reviewed the abstracts of the first ten pages of results, and further reviewed the content of papers that mentioned a case study.

2 We sought to validate this finding by manually reviewing the first 100 abstracts returned by Google Scholar for the search terms “Machine Learning” and “Public Policy”; this elicited only two results containing case studies that explored AI applied to sustainability policy problems in actual government settings, both from the USA (Ackermann et al. Citation2018; Ye et al. Citation2019).

3 An AI technique that uses probabilistic graphical models to represent dependencies (often interpreted causally) among multiple variables and to update estimates as new evidence is observed.

4 An AI technique that represents imprecise concepts (e.g. “high risk”) as degrees of membership between 0 and 1, enabling rule-based reasoning that imitates human decision making.

5 Available English-language national AI strategies reviewed include Czech Republic; Denmark; Estonia; Finland; Germany; Ireland; Italy; Lithuania; Luxembourg; Malta; Netherlands; Portugal; Sweden; UK.

References

  • Ackermann, K., J. Walsh, A. De Unánue, H. Naveed, A. N. Rivera, S. J. Lee, J. Bennett, et al. 2018. “Deploying Machine Learning Models for Public Policy: A Framework.” In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 15–22.
  • Alsharkawi, A., M. Al-Fetyani, M. Dawas, H. Saadeh, and M. Alyaman. 2021. “Poverty Classification Using Machine Learning: The Case of Jordan.” Sustainability 13 (3): 1412. https://doi.org/10.3390/su13031412.
  • Ashrafian, H., and A. Darzi. 2018. “Transforming Health Policy through Machine Learning.” PLoS Medicine 15 (11): e1002692. https://doi.org/10.1371/journal.pmed.1002692.
  • Australian Government Department of Industry, Science, Energy and Resources. 2021. “Australia’s AI Action Plan.” Australian Government Department of Industry, Science, Energy and Resources.
  • Basuchoudhary, A., and J. T. Bang. 2018. “Predicting Terrorism with Machine Learning: Lessons from “Predicting Terrorism: A Machine Learning Approach.” Peace Economics, Peace Science and Public Policy 24 (4): 20180040. https://doi.org/10.1515/peps-2018-0040.
  • Benčina, J. 2007. “The Use of Fuzzy Logic in Coordinating Investment Projects in the Public Sector.” Zbornik Radova Ekonomskog Fakulteta u Rijeci: Časopis Za Ekonomsku Teoriju i Praksu 25 (1): 113–140.
  • Benčina, J. 2011. “Fuzzy Decision Trees as a Decision-Making Framework in the Public Sector.” Yugoslav Journal of Operations Research 21 (2): 205–224.
  • Bolton, M., R. Raven, and M. Mintrom. 2021. “Can AI Transform Public Decision-Making for Sustainable Development? An Exploration of Critical Earth System Governance Questions.” Earth System Governance 9: 100116. https://doi.org/10.1016/j.esg.2021.100116.
  • Borup, M., N. Brown, K. Konrad, and H. Van Lente. 2006. “The Sociology of Expectations in Science and Technology.” Technology Analysis & Strategic Management 18 (3-4): 285–298. https://doi.org/10.1080/09537320600777002.
  • Briglauer, W., V. Stocker, and J. Whalley. 2020. “Public Policy Targets in EU Broadband Markets: The Role of Technological Neutrality.” Telecommunications Policy 44 (5): 101908. https://doi.org/10.1016/j.telpol.2019.101908.
  • Brown, N., and M. Michael. 2003. “A Sociology of Expectations: Retrospecting Prospects and Prospecting Retrospects.” Technology Analysis & Strategic Management 15 (1): 3–18. https://doi.org/10.1080/0953732032000046024.
  • Centre for Public Impact. 2017. “Destination Unknown: Exploring the Impact of Artificial Intelligence on Government.” Working Paper. Centre for Public Impact (A BCG Foundation), September. https://www.centreforpublicimpact.org/assets/documents/Destination-Unknown-AI-and-government.pdf.
  • Deloitte AI Institute for Government and Deloitte Center for Government Insights. 2021. “Scaling AI in Government: How to Reach the Heights of Enterprisewide Adoption of AI.” Deloitte Insights, December 13. https://www2.deloitte.com/us/en/insights/industry/public-sector/government-ai-survey.html.
  • Digital Transformation Agency. 2023. “The AI in Government Taskforce: examining use and governance of AI by the APS.” September 20. Commonwealth of Australia. https://www.dta.gov.au/blogs/ai-government-taskforce-examining-use-and-governance-ai-aps.
  • Dwivedi, Y. K., L. Hughes, E. Ismagilova, G. Aarts, C. Coombs, T. Crick, Y. Duan, et al. 2021. “Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy.” International Journal of Information Management 57: 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002.
  • European Commission, Joint Research Centre (JRC). 2021. “Selected AI Cases in the Public Sector” [Dataset]. European Commission, Joint Research Centre (JRC). http://data.europa.eu/89h/7342ea15-fd4f-4184-9603-98bd87d8239a.
  • Farla, J. C. M., J. Markard, R. Raven, and L. E. Coenen. 2012. “Sustainability Transitions in the Making: A Closer Look at Actors, Strategies and Resources.” Technological Forecasting and Social Change 79 (6): 991–998. https://doi.org/10.1016/j.techfore.2012.02.001.
  • Goralski, M. A., and T. K. Tan. 2020. “Artificial Intelligence and Sustainable Development.” The International Journal of Management Education 18 (1): 100330. https://doi.org/10.1016/j.ijme.2019.100330.
  • Greenhalgh, T., S. Thorne, and K. Malterud. 2018. “Time to Challenge the Spurious Hierarchy of Systematic over Narrative Reviews?” European Journal of Clinical Investigation 48 (6): e12931. https://doi.org/10.1111/eci.12931.
  • Hartmann, K., and G. Wenzelburger. 2021. “Uncertainty, Risk and the Use of Algorithms in Policy Decisions: A Case Study on Criminal Justice in the USA.” Policy Sciences 54 (2): 269–287. https://doi.org/10.1007/s11077-020-09414-y.
  • HM Government. 2021. “National AI Strategy.” https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1020402/National_AI_Strategy_-_PDF_version.pdf.
  • Isabelle, D. A., Y. Han, and M. Westerlund. 2022. “A Machine-Learning Analysis of the Impacts of the COVID-19 Pandemic on Small Business Owners and Implications for Canadian Government Policy Response.” Canadian Public Policy. Analyse de Politiques 48 (2): 322–342. https://doi.org/10.3138/cpp.2021-018.
  • Kankanhalli, A., Y. Charalabidis, and S. Mellouli. 2019. “IoT and AI for Smart Government: A Research Agenda.” Government Information Quarterly 36 (2): 304–309. https://doi.org/10.1016/j.giq.2019.02.003.
  • Khamis, A., H. Li, E. Prestes, and T. Haidegger. 2019a. “AI: A Key Enabler for Sustainable Development Goals: Part 2 [Industry Activities].” IEEE Robotics & Automation Magazine 26 (4): 122–127. https://doi.org/10.1109/MRA.2019.2945739.
  • Khamis, A., H. Li, E. Prestes, and T. Haidegger. 2019b. “AI: A Key Enabler of Sustainable Development Goals, Part 1.” IEEE Robotics & Automation Magazine 3: 95–102.
  • Köhler, J., F. W. Geels, F. Kern, J. Markard, E. Onsongo, A. Wieczorek, F. Alkemade, et al. 2019. “An Agenda for Sustainability Transitions Research: State of the Art and Future Directions.” Environmental Innovation and Societal Transitions 31: 1–32. https://doi.org/10.1016/j.eist.2019.01.004.
  • Markard, J., R. Raven, and B. Truffer. 2012. “Sustainability Transitions: An Emerging Field of Research and Its Prospects.” Research Policy 41 (6): 955–967. https://doi.org/10.1016/j.respol.2012.02.013.
  • Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. 2022. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54 (6): 1–35. https://doi.org/10.1145/3457607.
  • Mehryar, S., R. Sliuzas, N. Schwarz, A. Sharifi, and M. van Maarseveen. 2019. “From Individual Fuzzy Cognitive Maps to Agent Based Models: Modeling Multi-Factorial and Multi-Stakeholder Decision-Making for Water Scarcity.” Journal of Environmental Management 250: 109482. https://doi.org/10.1016/j.jenvman.2019.109482.
  • Metsker, O., E. Trofimov, and G. Kopanitsa. 2021. “Application of Machine Learning for E-Justice.” Journal of Physics: Conference Series 1828 (1): 012006. https://doi.org/10.1088/1742-6596/1828/1/012006.
  • Ó Fathaigh, R., and N. Appelman. 2020. “Automating Society Report 2020 (Netherlands).” Algorithm Watch, https://automatingsociety.algorithmwatch.org/report2020/netherlands/.
  • Oxford Insights. 2022. “Government AI Readiness Index 2022.” Oxford Insights, December 12. https://www.oxfordinsights.com/government-ai-readiness-index-2022.
  • Pi, Y. 2021. “Machine Learning in Governments: Benefits, Challenges and Future Directions.” JeDEM - eJournal of eDemocracy and Open Government 13 (1): 203–219. https://doi.org/10.29379/jedem.v13i1.625.
  • Radiya-Dixit, E. 2022. A Sociotechnical Audit: Assessing Police Use of Facial Recognition. Cambridge: Minderoo Centre for Technology and Democracy, https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf.
  • Ruggeri, K., A. Benzerga, S. Verra, and T. Folke. 2020. “A Behavioral Approach to Personalizing Public Health.” Behavioural Public Policy 7 (2): 457–469. https://doi.org/10.1017/bpp.2020.31.
  • Schiff, D. S., K. J. Schiff, and P. Pierson. 2022. “Assessing Public Value Failure in Government Adoption of Artificial Intelligence.” Public Administration 100 (3): 653–673. https://doi.org/10.1111/padm.12742.
  • Sousa, W. G. d., E. R. P. d. Melo, P. H. D. S. Bermejo, R. A. Sousa Farias, and A. O. Gomes. 2019. “How and Where Is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda.” Government Information Quarterly 36 (4): 101392. https://doi.org/10.1016/j.giq.2019.07.004.
  • Stein-Perlman, Z., B. Weinstein-Raun, and K. Grace. 2022. “2022 Expert Survey on Progress in AI.” August 3. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/.
  • Sunak, R. 2023. “Prime Minister’s speech on AI: 26 October 2023.” October 26. Gov.UK. https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023
  • The White House. 2023. “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” October 30. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
  • Turner Lee, N., and C. Chin. 2022. “Police Surveillance and Facial Recognition: Why Data Privacy Is Imperative for Communities of Color.” Paper presented at the American Bar Association’s Antitrust Spring Meeting, Washington, DC, April 8. https://www.brookings.edu/articles/police-surveillance-and-facial-recognition-why-data-privacy-is-an-imperative-for-communities-of-color/.
  • United Nations General Assembly. 2015. “Transforming Our World: The 2030 Agenda for Sustainable Development." A/RES/70/1, 21 October, United Nations. https://sdgs.un.org/sites/default/files/publications/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf.
  • Valle-Cruz, D., R. García-Contreras, and J. P. Muñoz-Chávez. 2022. “Mind the Gap: Towards an Understanding of Government Decision-Making Based on Artificial Intelligence.” Republic of Korea, 226–34. https://doi.org/10.1145/3543434.3543445.
  • Valle-Cruz, D., E. A. Ruvalcaba-Gomez, R. Sandoval-Almazan, and J. Ignacio Criado. 2019. “A Review of Artificial Intelligence in Government and Its Potential from a Public Policy Perspective.” In Proceedings of the 20th Annual International Conference on Digital Government Research, 91–99. Dubai, United Arab Emirates: Association for Computing Machinery. https://doi.org/10.1145/3325112.3325242.
  • van Noordt, C., and G. Misuraca. 2022. “Artificial Intelligence for the Public Sector: Results of Landscaping the Use of AI in Government across the European Union.” Government Information Quarterly 39 (3): 101714. https://doi.org/10.1016/j.giq.2022.101714.
  • Vinuesa, R., H. Azizpour, I. Leite, M. Balaam, V. Dignum, S. Domisch, A. Felländer, S. D. Langhans, M. Tegmark, and F. Fuso Nerini. 2020. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals.” Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y.
  • Whiteford, P. 2023. “The Robodebt Royal Commission Will Tell Us Who’s to Blame, but That’s Just the Start.” The Conversation. 7 July https://theconversation.com/the-robodebt-royal-commission-will-tell-us-whos-to-blame-but-thats-just-the-start-208916.
  • Ye, T., R. Johnson, S. Fu, J. Copeny, B. Donnelly, A. Freeman, M. Lima, J. Walsh, and R. Ghani. 2019. “Using Machine Learning to Help Vulnerable Tenants in New York City.” In Proceedings of the 2nd ACM SIGCAS Conference on Computing and Sustainable Societies, 248–258, https://doi.org/10.1145/3314344.3332484.
  • Zhang, D., N. Maslej, E. Brynjolfsson, J. Etchemendy, T. Lyons, J. Manyika, H. Ngo, et al. 2022. “The AI Index 2022 Annual Report.” AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf.
  • Zuiderwijk, A., Y.-C. Chen, and F. Salem. 2021. “Implications of the Use of Artificial Intelligence in Public Governance: A Systematic Literature Review and a Research Agenda.” Government Information Quarterly 38 (3): 101577. https://doi.org/10.1016/j.giq.2021.101577.