Part 1: Theories & Futures in AI Megaprojects and Sustainable Development: Article 1

Whose interests will AI serve? Autonomous agents in infrastructure use

Pages 21-36 | Received 17 Mar 2022, Accepted 19 Sep 2022, Published online: 16 Oct 2022

Abstract

This article examines new challenges for sustainability presented by artificial intelligence (AI) in infrastructure megaprojects. We differentiate between the deliberative processes of infrastructure megaproject construction and the everyday uses of such infrastructure, focusing on the latter as both having major sustainability impacts and presenting distinct opportunities for AI intervention. While AI applications are often imagined as supporting human decision-making processes, we argue that cases in which AI agents act autonomously, without direct human intervention, present novel questions about how to achieve sustainable outcomes. Specifically, we differentiate between the setting of AI objective functions in the development stages and the control of unanticipated behaviours once an AI system can act independently. After illustrating applications of AI in energy grid management and automated driving, we examine how such systems could operate either in support of shared sustainability interests or in pursuit of private ones.

1. Introduction

Infrastructure megaprojects are a major contributor to greenhouse gas emissions, ecosystem degradation, habitat loss and other sustainability impacts. For this reason, they are also among the most important sites for reducing these impacts. Meanwhile, the expanding field of artificial intelligence (AI) has introduced new tools and new hopes in a variety of domains, including infrastructure megaprojects (Greiman 2020), and a broad field of research has examined how AI can be used in support of sustainable development. Recent literature has examined AI’s impacts on the environment in general (Vinuesa et al. 2020; Dauvergne 2020; Crawford 2021) and on urban sustainability in particular (Batty 2018; Nagenborg 2020; Yigitcanlar and Cugurullo 2020; Macrorie, Marvin, and While 2021). Work in this vein has identified potential sustainability benefits and harms of AI, and has called for various measures of public regulation and accountability to produce AI systems that address sustainability, equity and other shared concerns. In that context, the contribution of this article is to differentiate among the sites where such mechanisms for directing AI towards the public interest can be found. We focus on AI that can act independently of people, rather than AI informing human action, and identify the programmed objectives and the actual behaviour of the AI as two distinct sites of control, each of which has its own opportunities and challenges.

In the comparatively straightforward case, AI is used as a decision-support tool for infrastructure development. In such projects, sustainable development’s status as a political endeavour involving the distribution of scarce resources and the balancing of competing goals across populations and over long periods remains clear. AI can be inserted into these existing processes by offering tools to reduce uncertainty and to achieve specified outcomes more effectively, but the ultimate decision-making practices remain situated in socio-political contexts of human decision-makers. We show that the distinct challenge of AI, however, is its ability to act on its own. We call this AI’s autonomy, which does not necessarily imply any human-like intelligence or will, but simply refers to its ability to engage with its environment independently of humans in ways that its creators cannot always anticipate. Efforts to direct such autonomy towards sustainability goals can be divided into two approaches. The first deals with the design and development of the AI system, particularly setting its objective functions. We argue that this presents new levers of control that might be used to direct AI to serve sustainability goals. The second approach recognises that AI agents will always produce unanticipated emergent behaviours, and can be controlled only indirectly. In such circumstances, the autonomy of the AI calls for a regulatory approach capable of bounding such agents’ unanticipated actions. Without such an approach, AI will not necessarily act in the collective interests of sustainability but will instead pursue its own often hidden goals or those of its private programmers.

In this article, we examine these approaches to regulating AI for sustainability as applied to infrastructure megaprojects. We begin by clarifying our focus within the intersection of infrastructure, AI and sustainability in the next section. Rather than focusing on the design and construction of infrastructure, we highlight the continual activity that the infrastructure supports as an important site for improving sustainability impacts. This entails shifting from the slow, deliberative decision-making of a defined group of actors to the more diffuse, more frequent and smaller decisions of infrastructure users. In these decisions, AI is more likely to act independently rather than as a support for human decisions made during infrastructure development. We also emphasise that sustainability is not an easily quantifiable target but an inherently political project of balancing competing objectives. The third section then provides two examples of how AI might act as a decision-maker within infrastructural systems: by managing clean energy grids and by navigating self-driving cars through intersections. The grid example highlights the role of the AI objective function, and the car example shows the role of external control in directing autonomous agents towards collective objectives. Each has sustainability implications, but we examine how an autonomous AI might or might not support shared goals. The concluding section then points to opportunities for regulating AI to promote sustainability.

2. Infrastructure, AI, and sustainability

2.1. Infrastructure production and use

Attention to infrastructure megaprojects often focuses on their construction. The sustainability impacts of the design and delivery phases, including energy usage, greenhouse gas emissions and ecological impacts, are indeed significant, and the complexity of stakeholders, materials and environments in megaproject construction makes sustainability management particularly challenging (Shi, Zuo, and Zillante 2012; Gibbs and O'Neill 2014; Brooks and Rich 2016). The common focus in these phases is on cost, schedule and specification targets, and this can overlook the situatedness of megaprojects within larger contexts and timespans (Dimitriou and Field 2019). Initial project needs and operational costs (Sturup and Low 2019) and how these are monitored over their life cycle (Horne 2009) are well-known issues and can have enormous sustainability consequences. However, it remains difficult to draw boundaries around infrastructure impacts, especially over long periods. As one resource for engineers puts it, dealing with ‘these extra “boxes” means planning with less direct control, and with more outside issues affecting outcomes[, which] means you have to learn to deal with more uncertainty’ (Ainger and Charles 2014, 62).

We take a wider view of infrastructure that includes its use. Applying such a perspective to infrastructure megaprojects allows us to see a project not just as the concrete artefact that is designed, constructed and completed in a bounded time frame, but also as a structure embedded in human activities for many generations, activities that can confound sustainability goals. In sociological perspectives on infrastructure (Star and Ruhleder 1996; Star 1999; Larkin 2013), power grids and transport networks are not simply things that are built and operated. Instead, they are relations, connecting people and things and enabling activities in particular moments. Views informed by ethnography and phenomenology focus on the users of such systems and the particular kinds of use the structures enable, seeing infrastructure as a basis of social interaction and exchange. From this perspective, infrastructure is ‘a constant work in progress that engages numerous agents: civic authorities design and implement infrastructure; designated agencies maintain and repair infrastructure; and ordinary people utilize, modify, ignore or destroy it’ (Smith 2016, 164), and megaprojects are inherently messy and unpredictable (Santamaría 2020).

These analytical lenses sit at two ends of a spectrum. We can call these the ‘hard’ infrastructure of material artefacts and the ‘soft’ infrastructure of their use, a distinction inspired by attention in urban studies to the active and social dimensions of technical infrastructures, especially in the Global South (Simone 2004; Gandy 2005; McFarlane and Rutherford 2008; Monstadt 2009). Their differences work along both temporal and social axes. An analysis of hard infrastructure looks at the design and construction of discrete projects. In these phases, decisions can be slow and deliberate, made within institutional contexts specifying requirements for environmental review and public comment. The relevant actors are engineers, builders and funders, and technical concerns dominate. Soft infrastructure, by contrast, has a less defined scope. As projects connect to other infrastructures and are maintained, modified and augmented, the boundaries of the infrastructural object that were once clear in the construction phase become blurry. The actors are people who operate and interact with infrastructures on an everyday basis, whether as end-users or managers. Their encounters with infrastructure can be considered more routine. The soft aspects of infrastructure have a longer duration, but decisions are more frequent.

To illustrate this distinction, consider a typical freeway megaproject. Transportation planners identify a need for a highway, engineers specify the project and a public process considers community and environmental impacts. Once a project is approved and funding secured, equipment and materials are delivered to the site, and the freeway is built. The project concludes with the opening of the roadway to traffic, and its concrete and asphalt are visible as the hard infrastructure. As soft infrastructure, the completed freeway is now the basis for thousands of travellers to go somewhere every day. Their mobility is enabled by the new freeway segment, but also many other infrastructures: a network of travel destinations connected by freeways, surface streets and parking; global infrastructures for manufacturing, maintaining and fuelling automobiles; social practices for educating and policing drivers; and so on. In the everyday operation of this freeway project, decisions are dispersed and frequent. Drivers did not choose where to put the freeway or how to build it, but they do choose where and when to travel, how fast to drive and when to change lanes.

Clearly, the freeway has sustainability impacts both as the hard infrastructure of its material existence and as the soft infrastructure of the use it supports. The implications for AI also differ, as AI can inform human decisions in the planning and construction phase, but can directly engage in practices of infrastructure use. For example, AI involvement in regulating traffic, suggesting routes that reduce pollution, or helping people to pool cars can help to achieve sustainability goals. If, as Batty (2018) argues, AI is better suited to modelling and acting within routine environments than to making future-oriented planning decisions, then the potential impact of AI in everyday activity is especially important. Infrastructure has a solid structure whose sustainability effects are magnified by the actions that agents, including AI agents, take within it.

2.2. AI decision support and decision making

These two ways of looking at infrastructure help to make a parallel distinction in the uses of AI. The output of an AI can be presented as information for human consumption, as with a web search result, or it can control an actuator that does something automatically, as with a self-driving car. In the former category, well-known examples of AI used for decision support include medical diagnostics, screening resumes in hiring and predicting criminal recidivism for sentencing purposes. Models like these have been subjected to a great deal of scrutiny to identify biases in the training datasets, misalignment between the model’s objective function and its intended purpose, and divergence from the judgments of human experts (Calmon et al. 2017; Wachter, Mittelstadt, and Floridi 2017; Silberg and Manyika 2019; Caton and Haas 2020; Mehrabi et al. 2021). A recurring question of such examinations is to what degree humans should trust the recommendations of the AI, but regardless, a human is still in the loop. In most cases, the AI cannot unilaterally initiate a patient’s cancer treatment, make a job offer or send someone to prison. It remains a tool, not unlike many others, that can potentially improve human decisions.

One recent survey of the uses of AI in megaprojects offers examples of decision-support functions, including ‘project and site selection, design optimization, risk management, cost estimation management, schedule performance, and quality assurance’ (Greiman 2020, 623), as areas where AI holds promise. The core project management concerns of cost, schedule and specification are an easy target for AI applications, which promise to make sense of input from the multiplicity of interdependent actors, systems and processes whose complex interactions are a perennial challenge for project managers. The hope is that such tools can inform decisions that improve project delivery by such measures, and this may well be true. Still, we suggest that the use of AI in these instances is not qualitatively different from other kinds of research, modelling and analysis in supporting project decisions. AI offers new inputs and insights from its processing of disparate and heterogeneous data, but the ultimate action remains a result of complex multi-stakeholder decision-making processes. This continuity does not make the introduction of AI in decision support straightforward or unproblematic, as the growing conversation about algorithmic transparency and accountability makes clear (see, e.g., the ACM Conference on Fairness, Accountability and Transparency, first held in 2018). Further, algorithmic predictions exert a kind of control even when a human ultimately decides (Nowotny 2021). However, these concerns are different from those of AI that can act independently of human intervention.

An AI that does not just inform a human but can act on its own has a much more direct influence. Self-driving cars do not simply provide a notification about an upcoming obstacle and recommend a course of action; they brake automatically. This is a different kind of agency with its own set of considerations. For example, when approaching merging traffic, a self-driving car on the highway can either slow down to let cars merge in front, or accelerate to let them merge behind, a decision that the merging drivers might characterise as ‘nice’ or ‘rude.’ But who chose to be nice or rude? In this example, the decision was made not by the driver but by some combination of programming decisions, training data and sensor input within the car itself. The ambiguous location of the decision is a central worry for critics focused on the unintended consequences of AI (Campolo et al. 2017; Raso et al. 2018).

Decision-making within the scope of hard infrastructure is supported by AI tools, but remains a deliberative process embedded in human institutions. The large scale and relative infrequency of these decisions and their major social and political implications make them poorly suited for full automation. Moreover, these cases lack the opportunities for rapid iteration and feedback needed to train AI models. They must instead learn from artificial models or historical data, which have their own limitations and biases (Nishant, Kennedy, and Corbett 2020). On the other hand, the everyday operation and use of infrastructures, where decisions today are more often made by an individual human rather than by dispersed stakeholders, are the arenas where we are more likely to see AI systems that act as autonomous agents. In situations where the AI is able to act without human intervention, we find more reason for concern about whether AI will act in support of sustainability goals.

2.3. Regulation for sustainability

Although the primary purpose of this article is to examine different forms of AI regulation, it is also important to point out the challenge of specifying the content of sustainability goals within such regulation. Existing efforts to improve the sustainability of infrastructure megaprojects already involve both the construction and the use of infrastructures. In the construction phase, environmental impact assessments are an example of a tool for identifying long-term and distributed project impacts that might not appear in studies focused on construction cost and feasibility. The use of infrastructure is also a subject of sustainability-focused regulations, which might, for example, set utility rates to encourage sustainable consumption.

Applying any of these regulatory interventions to the pursuit of sustainability goals is quickly complicated by the fuzziness of sustainability as a concept. Given that there is no single measure of ‘sustainability,’ there is no straightforward way to translate this multi-dimensional ideal into a given policy. For decades, the sustainable development literature has underscored the three goals of sustainability – environment, economy and equity – and their often conflicting interactions. More recently, scholars have shown that the UN’s Sustainable Development Goals carry inherent contradictions, such that the pursuit of one often threatens another (Hickel 2019; Bennich, Weitz, and Carlsen 2020; Frame, McDowell, and Fitzpatrick 2022). Such contradictions are sorted out in political processes that produce investments or regulations favouring a certain outcome. This resolution must happen in AI as well, since its models are built around quantifiable objective functions that must specify what, exactly, a successful outcome looks like. We do not seek here to resolve the tensions of sustainability or to offer a new definition or measure of sustainability to be used in AI. Instead, we highlight this complexity to suggest that the pursuit of sustainable development must remain within a political arena, where competing human values and visions can be made visible and sorted out.

Given this broad scope and complexity of sustainable development concerns, AI is unlikely to directly supplant human decision-making to the degree it does in self-driving cars or medical diagnostics. Even though those domains are also quite complex, the objectives – avoid collisions, identify malignant tumours – are much more straightforward. One review of AI for sustainability begins with the 169 targets of the Sustainable Development Goals and finds literature arguing that AI could support 134 of these (Vinuesa et al. 2020). However, each of the reviewed studies is linked to a narrow set of sustainability targets, avoiding the question of how to resolve conflicting objectives. Moreover, they represent AI providing information that must be acted on by people, whose political and economic situations might or might not encourage the decisions that the AI has identified as ‘sustainable.’ In one of the article’s cited examples, AI techniques could analyse satellite imagery of vegetation cover to better understand soil depletion (Mohamadi, Heidarizadi, and Nourollahi 2015), but this example says nothing of the mechanisms for action that would be needed to prevent or remediate such depletion once identified. When AI is the mechanism, on the other hand, we need to identify new ways to make sure that sustainability objectives can be served. The examples in the next section illustrate this.

3. AI as a new decision maker

When AI is used for decision support in producing infrastructure, it can fit naturally into the existing political procedures for evaluating the relative importance of various objectives, including those for sustainability. However, the regulatory approach becomes more complicated when AI acts autonomously, as the following two examples illustrate. The technical details of these cases remain speculative, but are based on actually existing or proposed applications. The selected domains, the electrical grid and driving, both have well-known sustainability impacts and are the subjects of much AI research. The purpose of these brief examples is to illustrate how autonomous AI agents could lead to unsustainable outcomes. The energy example highlights the challenges of black-boxed objective functions, while the driving example illustrates the complexity of agent interactions. These correspond with two different sites for AI regulation: the design of the systems and the behaviour of autonomous agents.

There are several other urban and infrastructural domains in which the role of AI is emerging, including policing, freight transportation, real estate, urban agriculture and more (Dauvergne 2020; Cugurullo 2020; Macrorie, Marvin, and While 2021). However, real-world examples of AI-driven decision-making systems are relatively new, and their current limited deployment obscures some of the complications likely to arise with more widespread deployment. One of the key technologies used is reinforcement learning (Watkins and Dayan 1992), in which a policy governing the decisions is optimised using feedback from the environment so as to maximise the objective function set by the designer or engineer. This method, however, relies on the environment being constant, in the sense that the rules do not change and parameters vary within a given range; otherwise, the policy is likely to fail to converge (Russell and Norvig 1995). Even more challenging are situations in which multiple agents are trying to improve their policies while simultaneously exploring the environment, since each agent’s exploration now becomes an additional source of noise. Multiple approaches exist to remedy these problems (Tan 1993; Buşoniu, Babuška, and De Schutter 2010), which, in turn, raise their own problems (Hu and Wellman 2003; Wolpert and Tumer 2002). Consequently, when AI is deployed, it is typically not learning continually during deployment, but was optimised beforehand. Further, during the AI’s training, the range of variation of its environment was assumed to be constant, and not populated with other AI decision makers. Thus, real-world examples do not reflect the true complexity of the independently acting AI decision makers that we might soon be faced with.
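To make the mechanics concrete, here is a minimal sketch of tabular Q-learning, the algorithm introduced in the Watkins and Dayan (1992) paper cited above. The environment, with its states, actions and rewards, is a hypothetical stand-in rather than any system discussed in this article. Note that the update assumes the step function is stationary; if its rules drifted over time, or if transitions depended on other agents that are themselves learning, the convergence property discussed above would no longer hold.

```python
# Minimal tabular Q-learning sketch; the environment is a hypothetical toy.
import random

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration

# Q-table: estimated long-run value of each action in each state.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy stationary environment: rewards action 1 in even states and
    action 0 in odd states, then jumps to a random next state."""
    reward = 1.0 if (action == 1) == (state % 2 == 0) else 0.0
    return random.randrange(N_STATES), reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the estimate towards reward + discounted future value.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # the learned policy prefers action 1 in even states, action 0 in odd ones
```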

3.1. Smart electrical grids

Electricity grids are a prototypical infrastructure; they have accumulated over many decades and do not have centralised control. They are complex networks of generators, consumers and distributors of electricity. A major expansion of renewable energy sources in the grid is essential for reducing carbon emissions. The shift in this direction has introduced power generating units, such as wind turbines and photovoltaic arrays, that are smaller, more distributed and more variable than traditional power plants. New on-site battery storage technologies have also been incorporated into the grids. Together, these changes require new forms of energy grid management that differ from strategies for operating grids with traditional power plants (U.S. Department of Energy 2019). The planning, designing and building of this infrastructure could already have included the help of AI tools to augment human decisions, as discussed above.

Potential uses of AI acting autonomously in energy grids have broader implications. As sensors produce new real-time visibility into these grid components, this big data promises ‘big insights’ (Zhou, Fu, and Yang 2016) into energy management. In simple terms, grid management has three broad objectives: sustainability, cost and reliability (Cheng and Yu 2019). Energy managers pursuing sustainability goals want to maximise the proportion of energy produced by renewables, minimise emissions of greenhouse gases and other pollutants, balance electricity production with demand to minimise overproduction, and avoid energy losses in distribution. Financial goals revolve around minimising costs to utilities and consumers. Reliability goals are based on minimising system downtime, containing outages to the fewest number of customers and avoiding damage to equipment (Zhou, Fu, and Yang 2016; Cheng and Yu 2019).

AI solutions have been proposed to help utilities achieve all these objectives, many of them in predictive analytics. Supply and demand predictions can inform generation and distribution decisions to avoid waste. For example, neural networks can be used to develop more accurate hourly wind forecasts that can be used to predict power generation on a wind farm (Barbounis et al. 2006). On the demand side, neural networks are also used to forecast hourly electric loads based on historical and real-time data (Hahn, Meyer-Nieberg, and Pickl 2009). If such predictions are simply one piece of information considered by an energy manager working to balance energy loads over the course of the day, then the AI is acting as a form of decision support, with a human still in the loop. Decisions about bringing generators online, redirecting power distribution or storing energy in batteries (Kumar et al. 2021) are frequent and relatively localised among a handful of decision-makers.
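As one way to picture this decision-support mode, here is a minimal sketch of hourly load forecasting with a small neural network, broadly in the spirit of the forecasting work cited above. The synthetic sinusoidal load, the cyclic hour-of-day features and the network size are assumptions made for the example, not details of any cited model.

```python
# Hypothetical sketch: forecasting hourly electric load with a small neural
# network. The synthetic data stands in for real metered demand.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Synthetic demand: a daily cycle plus noise (a stand-in for historical load).
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Features: hour-of-day encoded cyclically, plus the previous hour's load.
X = np.column_stack([
    np.sin(2 * np.pi * (hours % 24) / 24),
    np.cos(2 * np.pi * (hours % 24) / 24),
    np.roll(load, 1),           # lagged load
])[1:]                          # drop the first row, whose lag wraps around
y = load[1:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:-24], y[:-24])        # train on all but the final day
forecast = model.predict(X[-24:])  # predict the held-out 24 hours
print(forecast.round(1))
```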

Given these predictive models forecasting hourly changes in energy supply and demand, a utility might also choose to bypass the human manager and instead deploy an AI to automatically control the grid (Selim et al. 2021). Imagine, in this case, an AI that uses predictions for consumption and wind power generation to forecast how much of an area’s electricity demand can be met by a wind farm. The objective function of these predictions is to minimise discrepancies between the predicted and observed values. If the AI is to act on the grid, a different objective function must be specified. The AI might be asked to maximise the proportion of energy supply coming from renewables. Or, it might be asked to minimise the difference between the energy supplied by the utility and that consumed by its customers. At what point should wind-generated electricity be diverted to battery storage, where some percentage will be lost, rather than sold to the grid, where the price might be low? What should it do when the goal of generating clean energy comes into conflict with the goal of a reliable power supply?
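To show how much hangs on this choice, below is a minimal sketch of two candidate objective functions for such a hypothetical grid-control AI, plus a weighted blend of the two. The function names, the MWh quantities and the weight w_green are illustrative assumptions, not drawn from any cited system; the point is that choosing and weighting the objectives encodes exactly the trade-offs just described.

```python
# Two candidate objective functions for a hypothetical grid-control AI.
# All quantities (MWh per interval) and the blending weight are assumptions.

def renewable_share(renewable_mwh, total_mwh):
    """Objective A: maximise the fraction of supply coming from renewables."""
    return renewable_mwh / total_mwh

def supply_demand_fit(supplied_mwh, demanded_mwh):
    """Objective B: minimise the mismatch between supply and demand
    (returned negated, so that maximising it minimises the mismatch)."""
    return -abs(supplied_mwh - demanded_mwh)

def blended_objective(renewable_mwh, total_mwh, demanded_mwh, w_green=0.5):
    """A weighted compromise. The weight w_green, and even the fact that the
    two terms sit on incommensurate scales, are value judgments that the
    designer makes, deliberately or not."""
    return (w_green * renewable_share(renewable_mwh, total_mwh)
            + (1 - w_green) * supply_demand_fit(total_mwh, demanded_mwh))

# The same grid state scores very differently depending on the weight chosen.
state = dict(renewable_mwh=40, total_mwh=100, demanded_mwh=95)
print(blended_objective(**state, w_green=0.9))  # rewards the renewable share
print(blended_objective(**state, w_green=0.1))  # punishes the 5 MWh mismatch
```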

These decisions of the AI are controlled by the objective function used to train it. The term objective function is used loosely here. In genetic algorithms or other optimisation problems, it is used to quantify the performance of a solution. In machine learning, the objective of a classifier is more often controlled by the training dataset and how it is annotated. Regardless, these training data determine the objective to achieve, hence our generalisation of the term. Depending on this objective function and its experiences during training, the AI will prefer one outcome over another. But who specified this outcome?

At one level, engineers implemented these objective functions, perhaps with sustainability goals in mind. In this sense, greater attention to how the specification of objective functions and the broader practices of developing and training an AI support collective goals promises a novel, more powerful approach to regulating complex infrastructural systems for sustainability. On the other hand, understanding the objective function does not fully explain how the AI made its decision. In fact, most advanced deep neural network technology is by construction a black box (Russell and Norvig 1995; LeCun, Bengio, and Hinton 2015). This property has triggered demands for transparent AI (Castelvecchi 2016; Wachter, Mittelstadt, and Floridi 2017). As such, it remains unknown to what degree the objective function is translated into proper decision making, a flaw recognised before in other contexts (Crawford 2016; Bellamy et al. 2019), leading, for example, to discrimination against female applicants in hiring decisions (Kodiyan 2019). Similarly concerning are surprising innovations of AI systems, such as cheating (Chu, Zhmoginov, and Sandler 2017; Coldewey 2018) or developing new forms of communication (Lewis et al. 2017).

The disconnect between the definition of the objective function and its implementation by an AI, as well as the lack of oversight over those objectives, is at the heart of a new problem. This disconnect allows autonomous AI agents to act in ways that cannot be fully anticipated or controlled. In the case of the smart grid, we can see how the AI manager acts on the power distribution, and we might also be able to see the objective function guiding it. However, we cannot see why the AI acts as it does.

3.2. Self-driving cars

Self-driving cars, or autonomous vehicles (AVs), by definition make decisions automatically rather than providing information in support of a human decision-maker. AVs are complex systems based on various forms of AI whose models have been trained on actual road experience and bounded by defined rules. An AV must both predict something, for example, whether or not a pedestrian at the edge of the curb is likely to walk into the street, and then take an appropriate action, such as slowing down the car. This cycle must happen quickly and automatically, which is both a computational challenge and a source of anxiety to road users who feel they are losing control.

AVs are often considered within a broader scope of networked and sensor-equipped infrastructures and hardware components, the so-called Internet of Things, allowing communication among vehicles and their environment. Within such ‘intelligent transport systems,’ AVs promise to support transportation sustainability by coordinating traffic more efficiently (United Nations 2012; Balasubramaniam et al. 2017; Chehri and Mouftah 2019; Guevara and Auat Cheein 2020). However, the development of AVs has focused much more on technical issues than on the social and political factors that are necessary to implement sustainable AV practices (Mora, Wu, and Panori 2020; Cugurullo et al. 2021). Similarly, the smart shared infrastructure needed to realise the full benefits of AVs has received far less attention than individual manufacturers’ development of AVs that drive independently of any coordinating infrastructure (Duvall et al. 2019). Critics argue that local policy frameworks allow AVs to prioritise individual mobility goals, reinforcing the existing sustainability problems of automobility, while failing to incorporate them into systemic mobility initiatives for sustainability like public transit and EV charging infrastructure (Grindsted et al. 2022). Others have suggested that without adequate policy interventions, the direct benefits of AVs that appear in models, such as intersection flow efficiency improvements, could be erased by the systemic effects that emerge as a second-order product of the technology’s deployment within a complex environment (Bahamonde-Birke et al. 2018).

One key to understanding how mobility infrastructures built on autonomous AI can support any given sustainability goal is the question of how AV agents interact with one another. This question motivates algorithms for coordinating intersection behaviour, where more efficient movement of vehicles through an intersection using infrastructure and vehicle communications promises to reduce fuel consumption and greenhouse gas emissions (Lee et al. 2013). In a model without coordination, AVs must leave larger buffers for the uncertain behaviour of other vehicles. If vehicles directly communicate their intentions, a coordination system can narrow those gaps, improving overall efficiency (Balasubramaniam et al. 2017). These coordination problems can be described mathematically as an effort to maximise a specified metric, e.g., vehicle throughput, within certain constraints, including collision avoidance, road geometry and the requirement that vehicles must eventually reach their destination (Hult et al. 2016). Note that even when the aggregate outcome is better in these models, the outcome for the target criterion could be worse for a given player in the coordinated game than it would have been if the player could act unilaterally. For example, a car might be directed to slow down to let others pass ahead of it. In the intersection coordination problem, the sustainability goal of more efficient movement of vehicles through the intersection is better achieved with cooperation, enacted automatically, than without it.
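To see how the objective and the constraints interact, consider a deliberately simplified sketch of intersection coordination as one-dimensional scheduling: vehicles cross in arrival order, separated by a minimum safety gap, and the system tallies total delay. The arrival times and gap values are hypothetical, and formulations such as Hult et al. (2016) solve a much richer optimal-control problem; the sketch only illustrates that tightening the enforced gap, which coordination makes safe, raises throughput.

```python
# Toy model: intersection crossing as constrained scheduling. Vehicles,
# arrival times and gap sizes are hypothetical.

def schedule(arrivals, gap):
    """Assign each vehicle the earliest crossing time at least `gap` seconds
    after the previous crossing; return (crossing_times, total_delay)."""
    times, last = [], float("-inf")
    for t in sorted(arrivals):
        cross = max(t, last + gap)   # cannot cross before arriving, or too soon
        times.append(cross)
        last = cross
    delay = sum(c - a for c, a in zip(times, sorted(arrivals)))
    return times, delay

arrivals = [0.0, 0.5, 1.0, 4.0]
# Coordinated vehicles can safely use a small gap; uncoordinated ones must
# leave a larger buffer for uncertain behaviour, increasing total delay.
print(schedule(arrivals, gap=2.0))   # ([0.0, 2.0, 4.0, 6.0], 6.5)
print(schedule(arrivals, gap=4.0))   # ([0.0, 4.0, 8.0, 12.0], 18.5)
```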

This example highlights the problem arising from introducing autonomous decision makers, with unknown objectives, into a mix with humans in the context of sustainability or common regulations. A formal model of this situation is the public goods game and the resulting tragedy of the commons (Hardin 1968). In this game, each participant needs to decide between adding money to a common pool (cooperation) or withholding that contribution (defection). The pooled money typically increases due to some synergy and is then redistributed to all participants. Depending on the degree of synergy, the defecting players will always receive more, making defection the evolutionarily stable strategy (Hintze and Adami 2015). This model has been used as a metaphor for social cooperation, and the question it raises is how to encourage people to contribute to the common good rather than defect. Generally, institutions (Yamagishi 1986; Sigmund et al. 2010) and incentives (Fowler 2005; Perc and Szolnoki 2012; Hintze et al. 2020) can be used to stabilise the system.
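The payoff structure behind this tragedy fits in a few lines. Below is a minimal sketch of the standard public goods payoff calculation; the endowment, synergy factor and group composition are illustrative assumptions. Because the synergy factor is below the group size, the lone defector out-earns every cooperator, which is what makes defection stable.

```python
# Standard public goods game payoffs; the endowment, synergy factor and
# group make-up are illustrative assumptions.

def payoffs(cooperates, endowment=1.0, synergy=3.0):
    """cooperates: one boolean per player (True = contribute the endowment).
    The pool is multiplied by the synergy factor and shared equally."""
    n = len(cooperates)
    pool = synergy * endowment * sum(cooperates)
    share = pool / n
    # Defectors keep their endowment and still receive an equal share.
    return [share + (0.0 if c else endowment) for c in cooperates]

# Three cooperators and one defector: the defector earns 3.25, each
# cooperator only 2.25, so defection pays whenever synergy < group size.
print(payoffs([True, True, True, False]))
```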

However, these suggestions are based on the idea that participants have private goods and a selfish objective to increase them. As pointed out before, the goals of AI agents are less clear. If the AVs in the intersection act without coordination, then an entering car might contribute to the common good by yielding to another car, or it might defect by prioritising its own speed. Consequently, the suggestions for remedying this social tragedy might not apply when the nature of the players changes. On the other hand, this might present an opportunity to improve the situation. If the objectives of AI agents were properly controlled, they could be made to act in the interest of the public good, possibly shifting the propensity of other actors in this game to cooperate as well.

4. Conclusion: regulating AI for sustainability

Like any other technology, AI for infrastructure is not solely technical, but is embedded within existing social structures (Nagenborg 2020; Cugurullo 2020; Macrorie, Marvin, and While 2021). As such, directing AI towards a collective goal like sustainability will always be a political process. In this article, we have highlighted some familiar and some novel challenges that AI presents for regulation in the public interest.

Efforts to promote more sustainable infrastructure megaprojects can use AI to inform human decisions or can deploy AI that acts on its own. The use of AI for decision support raises a host of concerns about algorithmic transparency and encoded biases; despite these challenges, such practices are nonetheless largely located within existing frameworks of human decision-makers and institutions. AI directed at megaproject design and development largely falls in this category, where AI recommendations are inputs into political processes for balancing competing objectives. However, an AI acting autonomously raises new issues, which are especially apparent when it directs the everyday use and operation of infrastructures. Research into AI ethics has identified three sources of potential negative impacts of AI: the design of the systems, the data they are trained on, and the ‘complex interactions’ through which the AI ‘will interact with the environment in ways that produce outcomes that might not have been foreseen’ (Raso et al. 2018, 15). In certain ways, it is tempting to see the ethical concerns of AI as not fundamentally different from those of other technologies, in that it is simply a tool that can be used for good or for bad purposes (Bryson and Kime 2011). However, the complex and unforeseen interactions of an apparently autonomous agent do make AI different from technologies that are either closely associated with human intention or act independently in ways that are more deterministic and predictable.

The illustrations of the smart energy grid manager and AV intersection coordination have suggested two challenges for promoting sustainability using AI that acts autonomously: the design of the system itself, which specifies an objective function that might or might not be aligned with a sustainability goal, and the behaviour of the AI agent, especially in the context of its interactions with other agents. An effective regulatory approach must see these together. Transparency in defining the goals of AI can help bring to light the sustainability trade-offs we expect it to make when operating independently, but opening these black boxes can be difficult when they are built by private interests. Regulating the behaviour of the AI after its development is also challenging, since the AI can act in ways we do not anticipate. However, this kind of regulation also offers an opportunity to promote the common-good goals of sustainability. Even if AIs are pursuing objectives in their private interests, we can restrict their behaviours to support common goals instead. AIs can often be controlled in ways that human agents cannot, and this opens up new ways to make infrastructural activity more sustainable.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Ainger, C. M., and M. Charles. 2014. Sustainable Infrastructure: Principles into Practice. Delivering Sustainable Infrastructure. London: ICE Publishing.
  • Bahamonde-Birke, Francisco J., Benjamin Kickhöfer, Dirk Heinrichs, and Tobias Kuhnimhof. 2018. “A Systemic View on Autonomous Vehicles.” disP - The Planning Review 54 (3): 12–25. https://doi.org/10.1080/02513625.2018.1525197.
  • Balasubramaniam, Anandkumar, Anand Paul, Won-Hwa Hong, HyunCheol Seo, and Jeong Hong Kim. 2017. “Comparative Analysis of Intelligent Transportation Systems for Sustainable Environment in Smart Cities.” Sustainability 9 (7): 1120. https://doi.org/10.3390/su9071120.
  • Barbounis, T. G., J. B. Theocharis, M. C. Alexiadis, and P. S. Dokopoulos. 2006. “Long-Term Wind Speed and Power Forecasting Using Local Recurrent Neural Network Models.” IEEE Transactions on Energy Conversion 21 (1): 273–284. https://doi.org/10.1109/TEC.2005.847954.
  • Batty, Michael. 2018. “Artificial Intelligence and Smart Cities.” Environment and Planning B: Urban Analytics and City Science 45 (1): 3–6. https://doi.org/10.1177/2399808317751169.
  • Bellamy, Rachel K. E., Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, et al. 2019. “Think Your Artificial Intelligence Software is Fair? Think Again.” IEEE Software 36 (4): 76–80.
  • Bennich, Therese, Nina Weitz, and Henrik Carlsen. 2020. “Deciphering the Scientific Literature on SDG Interactions: A Review and Reading Guide.” The Science of the Total Environment 728: 138405. https://doi.org/10.1016/j.scitotenv.2020.138405.
  • Brooks, Andrew, and Hannah Rich. 2016. “Sustainable Construction and Socio-Technical Transitions in London’s Mega-Projects.” The Geographical Journal 182 (4): 395–405. https://doi.org/10.1111/geoj.12167.
  • Bryson, Joanna J., and Philip P. Kime. 2011. “Just an Artifact: Why Machines Are Perceived as Moral Agents.” In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 6. Menlo Park, CA: AAAI Press/International Joint Conferences on Artificial Intelligence.
  • Buşoniu, Lucian, Robert Babuška, and Bart De Schutter. 2010. “Multi-Agent Reinforcement Learning: An Overview.” In Innovations in Multi-Agent Systems and Applications - 1, 183–221. Studies in Computational Intelligence. Berlin, Heidelberg, Germany: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-14435-6_7.
  • Calmon, Flavio, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. 2017. “Optimized Pre-Processing for Discrimination Prevention.” Advances in Neural Information Processing Systems, 30. Cambridge, MA: The MIT Press.
  • Campolo, Alex, Madelyn Sanfilippo, Meredith Whittaker, and Kate Crawford. 2017. “AI Now 2017 Report.” Accessed 1 March 2022. https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf
  • Castelvecchi, Davide. 2016. “Can We Open the Black Box of AI?” Nature 538 (7623): 20–23.
  • Caton, Simon, and Christian Haas. 2020. “Fairness in Machine Learning: A Survey.” arXiv preprint arXiv:2010.04053.
  • Chehri, Abdellah, and Hussein T. Mouftah. 2019. “Autonomous Vehicles in the Sustainable Cities, the Beginning of a Green Adventure.” Sustainable Cities and Society 51: 101751. https://doi.org/10.1016/j.scs.2019.101751.
  • Cheng, Lefeng, and Tao Yu. 2019. “A New Generation of AI: A Review and Perspective on Machine Learning Technologies Applied to Smart Energy and Electric Power Systems.” International Journal of Energy Research 43 (6): 1928–1973. https://doi.org/10.1002/er.4333.
  • Chu, Casey, Andrey Zhmoginov, and Mark Sandler. 2017. “CycleGAN, a Master of Steganography.” arXiv preprint arXiv:1712.02950.
  • Coldewey, Devin. 2018. “This clever AI hid data from its creators to cheat at its appointed task.” TechCrunch (blog). December 31, 2018. https://social.techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/.
  • Crawford, Kate. 2016. “Artificial Intelligence’s White Guy Problem.” The New York Times, June 25, 2016, sec. Opinion. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
  • Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
  • Cugurullo, Federico. 2020. “Urban Artificial Intelligence: From Automation to Autonomy in the Smart City.” Frontiers in Sustainable Cities 2: 38. https://doi.org/10.3389/frsc.2020.00038.
  • Cugurullo, Federico, Ransford. A. Acheampong, Maxime Gueriau, and Ivana Dusparic. 2021. “The Transition to Autonomous Cars, the Redesign of Cities and the Future of Urban Sustainability.” Urban Geography 42 (6): 833–859. https://doi.org/10.1080/02723638.2020.1746096.
  • Dauvergne, Peter. 2020. AI in the Wild: Sustainability in the Age of Artificial Intelligence. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/12350.001.0001.
  • Dimitriou, Harry T., and Brian G. Field. 2019. “Clarifying Terms, Concepts and Contexts.” Journal of Mega Infrastructure & Sustainable Development 1 (1): 1–7. https://doi.org/10.1080/24724718.2019.1618652.
  • Duvall, Tyler, Eric Hannon, Jared Katseff, Ben Safran, and Tyler Wallace. 2019. “A new look at autonomous-vehicle infrastructure.” McKinsey & Company. https://www.mckinsey.com/industries/travel-logistics-and-infrastructure/our-insights/a-new-look-at-autonomous-vehicle-infrastructure.
  • Fowler, James H. 2005. “Altruistic Punishment and the Origin of Cooperation.” Proceedings of the National Academy of Sciences of the United States of America 102 (19): 7047–7049.
  • Frame, Mariko L., William G. McDowell, and Ellen T. Fitzpatrick. 2022. “Ecological Contradictions of the UN Sustainable Development Goals in Malaysia.” The Journal of Environment & Development 31 (1): 54–87. https://doi.org/10.1177/10704965211060296.
  • Gandy, Matthew. 2005. “Cyborg Urbanization: Complexity and Monstrosity in the Contemporary City.” International Journal of Urban and Regional Research 29 (1): 26–49. https://doi.org/10.1111/j.1468-2427.2005.00568.x.
  • Gibbs, David, and Kirstie O'Neill. 2014. “Rethinking Sociotechnical Transitions and Green Entrepreneurship: The Potential for Transformative Change in the Green Building Sector.” Environment and Planning A: Economy and Space 46 (5): 1088–1107. https://doi.org/10.1068/a46259.
  • Greiman, Virginia A. 2020. “Artificial Intelligence in Megaprojects: The Next Frontier.” 621–628. Sonning Common, UK: Academic Conferences and Publishing International Ltd.
  • Grindsted, Thomas S., Toke Haunstrup Christensen, Malene Freudendal-Pedersen, Freja Friis, and Katrine Hartmann-Petersen. 2022. “The Urban Governance of Autonomous Vehicles – in Love with AVs or Critical Sustainability Risks to Future Mobility Transitions.” Cities 120: 103504. https://doi.org/10.1016/j.cities.2021.103504.
  • Guevara, Leonardo, and Fernando Auat Cheein. 2020. “The Role of 5G Technologies: Challenges in Smart Cities and Intelligent Transportation Systems.” Sustainability 12 (16): 6469. https://doi.org/10.3390/su12166469.
  • Hahn, Heiko, Silja Meyer-Nieberg, and Stefan Pickl. 2009. “Electric Load Forecasting Methods: Tools for Decision Making.” European Journal of Operational Research 199 (3): 902–907. https://doi.org/10.1016/j.ejor.2009.01.062.
  • Hardin, Garrett. 1968. “The Tragedy of the Commons: The Population Problem Has No Technical Solution; It Requires a Fundamental Extension in Morality.” Science 162 (3859): 1243–1248.
  • Hickel, Jason. 2019. “The Contradiction of the Sustainable Development Goals: Growth versus Ecology on a Finite Planet.” Sustainable Development 27 (5): 873–884. https://doi.org/10.1002/sd.1947.
  • Hintze, Arend, and Christoph Adami. 2015. “Punishment in Public Goods Games Leads to Meta-Stable Phase Transitions and Hysteresis.” Physical Biology 12 (4): 046005.
  • Hintze, Arend, Jochen Staudacher, Katja Gelhar, Alexander Pothmann, Juliana Rasch, and Daniel Wildegger. 2020. “Inclusive Groups Can Avoid the Tragedy of the Commons.” Scientific Reports 10 (1): 1–8.
  • Horne, Ralph. 2009. Life Cycle Assessment: Principles, Practice, and Prospects. Collingwood, Vic.: CSIRO Pub.
  • Hu, Junling, and Michael P. Wellman. 2003. “Nash Q-Learning for General-Sum Stochastic Games.” Journal of Machine Learning Research 4: 1039–1069. https://jmlr.csail.mit.edu/papers/volume4/hu03a/hu03a.pdf.
  • Hult, Robert, Gabriel R. Campos, Erik Steinmetz, Lars Hammarstrand, Paolo Falcone, and Henk Wymeersch. 2016. “Coordination of Cooperative Autonomous Vehicles: Toward Safer and More Efficient Road Transportation.” IEEE Signal Processing Magazine 33 (6): 74–84. https://doi.org/10.1109/MSP.2016.2602005.
  • Kodiyan, Akhil Alfons. 2019. “An Overview of Ethical Issues in Using AI Systems in Hiring with a Case Study of Amazon’s AI Based Hiring Tool.” Researchgate Preprint, 1–19.
  • Kumar, Astitva, Muhannad Alaraj, Mohammad Rizwan, and Uma Nangia. 2021. “Novel AI Based Energy Management System for Smart Grid with RES Integration.” IEEE Access 9: 162530–162542. https://doi.org/10.1109/ACCESS.2021.3131502.
  • Larkin, Brian. 2013. “The Politics and Poetics of Infrastructure.” Annual Review of Anthropology 42 (1): 327–343. https://doi.org/10.1146/annurev-anthro-092412-155522.
  • LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–444.
  • Lee, Joyoung, Byungkyu (Brian) Park, Kristin Malakorn, and Jaehyun (Jason) So. 2013. “Sustainability Assessments of Cooperative Vehicle Intersection Control at an Urban Corridor.” Transportation Research Part C: Emerging Technologies 32: 193–206. https://doi.org/10.1016/j.trc.2012.09.004.
  • Lewis, Mike, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. “Deal or No Deal? End-to-End Learning for Negotiation Dialogues.” arXiv preprint arXiv:1706.05125.
  • Macrorie, Rachel, Simon Marvin, and Aidan While. 2021. “Robotics and Automation in the City: A Research Agenda.” Urban Geography 42 (2): 197–217. https://doi.org/10.1080/02723638.2019.1698868.
  • McFarlane, Colin, and Jonathan Rutherford. 2008. “Political Infrastructures: Governing and Experiencing the Fabric of the City.” International Journal of Urban and Regional Research 32 (2): 363–374. https://doi.org/10.1111/j.1468-2427.2008.00792.x.
  • Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys 54 (6): 1–35.
  • Mohamadi, Abdolreza, Zahedeh Heidarizadi, and Hadi Nourollahi. 2015. “Assessing the Desertification Trend Using Neural Network Classification and Object-Oriented Techniques (Case Study: Changouleh Watershed - Ilam Province of Iran).” İstanbul Üniversitesi Orman Fakültesi Dergisi 66 (2): 683–690. https://doi.org/10.17099/jffiu.75819.
  • Monstadt, Jochen. 2009. “Conceptualizing the Political Ecology of Urban Infrastructures: Insights from Technology and Urban Studies.” Environment and Planning A: Economy and Space 41 (8): 1924–1942. https://doi.org/10.1068/a4145.
  • Mora, Luca, Xinyi Wu, and Anastasia Panori. 2020. “Mind the Gap: Developments in Autonomous Driving Research and the Sustainability Challenge.” Journal of Cleaner Production 275: 124087. https://doi.org/10.1016/j.jclepro.2020.124087.
  • Nagenborg, Michael. 2020. “Urban Robotics and Responsible Urban Innovation.” Ethics and Information Technology 22 (4): 345–355. https://doi.org/10.1007/s10676-018-9446-8.
  • Nishant, Rohit, Mike Kennedy, and Jacqueline Corbett. 2020. “Artificial Intelligence for Sustainability: Challenges, Opportunities, and a Research Agenda.” International Journal of Information Management 53: 102104. https://doi.org/10.1016/j.ijinfomgt.2020.102104.
  • Nowotny, Helga. 2021. In AI We Trust: Power, Illusion and Control of Predictive Algorithms. Hoboken, NJ: John Wiley & Sons.
  • Perc, Matjaž, and Attila Szolnoki. 2012. “Self-Organization of Punishment in Structured Populations.” New Journal of Physics 14 (4): 043013.
  • Raso, Filippo, Hannah Hilligoss, Vivek Krishnamurthy, Christopher Bavitz, and Levin Kim. 2018. Artificial Intelligence & Human Rights: Opportunities & Risks. Cambridge, MA: Berkman Klein Center for Internet & Society at Harvard University.
  • Russell, Stuart J., and Peter Norvig. 1995. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.
  • Santamaría, Gerardo del Cerro. 2020. “Complexity and Transdisciplinarity: The Case of Iconic Urban Megaprojects.” Transdisciplinary Journal of Engineering & Science 11: 17–32. https://doi.org/10.22545/2020/0131.
  • Selim, Maher, Ryan Zhou, Wenying Feng, and Peter Quinsey. 2021. “Estimating Energy Forecasting Uncertainty for Reliable AI Autonomous Smart Grid Design.” Energies 14 (1): 247. https://doi.org/10.3390/en14010247.
  • Shi, Qian, Jian Zuo, and George Zillante. 2012. “Exploring the Management of Sustainable Construction at the Programme Level: A Chinese Case Study.” Construction Management and Economics 30 (6): 425–440. https://doi.org/10.1080/01446193.2012.683200.
  • Sigmund, Karl, Hannelore De Silva, Arne Traulsen, and Christoph Hauert. 2010. “Social Learning Promotes Institutions for Governing the Commons.” Nature 466 (7308): 861–863.
  • Silberg, Jake, and James Manyika. 2019. Notes from the AI Frontier: Tackling Bias in AI (and in Humans), 1–6. McKinsey Global Institute.
  • Simone, AbdouMaliq. 2004. “People as Infrastructure: Intersecting Fragments in Johannesburg.” Public Culture 16 (3): 407–429.
  • Smith, Monica L. 2016. “Urban Infrastructure as Materialized Consensus.” World Archaeology 48 (1): 164–178. https://doi.org/10.1080/00438243.2015.1124804.
  • Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” American Behavioral Scientist 43 (3): 377–391.
  • Star, Susan Leigh, and Karen Ruhleder. 1996. “Steps toward an Ecology of Infrastructure: Design and Access for Large Information Spaces.” Information Systems Research 7 (1): 111–134.
  • Sturup, Sophie, and Nicholas Low. 2019. “Sustainable Development and Mega Infrastructure: An Overview of the Issues.” Journal of Mega Infrastructure & Sustainable Development 1 (1): 8–26. https://doi.org/10.1080/24724718.2019.1591744.
  • Tan, Ming. 1993. “Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents.” In Proceedings of the Tenth International Conference on Machine Learning, 330–337.
  • United Nations. 2012. “Intelligent Transport Systems (ITS) for Sustainable Mobility.” United Nations Economic Commission for Europe. New York, NY: United Nations.
  • U.S. Department of Energy. 2019. “Sensor Technologies and Data Analytics.” SmartGrid.gov. December 16, 2019. https://www.smartgrid.gov/sensor_technologies_and_data_analytics.
  • Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals.” Nature Communications 11 (1): 233. https://doi.org/10.1038/s41467-019-14108-y.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. 2017. “Transparent, Explainable, and Accountable AI for Robotics.” Science Robotics 2 (6): eaan6080.
  • Watkins, Christopher, and Peter Dayan. 1992. “Q-Learning.” Machine Learning 8 (3–4): 279–292. https://doi.org/10.1007/bf00992698.
  • Wolpert, David H., and Kagan Tumer. 2002. “Optimal Payoff Functions for Members of Collectives.” In Modeling Complexity in Economic and Social Systems, 355–369. Singapore: World Scientific.
  • Yamagishi, Toshio. 1986. “The Provision of a Sanctioning System as a Public Good.” Journal of Personality and Social Psychology 51 (1): 110–116.
  • Yigitcanlar, Tan, and Federico Cugurullo. 2020. “The Sustainability of Artificial Intelligence: An Urbanistic Viewpoint from the Lens of Smart and Sustainable Cities.” Sustainability 12 (20): 8548. https://doi.org/10.3390/su12208548.
  • Zhou, Kaile, Chao Fu, and Shanlin Yang. 2016. “Big Data Driven Smart Energy Management: From Big Data to Big Insights.” Renewable and Sustainable Energy Reviews 56: 215–225. https://doi.org/10.1016/j.rser.2015.11.050.