Critical Issues in Development Studies Series

AI for development: implications for theory and practice

ABSTRACT

The arrival of AI technology promises to add a fascinating new chapter to development theory and practice. Current studies have made good progress in examining the potential contributions of AI to achieving sustainable development goals and addressing challenges in specific development areas (poverty, global health, human rights, environment etc.). However, four lessons stand out when considering the impact of future research on the AI/development nexus: learning how to access and combine data from multiple sources, how to master AI techniques to extract analytical insight, how to build socially impactful AI solutions, and how to apply AI to development in an ethically responsible fashion. This paper makes the argument that AI could radically transform development theory and practice by prompting a rethinking of how data and algorithms come together to generate insights into the way in which development challenges are identified, studied, and managed.

“AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms, but the fuel is the huge amounts of data we can feed to these algorithms”

— Andrew Ng (VP & Chief Scientist of Baidu)

Introduction

Andrew Ng’s penetrating description of the concept of Artificial Intelligence (AI) highlights two critical ingredients that emerging technologies rely on to disrupt the field of policy and political practice: data and algorithms. Data, the ‘bloodstream’ of the digital revolution, has become the most valuable commodity of our age, the ‘new oil’ to fuel the next stage of economic development (Szczepański, 2020; Nolin, 2019). These expectations are not unfounded. The global data sphere is expected to grow from 33 zettabytes (ZB), or 33 trillion gigabytes, in 2018 to 175 ZB by 2025. It has also been estimated that by 2025 six billion people, or 75% of the world’s population, will interact with data every day, and each connected person will have at least one data interaction every 18 seconds (Reinsel, Gantz, & Rydning, 2017). While data constitutes the digital disruption’s ‘raw material,’ it is the companion process of ‘datafication’ (Mayer-Schönberger & Cukier, 2013) that is responsible for value creation by tracking, aggregating and analysing the underlying information and data points that the ‘raw material’ offers. Through datafication, the informational aspect of a resource is ‘liquefied’ and separated from its use in the physical world, subjected to algorithmic treatment and machine-learning calibration by which relevant patterns, trends and relationships are identified, and then ‘re-bundled’ and mobilised via data visualisation methods to generate new analytical insights and representations of the world (Lycett, 2013).

If data and algorithms are now the main drivers of economic growth, then what does this mean for the field of development studies, and how is AI, the technology that embraces these two components most closely, set to influence the research agenda on development theory and practice? The question is certainly ambitious for a short review piece, but pregnant with theoretical and practical implications of timely importance. It will thus be argued that AI could radically transform development theory and practice by prompting a rethinking of how data and algorithms come together to generate insights into the way in which development challenges are identified, studied, and managed. For this to happen, one should add, we need a good understanding of AI’s internal mechanisms and the implications these mechanisms may have for addressing development issues from an analytical and normative perspective. The argument will be developed in several steps. The first section explains the concept of AI, traces its theoretical evolution, and describes how it works. The following three sections review current studies examining AI’s potential contributions to development, discuss four lessons inspired by them, and unveil a set of challenges that may emerge in terms of practice. The next section then advances two research strategies, paradigmatic and critical, that could generate novel conceptual and normative understandings of the AI/development nexus. The study concludes with a summary of the main points and a few reflections on both what to expect and how to cope with the next generation of AI innovations.

Defining artificial intelligence

The term ‘artificial intelligence’ was first coined by the American computer scientist John McCarthy in 1956, who defined AI as ‘the science and engineering of making intelligent machines, especially intelligent computer programs’ (McCarthy, 2011). In basic terms, AI refers to the activity by which computers process large volumes of data using highly sophisticated algorithms to simulate human reasoning and/or behaviour. Russell and Norvig (2010, p. 2) relied on these two dimensions (reasoning and behaviour) to group AI definitions according to the emphasis they place on thinking vs. acting humanly, but the two terms defy easy categorisation. According to the test proposed by Alan Turing in 1950, to act humanly, a machine would have to meet two conditions: (i) react appropriately to the variance in human dialogue and (ii) display a human-like personality and intentions. No machine has passed the Turing test thus far, and some researchers believe that for mathematical reasons it would actually be impossible to program a machine which can master the complex and evolving pattern of variance which human dialogues contain (Landgrebe & Smith, 2019). Thinking humanly, on the other hand, would imply that the machine would be able to think like a person; that is, it would be able to store, process, and organise information so that it can solve problems and make inferences about new situations. Drawing on theories of cognition, the field of cognitive computing has taken the lead in combining symbolic and statistical methods to explore how AI can reason and learn with vast amounts of data (Forbus, 2010).

Another approach to defining AI is by zooming in on the two constitutive components of the concept. Nils J. Nilsson, for instance, defines artificial intelligence as the ‘activity devoted to making machines intelligent’ while ‘intelligence is that quality that enables an entity to function appropriately and with foresight in its environment’ (Nilsson, 2010, p. 13). Echoing Nilsson’s view, the European Commission’s High-Level Group on AI has offered a more comprehensive understanding of the concept, which seeks to clarify what functioning ‘appropriately’ and with ‘foresight’ could mean for AI:

Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. (High-Level Expert Group on Artificial Intelligence, 2019, p. 6)

The development pathways of AI have been informed by two broad approaches. The first is symbolic artificial intelligence, also known as Good Old-Fashioned AI (GOFAI), which refers to the creation of expert systems and production rules that allow a machine to deduce behavioural pathways. Knowing exactly what the rules are and how the algorithm puts them together in decision trees makes it possible to test and improve the effectiveness of the system in incremental steps. At the same time, therein also lies the main weakness of GOFAI: its inability to adapt to new circumstances without human intervention, given that it must rigorously follow a memorised set of rules. The second, connectionist approach to artificial intelligence involves providing raw environmental data to the machine and leaving it to recognize patterns and create its own complex, high-dimensional representations of the raw sensory data provided to it (D’Souza, 2019). This can happen via machine learning (ML), by which the machine learns on its own using statistical models without being explicitly programmed, or via deep learning (DL), a more sophisticated form of ML that uses a layered structure of algorithms replicating an artificial neural network to process and classify information.
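The contrast between the two approaches can be sketched in a few lines of Python. The crop-advice scenario, rules, thresholds, and training examples below are invented purely for illustration:

```python
import math

# Symbolic (GOFAI) style: behaviour follows explicit, hand-written
# production rules, so every decision path can be inspected and tested.
def expert_system_advice(rainfall_mm, soil_ph):
    if rainfall_mm < 300:
        return "irrigate"
    if soil_ph < 5.5:
        return "apply lime"
    return "no action"

# Connectionist/statistical style (here a 1-nearest-neighbour learner):
# behaviour is induced from labelled examples rather than programmed.
TRAIN = [((250, 6.0), "irrigate"),
         ((400, 5.0), "apply lime"),
         ((500, 6.5), "no action")]

def learned_advice(rainfall_mm, soil_ph):
    # Return the label of the closest training example.
    dist = lambda row: math.hypot(row[0][0] - rainfall_mm,
                                  row[0][1] - soil_ph)
    return min(TRAIN, key=dist)[1]
```

The rule-based function can be audited line by line but never adapts on its own, while the learner’s behaviour changes whenever its examples do — a miniature version of the adaptability/interpretability trade-off described above.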

Facebook’s news feed algorithm is an example of a machine learning approach seeking to promote ‘meaningful social interaction’ by assigning greater weights to parameters that make a post personal, engaging and worthy of conversation (Boyd, 2019). Deep learning machines, on the other hand, go a step further and make it possible, among other things, to automatically translate between languages or to accurately recognize and identify people and objects in images (Marr, 2018). That being said, the lack of interpretability of how algorithms reach decisions, the poor generalisation of results when using data outside the distribution on which the neural network has been trained, and data inefficiency have exposed connectionist AI systems to legitimate criticism (Garnelo & Shanahan, 2019). Hybrid approaches have thus emerged as a possible middle-ground solution, combining minimal training data and no explicit programming with the ability to facilitate easy generalisation by deriving symbolic representations from supervised learning situations (Gan, 2019).

In the same way that cars differ in quality and performance, AI programs also vary significantly along a broad spectrum ranging from rudimentary to super-intelligent forms. All existing AI applications to date, regardless of their degree of technical sophistication, fall into the category of ‘narrow’ or ‘weak’ AI (NAI) as they are programmed to perform a single task. By contrast, artificial general intelligence (AGI) refers to machines that exhibit human abilities ranging from problem-solving and creativity to taking decisions under conditions of uncertainty and thinking abstractly. The next level, super AI, would require some form of self-awareness or consciousness in order to be able to fully operate. Super AI may thus reach a point at which it will be able not only to mimic the human brain but even to surpass the cognitive performance of humans in all domains of interest. This is what Nick Bostrom calls superintelligence: an AI system that can do all that a human intellect can do, but faster (‘speed superintelligence’), or that can aggregate a large number of smaller intelligences (‘collective superintelligence’), or that is at least as fast as a human mind but vastly qualitatively smarter (‘quality superintelligence’) (Bostrom, 2014, pp. 63–69).

General AI, however, let alone superintelligence, remains a merely theoretical construct at this time, as all applications developed thus far, including those that have attracted media attention such as Amazon’s Alexa or Tesla’s self-driving prototypes, fall into the category of narrow AI. This may change in the near future, especially if quantum computing technology succeeds in making significant progress (Taylor, 2020), but otherwise AI applications are expected to have a ‘narrow’ profile in the foreseeable future.

AI and international development

Riding the wave of growing academic interest in AI conceptual thinking, studies examining linkages between AI-based technologies and policies of international development have pursued two broad avenues of investigation. The first has adopted a bird’s-eye view of the field of development and has sought to identify areas in which AI could have a strongly positive vs. negative impact on the promotion of the Sustainable Development Goals (SDGs). Discussing how AI can either enable or inhibit the delivery of the goals and targets recognized in the 2030 Agenda for Sustainable Development, Vinuesa et al. (2020) have found evidence that ‘AI may act as an enabler on 134 targets (79%) across all SDGs, generally through technological improvements’ (p. 2). For example, by ‘supporting the provision of food, health, water, and energy services to the population,’ AI may act as an enabler for all the targets on no poverty (SDG 1), quality education (SDG 4), clean water and sanitation (SDG 6), affordable and clean energy (SDG 7), and sustainable cities (SDG 11). At the same time, ‘59 targets (35% … across all SDGs) may experience a negative impact from the development of AI’ (p. 2). For example, ‘ … efforts to achieve SDG 13 on climate action could be undermined by the high-energy needs for AI applications, especially if non carbon-neutral energy sources are used’ (p. 4). Furthermore, if AI technology is used in regions where ethical scrutiny, transparency, and democratic oversight are lacking, AI might enable nationalism, hatred towards minorities, and biased election outcomes, thus damaging social cohesion, democratic principles, or even human rights. In the same vein, Goralski and Tan (2020, p. 7) have called attention to the ‘“winner-takes-all” competitive dynamics in AI adoption by corporations and countries,’ which in turn ‘may have profound implications for global sustainable development, particularly in relation to SDG 10 (Reduced Inequalities) and SDG 12 (Responsible Consumption and Production).’ Truby (2020) has also warned that without transparency and trust in AI decision-making, as well as close adherence to ethical standards, AI could actually be detrimental to the achievement of the SDGs.

The second research direction has a stricter empirical focus and concerns itself with the ramifications of AI applications in specific development areas (poverty, global health, human rights, environment etc.). McDuie‐Ra and Gulson insist, for instance, that close attention needs to be paid to the ‘backroads’ by which AI travels from the sites of incubation to the frontlines of uneven development. Taking the case of the Indian government’s AI strategy (#AIforAll), they argue that ‘as AI develops in concentrated geographies around tech‐hubs, success will likely be measured and celebrated in these sites, while the casualties of labour force transformation will be along the backroads,’ far from view (McDuie‐Ra & Gulson, 2020, p. 631). Cossy-Gantner, Germann, Schwalbe, and Wahl (2018) point out that AI expert systems, machine learning, automated planning and scheduling, cloud computing and signal processing hold ‘tremendous promise for transforming the provision of healthcare services in resource-poor settings’ (p. 6). At the same time, they argue that AI implementation cannot be divorced from human-centred design and must approach all relevant ‘legal and ethical questions through a human rights lens,’ which includes privacy, confidentiality, data security, ownership and informed consent. An expanding body of research also looks at the role of AI in protecting the environment, and especially in tackling the climate crisis. AI solutions can be designed to provide enhanced warnings of approaching weather features, including extreme events (Barnes, Hurrell, Ebert-Uphoff, Anderson, & Anderson, 2019; Huntingford et al., 2019), be deployed in key areas that can have substantial impacts on decarbonizing societies, such as electricity (Stein, 2020), or be integrated into the study of climate change adaptation policy (Biesbroek, Badloe, & Athanasiadis, 2020). That being said, Brevini (2020) argues, we should also be mindful that AI development can exacerbate problems of waste and pollution through the use of scarce resources in AI production, consumption, and disposal.

Lessons from the field

Successful application of AI to development depends on mastering a set of ideas, techniques, and skills, which can be derived from four critical lessons. The first is learning how to access and combine data from multiple sources (data integration). Blumenstock has shown that ‘pockets of extreme poverty’ can be determined ‘using deep-learning algorithms trained to process troves of satellite imagery,’ often in combination with mobile-phone data. ‘Research shows,’ he explains, ‘that machine-learning algorithms can predict poverty characteristics by identifying patterns in phone logs. For example, wealthier people tend to make longer phone calls, have more contacts, and carry more balance in their mobile money accounts’ (Blumenstock, 2020). Other studies have used ML algorithms to extract economic wealth indicators from high-resolution daytime satellite images and combine them with night-time maps. The findings have provided ‘accurate estimates of household consumption and assets, both of which are hard to measure in poorer countries’ (Jean et al., 2016). Furthermore, the methodology enabled the authors to accurately reconstruct survey-based indicators of regional poverty, improving on results from simpler models that relied solely on nightlights or mobile-phone data (Blumenstock, 2016; Jean et al., 2016). A recent study has also combined COVID-19 data collected from the European Centre for Disease Prevention and Control with 35 environmental, socio-economic, and demographic indicators compiled by the World Bank, as well as with human mobility data from Google. The resulting data set was then examined via two advanced supervised machine learning algorithms. The study found that, among all the explanatory variables, air pollution, migration, the economy, and demographic factors were the most significant controlling factors (Chakraborti et al., 2021).
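As a toy illustration of the data-integration step described above, the following sketch joins hypothetical satellite-derived night-light intensities with phone-usage statistics on a shared region key, producing one feature row per region ready for model training. All region names and figures are invented:

```python
# Two hypothetical data sources keyed by region identifier.
night_lights = {"region_a": 0.12, "region_b": 0.85, "region_c": 0.40}
phone_stats = {"region_a": {"avg_call_s": 45, "contacts": 18},
               "region_b": {"avg_call_s": 140, "contacts": 92},
               "region_c": {"avg_call_s": 80, "contacts": 41}}

def build_features(lights, phones):
    """Inner-join the two sources into one feature row per region."""
    rows = {}
    for region in lights.keys() & phones.keys():
        rows[region] = {"lights": lights[region], **phones[region]}
    return rows

features = build_features(night_lights, phone_stats)
# Each row now mixes satellite and phone indicators, e.g.
# features["region_b"] -> {"lights": 0.85, "avg_call_s": 140, "contacts": 92}
```

In practice the join key would be a geographic cell or survey cluster rather than a named region, and the merged rows would feed a regression model of consumption or asset wealth.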

The second important lesson is how to select and use relevant AI techniques to extract analytical insight that can then be used to address development issues (analytical modelling). Regression, classification, and clustering are the most commonly used machine learning techniques. They are applied, respectively, to predict generalisable relationships between two or more variables, to group data using pre-defined class labels, and to categorise data according to shared characteristics. Research examining the disbursement of aid for stunting alleviation in Nigeria has combined, for instance, sophisticated ML and Bayesian geostatistical techniques with geospatial analysis, national household surveys and economic data to classify and develop detailed maps showing where aid is needed most (Bosco et al., 2017). Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are, on the other hand, two popular deep learning techniques, used to identify, extract, and classify visual features in images in the first case, and to make predictions based on sequential or time series data in the second. A CNN has been trained, for instance, on high-resolution satellite images to successfully predict the degree of deprivation of slum settlements in Bangalore, India (Ajami, Kuffer, Persello, & Pfeffer, 2019) or the variation in local-level economic outcomes in several African countries (Jean et al., 2016). RNNs have proved successful in forecasting COVID-19 outbreaks in Asia-Pacific countries, particularly Pakistan, Afghanistan, India, and Bangladesh, achieving 87–94% accuracy depending on the country analysed (Rauf et al., 2021).
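The three machine learning techniques named above can be sketched in a few lines of scikit-learn; the data below is tiny and synthetic, chosen only so each technique’s role is visible:

```python
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# One numeric feature per observation, with two obvious groups.
X = [[1], [2], [3], [10], [11], [12]]

# Regression: predict a continuous outcome from a covariate (here y = 2x).
reg = LinearRegression().fit(X, [2, 4, 6, 20, 22, 24])

# Classification: assign pre-defined class labels to new observations.
clf = DecisionTreeClassifier().fit(X, ["low", "low", "low",
                                       "high", "high", "high"])

# Clustering: group observations by similarity, with no labels at all.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
```

After fitting, `reg.predict([[4]])` returns a value near 8, `clf.predict([[11]])` returns `"high"`, and `km.labels_` places the first three observations in one cluster and the last three in another; real development applications differ only in the scale and provenance of `X`.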

The third lesson is about the importance of pursuing an interdisciplinary approach to building socially impactful AI solutions in development areas. The AI for Social Good (AI4SG) movement has taken the lead in establishing interdisciplinary partnerships with tech companies, NGOs, and academia for the purpose of developing AI applications directed towards the SDGs. In more formal terms, AI4SG refers:

to the design, development and deployment of AI systems in ways that help to (i) prevent, mitigate and/or resolve problems adversely affecting human and natural life, (ii) enable socially preferable or environmentally sustainable developments, while (iii) not introducing new forms of harm and/or amplifying existing disparities and inequities (Cowls, Tsamados, Taddeo, & Floridi, 2021, p. 112).

AI4SG guiding principles have already been incorporated in several projects. For example, the ‘troll patrol’ initiative developed by Amnesty International in collaboration with Element AI has applied computational statistics and natural language processing methods to quantify abuse against women on Twitter and potentially to develop a deep learning model for the automatic detection of abusive tweets. The Deep Learning Indaba project brought together Google’s DeepMind, IBM-Zindi, Data Science Africa, Black-in-AI and Women in Machine Learning to develop capacity and machine learning expertise in Africa. It has developed AI applications for tracking the migration of potentially endangered animal species and has applied reinforcement learning techniques to reduce the likelihood of transmission of malaria infections (Tomašev et al., 2020, pp. 4–5).

The fourth lesson involves how to apply AI to development in an ethically responsible fashion (ethical AI). Good progress has been made in designing recommendations to support the adoption of ethically accountable AI applications, such as the five core principles of beneficence, nonmaleficence, autonomy, justice and explicability (Floridi & Cowls, 2019). For others, such as Hagendorff, the main challenge lies with low levels of compliance with the action-restricting principles set out in the various ethical guidelines. Calls have thus been made for a transition to ‘a situation-sensitive ethical approach based on virtues and personality dispositions, knowledge expansions, responsible autonomy and freedom of action’ (Hagendorff, 2020, p. 114). There are growing concerns, for instance, that deontologically oriented rules such as the principle of explicability may not work well in an African context unless proper attention is paid to cultural nuances and contextual sensitivity, so as to ensure that AI values are well aligned with African interests and needs (Carman & Rosman, 2020). Others go a step further and demand the decolonisation of AI by seeking to ‘undo the logics and politics of race and coloniality that continue to operate in technologies and imaginaries associated with AI’ (Adams, 2021, p. 190). From a development perspective, questions about who is developing AI and where, and how culture and power embed themselves in AI systems, may thus need to become part of a dialogue between the ‘AI metropoles’ and peripheries as a means of developing an ‘intercultural ethics’ rooted in pluralism and local designs (Mohamed, Png, & Isaac, 2020, p. 675).

Risks and challenges

While limited in scope for space reasons, the overview of the main research directions and lessons to be drawn from the AI/development nexus allows us to identify several conceptual and practical challenges that require further elaboration. First, there is the question of the feasibility of applying AI to development issues, or more specifically of its form (symbolic, connectionist, hybrid) and policy area (health care, education, poverty reduction etc.). As mentioned in the previous section, AI covers a broad spectrum of techniques, some of them more conventional and easier to implement (e.g. expert systems), while others are more data- and resource-intensive (machine learning, deep learning). A key challenge for scholars and policy makers is therefore to learn to tailor AI to the development needs, policies, and priorities of the relevant agencies and institutions so that the complexity of the task matches the complexity of the AI techniques. The success of such an endeavour will depend on the answers to two questions: first, how much data is necessary and available to build AI applications, and second, what kind of algorithms are most suitable for the task. The first question calls attention to the need for development agencies to be able to access or build efficient data infrastructures capable of facilitating the access, distribution and use of the large volumes of data necessary for designing AI applications. Large volumes of data certainly increase the reliability of AI models, but one should be mindful that poor data quality (e.g. incomplete, biased, improperly labelled etc.) is enemy number one to AI. It is estimated that 80% of data scientists’ time involves cleaning and preparing the data before feeding it to predictive ML or DL models (Redman, 2018). The answer to the second question requires an informed review and appraisal of the available AI techniques. Existing AI algorithms such as the ones described above (regression, classification, clustering or neural networks) are sufficiently robust for solving problems involving speech recognition and transcription, image classification or pattern recognition. They are not yet sufficiently developed to handle more complex issues that move beyond correlation and require a higher level of causal thinking (e.g. did the poverty reduction policy in a country fail because of factor A or B?).
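The data-quality concern can be made concrete with a small audit sketch of the kind that typically precedes model training, flagging the incomplete, improperly labelled, and implausible records mentioned above. The field names, label set, and plausibility range are invented for illustration:

```python
VALID_LABELS = {"poor", "non_poor"}

def audit(records):
    """Return the clean subset of records plus a count of issues found."""
    clean, issues = [], 0
    for r in records:
        if r.get("consumption") is None:          # incomplete record
            issues += 1
        elif r.get("label") not in VALID_LABELS:  # improperly labelled
            issues += 1
        elif not (0 <= r["consumption"] < 1e6):   # implausible value
            issues += 1
        else:
            clean.append(r)
    return clean, issues

rows = [{"consumption": 120.0, "label": "poor"},
        {"consumption": None, "label": "poor"},
        {"consumption": 300.0, "label": "unknown"}]
clean, issues = audit(rows)  # one clean record, two issues flagged
```

Even this toy check discards two thirds of the sample, which is a reminder of why data preparation dominates the 80% figure cited above.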

As is sometimes the case with technological leaps, an important risk that may impede AI adoption for development is the performance gap that may arise between perceptions of what the technology promises to accomplish and what it is capable of delivering. As mentioned above, the AI applications developed in the next decade will most likely have a ‘narrow’ profile (i.e. they will be able to perform one single task, some of them quite well), so any expectation that AI will be able to conduct complex operations by itself (e.g. to design, implement and monitor development programs) is clearly misplaced. Such misperceptions are bound to happen as technology is constantly evolving, but some of the risks could be anticipated and mitigated. Aside from reliability concerns (technical breakdowns or cyber vulnerabilities) and the lack of transparency and reproducibility of AI models (Haibe-Kains et al., 2020), which can both undermine trust in the service provided and increase AI avoidance, potential disruptions in the physical-digital-physical loop may also fuel misapprehensions. The physical-to-digital part of the integration sequence involves the creation of a digital record based on information captured from the physical world. The second step, digital-to-digital, refers to the use of algorithms for recognizing meaningful signals and patterns in the digital record. The last component, digital-to-physical, is about the use of digital insights to drive action back in the physical world via real-time and informed decision-making. AI has made significant progress with respect to the first two steps, but not much with the third, especially in the development field. There are signs, however, that the pandemic has served as an accelerator, allowing AI scholars to develop easy-to-use interfaces to support computational infectious disease epidemiology and near real-time response at the level of policy making (Chang et al., 2021). The performance value of an AI system is therefore defined by its ability to collect relevant information effectively, process it insightfully, and feed it back into decision-making. An AI assistant designed, for instance, to map the spread of infectious disease outbreaks will face serious credibility issues if the data it uses is incomplete or inaccurate, if the algorithms it applies for pattern identification are not transparent or reproducible, or if the prescriptions it offers are not tailored to local circumstances.
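The three-step loop can be reduced to a toy pipeline of one function per step; the disease-surveillance framing, case threshold, and action labels are invented for illustration:

```python
def capture(case_count):
    """Physical -> digital: record an observation from the field."""
    return {"cases": case_count}

def analyse(record):
    """Digital -> digital: detect a signal in the digital record."""
    return "rising" if record["cases"] > 100 else "stable"

def act(signal):
    """Digital -> physical: turn the insight into a decision."""
    return "issue alert" if signal == "rising" else "monitor"

decision = act(analyse(capture(150)))  # the full loop, end to end
```

The point of the sketch is that the loop’s value collapses if any one stage fails: a wrong `capture` (bad data), an opaque `analyse` (unreproducible model), or an unusable `act` (prescriptions not tailored to local circumstances) each break the chain.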

It is also important to remember that AI is the by-product of a systemic process of technological transformation, commonly referred to as the fourth industrial revolution, which is broadly defined by the ‘fusion of technologies’ and the blurring of the lines between the physical, digital, and biological spheres (Schwab, 2017). Therefore, AI’s contributions to international development can be only partially uncovered if they are discussed in isolation from the wider patterns and trends of technological innovation. Together with blockchain (BC, a shared and immutable ledger used to record transactions, track assets, and build trust in supply chain networks around the world), 3D printing (3DP), satellite remote sensing (SRS), and extended reality, AI forms an amalgam of emerging technologies that are bound to disrupt and re-engineer how business is conducted in multiple policy sectors, including international development. AI leveraging via integration with other emerging technologies therefore deserves close attention. Decentralised AI, which is essentially a combination of AI and blockchain, is an example of how integration can enhance the reliability of AI applications. Decentralised AI enables decision-makers to process and perform analytics on trusted, digitally signed, and secure shared data that has been transacted and stored on the blockchain in a decentralised fashion (Salah, Rehman, Nizamuddin, & Al-Fuqaha, 2019). Humanitarian supply chains could also benefit from better integration of AI, BC and 3DP to overcome current challenges in disaster management: AI can augment decision-making, BC can enhance information management, and 3DP can allow on-site production and mitigate inevitable congestion in the supply chain (Rodríguez-Espíndola, Chowdhury, Beltagui, & Albores, 2020). As mentioned above, studies have also looked, with good promise, into the feasibility of integrating AI and remote sensing technologies in order to map poverty (Jean et al., 2016; Rigol et al., 2016; Steele et al., 2017), but also to develop decision support systems that can facilitate the adoption of sustainable management strategies (Kouziokas & Perakis, 2017). In sum, the question is not only how AI as a standalone technology can contribute to development, but also how it can be combined with other technologies to deliver tailored solutions to the needs, tasks and objectives of development agencies and actors.

Research strategies

While research exploring the AI/development nexus remains embryonic, two broad strategies may prove effective for advancing the agenda in this area: paradigmatic and critical. The first takes stock of existing research on development issues and seeks to uncover the specific contributions that AI makes to development theory and practice. The literature reviewed above on sustainable development goals and development policies partially falls in this category. These studies focus on specific development themes and then trace the conceptual ramifications that result from the application of an AI lens to the examination of poverty, global health, human rights, environment etc. In so doing, a paradigmatic research strategy can achieve three different sets of results. It can observe how traditional concepts and theories apply and capture the scale and intensity of the disruption that AI may induce in development areas. It can also draw on research in other studies with a more digital focus and reflect on the relevance of competing conceptual models for studying the AI/development nexus. Finally, it can steer AI and digital research towards the centre of development scholarship and revisit the epistemological assumptions involved in the production of knowledge that sustains the current paradigm.

Consider, for instance, the discussion regarding the emerging paradigm of global development, which seeks to move beyond the ‘blurring’ North–South divide and reassess development in the light of globalisation and the associated transformations of the nation-state (Gore, 2015; Scholte & Söderbaum, 2017). The global development paradigm encompasses collective trade-offs of global public goods, and shared (sustainable) development challenges that countries and regions anywhere in the world face (Horner, 2020). AI research can contribute to this debate from three distinct perspectives, as suggested above. In a first step, it may call attention to the new inequitable gap between the technology haves and have-nots that the growing AI divide has precipitated (Yu, 2020), and to the broader socio-economic implications this divide may have for societies around the world, such as income distribution or unemployment (Korinek & Stiglitz, 2017). Second, it may be worth paying attention to fast-growing areas of AI research, such as urban studies, to understand the lessons that city development and sustainability alignment may offer more broadly for addressing challenges of global development through the coupling of AI technology with systems of democratic governance and participatory planning (Yigitcanlar & Cugurullo, 2020). A third approach may be to look at how research in other disciplines, such as geography, has dealt with and theorised the ‘digital turn.’ It would further consider how attempts to introduce a separate field of ‘digital geography’ (Ash, Kitchin, & Leszczynski, 2018) are relevant for advancing ‘digital global development,’ which features novel conceptual, methodological and empirical questions in the context of the technological and AI revolution.

The critical research strategy takes a different approach and investigates the normative implications entailed by the AI disruption of the field of development. Conceptually speaking, digital disruption is about both destruction and creation, but public suspicion has grown in recent years that the creative part is much too slow or may simply fail to materialise once the destructive phase is completed. The original motto of Facebook’s CEO, Mark Zuckerberg, ‘move fast and break things,’ was initially hailed as the emblematic ethos of the digital age, but its appeal has faded in recent years following accusations that tech companies have failed to anticipate and manage the worst-case scenarios that their disruption strategies have activated (Taneja, 2019). From a critical perspective, one may argue that the concept of AI disruption has become prone to disruption itself, as the object of digital disruption (what is being disrupted?), its mode (how is it being disrupted?), and its effect (what are the consequences of disruption?) have largely remained unquestioned thus far. At the same time, the conceptual position from which to develop such a critique requires further clarification. Theories of emancipatory critique insist on forging diagnostic concepts that enable the critique to render visible the ‘captivity’ (presumably ideological) that emancipation aims to dissolve (Vogelmann, 2017, p. 107). Progress, on the other hand, has a more forward-looking profile, underpinned by a commitment to realising a normative ideal of economic and social development that has not yet been attained (Pierosara, 2020, p. 3). The two concepts complement each other well, but they offer distinct scales of critical reflection on AI’s contributions to development studies.

Techno-utopianism (the belief in the power of science and technology to generate affluence, order, justice, and freedom) and its nemesis, techno-dystopianism (the claim that technology deprives people of freedom and dignity, bringing destruction to humanity [Dai & Hao, 2018]), could both be viewed as symmetrical targets of potential emancipatory critique. For some authors, AI techno-utopianism currently has the upper hand, as it has already imbued national discourses with nation-bound myths, utopias, and ‘technological solutionism.’ The mythical, heroic and redeeming qualities that some governments uncritically bestow on AI technologies ostensibly reveal national dreams of power aggrandisement, while masking the potential ‘dark side’ of AI in supporting forces of de-democratisation through the rise of populism, authoritarianism, and tech oligarchies (Ossewaarde & Gulenc, 2020). AI techno-dystopianism, on the other hand, may prove hostile to progressive efforts seeking to harness the power of technology to promote and support social and economic development. Contemporary distrust of technology goes as far back as Marcuse’s critique of ‘technocracy’ and Heidegger’s fatalistic view of technology as a colonising force of the lifeworld. As suggested above, these concerns have recently resurfaced in the context of the AI for Social Good movement and of the ongoing debate regarding the ‘ethics of AI ethics,’ leading to calls for the adoption of a more pluralistic and intercultural approach to AI theory and practice. That being said, the historical ‘malleability of technology’ should give us confidence in the possibility that technology could come to embody more democratic values (Thomson, 2000, pp. 205–11). The fact that AI is prone to incorporate instrumentally oriented social values does not necessarily imply succumbing to fatalism. It should rather compel us to recognise the need to take into account the social context in which technology is developed and to pursue a more humane vision of technological progress (Llinares, 2020, p. 10).

Conclusion

The arrival of AI technology promises to add a fascinating new chapter to development theory and practice. To summarise, while current studies have made good progress in examining the potential contributions of AI to achieving sustainable development goals and addressing challenges in specific development areas, I have suggested that development scholars have yet to start the discussion about how AI integration may restructure, theoretically and normatively, their field of study. Two research strategies may prove productive for further unpacking the AI/development nexus. The first may seek to review how well the concepts and theories underpinning the existing development paradigm can capture the scope of AI disruption, and to draw on disciplines with a more digital focus to bridge possible gaps. The second may adopt a more critical perspective to investigate rising normative concerns related to the potentially intrusive societal influences of AI technology and their reverberations on issues of social emancipation and economic progress. On the practical side, AI comes with a set of challenges for development involving issues of technological feasibility, performance and integration. These challenges call attention to potential limitations in capacity building that may hinder efforts to tailor AI to the needs, policies, and priorities of development institutions. The reliability of AI applications also matters, especially their ability to collect information effectively, process it insightfully, and feed it back into decision-making. Last but not least, AI’s contributions to development deserve to be discussed in the broader context of technological innovation by examining opportunities for integration and fusion with other emerging technologies such as blockchain, 3D printing, or satellite remote sensing.

As AI technology continues to evolve, the ‘AI effect’ is likely to take hold at some point: people will become accustomed to the latest version of the technology, which will then stop being considered AI, while fresher technological advances revive the discussion and move it forward. For some authors, the next stage of the ‘AI effect’ will consist of some form of collective or networked AI. More general intelligence will emerge as specialised (‘narrow’) AIs are linked together in complex and varied ways, and human levels of task performance will thus be rapidly exceeded, as all these AIs will be constantly interacting with each other, seeking improved partners, and maximising their functionality and reliability (Goertzel, 2014, p. 352). For others, discussions about the ‘AI effect’ are rather premature, partly because current AI, based on deep learning, is rather expensive to adopt and may not be the route to developing more advanced forms of intelligence (Naudé, 2020). Others are even more sceptical and fear the arrival of a new ‘AI winter.’ If so, declining public and investment interest in AI technology may create a situation in which potentially valuable solutions are thrown out with the ‘water of the illusions’ (Floridi, 2020, p. 2). Either way, it is important for development scholars and practitioners to learn how to distinguish between AI hype and reality, so that the promises that new technology makes to promote development goals remain conceptually sound and firmly anchored in the policy process. At the same time, new AI trends must also be pro-actively pursued and understood, as the pace of technological innovation is unlikely to recede. If Kurzweil is right, then the ‘law of accelerating returns,’ which predicts a doubling of the rate of progress every ten years (Kurzweil, 2004, pp. 282–84), may see the next decade lead to an explosion of AI innovations. The theoretical and practical relevance for the development field would be considerable as well.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Corneliu Bjola

Corneliu Bjola is Associate Professor in Diplomatic Studies at the University of Oxford. His research focuses on the impact of digital technology on the conduct of diplomacy, with a focus on strategic communication, digital influence, and methods for countering digital propaganda. His most recent publication is the co-edited volume “Digital Diplomacy and International Organizations: Autonomy, Legitimacy and Contestation” (Routledge, 2020). He is currently working on a new book project, “Diplomacy in the Age of Artificial Intelligence: Evolution or Revolution?”, examining the potentially transformative impact of AI on consular affairs, crisis communication, public diplomacy, and international negotiations.

Notes

1. This section draws on a more comprehensive examination of AI’s conceptual history that can be found in Bjola, 2020.

References