Editorial

Re-examining AI, automation and datafication in education

Just as 2022 was drawing to a close, the artificial intelligence research company OpenAI launched a preview version of ChatGPT, a natural language interface designed for conversational text interactions with users. ChatGPT is the latest instantiation of OpenAI’s suite of GPT (Generative Pre-trained Transformer) technologies, otherwise known as Large Language Models (LLMs): a form of artificial intelligence that can generate ‘human-like’ text based on its processing of huge quantities of text from the internet (Rettberg 2022). Its release prompted an outpouring of commentary on social media and in the press about how AI could thoroughly disrupt education – especially by rendering the student essay obsolete and opening up new ways for learners to access knowledge (Hern 2022). Only weeks previously, Meta (the parent company of Facebook) had launched Galactica, a language model for science and scholarship trained on millions of scientific articles, websites, textbooks, lecture notes, and encyclopaedias, which it described as a shortcut for researchers and students to ‘summarize academic papers, solve math problems, generate articles, write scientific code, … and more’ (Heaven 2022). And the previous year, Google proposed that its own LLM could be used as a kind of smart chatbot interface to enhance students’ access to information, as part of its aims to roll out further AI features and automation in its globally popular suite of cloud-based education platforms (Williamson 2021).

The promise of these forms of AI for education, according to their developers and supporters, is to transform the processes, practices and institutions of teaching and learning, but critics see things differently. Meta had to close Galactica after just three days, once users revealed it could generate plausible-seeming articles and essays that actually contained false, misleading or dangerous information. Educators reacted with dismay when early users of ChatGPT showed it could produce course assignments that would be difficult if not impossible to detect as machine-written – and that existing plagiarism detection software could not flag – leading to deep concern about its use for cheating. Some academics responded by posting on social media examples of the warnings they would be issuing to students against using LLMs as a form of academic misconduct. They noted that ChatGPT-generated essays often failed to cite sources; if prompted, it could compile reference lists, but these references might be fabricated or disconnected from the points made in the automatic text. As critics of LLMs have argued for several years, such models do not understand meaning or content, but merely synthesize information through sophisticated language processing to produce passable imitations of knowledge (Shah and Bender 2022). LLMs have also been associated with environmental racism, since those negatively affected by the environmental costs of training and deploying English-language LLMs are more likely to be located in the Global South (Bender et al. 2021), and are bound up in contested ideologies of ‘AI safety’ that privilege the interests and concerns of wealthy tech elites, entrepreneurs and investors (Gebru 2022). Computer scientists Narayan and Kapoor (2022) described ChatGPT as a ‘bullshit generator’ that would be ‘a bad idea for applications like education’. By January 2023, US education departments had begun blocking access to ChatGPT on school devices and internet networks altogether (Elsen-Rooney 2023).

LLMs are the most high-profile and controversial form of AI to hit education so far, prompting significant commentary about the possibilities and limitations of the technology and of the forms of education it might disrupt. Beyond the discussion of cheating, some thoughtful educators pointed out that ChatGPT simply revealed the inadequacy of the standard student essay as a mode of assessment, and called for better assignments that an algorithm would not be able to write (Warner 2022). Others noted that educators could invite students to play with, critique and thus demystify these forms of technology within classroom learning assignments (Hirsch 2022), or could use them to help develop students’ critical digital literacies and teach them how to use LLMs ethically (Lipman and Distler 2023). Another caution was sounded about the extractive nature of the technology: students’ labour is utilized by OpenAI to further refine a product that the company will monetize, and which may make students’ own future careers obsolete (Caines 2022). Yet others argued that the companies building such technologies should take greater ethical responsibility to ‘promote the socially beneficial uses and deter or prevent the obviously negative uses, such as using a text generator to cheat in school’, rather than leaving it to educators to pick up the pieces (Reich 2022).

ChatGPT even produces disarmingly honest text about the risk-benefit continuum of its use in education. When one of us typed queries into the ChatGPT interface (on 22 December 2022) about its potential risks to students in education systems, the reply cited ‘dependence on technology’, ‘loss of privacy’, ‘misuse of AI’, and ‘inaccurate or biased information’, while on potential benefits it cited ‘improved efficiency’ (‘help students save time by generating reports, summaries, and other written assignments quickly and accurately’), ‘enhanced learning’ (‘help students learn by providing additional information and context on a given topic’), ‘greater accessibility’ and ‘better writing skills’. However, ChatGPT may never be able to produce utterances that exist outside the current system that rewards ‘measurable’ and atomized knowledge and skills, as perhaps indicated by its tendency to emulate highly formulaic genres like the standardized school essay (Rettberg 2022).

It seems likely that LLMs will be a significant issue in education in the months and perhaps years to come, prompting a range of debates about purposeful assessment, student cheating, the ethical responsibilities of the firms building the technologies, and the possibilities and limits of human-machine hybrid writing practices (Perrotta, Selwyn, and Ewin 2022). More generally, they illustrate the extent to which the education sector is currently being washed by waves of developments in AI, datafication, machine learning and automation, which are already exerting concrete effects, raising ethical conundrums and catalyzing acute controversies. These issues will be familiar to regular readers of Learning, Media and Technology, where we have previously published special issues on datafication and AI in education, and have welcomed a growing number of critical, empirically based and theoretically informed analyses of these and related topics.

This issue of Learning, Media and Technology was never planned as a special issue. Over the past two years, however, we have received and published a range of studies of AI, automation, digital platforms, infrastructures and data analytics in education. As we were making decisions about the line-up for this issue amid the controversy over LLMs, it made sense to gather some of these recent articles together for our first issue of the new year. So while this is not a special issue with a distinctive editorial direction, it constitutes a thematized issue that we think illustrates a series of exciting and challenging directions in research on AI, automation and datafication in education. The current controversy over LLMs needs to be situated in the context of the ongoing critical scholarship that the articles collected here exemplify. But the pace of such developments, and the enthusiasm and concern they generate, also require us to re-examine the ways we approach issues such as AI, automation and datafication. Rather than rehearse the individual papers, in this editorial we want to briefly highlight what we see as a series of outstanding issues, and to invite future submissions to grapple with the ongoing and unfolding challenges of AI, automation and datafication in education.

The first issue is what we can think of as the politics of automation. Automated technologies like LLMs – or AI, automation and datafication more generally – are not neutral, nor are they of obvious benefit to teachers, students, institutions or education systems as a whole. Their development, deployment and use depend on socially situated power struggles for influence. This is obvious in the current race to roll out LLMs, for example, as large, well-funded technology companies seek competitive advantage by releasing new models, new functionality, new opportunities for integration, and new imaginaries of use across a vast range of sectors, education included. The first three articles in this issue all examine AI in education, from the perspectives of its historical genealogy (Rahm 2023), its embeddedness in public-private partnerships (Nemorin et al. 2023), and its resonances with forms of colonialism (Zembylas 2023). They make clear how AI has appeared in education according to historically situated contingencies and power struggles, including forms of exploitation, and how its spread depends on new kinds of relations that criss-cross the political and commercial spheres. Together, these articles make the case for seeing AI and automation in education as a political program, both in terms of the power of those producing the technology and in terms of those experiencing its impacts.

A second issue is corporate infrastructuring. A primary way AI and automation arrive in schools and universities is through the introduction of private platforms into the spaces and routines of public education, potentially creating new kinds of dependencies and technological lock-ins. Platforms are the interfaces through which education institutions connect to underpinning digital infrastructures. To a significant degree, as platformization takes place, public schools are becoming increasingly reliant on corporate infrastructures to carry out many of their everyday functions, such as pedagogic activities, information management, behavioural and attendance monitoring, and assessment. This goes beyond the use of educational technologies in classroom instruction, and is enmeshing schools in technological stacks that include cloud computing facilities, data storage and analytics, Software as a Service, and application programming interfaces allowing cross-platform integrations. Articles in this issue highlight how platformization is occurring in the schooling sector (Cone 2023) as well as in higher education (Thompson and Prinsloo 2023), emphasizing the ways educational institutions are increasingly operating within webs of platforms and data infrastructures (Pangrazio, Selwyn, and Cumbo 2023) and how platforms also facilitate and govern new forms of education beyond formal institutions (Robinson 2023). These articles therefore surface important research challenges concerning the platformization and infrastructuring of education, and the ways AI and automation are ‘plugged in’ to educational institutions and practices. They also point to the need for further empirical study of situated infrastructural and platform effects: data infrastructures and platforms may be the products of global companies, but they are locally deployed and enacted through context-dependent data practices that only up-close, ethnographically informed research can examine.

Another issue is what we might term the scientization behind automation. Scientization refers to the processes involved in claiming authority from empirical evidence and scientific theory, often by mobilizing findings and concepts selectively according to particular interests and objectives. Articles in this issue, for example, address the ways that ‘nudge theory’ and facial recognition have been used to authorize attempts to introduce greater automation into education. Nudge theory, with its basis in behavioural economics, has become the foundation for many applications of predictive data analytics in education (Decuypere and Hartong 2023), proposed as a scientific way of predicting student outcomes and then prompting students to change their behaviours towards more optimal results (Smithers 2023). Facial recognition rests on highly contested scientific approaches that have nonetheless been deployed to manufacture trust and legitimacy in the education sector, as a way of selling such services to schools (Selwyn, Campbell, and Andrejevic 2023). Further research on automation in education might contend with the ways scientific theories and ideas are mobilized as seemingly objective sources of authority and legitimacy, addressing how often-contested forms of scientific knowledge get encoded in the platforms and infrastructures of schooling.

One of the striking responses to critiques of LLMs has been conflict over ethics. On social media, high-profile technologists and venture capital investors have begun describing ‘AI ethics’ as a form of scientific censorship. Calls for greater responsibility and accountability by tech firms are therefore being treated as a ‘culture war’ issue, with social scientists and humanities scholars seen as seeking to ‘cancel’ the very technoscientific innovations that entrepreneurs see as vital for the future. Other tech figures have responded by funnelling funding into ‘AI safety’ and thus shaping the priorities of the field in particular ways. In this issue, questions of technology ethics and the governance of educational data are treated as sites of conflict and contestation (Hillman 2023). While calls for fairness and accountability in relation to AI and datafication are no doubt welcome, their realization is complicated, value-laden, and always the result of intense negotiation and compromise (Sahlgren 2023). Indeed, as recent cases of remote proctoring platforms show, contests over the ethics of educational technologies can extend into legal battles. Current efforts by regulators to contain the edtech industry, or to rein in Big Tech influence in education, highlight how questions over the ethics of technology have translated into major sites of regulatory conflict. Future studies might trace how ethical controversies over AI, automation and datafication in education play out in context, and how or if they are settled and resolved.

An inevitable response to critical studies of digital technologies and media in education is the question of how things might be designed otherwise. Some of the articles collected here explore that question in relation to AI, platformization and datafication. Amid attempts to decolonise higher education, decolonial ethical frameworks for AI in higher education are also imperative; otherwise, new technologies will reproduce the racialised and colonial formations of the past. Alongside redesigning technologies before they are used, attending to everyday practices in higher education can indicate ways in which students and instructors can proactively become involved in data activism – for instance, by repurposing data analytics and reconfiguring the ways in which students are positioned and seen through learning analytics. A forthcoming special issue of Learning, Media and Technology confronts the challenge of designing otherwise in more depth, by addressing issues of data justice and democratic participation in the design of educational technologies.

As these brief notes suggest, the articles in this issue open up a range of concerns that are becoming increasingly urgent across the education sector – whether in schooling, higher education, lifelong or workplace learning – as automated technologies like LLMs, AI and platforms continue proliferating and expanding in reach and scale. The collected articles already help us make sense of this intense period of technological change and its implications, and advance the kinds of questions and issues that we hope future submissions to Learning, Media and Technology will help address further.
