
Education, automation and AI: a genealogy of alternative futures

Pages 6-24 | Received 24 Sep 2020, Accepted 03 Sep 2021, Published online: 09 Sep 2021

ABSTRACT

The relationship between technical development and education is a reciprocal one, where education always stands in relation to those skills, competencies, and techniques that are anticipated as necessary in a technological future. At the same time, skills and competencies are also necessary to drive innovation and technical development for the progressive creation of desirable futures. Jumping back to the 1950s, this article illustrates how automation and AI have been anticipated as both problems and solutions in society, and how education has been used to solve these problems or realize these solutions. That is, computerization debates have concentrated on both the growing opportunities and the increasing risks, but almost always also on the need for corresponding education. The article uses a genealogical approach to show how, from the 1950s and up until today, education has been mobilized as an important tool for governance in computer policies.

Introduction

Genealogies seek to trace out the historical processes, conditions, conflicts, bifurcations and con- and disjunctures out of which contemporary practices emerged. Applied to a field such as AIed, these contingent genealogical threads and twists include disciplinary conflicts and encounters, technological developments, funding schemes, methodological advances and sectoral encounters between academic research and commercial imperatives. (Williamson and Eynon 2020, 224)

In their call for expanded research on artificial intelligence (AI) and education, Williamson and Eynon (2020) specifically argue that genealogies provide a pertinent approach for mapping out historical trajectories and alternative futures. This paper answers this call by analyzing how broad education efforts, often correlating to foreseen changes in automation and AI and aimed at the general citizenry, have been used as forms of governance in shaping the future in desired directions. The article maps out and explains intersections between technological and, what I call, educational imaginaries of technology (Rahm 2021b) from 1955 to 1997, by addressing the question: what role has education performed in the anticipation and realization of a sociotechnical future? I will argue and demonstrate that critical studies of educational policies, and related empirical material, are a way to explore and explicate how said policies provide a map of anticipated threats and corresponding solutions for desired (automated) futures, and concurrently act to produce this future. To address this issue, the article takes a Foucauldian (Foucault 1984a) genealogical research approach and elaborates on sociotechnical imaginaries (Jasanoff 2015) from an educational perspective.

Today, the use of digital technologies is a fundamental aspect of practically all learning environments (Selwyn 2017). The number of learning technologies and educational platforms that implement artificial intelligence (e.g., facial recognition, conversational agents, and predictive tools) is growing rapidly. Thus, learning is increasingly enacted in an interaction between humans, robots, and intelligent systems, and as such, the epistemic foundations of learning – as well as the organization of education – are imagined to be changing. Consequently, ‘artificial intelligence’ is also rising toward the top of educational policymakers’ agendas. That is, both national school curricula and international policies increasingly include digital competencies, media literacy, computational thinking, and even AI literacy as necessary skills for the future. Furthermore, when technical systems increasingly also learn from us, the aforementioned literacies will have to be supplemented with inherently human skills, or ‘powerful knowledges’ (Young and Lambert 2014), such as personal ‘learning compasses’ (OECD 2019a) and ‘futures literacy’ (Miller 2018). Proficiencies in anticipation, predictive knowledge, and social and emotional management (such as self-regulation) are seen as increasingly important for individual well-being, responsible citizenship, and thriving in a digital world (OECD 2019b). Concurrently, lifelong learning is seen as increasingly important, spurring new educational initiatives (Eynon and Young 2021). However, as Eynon and Young point out, stakeholders also ‘direct AI development in the context of lifelong learning through their constructions of it, which are aligned with their agendas’ (169).

Rapid digital development spurs epistemic changes, but knowledge is also seen as essential for the ethical use of digital technology. Concepts that are often put forward are explainable and trustworthy AI, which are depicted as necessary for supporting the development of ethically responsible intelligent systems (OECD 2019c). However, for AI to be explainable, the general population must also be able to understand these explanations. The transparency of technical systems is therefore dependent on the skill sets and education of the citizenry.

Consequently, the shaping of the future through education is not only a question of schooling children. Today, more than ever, technological change demands new and expanded knowledge(s) from everyone. The digital economy is growing exponentially, and in the near future, nine out of ten jobs are anticipated to require digital competences or skills (European Commission 2016). As more jobs are automated, many also foresee a polarization of skill sets, creating a growing global surplus of poor and uneducated people. In countries where the economy is primarily centered around manufacturing, as much as 85% of jobs are predicted to be completely automated (World Bank 2016). Increased automation potentially generates a growing ‘unnecessariat’ – people who lack the necessary skills, and who are thereby rendered superfluous (Bastani 2019). Digital skills are thus prioritized, and are put forward as one of eight key competences for lifelong learning (Council of the European Union 2006), and improving the digital skill set of the population has become a main agenda for many supranational institutions. But how did we end up here? Echoing the introduction, a genealogy of AI and education can provide important insights into how this development has not been an inevitable progression, but has rather been based on politically and economically charged problematizations that, in equal measure, create both the problem and the most viable solutions to it.

Background: using Sweden as a vantage point

Historically, new technologies have been seen as both a promise of a brighter future and a source of worry and fear. Technologies such as the car, the radio, and television have been described, at different points in history, as harboring emancipatory and democratizing potentials (Winner 1986), as well as presenting society with new kinds of accidents (Virilio 2007) and potential worries. Automation has historically produced both utopian descriptions of a fundamentally improved world in which man is freed from work, and images of man becoming a slave to, or prisoner of, technology.

In line with such debates, automation and AI have, since the 1950s, been seen as increasingly powerful tools that demand serious consideration of their consequences. This paper will show that education has been a primary (if not the most central) instrument in creating the necessary social preconditions for automation – as well as ensuring citizens’ leverage over anticipated technological futures.

Using Sweden as a vantage point is interesting, particularly seeing how Sweden’s welfare model has been described as a middle ground between capitalism and socialism. Even in a Scandinavian context, the Swedish state has been more interventionist and corporatist than its neighboring countries (Knudsen and Rothstein 1994). After the Second World War, Sweden began gaining a reputation for being one of the most modernized countries in the world. Swedish politics and the welfare state are thus often referred to as internationally exceptional. Full employment was achieved through a compromise between capital and labor, and Sweden’s universal and generous welfare state, through the redistribution of income, succeeded in creating a high level of equality with strong institutional trust and a high level of political mobilization based on social class (Esping-Andersen 1990). Today, however, the glorious days of the Swedish model are, arguably, over. Inequality and segregation are now relatively high in Sweden. Market-oriented politics and financial cuts have eroded the redistribution policy of the welfare state (Schierup and Ålund 2011). Taxes and public spending today are roughly at the same level as in comparable countries (Pierre 2016). Sweden today clearly has a less generous and successful welfare state. Despite this, Sweden is well worth studying, especially in relation to digitalization and education. Sweden has a rich history of state-funded computerization and education efforts (e.g., Kaiserfeld 1996; Rahm 2021a), and today ranks at the very top when it comes to citizens’ digital skills. At the same time, Sweden (along with the other Nordic countries) has a high participation rate in non-formal adult education and further education (Pastuhov 2018) – to such an extent that it has been referred to as a Scandinavian mass phenomenon (Laginder, Nordvall, and Crowther 2013). Interestingly, a comparative analysis of national AI policies shows that the Nordic countries to a greater extent advocate measures aimed at changing people (such as education and information), while the U.S.A., for example, to a greater extent advocates measures to regulate technology (van Berkel et al. 2020). Research on Swedish computer politics has also shown that it has always been permeated by societal and political values, connected more to a certain view of society and citizens than to technological preconditions (Blomkvist and Kaijser 1998; Glimell 1989; Henriksson 2005; Ilshammar 2007; Laginder 1989; Söderlind 2009). However, this body of research has rarely addressed the connection to education policy. The few studies that do (see Rensfeldt and Player-Koro 2020 for a discussion of major Swedish school digitalization curriculum reforms) focus almost exclusively on the digitalization of schooling or higher education, rather than on educational efforts aimed at the general citizenry. From a historical perspective, non-formal adult educational efforts regarding computers are thus understudied. Such efforts have included practical courses about operating a computer, but they have more often taken the form of enlightenment ambitions, aimed at wide groups in the population, with the societal impacts and anticipated effects of computerization in focus.

This paper starts by providing a background to sociotechnical imaginaries of the digital and introduces educational imaginaries as a term encapsulating how education about technology is regarded as a main solution for reaching certain social goals. Then, the method, genealogy, and the empirical material are presented. After this, four educational imaginaries are discussed, with particular reference to political-educational efforts aimed at educating the general citizenry about computers. Using Sweden as a vantage point, this paper also provides both concrete examples of, and outlooks on, transnational discourses and anticipations. Historically, the diffusion of computing arguably started in the U.S., followed by Western European countries (e.g., France, Great Britain, and West Germany). Departing from a small country such as Sweden may therefore seem moot, but it can help to shed light on the relationship between global and local discourses and allow a closer investigation of which vulnerabilities, precarities, and uncertainties, but also strengths, welfares, and stabilities, are anticipated, as well as of which problems and solutions are framed, and how these problematizations are generative in terms of producing social, political, and epistemic meaning.

Theory: sociotechnical and educational imaginaries

The dream (or sociotechnical imaginary) of ‘the good digital society’ starts in the mid-1950s. Over the following 70 years, computers would go from being massive, room-sized calculators to being so small that they could be embedded in all kinds of everyday objects and even human bodies. From a historical perspective, it could be argued that computers have, in themselves, changed so much that they are hardly the same thing today as they were 70 years ago (and that they are therefore not comparable). But computers were never one delimited phenomenon. Their capacities have, over time, followed many trajectories, taken detours, crossed paths with other technologies, diverged, and so on. We have also lived, not only with materially existing technologies, but with anticipations of how future technologies will impact our lives – i.e., how we imagine that the sociotechnical future will take place. As such, problematizations of computerization have not only addressed what the machines were in fact capable of, but more often also their hypothetical societal implications and revolutionary future potentials.

Jasanoff (2015) describes how sociotechnical imaginaries are a key dimension of modernity. Sociotechnical imaginaries can help us to transcend previous binaries between ‘descriptive and normative, structure and agency, material and mental, local and translocal’ (2015, 323), but also between what is, and the alternatives that could have been. What I propose is a particular focus on how education is repeatedly used as a governing tool to reach certain future social goals – what I call educational imaginaries. So, paraphrasing Jasanoff’s definition of sociotechnical imaginaries (Jasanoff 2015, 19), educational imaginaries could be defined as:

[…] collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in education about science and technology. (My elaboration and emphasis)

That is to say that not only are societal and sociotechnical discourses an integral part of the development of technical systems (Flichy 2007), but so too are educational efforts – something that the concept of educational imaginaries aims to address. Combining educational imaginaries with historical-genealogical studies allows us to address what predicted capacities computer technologies of the future may (or may not) hold, and what future visions (including threats and possibilities) such an imaginary machine can produce, particularly relating to education about them. Even though we live in what can be described as a digitally permeated society, I will argue that very little research has been done that aims to problematize the joint historical, imaginary, and structural aspects of educating the general citizenry about computers in order to produce certain outcomes. Thus, in relation to the computerization of society, we could say that computers are both electronic machines, using binary logic to execute certain instructions, and an absolutely integral part of sociotechnical and educational anticipations over, at least, the last 70 years. Broadly speaking, developments in computing can be seen as a combination of automation (the computerization and mechanization of manual labor) and artificial intelligence (the computerization and mechanization of cognitive tasks). These conceptual forms often overlap in the history of societal discourses, which can have certain implications when discussing education about computers. That is, these different forms of computing suggest different kinds of imagined societal implications to which education becomes a response. As such, education policies, and related reforms, are forms of public pedagogy, and when the focus is on creating desirable futures, pedagogy (particularly pertaining to education about computers) becomes an important mode of intervention to actualize said imaginaries. However, as we shall see, educational imaginaries can overlay, intensify, and disrupt governing practices and processes.

Method: a genealogical approach

Genealogy, as described by Foucault (1984a, 1984b, 1991), is not a method for describing a straight teleological derivation of a certain current phenomenon. Rather, it is a historization emphasizing how a plurality of events was consociated in leading up to the ‘now’. Genealogy is, therefore, not only about using history to show how the present was ‘caused’, but also a methodological procedure that makes use of history to diagnose the present. That is, it functions as a way to problematize contemporary concepts and intellectual thought figures. As several scholars have now suggested, a genealogical approach is not a strict methodological step-by-step instruction (cf. Ball 2013; Bowman 2007). Rather, the benefits of the genealogical approach come from a fundamental questioning and scrutinizing of the conditions and objects that constitute ‘knowledge’ in society. Importantly, at the heart of such scrutinizing is an ambition to reveal power differentials that we tend to take for granted, or that are enacted through networks that make them seem inevitable. Genealogy is thus ‘a methodology of suspicion and critique’, as Bowman puts it; a set of concepts and loose procedures that aims to de-familiarize ourselves with, and then reveal, consequences of societal knowledge-production (Bowman 2007, 138). As such, genealogies certainly become historizations of the present, but importantly, not of the more common, sequential, and continually progressive kind. This does not mean that genealogy thinks of history as random or unsystematic. Genealogy considers history to be a complex weave of interrelated events, collectively and organically directing change (or non-change) in certain directions. This includes a mapping of how particular discourses emerge and how they are enacted on different levels in society, including the selection and objectification of subjects and their ontological status as determined by various cultural techniques (Foucault 1977). As such, Foucauldian genealogy opposes the search for a definite origin and is put forward as the opposite of historical studies focused on sequential causal relations. A genealogy must start with the things we experience as having no history at all. In the words of Winch:

Using this approach, written records, including policy documents, laws, archival material, institutional reports, and training schedules, need to be collected. Data should also be drawn from ‘practical texts’ that provide rules, opinions, and advice on how to behave in a certain fashion. These texts are themselves objects of a ‘practice’ in that they are designed to underpin everyday conduct. (Winch 2005, 181)

As such, a genealogical approach should be prepared (and perhaps even be obligated) to take on a breadth of empirical material.

As such, I have operationalized the genealogical method by isolating the different contexts, or scenes, where events take place – while also trying to remain sensitive to events that are conspicuously missing or remain completely unrealized. I have then read, re-read, listened to, or watched the texts, audio recordings, and films, with a focus on the problems and solutions that are enacted and how they are framed in relation to knowledge, learning, and education. The central issue here is that events tend to remain important to us if they result in phenomena that keep on existing, and that discursively keep on having public or societal value. The main point of genealogy lies in questioning and revealing that who we are, and what we hold true, is not necessarily the only possible outcome of a controlled linear progression, but more a result of tangential events, or even happenstance (Gale 2001).

According to Foucault, the power of domination is reproduced in rules, rituals, and in the carefully designed procedures that distribute rights and obligations – what we may refer to as negotiation strategies and problematizations (Bacchi 2012). As such, a genealogical approach is interested not only in the creation of meaning, but also in revealing systems of domination and submission (Foucault 1985). For example, in relation to education (and computerization), it becomes particularly evident that certain body techniques, and even certain bodies, become targets for problematization and negotiation (cf. Pillow 2003). By including bodies and bodily experiences, classifications, and problematizations in policy analysis, Pillow emphasizes the role that bodies play in regulating subject positions, thereby implicating the lived experiences of the very subjects of policy changes.

Foucault conceptualized systems of rules as colonizing regimes of dominance. The argument is that, throughout history, being ‘successful’ has corresponded to controlling, or dealing with, the regimes of dominance. The term ‘to deal with’ entails not only a following of rules, but also an ability to use the rules against those who created them – that is, to shape, pivot, obscure, hide from, and redirect rules. In relation to the inherent political aspects of technology, large education efforts can thus be theorized as a governing means for controlling, or dealing with, a certain (technological) system of rules and making subjects (want to) conform to it. The extent to which resistance against such conformity is in fact exercised is something that genealogy is further capable of revealing.

Empirical data

The starting point for this article is the 1950s, beginning in Sweden, but expanding towards supranational and time-spanning imaginaries. Computerization discourses are polyphonic by nature and originate from many different stakeholders. As such, additional international material has been analyzed. Consequently, the genealogical threads put forward in this article build on several types of material (see Appendix for a list of the material): policies and political speeches, educational material (broadcast and printed), public reports, and evaluations. In order to include education efforts on a more detailed level, I have also analyzed courses, educational films, and radio broadcasts. Taken together, this material constitutes a broad, but also coherent, genealogical backdrop from which I have elicited significant and recurring educational imaginaries. While this may come across as a disparate body of material, it also resonates clearly with how Foucault imagined his genealogical approach to be executed (Foucault 1984a; Dean 1994).

Analysis: a genealogy of four educational imaginaries

The analysis is presented in the form of four educational imaginaries. These educational imaginaries are negotiated and problematized by various collective actors, making them particularly strong at certain points in time, but also persisting and receding while other imaginaries may grow stronger. As such, these imaginaries are part of a larger genealogical ‘weave’ of (education about) computerization in society. At the same time, following these particular ‘genealogical threads’ and their intertwining is an analytical process that helps us better understand not only the educational imaginaries through which contemporary practices have been shaped, but also consider the alternative threads that were not spun.

Educational imaginary 1: adapting citizens to automation and the leisure time explosion

In the 1950s, it was generally governments that financed the construction of the first computers – such as the American ENIAC or the Swedish BESK. Governments were consequently influential actors and procurers in producing and guiding problematizations around automation and computerization during this time. The main ambition was not only technological progression, but also to adapt people to an impending computerization of society, characterized by increased welfare and more free time. The concept of ‘automation’ was, during the 1950s, used to refer to an automated process including at least one ‘electronic brain’ – a colloquial concept for a digital machine that could control other machines. The British Department of Scientific and Industrial Research (1956) announced that in a state of full automation, the workforce could be reduced to maintainers of machines. The threatening hazard, as it was anticipated, was not ubiquitous computerization as such, but rather computerization not happening swiftly enough (Dobinson 1957). The automated future was anticipated as increasing wealth, decreasing workloads (as well as household tasks), creating more spare time for everyone, and thereby increasing well-being for all (Low 1950).

As such, the automation of the 1950s was imagined as bringing a large and rapid increase in standards of living, measured in variables such as more spare time and higher wages (Ivre 1956). The computer was conceptualized as a symbol of rapid technological development and was seen as increasing the demands for education in a changing society, where the individual must always be ready to re-educate themselves. Automation was anticipated to be a powerful tool in the service of rationalization and productivity. From an economic standpoint, this was also seen as the only way to raise standards of living, increase consumption, extend spare time, and enrich the personal lives of the masses. As such, the main question was not if welfare and free time would increase, but how soon (Swedish Social Democratic Party and the Swedish Trade Union Confederation 1956).

The main problem was to adapt people to the social effects that automation was depicted as causing. A growing need for an educated labor force was anticipated in order to secure a better future. In 1957, the scientific journal International Review of Education, issued by UNESCO, devoted a special issue to automation and education. In the article ‘The call for men’, statistician and educator P. J. Idenburg (1957) advocated a more scientific method in the search for talent in order to educate more of the ‘gifted boys’. That is, there was seen to exist an ‘educational reserve’ of people who, although suitable for higher education, had never been given the opportunity to study under the current system. Broadened recruitment would thus allow twice as many (boys) to be educated compared to before; it was just a question of finding, through scientific methods, the most suitable candidates. These sifting procedures were also described as constituting a fairer way to determine futures, particularly compared to letting economic conditions determine an individual’s possibilities. The educational reserve of the 1950s was thereby already ‘datafied’ by being subjected to statistical techniques and technologies of aptitude. At the same time, these measures were also already deeply intersectionally structured (based primarily on an able-bodied male hegemony).

Consequently, both women and ‘low talented’ (sic) people were problematized as others, but in different ways (Rahm 2018). Through automation, women were anticipated to gain somewhat increased opportunities to combine work and care, and could thus (conditionally) take part in the better future. However, the ‘others’ of deindustrialization were particularly targeted for ‘adjustment’ via social reforms and political actions (although everyone needed some adjustment). The imaginary of the soon-to-come leisure time explosion gave rise to arguments that education had to be supportive of creativity, imagination, and intuition; to aim not only at ‘Bildung’, but also at ‘Leistung’, that is, proficiency and training in the skills needed to ‘master life in nature and in the world of men’ (Idenburg 1957, 418).

Educational imaginary 2: broad knowledge as an antidote to the lurking threats of computing power

If computers in the 1950s held a promise of (a certain) utopia, this idea shifted in the late 1960s and early 1970s, when computers were increasingly seen as dangerous, and were associated with imperial capitalism, surveillance, citizen control, invasion of personal privacy, and exploitation of workers. These potential problems with computers were seen as so pressing that stopping computerization entirely was discussed. In some cases, the imminent computer-driven rationalization of certain workplaces was in fact stopped by striking workers in countries including Sweden, France, Norway, and Great Britain (e.g., Ehn, Erlander, and Karlsson 1978).

Many political parties drew up their first action programs on computers. The Swedish Social Democratic Party’s action program was called ‘Computers on people’s terms’, and began with a poem:

Never have so few known so much about so many. When the monitor displays your social security number, the ten digits are followed by a cryptogram, which only the initiated can read. It tells everything about your social and economic skeletons. That you missed down payments on the colour TV, for example. If we work together, we can read between the lines of a society where data gathering should at all costs guard the financial system against surprises. Both in a literal and a figurative sense, you should be capable of payment. When you falter, it is noted on the screen. […] The refinement lies in the impersonal abstraction. One wouldn’t use one’s fists against cold machines. This has also been foreseen by the system. (The poet Stig Sjödin in the program for the Swedish Social Democratic Party 1978, 6–7, my translation)

The main anticipated problem in the 1970s was control – who is in control of the automated future? Leaving control to the government or to the market were both regarded as deeply problematic options. Several stakeholders, mainly outside the government, had become sceptical toward the potential benefits of computers. At the same time, the growing dystopian anticipation was mainly connected to the fear of automation and AI in the hands of an elite. Concurrently, many intellectuals in the 1960s stressed that computing should be a public utility – like water or electricity (Rankin 2018). Computerization controlled by ‘the people’ was thereby seen as an important solution to such anticipated threats.

An important solution to the potential problems of computerization was also education about computers, thereby providing people with the capacity to control, govern, and further develop computers in different directions. In newly started computer programs at universities, discussions about social issues increased. These discussions grew out of efforts to create an alternative computer development – a development based not on money from the military but on user needs. For example, at Dartmouth College in the U.S., free computer access was introduced for all students, and the emphasis was on the importance of simplicity and user convenience in the design of the school’s time-sharing system (in contrast to an earlier focus on efficiency for the computer) (Rankin 2018). Through education, computers could be disassociated from their strong connection to industry and the military, and instead come to emphasize openness, democracy, and sharing.

‘Adjusting people to machines’ was regarded as increasingly problematic by different stakeholders. In the early 1960s, psychologist and computer scientist J. C. R. Licklider anticipated that computers would become part of every intellectual transaction at universities. The problem was that the computer ‘lives behind a glass wall. It has a tighter appointment schedule, and more resolute appointment buffer, than a dean’ (Licklider 1962, 204). It was anticipated that if computers could be used simultaneously by many users at remote stations, who were able to work on their own problems at any time, great things would happen. The role of universities was to drive the development of this ‘approachable’ multi-program and multi-user computer, and further to create opportunities for more imaginative and creative people to use it. Licklider stated that it was not sufficient to wait for the computer industry to develop the computers the university needed, and also that preparing the programs required to make computers ‘go critical’ was an intellectual task universities should handle (Licklider 1962, 209). As such, universities had an important role in the creation of an enhanced man-machine system.

Most of the researcher’s time, most of the scholar’s time, most of the student’s time is spent getting into position to take the step. No one knows what it would do to a creative brain to think creatively continuously. Perhaps the brain, like the heart, must devote most of its time to rest between beats. But I doubt that this is true. I hope it is not. (Licklider 1962, 206)

As such, it was anticipated that: ‘We can look forward to the time when any student from grade school through graduate school who doesn’t get two hours a day at the console will be considered intellectually deprived – and will not like it’ (Licklider 1962, 208–209). Marvin Minsky, building on Licklider, stated that he too anticipated that a time-sharing computer system would be of immense value for anyone wanting to get good work done. However, the lack of time with the computer might not be the only problem: ‘[N]owadays one gets about twenty minutes of real thinking done on a good day when not too many students come in to interrupt him’ (Minsky 1962, 213).

At the same time, an alternative culture among computer scientists grew, especially in the U.S. Programmers, for example, organized themselves as ‘Computer People for Peace’, protesting against the Vietnam War and collecting bail money for the defendants in the Black Panther Party trials (Sonnert and Holton 2002). Not only did their newsletter ‘Interrupt’ focus on how the computer was used for war and behavior control, but they also wanted to redirect ‘the marvellous technology to our own purposes, whether striking back at the ruling class or just having fun or doing some good for the people’ (Computer People for Peace 1973). As such, the aim of the newsletter was to educate people and demystify computers.

The educational imaginaries of the 1950s focused on adapting people to an impending high-tech future. The educational imaginaries of the 1970s represented the opposite – adapting machines to people’s needs. Knowledge of computers was regarded as important in order to control and direct the threatening computing power. An illustrative example of this was that, at the time, the Nordic countries conducted policy-driven, workplace-based research and education initiatives, where a particular ambition was to include ‘low-skilled’ professions in the knowledge production relating to computers. These initiatives are often referred to as the ‘Scandinavian approach’ or the ‘Scandinavian school of systems development’, and represented an approach to developing computer systems in cooperation with their prospective users (Howard 1985). This is essentially the starting point for the field now known as participatory design. The purpose of this education was to strengthen the position of workers and to provide them with tools to express the requirements of computer systems and to develop computer systems in line with their own needs (Carlsson et al. 1978).

In 1978, 100,000 Swedes took part in a course called ‘Computer use’, developed by the Swedish Trade Union Confederation (Emanuel 2009). Participants were encouraged to examine the historical development of technology in the workplace, focusing on how it had affected working conditions, work content, values, power asymmetries, and control over work.

The 1970s include the most techno-dystopian imaginaries in the studied material, but there were also strong beliefs in the possibility of a better future if computers were placed in the hands of the people. Technologies and technological development were seen as distinctly non-neutral and in need of control and close observation. This may be due to large administrative systems entering the collective imaginary, and issues around privacy and surveillance surfacing and reaching critical mass(es). The computer became personal in the sense that it could have an impact on the private life of an individual – and the sociotechnical imaginaries took a more disconsolate turn.

One example is how, in the 1970s, the Swedish public debate emphasized what was called computing power in society. This concept referred not to the number of transistors that could fit on a single chip, but to a general, forceful process that would digitalize the entire society. This computing power was also envisioned as being capable of connecting the entire world in a network of computers. This was despite the fact that most people did not use computers, and hardly any households owned a computer, let alone a computer connected to the internet. Nevertheless, sociotechnical anticipations of computerization had a great impact on the framing of the machines and the correlating educational needs.

Educational imaginary 3: computer education as a ticket to the future

The social and public critique of computerization that characterized the 1970s shifted again in the 1980s. Once more, everyone was the target population for education – this time in order not to become lost generations. As such, the educational efforts now contained more actual technical knowledge.

The UNESCO journal International Review of Education again published a special issue on computers in 1986. The prevailing discourse in this special issue was that digital technologies in education could only be successful if they were accepted by the users. Similarly, UNESCO emphasized that the imminent major impact on society of rapid technological change compelled retraining and the gaining of ‘new forms of literacy’, particularly ‘computer literacy’ (UNESCO 1985, 29). Many countries launched broad educational efforts directed at the entire citizenry. An example of this is the British BBC computer literacy project (Twining 1986). The program attracted the attention of millions and entailed twenty television programs, course books, computer programs, and even an actual computer. The BBC Micro was a series of computers developed for the purpose of the educational program (Gazzard 2016). The computer was installed in most schools in Britain, but also became a popular home computer.

The Swedish Social Democratic government decided to launch what they described as the largest educational reform in Swedish history – ‘Broader education and training in EDP (Electronic Data Processing)’ – providing every citizen with free computer education. It was stressed that this educational effort needed to be a mix of citizen education and work life education (with a particular focus on computers).

It was seen as very important for people to understand that EDP systems do not represent and process real information (data) about the world, but actually administer ‘conceptions, ideas or values, that a few people are “building into” electronic rule-based systems’ (Commission for Informatics Policy 1985, 89).

Educational efforts were aimed at the entire population, but particularly targeted the groups who showed the least interest in computerization, as well as those who were viewed as less educated in general. These groups (again) happened to be mostly women (especially blue-collar and migrant women). Women, if they were not provided with real influence in computerization processes, were regarded as particularly at risk of becoming the ‘computer illiterates’ of the future. As a solution, the Swedish government decided to initiate a special computer course, whereby low-skilled workers were offered education during work hours, with pay.

In the 1980s, imaginaries around an unstoppable computer development simultaneously portrayed anti-technological discourses as backward-looking. Once again, there was a central focus on the utopian aspects of technology. At that time, computer technology was anticipated as the core factor in creating the good information society, which would – or should – be characterized by increased business competitiveness, increased equality, and developed democracy. Sociotechnical anticipations of the time included descriptions of computers that were so small, so cheap, and so powerful that they would be integrated into all domestic appliances. Searching for information in databases was also anticipated as becoming a mundane part of people’s lives (Government Bill 1984).

Increased education and mass information were described as ‘one of the principal preconditions for the development and use of computer technology under democratic governance and control’ (Commission for Informatics Policy 1984, 13). A survey by Statistics Sweden (SCB) reported that 2 million Swedes felt they needed computer education; 1.5 million of these stated that the most important reason for needing computer education was to be able to evaluate the social impacts of computerization.

Educational imaginary 4: access to information and ICTs as a prerequisite for democratic equality

During the 1990s and around the turn of the millennium, the source material increasingly refers to computers as ‘information and communication technology’ (ICT), where access is presented as a particularly important prerequisite for democratic equality. Sociotechnical and educational anticipations are colored by ideas of equal and universal access to technology – and the concurrent political efforts correspond to this idea. Personal computers in the home were charged with notions of getting ‘onboard and online’, as well as becoming educated and entertained – mainly driven by a logic in which access was the key determinant.

An example of this shift toward access to computers is that, during the 1990s, the Swedish Trade Union Confederation and the Social Democratic governments subsidized purchases of hardware and software, effectively helping Swedish households to get online. The subsidy has been described as essential in terms of computerizing Swedish households. About 20–30% of Swedes took up the offer (Pettersson 2001), which can only be seen as extremely important for the computerization of society. The PC Reform was perhaps a contributing cause to Sweden surpassing the U.S. as a leading IT nation at the turn of the millennium (IDC and Times 2000). As such, for the first time in history, access was considered more significant than increased education.

Discussion and conclusion: the educational imaginaries (and alternative futures) of automation and AI

Anticipations that education could support but also counteract the digital revolution emerged in the early 1950s. Thus, and generally speaking, education can be understood as a central mechanism underpinning the dissemination of – and the preparation for – (anticipated) societal consequences of technologies, such as artificial intelligence and autonomous systems (Webb, Sellar, and Gulson 2020). In line with this argument, this article has not been about how AI and automated systems and other digital technologies are used in education, but rather about how education is used to govern technological progress (and citizens’ preparedness and attitudes towards it). Sociotechnical imaginaries of the digital are thereby a motor for civic education, and digitalization is more than a logic of technology-driven modernity; it is also a form of educational governance fundamental to training proper and propitious citizens for the future.

Across different time periods, digital development is repeatedly described as an autonomous force (utopian or dystopian), where education must be mobilized to counteract such driving forces, and to prepare and govern citizens in order to regulate them or make them conform to said development. As such, digitalization was, for a long time, formulated as a political construction of a new system, where latent resistances were reshaped into problems, and consequently problematized in order to correspond to the right solutions. The problems of automation and AI have repeatedly been rephrased as educational problems and solutions, where educating the entire citizenry (or selected risk groups) has been imagined as a recurring remedy.

The genealogical swoops made in this article show how, in the 1950s, problematizations centered around the building of a welfare state. Notions of adapting citizens were central. An increasingly automated society needed to be balanced through increased cultural refinement and sophistication. Of course, such a society would need more people educated in computer science, but since automation also demanded great responsibilities, a character-building educational foundation was seen as equally important. Education would thus provide people with the necessary abilities for a future where society and technology changed constantly. In other words, humans needed to be adapted to a new kind of society – including both new working knowledge and new soft skills for recreation, which would have beneficial synergic effects.

In a general sense, the 1970s saw more techno-sceptical discourses, and problematizations included visions of stakeholders who controlled capital as well as the means of production, and who would safeguard technology to cater to their own interests. That is, under a capitalist order, there was a perceived risk that automation and AI would be used for oppressive purposes (or even become an oppressive ‘computing power’ in itself). An important solution to the potential problems of computerization was thus to educate citizens about computers and thereby provide them with the capacity to control, govern, and, if necessary, even stop computerization.

During the 1980s, the notion of an ‘unstoppable computer development’ became more common, and anti-technological discourses were seen as reactionary. The main goal for education was to get everyone on board in this progressive development, and education was seen as increasingly compensatory – strengthening the knowledge of citizens who ran the risk of becoming ‘lost generations’. During this period, education was also emphasized as a means to increase the competitiveness of nations, and to create a compromise between employers and employees when it came to implementing computers in the workplace.

By the end of the 1990s, debates shifted toward problems associated with unequal access to computers – a problematization which lingers to this day (for example, UNESCO [2020] recently stressed the danger of ‘adding an AI gap to the digital gap’). In the 1990s, computers were increasingly framed as ‘information technology’, and access to IT was consequently regarded as access to information. The citizen was conceptualized as a user, and access to IT was regarded as a distinct asset which needed to be distributed equally amongst the population. Several projects were launched, for example by unions, but also by governments, to facilitate and equalize the ownership and use of computers.

Nowadays, a political coherence regarding digitalization seems omnipresent, regardless of differences in ideology, on both national and transnational levels (e.g., OECD 2019c). Access to digital media technologies is seen as a human right (United Nations 2016), as (again) important as access to clean water or electricity (House of Lords 2015), and as having a significant impact on the health and economic well-being of citizens (Tinder Foundation 2016). As such, the use of digital media technologies is seen as a prerequisite for enacting citizenship. Digital inclusion becomes equal to societal inclusion. Today, our digital society is controlled not only by governments, but also by multinational corporations acting in a competitive market. Such tendencies have been visible, also in relation to education policies, since the 1990s (Boretska 2015). Thus, the governance of the individual is, in a Foucauldian sense, still a ubiquitous, distributed, and invisible process, but one now involving a new mix of state and capital.

As a modern example of ‘broad computer education’, we can turn to the massive open online course Elements of AI, which has been taken by more than 600,000 individuals in 170 countries (elementsofai.com, 2021). Besides being taken by individual citizens, it is also used by several public authorities, such as the Swedish Association of Local Authorities and Regions, which deploys it as a form of continuous education for civil servants. The course is deliberately directed at the entire body of citizens. The Finnish Minister of Economic Affairs has in fact challenged his Swedish counterpart to educate more than 1% of the Swedish population in ‘getting to know artificial intelligence’ (the Swedish Minister has accepted the challenge). The coordinator of the course explains:

It’s about what is known as AI literacy and, in a more general sense, keeping civic skills up to date with the demands of digitalization. Our most ambitious dream is, together with other countries, to reach one per cent of the global population. That’s totally insane, but why not aim for it? (Pekkarinen 2019)

The important takeaway here is that current (and historical) problematizations contain conceptions of AI entangled with ideas about which knowledge citizens need, now and in the future. Changes in (the imaginaries of) the digital push changes in (the imaginaries of) education. Further, education is still imagined as important in facilitating digital inclusion and the re-skilling and upskilling of workers, and lifelong learning is more than ever seen as a prerequisite both for enabling the ‘fourth revolution’ and for steering technology in the desired direction. The 1950s were characterized by a similar utopian view, but with the difference that technologies were also construed as something that would increase welfare for all and reduce physical labor and cognitive fatigue. In the 1980s, fear of lagging behind was also prevalent, but computerization was also construed as creating a good information society that would be beneficial to everyone through increased equality. In the 1990s, there were great hopes that the internet would deepen democracy.

Today, AI is described as creating enhanced possibilities for open education and lifelong learning, but lifelong learning is also described as an obligatory precondition in an AI-fuelled future (UNESCO 2020). Thus, when machines and systems learn from us, we never become fully educated. Instead, we must be increasingly flexible and willing to learn throughout life to adapt to constantly changing needs and technologies. Utopian descriptions of digital potentials are again common, but the positive effects are, if possible, even fuzzier. As such, current discourses are dominated by a lack of friction, where everyday smoothness, efficiency, and constant connectivity replace more critical social visions of automated systems and AI. Further, as this replacement occurs within an omnipresent system of power asymmetries, it will also serve the interests of those in power.

One way of achieving this is to hide the labor of production and to present technological artifacts as powered by almost magical characteristics (Suchman 2007). Another current example concerns the fact that someone must produce the huge amounts of big data that machine learning and artificial intelligence will operate on (and learn from). The enormous efforts in AI development are underpinned by continuous access to new data – data that is often produced by digitized citizens. A digitally literate and self-advocating body of citizens is an ideal data source. The current focus of educational efforts is to encourage the ‘digitally excluded’ to start using digital technologies. The key issue is not primarily to adjust people to machines (1950s), or to adjust machines to people (1970s), but to incorporate people into a ubiquitous algorithmic system society.

To summarize, automation, and later on, AI, has always been associated with educational imaginaries. Educational imaginaries have represented a form of governance in between education and technology – ways of coordinating the relationship between citizens and an increasingly technological society. Education has repeatedly been set up as one of, if not the, most appropriate and effective means for adjusting the citizen to the effects of computerization, promoting computer literacy, and, later on, fostering the completely quantified citizen. Education has been imagined as particularly suitable for getting people to use, accept, evaluate, and influence digital media technologies, but also to evaluate and harness the risks associated with automation and AI. The need for new knowledge drives changes in education, but education is also a way to direct technical development in desired directions. Furthermore, education is also a way to counteract the unwanted effects of technology. Societal organization, including the control of its citizens, is thereby partly upheld through educational imaginaries and, later on, actual education about digital technologies and their potential effects. By studying educational imaginaries, political ambitions based on dreams, hopes, and anticipated risks can be made visible.

To conclude, I argue that structural and critical studies of educational policies and efforts are important to the study of technological anticipations in two interconnected ways, by showing how education can be both a means and an end in the diffusion of technology. As many feminist and critical race scholars have pointed out (e.g., Noble 2018), AI systems tend to be biased and to reinforce racism, sexism, and social injustice in general. Despite this, the solution is presented as the inclusion of underserved communities and more different groups of people in tech and as data (e.g., D'Ignazio and Klein 2020).

More diversity is described as better. But if data is the new oil, and the extraction of data from people creates profit for a few while further marginalizing the many, there is a need to critically scrutinize ‘digital inclusion’. Should we rethink digital up-skilling efforts and instead work towards invisibility and disruption, especially for marginalized groups? In a time when everything is imagined to change at an ever faster pace, it is important to historicize anticipations of the future in order to put contemporary assumptions under new retrospective, and anticipatory, scrutiny. That is, to explore and explicate how educational policies and efforts provide a map of anticipated threats and corresponding solutions for desired (automated) futures, and concurrently act to produce this future.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Bacchi, Carol. 2012. “Why Study Problematizations? Making Politics Visible.” Open Journal of Political Science 2: 1.
  • Ball, Stephen J., ed. 2013. Foucault and Education: Disciplines and Knowledge. London: Routledge.
  • Bastani, Aaron. 2019. Fully Automated Luxury Communism. A Manifesto. London: Verso.
  • Blomkvist, Pär, and Arne Kaijser, eds. 1998. Den konstruerade världen: tekniska system i historiskt perspektiv. Eslöv: B. Östlings bokförl. Symposion.
  • Boretska, Viktoria. 2015. “Accelerated Westernization in Post-Soviet Russia.” In Trajectories in the Development of Modern School Systems: Between the National and the Global, edited by Daniel Tröhler and Thomas Lenz, 241–255. New York: Routledge.
  • Bowman, Brett. 2007. “Foucault’s ‘Philosophy of the Event’: Genealogical Method and the Deployment of the Abnormal.” In Foucault, Psychology and the Analytics of Power. Critical Theory and Practice in Psychology and the Human Sciences. London: Palgrave Macmillan. doi:10.1057/9780230592322_5.
  • British Department of Scientific and Industrial Research. 1956. Automation: A Report on the Technical Trends and Their Impact on Management and Labour. London: Her Majesty’s Stationery Office.
  • Carlsson, Jan, Pelle Ehn, Barbro Erlander, Maja-Lisa Perby, and Åke Sandberg. 1978. “Planning and Control from the Perspective of Labour: A Short Presentation of the DEMOS Project.” Accounting, Organizations and Society 3 (3/4): 249–260.
  • Commission for Informatics Policy. 1984. Datoranvändning i hushållen. Rapport från Datadelegationen. Stockholm: Liber.
  • Commission for Informatics Policy. 1985. Bred datautbildning, 1985:50. Stockholm: Liber Tryck.
  • Computer People for Peace. 1973. “Newsletter of Computer People for Peace.” Interrupt, March 20.
  • Council of the European Union. 2006. “Recommendation of the European Parliament and of the Council. On Key Competences for Lifelong Learning.” Accessed September 1, 2020. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=uriserv:OJ.C_.2018.189.01.0001.01.ENG.
  • D'Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge: The MIT Press.
  • Dean, Mitchell. 1994. Critical and Effective Histories: Foucault's Methods and Historical Sociology. London: Routledge.
  • Dobinson, Charles Henry. 1957. “The Impact of Automation on Education.” International Review of Education 3 (4): 385–398.
  • Ehn, Pelle, Barbro Erlander, and Rune Karlsson. 1978. Vi vägrar låta detaljstyra oss!: rapport från samarbetet mellan Statsanställdas förbund, Avdelning 1050, och Demos-projektet. Stockholm: Arbetslivscentrum.
  • Emanuel, Martin, ed. 2009. Folkbildning kring datorn 1978–85: transkript av ett vittnesseminarium vid Tekniska museet i Stockholm den 9 oktober 2008. Stockholm: Avdelningen för teknik- och vetenskapshistoria, Kungliga Tekniska högskolan. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10286.
  • Esping-Andersen, Gøsta. 1990. The Three Worlds of Welfare Capitalism. Cambridge: Polity.
  • European Commission. 2016. A New Skills Agenda for Europe. Working Together to Strengthen Human Capital, Employability and Competitiveness, COM(2016) 381 Final. Brussels: European Commission.
  • Eynon, Rebecca, and Erin Young. 2021. “Methodology, Legend, and Rhetoric: The Constructions of AI by Academia, Industry, and Policy Groups for Lifelong Learning.” Science, Technology, & Human Values 46 (1): 166–191. doi:10.1177/0162243920906475.
  • Flichy, Patrice. 2007. Internet Imaginaire. Cambridge: MIT Press.
  • Foucault, Michel. 1977. Language, Counter-Memory, Practice: Selected Essays and Interviews. Ithaca, NY: Cornell University Press.
  • Foucault, Michel. 1984a. “Nietzsche, Genealogy, History.” In The Foucault Reader: An Introduction to Foucault’s Thoughts, edited by Paul Rabinow, 76–99. London: Penguin Books.
  • Foucault, Michel. 1984b. “Polemics, Politics and Problematizations.” In Essential Works of Foucault, Vol. 1: Ethics, edited by P. Rabinow, 381–390. New York: New Press.
  • Foucault, Michel. 1985. Discourse and Truth: The Problematization of Parrhesia. Evanston: Northwestern University.
  • Foucault, Michel. 1991. “Questions of Method.” In The Foucault Effect: Studies in Governmentality: With Two Lectures by and an Interview with Michel Foucault, edited by G. Burchell, C. Gordon, and P. Miller, 73–86. Chicago: University of Chicago Press.
  • Gale, Trevor. 2001. “Critical Policy Sociology: Historiography, Archaeology and Genealogy as Methods of Policy Analysis.” Journal of Education Policy 16 (5): 379–393. doi:10.1080/02680930110071002.
  • Gazzard, Alison. 2016. Now the Chips are Down: The BBC Micro. Cambridge: MIT Press.
  • Glimell, Hans. 1989. Återerövra datapolitiken!: en rapport om staten och informationsteknologin under fyra decennier. Linköping: Univ., Tema teknik och social förändring.
  • Government Bill. 1984. Om datapolitik, prop 1984/85:220. Stockholm: Regeringen.
  • Henriksson, Sten. 2005. “When Computer Became of Interest in Politics.” History of Nordic Computing 174: 413–423.
  • House of Lords. 2015. “Make or Break: The UK’s Digital Future.” In Report of Session 2014–15, edited by Select Committee on Digital Skills. London.
  • Howard, Robert. 1985. “Utopia: Where Workers Craft new Technology.” Technology Review: MIT’s Magazine of Innovation 88 (3): 43–49.
  • IDC, and World Times. 2000. “Sweden Edges the United States Out of Top Position in the Information Revolution.” Accessed September 12. https://www.idg.co.uk/news/sweden-edges-the-united-states-out-of-top-position-in-the-information-revolution-2000-idcworld-times-information-society-index-reveals.
  • Idenburg, Petrus Johannes. 1957. “The Call for Men.” International Review of Education 3 (3/4): 411–422.
  • Ilshammar, Lars. 2007. “When Computers Became Dangerous: The Swedish Computer Discourse of the 1960s.” Human IT: Journal for Information Technology Studies as a Human Science 9 (1): 6–37.
  • Ivre, Ivar. 1956. “Den förändrade behovsbilden.” In Människan i dagens och morgondagens samhälle, edited by The Workers’ Educational Association, 219–228. Stockholm: Tidens förlag.
  • Jasanoff, Sheila. 2015. “Future Imperfect: Science, Technology, and the Imaginations of Modernity.” In Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, edited by Sheila Jasanoff and Sang-Hyun Kim, 1–33. Chicago, IL: University of Chicago Press.
  • Kaiserfeld, Thomas. 1996. “Computerizing the Swedish Welfare State: The Middle Way of Technological Success and Failure.” Technology and Culture 37 (2): 249–279.
  • Knudsen, Tim, and Bo Rothstein. 1994. “State Building in Scandinavia.” Comparative Politics 26 (2): 203–220.
  • Laginder, Ann-Marie. 1989. Framtidsbilder i offentligt utredande: teknik, utbildning och samhällsutveckling. Linköping: Linköping University, Tema.
  • Laginder, Ann-Marie, Henrik Nordvall, and Jim Crowther, eds. 2013. Popular Education, Power and Democracy: Swedish Experiences and Contributions. Leicester: Niace.
  • Licklider, Joseph Carl Robnett. 1962. “The Computer in the University.” In Computers and the World of the Future, edited by Martin Greenberger, 181–217. Cambridge: MIT Press.
  • Low, Archibald Montgomery. 1950. It’s Bound to Happen. London: Burke.
  • Miller, Riel, ed. 2018. Transforming the Future: Anticipation in the 21st Century. Paris/New York: UNESCO/Routledge.
  • Minsky, Marvin. 1962. “The Computer in the University.” In Computers and the World of the Future, edited by Martin Greenberger, 211–217. Cambridge: MIT Press.
  • Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
  • OECD. 2019a. OECD Future of Education and Skills 2030: OECD Learning Compass 2030. Paris: OECD.
  • OECD. 2019b. OECD Skills Outlook 2019: Thriving in a Digital World. Paris: OECD.
  • OECD. 2019c. Artificial Intelligence in Society. Paris: OECD.
  • Pastuhov, Annika. 2018. “The Ideals and Practices of Citizenship in Nordic Study Circles.” In The Palgrave International Handbook on Adult and Lifelong Education and Learning, edited by M. Milana, S. Webb, J. Holford, R. Waller, and P. Jarvis, 797–815. London: Palgrave Macmillan.
  • Pekkarinen, Aino. 2019. “Elements of AI Online Course to Be Introduced in Sweden in the Spring.” Accessed April 20, 2020. https://www.helsinki.fi/en/news/data-science-news/elements-of-ai-online-course-to-be-introduced-in-sweden-in-the-spring.
  • Pettersson, Carina. 2001. Datorer åt många – en studie om datorn som vardagsteknik och kunskapsverktyg. Linköping: Linköping University Press.
  • Pierre, Jon, ed. 2016. The Oxford Handbook of Swedish Politics. Oxford: Oxford University Press.
  • Pillow, Wanda. 2003. “‘Bodies Are Dangerous’: Using Feminist Genealogy as Policy Studies Methodology.” Journal of Education Policy 18 (2): 145–159. doi:10.1080/0268093022000043083.
  • Rahm, Lina. 2018. “The Ironies of Digital Citizenship: Educational Imaginaries and Digital Losers Across Three Decades.” Digital Culture & Society 4 (2): 39–62.
  • Rahm, Lina. 2021a. “Computing the Nordic Way: The Swedish Labor Movement, Computers and Educational Imaginaries from the Post-War Period to the Turn of the Millennium.” Nordic Journal of Educational History 8 (1): 31–58.
  • Rahm, Lina. 2021b. “Educational Imaginaries: Governance at the Intersection of Technology and Education.” Journal of Education Policy. doi:10.1080/02680939.2021.1970233.
  • Rankin, Joy Lisi. 2018. A People's History of Computing in the United States. Cambridge: Harvard University Press.
  • Rensfeldt, Annika Bergviken, and Catarina Player-Koro. 2020. “‘Back to the Future’: Socio-Technical Imaginaries in 50 Years of School Digitalization Curriculum Reforms.” Seminar.Net 16 (2): 20. doi:10.7577/seminar.4048.
  • Schierup, Carl-Ulrik, and Aleksandra Ålund. 2011. “The End of Swedish Exceptionalism? Citizenship, Neoliberalism and the Politics of Exclusion.” Race & Class 53 (1): 45–64.
  • Selwyn, Neil. 2017. Education and Technology: Key Issues and Debates. London: Bloomsbury Academic.
  • Sonnert, Gerhard, and Gerald James Holton. 2002. Ivory Bridges: Connecting Science and Society. Cambridge: MIT Press.
  • Söderlind, Åsa. 2009. Personlig integritet som informationspolitik. Debatt och diskussion i samband med tillkomsten av Datalag (1973:289). Borås: Valfrid.
  • Suchman, Lucy A. 2007. Human-Machine Reconfigurations. Cambridge: Cambridge University Press.
  • Swedish Social Democratic Party. 1978. Datorer på människans villkor. Ett förslag till principprogram för datapolitiken, Sveriges socialdemokratiska arbetarepartis 27:e kongress 1978. Stockholm: Socialdemokraterna.
  • Swedish Social Democratic Party and the Swedish Trade Union Confederation. 1956. Tekniken och morgondagens samhälle. Stockholm: Tiden.
  • Tinder Foundation. 2016. “Social Return on Investment Analysis for Tinder Foundation.” Accessed September 12. https://www.goodthingsfoundation.org/sites/default/files/research-publications/sroi250216formatted4_0.pdf.
  • Twining, John. 1986. “Inside Information – a Door to the Future.” International Review of Education 32 (3): 295–311.
  • UNESCO. 1985. Final Report. Fourth International Conference on Adult Education. Paris: UNESCO.
  • UNESCO. 2020. Artificial Intelligence and Inclusion. Paris: UNESCO.
  • United Nations. 2016. “Oral Revisions of 30 June.” In Promotion and Protection of All Human Rights, Civil, Political, Economic, Social and Cultural Rights, Including the Right to Development, edited by the Human Rights Council.
  • van Berkel, Niels, Lefteris Papachristos, Anastasia Giachanou, Simo Hosio, and Mikael B. Skov. 2020. “A Systematic Assessment of National Artificial Intelligence Policies: Perspectives from the Nordics and Beyond.” In Proceedings of the 11th Nordic Conference on Human-Computer Interaction (NordiCHI’20), Newcastle-under-Lyme, UK, 1–18.
  • Virilio, Paul. 2007. The Original Accident. Cambridge: Polity Press.
  • Webb, P. Taylor, Sam Sellar, and Kalervo N. Gulson. 2020. “Anticipating Education: Governing Habits, Memories and Policy-Futures.” Learning, Media and Technology 45 (3): 284–297.
  • Williamson, Ben, and Rebecca Eynon. 2020. “Historical Threads, Missing Links, and Future Directions in AI in Education.” Learning, Media and Technology 45 (3): 223–235.
  • Winch, Sarah. 2005. “Ethics, Government and Sexual Health: Insights from Foucault.” Nursing Ethics 12 (1): 177–186.
  • Winner, Langdon. 1986. The Whale and the Reactor. Chicago, IL: The University of Chicago Press.
  • World Bank. 2016. World Development Report 2016. Washington, DC: World Bank.
  • Young, Michael, and David Lambert, eds. 2014. Knowledge and the Future School: Curriculum and Social Justice. London: Bloomsbury.

Appendix. Material.