Original Articles

The PISA calendar: Temporal governance and international large-scale assessments

Pages 625-639 | Received 21 Jun 2018, Accepted 13 Feb 2020, Published online: 21 Mar 2020

Abstract

This article analyses international large-scale assessments in education from a temporal perspective. The article discusses and compares the different conceptions of time in the early international assessments conducted in the 1960s and 1970s by the IEA with the PISA studies conducted by the OECD from the year 2000 onwards. The paper argues that there has been a shift in the ways that the assessments structure time. The early IEA surveys were characterized by a relative slowness, lack of synchronization and lack of trend analyses. PISA, by contrast, is characterized by high pace, simultaneous publication of results around the world and regular and recurrent studies making the analysis of trends possible. The emergence of this new time regime, it is argued, has implications for how education is governed. At the transnational level, it strengthens the influence and importance of the OECD as a significant policy actor. At the national level, as educational discourse and policy adapt to the temporalities of the PISA calendar, two kinds of effects can be distinguished. First, there is a tendency towards searching for “retrotopian” solutions for contemporary problems. Second, there is a tendency towards acceleration and short-term planning when it comes to educational reforms.

When the Norwegian minister of education Torbjørn Røe Isaksen woke up early in the morning on December 6, 2016, he knew that it was a special day. In fact, it was “one of the most important days in my time as a minister. The PISA day. It had been written down in my calendar for at least a year” (Morgenbladet, Citation2016). The idea that there is a special day when the overall quality of a national school system is assessed says something about the peculiar nature of how education, in contrast to other societal sectors, is measured today. The economy and the environment are, to name two examples, measured on a continual basis. Education, by contrast, is measured by the PISA survey only every third year and reported on a special day, accompanied by massive media coverage. Historically this is a new phenomenon, marking a new way in which education is measured and governed. What I call the “PISA calendar” refers to a new temporal regime in which international assessments are conducted regularly and reported swiftly and simultaneously across the whole world. The aim of the article is to explore how this temporal regime of international assessments emerged and the potential influence it has over educational policy and discourse. In doing so, the ultimate goal is to shed new light on the question of why and how international assessments have achieved their central role in the global governance of education.

Merging temporal and metric studies: theoretical and methodological considerations

Theoretically this means that two perspectives – on time and on metrics – will be combined. International assessments like PISA are part of what has been called “the metric society” (Mau, Citation2019), and previous research has highlighted several aspects of the power of international assessments. The OECD and in particular its PISA survey (e.g., Jakobi & Martens, Citation2010; Meyer & Benavot, Citation2013; Sellar & Lingard, Citation2013), including how it emerged historically (Morgan, Citation2009; Tröhler, Citation2013), have been the focus of numerous studies. It has been noted that the transnational power of the OECD is paradoxical, given that it lacks any formal power to regulate countries, resulting in a “soft” power (Mahon & McBride, Citation2008). Crucial mechanisms in making such power work are the role of the media (Grey & Morris, Citation2018; Hamilton, Citation2017), rankings (Martens & Niemann, Citation2013; Steiner-Khamsi, Citation2003) and crisis rhetoric (Landahl, Citation2017; Takayama, Citation2008; Waldow, Citation2009). Of importance is also the fact that the OECD has gradually expanded its range of influence by introducing studies such as PISA for Development (Addey, Citation2017), PISA for Schools (Lingard & Lewis, Citation2017), TALIS and PIAAC (Volante, Fazio, & Ritzen, Citation2017). These studies demonstrate how international assessments have become influential by a logic of spatial expansion. While this research has successfully described the ways in which PISA has increased its influence, I will argue that a temporal perspective can contribute to a fuller understanding of the mechanisms behind the success of international assessments. Previous research has a tendency to see numbers as influential in themselves. Clearly there is a persuasive quality to numbers, but as Merry (Citation2016) points out in The Seductions of Quantification, not all numbers are equally influential.
There is today a dense ecology of indicators, as international organizations, governments, NGOs, academics and UN agencies continually generate new indicators. While a few of these indicators become successful, many of them actually fail to create global interest and influence (Merry, Citation2016, p. 16; see also Beer, Citation2016, ch. 3). This paper attempts to demonstrate that studying the temporality of assessments can add to our understanding of how they become influential.

In introducing a temporal perspective on international assessments, the article draws inspiration from sociological and historical research on time as an instrument of power. Calendars are, as John Durham Peters expresses it, “preeminent signals of identity, and instruments of institutional control. Whoever sets the time rules the society: this truism holds for ancient priests and astrologers and for admirals, physicists, and programmers today” (Peters, Citation2015, p. 189). The relation between time and social power has been analyzed from numerous perspectives. Time plays an important part in a modern work ethic, as demonstrated by Thompson (Citation1967) and Weber (Citation2001). Time is also a factor that structures society, with the aid of clocks and calendars (Zerubavel, Citation1981). As such it can not only regulate individuals but also disconnect and connect various parts of the world. A global standardization of time (Ogle, Citation2015) is an example of how previously disconnected parts of the world can become synchronized, mirroring global inequalities as certain concepts of time become dominant (Birth, Citation2016). Research has also demonstrated how the relative importance and meanings of the past (Zerubavel, Citation2003), the present (Hartog, Citation2015) and the future (Koselleck, Citation1985) are constructed, and how the relative speed of time can both accelerate and decelerate depending on the social context (Rosa, Citation2013). As these studies testify, time can play an important role in how social life is regulated.

By referring to the PISA calendar I want to explore how a certain temporal regularity, the triennial cycle of the PISA test, has evolved – with a potentially large influence on the global governance of education. Similar calendars exist in other social sectors as well. Barbara J. Keys has, in a study of the internationalization of sport, noticed “the quadrennial cycle of the World Cup and Olympic Games, which have synchronized the entire world in a single sporting calendar” (Keys, Citation2006, p. 2). In a more elaborate analysis of the temporalities of “mega-events” (Olympic Games, World’s fairs), Maurice Roche (Citation2003) argues that the popularity of such events can partly be attributed to the ways in which they structure time. The temporal distance involved in their periodicity (occurring every four or five years) contributes to their status as mega-events.

When calendars are widely accepted in society, they accomplish what might be called temporal hegemony. Birth (Citation2012, p. 72) argues that calendars shape and direct cognition; they are “inscribed condensations about cycles”. In doing so, they are always selective, in that they privilege certain ways of understanding time over others.

Calendars are organized by logics that choose certain ideas over others, for example, solar years over lunar. So calendars faithfully reproduce their logic, but often at the expense of the reproduction of knowledge about other temporalities. As Elias wrote, “knowledge of calendar time … is taken for granted to the point where it escapes reflection” (1992, 6), and such lack of reflection easily leads to the unquestioned acceptance of a calendar’s logic. Calendars reflect choices about the reckoning of time, and these choices are inscribed in the artifact itself (Birth, Citation2012, p. 72f).

The fact that calendars shape and direct cognition, and choose certain ideas over others while escaping reflection, makes them a powerful yet quiet societal force. This factor has been important in shaping national and/or other group sentiments. Bourdieu (Citation2014) has connected the power of the calendar to the state, and stresses the remarkable role of clocks and calendars in ordering time collectively. He points out that state power is dependent on consent towards the ways in which time is structured. In a similar fashion, Zerubavel (Citation1981) has argued that calendars are an expression of group identity, with specific kinds of holidays for specific groups. Calendars have therefore been capable of reproducing certain cultural patterns and traditions and have in that sense played a conservative role. It is therefore quite logical that the reigning calendar has been questioned in times of major social transformation. During the French revolution there were, for example, attempts to revise the whole temporal order of society by instituting a ten-day week (Zerubavel, Citation1981). Trying to invent a new calendar was thus a way of symbolically stressing that the old society was dead, and that a new one had arrived. These examples demonstrate the close connection between time and social order. Applied to the PISA calendar, we can see how it expresses both modernizing and conservative values. While the advent of the PISA calendar made no explicit revolutionary claims, it represents a significant change in the way that time and the global governance of education intersect. And once it was established it became conservative in the sense that it was repeated every third year without undergoing any significant changes over the years – thus reproducing not only itself but also the values of what counts as important knowledge in schools.

By using the concept of the calendar, I also want to stress a fundamental feature of the logics of contemporary assessments: that they measure not only performance, but also historical processes. The periodic recurrence of PISA since the year 2000 has made it possible to quantify and compare performance not only between nations but also between different time periods. As such, PISA is part of a slightly longer, yet still remarkably short, tendency to graphically represent time. In their history of the timeline, Rosenberg and Grafton (Citation2010) argue that the timeline, with its single axis and a regular, measured distribution of dates, is a relatively recent invention, dating back to the late 18th century. It represented a new way of visualizing history that changed the way in which history was talked about and understood. Importantly, it required a simplification of history, as the timeline communicated “the uniformity, directionality, and irreversibility of historical time” (Rosenberg & Grafton, Citation2010, p. 19). Such reduction of complexity is a feature of all quantitative measurement.

Commensuration, the “transformation of qualities into quantities that share a metric” (Espeland & Sauder, Citation2007, p. 16), is a process that transforms our cognition, whether it comes in the form of prices, cost-benefit ratios, survey responses or standardized tests. Commensuration shapes “what we pay attention to, which things are connected to other things, and how we express sameness and difference” (ibid.). In terms of sameness and difference, commensuration makes the world both more similar and more varied. A ranking system of law schools, for instance, makes the difference between the ranked schools small, as they are reduced to quantitative differences. At the same time, tiny quantitative differences can have huge consequences for individual schools regarding public esteem. While studies on commensuration (Espeland & Sauder, Citation2007; Espeland & Stevens, Citation1998, Citation2008) have paid fairly limited attention to the temporal dimension, it is obvious that the observations can be applied to statistical surveys that are repeated over time. Thus, PISA and similar international assessments imply that the idea of what counts as educational performance is seen as more or less constant. In order to make comparisons of education over time, we have to assume that quality education today is (more or less) the same as it was two decades ago, and that it will remain so in the coming years. In that sense, commensuration reproduces a static idea of what schooling is, focusing on the unchanging aspects and leaving out the changing aspects of schooling. At the same time, quantitative variations in performance over time will be highlighted. The process of commensuration is, thus, a mechanism that makes historical time appear highly variable. At the same time, the variability exists only on one or a few (quantitative) dimensions, summarized as general scores in the international assessments.
The historical process becomes legible as it becomes measured and portrayed as unidimensional.

The fact that the historical process becomes quantifiable is arguably not an insignificant development. It makes international assessments influential in defining narratives of decline or narratives of development that tend to be prevalent in all kinds of policy (Stone, Citation2012). Measuring past performance not only tells us about the past, but also creates fears and hopes about the future. Forthcoming measurements become a part of the drama of social life, predictably unpredictable in their outcomes. Fourcade (Citation2016) expresses it like this:

Rankings have enough stability over time that they do indeed order the world. But they contain enough movement that they can preserve the perception of mobility, the possibility of change. We relish in the rise and fall of people, organizations and things up and down various types of scales: player statistics, the Fortune 500, college rankings, wine ratings (Fourcade, Citation2016, p. 184).

In order to make visible the way in which today’s international assessments structure time, a historical perspective that takes into account the early history of international assessments will be used. PISA has a rather short history – the first study was conducted in the year 2000 – but international assessments have a longer history dating back to the late 1950s. The article will elaborate on the temporal dimensions of international assessments by comparing two historical phases and two organizations. Early international assessments conducted in the 1960s and 1970s by the International Association for the Evaluation of Educational Achievement (IEA) will be contrasted with the later PISA studies conducted by the OECD from the year 2000 onwards. The article thus comprises two sections, and the sources used for each section differ as they are meant to illustrate different things. Both sections draw on previous and ongoing research about the history of international comparisons and the history of crisis narratives in education (Landahl, Citation2017, Citation2018a, Citation2018b, Citation2018c). In relation to those projects, many materials have been collected regarding the history of the IEA. In the archive of the IEA there are reports, memorandums, documentation from press conferences, notes about the future of the organization, etc. A selection of that material has been used to demonstrate the temporal dimensions of the early international assessments. When it comes to the advent of modern international assessments conducted by the OECD during the 21st century, a greater variety of sources has been included in order to capture the influence and nature of how time is used as a governing tool. In this case it has not been necessary to use archival sources, given that so much is available online. The sources used include OECD documents, media debates and policy documents. The exploratory nature of the study makes a broad range of empirical examples suitable.
The method aims at illustrating a new tendency and showing a variety of examples, taken from different national contexts.

Before the calendar: IEA and the first large-scale assessments

The IEA was a pioneer of international large-scale assessments. It began as a network in the 1950s at the UNESCO Institute for Education and continues to be a major actor in international large-scale assessments today, responsible for, among other studies, the Progress in International Reading Literacy Study (PIRLS) and the Trends in International Mathematics and Science Study (TIMSS). Its first pilot study was published in 1962 (Foshay, Thorndike, Hotyat, Pidgeon, & Walker, Citation1962), followed by a mathematics study (Husén, Citation1967a, Citation1967b) and the so-called six subject survey, published between 1973 and 1976 (Walker, Citation1976). In the following I will describe three temporal features of the early IEA studies: slowness, irregularity and lack of synchronization.

Slowness

The early IEA surveys were, by today’s standards, produced and reported at a fascinatingly slow pace. Several years could pass between the conducting of a study and the reporting of its results. This meant that when the reports were published, the results were already quite old. The extent to which they said anything valuable about the present educational systems could be questioned, especially in countries which had witnessed intense periods of educational reform. Finland’s reception of the mathematics study, conducted in 1964 and released in 1967, is a case in point. The country’s educational system was in a process of transition. Therefore, the data collected by the IEA was fast becoming irrelevant, as it could only be seen as an assessment of the old system, doomed to disappear. One of the participating researchers has described the reception of the studies in Finland in the following way:

The historical context of these studies did not favor active demand for the results, although some aspects, e.g. of the First Mathematics Study received public attention. But since the basic (policy) decisions concerning our forthcoming major educational reform had already been made, there was no turning back. As a consequence, there was little genuine interest in results obtained in terms of a system soon to become obsolete (Leimu, Citation2011, p. 602).

This slowness was a logical consequence of the enormous amount of data that was assembled, unprecedented in the history of educational research. The first mathematics study, published in 1967, covered 133,000 pupils, an astronomical figure compared to many normal research projects at the time. Handling such huge projects, especially on an international basis, was time-consuming. Some technological advances were, luckily, at hand. Torsten Husén, the chairman of the IEA, has described how they used a new method for optical reading of answers. A machine that was able to read 50,000 cards in an hour was used (Husén, Citation1977, p. 91f). The use of a computer was also important in making the survey possible (Postlethwaithe, Citation1966, p. 360). Correspondence and various internal documents further reveal that the organization made attempts to accelerate its work. The researchers themselves felt that they had to rush through all the data in order to meet the demands of funders. The six subject survey took six years to complete, but was still conducted “under a tight timetable in order to enable us to complete our studies within the initial budget” (Husén, Citation1976).

Irregularity

A second feature of the early IEA studies was that they were not frequently and regularly repeated. Many studies were conducted only once. Others were repeated, but at long intervals: the first mathematics study in 1964 was followed by the second in 1980, and the first science study in 1970/1971 was followed by a second in 1983/1984. The irregularity of the surveys, together with their slowness, meant that it was hard to track changes in school performance. From an international perspective, it was also difficult to track changes in the relative standing of different countries, due to changes in which countries participated. Twenty-six countries participated in the second science study and 19 in the first – but only 10 countries participated in both (Härnqvist, Citation1987, p. 129).

Lack of synchronization

A third feature of the early IEA studies is a lack of synchronization: national and international results were not always published simultaneously. For example, the Swedish results of the second mathematics study (SIMS) were published in 1983, whereas the international comparisons only became available in 1985. Similarly, the Swedish results of the second science study (SISS) were published in 1986, and the international results in 1988. This lack of synchronization between the release of the national and the international reports made it difficult to create public interest. In both cases media attention was very limited at first. At least in the Swedish case, it was only after the release of the international results that public interest was generated (Marklund, Citation1989, pp. 40, 42).

Survival of the fastest: Intimations of a new temporal regime

Gradually the IEA went through a process of change. In the second half of the 1980s the question of the IEA’s future was seriously discussed in the organization. There was a clear sense of being at a crossroads. The argument was that the organization either had to change or it would disappear. The then chairman, Alan Purves, initiated a debate over the future of the IEA in 1986, in which he made the case for repeat studies.1

Rather than invent ‘new studies’, I.E.A. should devote the bulk of its efforts to the establishment of a cycle of subject surveys beginning with a survey of mother tongue in 1990, one in mathematics in 1993, one in science in 1996, and one in foreign languages and cultural studies in 1999 and repeat the cycle in 2002 (Purves, Citation1987, p. 106).

A more radical alternative would have been to dissolve the organization. The context for this was the increased competition in the field of international comparisons of education. In 1988, the OECD started its Indicators of Education Systems (INES) program, later leading to the first publication of Education at a Glance, which has been described as the cradle of PISA (Tröhler, Citation2013). The same year, Purves commented that the context for international assessments had changed drastically. The demand for the kind of scientific knowledge that the IEA produced was about to change as other agencies started doing similar, but more time-effective, studies:

If IEA is to dissolve, it can do so without shame. It has whetted the world’s appetite for careful international comparative data and research. Yet IEA is no longer the only player in the educational indicators game, and although its rivals seem to pale by comparison, they promise a swifter delivery of rough indicators, which is what the policymakers and governmental funders appear to want. IEA started as the only producer of a commodity that the world was not sure it wanted. Now the world wants the commodity and sees IEA as a producer of the luxury version. Should IEA join the competition and become a data gathering organization or should it remain true to its ideals for quality research regardless of the lessening demand? Are there other alternatives? (Purves, Citation1988).

The question of the future of the IEA was further discussed the same year in an internal report, written by Clare Burstall and Inger Marklund, who weighed the merits and problems of the IEA and its possible futures. The report, produced for discussions within the General Assembly, was quite frank in its criticism of the organization. Among the listed problems, a number had to do with temporal issues:

  • IEA has acquired a reputation for glacial slowness in delivering results.

  • Studies have started without guaranteed international funding or personnel.

  • There is a lack of a firm sanction system, applied to those who do not honour agreements or who are not capable of adhering to the time schedule of the project.

  • National reports more often than not have had to be published without international data.

  • There have been no guarantees for the completion of a study.

  • IEA has not foreseen the demands put on national centres in the closing phases of the international parts of studies given that the national studes [sic] may have been completed years before and the original personnel dispersed.

  • The difference in competence-level between countries and systems make the international work very vulnerable to criticism particularly with regard to the delays incurred and the quality of raw data (Burstall & Marklund, Citation1988, p. 2f).

Burstall and Marklund recommended several changes, three of which had to do with time. First, they thought that all national and international results should be available at the same time and in a much shorter time-span than before. Second, communication with sponsors should be more effective and less academic, and sponsors must be certain that the IEA delivers competent and relevant results in “the shortest time possible” (Burstall & Marklund, Citation1988, p. 5). Third, dissemination must be quicker, in contrast to what the organization was used to: “The international IEA volumes traditionally take years to appears [sic], if indeed they ever do” (Burstall & Marklund, Citation1988, p. 5).

The discussions reveal that the IEA was struggling with its identity in times of increased competition with new agencies. The traditional ways of conducting research – slow academic studies, sometimes with limited public appeal – were coming to an end. A new way of ordering time was, it seemed, necessary in order to increase the organization’s public relevance and to secure its survival. Eventually, this attitude would materialize in the invention of two new international assessments, TIMSS and PIRLS, conducted every fourth and every fifth year, respectively. However, the organization that has dominated the world of international assessments lately has no doubt been the OECD, especially, but not only, through its PISA survey, first conducted in 2000. We will therefore now turn to the most successful organization in the field of international assessments to explore the relationship between time and the global governance of education in the 21st century.

The PISA calendar is born: OECD becomes a pacesetter in educational policy

In contrast to the early IEA surveys, today’s international large-scale assessments are characterized by swiftness, regularity and synchronization. Swiftness means that results are reported the year after the data is collected. Regularity means that the studies are repeated regularly (in the case of PISA, every third year), and synchronization means that all national results are published at the same time, thus creating significant opportunities for cross-national comparisons. In what follows I will elaborate on the implications of this new way of ordering time. Two aspects will be highlighted. First, I will discuss how the OECD’s power rests on its use of time. Second, I will discuss two examples of how educational policy in individual nations can be affected by the way that PISA structures time.

OECD’s temporal power

We live in a world saturated by numbers, where some indicators fail to achieve global influence and interest, and others succeed (Merry, Citation2016, pp. 16–19). PISA is probably one of the most influential statistical surveys in the world today. One explanation for the success of PISA is the way in which the results are presented to the public, resulting in considerable media attention. It is common to explain the influence of PISA and other international assessments by the ways in which data is summarized in rankings: “The rise in influence of ILSAs can be attributed to their rapid global expansion, and the policy ‘shock’ and controversy that has been generated from international league tables” (Maddox, Citation2018, p. 1). However, in order to understand why rankings have an influence, it is also important to understand their temporal underpinnings. What role does the temporal rhythm play in making rankings influential?

One effect of the triennial design of PISA is that it transforms gradual changes of educational performance into regularly occurring events. What could be a dry collection of numbers becomes a major media event, possibly including bad news like PISA-shocks. The fact that there is a very specific, relatively exclusive, date for the release of data – “The PISA day” – creates the possibilities for a statistical happening, a numerical media event, a festive consumption of statistical tables.

The significance of such an event is that it introduces potentially dramatic changes in a system that in reality changes only gradually. Educational institutions do not, in most cases, crash in the way that, for example, the capitalist economy recurrently does. Therefore, the ways in which they are measured are of great importance for the perception of their development over time. The interval of three years makes it possible to discuss changes as a kind of educational crash. This eventification has therefore contributed to our perception of problems in the field of education, since normal everyday problems are more difficult to grasp than extraordinary ones, regardless of the actual severity of the problems. As ‘t Hart puts it in a study of crisis perception: “the rare but highly vivid event of a plane crash sticks in human memory, whereas the highly frequent and routinized occurrence of road accidents does not produce this evocation” (‘t Hart, Citation1993, p. 45).

It is not only the regular occurrence of the PISA shock that is a logical consequence of the temporal rhythm of PISA. The same applies to the opposite, and understudied, phenomenon of good news, what might be called PISA relief. While PISA has been famous for creating fear and panic, its power also rests on the opposite ability to deliver good news about a nation’s performance. The graphs of historical development, and the speeches and blog-posts by Andreas Schleicher, present not only the top performers. An equally frequent element is the message of upward mobility. This is an important aspect that makes PISA’s message even more appealing. PISA does not only “sell” despair; it also offers hope. The constantly shifting hierarchies demonstrated by the league tables of different years make it possible for the OECD to govern by optimism. As such, the message of the OECD becomes hard to ignore. Andreas Schleicher, the Director of the Education and Skills Directorate at the OECD, has himself commented on how the creation of the trend analyses made policy makers more eager to listen to the PISA results. Looking back at the history of PISA he comments:

With each successive PISA assessment, the results attracted more attention and triggered more discussion. The controversy reached a climax with the release of the results from the 2006 assessment in December 2007, when we examined not just where countries stood at that moment in time, but, with the availability of three data points, how things had changed since the PISA test was first conducted in 2000.

It is easy to explain why one country might not perform as well as another; it is much harder for policy makers to acknowledge that things have not improved, or that improvement has been slower than elsewhere. Inevitably, political pressures ensued (Schleicher, Citation2018, p. 20 f).

Another important aspect that must be stressed is the way in which the OECD makes its data survive longer than a day. The triennial rhythm might be marked by the spectacle of the “PISA day”, with its press conferences, webinars, press releases, blog posts, news reports and so on (Hamilton, Citation2017), but it lasts well beyond the boundaries of the actual day. The results continue to be debated in OECD reports, national reports, newspapers and political debate. In addition, the results are regularly discussed in advance. There might be speculation as to whether the results will go up or down for a specific country. For the PISA brand this means that media attention is created even before the actual release of the results. For example, five days before the release of PISA 2015, the Guardian wrote: “After a poor showing in the 2013 international tables of 15-year-olds, and strenuous efforts to improve, Wales awaits its new PISA scores with bated breath” (The Guardian, Citation2016). In France, the minister of education contributed to the public interest in PISA 2012 by making alarmist predictions about the results two months in advance of the release of the study (Hugonnier, Citation2017, p. 10). The anguish with which some countries await the results of PISA, fearing a standstill, a slight decline or even catastrophic results, says something about the peculiar way in which educational bad news is ordered in time. The catastrophe of a bad PISA result is different from many other social catastrophes. Environmental disasters, economic meltdowns and terrorist attacks never happen on a specific, predetermined date. The set date of the potential PISA catastrophe makes it even more relevant to relate to PISA in advance, to engage in what media theorist Richard Grusin (Citation2010) labeled the premediation of the future. According to Grusin, premediation has been around for a long time but has expanded since 9/11:
“Premediation works to prevent citizens of the global mediasphere from experiencing again the kind of systemic or traumatic shock produced by the events of 9/11 by perpetuating an almost constant, low level of fear or anxiety about another terrorist attack” (Grusin, Citation2010, p. 2).

The above-mentioned examples demonstrate that the PISA calendar is an instrument that conveys the image of a world in flux. It is a world of decline and development, of rising and falling stars – an unpredictable and changing hierarchy of educational systems. This is the fundamental premise that the three-year cycle is based on – there would be no point in conducting PISA regularly unless some more or less significant changes occurred on every occasion. Interestingly, this image of a world in flux stands in sharp contrast to the image of PISA itself. PISA is, due to its steady temporal rhythm, characterized by stability and predictability. The world of educational systems might be fluid, but PISA itself returns predictably on a set date, solid as a rock, using the same instruments as in preceding years. This stability is the result of the new ways in which ILSAs are conducted. The early history of international assessments had no such stability. The early IEA studies were characterized by constant innovation, resulting in problems with continuity, both in terms of expertise and in terms of international partnerships and funding. Constantly creating new, pioneering studies meant that all the arduous tasks of a major international survey had to be invented again and again: agreeing on what surveys to conduct, constructing test items, making pilot studies, securing funding and so on. In contrast to this, the PISA calendar presupposes a much higher degree of continuity, in order to make the different surveys comparable. This decreases tensions in the organization and makes it easier to secure financial stability. The participating countries do not have to be convinced of the merits of a particular study, since it is basically the same as it used to be, and once a country has entered a study it tends to stay in it.

This continuity of PISA can contribute to its influence by creating a constant quest for more data among participating nations. The fact that the studies are regularly repeated means that the current information will soon be outdated and will have to be replaced with new data. The triennial rhythm comes with the message that any knowledge about an educational system is only a temporary assessment, soon to be revised again. This was exemplified during the Swedish press conference on the results of PISA 2015. The representative of the Swedish National Agency for Education, Mikael Halápi, presented the results as an indication that the long-term trend of decline in Swedish education had been broken. But whether it was also the beginning of an upward trend was too early to say, he added. That, he argued, could only be established in 2019, when PISA 2018 was to be released (Skolverket, Citation2016). Thus, already on the PISA day in 2016, the next round of PISA data was mentioned. Such an appetite for more data is of course stimulated by the awareness that more data is continually produced. The insecurity about whether a country’s development is going in the right or wrong direction will, from this perspective, soon be silenced, albeit only temporarily.

Time is, in other words, an important factor in accounting for the power of the OECD. Taking the temporal factor into account can therefore shed new light on the sources of the organization’s influence. The fact that it exercises a kind of soft power has been noted in previous research (Mahon & McBride, Citation2008). But the reasons why this particular organization is successful in exercising that particular power can arguably not be fully understood without the temporal factor. And if time is an important source of power, it is perfectly logical that attempts at minimizing the role of PISA can be formulated in terms of time. In an open letter by academics to Andreas Schleicher, published in 2014, the PISA tests were criticized, and one of the trademarks of PISA, the regular cycles, was attacked. The “testing juggernaut” should be slowed down, to “gain time to discuss the issues mentioned here”, and the OECD should “consider skipping the next PISA cycle” (Meyer, Zahedi, & Signatories, Citation2014).

To more fully understand the possible implications of the OECD’s temporal power, the question arises of how the PISA calendar can influence educational policy at the national level. The question of the reception of international assessments is of course a complicated one, and it has repeatedly been stressed that nations differ in how much value they place on PISA and other assessments (e.g., Martens & Niemann, Citation2013; Wiseman, Citation2013). What I offer here are two examples of potential policy responses, both of which have a temporal character.

Accelerating aspirations: Setting goals in the near future

The fact that the PISA survey is repeated every third year means that the next opportunity to assess a school system is always close at hand. This has the potential to affect the speed of educational policy by making short-term goals more attractive. This applies, of course, also to the repeated studies that the IEA conducts today (TIMSS and PIRLS). The release of the 2011 TIMSS results was, for example, described by the U.S. Secretary of Education Arne Duncan as “unacceptable,” and he argued that they “underscore the urgency of accelerating achievement in secondary school and the need to close large and persistent achievement gaps” (Carnoy, Citation2015, p. 1). The same message was sent by the OECD following the bad results for Sweden in PISA 2012. Between 2000 and 2012 Sweden saw a radical decline in performance in PISA, steeper than that of any other participating country. Following PISA 2012, the Swedish Ministry for Education and Research invited the OECD to conduct a review of the quality of school education in Sweden. The first part of the report was titled “A school system in need of urgent change” (OECD, Citation2015b, pp. 11–62). Upon the launch of the review, Andreas Schleicher referred to the “urgent need for reform” in Sweden (OECD, Citation2015a).

An example of how short-term goals can be formulated is found in a governmental report written by the School Commission in Sweden (SOU, Citation2016:38). The report describes how other countries have formulated their goals for future educational performance, and mentions that Estonia and Ontario have used increased PISA results as goals for the future development of their schools (p. 71f). The report also suggests that Sweden should adopt similar targets, and specifies the goals in terms of what percentage of pupils should reach a certain level in TIMSS, PISA and PIRLS during the period 2018–2024 (pp. 88–89). This is a kind of strategy that arguably makes it important to act fast in terms of policy. Such goals illustrate how the OECD has acquired an influence over the temporalities of national educational policy. The tendency is consistent with other observations concerning the acceleration of political and social life (Peck & Theodore, Citation2015; Rosa, Citation2013). However, this tendency towards acceleration does not necessarily involve changes towards a new, hitherto unknown social condition. As we will see, PISA can also contribute to nostalgic or retrotopian educational policy.

Retrotopian solutions: Borrowing from the past

PISA is coming of age. As it grows older, its role is gradually changing. It has become increasingly obvious that PISA is more than a snapshot of present-day education. It also attempts to depict long-term trends, by now dating back almost two decades. The role of these trends has not been given much attention in previous research.

The ambition to describe trends by repeating studies has made PISA into an instrument that measures the past. PISA does not merely quantify performance, it also quantifies past performances, summarized as trends in graphs. These graphs are frequently used to discuss the status of national school systems, and have enabled discussions of the changing quality of education in a country without explicit reference to other countries. In cases of decline, they give rise to discussions about the factors that might explain the change in performance: “Scotland’s schools were once among the best in the world. What went wrong?” (Economist, Citation2016). Historical, rather than just cross-national, comparisons are thus enabled. Sweden is another example of this tendency. The country suffered a severe decline in PISA between 2000 and 2012, which has given rise to numerous comments in books and newspaper articles about the reasons for the fall of Swedish education (e.g., Henrekson, Citation2017). Even Andreas Schleicher has commented on what he described as a “soul” that had been “lost”:

During my days as a university student, I used to look to Sweden as the gold standard for education. A country which was providing high quality and innovative education to children across social ranks, and close to making lifelong learning a reality for all. […] But not long after the turn of the 21st century, the Swedish school system seems to have lost its soul. Schools began to compete no longer just with superior learning outcomes, but by offering their students shiny buildings in shopping centres, or a driving license instead of better teaching. And while teachers were giving their students better marks each year, international comparisons portrayed a steady decline in student performance. Indeed, no other country taking part in PISA has seen a steeper fall (Schleicher, Citation2015).

The label “lost its soul” proved powerful in Swedish educational discourse. The director-general of the Swedish National Agency for Education, Anna Ekström, used the same words at a press conference the ensuing year, as reported by the press (Aftonbladet, Citation2016-05-17; Dagens Nyheter, Citation2016-05-16), and an interview with education minister Gustav Fridolin in 2016 came with the headline: “Fridolin agrees with critical PISA-director: the school soul has been lost” (Svenska Dagbladet, Citation2016-11-28).

So where do you go to search for your lost soul? A cultural journalist with a great interest in educational issues, Jenny Maria Nilsson, picked up Schleicher’s concept of the lost soul and argued that travels in space were less relevant than travels in time. Rather than desperately looking at other systems, such as Hong Kong, Finland or Canada, she recommended lessons from the national past. The best way to improve the Swedish school was to “dig where we stand and compare the Swedish school with the Swedish school in the past.” Such a comparison would reveal a lost school system that used to teach knowledge to many, regardless of social class, a system that worked for a long time. “It was only at the beginning of the 21st century that the Swedish school lost its soul, as Andreas Schleicher, director of educational issues at OECD, expressed it” (Svenska Dagbladet, Citation2016-06-16).

Previous research has rightly pointed out that international large-scale assessments stimulate externalization in the form of educational borrowing from other nations (e.g., Steiner-Khamsi & Waldow, Citation2012). However, as the example above illustrates, PISA can also stimulate a different kind of educational borrowing: from the national past. This tendency can be seen as an example of what has been labeled “externalization to tradition” (Schriewer & Martinez, Citation2004, p. 31f). It is likely, although systematic studies are needed to establish whether this is a general tendency, that PISA can stimulate countries to return to their own history when they try to develop their educational policy. An early example of this is Japan, which in the second PISA study saw its results decline. Since this decline appeared in the wake of recent educational reforms (for example, the abolition of the six-day school week), the results were interpreted as a sign that traditional Japanese education was better (Takayama, Citation2008). This retrotopian or nostalgic attitude of looking back at the past, a general tendency in today’s society (Bauman, Citation2017), is thus potentially stimulated by the advent of the PISA calendar and its pretensions to measure the past.

Temporal governance

The paper has described a shift in the ways that international assessments of education structure time. The early IEA surveys were characterized by a relative slowness, lack of synchronization and lack of trend analyses. Years passed before studies were reported (making the results old already at publication date), national and international reports were published at different times, and the studies were seldom, if ever, repeated. What I have called the PISA calendar is, by contrast, characterized by high pace, simultaneous publication of results around the world and regular, recurrent studies making the analysis and comparison of trends possible.

The emergence of this new temporal regime has implications for the global governance of education. Two themes have emerged: 1) the power of the OECD and 2) the nature of policy aspirations. The PISA calendar has been important in establishing the OECD as an important policy actor in education. Its somewhat puzzling source of dominance, often described as a soft power, can be more fully understood when the temporal dimension is taken into account. The triennial rhythm of PISA has vastly contributed to its influence. Further, it has been argued that the policy aspirations of individual countries tend to change as they adapt to the temporal values of the PISA calendar. These changes go in two directions. On the one hand, the PISA calendar stimulates a sense of urgency, an idea that education has to change swiftly. On the other hand, the PISA calendar can encourage a retrotopian (Bauman, Citation2017), nostalgic vision, resulting in what might be called historically based self-borrowing – at least in countries which have experienced a decline in performance. In some countries these two tendencies combined might create a somewhat paradoxical attitude to educational policy, resembling what others have called a frenetic standstill (Rosa, Citation2013) or a society in which everything stands still at an enormous speed (Eriksen, Citation2001). In such cases educational discourse might feature constant proposals for urgent and drastic changes in educational policy with limited practical effect on educational practice.

The particular way in which ILSAs today exert an influence over educational policy and discourse rests, in other words, on a transformed relation to time. That shift can be described in terms of where instability, flux and precariousness are located. The early history of ILSAs is the story of an international organization in a state of constant flux. The IEA often lacked funding and its future was constantly uncertain. That very instability of the organization precluded any attempts, or even ambitions, to conduct regular studies of school performance, not to mention trend analyses. The IEA itself was in a state of crisis, rather than the individual countries. In recent decades that relationship has been reversed. The main international organizations that govern ILSAs, the OECD, but also the IEA, have achieved more organizational and financial stability, and have become capable of conducting regular tests. It is this very stability that has made it possible to regularly describe the instability of individual countries’ educational results. Countries are nowadays constantly reminded that comparison is good, complacency is bad, and that the relative position of any nation in the international hierarchy can change swiftly. That reminder is arguably an important source of the power of ILSAs. Metric power, as Beer has emphasized, depends on the ability to instill a sense of insecurity: “Metric power works through the uncertainty it produces” (Beer, Citation2016, p. 212). International assessments assumed that role when they achieved the stability that is necessary to detect instability – the secure position from which insecurity can be made visible.

The way in which PISA structures time was not introduced with revolutionary rhetoric, nor has it faced much resistance or scholarly debate. Perhaps this is indicative of the ways in which time tends to shape social life. Time is as influential as it is invisible, or as Birth (Citation2016, p. 73) puts it: “There are relatively few political debates on time. Yet, there are few ideas that pervade almost everything to the same extent as temporal ideas.” Engaging more fully with issues of the temporalities of international assessments is thus a way of exploring a discreet but potentially very powerful force in our educational systems.

Acknowledgement

An earlier version of this article was presented at the symposium “Time for Change? For a Temporal turn in the Sociological Study of Education and Europe”, organized at ECER 2017, Copenhagen.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Notes on contributors

Joakim Landahl

Joakim Landahl is a professor of education at Stockholm University, where he leads the research group ‘History of education and sociology of education.’ His current research is centered on the history of international comparisons of education, the history of educational research and the role of crisis narratives in educational discourse. He is also interested in educational policy, especially the role of education ministers, and has written a biography of a Swedish minister of education (Fridtjuv Berg).

Notes

1 The suggestion was first disseminated internally in IEA, in March 1986, when the chairman Alan Purves distributed his text “Future activities of IEA: a proposal” (Purves, 1986).

References

  • ‘t Hart, P. (1993). Symbols, rituals and power: The lost dimensions of crisis management. Journal of Contingencies and Crisis Management, 1 (1), 36–50. doi:10.1111/j.1468-5973.1993.tb00005.x
  • Addey, C. (2017). Golden relics & historical standards: How the OECD is expanding global education governance through PISA for Development. Critical Studies in Education, 58, 311–325. doi:10.1080/17508487.2017.1352006
  • Aftonbladet. (2016). Skolan förtjänar bättre än Fridolin. Published May 17, 2016. Retrieved from https://www.aftonbladet.se/ledare/ledarkronika/karinpettersson/article22830194.ab.
  • Bauman, Z. (2017). Retrotopia. Malden, MA: Polity.
  • Beer, D. (2016). Metric power. Basingstoke: Palgrave Macmillan.
  • Birth, K. K. (2016). Calendar time, cultural sensibilities, and strategies of persuasion. In A. Hom, C. McIntosh, A. McKay, & L. Stockdale (Eds.), Time, temporality and global politics. Bristol: E-International Publishing.
  • Birth, K.K. (2012). Objects of time. New York: Palgrave Macmillan US.
  • Bourdieu, P. (2014). On the state: Lectures at the Collège de France, 1989-1992. Cambridge: Polity.
  • Burstall, C., & Marklund, I. (1988). Memorandum 1988-04-14, Discussions in General Assembly concerning the future of IEA (vol. 388). IEA Archive, Hoover Institution.
  • Carnoy, M. (2015). International Test Score Comparisons and Educational Policy: A Review of the Critiques, p. 1. Retrieved from http://nepc.colorado.edu/files/pb_carnoy_international_test_scores_0.pdf.
  • Dagens Nyheter. (2016, May 16). Skolkommissionen jagar en förlorad själ. Retrieved from https://www.dn.se/ledare/signerat/johannes-aman-skolkommissionen-jagar-en-forlorad-sjal/
  • Economist. (2016, August 27). Not so bonny.
  • Eriksen, T.H. (2001). Tyranny of the moment: Fast and slow time in the information age. London: Pluto.
  • Espeland, W., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113 (1), 1–40. doi:10.1086/517897
  • Espeland, W. N., & Stevens, M. (2008). A sociology of quantification. European Journal of Sociology, 49 (3), 401–436. doi:10.1017/S0003975609000150
  • Espeland, W., & Stevens, M. (1998). Commensuration as a social process. Annual Review of Sociology, 24(1), 313–343. doi:10.1146/annurev.soc.24.1.313
  • Foshay, A.W., Thorndike, R.L., Hotyat, F., Pidgeon, D.A., & Walker, D.A. (1962). Educational achievement of thirteen-year-olds in twelve countries. Hamburg: UNESCO Institute for Education.
  • Fourcade, M. (2016). Ordinalization: Lewis A. Coser memorial award for theoretical agenda setting. Sociological Theory, 34 (3), 175–195. doi:10.1177/0735275116665876
  • Grey, S., & Morris, P. (2018). PISA: Multiple ‘truths’ and mediatised global governance. Comparative Education, 54(2), 109–131. doi:10.1080/03050068.2018.1425243
  • Grusin, R.A. (2010). Premediation: Affect and mediality after 9/11. Basingstoke: Palgrave Macmillan.
  • Guardian. (2016, December 01). Can Welsh schools make up for ‘lost decade’ to climb in PISA league? Retrieved from https://www.theguardian.com/education/2016/dec/01/welsh-schools-PISA-league-international-scores?CMP=Share_iOSApp_Other.
  • Hamilton, M. (2017). How international large-scale skills assessments engage with national actors: Mobilising networks through policy, media and public knowledge. Critical Studies in Education, 58(3), 280–294. doi:10.1080/17508487.2017.1330761
  • Härnqvist, K. (1987). The IEA revisited. Comparative Education Review, 31(1), 129–136. doi:10.1086/446659
  • Hartog, F. (2015). Regimes of historicity. New York: Columbia University Press.
  • Henrekson, M. (Ed.) (2017). Kunskapssynen och pedagogiken: Varför skolan slutade leverera och hur det kan åtgärdas. Stockholm: Dialogos.
  • Hugonnier, B. (2017). France and PISA: An unfinished dialogue. International Perspectives on Education and Society, 31, 3–15.
  • Husén, T. (1967a). International study of achievement in mathematics: A comparison of twelve countries (Vol. 1.) Stockholm: Almqvist & Wiksell.
  • Husén, T. (1967b). International study of achievement in mathematics: A comparison of twelve countries (Vol. 2). Stockholm: Almqvist & Wiksell.
  • Husén, T. (1976 no date). Letter. [From Torsten Husén to unknown recipient, probably the press] (Vol 196). IEA Archive, Hoover institution.
  • Husén, T. (1977). Jämlikhet genom utbildning? Perspektiv på utbildningsreformerna. Stockholm: Natur och kultur.
  • Jakobi, A. P., & Martens, K. (2010). Expanding and intensifying governance: The OECD in education policy. In K. Martens & A. P. Jakobi (Eds.) Mechanisms of OECD governance: International incentives for national policy-making? Oxford: Oxford University Press.
  • Keys, B. J. (2006). Globalizing sport: National rivalry and international community in the 1930s. Cambridge, Mass.: Harvard University Press.
  • Koselleck, R. (1985). Futures past: On the semantics of historical time. Cambridge, Mass.: MIT Press. doi:10.1086/ahr/92.5.1175
  • Landahl, J. (2017). Kris och jämförande pedagogik. In Landahl, J & Lundahl, C (Eds.) Bortom PISA. Internationellt jämförande pedagogik. Stockholm: Natur och kultur.
  • Landahl, J. (2018a). De-scandalisation and international assessments: The reception of IEA surveys in Sweden during the 1970s. Globalisation, Societies and Education, 16 (5), 566–576. doi:10.1080/14767724.2018.1531235
  • Landahl, J. (2018b). The rise of international data and the return of the nation: Educational competition and crisis as devices for flagging the nation (and the OECD). Conference paper, ECER, Bolzano.
  • Landahl, J. (2018c). Announcing a scientific discovery. IEA and the uses of scientific press conferences in the 1970s. Conference paper, ECER, Bolzano.
  • Leimu, K. (2011). 50 years of IEA: A personal account. In C. Papanastasiou, T. Plomp and E.C. Papanastasiou, (Eds.) IEA 1958-2008. 50 Years of Experiences and Memories (pp. 591–626). Nicosia: Cultural Center of Kykkos Monastery.
  • Lingard, B., & Lewis, S. (2017). Placing PISA and PISA for schools in two federalisms: Australia and the USA. Critical Studies in Education, 58(3), 266–279. doi:10.1080/17508487.2017.1316295
  • Maddox, B. (2018). Introduction. In B. Maddox (ed.) International large-scale assessments: Insider research perspectives. London: Bloomsbury publishing.
  • Mahon, R. & McBride, S. (Eds.) (2008). OECD and Transnational Governance. Vancouver BC: UBC Press.
  • Marklund, I. (1989). How two educational systems learned from comparative studies: The Swedish experience. In A. C. Purves (Ed.) International comparisons and educational reform. Washington, D.C.: Association for Supervision and Curriculum Development.
  • Martens, K., & Niemann, D. (2013). When do numbers count? The differential impact of the PISA rating and ranking on education policy in Germany and the US. German Politics, 22 (3), 314–332. doi:10.1080/09644008.2013.794455
  • Mau, S. (2019). The metric society. On the quantification of the social. Cambridge: Polity Press.
  • Merry, S.E. (2016). The seductions of quantification: Measuring human rights, gender violence, and sex trafficking. Chicago: The University of Chicago Press.
  • Meyer, H. D., & Benavot, A. (Eds.) (2013). PISA, power and policy: The emergence of global educational governance. Oxford: Symposium Books.
  • Meyer, H.-D., Zahedi, K. & Signatories. (2014). An open letter: To Andreas Schleicher, OECD, Paris. Retrieved from http://www.globalpolicyjournal.com/blog/05/05/2014/open-letter-andreas-schleicheroecd-paris. Accessed 2018-02-18.
  • Morgan, C. (2009). OECD Programme for international student assessment: Unraveling a knowledge network. Saarbrucken: VDM Verlag Dr. Muller.
  • Morgenbladet. (2016-12-09). Advent er ventetid. Jeg venter på at en liten gutt skal komme til verden.
  • OECD. (2015a). Retrieved from http://www.oecd.org/sweden/sweden-should-urgently-reform-its-school-system-to-improve-quality-and-equity.htm.
  • OECD. (2015b). Improving schools in Sweden. An OECD perspective. Paris: OECD.
  • Ogle, V. (2015). The global transformation of time: 1870-1950. Cambridge, Massachusetts: Harvard University Press.
  • Peck, J., & Theodore, N. (2015). Fast policy: Experimental statecraft at the thresholds of neoliberalism. Minneapolis: University of Minnesota Press.
  • Peters, J.D. (2015). The marvelous clouds: Toward a philosophy of elemental media. Chicago: The university of Chicago press.
  • Postlethwaithe, N. (1966). International Project for the Evaluation of Educational Achievement. International Review of Education, 12, 356–369.
  • Purves, A. C. (1987). I.E.A. An Agenda for the Future. International Review of Education, 33, 103–107. doi:10.1007/BF00597544
  • Purves, A. C. (1988). Chairman’s opening remarks (Vol 388). IEA Archive, Hoover Institution. June 24.
  • Purves, A.C. (1986). Future activities of IEA: A proposal (Vol. 388). IEA Archive, Hoover Institution.
  • Roche, M. (2003). Mega-events, time and modernity: On time structures in global society. Time & Society, 12(1), 99–126.
  • Rosa, H. (2013). Social acceleration: A new theory of modernity. New York: Columbia University Press.
  • Rosenberg, D., & Grafton, A. (2010). Cartographies of time: A history of the timeline. New York: Princeton Architectural Press.
  • Schleicher, A. (2018). World class: How to build a 21st-century school system. Paris: OECD Publishing.
  • Schleicher, A. (2015). “How Sweden’s school system can regain its old strength” 4/5 2015 Retrieved May 31, 2017 from http://oecdeducationtoday.blogspot.se/2015/05/how-swedens-school-system-can-regain.html.
  • Schriewer, J., & Martinez, C. (2004). Constructions of internationality in education. In G. Steiner-Khamsi (Ed.) The global politics of educational borrowing and lending. New York: Teachers college press.
  • Sellar, S., & Lingard, B. (2013). The OECD and global governance in education. Journal of Education Policy, 28(5), 710–725. doi:10.1080/02680939.2013.779791
  • Skolverket. (2016). Presskonferens PISA 2015. Retrieved from https://www.youtube.com/watch?v=m0VZG3OGS4Y.
  • SOU. 2016:25. Likvärdigt, rättssäkert och effektivt: ett nytt nationellt system för kunskapsbedömning (p. 25). Stockholm: Wolters Kluwer.
  • SOU. 2016:38. Samling för skolan: Nationella målsättningar och utvecklingsområden för kunskap och likvärdighet. Delbetänkande av 2015 års skolkommission (p. 38). Stockholm: Wolters Kluwer.
  • Steiner-Khamsi, G., & Waldow, F. (Eds.) (2012). Policy borrowing and lending in education. London: Routledge.
  • Steiner-Khamsi, G. (2003). The politics of league tables. Journal of Social Science Education, 2003–1. Retrieved from http://www.jsse.org/index.php/jsse/article/view/470.
  • Stone, D. (2012). Policy paradox. The art of political decision making. New York: W.W. Norton & Co.
  • Svenska Dagbladet. (1985, February 26). Svenska elever kan inte räkna.
  • Svenska Dagbladet. (2016, June 16). Skolans vurm för entreprenörer är destruktiv.
  • Svenska Dagbladet. (2016, November 28). Fridolin håller med: Skolsjälen förlorad.
  • Takayama, K. (2008). The politics of international league tables: PISA in Japan’s achievement crisis debate. Comparative Education, 44(4), 387–407. doi:10.1080/03050060802481413
  • Thompson, E.P. (1967). Time, work-discipline, and industrial capitalism. Past and Present, 38(1), 56–97. doi:10.1093/past/38.1.56
  • Tröhler, D. (2013). The OECD and cold war culture: Thinking historically about PISA. In H. D. Meyer & A. Benavot (Eds.), PISA, power, and policy the emergence of global educational governance. Wallingford, GB: Symposium Book.
  • Volante, L., Fazio, X., & Ritzen, J. (2017). The OECD and educational policy reform: International surveys, governance, and policy evidence. Canadian Journal of Educational Administration and Policy, 184, 34–48.
  • Waldow, F. (2009). What PISA did and did not do: Germany after the ‘PISA-shock. European Educational Research Journal, 3, 476–483. doi:10.2304/eerj.2009.8.3.476
  • Walker, D.A. (1976). The IEA six subject survey: An empirical study of education in twenty-one countries. Stockholm: Almqvist & Wiksell international.
  • Weber, M. (2001). The Protestant ethic and the spirit of capitalism. Hoboken: Routledge.
  • Wiseman, A.W. (2013). Policy responses to PISA in comparative perspective. In H-D- Meyer & A Benavot (Eds.) PISA, power and policy: The emergence of global educational governance. Oxford: Symposium books.
  • Zerubavel, E. (1981). Hidden rhythms: Schedules and calendars in social life. Berkeley: University of California Press.
  • Zerubavel, E. (2003). Time maps: Collective memory and the social shape of the past. Chicago, Ill.: University of Chicago Press.