
Where science met policy: governing by indicators and the OECD’s INES programme

Pages 122-137 | Received 16 Feb 2021, Accepted 16 Feb 2021, Published online: 24 Feb 2021

ABSTRACT

Drawing on archival sources, interviews, and research literature, this article offers new insights into the making, structure, and long-term effects of the International Educational Indicators (INES) programme of the Organisation for Economic Co-operation and Development (OECD). The article argues that INES was crucial in setting the OECD on the path to becoming a global education policy actor. Although science has informed policy making for centuries, the 1980s mark the start of a much closer imbrication of evidence-making with policy-making. By focusing on these renewed encounters of education science with policy-making, the article examines the INES indicators as a boundary infrastructure. It shows the ways that INES materialised many of the OECD’s ambitions at the time. In other words, through INES, the OECD gained the unique confidence, expertise, and reputation in the field to deliver large international comparative studies on a global scale.

Introduction

Today, education indicators are perceived by many policy actors as the best available evidence-based tools for measuring and shaping education policies worldwide. However, despite their promise of a factual, quantitative, and scientific approach to producing knowledge for policy, education indicators have by now been shown to be more than mere instruments in the making of policy; they are political constructs, always built at the point where science meets policy.

Although their formation and measurement are usually veiled in technocratic considerations and mathematical complexity, according to Fasano (Citation1991, 25), ‘current selections of paradigms in indicator-systems are justified more by historical accident than by the application of any valid screening process of competing approaches to effectiveness’. Interestingly, Fasano, a university professor participating in the 1991 General Assembly of the Organisation for Economic Co-operation and Development (OECD) International Educational Indicators (INES) project, offered a glimpse into the discussions that went into the design of indicators for cross-national comparisons from the very start. Fasano’s commentary suggests that participants in these processes at the time were well aware that the development of education indicators is by no means a straightforward technical undertaking. Rather, education indicators as a policy instrument rest on historical and political trajectories shaped by changing conditions, agential struggles, and windows of opportunity (Cardoso and Steiner-Khamsi Citation2017; Godin Citation2006; Lundgren Citation2011; Ozga Citation2020; Ydesen and Andreasen Citation2020).

Although previous research has dealt with the historical formation of education data in the OECD, this article applies a science and technology studies (STS) approach to examine education indicators as boundary objects, to show the interdependency and complexity of the technical and the political in the rise of the OECD as a major global education actor over the last 50 years.

The article aims to uncover the history and politics behind the OECD’s INES project between the late 1980s and 1997, when the Programme for International Student Assessment (PISA) was launched. The momentum for the launch of the INES project in 1988 came from a series of events, the most important of which was the seminal 1983 report by the US National Commission on Excellence in Education, titled A Nation at Risk. This publication tagged the US education system as lagging behind the rest of the world and in need of radical change and improvement (Martens and Jakobi Citation2010). As we will discuss, this single event brought forth a rapid reaction not only from the US, but also from France, with both countries organising the first conferences at the end of the 1980s with the sole aim of coordinating international efforts to build a long-lasting and comparable education database: the INES programme. This history’s significance is undisputed and has been studied before (albeit differently, see for example Martens and Jakobi Citation2010; Morgan Citation2009, Citation2011). INES thus became the flagship international collaborative initiative that directed both minds and datasets towards solidifying a commensurate global education policy field (Resnik Citation2006), culminating in the publication of the first PISA results in 2001. Additionally, the article will show how the INES project played a first but nonetheless normative and legitimising role in promoting the global ideology of educational performance management and change across OECD member countries and beyond.

This article builds on previous scholarship on the rise of education indicators as a legitimate and dominant education policy tool. It does so in order to show how INES became a key instrument in establishing the OECD as the quintessential education technocracy, paving the way for the sustained production of education performance data. INES was also crucial in setting the OECD on the path to becoming a policy actor, a role that would be strengthened with the launch of the PISA study in 1997. The article utilises the concept of boundary infrastructure (Bowker and Star Citation1999) to examine how INES brought together a range of actors that, despite disagreements and diverse perspectives, constructed the common language of global education indicators. As a result, the science of measurement came into much closer proximity with – and often acted in the service of – education governance than had been the case before. Such a development is manifest not only in the field of education governance but also in science. Although science has informed the governing of societies for centuries, the 1980s saw a much closer imbrication of scientific knowledge, and in particular statistical knowledge, in informing and shaping the production of policy (Nowotny, Scott, and Gibbons Citation2003).

The article begins by presenting its theoretical underpinnings and methodological considerations. We then move on to the following two sections, which offer a historically informed contextual analysis of the making of INES, its process, and its deliveries. We conclude with a discussion of the programme’s long-term effects and of the significance of indicators as boundary objects in these early attempts to construct a science/policy nexus as a novel and effective way to govern education.

Theoretical underpinnings and methodology

To understand the rise of education indicators, the article uses the notion of boundary infrastructures, which emerged in the field of STS in the 1980s with Star and Griesemer (Citation1989) and, later on, Bowker and Star (Citation1999). Before explaining the notion of boundary infrastructures, it is essential to explain its sister concepts so as to unravel its meaning and thus analytical purchase. To do so, we start with the concept of boundary objects, a term Star and Griesemer (Citation1989) introduced to describe the making of the Museum of Vertebrate Zoology at the University of California, Berkeley in the first half of the twentieth century. For the purposes of explicating the notion of the boundary object, a definition would be helpful here: ‘A boundary object is any object that is part of multiple social worlds and facilitates communication between them; it has a different identity in each social world that it inhabits’ (Star and Griesemer Citation1989, 409). According to Star and Griesemer, scientific work is always heterogeneous. This means that, apart from being assembled via many diverse constituent parts, material and immaterial, it is also open to multiple interpretations, depending on the context and the actors’ values and interests. Productive and impactful scientific work thus requires the interpretative flexibility of the multiple actors that come together in its making for the endeavour to be successful. In the case of the Museum of Vertebrate Zoology, these actors included the museum director, the museum’s patrons, university administrators, and amateur collectors who provided specimens for the collection. Each actor comes to a project with different interests, concerns, and aspirations; in other words, they are motivated differently by the social world they inhabit. For example, although the museum director was interested in advancing scientific knowledge, amateur collectors were motivated by conservation concerns. Star and Griesemer (Citation1989) posit that the ability of these disparate actors to cooperate on the museum project hinged upon two things: the development of standards, and the creation of boundary objects.

We will return to the role of standardisation in a moment. As we have seen, a boundary object must carry seemingly contradictory qualities: it must be simultaneously concrete and abstract, fluid yet well defined. Star and Griesemer (Citation1989, 393) write,

Boundary objects are objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites.

The concept of boundary objects was further theorised to denote objects that become ‘naturalised’ within ‘communities of practice’ (Bowker and Star Citation1999, 16). In other words, boundary objects are routinely used by members of a community of practice in such a way that their function becomes transparent, so that the members of the community take them for granted. Apart from the need for interpretive flexibility, a key requirement in the use of any boundary object, Star (Citation2010, 602) also referred to ‘two other aspects of boundary objects, much more rarely cited or used’, these being a. the material/organisational structure of different types of boundary objects; and b. the question of scale/granularity.

In response to the criticism that boundary objects could well be any objects that bring diverse actors together, Star (Citation2010, 602) stressed that boundary objects are ‘a sort of arrangement that allow different groups to work together without consensus’, that is, without agreeing on the purpose or outcomes of the common endeavour. Star noted that they can mostly be found in organisational/institutional work, where actors need to collaborate on common projects, yet without necessarily having the same reasons or purposes for doing so. Importantly, boundary objects do not dwell in the margins or in individual silos; they create a shared space, where the here and there are confounded. This kind of conflation is, as we know, very important in the making of global education governance, which requires bringing together global concerns, values, and priorities with local and national challenges. In relation to the materiality of boundary objects, this way of theorising objects, meanings and relations aims to explain that objects do not necessarily relate to physical, observed entities; rather, materiality, here, entails ‘the stuff of action’, something that can be ‘embodied, voiced, printed, danced and named’ (Star Citation2010, 603).

Closely connected to the boundary object concept itself is the notion of boundary infrastructure (Bowker and Star Citation1999; Star and Ruhleder Citation1996). According to Bowker and Star (Citation1999, 313), ‘boundary infrastructures are complex networks of boundary objects that remain relatively stable enough to be useful, but exist at the intersection of multiple infrastructures’. The concept of boundary infrastructure is useful because it entails a range of characteristics of boundary work that are very relevant to our analysis. Following Star (Citation2010), these are:

  1. Embeddedness. The infrastructure of data and measurement tools that we investigate as part of INES is not completely novel; it is embedded into or found within other structures, arrangements, processes, and technologies of practice.

  2. Transparency. The new infrastructure of measurement indicators is transparent in that it is always there, readily observable, yet invisibly supporting the emergence of new tasks and ways of thinking.

  3. Reach or scope. The making and use of indicators do not happen in one place, or at one time; they are a continuous, multi-sited event.

  4. Learned as part of membership. The actors involved in the production of indicators are members of a new community of practice (Lave and Wenger Citation1991). Becoming a participant allows a naturalised familiarity with the infrastructure and depends on the learning of a certain codified technical language that increases the sense of belonging.

  5. Links with conventions of practice. Members of the community of practice (in our case, all the diverse actors working in the making of INES) slowly develop a range of work conventions and path dependencies that are increasingly difficult to overcome or change.

  6. Standardisation. INES, as a new infrastructure of measurement, does not start from scratch; it plugs into other, already standardised and widely used, tools and infrastructures.

  7. Change in modular increments, not all at once or globally. Since the new infrastructure is layered and complex, changes are always the result of prolonged negotiations and adjustments, together with other considerations and measurement systems.

In terms of research design, we have analysed a range of publicly available policy documents from the OECD website, as well as archival documents from the OECD Archives in Paris. The OECD policy documents treated in the analysis consist of programme descriptions, reports, records, discussion papers, and minutes. The material consists of 58 documents (policy and broader archival documents) written between 1979 and 2014, selected from a database of OECD archival documents comprising a sample of 2,072 documents on various programmes and activities in education. The search criterion was that the terms indicator and/or INES occur in the document; the search produced the 58 documents that constitute the corpus for this study. In addition, we coupled the document analysis with an interview with a key actor who worked in the OECD during the period we investigate: Ron Gass, former director of the Centre for Educational Research and Innovation (CERI).

The importance of indicators: the contextual backdrop of the production of the INES programme

Delving into the history of the OECD reveals a complex picture of shifting attitudes towards international comparative indicators within the organisation, illustrating internal differences and competitive struggles within and among the ranks of managers, education experts, and member states. These complexities form an important part of the setting and development of the INES project.

From the inception of the OECD, its management and key staff in the field of education appeared ambitious regarding the development of quantitative indicators. Indeed, these tendencies were to be found from early on and were reflected in the two main OECD programmes of the 1960s: the Mediterranean Regional Project and the Educational Investment and Planning Programme (Tröhler Citation2014; Ydesen and Grek Citation2020), and also in the OECD’s commitment to systems analysis and human capital theory (Bürgi Citation2017; Spring Citation2015). The overall purpose of these programmes was to provide member states with comparative studies of their educational systems (Centeno Citation2017).

In the 1960s and 70s – and in keeping with Keynesian theory – the frame of reference among OECD specialists was the state. Key questions included how states could optimise ‘manpower’ investments to improve economic growth and how mathematical models could be developed to forecast these needs (Lyons Citation1964; Tröhler Citation2014). In an attempt to solve some of these emergent challenges, the OECD began work on developing educational indicators. The production of knowledge around education performance was deemed necessary for conducting valid economic growth forecasts, as well as for guiding governmental decision making (Resnik Citation2006). In response to the rise of these discourses, in 1967, the OECD published a handbook on education statistics titled Methods and Statistical Needs for Educational Planning.

Although this work was still in its early stages, the handbook played a key role in establishing the production of education indicators as a necessary instrument in the development of statistical data in education. Following this publication, the OECD Education Committee established the Working Party on Educational Statistics and Indicators (OECD Citation1971; Papadopoulos Citation1994), set up after discussions at the 1970 Paris Conference on Policies for Educational Growth. In 1973, the Working Party published A Framework for Educational Indicators to Guide Government Decisions. In a follow-up report, an attempt was made to develop a set of educational social indicators to evaluate educational system performance (Morgan Citation2009). Centeno argues it was precisely the economic turbulence of the 1970s caused by the oil crises that prompted the OECD to ‘shape its identity as the producer of league tables, benchmarking, and statistics that interlinked social, economic, and political issues’ (Citation2017, 110–111).

However, while many member countries at the OECD Council and Education Committee levels looked favourably upon the work concerning indicators – as reflected in the 1970 Conference on Policies for Educational Growth – ideological and philosophical debates relating to the nature of the educational activities arose within the special OECD branch of education research, namely, CERI, established in 1967 (see Centeno’s contribution in this issue).Footnote1 The CERI perspective on education centred on educational reform, social equity, and innovation, and it was couched in terms that were more conceptual and philosophical than evaluative and statistical (Addey Citation2018).

By the mid-1980s, CERI came under pressure – not least from the United States (US) – for a new push to develop international comparative indicators. International comparative studies conducted by the International Association for the Evaluation of Educational Achievement (IEA) had shown the US lagging behind in terms of performance (Morgan Citation2009). This irked the US Department of Education considerably; consequently, it raised questions about the quality of the indicators hitherto used. The US thus emerged as a powerful actor on the scene of the production of education data. A key development in this respect was the US National Commission on Excellence in Education report A Nation at Risk (Citation1983), which used a series of so-called risk indicators as a factual representation of US students’ poor performance on national and international tests (Morgan Citation2009), an analysis repeated two years later in the US Department of Education’s 1985 publication ‘Indicators of Education – Status and Trends’.

From the US perspective, education in the 1980s had become a precarious business. In 1987, the United States proposed an international conference on educational indicators to be held under the auspices of the OECD. The US authorities described the background for this initiative as follows:

Over the past three years, the United States has been evolving a set of indicators of education’s status and progress in this country. These indicators are intended to describe the “health” of the educational system so that the public and citizens who make decisions about the future of education in this country might be better informed. (US Department of Education Citation1985, 1)

More specifically, the document lists three reasons for the development of indicators. The first reason is that education should contribute to the making of a viable economy; second, parents are entitled to knowledge of the performance of the education system; and third, ‘it is important that the best of information about schools and colleges be collected and disseminated broadly in the society to inform debate and decision-making’ (US Department of Education Citation1985, 3).

Thus, the federal level of the United States strongly framed the reasons for the need to produce education indicators. These reasons related closely to economic concerns, though the right to education also appears here as an American value that requires protection. Permeating these reasons is the view that indicators are providers of reliable data that are statistically valid and able to provide benchmarks for measuring progress. In this way, a certain degree of standardisation is proposed here, a vital element towards the construction of a data infrastructure. Additionally, this new indicator-based thinking did not emerge from nowhere. On the one hand, we saw that the production of indicators by the international organisations bothered the United States; on the other, the proposal for new indicators is strongly embedded in the nation’s policies to improve economic prospects.Footnote2 In this vein, negative educational progress, as manifested in the IEA data, is seen to affect economic indicators and growth. As Henry et al. (Citation2001) argue, the United States repeatedly called for work on outcome indicators, particularly in relation to school effectiveness, at one stage even threatening to withdraw its support of CERI if its demands were not met. However, Henry et al. also demonstrate that, from a different ideological direction, France – with its bureaucratic interest in statistical data collection – joined the United States in pushing the OECD towards developing educational indicators. In Martens’ (Citation2007) reading of the situation, ‘the US feared losing the technology race of the Cold War; France’s left-wing government was concerned about educational opportunities for socio-economically disadvantaged children’ (45). Although the United States and France had different interests in pushing for the making of education indicators, in the end they came to have a fairly open, fluid, but shared belief in the need for numbers to serve policy purposes. It is therefore evident that INES was not simply an infrastructure of measurement. Actors such as the IEA, the OECD, CERI, the United States, and France, although setting off from quite different interests and values, ultimately found in INES, as we will see, a key boundary space that brought them all together. On the one hand, they became key players in the education performance data game; on the other, they could take home the particular discourses and ideas that worked within their own contexts and cultures.

The events of the time speak clearly about the different, often opposing, starting points of the actors during this first preparatory period. Consider, for example, the 1984 visit by the education comparativist and World Bank employee S. P. Heyneman to the OECD in Paris. In his address to the Comparative and International Education Society, Heyneman (Citation1993, 375) captures the acrimonious meeting of the CERI board of directors:

The US delegate was said to have put a great deal of pressure, and in very direct language, for OECD to engage itself in a project collecting and analyzing statistical education ‘inputs and outcomes’ – information on curricular standards, costs and sources of finance, learning achievements on common subject matter, employment trends, and the like. The reaction among the staff of CERI was one of shock, and deep suspicion. Those whom I interviewed believed it was unprofessional to try and quantify such indicators, and that it would oversimplify and misrepresent OECD systems, and that it would be rejected by the twenty-four member states whose common interests they were charged to serve.

However, the strength of the United States’ conviction (and investment) was such that CERI had to adjust its stance in this emerging paradigm shift, not only because of increasing internal pressures from the upper echelons within the OECD but also because of the rising popularity of managerial accountability reforms in several member countries during this period.Footnote3 The historian and long-time OECD employee George Papadopoulos (Citation1994, 190) commented, ‘It seemed therefore logical to add an international dimension to these national efforts, even though the difficulties, both conceptual and technical, were fully recognised from the outset’.

Moreover, because of the considerable groundwork that had already been done, the OECD was ‘well placed to respond to the mounting pressure in the late eighties for a new governmental effort to develop such indicators’ (Papadopoulos Citation1994, 190). According to Gass,Footnote4 reflecting on the 1980s work on indicators in the OECD in general and the US role in particular,

The Americans were very active in terms of OECD Education. At a certain point in time it was the Neo-conservatives, Reagan, etc., and they had a very different view of how policies were made – I sum it up by saying: It was a “what-works” view. … So, they came up with a proposal to set up an education indicators programme. Meanwhile, at that time, the Americans were making sort of negative noises about the future of CERI. … This business of developing indicators is highly complex, takes a lot of money, it takes a lot of capacity to develop. … So, I said: “Okay, but this is a development project, it should be in CERI.” So that was agreed, that it should go to CERI, which meant that the Americans could not kill their pet-project, so that took the pressure of CERI. CERI developed a very successful and interesting program on educational indicators. … I insisted that there should be a policy group, that is to say: “Okay, we do the indicators, but then we have a group of policy makers, reflecting on, okay, what do they mean for policy, and … . But this was dropped.Footnote5

Thus, a picture can be drawn of the US Department of State in the Reagan administration driving the OECD to launch a programme aimed at improving the international indicators of education to make transnational comparisons more reliable and valid. Gass makes it clear that the future of CERI was at stake unless it took on INES. However, Gass’ quote also suggests that the policy dimension of indicators in the production of the INES programme was neglected at this point and that the atmosphere was one of optimism concerning the possibilities of designing objective ‘what-works’ informed education systems. Despite the different standpoints of the diverse actors, the construction of a new indicator measurement infrastructure could offer the shared boundary space where these actors would eventually form a community of practice (see Lave and Wenger Citation1991).Footnote6 Although the actors came together with quite different interests in this new project – and some with much more conviction about its need than others – a degree of interpretative flexibility (i.e. in how they interpreted their role in and the effects of their efforts) and materiality (i.e. in actual investment, reports, working groups, and meetings) in the new endeavour meant that INES would eventually become a new reality in the education measurement and policy world.

Indeed, the foundational stones in the development of indicators were laid at two conferences, hosted by the governments of the US and France (National Research Council [NRC] Citation1995). The first, titled ‘An Intergovernmental Conference on Educational Indicators’, was held in Washington, DC, on 3–6 November 1987 and jointly organised by the US Department of Education and the OECD.Footnote7 According to the conference note, the 22 participant countries viewed the OECD as the most suitable forum for the development of internationally comparable indicators in education. The indicators discussed at the conference can be grouped into three clusters that demonstrate the all-encompassing scope of the work:

  1. Student assessment. What students have learnt in various subjects and what relations can be identified and described among variables affecting students’ learning, students’ background, and school practices. This includes a description of the curriculum content and achievements in relation to learning goals.

  2. Participation and attainment. Entry and retention rates in various types of educational programmes and among various sectors of the population; level of completion among different population groups. This includes issues of equity and access, post-school activity, investment in education in relation to the gross domestic product, and returns on such investment.

  3. Schools and teachers. The quality of schools and teachers, including the use of funds, teachers’ salaries and characteristics, class size, the use of instructional time, homework, student attendance, attitudes, and parent involvement.Footnote8

In the conference summary report, Papadopoulos of the OECD Secretariat placed the conference in the context of a longstanding commitment of member countries to develop cross-national educational statistics. Several delegates welcomed the renewed attention to educational indicators and noted that ‘the previous interest witnessed in the late 60s and early 70s had not been sustained in the following years’ (2). This observation testifies to the organisational struggles over indicators mentioned above. However, the major question for the conference, Papadopoulos concluded, was whether the national interest in developing indicators could be translated into measures that would be internationally meaningful. This was the new big challenge: the development of international comparative indicators.

This incremental development of the project is an important element in the build-up of a boundary infrastructure: we observe that not only are the new indicators embedded in previous measurement structures and developed over time and in different places, but also that they are changing, expanding, and taking new forms. Such an analytical framework is helpful for an additional reason: it reveals key actors and events while also highlighting important absences and strategic omissions. For example, it is striking that no observers from UNESCO – the only international organisation with a formal mandate to work in education – participated in the conference, while observers from the World Bank, the RAND Corporation, and a number of US institutions attended.

Briefly turning the lens on the relationship between the OECD and UNESCO, we observe that a formal agreement of cooperation was signed between them as early as 1963. The agreement committed the two organisations to ‘an exchange of information and documentation on questions considered by the two parties to be of common interest’ (OECD Citation1964, 367). In the 1960s, UNESCO developed the International Standard Classification of Education (ISCED). It was adopted by all UNESCO (and hence OECD) member states in 1978 (Papadopoulos Citation1994). Thus, although there could have been good reasons for involving UNESCO in the development of INES, UNESCO was excluded from the new measurement project. The politicisation of the process of indicator development is evident here, since UNESCO’s absence was primarily due to geopolitics. In 1984, the Reagan administration had withdrawn the United States from UNESCO because it argued the organisation had been politicised ‘leftward’. The ideological differences between the organisations have been manifest since their establishment: while the OECD has generally pursued an economistic approach to education (Resnik Citation2006),Footnote9 UNESCO has stood for a more holistic and humanistic approach.

Largely orchestrated by the United States, but with the involvement of others, too, INES was therefore embedded in previous education statistical data work. It had wide scope and was soon to draw into its workings all the OECD member states of the time; the key actors involved formed a community of practice where, despite initial disagreements, all soon found the interpretative flexibility to support the making of the new project. It involved standardisation work at an international scale. Finally, bit by bit, or, according to Star (Citation2010), in modular increments, changes took place that involved prolonged negotiations and adjustments, which were larger than the field of education itself and involved distinct geopolitical and economic components. In a report for the OECD on the American Educational Research Association conference, held in Boston, 12–14 April 1990, Alan Gibson, of Her Majesty’s Inspectorate, Department of Education and Science of the United Kingdom, contends that

The experience of the USA, as evidenced in the AERA [American Educational Research Association] sessions concerned with American activity, may be seen from the point of view of the ensemble of OECD countries as a ‘natural experiment’ capable of informing the protocols and planning for the use of EIs [i.e. education indicators] on an international scale.Footnote10

This was, therefore, the context that gave rise to the creation of INES. As we will see, apart from some dissenting voices such as Fasano, by the early 1990s, most doubters had been won over, and the indicators project had become fully established within the OECD’s educational work. As Heyneman (Citation1993, 378) observes, this reflected a burgeoning ‘new industry of comparative education’. Nevertheless, that does not mean that agents in the wider context all subscribed to the OECD approach to education. In the next section, we shed light on the process and effects of INES as they were informed by the organisational and agential context.

The INES programme: its processes and effects

After the first two preparatory conferences had set out the details of the project, the INES programme kicked off with a final conference hosted by the Swiss government in 1991. At no point did CERI shy away from the magnitude and significance of the endeavour. In summary, INES aimed to accomplish the following (NRC Citation1995, 29):

  • Develop, collect, analyse, and offer a preliminary interpretation of a set of key indicators for international comparisons;

  • Provide a forum for international cooperation and the exchange of information about methods and practices of developing and using educational indicators for national policy making and managing education systems; and

  • Contribute to evaluation methodology and practice to develop more valid, reliable, and comprehensive indicators, and to gain a better understanding of their use in policy making.

CERI managed the project through organising four networks and one technical group. The networks initially came together through a collaborative consortium of countries, as follows:

  • Network A, Educational Outcomes, chaired by the United States;

  • Network B, Education and Labour Market Destinations, chaired by Sweden;

  • Network C, Features of Schools, chaired by the Netherlands; and

  • Network D, Attitudes and Expectations of Education System Users, chaired by the United Kingdom.Footnote11

Using a network structure as its basic organisational framework, the first phase of the INES project explored the feasibility of developing and reporting comparable indicators concerning the educational systems of participating countries.Footnote12 For example, key tasks of the main network, Network A, were to develop indicators of student performance outcomes, as well as suggest how the OECD could apply the principles to obtain such achievement data on a regular basis. The network chair was responsible for coordinating members’ contributions, supporting the theoretical and technical work required, and producing the network report. Network A was headed by Gary Phillips of the US National Center for Education Statistics, suggesting a leading role for the United States in the INES programme.

The project included four main phases: 1988–1989 (exploratory phase), 1990–1991 (development and construction of indicators), 1992–1996 (shift towards the regular production and use of indicators), and 1997–2001 (work on human capital, social capital, economic growth, and sustainable development) (OECD Citation2012).Footnote13 In this analysis, we focus on the first two phases of the INES programme, since they were the most contentious and thus decisive in constructing and consolidating education indicators before PISA’s launch in 1997.

Designing the study

The first step was to establish the networks and commit them to developing a plan for the exploratory construction of a limited set of indicators (NRC Citation1995). During the project’s first two phases, the work centred on three main areas: the completion of secondary education, cognitive achievement, and post-secondary schooling activity. The networks’ plans were discussed, refined, and endorsed by a scientific advisory group, which was reconfigured in Phase 2 into a consultative group. Interestingly, during Phase 3, the consultative group was replaced by a policy review advisory group, chaired by the chair of the CERI Governing Board, Karen Nossum Bie of Norway. This change is an indication that, although policy concerns had not been salient in the 1980s, they clearly re-entered the scene in the early 1990s, coinciding with the publication of the first issue of Education at a Glance (EAG). This is also an example of the way INES, as a boundary infrastructure, was not a closed space; rather, it was linked to, and embedded in, other measurement projects and incrementally changed to adapt to new conditions and demands. The establishment of the networks working on the different topics was soon to create a common language and thus ‘conventions of practice’ (Star Citation2010). In this case, concerns about the effectiveness of education meant that the new education indicators were slowly moving beyond the mere production of knowledge and into the policy field: INES had emerged as a new solution in decision making for national education policy makers, inherently pushing education to realign with global competitiveness mantras.

The re-entry of the policy actors was a significant new turn of events and offered even greater momentum to the efforts. At a very early stage, it was quite clear that Network A needed a set of technical standards by which to judge the adequacy of educational outcome indicators. This was deemed necessary because users of international outcome indicators should have some sense of their technical quality and applicability (NRC Citation1995). Network A drafted standards for international indicator data, designed to alert readers when a standard was not being met and the results therefore ought to be interpreted with caution. Network A also recommended that future international assessment studies be conducted with these standards in mind and that other indicator projects ought to consider technical standards as part of their reporting practices (NRC Citation1995).

In addition, Network A significantly expanded its scope to explore ways in which indicators of non–curriculum-bound outcomes could be developed. This ambition was – and remains – a core pillar of standardised international indicators. In the first instance, this developmental work concentrated on identifying suitable instrumentation and data sources for measuring sociocultural knowledge and skills, such as the basic knowledge required for orientation in the political, social, and economic world; problem-solving capacities in everyday and critical situations; self-perception in social contexts; and the perception of critical human values. The network’s efforts to collect data on attitudes to learning are also of interest. INES’ work on both non-curricular knowledge and attitudinal data is an important precursor to PISA and clearly connected with the education policy level. This technicisation of the political decision to construct internationally comparable indicators was essential and became the building block of the new era of the development of international comparative assessments; while the technical work was fully transparent, it also regularly added new parts and connections to the new data infrastructure.

Methodology: the role of the IEA and voices of dissonance

In Phase 2 of the INES programme, Network A surveyed member countries to identify data from national assessments or examinations that could serve as outcome indicators. According to the US National Research Council, this survey produced valuable information concerning the variety of assessment structures and practices and allowed for some conclusions about commonalities. However, the survey also demonstrated that it would be extremely difficult, if not impossible, to depend on data produced from these examinations or assessments to produce comparable indicators, because they varied widely in purpose, ages tested, subjects tested, and forms of testing (NRC Citation1995). Hence, as noted above, the network expanded its scope to explore ways in which indicators of non–curriculum-bound outcomes could be developed.

As a consequence of this realisation, Network A proposed a set of indicatorsFootnote14 that could provide information about system productivity relative to other countries, the effectiveness with which a curriculum had been taught, and how equally achievement was distributed within a country. These indicators were calculated on a trial basis, using data from the IEA Second International Mathematics Study and the mathematics portion of the Second International Assessment of Educational Progress. The results were published in the first edition of EAG (Lundgren Citation2011).

The IEA was heavily involved in the development of the methodology used in the INES programme. According to Gass, Tom Alexander, his successor as director of CERI, ‘pulled off a major coup because he succeeded in absorbing’ IEA methodology into the INES programme, allegedly because ‘the IEA fell into difficulties financially.’Footnote15

Nevertheless, the IEA had a somewhat different take on the development of international comparative indicators. At the 1990 AERA conference, Tjeerd Plomp, from the IEA, commented that the INES programme had shown promise but that there was a need for a research component in the study, so that it did not ‘just provide “horse-race” data or league tables’.Footnote16 At the same conference, Alan C. Purves, former chairman of the IEA and then-professor emeritus of education and humanities at the State University of New York at Albany, shared his experiences at the IEA, which showed that many differences between the curricula of different countries and schools, especially the actual curricula experienced, were differences of choice, whether for cultural reasons, whim, or ignorance of alternatives: the ‘IEA was able to uncover these variations but not explain them, and certainly did not intend to inaugurate an educational Olympics masking culturally valued and valuable differences.’Footnote17 These quotes are clear indications of continued dissonance with – yet participation in – the aim and scope of the INES programme, not least in terms of its policy implications.

Gibson, reporting to the OECD from the 1990 AERA conference, comes to a very telling conclusion, testifying to the differences in terms of how to approach the development of international comparative indicators:

It was impossible for the visitor not to contrast the views of the research community with the optimistic and reassuring voices from the Centre [CERI] and from State policy officials. The policy advisers see education indicators as a positive and constructive force for good in raising standards, enabling progress to be monitored and measures for improvement to be targeted. But a larger number of discordant voices gave solemn, scientific warnings of the difficulties of introduction of education indicators, and of limitations on their trustworthiness.Footnote18

Following the AERA conference, the second phase of the INES project culminated in a major meeting in Lugano in September 1991. The first draft edition of EAG was presented there, and it contained data on 30 indicators that ranged from relatively traditional items, such as participation rates, to complex and contested measures, such as the characteristics of decision-making within the system. The meeting also gave rise to Making Education Count (Tuijnman and Bottani Citation1994), a publication addressing a range of conceptual issues and revealing the extent to which many matters of definition, bias, and validity of comparison and inference had remained unresolved. For example, one issue that required clarification was the degree to which comparative data should be applied in measuring a system’s relative progress. The question of the relative weighting a system could attach to particular indicators within its framework of priorities was also unresolved.

These issues were reflected in the debates about methodology with a number of externally affiliated agents. For instance, in a contribution to the INES programme, Fasano (Citation1991, 21) contends:

Indicator-systems will have to incorporate at all times, and unavoidably, a certain amount of non-knowledge. The risks incurred by education decision-makers in using indicator-systems with such a mixed knowledge base, cannot be underestimated. Chief among these is the danger of taking indicator-systems as signifying the optimum system. … That is to say, that education decision-makers are in danger of reorientating education towards achieving the match with the organisational models implicit in current indicator-systems.

Nevertheless, in November 1991, it was decided to move the project to Phase 3, whose main aim was to produce an organisational framework that would allow for the regular production of a set of international educational indicators through EAG’s publication.Footnote19 The European Commission, too, had started to take a keen interest in the INES programme. The conclusions from the INES planning meeting in February 1990 had already clarified the following:

Related activities at EC and in international educational assessment programs were discussed. Coordination with EC activities in educational statistics is being handled by the OECD Secretariat. Strong interest and support for the INES project continue to be present. In the particular area of Network A, the plan of work took into account developments in international assessment programs. The Network will develop plans and criteria for using existing achievement results, where appropriate, and it will suggest approaches for working with available assessment programs in the future.Footnote20

Again, the picture that emerges is one of dissonance between the OECD and policy makers, on one side, and expert actors occupied with the challenges of research and scientific methodologies, on the other. There are several important takeaways from this second phase of working processes around INES. First, the technical side grew exponentially, with more and more working groups involved in the making of the indicators. Indeed, we also see the involvement of policy questions and policy makers in these groups in the early 1990s, reinforcing the boundary nature of INES as a physical and conceptual meeting space for such a diversity of actors and their associated values and interests. Second, the technicisation brought about more transparency, but also increasing stabilisation and invisibility, since structures of measurement were continuously built on top of others, therefore creating substantial path dependencies and ‘ways of doing things’. Finally, the materiality of the EAG publication meant that, for the first time, the indicator data collected would travel far and wide and enlist even more actors, hence boosting the effort’s popularity. Very little of the continued struggles and divergence of interpretations and interests involved in the making of INES is reflected in the orderly world of EAG.

Concluding discussion: the long-term effects of INES

International comparisons of educational conditions and performance are now perceived as a means of adding depth and perspective to the analysis of national situations. References to other nations’ policies and results are beginning to be routinely used in discussions of education, and comparability now belongs with accountability to that changing set of driving words which shape the current management paradigm of education. (Alexander Citation1994, 17)

Ever since Alexander’s assertion, indicators have become a highly significant instrument of the OECD’s work in education, as well as in the transnational governance of education as a whole (Lindblad Citation2018). As this article has illustrated, the INES project brought a set of diverse actors together in the making of a boundary infrastructure, hosted within the wider measurement infrastructures of the OECD, CERI, and IEA at the time. The group, despite disagreements and doubts, slowly became a community of practice that worked on developing and constructing the new set of educational indicators, leading to the publication of the first EAG report in 1992.

Today, nearly 30 years later, EAG is still widely disseminated, not only throughout OECD countries, but also across the globe. Although such an analysis is not part of this article, it is particularly significant to note that EAG is not simply a statistical report that informs policy makers about the state of education in their respective countries and worldwide. Crucially, by becoming a boundary infrastructure and creating a shared space between international organisations and member states, INES shaped new discursive frameworks and contributed to the language and, thus, meaning and values of education. Therefore, INES could be characterised as a tipping point in the history of education governance globally; via its main reporting mechanism, EAG, it has created a critical tool for the production and dissemination of knowledge. In addition, INES relatively quickly transformed the OECD from a minor player among the educational policy actors of the time into a key actor with sufficient technical and governing capacity to – slowly yet surely – direct and change the policy agenda of education transnationally.

Of course, this is not to claim that INES was alone in pushing this agenda forward. A number of other infrastructures and comparative studies (e.g. the International Adult Literacy Survey and the Adult Literacy and Lifeskills Survey) laid the groundwork, as did the efforts of other international organisations – such as the IEA and UNESCO – in the business of collecting educational data. Nonetheless, INES was still a critical precursor in constructing a commensurable educational research and policy field, precisely because of its characteristics as a boundary infrastructure. Namely, INES was deeply embedded in previous, well-established measurement tools. It was simultaneously transparent, in that it was a scientific tool, but also invisible in its shaping of education discourses and other measurements elsewhere. As a result, it became dominant in reach and scope, and by now has become almost a universal measure. It created a community of actors who agreed to share the same language and technical conventions. INES also pushed for new standards, though it was to develop and change slowly over time and under the influence of policy makers and powerful member countries.

It is precisely this enmeshing of a technocratic exercise with policy concerns and the influence of policy actors that renders INES especially interesting in its function and effects at the boundary between education science and policy. The OECD actively promoted this exercise in international comparison designed to assist in policy formation processes in member countries; further, it even promised that it would eventually contribute to improving the public accountability of educational systems:

At a time when education is receiving increased priority, but, like other areas of public spending, is facing the prospect of limited public funds, understanding better the internal processes that determine the relationship between educational expenditures and educational outcomes is particularly important. (CERI Citation1996a, 7)

However, after the multiple PISA shocks that several countries have experienced, we now know all too well that this perspective of efficiency and effectiveness provides a somewhat understated view of the purposes and significance of the OECD’s work on indicators. INES and the other international comparative studies that followed do provide relevant comparative information to member countries, but this information does not simply shape policy agendas; it contributes to a degree of global policy convergence. INES launched a broader politics of change, based as it was on a particular view about the policy directions and approaches needed to reform education. We suggest the INES project played an initial, normative, and legitimising role in promoting what could be called a global ideology of hyper-quantification in education governance, linked to broader public policy measurement agendas and reforms across member countries. This new public management or corporate managerialist approach to public sector administration enabled the steering of education at a distance through setting strategic objectives and measuring success through a raft of performance indicators. The OECD has been an important standard bearer for the ideology of new public management, while its indicators contribute to steering at a distance and governing by numbers within nations.

Although often neglected, the role that the OECD’s technical expertise played in constructing a commensurate educational policy field cannot be overestimated. INES was not simply a challenging, ambitious, and ultimately transformative act of metrological imagination for all those involved. It became a key boundary infrastructure, where different actors would come together in the pursuit of the new agenda, solid enough to be a single tool, yet also flexible enough to accommodate their diverse interests and stakes.

The key role of the OECD in offering the organisational structures to make INES happen transformed it into the quintessential technocracy during an era when trust in judgement and tradition was radically diminishing in favour of the then-new worldview of evidence-based policy. On the INES drawing board, educational problems and traditions quickly became technical issues in search of solutions. Although many of the choices, selections, and omissions were deeply political and ideological, the OECD repeatedly denounced the use of education as a political weapon and instead proclaimed its ability to offer quantitative evidence that would simply support policy makers’ and employers’ decision-making efforts to produce the kinds of results that mattered: an output-based education that would feed into human capital theory and could respond to the staggering social problems of unemployment and inequality prevailing throughout the Western world.

More crucially, this technical expertise was shared; it was the member states and national actors that were also playing the measurement game. Countries developed their own national (state and non-state) expertise and databases not only to feed information into INES but also to appease national policy makers’ desire to apply statistical knowledge in education governance. Indeed, even before PISA was launched, INES had already constructed and was running a well-oiled, globally connected, and ambitious data infrastructure, hungry with the desire to produce actionable knowledge. This is perhaps why, at the dawn of the millennium and PISA, all things seemed possible, as long as there were enough data to feed the machine.

Acknowledgement

This article is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 715125 METRO (ERC-2016-StG) (‘International Organisations and the Rise of a Global Metrological Field’, 2017–2022, PI: Sotiria Grek).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by H2020 European Research Council: [Grant Number ERC-2016-StG].

Notes

1 Summary Record, Summary of Discussions by the Committee for Scientific and Technical Personnel on Item 4, Establishment of CERI, DAS/A/67.12, November 1967. OECD Archives, Paris.

2 The federal governance structure of the US needs to be taken into account here: whereas the federal level pushes for further education reforms according to its idea of the national interest, it has no power over the states, which make their own decisions about education.

3 For some time, European decision makers had been calling for comparative data to assess and monitor the effectiveness of educational systems. An important case in point was the so-called school effectiveness movement arising in the late 1970s, which focused on ‘effective schools’ and worked to identify best practices in pedagogy and school leadership (Goldstein and Woodhouse 2000; Townsend 2007).

4 Gass is a British sociologist who joined the OECD’s predecessor, the Organisation for European Economic Co-operation, in 1958 and went on to become deputy director for scientific affairs in the then-new OECD. In 1968, he founded the OECD’s CERI. In 1974, he also took on the directorship of the newly formed Directorate of Social Affairs, Manpower and Education, the so-called social arm of the OECD. After his retirement in 1989, Gass acted as consultant to the OECD Forum for the Future, the European Union, and the European Bank for Reconstruction and Development.

5 Interview with Gass conducted in Paris, 22 August 2017, by Dr Maren Elfert and PhD fellow Trine Juul Reder.

6 Although there may be similarities between the term ‘communities of practice’ and the notion of ‘epistemic communities’ (Haas 1992), there is a fundamental difference: Haas refers to actors who are experts creating a shared knowledge base. The actors brought together in the boundary infrastructure of INES are much broader; some of them are expert groups, but there are also policy makers, practitioners, civil society actors, and others. Thus, the field of participating actors is much wider here.

7 Note by the Secretariat, Conference on Educational Indicators, ED (87)20; CERI/CD (871), 6 November 1987. OECD Archives, Paris.

8 Note by the Secretariat, Conference on Educational Indicators, ED (87)20; CERI/CD (871), 6 November 1987, ii–iii. OECD Archives, Paris.

9 See, for example, the ‘New Dialogue between Education and the Economy’ initiative launched by the OECD in 1989.

10 Working paper, INES Project, Educational Indicators at the 1990 AERA, CERI/INES/COG/90.04, 17 September 1990. OECD Archives, Paris.

11 In phase 1, Network A counted 18 countries: Australia, Belgium, Canada, Denmark, France, Germany, Italy, Japan, Luxembourg, the Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, the United Kingdom, and the United States. Not all participating countries took part in all the networks or the technical group, but most were members of Network A, led by the United States. INES, CERI/CD (91)10, 28 October 1991. OECD Archives, Paris.

12 International Educational Indicators, Report on the Implementation of the Project, CERI/CD (88)10, October 1988. OECD Archives, Paris.

13 The INES programme was transferred to the Education Policy Committee in 2007 and reinstated in CERI in 2012.

14 These were multiple comparisons of mean achievement scores, international comparative distributions of achievement scores, learning/teaching ratios, and variations in achievement scores across schools and classrooms.

15 Interview with Ron Gass, former director of CERI, conducted in Paris, 22 August 2017, by Dr Maren Elfert and PhD fellow Trine Juul Reder.

16 Working paper, INES Project, Educational Indicators at the 1990 AERA, CERI/INES/COG/90.04, 17 September 1990, 8. OECD Archives, Paris.

17 Working paper, INES Project, Educational Indicators at the 1990 AERA, CERI/INES/COG/90.04, 17 September 1990, 8. OECD Archives, Paris.

18 Working paper, INES Project, Educational Indicators at the 1990 AERA, CERI/INES/COG/90.04, 17 September 1990, 11. OECD Archives, Paris.

19 International Educational Indicators, Implementation of Phase 3, DEELSA/ED/CERI/CD (92)2, 16 April 1992. OECD Archives, Paris.

20 Note by the Secretariat, INES Phase 2 Planning Meeting – Conclusions, CERI/INES/COG/90.01, 21 February 1990, 3. OECD Archives, Paris.

References

  • Addey, C. 2018. “The Assessment Culture of International Organizations: ‘From Philosophical Doubt to Statistical Certainty’ Through the Appearance and Growth of International Large-Scale Assessments.” In Student Assessment Cultures in Historical Perspective, edited by M. Lawn, and C. Alarcon, 379–408. Berlin: Peter Lang.
  • Alexander, T. J. 1994. “Introductory Address.” In Making Education Count: Developing and Using International Indicators, edited by A. Tuijnman and N. Bottani, 13–20. Washington, DC: Organisation for Economic Co-operation and Development.
  • Bowker, G., and S. L. Star. 1999. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
  • Bürgi, R. 2017. “Engineering the Free World: The Emergence of the OECD as an Actor in Education Policy, 1957–1972.” In The OECD and the International Political Economy Since 1948, edited by M. Leimgruber and M. Schmelzer, 285–309. Cham: Springer International Publishing.
  • Cardoso, M., and G. Steiner-Khamsi. 2017. “The Making of Comparability: Education Indicator Research from Jullien De Paris to the 2030 Sustainable Development Goals.” Compare: A Journal of Comparative and International Education 47 (3): 388–405. doi:https://doi.org/10.1080/03057925.2017.1302318.
  • Centeno, V. G. 2017. The OECD’s Educational Agendas: Framed from Above, Fed from Below, Determined in an Interaction: A Study on the Recurrent Education Agenda. Frankfurt am Main: Peter Lang.
  • Fasano, C. 1991. “Conceptual and Theoretical Aspects, Knowledge, Ignorance and Epistemic Utility.” Paper written for the General Assembly of the INES Project, Lugano–Cadro, Switzerland, 16–18 September, CERI/INES (91)5. Paris: Organisation for Economic Co-operation and Development Archives.
  • Godin, B. 2006. “The Knowledge-Based Economy: Conceptual Framework or Buzzword?” Journal of Technology Transfer 31: 17–30. doi:https://doi.org/10.1007/s10961-005-5010-x.
  • Goldstein, H., and G. Woodhouse. 2000. “School Effectiveness Research and Educational Policy.” Oxford Review of Education 26 (3/4): 353–363.
  • Haas, Peter M. 1992. “Introduction: Epistemic Communities and International Policy Coordination.” International Organization 46 (1): 1–35.
  • Henry, M., B. Lingard, F. Rizvi, and S. Taylor. 2001. The OECD, Globalisation, and Education Policy. London: Pergamon Press.
  • Heyneman, S. P. 1993. “Quantity, Quality, and Source.” Comparative Education Review 37 (4): 372–388.
  • Lave, J., and E. Wenger. 1991. Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
  • Lindblad, S. 2018. Education by the Numbers and the Making of Society: The Expertise of International Assessments. New York: Routledge.
  • Lundgren, U. P. 2011. “PISA as a Political Instrument.” In PISA Under Examination: Changing Knowledge, Changing Tests, and Changing Schools, edited by M. A. Pereyra, H.-G. Kotthoff, and R. Cowen, 17–30. Rotterdam: Sense Publishers. doi:https://doi.org/10.1007/978-94-6091-740-0_2.
  • Lyons, R. 1964. “The OECD Mediterranean Regional Project.” American Economist 8 (2): 11–22.
  • Martens, K. 2007. “How to Become an Influential Actor – The ‘Comparative Turn’ in OECD Education Policy.” In Transformations of the State and Global Governance, edited by V. K. Martens, A. Rusconi, and K. Lutz, 40–56. London: Routledge.
  • Martens, K., and A. P. Jakobi. 2010. Mechanisms of OECD Governance. Oxford: Oxford University Press.
  • Morgan, C. 2009. The OECD Programme for International Student Assessment: Unravelling a Knowledge Network. Saarbrucken: VDM Verlag.
  • Morgan, C. 2011. “Constructing the OECD Programme for International Student Assessment.” In Pisa Under Examination, edited by M. A. Pereyra, H.-G. Kotthoff, and R. Cowen, 47–59. Rotterdam: Sense Publishers.
  • National Commission on Excellence in Education. 1983. A Nation at Risk: The Imperative for Educational Reform: A Report to the Nation and the Secretary of Education. Washington, DC: US Department of Education.
  • National Research Council. 1995. International Comparative Studies in Education: Descriptions of Selected Large-Scale Assessments and Case Studies. Washington, DC: National Academies Press. doi:https://doi.org/10.17226/9174.
  • Nowotny, H., P. Scott, and M. Gibbons. 2003. “Introduction: ‘Mode 2’ Revisited: The New Production of Knowledge.” Minerva 41 (3): 179–194. doi:https://doi.org/10.1023/A:1025505528250.
  • OECD Centre for Educational Research and Innovation (CERI). 1996a. Education at a Glance: Analysis. Paris: OECD.
  • Organisation for Economic Co-operation and Development. 1964. Acts of the Organisation, 3 vols. Paris: Organisation for Economic Co-operation and Development Archives.
  • Organisation for Economic Co-operation and Development. 1971. Activities of OECD in 1971: Report by the Secretary-General. Paris: Organisation for Economic Co-operation and Development.
  • Organisation for Economic Co-operation and Development. 2012. “Historical Summary of CERI’s Main Activities.” OECD. Accessed May 14, 2020. http://www.oecd.org/education/ceri/50234306.pdf.
  • Ozga, J. 2020. “The Politics of Accountability.” Journal of Educational Change 21 (1): 19–35. doi:https://doi.org/10.1007/s10833-019-09354-2.
  • Papadopoulos, G. S. 1994. Education 1960–1990: The OECD Perspective. Paris: Organisation for Economic Co-operation and Development.
  • Resnik, J. 2006. “International Organizations, the ‘Education–Economic Growth’ Black Box, and the Development of World Education Culture.” Comparative Education Review 50 (2): 173–195.
  • Spring, J. H. 2015. Economization of Education: Human Capital, Global Corporations, Skills-Based Schooling. New York: Routledge.
  • Star, S. L. 2010. “This is Not a Boundary Object: Reflections on the Origin of a Concept.” Science, Technology and Human Values 35 (5): 601–617.
  • Star, S. L., and J. R. Griesemer. 1989. “Institutional Ecology, ‘Translations’, and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology.” Social Studies of Science 19: 387–420.
  • Star, S. L., and K. Ruhleder. 1996. “Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces.” Information Systems Research 7: 111–134.
  • Townsend, P. 2007. “Preface.” In International Handbook of School Effectiveness and Improvement: Part 1, edited by P. Townsend, 3–26. Dordrecht: Springer.
  • Tröhler, D. 2014. “Change Management in the Governance of Schooling: The Rise of Experts, Planners, and Statistics in the Early OECD.” Teachers College Record 116: 1–26.
  • Tuijnman, A., and N. Bottani. 1994. Making Education Count: Developing and Using International Indicators. Paris: Organisation for Economic Co-operation and Development.
  • United States Department of Education. 1985. Indicators of Education Status and Trends. Washington, DC: US Department of Education.
  • Ydesen, C., and K. E. Andreasen. 2020. “Historical Roots of the Global Testing Culture in Education.” Nordic Studies in Education 40 (2): 149–166. doi:https://doi.org/10.23865/nse.v40.2229.
  • Ydesen, C., and S. Grek. 2020. “Securing Organisational Survival: a Historical Inquiry Into the OECD’s Work in Education During the 1960s.” Paedagogica Historica 56 (3): 412–427. doi:https://doi.org/10.1080/00309230.2019.1604774.