
Evidence-Based Governing? Educational Research in the Service of the Swedish Schools Inspectorate

Pages 642-657 | Received 28 Aug 2020, Accepted 03 Feb 2021, Published online: 05 Mar 2021

ABSTRACT

The rise in the use of scientific evidence as a basis for educational policymaking has been a noticeable feature globally. In this study, we describe and discuss how educational research is used to make policy and governing evidence-based. To illustrate this, we use the Swedish Schools Inspectorate (SSI) as a case and focus on two of the processes it performs: regular supervision and quality audit. We present interviews with inspection personnel and researchers involved in these processes along with observations and documents. Our case description shows that despite the SSI’s efforts to base its work on knowledge (evidence) gained through educational research, it also had to use both embodied and enacted forms of knowledge. Research knowledge was chosen and redrafted to form a unified picture of how to act in educational practice, thus giving the work of school inspection a governing power that legitimises and sustains particular national policies.

Introduction

The global rise in the use of scientific evidence as a basis for policymaking has arguably been one of the most salient features of educational policymaking in many countries. The evidence-based movement has spread from the field of medicine into other fields such as education and has also “become part of the machinery of audit and governance” (Gray et al., 2009, p. xiv). Currently, there is a strong belief that professional practices in education should be based on—or at least informed by—evidence and that evidence should inform political decisions (Biesta, 2010; Krejsler, 2013; Slavin, 2020; for an overview of the field of education, see Wiseman, 2010). Underlying the objective of basing policy and governing on research or evidence is the “assumption that, if they were research-based, policies or practices would be more likely to maximise desirable outcomes” (Hammersley, 2013, p. 31; cf. Weinberg, 2019). Hence, scientific evidence can carry direct implications for an inefficient educational practice in need of improvement. According to the classic version of this model, evidence can only come from research and not from practice itself (Davies et al., 2000; Hammersley, 2013). Practice, it has been argued, needs reform because it relies on malfunctioning “tradition, prejudice, dogma, and ideology” (Hargreaves quoted in Hammersley, 2013, p. 17).

There exists a rich body of research literature that critically appraises the methodological foundations of evidence-based policy and practice (McKnight & Morgan, 2020; Pawson, 2002), particularly its experimentally grounded approaches (e.g., Bridges et al., 2009; Hammersley, 2013; National Research Council, 2012; Oliver et al., 2014; Otto et al., 2009; Slavin, 2002; Smeyers & Depaepe, 2006). In their extensive review of research into evidence-based policy, Oliver et al. (2014) conclude that there is still a lack of more detailed studies exploring what actually influences policy. Rather few empirical studies have focussed on how research knowledge is used to make policy work evidence-based. More knowledge is needed concerning the various pathways through which research findings are translated into policy. This article contributes by going behind the scenes of policy and governing work to describe some of the processes often obscured from closer inquiry.

In this article, we describe how educational research is used in policy and governing work in the context of school inspection. The use of governing and governing work points to our conception of governance as the actual work performed in the assemblage of places, people, policies, practices and power (Clarke, 2015, p. 21). Furthermore, we understand school inspection as a mediator between national policy and local educational practice. The work of inspection is therefore part of the policy and governing work of education. We situate the Swedish Schools Inspectorate (SSI) as an example or case of a specific organisation that consolidates arrangements between research, policy and governing, and we explore two particular activities—or mini-cases—performed by the SSI: (a) regular supervision and establishment of a common knowledge base for that supervision, and (b) quality audit, including how the SSI has planned, designed and conducted that audit.

Thus, the aim is to describe and discuss how educational research is used to make policy and governing work evidence-based. We address the issue through the following questions:

• Through what processes does the SSI’s policy and governing work involve and use educational research and researchers?

• What are the possible implications for the governing of education when educational research is used to make school inspection evidence-based?

Two clarifications are needed at this juncture. First, we use the concepts evidence-based and research-based interchangeably to address the strong belief that scientific evidence and research knowledge form—or should form—the bases for policy and practice in governing. We document specific processes of the integration—perhaps even colonisation—of research and expert opinion and the struggle of those voices to gain a foothold in real policy formation and enactment. Second, our objective here is to illuminate the processes that are common in policy and governing work—in Sweden as elsewhere. To this end, we offer a rich and rather unique insight into the work of the SSI and the ideas and procedures that govern some of their activities. Following the case study approach (Stake, 1995, 2006), we provide a detailed case description and subsequently discuss it in relation to our theoretical resources.

Next, we introduce our theoretical resources. In the subsequent sections, we outline the case study approach employed and the material used in this article. We also provide an account of our case at hand (the SSI), including its two mini-cases. We finish by discussing our case in relation to the theoretical concepts and research questions.

Policy and Governing Work, Evidence and Knowledge

Ozga (2008) observes that, in contemporary societies, research knowledge is a resource for the work of policy and governing, but policy and governing work also govern research knowledge. Starting from such a perspective, the SSI can be expected to use existing research knowledge and to produce knowledge that influences policy in one way or another. Thus, the SSI’s work to develop inspection policy (doing policy) involves using research knowledge. The SSI tries to practise this policy in its inspection processes, which in turn influence and govern educational practice in one way or another (see Ehren et al., 2015; Grek & Lindgren, 2015a) as well as produce knowledge on which schools may or may not act. The governing potential in school inspection greatly relies on what is inspected and how it is assessed because inspection directs the attention of schools and their governing bodies to what they have to demonstrate and how they should perform (Grek & Lindgren, 2015a; Hall, 2017; Novak, 2019).

Traditionally, the knowledge or evidence generated by research is considered the foundation for evidence-based policy, whether undertaken inside or outside of government agencies. Research-based knowledge usually refers to the product resulting from the systematic analysis of current and past conditions and trends and from the analysis of the inter-relationships between conditions and trends. Hence, research-based knowledge includes general evidence of broad trends and provides explanations of social and organisational phenomena; it also includes specific evidence generated through performance indicators and programme evaluations (Nutley et al., 2002; Oakley et al., 2005). Research-based forms of knowledge primarily consist of the work of professionals trained in systematic approaches to gather and analyse information. Generally, researchers who use the evidence-based approach aspire to produce the knowledge required for fine-tuning programmes and constructing guidelines and toolkits to deal with known problems (Head, 2008). Many researchers take for granted that improving research knowledge or evidence, and using it in decision- and policymaking, will improve the results of the policies and their implementation (Oliver et al., 2014; Weinberg, 2019).

The role of knowledge in policy and governing work can be analytically divided into what Issakyan and Ozga (2008) call knowledge as informing policy and knowledge as forming policy. This distinction points to different conceptions of the relationship between knowledge, policy and governing. While the former represents the idea that research knowledge is something outside of, or separate from, policy and governing work (cf. Wildavsky, 1979), the latter represents the idea that knowledge is an integrated part of policy and governing work. However, the analytical division between knowledge as informing policy and knowledge as forming policy may not be so clear in actual policy and governing work, since both views can be relevant at the same time.

A fuller appreciation of the complexities of governing work entails recognition that policy-relevant knowledge exists in multiple forms. Hence, there is not one basis for evidence but several bases (Pawson et al., 2003; Schorr, 2003). Disparate bodies of knowledge represent multiple sets of evidence that inform and influence policy rather than determine it. Nevertheless, as Head (2008) argues, we must place this work in a wider context. Governing work therefore depends on several evidentiary bases, all involved in developing and assessing the quality of programmes. Different kinds of knowledge and the corresponding views of evidence are especially salient in policymaking.

Freeman and Sturdy (2014) propose that three forms of knowledge can be discerned in policy work: embodied, inscribed and enacted. These forms of knowledge may shift from one phase to another. Embodied knowledge is confined to human bodies, including tacit and verbal knowledge, and is only mobile when human bodies move. Inscribed knowledge is written in texts, represented in pictures and diagrams, or incorporated in instruments and tools that represent the world and mediate and inform our interactions with the world. This type of knowledge is mobile and easy to reproduce. Finally, enacted knowledge denotes what is done with the two former forms of knowledge. Freeman and Sturdy (2014) argue that knowledge is meaningless until it is put to work or acted on. This means that enacted knowledge, such as verbalising our thoughts in collective activities, may later change to inscribed knowledge when the collective statements are put into texts, where they become new knowledge. Enacted knowledge is highly variable and unpredictable and is therefore also often innovative and collective. The SSI likely makes use of all forms but perhaps for different purposes.

Researchers have pointed to the importance of observing when and how research is used in policy and governing work; thus, it is interesting to study which research knowledge is used in terms of forms/phases, which areas are selected, and for what purposes particular knowledge (and not other knowledge) is used. Moreover, an emphasis on embodiment and enactment draws particular attention to the role of actors—the inspectors and researchers—who carry their knowledge into the processes of policy and governing work.

Methods and Data

We base this article on information that we collected in a larger research project on school inspection and governing (Note 1). Here, we use Stake’s (1995, 2006) approach to case studies. This study is instrumental, as our aim is to describe and discuss a particular issue within a specific setting. The particular issue is how educational research is used to make policy and governing work evidence-based. The specific setting is the SSI, which serves as an example of an organisation that consolidates arrangements between research knowledge, policy and the public. Two mini-cases (Stake, 2006)—that is, cases embedded in the larger case of the SSI—will illustrate how this arrangement takes form. These mini-cases reveal the processes by which knowledge gained from educational research is integrated into decisions about actions and the practitioners' work within these processes of policy and governing work. Although our material was collected quite some time ago (2011–2013), our mini-case descriptions may provide insight into similar processes.

The first mini-case, designated (A), profiles the inspection activity called regular supervision and the establishment of a common knowledge base for this type of activity (see further descriptions of SSI activities, below). Before we collected material, the SSI had developed a standardised process description and had used it to guide the inspectors in their regular supervisions of each school. These inspections focussed on four main assessment areas: (a) a school’s attainment or goal fulfilment and results, (b) educational leadership and development, (c) the learning environment and (d) individual pupils' rights. The SSI was strongly informed by research knowledge when developing these areas, as this mini-case will show.

The second mini-case, designated (B), consists of one particular example of the activity called quality audit; it includes how this activity was planned, designed and conducted. In quality audits, the SSI performs an in-depth audit of a specific and well-defined problem area within the school system. Similar to regular supervisions, quality audits are also based on the national objectives and guidelines, but focus primarily on various “quality aspects” perceived as needing improvement; for example, a subject’s (English, physics, mathematics, etc.) contents and format or the head teachers' work as educational leaders. A quality audit is conducted on a sample of approximately 30 schools, with the findings considered as nationally representative of the problem area. This type of inspection activity is explicitly aimed at supporting school development, although quality audits also always assess student attainment in relation to national goals—that is, in relation to the national curriculum and syllabi (Skolinspektionen, 2010, p. 7).

To describe and discuss how educational research is used to make policy and governing work evidence-based, we have selected information relevant to our research questions from the collected data. For mini-case A, we drew from transcripts of individual interviews with seven members of the head management group and three inspectors and juridical experts at the SSI. For mini-case B, we drew from transcripts of individual interviews with two education researchers engaged in the quality audit process. In addition, we drew from observation notes on two internal meetings we attended at the SSI and documents concerning the quality audit. We obtained informed consent for all interviews and observations. The interviews were semi-structured and inquired into the processes behind the formation of the common knowledge bases for regular supervision (mini-case A) and the quality audit (mini-case B), respectively. Each recorded interview lasted for 45–60 min and was transcribed. Although our attendance in the internal SSI meetings may have influenced the discussions among personnel, we were surprised by their honest communication and by how thoroughly they discussed issues of uncertainty despite our presence. Although we knew the researchers by name, none of us had worked with them previously, meaning that there are no conflicts of interest. We found them to be very open in their responses to our questions about their role in the development of the quality audit process.

Our descriptions of the mini-cases reflect the difference in the amount and type of information we collected. Mini-case A is based more on retrospective interview questions, and mini-case B is based more on real-time observations and interview questions. To maintain confidentiality, we refrain from referring explicitly to what area case B (the quality audit) addressed. Likewise, we use the informants' positions when reporting quotations from the interviews.

The Swedish Schools Inspectorate

After being absent for more than a decade, school inspection was reintroduced in Sweden in 2003. The government commissioned the National Agency for Education (NAE) to carry out regular school inspections in a six-year cycle. From its inception in the 1990s, the NAE had been commissioned to undertake broader evaluations and, to some extent, to conduct research, as well as to fund educational research by external scholars. This was altered in the 2000s, and research was no longer included in the NAE’s commission.

In autumn 2008, a new inspection agency, the SSI, commenced operation (Note 2). As a new state authority, the SSI placed a strong political emphasis on meeting demands from the government to increase the general level of student attainment, raise the level of pupils' subject knowledge and safeguard the principle of equivalence (Rönnberg, 2014)—the latter emerging as an important objective after World War II (Note 3). These ambitions led the SSI to start fresh when developing and designing its different inspection processes. One starting point was a requirement in the Swedish Education Act that schooling must be based on “scientific grounds and proven experience” (SFS 2010:800, Ch. 1 §5). Another was that the SSI was to inspect all pre-schools, primary and secondary schools, adult education (Note 4) and special schools proportionally in a three-year cycle (Skolinspektionen, n.d.a).

When the SSI was established, the government commissioned it to carry out four main tasks: (a) perform regular supervision of all schools and governing bodies in a three-year cycle; (b) conduct quality audits in which a sample of schools are audited thematically; (c) address complaints from individuals (e.g., concerning bullying); and (d) manage licences for independent schools. The agency was organised into five regional departments, and the head management group consisted of the director general, the director of inspections, five department head managers and managers for central functions such as communication, internal support, personnel and law.

The SSI has informed the public that it makes use of research knowledge in the planning, designing and execution of its auditing activities—for example, in research reviews, judgement criteria and reports. Documents such as the national curriculum along with research and proven experience are more prominent in quality audits than in regular supervision.

Moreover, the SSI further stresses the importance of scientific methods: “In order to achieve trustworthiness and reliable results, it is imperative that conventional and scientifically established methods are used” (Skolinspektionen, n.d.b). To facilitate this and develop a common terminology within the inspectorate concerning quality audits, an evaluation expert at Gothenburg University was also commissioned to develop a thesaurus (ibid.). The emphasis on methods was further manifested in the internal documents developed to guide the inspectors in their work. In the very comprehensive and detailed document entitled “Process descriptions for quality audits”, the purpose was clearly stated as “… promot[ing] good and uniform quality in the assessment work” (Skolinspektionen, 2010, p. 7).

Another indication of this conscientious effort to base inspections on research knowledge and scientific methods was the deliberate recruitment strategy utilised when the SSI was launched. They aimed to employ persons so that one-third of the inspectors had a “general investigative background”, meaning that they had some academic/research background. To support the agency in its inspection work, the SSI also invited scholars to be listed as research resources (Skolinspektionen, 2011a). This resulted in a list of persons with high levels of research training who are predominantly used in quality audits.

The Advisory Council (Insynsråd in Swedish) at the SSI was another link to the research community and the type of evidence that research could produce. The council consisted of 10 individuals who would give advice to the director general. During the period when we collected information, three of these 10 persons were employed at universities as senior researchers. One scholar who had been a member of this council for several years had conducted research on successful schools in Sweden for a long period of time.

Case A: Regular Supervision

Our first mini-case describes the development and implementation of an inspection activity called regular supervision and the establishment of a common knowledge base for it. Since regular supervision was the most comprehensive activity at the time we collected our information, we investigated how its particular assessment areas and indicators were developed and determined. Although we discovered that regular supervision is strongly related to and based on juridical/legal considerations—even more so since July 2011 (Note 5)—we also found that educational research plays a significant role in the development of assessment areas (see also Hult & Segerholm, 2016). Based on the interviews we conducted with SSI staff, we describe the establishment of a knowledge base for regular supervision and the prioritising of assessment areas as follows.

One informant from the working group that was formed to develop regular supervision stated that the new inspection activities for the SSI were being developed and designed parallel to the closure of the NAE’s inspection department in 2008. This group consisted of persons from the SSI’s head management group, some inspectors with a general investigative competence, and one person with an educational background and experience in inspection in the NAE. All informants emphasised that regular supervision was to be based on legal and regulatory governing instruments such as the Education Act (SFS 2010:800) and the Education Ordinance (SFS 2011:185); thus, they stated, all assessment areas and indicators needed a clear relation to those documents.

However, this first working group did not move forward in proposing a design or deciding on a basis for the assessment areas beyond the legal and regulatory documents. Our informants told us that these documents do not explicitly point out what is meant by, for example, “good local quality assurance work” (juridical expert B) or what should comprise “good teaching and learning” (national officer), particularly in terms of what to assess. The legal documents required interpretations to relate them to educational practice. According to one informant who was invited to work on this assignment, the director general was impatient when the working group had not produced any results after four months. Our informant said:

I was invited to this group halfway through this time. And together with educational researchers from X University who were also part of this, we came into this. And we went two steps forward, one step back, one forward, two back. Well, it was very difficult to know (…) to work with. (National Officer)

This person described the process as work by a limited number of persons from the working group who started to discuss and consider how to proceed. They believed and proposed that research on successful schools would help in determining what made certain schools produce good results:

And it was actually we who said that we should look at successful schools instead. Let us look at research from Sweden and from other parts of the world and check schools that are doing well and achieving good results—what do they do? (National Officer)

Apparently, the working group chose one particular type of evidence or research knowledge in the development of assessment areas. However, in educational research and social science in general, there are often several areas of research knowledge that may be useful in policy and governing work (National Research Council, 2012). The SSI’s juridical experts also corroborated this unidirectional choice of research on successful schools. They all stressed the importance of educational research in the process of developing assessment areas. Juridical expert B stated:

As far as I understand it, they [the assessment areas, our clarification] are supported by laws and regulations, but they are generated from what is considered important in schooling based on research and then controlled so that there is juridical or legal support in these requirements. So this is the way the process has evolved (…) not the other way around. (Juridical expert B)

The assignment to develop assessment areas based on the political objective of increasing the level of attainment was also stressed in this process:

the assessment areas, the idea is that it should be such areas … should be directed to such areas that we believe, as research believes, are important for good results. Good attainment in schooling. (Juridical expert B)

Eventually, the working group proposed about 10 assessment areas that they had developed from research on successful schools. They then presented these to the head management group, which, according to one of the juridical experts, was quite involved in the final design of regular supervision.

That is, the head management was very active in this work, too. It was constant feedback and comments. Because they also had their particular ideas of how it should be, what was important and not, and about the design. And what to prioritise. (Juridical expert A)

This informant nuanced the description of the common knowledge base for developing regular supervision by telling us that some attention was also paid to experiences drawn from the inspection at the NAE and their inspectors. This shows that, in this policy-work process, different actors brought their particular embodied knowledge to the meetings, discussions, negotiations and drafting of the assessment areas. The assessment areas, in turn, were finally decided and put down in writing, inscribed in instructions to inspectors and in guidelines to schools and governing bodies (Freeman & Sturdy, 2014).

The initial assessment areas and indicators have changed over the years (Baxter et al., 2015). The SSI has constantly updated them on its webpage and also developed instructional materials to inform and help schools and governing bodies to adhere and adjust to the shifting requirements. Furthermore, the SSI promotes and stresses the responsibilities of schools and governing bodies, both public and private, through webinars and web-based material where head teachers can test and check whether their schools meet the requirements. Again, these SSI activities are aimed at preparing schools to take a more active role in adjusting to and complying with the requirements specified in the assessment areas and indicators. This need for compliance became even more evident when the most recent Education Act and Education Ordinances were implemented in 2011, in which the SSI was allowed to use sanctions. Gormley proposes that “the use of scientific information by public officials, when it occurs, is more likely to involve justification (reinforcement of a prior opinion) than persuasion (conversion to a new opinion)” (Gormley, 2011, pp. 978–979, cited in National Research Council, 2012, pp. 40–41). Following Gormley, it is clear that the SSI’s choice of research knowledge was an effort to base regular supervision on evidence in accordance with the political objectives of enhancing student attainment and sustaining the principle of equivalence.

Case B: Quality Audit X

The second mini-case (B) is an example of a quality audit that illustrates how this process is planned, designed and conducted. When we collected information, the SSI had instructions for quality audits called “Model for process description”. According to this instruction, researchers were invited to participate in an audit after the SSI had internally formulated a directive. This meant that researchers became involved when the audit area had been identified and when information and documentation concerning the knowledge base (a research review) and the juridical framework had been assembled. The design and questions (definition of the problem), expected effects and possible success factors should also have been identified (Skolinspektionen, 2011b).

For this quality audit, the role of researchers as reference persons was defined in the project plan. Their responsibilities were to advise in the following areas: the knowledge review and its scope, the orientation and relevance of the question formulations, the design of the judgement basis and criteria, and the choice of methods. Furthermore, they were to assess the knowledge review, the choice of methods, tools, implementation and resource dimensioning, and the content of results and conclusions, and also to monitor the process to ensure that data collection and analysis were done in a scientifically established way and that the question formulations, judgement basis and criteria in the project plan were illuminated and answered (Skolinspektionen, 2011c).

The project leader argued that the SSI was primarily looking for researchers with methodological competence and research expertise within the area of the audit. Interviews with the two researchers engaged in the reference group confirmed this picture, but they also said that they were able to have a say in the design of the quality audit.

The standardised and presumably unbiased and objective audit process described in the internal instruction resembles the “gold standard” approach to evidence-based practice (Gray et al., 2009, p. 28). The process displays rigorous formal elaboration in terms of tools, data collection, processing, analysis, documentation and communication, all to be carried out in a way that is methodologically and ethically sound, juridically correct and equivalent (Skolinspektionen, 2011b). All quality audits at the time were formalised through process descriptions, guidelines, templates, internal quality assurance (including participation of co-assessors, project leaders, juridical experts, regionally responsible decision makers, communication secretariats and [internal] contractors) and external quality assurance (representatives from the area of audit and science journalists). Hence, the inscribed regulation of the work (Freeman & Sturdy, 2014) of the inspectors and researchers was extensive.

The Making of a Knowledge Review

Following the design of evidence-based approaches, the knowledge review was a crucial component of the audit. According to the SSI, it was expected to document the “current state of knowledge”:

Research findings, in particular national but also international results if applicable, and results from investigations and evaluations within the particular audit area are gathered, analysed and summarized. Observations and experiences from The Swedish Schools Inspectorate with relevance for the audit area are also summarized. The synthesis of the material shall render possible a critical view of the overall picture and an independent opinion about the knowledge area. Specific sources of reference shall be provided and notes and descriptions about previous results shall be explicitly separated from their analysis. (Skolinspektionen, 2011b)

Although the very idea of “a current state of knowledge” is itself debatable, this was not a problem in this particular audit, according to the project leader, since:

Research is very unanimous when it comes to what the success factors are. Everyone doing research on Y agrees on Z not being anything controversial or so. (…) And that is not only in Sweden but internationally as well (…) But I can imagine that there have to be other considerations within other audits where there is not as much consensus in the research field. Which researcher are you supposed to listen to? What is right [sic] in research? (…) Who decides whom to listen to? (…) Who are we not listening to? I think these are very interesting questions. There are no [internal] policies about this. I want to make this clear: it is up to the individual inspector. (Project Leader)

The quote conveys the SSI’s awareness of potential disagreements within the scientific community. As argued in the internal guidelines: “Complexity characterizes most knowledge areas. In order to define the problem, a conscious standpoint must be taken in relation to potential contradictory research results that might exist” (Skolinspektionen, 2011b). In this particular quality audit, the solution to this problem was to define the research area paradigmatically, i.e., in a way that resolved any potential criticism or conflicting results.

The Role of Researchers

Of particular interest is that so much time and effort was dedicated to managing the human presence by defining the participants' different roles in the audit process. The researchers' role was not only defined in the formal project plan; according to the project leader, it was crucial to discuss this issue at the very beginning of the quality audit process, to clearly set out expectations and the SSI’s mission.

Researcher 1 confirmed this picture:

Yes, they used to be quite strict about this at SSI—to talk about the role (…). There are certain things that are fixed from the very beginning; those things are not negotiable. There are some things that we can affect and others that we can raise questions about, but with the reservation that they might not take our suggestions into consideration. At the end of the day, it is up to the quality audit processes within SSI to decide on many of these questions. (Researcher 1)

In other words, to be part of a quality audit, researchers must accept formal agency rules and compromise their academic or personal ideas and values. Many of the differences between policy work and academic work were laid open to avoid conflicts. Numerous considerations must be made before accepting a role as an expert in a quality audit. The researchers mentioned the risk of losing independence in relation to state agencies and talked about the difficulty of critically examining policies with which they had been involved.

Researchers saw that their role was to calibrate or “soften” normative aspects within the agency and the overall claims and interpretations made in the quality audit:

It has appeared as if there are sort of “hard” truths when it comes to [the audit area]: you [teachers] are supposed to do it like this and like that. And they [the SSI] had made the interpretation that it was possible to assess if a practice was more or less right or wrong. So in relation to research, they had these ideas initially, but through the reference group, we lowered the ambitions quite a lot. (Researcher 1)

Somewhat paradoxically, researchers worked to counteract aspects of the evidence-based approach to audits, in part to make sure that teachers' perspectives and situations were acknowledged. As one researcher said:

What we did, my research colleague and me, was to talk a lot about teachers' situation. We talked a lot about teachers' working conditions … In that way we made ourselves spokespersons for teachers' experience. (Researcher 2)

In this way, researchers ensured that professional experience—and not only national goals, guidelines, regulations and research evidence—became part of the audit, and moreover, they worked to protect teachers from what they considered unjustified criticism.

During one of the meetings in the reference group, a draft version of the final report was discussed, and the two researchers introduced the idea that one of the main problems in this particular area of teaching was the curriculum (and not the teachers):

Project Leader: All the municipalities have made efforts to implement the new curriculum, but there is a large share of the teachers that are not confident [when it comes to aligning their work to the curriculum]. Why is that? Is there reluctance?

Researcher 2: But really—who understands [the new curriculum]? These documents are not finished. (Observation notes)

The part of the audit process leading to the final official report is of particular interest. The question of what knowledge finds its way into such reports is thus crucial. The next section presents the researchers' attempts to influence the final report as well as the limits of their influence in this respect.

The Final Report and the Struggle Over Inscription

One goal of a quality audit is to generate “good examples”, i.e., methods of operation that serve as role models or inspiration for practitioners. This idea does not sit comfortably among researchers who seek nuanced and contextual interpretations of social phenomena (Steiner-Khamsi, 2013). The two researchers (together with one representative from a teacher union) tried to challenge the concept of good examples in the reference group.

Researcher 1: Is it going to read “good examples” [in the final report]? Are you going to point out that “this” and “that” is desirable?

Teacher union representative: How do you judge what is good by using evidence?

SSI investigator: By using legal support.

Researcher 1: Are you going to use the idea of good examples at all or are you going to say: this is what the ordinance says and we have seen three examples of how to solve this? Because it is not possible to say what is the best way of doing things—it depends on the circumstances, the local history etc. (…) I don't know what the ordinance says, but you cannot make the argument that this solution is the best one in all possible contexts. (…) The risk is also that you point out examples that are not very good, and, in this way, you are setting the standards rather low. (Observation notes from reference group meeting)

According to the project leader, the SSI management treasured the idea of good examples as a part of the audit: “Begler [the director general at that time] said that it is important that we collect good examples”. However, the final report only mentions the concept of “good example” once. According to Researcher 2, the reference group might have had an influence in this respect.

Nevertheless, there are other examples where opinions favoured by the reference group, including the SSI representatives (who often seemed to agree with the researchers), were excluded from the final report. For example, although the audit had identified the problem that the curriculum was difficult to understand and implement, the final report argued that teachers were insecure and said nothing about the reasons for this insecurity that the reference group had discussed. Researcher 2 argued that this was an example of how internal power relations within the agency govern official presentations:

Here we can see that the inspectorate has become “The Inspectorate” again. And I believe that it has to do with this mystical instance within the agency. I can't recall what they call it—it is some kind of analytical unit [the communication secretariat] that scans every document. In this case, they did this kind of “cleaning.” (Researcher 2)

Overall, the two researchers introduced and stressed the importance of contextual factors in understanding the teachers' situation, but this perspective was not present in the final report. “We want to see things in a somewhat larger perspective, but there were people in the reference group that were not open to this” (Researcher 2).

Both project leaders were quite ambitious, and they wanted to include these kinds of problematisations and context descriptions in the report, but, in the end, they settled for something else. (…) I feel that it would have been very good if this mode of reasoning would have been put forward more clearly so that teachers are not made scapegoats all the time. (Researcher 1)

The communication secretariat edited the final report, which is an example of what Smith (2000, p. 343) calls “internal redrafting”, i.e., the process in which state agencies censor or rectify reports. In this sense, inscribed knowledge (Freeman & Sturdy, 2014) is open to manipulation before it becomes permanent.

Discussion

This article was motivated by our interest in the relationship between research knowledge, policy and governing work in education as well as a wish to explore the involvement of educational research and researchers in these complex relationships. Focussing on the role of educational research and knowledge in the SSI’s policy work, we have described how educational research is used to make policy and governing work “evidence-based” through the establishment of regulatory organisational systems, standardised practice cultures and guidelines, knowledge reviews, models of implementation and networks, as well as through the embodied knowledge carried by the SSI inspectors, head management and researchers involved in these policy work processes. In this final section, we discuss some observations made in the above mini-cases.

The most important characteristic of case A is that it was based on research knowledge considered relevant to the governmental commission to increase the general level of student attainment and raise the level of students' subject knowledge, along with safeguarding the principle of equivalence. Research on successful schools qualified in that sense. This kind of research translates well between nations and has been surprisingly stable over the decades, emphasising more or less the same factors of success globally and over time. This does not mean that all research on successful schools was equally appropriate or possible to use. The research selected from the literature on successful schools had to be translated and interpreted and then adapted to fit the national and political conditions at the time and the SSI’s specific needs (Ozga, 2011). In addition, the research (and hence assessment areas) used in case A needed to fit the strong juridical base and requirements of Swedish school inspections. This involves slightly different considerations than are necessary for translation, interpretation or borrowing (Ozga, 2011).

For case A, we argue that the use of knowledge to inform policy plays a more prominent role than the use of knowledge to form policy (Issakyan & Ozga, 2008). The concept of knowledge informing policy is rather similar to the concept of “the rational reform paradigm” (Wildavsky, 1979), where research knowledge is viewed as separate from the policy process but is used as a rational basis for decision making (cf. Hammersley, 2013). It also largely resembles the kind of use Weiss (1979) labelled “instrumental”, meaning that research is more or less applied directly to solve a policy problem.

The type of knowledge used in this case is predominantly inscribed (Freeman & Sturdy, 2014), where already existing research reviews and the like were important. However, embodied knowledge held by experienced inspectors and individuals who had worked in the NAE was also used. According to Freeman and Sturdy (2014), inscribed knowledge is rather stable and perhaps particularly suitable in processes of informing policy and governing work. This kind of knowledge nevertheless needs to be interpreted and translated to fit the particular political context at that time and is therefore also open to “cherry-picking”. Furthermore, since there are no responsible researchers involved to safeguard its usage, the research may be moulded to fit policy and agency intentions (Note 6). However, a translation process was needed in which the SSI personnel mediated this type of knowledge between government policy, the head management group at the SSI, and their own educational experience and previous experience as inspectors in the older inspection agency.

The selection of assessment areas is probably one of the most important factors in directing the attention of schools and governing bodies, meaning that the knowledge base—research in this case—used to develop the assessment areas becomes rather influential in governing education. Research on successful schools became the “evidence” on which to base the new inspection policy and practice.

Case B highlights the role of educational research and researchers in quality audit X. This particular audit was not an example of evidence-based policymaking in any conventional sense. Research was mostly used to identify problems and potential solutions within the particular audit area, resembling conceptual use (Weiss, 1979).

However, case B also shows how research is used to correct malpractice as the SSI identified it and to orient schools' work to follow regulations and best practice. The SSI approach is an example of how state agencies simplify complex realities in order to make them possible to administrate and control (Scott, 1998). Research must be organised and packaged to fit the purposes of the quality audit in order to govern education. Conflicting perspectives and scientific pluralism would cause confusion and problems relative to the agency’s mission. The paradigmatic organisation of the knowledge base (the knowledge review) and the recruitment of researchers are two examples of how such conflicts are avoided.

Overall, the audit process resembles a “forensic style” (Smith, 2000) of conducting investigations. The audit is not conducted to find anything unexpected and new or to discover any variations of practical solutions; thus, “evidence was assembled in support of a policy conclusion, rather than findings emerging from a more dispassionate analysis of the data” (Smith, 2000, p. 337). As noted by Smith (2000), this is a strategy to set the policy agenda, but we argue that it could also be a strategy to make professionals follow this agenda. Hence, the quality audit violates a crucial component in the search for evidence: namely, that what is evident “is always something surprising that widens the field observed” (Eriksson & Martinsen, 2012, p. 625). Instead, the SSI engaged in what has been described as a “postmodern” approach to evidence-based policy: “Information is gathered and used to enhance particular meanings and interpretations of practice and to establish a common understanding of its impact” (Gray et al., 2009, p. 68). The aim is to frame accepted knowledge and interpret experiences in the teaching profession through discourse and not through the discovery of new knowledge. The assumption is that the situation in schools is problematic and that reform is necessary. As pointed out by Researcher 1: “Teachers have great difficulties breaking loose from traditional perspectives … (…) In this field [the area of quality audit X], there is a tradition that to a certain extent contradicts the ideas on how to work according to the new policy documents.” Hence, policy implementation is needed, and the overall starting point was not to use the practical knowledge of teachers as a resource other than in terms of good examples, but to correct and replace this knowledge in line with the statutes and research. Thus, when we asked the researchers about the role of proven experience within the audit, they became hesitant: “Proven experience? I can't see that it was acknowledged. I can't recall that discussion at least” (Researcher 1).

Case B also shows how the researchers sought to influence the process to make the audit nuanced and reflexive. The role of research, according to these particular scholars, was to bring professional experience into the audit process, thus informing policy work (Issakyan & Ozga, 2008) to balance the evidence-based approach. For example, the researchers challenged the agency’s use of “best practice”, i.e., the idea that it is possible to identify “good examples” that can be transferred to schools on a national level.

When we collected information, “quality audit” carried somewhat different connotations than did regular supervision (see case A), which focussed primarily on regulations and control. Authentic examples from schools were accompanied by references to research. Inscription (Freeman & Sturdy, 2014), i.e., the final report, has the potential to speak to school actors nationally and build a common understanding of how to interpret and implement national goals and regulations. Research is used as a resource in producing and communicating this report. It is both a method that provides logos and a source of legitimation that establishes ethos. Thus, the SSI harbours both a “soft” and “hard” approach to evidence-based education policy (Gray et al., 2009, p. 142). The SSI, with its composite of professional groups and traditions, is thus able to respond to different audiences and critics and position itself in relation to governments, local politicians, policy makers, researchers and school actors across time.

Based on our findings in the two cases, we argue that inspectors—and especially the project leaders—are crucial since they must be able to draw on different forms of knowledge. They are caught between agency management claims and conflicting claims made by researchers and other stakeholders in reference groups. They must manage, negotiate, translate, interpret and connect formal and rigid internal rules with suggestions made by internal management and expertise within the SSI, other stakeholders in reference groups and researchers, both in meetings and in text. Therefore, the skill to “stitch together” different forms of knowledge (Tenbensel, 2006, p. 210) is an important ability in this kind of policy work. In a way, inspectors “are designated as relays and brokers of knowledge; their mission is to ‘transform the political project in a situated reality’ (Cucu, 2014) and to manage and increase efficiency in relation to predefined goals” (Grek & Lindgren, 2015b, p. 181). Enacted knowledge is required in this process, but, as argued by Freeman and Sturdy (2014, p. 10), this form of knowledge is “essentially transient: it endures only as long as the enactment itself.” Of course, the same competencies are required of researchers who are engaged in this kind of policy work. They must be able to compromise and combine different forms of knowledge in concrete situations. Their roles as experts might be highly circumscribed by formal directives that serve to police and regulate the “inherent uncertainty and indeterminacy of enacted knowledge” (Freeman & Sturdy, 2014, p. 13), but their roles also involve a kind of mastery that goes beyond such schemata. Researchers' work within the SSI’s practices resembles a process Lave and Wenger (1991, p. 29) described as “legitimate peripheral participation”, i.e., a process involving different practitioners and their “activities, identities, artifacts and communities of knowledge and practice.” It is a form of learning that enables swift changes between roles and relations and the amalgamation of the disparate worlds of science and policy.

Conclusions

Here we return more directly to our research questions. Concerning the question of the processes through which the SSI’s policy and governing work involves and uses educational research and researchers, we identified two: the first involves searching for, using and compiling already existing research that fits a particular idea of educational practice and the political decisions “of the day” (case A; research on successful schools); the second involves actively engaging education researchers who specialise in areas the SSI perceives as problematic (case B). Both processes leave education research and education researchers in a familiar position of potential unease (National Research Council, 2012, pp. 27, 40–50): they are eager to be of use and to influence policy and practice while, at the same time, they risk being used for political purposes, which undermines their authority and political independence.

Educational research knowledge, researchers and research-based methods are used in the different inspection activities as arguments to legitimise the inspections. The SSI’s position on this includes official language that embraces detached, objective and inscribed knowledge. This knowledge type sustains and underscores temporary political ambitions set by the government. But, as we have shown, the SSI actually uses many different types of knowledge and evidence. The process of making policy and governing work evidence-based rests on human interaction, social skills (such as handling negotiations and dispute resolution), experience and embodied knowledge (Freeman & Sturdy, 2014). To maintain the ambition to conduct all inspection work in an evidence-based manner, a broad range of knowledge and skills that traditionally do not count as evidence have to be used.


The second question concerns what the possible implications for the governing of education are when educational research is used to make school inspection evidence-based. By using particular research knowledge for identifying assessment areas, as in case A, and for producing the knowledge review, as in case B, the SSI makes certain directions for educational practice clear, unambiguous and controllable. The particular research knowledge used is expected to document “the current state of knowledge.” However, the very concept of a “current state of knowledge” is itself debatable. As Piaget (1971, p. 3) points out:

The current state of knowledge is a moment in history, changing just as rapidly as the state of knowledge in the past has ever changed and, in many instances, more rapidly. Scientific thought, then, is not momentary; it is not a static instance; it is a process. More specifically, it is a process of continual construction and reorganization.

Kuhn (1970, p. 6) describes social sciences as a “tradition of claims, counterclaims, and debates over fundamentals.” Moreover, and according to Bernstein, education belongs to a form of social science characterised by “weak grammar” whose language does not “have an explicit conceptual syntax capable of relatively precise empirical descriptions and/or of generating formal modelling of empirical relations” (Bernstein, 2000, p. 163). If correct, this observation presents an important predicament for education when it comes to the prospects for translating evidence, or “truth”, into policy.

The engagement of education researchers to shed light on specific educational practices, as in case B, also makes certain directions for educational practice clear, unambiguous and controllable. As such, the inspection processes, in both cases A and B, produce certain knowledge about education practice sustaining and amplifying a particular hegemonic knowledge paradigm. In this way, research knowledge helps to point out what is important and how to act in schools, and by doing so also interacts with, and becomes part of, the governing work of education.

Acknowledgements

We would like to thank the three anonymous reviewers for their valuable comments.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Swedish Research Council under grants [2009-5770] and [2007-3579]; Umeå University under grant [223-514-099]; MidSweden University, Faculty of Humanities; and Umeå University, Faculty of Social Sciences.

Notes

1 This information comprises transcribed interviews with principals, teachers, representatives for governing bodies and inspectors involved in inspection processes at 13 schools (approx. 70 individuals); 10 transcribed interviews with the SSI head management group including juridical experts; 10 transcribed interviews with national policy makers; 20 interviews with nationally randomly selected principals; observations from 13 inspection processes including a handful of internal meetings at the SSI; and an extensive number of documents from the schools and the SSI.

2 The NAE previously carried out the SSI's tasks. For a comprehensive analysis of the political motives for separating the supervisory function from the NAE and forming the SSI as a new agency under the government in 2008, see Rönnberg (2014).

3 Englund and Quennerstedt (2008).

4 Universities and university colleges have a different supervisory authority. Thus, they are not included in the SSI’s commission.

5 Since July 1, 2011, the SSI has been allowed to use sanctions, such as fines, if governing bodies do not comply with its decisions (SFS, 2010:800).

6 We wish to emphasise that our conclusions do not imply that the SSI has intentionally misrepresented this kind of research and evidence for successful schools.

References

  • Baxter, J., Grek, S., & Segerholm, C. (2015). Regulatory frameworks. Shifting frameworks, shifting criteria. In S. Grek, & J. Lindgren (Eds.), Governing by inspection (pp. 74–95). Routledge.
  • Bernstein, B. (2000). Pedagogy, symbolic control and identity: Theory, research, critique (Revised ed.). Rowman and Littlefield, Inc.
  • Biesta, G. J. J. (2010). Why ‘what works’ still won't work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29(5), 491–503. https://doi.org/10.1007/s11217-010-9191-x
  • Bridges, D., Smeyers, P., & Smith, R. (2009). Evidence-based education policy: What evidence? What basis? Whose policy? Wiley-Blackwell.
  • Clarke, J. (2015). Inspections: Governing at a distance. In S. Grek, & J. Lindgren (Eds.), Governing by inspection (pp. 11–26). Routledge.
  • Cucu, A. S. (2014). Producing knowledge in productive spaces: Ethnography and planning in early socialist Romania. Economy and Society, 43(2), 211–232. https://doi.org/10.1080/03085147.2014.883795
  • Davies, H. T. O., Nutley, S. M., & Smith, P. C. (Eds.). (2000). What works? Evidence-based policy and practice in the public services. Policy Press.
  • Ehren, M. C. M., Gustafsson, J. E., Altrichter, H., Skedsmo, G., Kemethofer, D., & Huber, S. G. (2015). Comparing effects and side effects of different school inspection systems across Europe. Comparative Education, 51(3), 375–400. https://doi.org/10.1080/03050068.2015.1045769
  • Englund, T., & Quennerstedt, A. (2008). Likvärdighetsbegreppet i svensk utbildningspolitik. (The concept of equivalence in Swedish education policy). In T. Englund, & A. Quennerstedt (Eds.), Vadå Likvärdighet. Studier i utbildningspolitisk språkanvändning. (What about equivalence. Studies in the use of language in education policy). (pp. 7–35). Daidalos.
  • Eriksson, K., & Martinsen, K. (2012). The hidden and forgotten evidence. Scandinavian Journal of Caring Sciences, 26(4), 625–626. https://doi.org/10.1111/scs.12012
  • Freeman, R., & Sturdy, S. (2014). Introduction. In R. Freeman, & S. Sturdy (Eds.), Knowledge in policy. Embodied, inscribed, enacted (pp. 1–17). Policy Press.
  • Gray, M., Plath, D., & Webb, S. A. (2009). Evidence-based social work – a critical stance. Routledge.
  • Grek, S., & Lindgren, J. (Eds.). (2015a). Governing by inspection. Routledge.
  • Grek, S., & Lindgren, J. (2015b). Why inspect? Europe, knowledge and Neo-liberal narratives. In S. Grek, & J. Lindgren (Eds.), Governing by inspection (pp. 172–183). Routledge.
  • Gormley, W. T., Jr. (2011). From science to policy in early childhood education. Science, 333(6045), 978–981.
  • Hall, J. B. (2017). Examining school inspectors and education directors within the organisation of school inspection policy: Perceptions and views. Scandinavian Journal of Educational Research, 61(1), 112–126. https://doi.org/10.1080/00313831.2015.1120234
  • Hammersley, M. (2013). The myth of research-based policy and practice. SAGE Publications.
  • Head, B. W. (2008). Three lenses of evidence-based policy. Australian Journal of Public Administration, 67(1), 1–11. https://doi.org/10.1111/j.1467-8500.2007.00564.x
  • Hult, A., & Segerholm, C. (2016). The process of juridification of school inspection in Sweden. Utbildning & Demokrati, 25(2), 95–118.
  • Issakyan, I., & Ozga, J. (2008). Chameleon and post-bureaucracy: Changing knowledge about healthcare and education in seven European countries [Unpublished paper]. Centre for Educational Sociology.
  • Krejsler, J. B. (2013). What works in education and social welfare? A mapping of the evidence discourse and reflections upon consequences for professionals. Scandinavian Journal of Educational Research, 57(1), 16–32. https://doi.org/10.1080/00313831.2011.621141
  • Kuhn, T. S. (1970). Logic of discovery or psychology of research. In I. Lakatos, & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 1–23). Cambridge University Press.
  • Lave, J., & Wenger, E. (1991). Situated learning. Legitimate peripheral participation. Cambridge University Press.
  • McKnight, L., & Morgan, A. (2020). A broken paradigm? What education needs to learn from evidence-based medicine. Journal of Education Policy, 35(5), 648–664. https://doi.org/10.1080/02680939.2019.1578902
  • National Research Council. (2012). Using science as evidence in public policy. Committee on the Use of Social Science Knowledge in Public Policy; K. Prewitt, T. A. Schwandt, & M. L. Straf (Eds.). Division of Behavioral and Social Sciences and Education. The National Academies Press.
  • Novak, J. (2019). Juridification of educational spheres: The case of Sweden. Educational Philosophy and Theory, 51(12), 1262–1272. https://doi.org/10.1080/00131857.2017.1401464
  • Nutley, S., Davies, H., & Walter, I. (2002). Evidence-based policy and practice: Cross sector lessons from the UK. Working Paper 9, ESRC UK Centre for Evidence Based Policy and Practice; Research Unit for Research Utilisation.
  • Oakley, A., Gough, D., Oliver, S., & Thomas, J. (2005). The politics of evidence and methodology: Lessons from the EPPI-centre. Evidence and Policy, 1(1), 5–31. https://doi.org/10.1332/1744264052703168
  • Oliver, K., Lorenc, T., & Innvær, S. (2014). New directions in evidence-based policy research: A critical analysis of the literature. Health Research Policy and Systems, 12(34). https://doi.org/10.1186/1478-4505-12-34
  • Otto, H.-U., Polutta, A., & Ziegler, H. (Eds.). (2009). Evidence-based practice - modernising the knowledge base of Social work? Budrich.
  • Ozga, J. (2008). Governing knowledge: Research steering and research quality. European Educational Research Journal, 7(3), 261–272. https://doi.org/10.2304/eerj.2008.7.3.261
  • Ozga, J. (2011). Knowledge transfer and transformation: Moving knowledge from research to policy. Perspectiva, 29(1), 49–67. https://doi.org/10.5007/2175-795X.2011v29n1p49
  • Pawson, R. (2002). Evidence-based policy: In search of a method. Evaluation, 8(2), 157–181. https://doi.org/10.1177/1358902002008002512
  • Pawson, R., Boaz, A., Grayson, L., Long, A., & Barnes, C. (2003). Types and quality of knowledge in social care. Knowledge Review No. 3, Social Care Institute for Excellence. Policy Press.
  • Piaget, J. (1971). Genetic epistemology. W. W. Norton and Company Inc.
  • Rönnberg, L. (2014). Justifying the need for control: Motives for Swedish national school inspection during two governments. Scandinavian Journal of Educational Research, 58(4), 385–400. https://doi.org/10.1080/00313831.2012.732605
  • Schorr, L. B. (2003). Determining ‘What works’ in social programs and social policies: Towards a more inclusive knowledge base. Brookings Institute.
  • Scott, J. C. (1998). Seeing like a state. How certain schemes to improve the human condition have failed. Yale University Press.
  • SFS 2010:800. Skollagen. (The Education Act).
  • SFS 2011:185. Skolförordning. [The Education Ordinance].
  • Skolinspektionen. (2010). Foreword by the director general Ann-Marie Begler. In Terminologihandbok för Skolinspektionens kvalitetsgranskningar [Terminology handbook for the Swedish Schools Inspectorate’s quality audits] (p. 7). Retrieved February 2015, from http://www.Skolinspektionen.se/Documents/Kvalitetsgranskning/terminologihandbok-webb.pdf
  • Skolinspektionen. (2011a). Vill du bli en av Skolinspektionens vetenskapliga experter? [Do you want to be one of the Swedish Schools Inspectorate’s scientific experts? In Swedish.] Dnr 00-2011:2430.
  • Skolinspektionen. (2011b). Skolinspektionens modell för processbeskrivning – processen för kvalitetssäkring [The Swedish Schools Inspectorate’s model for process description – the process of quality assurance]. PowerPoint presentation, internal material.
  • Skolinspektionen. (2011c). Project plan for quality audit X, appendix 3 – responsibility and authority. Internal material.
  • Skolinspektionen. (n.d.a). Regelbunden tillsyn. [Regular Supervision]. Retrieved March, 2017, from https://skolinspektionen.se/sv/Tillsyn–granskning/Regelbunden-tillsyn/
  • Skolinspektionen. (n.d.b). Kvalitetsgranskning. [Quality Audit]. Retrieved February 2015, from http://www.Skolinspektionen.se/sv/Tillsyn–granskning/Kvalitetsgranskning/
  • Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21. https://doi.org/10.3102/0013189X031007015
  • Slavin, R. E. (2020). How evidence-based reform will transform research and practice in education. Educational Psychologist, 55(1), 21–31. https://doi.org/10.1080/00461520.2019.1611432
  • Smeyers, P., & Depaepe, M. (2006). Educational research: Why ‘what works’ doesn’t work. Springer.
  • Smith, G. (2000). Research and inspection: HMI and OFSTED, 1981–1996 – a commentary. Oxford Review of Education, 26(3), 333–352. https://doi.org/10.1080/713688535
  • Stake, R. E. (1995). The Art of case study research. SAGE Publications.
  • Stake, R. E. (2006). Multiple case study analysis. Guilford.
  • Steiner-Khamsi, G. (2013). What’s wrong with the ‘what-went-right’ approach in educational policy. European Educational Research Journal, 12(1), 20–33. https://doi.org/10.2304/eerj.2013.12.1.20
  • Tenbensel, T. (2006). Policy knowledge for policy work. In H. K. Colebatch (Ed.), The work of policy: An international survey (pp. 199–216). Lexington Books.
  • Weinberg, J. (2019). Who’s listening to whom? The UK house of lords and evidence-based policy-making on citizenship education. Journal of Education Policy. https://doi.org/10.1080/02680939.2019.1648877
  • Weiss, C. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431. https://doi.org/10.2307/3109916
  • Wildavsky, A. (1979). Speaking truth to power: The art and craft of policy analysis. Little, Brown and Company.
  • Wiseman, A. W. (2010). The uses of evidence for educational policymaking: Global contexts and international trends. Review of Research in Education, 34(1), 1–24. https://doi.org/10.3102/0091732X09350472