School Effectiveness and School Improvement
An International Journal of Research, Policy and Practice
Volume 20, 2009 - Issue 1

School self-evaluation and student achievement

Pages 47-68 | Received 08 Feb 2007, Accepted 02 Sep 2008, Published online: 10 Mar 2009

Abstract

In the last 2 decades, educational systems have developed accountability policies in which schools maintain autonomy for their pedagogical, instructional, and organizational practices (internal control). At the same time, they are held accountable to public authorities (external control) for the quality of their education. It is not clear whether these contrasting types of accountability policies contribute to the improvement of schools' educational quality. In this study, the general research question is whether there is a relationship between school self-evaluation (SSE) and student achievement. Using a database of 81 primary schools and 2,099 students, an analysis of variance and a multilevel analysis show which factors characterize the type of SSE that contributes to students' cognitive achievement. Some SSE perspectives seem positively related to student achievement.

Introduction and research problem

In the last 2 decades, a transition has been noticeable in the governance philosophy of successive national governments, combining devolution of authority with a strong emphasis on the quality of education. According to Eurydice, a useful European database, during this time, improving the quality of education became the central concern of educational policy in many European countries (Wastiau-Schlüter, Citation2004). Several Western European countries created – or were working on – legislation and monitoring in the field of school self-evaluation (SSE), which stressed the schools' own responsibility for quality (Leithwood, Edge, & Jantzi, Citation1999; MacBeath, Meuret, Schratz, & Jakobssen, Citation1999; National Inspectorate of Education, Citation2003; Reezigt, Citation2001; Wilcox & Gray, Citation1996). New models of school regulations based upon accountability measures and evaluation practices were given considerable attention, including standards for schools as prescribed by the National Inspectorates of Education, external student assessment, internal and external evaluations (audits) of schools, and the development of examples of best practice (Broadfoot, Citation1996; Hofman, Dijkstra, De Boom, & Hofman, Citation2005). The objective of these educational policies is to ensure and enhance educational quality and improve schools using a two-sided approach to accountability: a so-called government approach as well as a market-based accountability approach.

A system of school self-evaluation can be understood from several positions depending on the school's goals, ranging from a restricted view that focuses purely on the school's outcomes (output) to a broad perspective in which the school's input, internal processes at the school and classroom levels, and performance are assessed (e.g., the range may include context, input, processes, and output) (Hofman, Dijkstra, & Hofman, Citation2005). In most definitions, school self-evaluation is referred to as a process, directly or indirectly aimed at school improvement. In some cases, school self-evaluation is also regarded as a product, in the sense of the results of the process of school self-evaluation. The concept of school self-evaluation as a process may be narrowly defined as the “check” or “measurement” phase within a system of quality assurance. This is the case in places such as Lower Saxony in Germany, and The Netherlands (National Inspectorate of Education, Citation2006). Even narrower definitions speak of a single measurement instrument, for instance, a satisfaction survey, as being a self-evaluation. School self-evaluation can also be more broadly defined as a systematic process, including cyclic activities such as goal-setting, planning, evaluation, and defining new improvement measures. In these broad definitions, school self-evaluation is almost synonymous with definitions of quality assurance. This is the case in countries and regions such as Belgium/Flanders, Denmark, England, Hesse in Germany, Northern Ireland, and Scotland (National Inspectorate of Education, Citation2006). In both definitions, the process of school self-evaluation is clearly seen as a function or an aspect of school improvement (National Inspectorate of Education, Citation2006, p. 10).

According to the findings of the Dutch Inspectorate of Education, not all schools develop an integrated and systematic approach to SSE. Some schools opt for a restricted form, taking into consideration several bottlenecks and conditions in their school's context (e.g., many ethnic minority pupils, very poor language performance of pupils). The few studies we have available regarding the effects of SSE show a mixed picture, with strong empirical evidence on the effects of school evaluations still lacking. British studies, such as those done by Wilcox and Gray (Citation1996) and Kogan and Maden (Citation1999), suggest that little improvement in the quality of teaching and learning within schools occurs through school inspections. Rosenthal (Citation2004) even showed a slight decline in student achievement levels in the year of the inspection. In contrast, Matthews and Sammons (Citation2004) found evidence of improved quality, especially among the weakest institutions. However, inspection sometimes seems to have unintended negative effects. In their review of the literature, Ehren and Visscher found that inspections can lead to stress and to a higher workload for school staff, as well as window dressing and fear of innovation because of a concern that this will conflict with the school inspection criteria. Schools may even manipulate data in order to be evaluated positively (Ehren & Visscher, Citation2006).

Very recently, the Dutch Central Planning Agency (CPB; Luginbuhl, Webbink, & De Wolf, Citation2007) presented a study on the impact of school inspections. Their main conclusion is that school inspections lead to better school performance. In the first 2 years following an inspection, test scores increased by 2 to 3% of a test score's standard deviation. The improvement in Dutch elementary schools was strongest in the area of arithmetic and persisted over the 4 post-inspection years covered by the data. The analyses also indicated that the more intensive inspections produced larger improvements in school performance than the less intensive ones (Luginbuhl et al., Citation2007).

Furthermore, it seems that schools with unsatisfactory SSE processes might also show poor school performance and lack of quality in the teaching-learning process (National Inspectorate of Education, Citation2005). This leads to the following general research question: Are different approaches to school self-evaluation related to different levels of student achievement?

Theoretical background

Two perspectives are fundamental to the research in this paper. The first perspective observes school self-evaluation from the viewpoint of the actors involved. It distinguishes two main actors in school evaluation: an internal actor (schools) and an external actor (the National Inspectorate of Education). The second perspective focuses on the actual practices and processes of SSE. Theories of effective school management suggest that we may expect different approaches to SSE due to the focus of a school (e.g., the school functioning as a team of teachers, and/or primarily focusing on optimizing the pupils' school career, or acting due to external pressures).

External versus internal school evaluation

Most accountability policies show an interesting combination of central control and steering by the country's central government with relative autonomy reserved for school governing bodies and individual schools. The findings on the topic of SSE make a distinction between the two main roles regarding school evaluation: an internal and an external one (Newmann, King, & Rigdon, Citation1997; Wastiau-Schlüter, Citation2004; Wilcox & Gray, Citation1996). The external function focuses on the safeguarding of the quality standards of schools, and in most European countries a National Inspectorate of Education is responsible for this task. In this respect, the government (through the efforts of the Inspectorates) maintains strategic control over the goals of the education system, based upon standards, objectives, and criteria of success regarding the outcomes of a school. At the same time, the daily management practices remain the particular school's responsibility. The internal function is the responsibility of the schools themselves. They are supposed to determine, guarantee, and safeguard their quality and improve the teaching-learning process and their school performance (Hofman, Dijkstra, & Hofman, Citation2005). In general, several European countries acknowledge that the evaluation of their schools is at the very heart of the quality of schooling and this includes evaluation by an external inspectorate, as well as internal procedures put into practice by the school community itself (Eurydice, Citation2004; Wastiau-Schlüter, Citation2004).

The study of the National Inspectorate of Education (Citation2006) into the effects of utilizing school self-evaluation for inspection purposes in Europe concluded that the position of SSE is stronger in countries where more accountability demands are imposed on it. The position of SSE is weaker when the national evaluation context is more encouraging of improvement only.

The first group of countries consists of England, The Netherlands, Northern Ireland, and Scotland. In these countries, the national evaluation context is labelled as equally supportive of accountability and improvement-oriented SSE. In these four countries, the position of SSE in the inspection system is strong. In the other group of countries and regions (Belgium/Flanders, Denmark, Hesse and Lower Saxony in Germany), the national evaluation context is more supportive of improvement-oriented SSE. In these areas, the position of SSE in the inspection system is weak to moderate (National Inspectorate of Education, Citation2006).

Newmann et al. (Citation1997) studied the connection between organizational management and internal and external types of school accountability and concluded that external accountability seems (a) to fortify the internal monitoring and use of evaluation systems within schools and (b) to promote the search for successes or failures within the schools' educational practices. Supported by the results of the European Pilot project "Quality evaluation in school education", MacBeath et al. (Citation1999) noted that internal and external evaluation are corresponding procedures and their relationship should be plainly articulated. Western European research findings (Chapman & Harris, Citation2004; De Wolf & Janssens, Citation2005) also indicate that more is to be expected from the internal SSE process than from an external focus. About a third of the schools in this project expected significant teaching improvements due to external evaluation, while more than half expected improvement from internal school self-evaluation. With respect to management, more than a third of the schools expected improvements from an external evaluation, while 63% expected significant management improvement from an internal evaluation system (MacBeath et al., Citation1999).

School management theories and SSE

School self-evaluation includes the determination and assessment of the quality of a school and, alongside this, if necessary, the improvement of the school. Both sides of the coin are fundamental to this research. Hofman and Hofman (Citation2003) developed a framework for SSE using relevant standards from an accountability perspective and combined them with a school improvement perspective. Within this framework, management theory focuses on school improvement processes, using a method of integral SSE as a starting point (e.g., Dalin, Citation1993; Deming, Citation1989; Hofman & Hofman, Citation2003; Reezigt, Citation2001; Reynolds & Teddlie, Citation2000; Stoll & Wikeley, Citation1998). In this method, four implementation phases of improvement reflect the so-called Plan-Do-Check-Act cycle (PDCA): The first stage includes orientation and preparation (plan phase), the second involves the implementation of an improvement (do phase), the third concerns the evaluation (check phase), and the final stage concerns routinization or integration (act/adapt phase).

School management strategies are viewed through three approaches to discerning how improvement takes place in a certain school setting: (a) school self-evaluation that views schools as highly reliable organizations (Hofman, Hofman, & Guldemond, Citation2001; Stringfield & Slavin, Citation2001), (b) school self-evaluation that views schools as learning organizations (Arts, Kok, Verbiest, Sleegers, & De Wit, Citation2003; Leithwood, Aitken, & Jantzi, Citation2001), and (c) school self-evaluation developed under pressure from external organizations (Hofman, Dijkstra, De Boom, & Hofman, Citation2005). Each of these theoretical approaches is needed because each focuses on different groups and perceptions in the schools as input for school self-evaluation. Thus, for an overall approach to school improvement in which staff and students as well as the school environment are included, we must take into account all three theoretical notions.

The first theoretical approach views the school as a "high-reliability" organization and focuses on the pupils. High reliability as a concept means that all involved strive for excellence and presume the principle of trial without error, with an optimal school career for all pupils as their goal (LaPorte & Consolini, Citation1991; Stringfield & Slavin, Citation1992). The idea is that an organization cannot permit itself to make mistakes, as the consequences of a mistake would be disastrous. According to Stringfield and Slavin (Citation1992), 12 factors must be considered when characterizing a high-reliability school. However, the improvement of school quality is particularly connected to three central aspects: (a) frequent monitoring and use of rich and extensive data; (b) extensive staff recruiting, training, and retraining; and (c) joint monitoring of staff while not losing autonomy and self-confidence (Stringfield, Reynolds, & Schaffer, Citation2001).

The second theoretical approach views the school as a learning organization focusing on the teachers or staff. Leithwood and Aitken (Citation1995) describe school staff as a group of people who have shared and individual aims and who work out of collective dedication while consistently considering the value of those individual and collective aims and adapting them when it seems necessary, as well as constantly building more efficient and effective methods to realize those aims. This approach regards the learning organization as a dynamic process. The objective is not the realization of a static goal but a continuous emphasis on aims and goals. According to Leithwood, Jantzi, and Steinbach (Citation1995), as learning organizations, schools adjust to their local environment and school population while shaping five factors that stimulate collective learning: (1) vision and mission of the school, (2) school culture, (3) school structure, (4) school strategies, and (5) school policy and means.

A final theoretical approach originates from contingency theory (Mintzberg, 1997, 1998) and includes the perspective that SSE is stimulated by the external community that surrounds a school. For example, schools can be stimulated or forced by the Local Educational Authority or by parents to evaluate and improve their quality. Reezigt (Citation2001) uses the term “external pressure” to describe one of the most important factors in stimulating school improvement. Choice and competition in education have recently found growing support among policy-makers. Yet, evidence of the actual benefits of market-oriented reforms is at best mixed. A study by VanderHoff (Citation2007) into the relationship between the parental evaluation of charter schools and student performance, based on data from New Jersey charter schools, indicated that not all charter schools are equally effective or equally valued. Parents seem to choose academically effective schools, and this research supports a basic tenet for competitive, market-based public school improvement. Hastings, Van Weelden, and Weinstein (Citation2007) also claim that the incentives and outcomes generated by public school choice depend to a large degree on parents' choice behaviour. However, research results also show that performance feedback can both improve and harm school outcomes, and little is known about what factors contribute to these types of effects (Coe, Citation2002). Ehren and Visscher (Citation2006) state that improvement is not only influenced by the role and content of the external inspections and the quality of the feedback but also find the context of the school and especially the support from the environment to be important. A study into factors that relate to schools at risk (Hofman, Citation2005) shows that, alongside contextual factors and the school environment, the governance of schools also plays an important role.

Research model, design, and method

Several theories, approaches, and actors that are viewed as essential to research into the quality management of schools have been presented above. In order to clarify the possible associations, we will now present a conceptual research model (see Figure 1). The scales/indicators mentioned in the model that will be used in the analyses are explained in more detail later in this section.

Figure 1. Conceptual research model.

Schools (internal) and the inspectorate (external) both influence the focus and indicators of school self-evaluation. These concepts determine the level of accountability and improvement of schools. Together, the above-mentioned elements provide input for a typology of the quality management of schools. The study analyses the impact of the typology of schools on student achievement in maths.

Sample

This research into SSE is essentially a study of the current state of affairs in Dutch elementary schools. The gross sample consisted of 1,914 randomly selected primary schools, of which 939 primary school principals responded and completed a questionnaire concerning SSE. Along with this, two other datasets were linked to our study. The first is a database from the National Inspectorate that includes school-level data (on all schools). The second source is a large-scale national study that includes (randomly selected) school and student data (PRIMA-5; Van der Veen, Van der Meijden, & Ledoux, Citation2004). The overlap of both datasets has been used in the study undertaken in this paper. The overlap includes 81 primary schools and 2,099 students.

We compared both our net samples (n = 939 and n = 81) with the total school population in The Netherlands with regard to factors such as achievement levels of the pupils, pupil population, number of pupils in the schools, and degree of urbanization. No significant differences were found.

Variables and scales: SSE

The school self-evaluation study includes information from 939 primary school principals based upon a survey that concerned four dimensions: (1) the perspective or focus of the school on SSE; (2) the specific features of the SSE system used; (3) the extent to which schools are actively implementing measures regarding school accountability and improvement; and (4) the role of the specific actors involved, external support, and the use of specific instruments for SSE. The psychometric information from scales and indicators that are presented in this section can be found in Table 1.

Table 1. Psychometric characteristics of dimensions and indicators of SSE.

Concerning the focus of the school, three reliable scales were constructed to measure the focus of the school in relation to SSE: the “schools as learning organizations” (LO) perspective focuses on the school staff, that is, the expertise of the teachers; the “high-reliability schools” (HRO) perspective focuses on the school career of the pupils; and the third, “SSE influenced by external pressure”, focuses on the involvement of actors surrounding the school (inspectorate, educational authority, governing body, parents), looking at how they have influenced the development of the SSE system. High scores on these scales indicate which specific SSE focus (LO, HRO, external pressure) is most fitting, according to the school principal.

Two reliable scales were developed to describe the evaluation system or process used in the school: The first measures the degree to which the SSE system includes an integrated system (including the information or input of several actors within the school evaluation process); the second is a scale based upon the well-known PDCA cycle introduced by Deming (Citation1989) that focuses on a cyclic approach to the school improvement process, that is, the use of a Plan-Do-Check-Act cycle in the SSE-process. High scores on these scales indicate the school uses an integrated SSE system and follows the relevant phases of the PDCA cycle within the SSE system of the school.

The survey includes subscales concerning "Context/Input", "Processes at school level", "Processes at classroom level", and "Output" reflecting the activities of the school in determining its current position (accountability). In addition, the school improvement subscale reflects the level or stage of improvement of the school according to the items of this so-called CIPPO model, ranging from orientation, implementation, evaluation, to integration within the school. The two overall scales for accountability and school improvement both contain 27 items. These include four subscales each: 8 indicators for context/input, 7 indicators for processes at school level, 7 for processes at classroom level, and 5 items for output, to measure to what degree schools have implemented accountability and school improvement measures. The higher the score, the more fully the school has implemented these aspects (indicators), measuring accountability on the one hand and school improvement on the other.
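The psychometric characteristics reported for such multi-item scales are typically reliability coefficients such as Cronbach's alpha. As an illustrative sketch only (not the authors' actual computation, and using hypothetical item scores), alpha for a set of scale items can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical data: 200 respondents, 8 items driven by one latent trait
rng = np.random.default_rng(3)
latent = rng.normal(0, 1, 200)
items = latent[:, None] + rng.normal(0, 1, (200, 8))
print(f"alpha = {cronbach_alpha(items):.2f}")
```

With items that share a common latent trait, as here, alpha rises towards 1 as the number of items grows; values above .70 or .80 are conventionally read as acceptable or good reliability.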

The last dimension includes scales that operationalize the SSE process, such as the possible influence of the expertise of teachers on SSE, of the interest groups (actors involved), of external support, and the use of specific SSE instruments. Again, a high score on these scales indicates high levels of influence from the expertise of teachers, from the interest groups involved, and from external support, as well as extensive use of specific SSE instruments.

Variables and scales: the National Inspectorate of Education

The National Inspectorate of Education has been quite forthcoming and helpful by making three data files available to our research project. These files include the results or assessments based on the supervision of the schools according to their supervision framework (see Appendix 1). The Inspectorate includes standardized assessments in three domains: (a) the management of school self-evaluation, (b) the quality of the teaching-learning process, and (c) the quality of school outcomes (National Inspectorate of Education, Citation2005).

Inspectorates' scale: quality control (SSE)

Based upon the indicators in the supervision framework and information from the data files of the Inspectorate, reliable scales were constructed (reliability > .80). These scales were standardized and transformed into one combined criterion variable that indicates the degree to which the school has put into practice the indicators of quality control (see Appendix 1) in their school self-evaluation system.
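The standardize-and-combine step described here can be sketched as follows. This is a minimal illustration with hypothetical school scores, not the Inspectorate's data or exact procedure:

```python
import numpy as np

def zscore_composite(scales):
    """Standardize each scale to z-scores and average them into one
    combined criterion variable (rows = schools, columns = scales)."""
    scales = np.asarray(scales, dtype=float)
    z = (scales - scales.mean(axis=0)) / scales.std(axis=0, ddof=1)
    return z.mean(axis=1)

# Hypothetical quality-control scores for four schools on three scales
scores = [[3.2, 2.8, 3.0],
          [4.1, 3.9, 4.0],
          [2.5, 2.6, 2.4],
          [3.8, 3.5, 3.6]]
print(zscore_composite(scores).round(2))
```

Standardizing first puts scales with different ranges on a common footing, so that no single scale dominates the combined criterion variable.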

Inspectorates' scale: quality of the teaching-learning process

This scale was developed based on the indicators of the Inspectorates' supervision framework regarding teaching and learning processes. Using the data files of the Inspectorate, three scales were constructed (reliability: .64, .86, .79). These scales were also standardized and transformed into one combined criterion variable that indicates the degree to which the school has implemented the indicators of the quality of the teaching and learning process: curriculum, learning time, the pedagogical and didactic performance of teachers, the school climate, harmonization with the educational needs of pupils, an active and independent role for pupils, and finally, support and guidance for pupils (see Appendix 1).

Inspectorates' scale: quality of the school outcomes

The constructed variable assesses to what degree the achievement levels of the school's pupils over 3 school years are at, under, or above the level that may be expected of the school's pupil population. The score ranges from 1 (= under the expected level), through 2 to 3 (= at the expected level), to 4 to 5 (= above the expected level) (for more information on the construction process, see Hofman, De Boom, Hofman, & Van den Berg, Citation2005). However, it must be said that this scale shows very limited variance between the schools.

Variables and scales: covariates and cognitive outcomes

A large-scale national database was used for the student data which include three cognitive output measures and student background data.

Maths achievement

The cognitive performances at the pupil level are indicated by the criterion variable maths. This standardized test was developed to measure the general numeric skills of pupils aged 6–12. The test consists of three parts, each with a duration of about 45 min, which the pupils can complete independently. The test contains many open questions. There are in principle two subscales: (a) Numbers and Calculations and (b) Measurement, Time, and Money. The arithmetic/mathematics test for pupils aged 11–12 contains 120 questions. The test was developed by the National Institute for Test Development (Cito). The exact content of the questions can be found in the tests included in the documentation for PRIMA-5 (Van der Veen et al., Citation2004).

Student and school covariates

In order to determine a fair estimate of the achievement levels of students and the effectiveness of schools, we had to take into account the individual characteristics of students (covariates at pupil level) and the school's student body as a whole (school covariates). We have selected the covariates that in earlier studies have been shown to have a serious impact on student achievement (Creemers, Citation1994; Reezigt, Citation2001; Scheerens, Citation1989). In this study, at pupil level, we use three variables as covariates:

socioeconomic background [SES];

pupils' intelligence [IQ];

pupils' gender [sexe].

At school level, we use the following variables as covariates:

number of pupils in school [scale];

place of residence of school [urbanization];

number of schools per governing board, one school versus more [one type of board].

Methods of data analysis

To answer our research question, three types of analyses were conducted: (1) cluster analysis, (2) one-way analysis of variance, and (3) multilevel analysis.

Cluster analysis

We searched for a limited number of basic types of SSE. Cluster analysis is a method that takes the joint functioning of several indicators into account. A hierarchical type of cluster analysis was employed to create configurations using our indicators of school self-evaluation. This type of cluster analysis is known as Ward's method (Wishart, Citation1987). It starts with as many clusters as there are cases, and in the following cycles the most similar clusters are combined. We selected the most relevant number of clusters based on the following three criteria:

  1. The squared fusion coefficients must increase at substantial intervals.

  2. The number of units per cluster has to be substantial.

  3. The interpretation of the clusters has to be clear and consistent with the formulated hypotheses.
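These steps can be sketched with SciPy's implementation of Ward's method. The data below are simulated stand-ins for the schools' z-scored SSE subscales (three artificial groups for illustration), not the study's own indicators:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Simulated z-scores of 100 schools on eight SSE subscales
# (4 accountability + 4 improvement), in three loose groups
schools = np.vstack([rng.normal(-1.0, 0.4, (30, 8)),
                     rng.normal( 0.0, 0.4, (40, 8)),
                     rng.normal( 1.0, 0.4, (30, 8))])

# Ward's method: agglomerative clustering minimizing within-cluster variance
Z = linkage(schools, method="ward")

# Fusion (linkage) coefficients: a large jump signals that merging further
# would combine dissimilar clusters -- criterion 1 above
fusion = Z[:, 2]
print(np.diff(fusion[-6:]).round(2))   # jumps over the last merges

labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])         # cluster sizes -- criterion 2
```

Criterion 3, interpretability, has no numeric shortcut: the resulting cluster profiles must be inspected against the hypothesized SSE types, as the Results section does.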

One-way analysis of variance

This type of analysis was used to investigate whether a relationship could be found between our independent variables (scales) that describe the school self-evaluation system and processes within the school, on the one hand, and the assessments of the National Inspectorate of Education concerning the quality of the schools' school self-evaluation system, the quality of the teaching-learning process, and ultimately, the quality of student performance, on the other. Note that, at this point in the analyses, we are working with school-level data only. As we also made use of types of school self-evaluation (see the following section), a one-way analysis of variance (ANOVA) was used to investigate whether the different types of SSE differ significantly from each other. Significance is based on F tests with post-hoc analysis of the deviations of the clusters (types): – –, –, 0, +, and ++ denote significant deviations between clusters at p < .05.
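Such a one-way ANOVA can be sketched with SciPy. The groups below are simulated Inspectorate-style scores for three hypothetical SSE types; the means, spreads, and group sizes are invented for illustration only:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Simulated quality-control assessments for three hypothetical SSE types
restricted = rng.normal(2.6, 0.5, 73)
average    = rng.normal(3.0, 0.5, 310)
advanced   = rng.normal(3.5, 0.5, 280)

# One-way ANOVA: does mean quality differ across the SSE types?
F, p = f_oneway(restricted, average, advanced)
print(f"F = {F:.1f}, p = {p:.3g}")
```

A significant F only says that at least one group mean differs; the pairwise post-hoc comparisons reported in the text are needed to say which clusters deviate from which.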

Multilevel analyses

In this study, we carried out multilevel analyses, using a hierarchically structured database. Multilevel modelling realistically reflects the nested or hierarchical nature of data found in school-effect studies. Multilevel modelling is particularly suitable for the identification of those school-level attributes that are correlated with student outcomes. The analyses carried out in this study used data from two levels: student and school. In the analysis phase, a number of models were constructed in a step-by-step manner. Typically, an estimate is first made of an unconditional ("empty") model and, using this, the proportion of total variation, that is, parameter variation, can be assessed. The first model (0), which is always formulated with respect to each criterion variable, is the basic model. No explanatory variables are included in this model – it only uses the estimates of the total variance components at every model level.

The next step is to formulate our conditional (theoretical) models (the SSE types and scales) and determine the degree to which they account for true parameter variability (Kennedy & Mandeville, Citation2000). The analyses were carried out with the statistical software program MLWIN, which is able to handle two or multiple-level databases (in this case school and student levels) adequately.
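The logic of the empty model can be illustrated without MLWIN: on simulated two-level data, the school-level share of variance (the intraclass correlation) can be estimated with a one-way random-effects ANOVA decomposition. The numbers below are hypothetical, chosen only to loosely mirror an 81-school, roughly 2,100-pupil structure:

```python
import numpy as np

rng = np.random.default_rng(2)
n_schools, n_pupils = 81, 26
school_eff = rng.normal(0, 2.0, n_schools)        # between-school sd = 2
scores = school_eff[:, None] + rng.normal(0, 6.0, (n_schools, n_pupils))

# Variance components of the "empty" model via one-way random-effects
# ANOVA (a simple stand-in for the MLWIN estimates)
grand = scores.mean()
ms_between = n_pupils * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_schools - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() \
            / (n_schools * (n_pupils - 1))
var_school = max((ms_between - ms_within) / n_pupils, 0.0)
icc = var_school / (var_school + ms_within)
print(f"school-level share of variance (ICC): {icc:.2f}")
```

Conditional models then add explanatory variables and are judged by how much of this school-level and pupil-level variance they account for.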

Results

Can we determine configurations of school self-evaluation?

A central assumption of this research is that “the total is more than the sum of parts” (Lammers, Citation1991; Mintzberg, Citation1979, Citation1989). As a starting point, we assume that, apart from the main effects that point to effective management, the variation in the schools' effectiveness will be more strongly explained by the interactions of these main effects. We will attempt to uncover the joint effects of composed indicator variables (or configurations of SSE indicators) on the students' performance. It is especially in terms of configurations or school types that the basic design of this study should be considered. We searched by means of cluster analysis for a limited number of basic types in SSE.

The cluster analysis (see Table 2) draws on the CIPPO model as a directive framework, with the levels: Context/Input, Processes at school level, Processes at classroom level, and Output. This model gives a summary of the most important aspects (for each level) that are significant for the determination of the quality of the school for both the accountability scale and the school improvement scale. The model covers the aspects that influence the quality of the school and on which research and educational experts agree to a fair degree. The survey data of the elementary schools combine four subscales of the CIPPO model that measure accountability with four subscales for school improvement (z scores): Context/Input (C/I), Processes at school level (PS), Processes at classroom level (PC), and Output (O). Based on these scales, a typology of SSE was constructed.

Table 2. The best fitting SSE cluster solution.

Four empirically based types of school self-evaluation are presented, using descriptive scores of the +, 0, and − kind, based on the significant deviation of each cluster per indicator (significance level p < .05).

Cluster 1 is the smallest and includes 73 of the 939 original primary schools, that is, 8% of the sample of schools in our research. In comparison with the other clusters, the implementation of accountability measures is very low: the schools have not determined their actual position on the four CIPPO levels (Context/Input, Processes at school level, Processes at classroom level, Output) of school self-evaluation (see the section Variables and scales: SSE above). The same tendency is evident for school improvement, with the exception of the score on the subscale Output. The school improvement subscales also reflect the stage of improvement of the school according to the CIPPO model, ranging from orientation, implementation, and evaluation to integration in the school. Schools in Cluster 1 paid some attention to the improvement of their performance in terms of outcomes. Ultimately, this cluster can be characterized as undertaking hardly any SSE, with very few accountability measures (AC) and hardly any school improvement (SI) in the stage of implementation (acronym “AC−SI−”).

Cluster 2 is the largest and includes 33% of all primary schools. These schools score around the average on the accountability subscales (see Table 2). This type of school shows accountability to some extent, the exception being the below-average score on the subscale concerning the Context/Input measures. Their scores on the school improvement subscales are comparable to the low scores of Cluster 1. The schools in Cluster 2 can be described as engaged in average school self-evaluation, paying some attention to accountability and to school improvement that reaches the stage of implementation (acronym “AC0SI−”).

Cluster 3 contains 30% of the Dutch primary schools involved in the research. This cluster can be regarded as the counterpart of the first cluster: the schools score highest on all subscales of accountability and school improvement. The high scores on the school improvement subscales are particularly noticeable. This cluster typifies advanced school self-evaluation, including highly implemented accountability measures and school improvement in the evaluation stage (acronym “AC+SI++”).

Cluster 4 includes almost 29% of the schools studied. It differs from the other clusters in that it exhibits above-average scores on the accountability scales combined with extremely low scores on the school improvement subscales. The scores on accountability are comparable with those of Cluster 3; remarkably, however, the attention paid to school improvement is considerably lower than in Cluster 1. In short, schools in Cluster 4 can be characterized as showing mixed school self-evaluation (acronym “AC+SI−”).

Are SSE and the Inspectorate's assessment of school quality related?

We distinguish three domains that the National Inspectorate uses in its supervision framework to judge the quality of a school's educational processes. Based on these domains, three scales were constructed (see the methods section). The first scale is the “Inspectorate quality-control scale”, which reflects the assessment of the Inspectorate with respect to the involvement of the schools in quality control through the use of SSE at the school level. The second scale is the “Inspectorate quality of teaching-learning process scale”. It reflects the quality of the teaching-learning process of the schools according to the assessment of the Inspectorate. The third scale constructed concerns the outcomes of the schools in terms of the “Inspectorate's student achievement scale”.

The intriguing issue is whether there is a relationship between these three quality scales (based on assessments of the Inspectorate) and our clusters of school self-evaluation (based on school directors' assessments). Table 3 shows the results of this comparison.

Table 3. Comparison of types of SSE and quality assessments of the Inspectorate.

Table 3 shows that a significant difference is visible for one of the constructed quality scales, especially between Cluster 3 and Cluster 4. This concerns the scale indicating the quality of the “teaching and learning process” in the schools. We observed a positive effect for Cluster 3 (advanced SSE) compared with the scores of Cluster 4 (mixed SSE) and of Clusters 1 and 2. Schools characterized by well-implemented accountability measures and that are already at the stage of evaluating their school improvement measures have a significantly better teaching-learning process quality than the other groups of schools.

This indicates that schools with an advanced SSE system show on average a higher quality (according to the Inspectorate) regarding (see Appendix 1: left-hand side) the curriculum, the use of the available learning time, the pedagogical and didactic performance of teachers, the school climate, harmonization with the educational needs of pupils, an active and independent role for pupils, and, finally, the support and guidance of pupils, in comparison with the rest of the schools in our study. Although no significant differences were found for the other two Inspectorate scales, the “quality control” scale shows the same trend: schools with advanced SSE score highest in this respect. Surprisingly, no significant differences were found with regard to student achievement in the sampled schools. However, as stated above, this scale shows relatively little variance between schools.

Does the type of SSE have an impact on pupil cognitive achievement?

This section tests the central hypothesis that there is a relationship between types of school self-evaluation and the cognitive achievement of pupils. The hypothesis was tested on a subsample of 81 schools from our original dataset, comprising 2,099 pupils in Grade 8 of primary school (11- to 12-year-olds). The general hypothesis was specified in the following subhypotheses.

Subhypothesis 1: Pupils in schools using the advanced type of SSE show a higher achievement level for mathematics than pupils in the other three types of SSE.

Subhypothesis 2: Pupils in Cluster 1 schools, which use hardly any school self-evaluation, have lower achievement levels for mathematics than pupils in the other three clusters, especially compared to the Cluster 3 schools with advanced SSE.

Table 4 shows to what degree significant differences were found between the four types of school self-evaluation on the measure of cognitive achievement.

Table 4. Significant differences in cognitive achievement between SSE types (subset).

Significant differences are shown for the maths achievement measure. However, our first subhypothesis must be rejected: the third SSE type (Cluster 3) does not score highest on maths achievement. Table 4 further shows that the second subhypothesis is confirmed: the schools that implemented hardly any school self-evaluation measures (Cluster 1) scored significantly lower on mathematics achievement.

However, to compare the four types of SSE fairly, we have to take the pupil population and other characteristics of the schools into account. Multilevel analysis provides a fitting framework for such a comparison.

Multilevel analyses

Firstly, we estimated which part of the total variance is situated at the school level and which part at the student level. By including student input characteristics in the second model and school input characteristics in the third multilevel model (the so-called covariate models), we determined the fair effectiveness scores (value added) of the schools. The “value added” is the difference between the actual achievement and the predicted achievement, where the prediction is generally drawn from covariates such as intelligence and social class. After identifying these models, we present further, theoretically based educational models that estimate the degree to which types and characteristics of school self-evaluation account for differences in maths achievement.
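The value-added idea in this paragraph can be sketched as follows: regress achievement on pupil covariates, then average the residuals (actual minus predicted) per school. This is a simplified OLS stand-in for the article's multilevel covariate models, on invented data; the covariate names (iq, ses) and every coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pupil-level data: 500 pupils spread over 20 schools,
# with covariates IQ and family SES (all names and numbers invented).
n, n_schools = 500, 20
iq = rng.normal(100, 15, n)
ses = rng.normal(0, 1, n)
school = rng.integers(0, n_schools, n)
school_effect = rng.normal(0, 2, n_schools)
maths = 10 + 0.3 * iq + 1.5 * ses + school_effect[school] + rng.normal(0, 3, n)

# Predicted achievement from pupil covariates alone (OLS with intercept)
X = np.column_stack([np.ones(n), iq, ses])
beta, *_ = np.linalg.lstsq(X, maths, rcond=None)
predicted = X @ beta

# Value added per school: mean (actual - predicted) over its pupils
residual = maths - predicted
value_added = np.array([residual[school == s].mean() for s in range(n_schools)])
```

A proper multilevel model would instead estimate the school effects as random intercepts, shrinking small-school estimates towards the mean, but the residual-averaging view conveys the same idea.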

We used one measure of cognitive achievement: maths outcomes of pupils in Grade 8 in primary school. If no substantial between-school variance can be found, a further search for predictors of achievement differences at school level makes no sense.

Table 5 shows the results of the multilevel analyses. After the so-called empty model, four different models were tested, starting with covariate models at the individual and school levels. Subsequently, the typology was incorporated into the analysis, and finally, we included indicators of internal and external evaluation.

Table 5. Multilevel analyses for maths (Grade 8).

The results for Model 0 in Table 5 show that 13% of the variance in maths achievement lies between schools. This outcome indicates that it makes sense to search for predictors at the school level that could account for differences in achievement levels between schools.
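The 13% figure is an intraclass correlation: the share of total variance attributable to schools in the empty model. Below is a minimal sketch that recovers it via method-of-moments variance components on balanced synthetic data, generated to have roughly that split; all sample sizes and variances are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical balanced data: 80 schools, 25 pupils each, generated so
# that about 13% of score variance sits between schools (13 vs 87).
n_schools, n_per = 80, 25
school_eff = rng.normal(0, np.sqrt(13.0), n_schools)
scores = school_eff[:, None] + rng.normal(0, np.sqrt(87.0), (n_schools, n_per))

# Method-of-moments variance components for a balanced one-way design
grand = scores.mean()
school_means = scores.mean(axis=1)
ms_between = n_per * ((school_means - grand) ** 2).sum() / (n_schools - 1)
ms_within = ((scores - school_means[:, None]) ** 2).sum() / (n_schools * (n_per - 1))
sigma2_school = max((ms_between - ms_within) / n_per, 0.0)

# Intraclass correlation: between-school share of total variance
icc = sigma2_school / (sigma2_school + ms_within)
```

Software such as MLwiN estimates the same two variance components by (restricted) maximum likelihood, which also handles unbalanced school sizes.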

Next, we included the student-level covariates in the model. The students' intelligence, gender, and the socioeconomic status of the family show substantial positive relationships with maths performance in Grade 8 of our primary schools. These results are in line with outcomes of studies into effective schools and classrooms that try to explain the role of contextual factors in school effectiveness research (Teddlie, Stringfield, & Reynolds, Citation2000).

The following step concerned the inclusion of school-level covariates. This step is crucial to prevent a school's student population from, by itself, determining the school's measured effectiveness. Willms (Citation1992, p. 41) stated: “The composition of a school's intake can have a substantial effect on pupils' outcomes over and beyond the effects associated with pupils' individual ability and social class.” Table 5 shows that the only school-level covariate that exerts an influence on maths achievement, even after the inclusion of student-level characteristics, is the number of schools that a school board governs. Using boards with one school as the baseline, we observed substantially higher achievement levels for pupils in schools whose boards govern more than one school.

We now come to the heart of our research. The two theoretical models concerning characteristics of school self-evaluation, represented by Clusters 1–4, were included in the multilevel analysis. Firstly, the four types of SSE (with Cluster 1, hardly any SSE, as the baseline) and, secondly, the indicators of the SSE process (perspectives/focus of the SSE, actors, external support, and SSE instruments) were included in the analysis (see Table 5).

The results presented in Table 5 can be summarized as follows:

Type of school self-evaluation does not seem to matter. Students in schools with or without advanced SSE show no differences in maths achievement.

Schools that function as a single school under one board perform less well than schools that fall under boards governing several schools: Grade 8 students in the latter schools perform better in maths.

Schools that we can typify as “learning organizations” perform significantly better in maths.

Conclusion and discussion

The last 2 decades have shown a transition in the governance philosophy of national governments, combining devolution of authority with a strong emphasis on the quality of education. Most accountability policies combine central control with a relative growth in the autonomy of school governing bodies and individual schools. Pressure to maintain school quality and improvement may be exerted in two ways. The quasi-market model (Harris & Herrington, Citation2006) assumes pressure to improve from a rational-consumer viewpoint, whereby competition between schools is perceived as the primary means to quality improvement (Chubb & Moe, Citation1990). Within this approach, a strong element of parental school choice empowers parents to influence the quality of their children's schools, especially when supported by published performance data providing evidence of the academic achievements of their chosen schools (Reynolds, Muijs, & Treharne, Citation2003, p. 84).

The contrasting model, government-based accountability, also includes a form of regulation based on results (Harris & Herrington, Citation2006), but this is employed in education systems (e.g., in several states in the USA, France, and Portugal) in which educational authorities do not officially promote free school choice. In these settings, quality improvement is stimulated more through external evaluation mechanisms. School reports provide performance details based on the external assessment of pupils or on an internal audit of the organizational or educational processes within the school. This information is fed back to school staff in order to motivate them to improve their performance. Placing pressure on and regulating schools is a task of national or local governments, and in particular it is achieved through formal engagement with the schools. In England and in some American states (e.g., Texas), this type of public-sector pressure can even lead to strong intervention by the government, resulting in the takeover of a school or a restructuring of its staff.

These approaches towards accountability must be regarded as signs of the significance of schools and of the chosen priorities of a government's policy. In both cases, however, it is crucial to investigate to what extent the chosen accountability policies are based only upon academic achievement within schools or whether information about the school's student population is also incorporated, including classroom observations of the teaching and learning processes, as occurs in The Netherlands and some American states (Ellet & Teddlie, Citation2003; Hofman, Dijkstra, & Hofman, Citation2005).

Although many countries employ certain accountability policies, policy-makers and researchers still lack evidence on the real impact of this latest line of reforms. In our research, we observed specific conditions or contexts that influence the development of SSE practices. These practices are related to the accountability systems described, and we detected some trends with respect to what seems to contribute to high-quality schools.

Types of school self-evaluation

When we commenced our research, we had two major concerns. Firstly, we wanted to explore whether it was possible to distinguish different types of SSE systems in Dutch primary schools. The key dimensions that make up what we refer to as SSE are accountability (determining the school's actual position regarding quality) and school improvement. This links with the Dutch Inspectorate of Education's approach to accountability. Secondly, if such types were to be found, the question was how they relate to the quality assessments of the National Inspectorate of Education.

Four clusters or types of school self-evaluation were developed: Cluster 1 (hardly any SSE), Cluster 2 (average SSE), Cluster 3 (advanced SSE), and Cluster 4 (mixed SSE).

Three theoretical notions were formulated as guidelines in this study: schools as high-reliability organizations, as learning organizations, and as mainly anticipating external pressure (contingency).

Accountability and SSE types of schools

In general, schools that do very little to develop an SSE system especially lack a clear focus or vision on SSE. Interestingly, such schools feel the least encouraged by the Inspectorate of Education to improve their SSE system, and they are also less influenced by external organizations or the community around their school. It is even more interesting that schools with mixed SSE (Cluster 4) are indeed positively encouraged by the Inspectorate and more strongly influenced by external organizations and other school support. The question thus arises whether the Cluster 4 schools (mixed SSE) were encouraged to work on their quality at an earlier stage because they lagged behind and showed insufficient school quality according to the Inspectorate's assessment. From the perspective of contingency theory (Mintzberg, 1997), it is plausible that school self-evaluation can be positively stimulated by external pressure from the community around the school. According to Reezigt (Citation2001), external pressure is one of the most important factors stimulating school improvement. Based on an analysis of 30 improvement projects in eight European countries, Reezigt developed a comprehensive framework for effective school improvement, in which improvement is stimulated through pressure from external evaluations, external agents, and market mechanisms such as competition between schools (Reezigt, Citation2001, p. 33). Our study shows that an external focus on SSE (through the Inspectorate, the Local Education Authority, or the parents) could stimulate schools that lag behind. Moreover, it is helpful to learn that at least 83% of all 939 schools in our sample value the support of the Inspectorate to some extent (Hofman, Dijkstra, De Boom, & Hofman, Citation2005).

On the other hand, schools that have already accomplished a high level of SSE seem to possess certain internal characteristics that are of importance to SSE (a learning organization and high-reliability approach). The high-reliability schools project of Reynolds, Stringfield, and Schaffer (Citation2002) showed that a programme of school improvement based on insights from the knowledge gained from high-reliability research, school effectiveness, and school improvement, is linked with a school's enhanced “added value” with respect to pupil achievement. The authors showed that individual aspects of the HRS “technology of practice” are differentially associated with these gains. Furthermore, they make clear that “maintenance of the school environment”, “mutual monitoring of staff”, and “data richness to improve the processes of the organization” are particularly important determinants of the degree of gains in pupil achievement over time.

In terms of school self-evaluation and the quality of schools, our research confirmed the hypothesis that schools which implemented very few SSE measures (Cluster 1) score significantly lower in mathematics. Furthermore, a significant relationship was found between the quality of the teaching-learning process and the SSE clusters: schools with advanced SSE score highest on the teaching and learning scale. This again indicates that, across the teaching and learning aspects listed earlier (curriculum, learning time, teachers' pedagogical and didactic performance, school climate, harmonization with pupils' needs, pupil autonomy, and pupil support and guidance), schools with an advanced SSE system are on average of higher quality according to the Inspectorate than the rest of the schools in our study. Further research should clarify whether specific elements play a more or less dominant role in this teaching and learning dimension.

Furthermore, it is remarkable that the clusters with low and average SSE scores received better assessments from the Inspectorate than the mixed cluster. This suggests that implementing an average or even minimal SSE policy may be better than pursuing opposing accountability and improvement policies. For now, we take this outcome to stress the importance of consistency between accountability and improvement policies.

SSE and schools as learning organizations

The multilevel analyses showed the relevance of the theory of schools as learning organizations. A definition of a “learning organization” used by many authors is that of Leithwood and Aitken (Citation1995, p. 40):

A group of people pursuing common purposes (and individual purposes as well) with a collective commitment to regularly weighing the value of those purposes, modifying them when that makes sense, and continuously developing more effective and efficient ways of accomplishing those purposes.

Those primary schools typified as “learning organizations” optimize the talents of their staff so that they can contribute maximally to the quality of the school. Furthermore, these are schools with high innovation capacities that are able and willing to respond optimally to contextual changes.

These analyses once again seem to support the theory that teachers, or even more so the team of teachers, are central to the processes through which schools are stimulated to pursue improvement. Further research should keep in mind that attention has to be paid to the schools' interpretations of an accountability policy and to the processes in the school and classroom that constitute the implementation of such a policy. Although accountability policies are constructed at the central government level, they are ultimately shaped by the different actors in a specific school. Each teacher's understanding and interpretation is influenced by social interaction with colleagues and by characteristics of the school environment. A context-specific analysis of pressure to improve and the school's improvement efforts must not be forgotten (Coburn, Citation2005).

Finally, we recall that the few studies available on the effects of school self-evaluation provide a mixed picture. Strong empirical evidence on the effects of school evaluations is still lacking; it has even been suggested that school evaluation may sometimes have unintended negative effects. This study adds some evidence supporting the possible positive effects of school self-evaluation on school quality and student achievement. In particular, the outcomes suggest that school self-evaluation policies that are strongly driven by both accountability and a desire for improvement have a positive impact.

Notes on contributors

Roelande H. Hofman is Associate Professor and senior researcher at GION, Groningen Institute for Educational Research, University of Groningen, The Netherlands. Her main research interests are school effectiveness, innovations, school management, and school governance. She is also Director of the International MSc. in Education of the University of Groningen.

Nynke J. Dijkstra works as a policy-maker and researcher at Hanze University Groningen. Her work focuses on research and quality assurance, covering quality measurement, monitoring, and the use of quality assurance within the university.

W.H. Adriaan Hofman is Professor of Education at the University of Groningen. He specializes in school effectiveness and school improvement, higher education, education in developing countries, and research methods. He is also director of the University Centre for Academic Learning and Teaching (UOCG) of the University of Groningen.

Acknowledgements

The National Inspectorate of Education has been quite forthcoming and helpful by making three data files available for our research project. These data files of the National Inspectorate include the results and assessments of schools according to the supervisory framework.

References

  • Arts , J. , Kok , J. , Verbiest , E. , Sleegers , P. and De Wit , C. 2003 . Professionele ontwikkeling en schoolontwikkeling [Professional and school development] , The Hague, , The Netherlands : Q*Primair .
  • Broadfoot , P. 1996 . Education assessment and society. A sociological analysis , Buckingham, , UK/Philadelphia : Open University Press .
  • Chapman , C. and Harris , A. 2004 . Improving schools in difficult and challenging contexts: Strategies for improvement . Educational Research , 46 ( 1 ) : 219 – 228 .
  • Chubb , J. E. and Moe , T. M. 1990 . Politics, markets and America's schools , Washington, DC : The Brookings Institute .
  • Coburn , C. E. 2005 . Shaping teacher sensemaking: School leaders and the enactment of reading policy . Educational Policy , 19 ( 3 ) : 476 – 509 .
  • Coe , R. 2002 . “ Evidence on the role and impact of performance feedback in schools ” . In School improvement through performance feedback , Edited by: Visscher , A. J. and Coe , R. 3 – 26 . Lisse, , The Netherlands : Swets & Zeitlinger .
  • Creemers , B. P.M. 1994 . The effective classroom , London : Cassell .
  • Dalin , P. 1993 . Changing the school culture , London : Cassell .
  • Deming , W. E. 1989 . Out of the crisis , Cambridge, MA : MIT Press .
  • De Wolf , I. F. and Janssens , F. J.G. 2005 . Effects and side effects of inspections and accountability in education: An overview of empirical studies , Amsterdam : Scholar .
  • Ehren , M. C.M. and Visscher , A. J. 2006 . Towards a theory on the impact of school inspections . British Journal of Educational Studies , 54 ( 1 ) : 51 – 72 .
  • Ellet , C. and Teddlie , C. 2003 . Teacher evaluation, teacher effectiveness and school effectiveness: Perspectives from the USA . Journal of Personnel Evaluation in Education , 17 ( 1 ) : 101 – 128 .
  • Eurydice . 2004 . Evaluation of schools providing compulsory education in Europe , Brussels, , Belgium : Author .
  • Harris , D. N. and Herrington , C. D. 2006 . Accountability, standards, and the growing achievement gap: Lessons from the past half-century . American Journal of Education , 112 : 209 – 238 .
  • Hastings , J. S. , Van Weelden , R. and Weinstein , J. 2007 . Preferences, information, and parental choice behaviour in public school choice , Cambridge, MA : National Bureau of Economic Research .
  • Hofman , R. H. 2005 . Naar indicatoren voor een Early Warning System [Towards indicators for an Early Warning System] , Groningen, , The Netherlands : GION .
  • Hofman , R. H. , Dijkstra , N. J. , De Boom , J. and Hofman , W. H.A. 2005 . Kwaliteitszorg in het primair onderwijs. Eindrapport [Quality control in primary education] , Groningen, , The Netherlands : GION/RUG .
  • Hofman , R. H. , Dijkstra , N. J. and Hofman , W. H.A. 2005 . School self-evaluation instruments: An assessment framework . International Journal for Leadership in Education , 8 ( 3 ) : 253 – 272 .
  • Hofman , R. H. and Hofman , W. H.A. 2003 . Ontwerp van een beoordelingskader voor zelfevaluatie-instrumenten voor scholen [Design of an assessment framework for school self-evaluation] , Groningen/Rotterdam, , The Netherlands : GION, RUG/RISBO, EUR .
  • Hofman , R. H. , Hofman , W. H.A. and Guldemond , H. 2001 . Social context effects on pupils' perception of school . Learning and Instruction , 11 : 171 – 194 .
  • Hofman , W. H.A. , De Boom , J. , Hofman , R. H. and Van den Berg , M. A. 2005 . Variatie in en effectiviteit van de gemeentelijke regiefunctie in het onderwijsachterstandenbeleid. Eindrapportage [Variation in effectiveness of LEA policies regarding school At-Risk] , Rotterdam/Groningen, , The Netherlands : RISBO/GION .
  • Kennedy , E. and Mandeville , G. 2000 . “ Some methodological issues in school effectiveness research ” . In The international handbook of school effectiveness research , Edited by: Teddlie , C. and Reynolds , D. 187 – 206 . London/New York : Falmer Press .
  • Kogan , M. and Maden , M. 1999 . “ An evaluation of evaluators: The OFSTED system of school inspection ” . In An Inspector calls; Ofsted and its effect on school standards , Edited by: Cullingford , C. 9 – 32 . London : Kogan Page .
  • Lammers , C. J. 1991 . Organisaties vergelijkenderwijs. Ontwikkeling en relevantie van het sociologisch denken over organisaties [Organizations compared. Development and relevance of sociological reflection on organizations] , Utrecht, , The Netherlands/Antwerp, Belgium : Het Spectrum .
  • LaPorte , T. and Consolini , P. 1991 . Working in practice but not in theory: Theoretical challenges of High-Reliability Organizations . Journal of Public Administration Research and Theory , 1 : 19 – 48 .
  • Leithwood , K. and Aitken , R. 1995 . Making schools smarter: A system for monitoring school and district progress , Newbury Park, CA : Corwin .
  • Leithwood , K. , Aitken , R. and Jantzi , D. 2001 . Making schools smarter: A system for monitoring school and district progress , Thousand Oaks, CA : Corwin Press .
  • Leithwood , K. , Edge , K. and Jantzi , D. 1999 . Educational accountability: The state of the art , Gütersloh, , Germany : Bertelsmann .
  • Leithwood , K. , Jantzi , D. and Steinbach , R. 1995 . An organizational learning perspective on school responses to central policy initiatives . School Organization , 15 ( 3 ) : 229 – 224 .
  • Luginbuhl , R. , Webbink , D. and De Wolf , I. 2007 . Measuring the effect of school inspections on primary school performance: A study based on Cito test scores (CPB discussion paper) , Den Haag, , The Netherlands : CPB .
  • MacBeath , J. , Meuret , D. , Schratz , M. and Jakobssen , L. B. 1999 . Evaluating quality in school education. A European pilot project. Final report European Commission Educating Training Youth , Luxembourg : Office for Official Publications of the European Communities .
  • Matthews , P. and Sammons , P. 2004 . Improvement through inspection: An evaluation of the impact of Ofsted's work , London : Institute of Education, University of London/Ofsted .
  • Mintzberg , H. 1979 . The structuring of organizations , Englewood Cliffs, NJ : Prentice Hall .
  • Mintzberg , H. 1989 . Mintzberg on management , Englewood Cliffs, NJ : Prentice Hall .
  • National Inspectorate of Education . 2003 . Onderwijsverslag over het jaar 2002 [Education Report for 2002] , The Hague, , The Netherlands : SDU .
  • National Inspectorate of Education . 2005 . 2005 supervisory framework for primary education: Content and working method of the inspection supervision , Utrecht, , The Netherlands : Inspectie van het onderwijs .
  • National Inspectorate of Education . 2006 . Proportional supervision and school improvement from an international perspective. A study into the side effects of utilising school selfevaluations for inspection purposes in Europe , Utrecht, , The Netherlands : Inspectie van het onderwijs .
  • Newmann , F. M. , King , M. B. and Rigdon , M. 1997 . Accountability and school performance: Implications from restructuring schools . Harvard Educational Review , 67 : 41 – 74 .
  • Reezigt , G. 2001 . A framework for effective school improvement (Final report of the Effective School Improvement Project SOE 2-CT97-2027) , Groningen, , The Netherlands : GION .
  • Reynolds , D. , Muijs , D. and Treharne , D. 2003 . Teacher evaluation and teacher effectiveness in the United Kingdom . Journal of Personnel Evaluation in Education , 17 ( 1 ) : 83 – 100 .
  • Reynolds , D. , Stringfield , S. and Schaffer , E. 2002 . The High Reliability Schools Project: Some preliminary results and analyses . Paper presented at the International Congress for School Effectiveness and Improvement . Victoria, Canada. January .
  • Reynolds , D. and Teddlie , C. 2000 . “ The process of school effectiveness ” . In The international handbook of school effectiveness research , Edited by: Teddlie , C. and Reynolds , D. 134 – 160 . London/New York : Falmer Press .
  • Rosenthal , L. 2004 . Do school inspections improve school quality? Ofsted inspections and school examination results in the UK . Economics of Education Review , 23 ( 2 ) : 143 – 152 .
  • Scheerens , J. 1989 . Wat maakt scholen effectief? [What makes schools effective?] , s-Gravenhage, , The Netherlands : SVO .
  • Stoll , L. and Wikeley , F. 1998 . “ Issues on linking school effectiveness and school improvement ” . In Effective school improvement: State of the art contribution to a discussion , Edited by: Hoeben , W.Th.J.G. 29 – 58 . Groningen, , The Netherlands : GION/RUG .
  • Stringfield , S. , Reynolds , D. and Schaffer , E. C. The High Reliability Schools Project . Paper presented at the International Congress for School Effectiveness and Improvement . Toronto, Canada. January .
  • Stringfield , S. C. and Slavin , R. E. 1992 . “ A hierarchical longitudinal model for elementary school effects ” . In Evaluation of educational effectiveness , Edited by: Creemers , B. P.M. and Reezigt , G. J. 35 – 69 . Groningen, , The Netherlands : ICO .
  • Stringfield, S. C., & Slavin, R. (2001). Title I: Compensatory education at the crossroads. Sociocultural, political, and historical studies in education. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Teddlie, C., Stringfield, S., & Reynolds, D. (2000). Context within school effectiveness research. In C. Teddlie & D. Reynolds (Eds.), The international handbook of school effectiveness research (pp. 160–187). London/New York: Falmer Press.
  • VanderHoff, J. (2007). Parental valuation of charter schools and student performance (Working paper). Newark, NJ: Rutgers University, Department of Economics.
  • Van der Veen, I., Van der Meijden, A., & Ledoux, G. (2004). School- en klaskenmerken speciaal basisonderwijs. Basisrapportage PRIMA-cohort onderzoek. Vierde meting 2002–2003 [School and classroom characteristics in special primary education: Basic report of the PRIMA cohort study, fourth wave 2002–2003]. Amsterdam: SCO-Kohnstamm Instituut.
  • Wastiau-Schlüter, P. (2004). A close-up on the evaluation of schools. Brussels, Belgium: Eurydice.
  • Wilcox, B., & Gray, J. (1996). Inspecting schools: Holding schools to account and helping schools to improve. Buckingham, UK/Philadelphia: Open University Press.
  • Willms, J. D. (1992). Monitoring school performance: A guide for educators. London: The Falmer Press.
  • Wishart, D. (1987). Clustan user manual. St. Andrews, UK: University of St Andrews.

Appendix 1. 2005 supervisory framework of the National Inspectorate of Education
