Review Article

Use of evidence to promote inclusive education development: commentary on Mel Ainscow, 'Promoting inclusion and equity in education: lessons from international experiences'

Pages 21-24 | Received 27 Dec 2019, Accepted 10 Feb 2020, Published online: 04 Mar 2020

ABSTRACT

In his essay, Mel Ainscow looks at inclusion and equity from an international perspective and makes suggestions on how to develop inclusive education through a ‘whole-system approach’. After discussing different conceptions of inclusion and equity, he describes international policies that address them. From this international macro-level, Ainscow zooms in to the meso-level of the school and its immediate environment, defining dimensions to be considered for inclusive school development. One of these dimensions is the ‘use of evidence’. In my comment, I want to focus on this dimension and discuss its scope and the potential to apply it in inclusive education development. As a first and important precondition, Ainscow explains that different circumstances lead to different linguistic uses of the term ‘inclusive education’. Thus, the term ‘inclusive education’ does not refer to an identical set of objectives across countries, and neither does the term ‘equity’.

This is particularly interesting, as there is a strong, widely subscribed set of international declarations in favour of inclusive education policy worldwide. Starting from the Salamanca Declaration of 1994, a central starting point for global policy awareness of the need to create schools that serve all children (Education for All, EFA), the essay traces the development of international policies up to the 2015 Incheon Declaration of the World Education Forum, which declared inclusion and equity to be fundamental preconditions for quality education. In this historical sketch of international declarations, it becomes clear that there are different justifications for inclusive education. Agents might refer to educational, social, and/or economic reasons for justifying an inclusive educational policy, but it is important to recognize that the human rights foundation can play a central role in arguments for inclusive education. The United Nations’ Convention on the Rights of Persons with Disabilities (UNCRPD), in force since 2008, marks an important turn for human rights. It insists on the obligation of educational services to provide an individualized response to individual needs. This differs from traditional human rights declarations, which define defensive rights, e.g. non-discrimination. The UNCRPD decrees a right to educational services for all individuals.

Accordingly, it becomes clear that inclusive education is recognized and decreed worldwide. At the same time, the term does not always refer to the same thing when inclusive education is at stake. This is likely one factor explaining the large differences between countries in the numbers, the quality, and other aspects of inclusive education, despite all of them having signed the Salamanca Declaration and the UNCRPD.

Ainscow does not propose a single, fixed meaning of inclusive education development in his essay. Instead, he focuses on the possibility for the individual school to start working towards inclusion and supports an organizational ‘whole-system approach’ model. This model consists of five dimensions, with the school development dimension at its core: the individual school is able to start the endeavour of inclusive school development.

According to the model’s first (top) dimension, schools should adopt inclusion and equity as the guiding principles of the process. For example, the process should focus on the potential of all students, as well as that of the teachers. Support to overcome barriers that might hinder students in their full development should be a primary objective of the pedagogical work. Second and third, the administration and the community need to be involved and explicitly targeted, making clear that schools depend on their surroundings, but also that they have the power to influence those surroundings themselves. Fourth, the model aims for the use of evidence to select and evaluate strategies of action.

In my commentary, I want to stress this last aspect, the use of evidence, in particular. ‘Evidence based’ is a very popular and commonly used label, e.g. in evidence based medicine or evidence based pedagogy. Nevertheless, there is no universal agreement on what exactly the term means. Accordingly, it seems very worthwhile to reflect on what evidence based practice in inclusive educational development looks like and could look like.

Ainscow develops a model of inclusive inquiry to evaluate the success of inclusive education development in schools, e.g. schools that work with the Index for Inclusion. The core of inclusive inquiry is a continuous dialogue between teachers and students about their school cultures, structures, and practices. The Index for Inclusion is a helpful catalogue to guide such dialogues through structured questions referring to these three dimensions. In his essay, Ainscow describes the process as operating through talking about learning and teaching, learning from differences, and developing inclusive practices. According to modern organizational development theories, such a process is very promising for achieving improved working practices that consider all stakeholders in the system (cf. Senge, 2006). Nevertheless, some questions remain open from the point of view of evidence based approaches. Since there are clearly strong tensions between different fields, for example between international and national political frames, or between political frames and school practices, it is important to find terms of communication that might ease the concerns of doubters. Inclusive inquiry is a process that I am convinced is helpful, and agents who are willing to engage in it have a great opportunity to experience its benefits and to realize continuous improvements in their schools and in the practice of the entire system. There are several qualitative descriptions of such processes in Germany, e.g. of schools that have been granted awards such as the Jakob-Muth-Preis (https://www.jakobmuthpreis.de/). Accordingly, there is evidence of success with this kind of approach, but such evidence does not necessarily convince researchers and politicians that these cases are more than isolated examples and that they are scalable to larger populations. Additionally, such descriptions do not deliver much ‘objective’ information. In this regard, I would like to comment on options for an evidence based practice to check for effects of measures of the whole-system approach, or of other means of inclusive education development, that might have higher chances of acceptance among sceptical audiences.

Before I come to frameworks and practices of evidence based approaches, I want to make clear that ‘evidence based’ here refers only to the research process, not to the content of the research. This does not mean that numerical scores alone should be used to measure the effectiveness of a programme or a developmental process, especially not if inclusion and equity are at the core of interest. Inclusion and equity in a whole-system approach must be measured within a multilevel framework looking at a) the community, in terms of e.g. socio-economic structure, available agencies, and potential supporters and partners, b) the school, in terms of e.g. infrastructure and resources, principal, staff, and students, c) the class, in terms of e.g. its teachers and students, d) the teachers individually, e) the students individually, and f) further units of analysis. Alongside numerical scores, which have to be considered from a developmental perspective (in terms of intra-individual development rather than a social comparison of all children), social data needs to be gathered, including social relations in the class, in the school, and beyond, feelings of integration (or isolation), self-concept and identity development, opportunities for participation and growth, and much more. This is already happening in empirical studies on inclusive education in schools, so there are many positive examples to support further development.
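To make this multilevel framework concrete, the following sketch shows one way such nested measurement data could be organized. It is purely illustrative: all class names, fields, and measures are hypothetical and are not part of Ainscow’s model or of any existing instrument.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for multilevel inclusion data; all names are illustrative only.

@dataclass
class StudentRecord:
    achievement_t0: float     # pre-measure (developmental perspective: compare within the student)
    achievement_t1: float     # post-measure
    feels_integrated: bool    # social datum kept alongside the numerical score
    friends_in_class: int     # social relations in the class

@dataclass
class ClassRecord:
    teacher_experience_years: float
    students: List[StudentRecord] = field(default_factory=list)

@dataclass
class SchoolRecord:
    resources_index: float    # e.g. infrastructure and staffing
    classes: List[ClassRecord] = field(default_factory=list)

@dataclass
class CommunityRecord:
    socioeconomic_index: float  # e.g. socio-economic structure of the catchment area
    partner_agencies: int       # available agencies and potential supporters
    schools: List[SchoolRecord] = field(default_factory=list)

def individual_growth(s: StudentRecord) -> float:
    """Intra-individual development rather than a social comparison of all children."""
    return s.achievement_t1 - s.achievement_t0
```

The point of the nesting is that every score remains attached to its level, so analyses can move between community, school, class, and individual without flattening the structure away.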

I return to my main point: to convince someone who is sceptical, it is good to know their language and to apply it thoroughly. I refer here to evidence based research practice as one key to scaling up successful inclusive educational practice, a key that is likely not used to its full potential. This is notwithstanding my great appreciation for the developments and achievements of Ainscow and the results of previous studies, and it is not intended to devalue or cast doubt on studies that have been conducted. The objective of this comment is to show how it may be possible to extend research practice in a way that might help to convince decision-makers to adopt successful practices in schools and their surroundings.

An influential frame of reference for evidence based research is provided by the Oxford Centre for Evidence-Based Medicine (OCEBM; Howick et al., 2011), which has also been taken up in education in Germany (e.g. Casale, Hennemann, & Hövel, 2014). The OCEBM identifies five levels of evidence. The lowest level (level 5) refers to mechanism-based reasoning on the basis of state-of-the-art research and theory. Level 4 refers to case series, case-control, or historically controlled studies. Level 3 is granted for non-randomized controlled studies, level 2 for randomized controlled studies, and level 1 belongs to systematic reviews of many high-quality studies, mainly at levels 2 and 3. A merely qualitative, descriptive empirical evaluation of inclusive education development can be located at level 4, if it is very systematic. Theoretical assumptions that draw on evidence based results can reach level 5, if they are very systematic.
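As a small illustration, the five levels as paraphrased above can be encoded as a simple lookup, together with a rough (and deliberately simplified) classification of a single study design. This follows my summary of the framework, not the full OCEBM 2011 table:

```python
# The five OCEBM evidence levels as summarized above (1 = strongest, 5 = weakest).
# This encoding follows the commentary's paraphrase, not the full OCEBM 2011 table.
OCEBM_LEVELS = {
    1: "systematic review of many high-quality studies (mainly levels 2 and 3)",
    2: "randomized controlled study",
    3: "non-randomized controlled study",
    4: "case series, case-control, or historically controlled study",
    5: "mechanism-based reasoning from state-of-the-art research and theory",
}

def ocebm_level(randomized: bool, controlled: bool, systematic_review: bool = False) -> int:
    """Roughly classify a single evaluation design into an OCEBM level."""
    if systematic_review:
        return 1
    if controlled:
        return 2 if randomized else 3
    return 4  # descriptive or qualitative evaluations sit here at best

print(ocebm_level(randomized=False, controlled=True))  # -> 3
```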

Köller (2009) states that, to ensure a good evaluation, control groups should be applied and, alongside pre- and post-measures, at least one follow-up measure should also be taken. This allows researchers to identify so-called ‘sleeper effects’. These kinds of effects are especially likely in complex interventions, such as a whole-system approach and an inclusive inquiry to develop an inclusive school. Evaluation research has shown repeatedly that such interventions do not necessarily have positive effects at the post-measure, i.e. the second measurement after the start of an intervention. Due to insecurity about new and old practices after the implementation of new work procedures, results can even be worse than at the beginning. The ‘sleeper effect’ refers to the phenomenon that, once new practices are consolidated and routines have returned, the positive effects of complex interventions can be found. Such research designs usually correspond to level 3 of the OCEBM framework. With random assignment, they could reach level 2, but random assignment of students and teachers to schools seems rather problematic. Nevertheless, what can be assigned randomly is the subsample of schools in which the intervention starts first. After a waiting period, the other half of the schools can begin the intervention. Such a design allows for the comparison of a genuine waiting control group with an intervention group that has already started. It becomes quite clear that it is possible, but also quite challenging, to reach even level 3 of the OCEBM with an evaluation study of school development. Nevertheless, if several level 3 evaluations are conducted, they can be used for a systematic review to gain evidence that meets the requirements of the most elaborated level of the OCEBM framework, level 1.
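To illustrate the logic of such a waiting-group design, here is a minimal simulation sketch with entirely made-up effect sizes and sample sizes (nothing here comes from an actual study). Half of the schools are randomly assigned to start immediately; a sleeper effect appears as a between-group difference that is absent (or even negative) at the post-measure but present at follow-up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_schools = 40

# Randomly assign half of the schools to start the intervention immediately;
# the other half forms the waiting control group and starts after the study window.
starts_now = rng.permutation(n_schools) < n_schools // 2

# Hypothetical score model: pre-measure around 50; the intervention causes a
# slight dip at post (disrupted routines) and a +5 point gain only at follow-up.
pre = rng.normal(50, 5, n_schools)
post = pre + rng.normal(0, 2, n_schools) - 1.0 * starts_now
follow = pre + rng.normal(0, 2, n_schools) + 5.0 * starts_now

for label, scores in [("post", post), ("follow-up", follow)]:
    t, p = stats.ttest_ind(scores[starts_now], scores[~starts_now])
    print(f"{label}: intervention vs. waiting group, t = {t:.2f}, p = {p:.3f}")
# Expected pattern: no advantage (or a dip) at post, a clear advantage at follow-up,
# which is exactly why at least one follow-up measure is needed.
```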

Another taxonomy for assessing the focus of an evaluation of interventions comes from Kirkpatrick, who described the four-level training-evaluation model (cf. Kirkpatrick & Kirkpatrick, 2006). He distinguished between reaction, learning, behaviour, and results as levels that can be addressed as the targets of an evaluation. If inclusive inquiry is conducted mostly dialogically, it addresses mainly the reaction level: stakeholders talk about their impressions of what has happened and how they view it. Of course, aspects of learning, behaviour, and results are addressed in the dialogue, but they are not necessarily explicitly measured. Learning would be explicitly measured if, for example, teachers were given questionnaires to answer. If it emerges that they have gained knowledge or changed opinions since an earlier measurement, this can be considered learning. Behaviour refers to what they do in the classroom. Has this changed since the beginning of the intervention? To assess the situation not only through dialogue about ‘feelings’, one can use systematically evaluated videographic material and/or systematic observational tools, e.g. the CLASS inventory, which has been applied worldwide (Pianta & Hamre, 2009). The last level, results, refers to the outputs and outcomes of the intervention. This covers the above-mentioned concepts of student integration, chances of participation, achievements, and more. To ensure that student development can be tackled at the individual level, powerful single-case tools are available for highly sophisticated, peer-reviewed, internationally publishable investigations of only a handful of students (cf. Wilbert, 2016).
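As an illustration of the results level at the scale of individual students, the following sketch computes the nonoverlap of all pairs (NAP), a common single-case effect size for AB designs, on made-up data. It is my illustration and not the specific analysis implemented in Wilbert’s scan package:

```python
from itertools import product

def nap(baseline, intervention):
    """Nonoverlap of All Pairs for a single-case AB design.

    Compares every baseline observation with every intervention observation:
    1.0 means complete nonoverlap (every intervention point exceeds every
    baseline point); 0.5 means chance-level overlap.
    """
    pairs = list(product(baseline, intervention))
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Made-up weekly participation scores for one student
# (phase A = baseline, phase B = after the inclusive-practice intervention).
phase_a = [2, 3, 2, 4, 3]
phase_b = [5, 4, 6, 6, 5, 7]
print(f"NAP = {nap(phase_a, phase_b):.2f}")  # close to 1.0 -> strong effect
```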

Accordingly, there are ways to promote evidence based research practices on inclusion and equity with the potential for broader applicability in research practice on inclusive education. Such a broadening could help to scale up the application of the many successful practices in schools and beyond around the world. An orientation towards a framework of evidence based research acknowledges the necessity and the value of research of all kinds within different frameworks; all of these are needed to build on each other and to be convincing to all stakeholders. However, there is no perfect paradigm, only the old but sometimes neglected maxim: the research question determines the adequate method. Accordingly, if the research question concerns whether inclusive educational practices can be adopted in a productive and positive manner on a large scale, then a large-scale, controlled-trial, results-oriented research design should be adopted. If someone wants to find out how a given school could improve its inclusivity, inclusive inquiry with the Index for Inclusion is a very useful tool, and it can be considered a measurement and/or an intervention to be combined with more large-scale, controlled-trial, results-oriented research to gain a more comprehensive picture.

To conclude, I would like to draw on one quote from Ainscow: ‘Consequently, the starting point must be with policy-makers and practitioners: in effect, enlarging their capacity to imagine what might be achieved, and increasing their sense of accountability for bringing this about. This may also involve tackling taken-for-granted assumptions, most often relating to expectations about certain groups of students, their capabilities and behaviours’. From my point of view, this is a very important starting point for such a large shift towards a large-scale inclusive educational system and/or society. Evidence based research is a very powerful tool to achieve this end, as it has the means to show the effects of specific interventions and what can be expected on a large scale. If it is possible to show empirically that the large-scale benefits of inclusive education outweigh the potential costs, sceptical stakeholders will be more likely to support such efforts.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Casale, G., Hennemann, T., & Hövel, D. (2014). Systematischer Überblick über deutschsprachige schulbasierte Maßnahmen zur Prävention von Verhaltensstörungen in der Sekundarstufe I. Empirische Sonderpädagogik, 1, 33–58.
  • Howick, J., et al.; OCEBM Levels of Evidence Working Group. (2011). The Oxford 2011 levels of evidence. Oxford Centre for Evidence-Based Medicine. Retrieved from http://www.cebm.net/index.aspx?o=5653
  • Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs – The four levels. San Francisco: Berrett-Koehler Publishers.
  • Köller, O. (2009). Evaluation pädagogisch-psychologischer Maßnahmen. In E. Wild & J. Möller (Eds.), Pädagogische Psychologie. Berlin/Heidelberg: Springer.
  • Pianta, R., & Hamre, B. K. (2009). Conceptualization, measurement, and improvement of classroom processes: Standardized observation can leverage capacity. Educational Researcher, 38(2), 109–119.
  • Senge, P. M. (2006). The fifth discipline: The art and practice of the learning organization. London: Random House Business Books.
  • Wilbert, J. (2016). Single-case data analyses for single and multiple AB designs. R package scan. Retrieved from https://www.uni-potsdam.de/fileadmin01/projects/inklusion/scan/scan.pdf