Editorial

The challenges and prospects for educational effectiveness research

Pages 1-12 | Published online: 06 Aug 2009

The issue of effectiveness is (or should be) a concern for all of us involved in education. In relation to practice, for example, teachers are routinely concerned with whether they are being effective in the classroom. In planning their lessons they naturally search for approaches that they feel will work best with their particular pupils, whether this is based upon formal research evidence, the experience of colleagues or their own experience of teaching that particular subject previously. Teachers also continually monitor and reflect upon how they are getting on day‐by‐day and week‐by‐week and do this, most fundamentally, in terms of seeking evidence of whether their pupils are picking up the knowledge and skills they seek to convey. Moreover, teachers continually reflect upon and seek ways of evaluating how successful they have been at the end of each school term or year, and an important element of this evaluation involves a focus on effectiveness or, in other words, whether they have achieved their desired effects or results with their pupils.

Alongside practitioners, policy‐makers are also inevitably concerned with issues of effectiveness. While it is accepted that policy‐making is an inherently political process and that decisions over which educational policies to adopt will be based on consideration of a number of factors, there is no doubt that the relative effectiveness of particular approaches will be one of the factors that are considered. Within this, policy‐makers will often need to consider whether specific policies or approaches they are interested in are likely to have differential effects on particular subgroups. Given governments’ concerns with tackling social exclusion and disadvantage, they will tend to be especially interested in seeking out information on what approaches are likely to work best for communities in areas of high deprivation.

Beyond all of these issues of policy and practice, the political sphere also demands a focus on issues of effectiveness. In relation to social justice, for example, it is difficult to see how one can be concerned with addressing educational inequalities in relation to social class, race, gender, disability and/or sexuality without including a focus on monitoring and evaluating the outcomes of existing educational policies and practices. Moreover, in advocating for alternative approaches aimed at promoting social and educational inclusion there will inevitably be the need to make judgements concerning which approaches to favour and promote, and this will, in turn, include an assessment of which is likely to be most effective.

Of course while we may all be concerned with educational effectiveness, whether in terms of politics, policy and/or practice, the difficulty comes when attempting to define what we mean by ‘effective’ and how we know when a particular educational programme, intervention and/or approach is effective or not. There is obviously the difficulty of determining which methods and forms of analysis are most appropriate in attempting to demonstrate evidence of effectiveness. However, the challenges associated with research into educational effectiveness go far beyond this. There are, for example, important political and value judgements to be made concerning which outcome measures are to be used to judge whether a specific approach is effective or not and also who should be involved in selecting these outcomes. Moreover, there are equally significant issues involved in the interpretation of findings, especially in terms of acknowledging methodological limitations and being specific about the claims that can be made in relation to any effects that may have been found. Following on from this, there are also important distinctions to be made between the effects found in relation to a well‐planned and organised pilot study (or what is often called an efficacy test) of a programme and the effects that one can expect when it is delivered on a much wider basis in a range of real‐world contexts (i.e., when taken to scale). Finally, and beyond all of these issues, it is barely sufficient simply to ascertain whether a programme is having an effect or not. Another key difficulty for effectiveness research is determining why these effects have taken place. This, in turn, requires the use of differing (often qualitative) methods and thus the need to design effectiveness studies carefully, making use of a range of different methods. Ultimately, the use of multi‐methods also raises important ontological and epistemological questions concerning the nature of evidence, how it can be collected and who can know it.

The importance of the issue of effectiveness in education, and the many different issues and challenges that researching educational effectiveness poses, provides the rationale for this new journal. Effective Education seeks to promote research into the effectiveness of educational programmes, interventions and/or types of provision as well as providing a platform for critical debate regarding the wider practical, methodological, political and philosophical issues associated with such research. For the remainder of this editorial it is worth setting out briefly what research into effective education looks like and some of the key challenges facing research in this area that will be addressed through the pages of this journal in the years to come.

What does research into effective education look like?

Empirical research into educational effectiveness commonly takes one of two forms: research that seeks to determine the effectiveness of a particular educational programme, intervention and/or type of provision; and/or that which seeks to understand and explain the effects found. Studies that seek to determine the effectiveness of a particular intervention, by definition, will be focused on ascertaining whether that approach has been successful in achieving the desired effects or results specified for it. In this sense, such studies have an explicit focus on outcomes and the need to establish whether there has been the desired improvement in the outcomes specified caused by the intervention under investigation.

The most convincing way to demonstrate this, wherever possible, is through the use of experimental methods or what have been termed randomised controlled trials (RCTs). RCTs typically include the use of pre‐test and post‐test measures to determine directly whether there has actually been an improvement in outcomes among those participating in a particular intervention. Moreover, any such increases are compared with those of a control group comprising those who have not participated in the intervention. Where both groups are matched we can be confident that any improvements found among those participating in the intervention group that are above and beyond those in the control group are likely to be caused by the intervention. The reason for this is simply that all other factors likely to influence the outcomes among those participating in the intervention are equally likely to impact upon the control group as well. Thus, any changes that occur between the pre‐test and post‐test scores among the control group can be interpreted as the net effect of all of these other factors. As such, if those in the intervention group have experienced an additional improvement then we can conclude that this is likely to be due to the intervention as this is the only systematic difference between the two groups (see note 1).
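To make this logic concrete, the following is a minimal illustrative sketch in Python; all of the figures are hypothetical and are not drawn from any actual trial, but they show how the gain made by an intervention group over and above the gain made by its control group becomes the estimate of the intervention's effect:

```python
# Illustrative only: hypothetical pre-test and post-test mean scores for an
# intervention group and a matched control group (not real data).
intervention_pre, intervention_post = 48.0, 61.0
control_pre, control_post = 47.5, 54.0

# Each group's raw improvement between pre-test and post-test.
intervention_gain = intervention_post - intervention_pre   # 13.0
control_gain = control_post - control_pre                  # 6.5

# The control group's gain stands in for the net effect of all other factors
# (maturation, other teaching, practice effects, and so on), so the additional
# gain in the intervention group is the estimated effect of the intervention.
estimated_effect = intervention_gain - control_gain         # 6.5
print(f"Estimated intervention effect: {estimated_effect} points")
```

In a real trial this comparison would, of course, be made on individual pupils' scores rather than on bare group means, with the associated uncertainty quantified in the way discussed below.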

Of course the validity of these arguments rests upon the assumption that the two groups are matched and this is why, wherever possible, random allocation to the intervention and control groups is preferred to optimise the chance that all of the factors that may be associated with the outcomes under study are evenly spread across the two groups. In this regard random allocation can occur at the level of the individual or, more commonly in educational research, at the level of the class or school (in which case it is known as a cluster randomised trial). However, there are occasions where random allocation is not possible and/or is not desirable and in such circumstances the next best alternative is to use a purposively selected control group that is chosen on the basis that it is as similar as possible to the group participating in the intervention. Where there is no random allocation involved such a study is referred to as a quasi‐experimental design. Beyond this there will be occasions where it is not possible to establish any sort of clear experimental design, whether involving randomisation or not. In such circumstances it is usually possible to adopt a more naturalistic design in which either a longitudinal method is used to track individuals over time to see whether improvements in particular outcomes are associated with exposure to and/or experience of differing types of educational provision, or a single retrospective survey is used to examine the same.
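By way of a purely illustrative sketch of what cluster random allocation involves in practice, the short piece of Python below allocates whole (hypothetical) schools, rather than individual pupils, to the two arms of a trial; the school identifiers, the number of schools and the two-arm design are assumptions made only for the example:

```python
import random

# Hypothetical list of participating schools (the clusters).
schools = [f"school_{i:02d}" for i in range(1, 21)]

# Randomly allocate whole schools, rather than individual pupils, to the
# intervention and control arms: a cluster randomised design.
random.seed(2009)            # fixed seed so the allocation can be audited
random.shuffle(schools)
midpoint = len(schools) // 2
intervention_arm = sorted(schools[:midpoint])
control_arm = sorted(schools[midpoint:])

print("Intervention schools:", intervention_arm)
print("Control schools:", control_arm)
```

Allocating at the level of the school also means that the subsequent analysis must take the clustering into account, a point returned to at the end of this editorial.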

What unites all of these differing approaches is a concern with attempting to ascertain whether a particular educational approach or type of provision is effective in terms of achieving improvements in relation to some specified outcomes. As with all social research, however, each has its strengths and limitations. While being the preferred method for studying effectiveness, the randomised (and cluster randomised) controlled trial is not always an appropriate or practical option, although in my experience the arguments concerning the difficulties of establishing randomised trials in education tend to be much exaggerated. The other methods – the quasi‐experimental design, the longitudinal study and the retrospective survey – may be more practical, but they introduce a significant additional degree of uncertainty into the analysis in that one can never rule out the possibility that any effects found to be associated with a particular educational approach or type of provision could be due to other (unidentified) intermediate or confounding factors. This is why researchers employing one or more of these types of research design make every effort to identify other potential factors and control for these in their analyses. While such analyses can never be 100% conclusive, they still offer the best method we have of developing some understanding, however provisional, of the effectiveness of particular educational programmes or approaches. Indeed, it is also worth stressing that even the findings arising from an RCT carry with them some degree of uncertainty. More specifically, there is always the possibility that the random allocation of individuals did not give rise to two balanced and matched groups and thus that some of the effects found may be due to systematic differences between the two groups rather than due to the intervention itself. While such a possibility can be reduced with the use of larger samples it can never be eliminated entirely, and this is why the reporting of findings from an RCT will always be accompanied by an indication of the level of uncertainty pertaining to these findings through the reporting of confidence intervals and the findings of conventional significance tests (Raudenbush, 2005); a simple illustration of this kind of reporting is sketched after the quotation below. With all this in mind, while we need to do the best we can to research and understand the effects of education, we must remain ever mindful of the provisional nature of any evidence that we have. As Oancea and Pring (2009, p. 28) have recently argued:

No search for evidence, however systematic, can give grounds for certainty in the conclusions reached. There is a need to live with uncertainty – to be open to further evidence, further reconceptualisation of the evidence received and further criticism of the interpretation of that evidence. The state of knowledge is always provisional. The world, both physical and social, to which that knowledge applies is itself changing in an unpredictable way – the social world in particular because of the inevitable evolution of meanings attributed to experience as each interacts with each other.
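Picking up the point made before the quotation about confidence intervals and significance tests, the sketch below illustrates, with entirely hypothetical summary statistics, the kind of uncertainty reporting that conventionally accompanies the effect estimate from a trial; it uses a simple normal approximation rather than any particular package's routines:

```python
import math

# Illustrative only: hypothetical summary statistics from a trial in which the
# outcome analysed is each pupil's gain score (post-test minus pre-test).
n_int, mean_gain_int, sd_int = 120, 13.0, 9.5    # intervention group
n_ctl, mean_gain_ctl, sd_ctl = 118, 6.5, 9.8     # control group

effect = mean_gain_int - mean_gain_ctl

# Standard error of the difference between two independent means, and a 95%
# confidence interval based on the normal approximation.
se = math.sqrt(sd_int**2 / n_int + sd_ctl**2 / n_ctl)
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se
z = effect / se    # test statistic for the null hypothesis of no effect

print(f"Effect estimate: {effect:.2f} points "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}), z = {z:.2f}")
```

An interval of this kind only quantifies sampling uncertainty; it does nothing to remove the broader provisionality that Oancea and Pring describe above.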

This sense of the provisional status of evidence is equally applicable to the use of systematic reviews and meta‐analyses of existing research studies. Such methods are the most appropriate for the specific task of synthesising the findings of a variety of studies that describe the effects of particular educational programmes or interventions. However, and as with all research, the findings of such research syntheses are never absolute and conclusive but can only ever be regarded as provisional and open to change as more evidence becomes available and/or as the social world changes.
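As an indication of what such a synthesis involves at its simplest, the sketch below applies standard fixed-effect (inverse-variance) pooling to a set of entirely hypothetical effect sizes; a real systematic review or meta-analysis would, of course, also involve systematic searching, quality appraisal and attention to heterogeneity between studies:

```python
import math

# Illustrative only: hypothetical standardised effect sizes and their standard
# errors from five studies of a broadly similar intervention.
studies = [
    ("Study A", 0.30, 0.12),
    ("Study B", 0.15, 0.10),
    ("Study C", 0.45, 0.20),
    ("Study D", 0.05, 0.15),
    ("Study E", 0.25, 0.11),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# precision of its estimate, so larger, more precise studies count for more.
weights = [1 / se**2 for _, _, se in studies]
pooled = sum(w * es for (_, es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```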

Alongside all of these methods for determining whether particular educational programmes and approaches are effective, the other type of empirical study associated with research into educational effectiveness is that concerned with explaining the reasons for any effects found. In this sense it is recognised that while RCTs and the other types of method described previously play an extremely important role in determining whether an intervention has achieved a desired effect, they do not tend, in themselves, to lead to major advances in educational theory or understanding (Thomas, 2004). To achieve the latter we need also to be asking the question of why a particular effect has happened and this, in turn, often requires the use of additional, usually qualitative, methods to develop an appreciation of the actual meanings and motivations of those involved and the complex sets of social processes and practices associated with the intervention. It is important to stress, however, that the use of qualitative methods in this way only makes sense if there is clear (and generally concurrent) evidence available that the programme or intervention under study is effective. Ultimately, it is rather like ‘putting the cart before the horse’ to undertake a qualitative study focusing specifically on understanding why a particular educational programme or intervention is effective before there is any convincing evidence that it actually has been effective.

‘Gold standards’ and the politics of educational research

The establishment of this new journal and the promotion of particular forms of research into the effectiveness of education, and specifically the RCT, should not be interpreted as implying that these methods are being held up as the ‘gold standard’ for educational research. Neither should the foregoing discussion be read as implying that research into educational effectiveness is the most important form of research evidence to be gathered and used by policy‐makers and practitioners in education. On the contrary, it is fully recognised that the research evidence required to inform education policy and practice needs to ask many different questions beyond the narrow issue of whether an educational approach is effective and, consequently, this requires the use of a wide variety of methods (Bridges, Smeyers, & Smith, 2009). As Whitty (2006, p. 162) makes clear:

…even research that is centrally concerned with improving practice and supporting teachers – in whatever phase of education – needs to be more diverse in its nature than the rhetoric of ‘what works’ sometimes seems to imply. Research defined too narrowly would actually be very limited as an evidence base for a teaching profession that is facing the huge challenges of a rapidly changing world, where what works today may not work tomorrow. Some research therefore needs to ask different sorts of questions, including why something works and, equally important, why it works in some contexts and not in others. And anyway, the professional literacy of teachers surely involves more than purely instrumental knowledge. It is therefore appropriate that a research‐based profession should be informed by research that questions prevailing assumptions – and considers such questions as whether an activity is a worthwhile endeavour in the first place and what constitutes socially‐just schooling.

It is important to stress, therefore, that the focus for this journal is on just one, albeit extremely important, issue within the broader realm of education research. It is not being claimed that this focus on effectiveness comprises the totality of what is required to evaluate particular educational programmes or interventions or what should be considered ‘evidence’. Again, while educational effectiveness research should constitute an important element of any evaluation, there are other, equally significant, issues that a well‐designed evaluation also needs to consider, including: the efficiency of the intervention (i.e., it may be achieving its desired outcomes, but at what cost?); the underlying processes associated with the intervention (i.e., how was the intervention delivered? How was it perceived and experienced by those involved?); a consideration of possible unintended consequences (i.e., it may be achieving its desired effects, but is this at the cost of also leading to some adverse consequences?); whether such a programme is sustainable (i.e., it may ‘work’ when delivered in a small, well‐resourced and tightly controlled manner but will it still be effective when taken to scale?); and, of course, a good quality evaluation should also be able to help us understand why the intervention is having an effect. This latter element of an evaluation requires the careful use of information derived from the process evaluation to help construct and apply some form of theoretical model to make sense of what is going on. This, in turn, is why good evaluations of the effectiveness of educational interventions are multi‐method in nature, often involving a strong qualitative component running alongside a quantitative (often experimental) study.
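On the question of efficiency in particular, the following back-of-the-envelope sketch, with entirely hypothetical costs and effect sizes, illustrates the kind of cost-effectiveness comparison that an evaluation might set alongside a bare effectiveness finding:

```python
# Illustrative only: two hypothetical programmes that both 'work', compared on
# value for money rather than on effect size alone.
interventions = {
    "Programme A": {"cost_per_pupil": 120.0, "effect_size": 0.30},
    "Programme B": {"cost_per_pupil": 45.0, "effect_size": 0.20},
}

for name, data in interventions.items():
    # Cost per unit of effect: a lower figure means a more efficient programme,
    # even though the programme with the larger effect may cost far more per
    # unit of improvement gained.
    cost_per_effect = data["cost_per_pupil"] / data["effect_size"]
    print(f"{name}: {cost_per_effect:.0f} cost units per standard deviation of improvement")
```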

However, all of these points concerning the necessarily eclectic nature of education research notwithstanding, there remains a need for more specialist journals that focus on particular methods and/or substantive issues. Such journals provide an important platform from which the specialist knowledge and methodological skills associated with particular forms of research can be explored and developed at a depth not possible across the range of existing and more generalist journals. This is certainly the case for research into educational effectiveness, where there is a need for a dedicated forum not only to help consolidate and develop further the specialist expertise required to undertake research in this area (which tends to involve complex research designs and the use of advanced statistical analyses), but also to provide space for an open and constructive debate concerning the nature, purpose and uses of educational effectiveness research.

Key challenges for research into educational effectiveness

It is in relation to the role that Effective Education seeks to play in encouraging a wider debate regarding educational effectiveness research that it is worth, finally, drawing attention to a number of key challenges facing researchers in this area. Three particular challenges are outlined in this section. These are not meant to constitute an exhaustive list, however, but are intended simply to give a flavour of the types of debate that this journal seeks to encourage.

Perhaps the first and most obvious challenge to arise from the earlier discussion is the need to consider the role that educational effectiveness research can and should play alongside other forms of research in education. As Chatterji (2005) has proposed through the notion of the ‘extended‐term mixed‐method evaluation design’, the use of RCTs is only one aspect of any programme of research focused on the design and development of educational programmes and interventions and their eventual implementation and evaluation. In particular, the success of an RCT is heavily dependent upon significant preparatory work and prior research involving a range of different methods (Raudenbush, 2005). Most basically, there is a need initially to identify the nature of the social problems at hand, and the factors impacting upon them, for those who will provide the focus for the intended intervention. Through the use of a range of methods including in‐depth qualitative case studies and wider quantitative surveys a theoretically informed understanding of the nature of the problems can be gained and this, in turn, allows for the identification of the specific outcomes that are to provide the focus for a subsequent intervention. With the outcomes specified and a better understanding of the nature of the problems associated with these gained, it is then appropriate to identify and develop a suitable intervention. The formal randomised trial only comes at the end of this process and is itself not run in isolation, but tends to be accompanied by further qualitative research in order to document and understand the processes and practices that give rise to any effects found or, equally, that can help understand why no effects have been found or even why there have been negative effects.

Ultimately, and as Chatterji (2005, p. 15) contends, without seeing effectiveness research as part of a wider research process involving a rich and diverse range of methods, there is the risk of generating ‘prematurely implemented experimental designs [that] do not lead to improved understandings of “what works”’. As she goes on to argue:

What often results is an atheoretical, poorly conceptualised, ‘black box’ evaluation… where little is unveiled as to the reasons and conditions under which a programme worked (if indeed desired outcomes were manifested), or the causes for its apparent failure (in the event outcomes could not be documented). External validity and replicability issues – critical for programme expansion and dissemination – remain unresolved.

Such use of mixed methods is becoming more commonplace in effectiveness research (e.g., Oakley et al., 2003; Toroyan et al., 2004; Reinking & Bradley, 2008; Connolly, 2009). However, this, in turn, raises significant practical, methodological and philosophical issues in combining experimental designs with other methods, especially where the aim is to ensure a proper integration of methods in an ongoing and interactive manner (Day, Sammons, Stobart, Kington, & Gu, 2007).

The second key challenge facing effectiveness research in education is the need to develop ways of engaging much more meaningfully with teachers and practitioner research (Coe, Fitz‐Gibbon, & Tymms, 2000). Perhaps one of the problems here is the way that effectiveness research has tended to be seen largely as a form of summative evaluation, as implied in the earlier model, where it tends to be large‐scale and used to test the effectiveness of an already well‐developed and established programme or intervention. In this model, practitioners can often be left feeling that there is no role for them in the design and use of randomised trials, which tend, in turn, to be undertaken by researchers external to the school or institution where they work. Indeed for some, their only knowledge and experience of RCTs may be when the findings of such trials are used as evidence to justify a particular programme or approach they are being required to adopt. In this sense, the use of randomised trials can be regarded as ignoring, if not undermining, practitioners’ professional autonomy and accumulated ‘craft knowledge’ and relegating their role to that of technician where they simply receive and act upon the knowledge and guidance of others (Cochran‐Smith & Lytle, 1999).

However, the use of effectiveness research in education does not have to be this way. Indeed alongside the use of larger‐scale trials as summative evaluations of already well‐designed programmes and interventions, there is an important role for RCTs and quasi‐experimental methods when used on a smaller scale to pilot test a programme that is in development. This use of small‐scale experimental designs as a form of efficacy test can be undertaken by practitioners in collaboration with researchers and can be used in an iterative and formative manner as a means of trialling and reflecting upon programmes in the early stages of their development. Collaborative ventures of this kind ensure the most effective use of practitioners’ existing expertise and craft knowledge while also enabling them to be directly involved in the selection of outcomes and the design of pilot trials as well as the interpretation and subsequent use of the findings (for examples see Connolly & Hosken, 2006; Connolly, Fitzpatrick, Gallagher, & Harris, 2006).

Moreover, there is no reason why such collaboration needs to be confined to that between practitioners and researchers. Indeed, and as Oancea and Pring (2009) contend, given the provisional state of all of our knowledge there is a strong argument for ‘democratising’ the research process further by involving in a meaningful manner all key stakeholders in the development and refinement of educational interventions, including parents, the wider community and, most significantly, children themselves. As Lundy and McEvoy demonstrate in this first issue, for example, it is quite possible to involve young children in a constructive and meaningful way as genuine participants in relation to effectiveness research including, in their case, the identification of outcomes and the design of interventions aimed at achieving these outcomes.

Overall, and as Hiebert, Gallimore and Stigler (2002) have argued, the increased use of such pilots has the potential to enhance radically our existing knowledge base with the routine undertaking, reporting and sharing of findings providing an extremely rich source of information on a wide range of differing approaches and how these have fared when delivered in a variety of different settings and contexts.

The final key challenge for effectiveness research is the need to address the current tendency for ‘what works’ research to be caricatured as a crude and simplistic methodological project that is positivist in its outlook and that seeks, through the dogmatic use of RCTs, to establish universal truths about what educational programmes work regardless of their target or the context within which they are delivered. Such caricatured portrayals and criticisms of ‘what works’ research are not difficult to find. At the time of writing this editorial, for example, the latest issue of the official newsletter of the British Educational Research Association – Research Intelligence – carries an article entitled ‘“What works” as a sublinguistic grunt’. Written by a leading and respected educational researcher, the article concludes by arguing that: ‘there are no silver bullets… Teachers are thinking, independently reasoning professionals and professional development surely depends more on engagement in a community that shares and argues about ideas than it does on being spoon‐fed tips, bullet points and too‐easy answers about “what works”’ (Thomas, 2009, p. 22).

In a similar vein, the textbook Research Methods in Education (Cohen, Manion, & Morrison, 2007), perhaps the most popular and widely used textbook in education within the UK, includes a particularly negative and misleading section on RCTs. Students reading this will learn, for example, that: ‘randomised controlled trials belong to a discredited view of science as positivism’ (p. 278) and that ‘often in educational research it is simply not possible for investigators to undertake true experiments, for example, in random assignment of participants to control or experimental groups’ (p. 282). Moreover, they will also be taught that:

…even if we could conduct an experiment… it is misconceived to hold variables constant in a dynamical, evolving, fluid, open situation. Further, the laboratory is a contrived, unreal and artificial world. Schools and classrooms are not the antiseptic, reductionist, analysed‐out or analysable‐out world of the laboratory. (p. 277)

These criticisms need to be understood, in part, as a response to what is perceived to be the promotion of a particular and rather restricted political discourse on ‘what works’ by governments, particularly in the UK and USA, which tend to hold up ‘what works’ research and evidence‐based practice as the panacea that will address educational decline and promote standards (Oancea & Pring, 2009). Moreover, such a discourse has been felt to underpin the increasing regulation and control of teachers and schools and, as touched upon earlier, the undermining of teachers’ professional autonomy by reducing their role to that of technician; increasingly being required simply to deliver ready‐packaged programmes that have been proven to work (Cochran‐Smith & Lytle, 1999; Moore, 1999). Furthermore, the privileging of particular methods – most notably the RCT – has been perceived by some as leading to the marginalisation and exclusion of other (particularly qualitative) forms of educational research and inquiry (Hammersley, 2004).

Such perceptions need to be taken seriously, not least because they run deep within the education research community and tend to play a powerful role in reproducing unhelpful methodological dualisms and divisions (Pring, 2000; Gorard & Taylor, 2004). Left unchecked, they will continue to mislead researchers in education and discourage them from engaging in research into educational effectiveness. Fortunately, there are a number of ways in which these perceptions can be challenged and addressed. The greater use of mixed methods, the promotion of collaborative research with practitioners and the democratisation of effectiveness research more generally will all help to break down this view of research into educational effectiveness as being narrowly defined and exclusionary.

However, and beyond this, more can be done in relation to educational effectiveness research to help counter the perception of it as crudely positivist in nature and concerned with generating simplistic and universal claims. In this sense, for example, the greater contextualisation of the findings of RCTs will help to shift the discourse from ‘what works’ to a more nuanced, grounded and realist(ic) account of ‘what has worked for a particular group of individuals, within a particular context and at a particular time’ (Pawson, 2006; Pawson & Tilley, 1997). Moreover, even these more grounded claims concerning ‘what works’ are dependent ultimately on the quality and validity of the outcome measures used. As such, it would be helpful for researchers when reporting the findings of randomised trials to reflect upon the inevitably partial and limited nature of many of the outcome measures used and the consequences of the decisions made in relation to which outcomes to focus on and which to leave out.

In addition, while it may be possible to demonstrate that an educational programme is effective with a particular group of children in a specific context through a well‐designed and delivered pilot trial or efficacy test, it cannot be assumed that these effects will be achieved when that programme is taken to scale. As part of the need to develop more grounded approaches to the interpretation of the findings of effectiveness research, therefore, it is important to avoid the temptation to assume that the effects of a well‐controlled pilot programme can be replicated when that same programme is taken to scale. In this regard a greater focus is needed within educational effectiveness research on what has been termed ‘Type II translation research’ (see, for example, Rohrbach, Grana, Sussman, & Valente, 2006), which seeks to study and learn from the implementation, on a much wider scale and in real schools and communities, of educational programmes that have been proven effective through carefully controlled pilot projects (Domitrovich & Greenberg, 2000; Walker, 2004; Greenberg, Domitrovich, Graczyk, & Zins, 2005).

Overall, there are many challenges ahead in relation to the promotion of high‐quality research into the effectiveness of education. Some of these are more technical in nature and include a range of issues relating to the design and statistical analysis of experimental studies and longitudinal and retrospective surveys. These more technical challenges can cover a wide range of issues, from the most appropriate methods for analysing cluster randomised trials (Murray, 1998) through to more fundamental considerations concerning what constitutes evidence and debates surrounding the potential use of Bayesian approaches to the analysis of randomised trials (Spiegelhalter, Abrams, & Myles, 2003). However, the focus of the foregoing discussion has been more broadly based simply because there is a more fundamental challenge facing effectiveness research in education, and that is the scepticism and critical responses it currently attracts from certain sections of the education research community. In this sense perhaps the most important task ahead is to create the space for an open and constructive engagement between researchers in the field and the wider education research community that will, progressively, provide the foundations upon which effectiveness research can be seamlessly integrated with the many other forms of and approaches to research in education.
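To give a flavour of the more technical issues, the sketch below applies the standard ‘design effect’ adjustment associated with cluster randomised trials; the average cluster size, intra-cluster correlation and overall sample size are hypothetical values chosen only to illustrate how sharply clustering can reduce the effective sample size:

```python
# Illustrative only: the standard design-effect calculation for a cluster
# randomised trial, with hypothetical planning values.
pupils_per_class = 25     # average cluster size (hypothetical)
icc = 0.15                # intra-cluster correlation coefficient (hypothetical)
n_pupils = 1000           # total pupils recruited (hypothetical)

# Variance inflation due to randomising whole classes rather than pupils.
design_effect = 1 + (pupils_per_class - 1) * icc      # = 4.6
effective_n = n_pupils / design_effect                 # roughly 217

print(f"Design effect: {design_effect:.2f}")
print(f"Effective sample size: {effective_n:.0f} of {n_pupils} pupils")
```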

An open call for papers

With this vision and purpose of Effective Education in mind, and by way of a summary, it is worth concluding this editorial with an open call for papers. Given the foregoing discussion, the journal would particularly welcome the submission of papers in the following areas:

  • Papers that report the findings of empirical studies into the effectiveness of specific educational programmes, interventions or types of provision. These papers can relate to any aspect of education (focusing on either formal or informal types of provision, and on education relating to any part of the lifespan). However, and by definition, they need to be outcomes‐focused and thus seek to ascertain whether a particular educational approach has had a desired effect. Given the very focused nature of such studies they will need to make use of some type of experimental or quasi‐experimental design and/or other appropriate methods for estimating effectiveness (e.g., longitudinal studies or retrospective survey designs).

  • Papers reporting the findings of qualitative investigations of particular educational programmes or interventions that seek to explain the effects (or lack of effects) found. Such qualitative investigations will normally have been undertaken alongside an experimental study or other appropriate design that has provided concurrent evidence of the effectiveness or otherwise of the particular programme or intervention under study.

  • Papers that report the findings of systematic reviews or other appropriate forms of research syntheses of educational research studies that focus either on the effectiveness of a particular type of programme or intervention or on what approaches tend to be most effective in improving clearly defined educational outcomes.

  • Methodological papers that either provide critiques of existing approaches to the design and analysis of research into the effectiveness of education or seek to develop new and innovative methods. Given the foregoing discussion, papers that focus on the following areas are particularly welcomed:

    – papers that use and/or seek to develop new approaches to combining methods in the study of educational effectiveness;

    – papers that report the findings of well‐designed, small efficacy tests and/or that explore new and innovative ways of undertaking effectiveness research in collaboration with practitioners and other key stakeholders including children, parents and the wider community;

    – papers that examine the reliability and validity of existing outcome measures and/or report on the reliability and validity of newly developed measures.

  • Critical and theoretical papers that challenge existing orthodoxies and encourage new ways of thinking about the nature, purpose, politics and/or policy impact of research into the effectiveness of education.

Notes

1. There are, of course, the individual effects of teachers to be taken into account. In the extreme case, where only two classes are being compared and where each class is taught by a different teacher, then the experiment is completely confounded by the ‘teacher factor’. In other words, if any differences are found between the two classes it is clearly impossible to determine whether such differences are due to the different approaches adopted or due to the inevitable differences in the character and styles adopted by both teachers. This is why it is preferable to include a larger number of classes/teachers in such experimental studies and to randomly allocate them between the intervention and control groups.

References

  • Bridges, D., Smeyers, P., & Smith, R. (Eds.). (2009). Evidence‐based education policy: What evidence? What basis? Whose policy? Oxford: Wiley‐Blackwell.
  • Chatterji, M. (2005). Evidence on ‘what works’: An argument for extended‐term mixed‐method (ETMM) evaluation designs. Educational Researcher, 34, 14–24.
  • Cochran‐Smith, M., & Lytle, S. (1999). The teacher research movement: A decade later. Educational Researcher, 28, 15–25.
  • Coe, R., Fitz‐Gibbon, C., & Tymms, P. (2000, September). Promoting evidence‐based education: The role of practitioners (round table). Paper presented at the British Educational Research Association conference, Cardiff.
  • Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). London: Routledge.
  • Connolly, P. (2009). Developing programmes to promote ethnic diversity in early childhood: Lessons from Northern Ireland (Working Paper No. 52). The Hague, The Netherlands: Bernard van Leer Foundation.
  • Connolly, P., Fitzpatrick, S., Gallagher, T., & Harris, P. (2006). Addressing diversity and inclusion in the early years in conflict‐affected societies: A case study of the Media Initiative for Children – Northern Ireland. International Journal of Early Years Education, 14(3), 263–278.
  • Connolly, P., & Hosken, K. (2006). The general and specific effects of educational programmes aimed at promoting awareness of and respect for diversity among young children. International Journal of Early Years Education, 14(2), 107–126.
  • Day, C., Sammons, P., Stobart, G., Kington, A., & Gu, Q. (2007). Teachers matter: Connecting work, lives and effectiveness. Buckingham, UK: Open University Press.
  • Domitrovich, C., & Greenberg, M.T. (2000). The study of implementation: Current findings from effective programs for school‐aged children. Journal of Educational and Psychological Consultation, 11, 193–221.
  • Gorard, S., & Taylor, C. (2004). Combining methods in educational and social research. Maidenhead, UK: Open University Press.
  • Greenberg, M.T., Domitrovich, C.E., Graczyk, P.A., & Zins, J.E. (2005). The study of implementation in school‐based preventive interventions: Theory, research and practice (Vol. 3). Rockville, MD: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration.
  • Hammersley, M. (2004). Some questions about evidence‐based practice in education. In G. Thomas & R. Pring (Eds.), Evidence‐based practice in education (pp. 133–149). Buckingham, UK: Open University Press.
  • Hiebert, J., Gallimore, R., & Stigler, J. (2002). A knowledge base for the teaching profession: What would it look like and how can we get one? Educational Researcher, 31, 3–15.
  • Moore, A. (1999). Beyond reflection: Contingency, idiosyncrasy and reflexivity in initial teacher education. In M. Hammersley (Ed.), Researching school experience. London: Falmer Press.
  • Murray, D.M. (1998). The design and analysis of group‐randomized trials. Oxford, UK: Oxford University Press.
  • Oakley, A., Strange, V., Toroyan, T., Wiggins, M., Roberts, I., & Stephenson, J. (2003). Using random allocation to evaluate social interventions: Three recent UK examples. Annals of the American Academy of Political and Social Science, 589, 170–189.
  • Oancea, A., & Pring, R. (2009). The importance of being thorough: On systematic accumulations of ‘what works’ in education research. In D. Bridges, P. Smeyers, & R. Smith (Eds.), Evidence‐based education policy: What evidence? What basis? Whose policy? (pp. 11–35). Oxford, UK: Wiley‐Blackwell.
  • Pawson, R. (2006). Evidence‐based policy: A realist perspective. London: Sage.
  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
  • Pring, R. (2000). The ‘false dualism’ of educational research. Journal of Philosophy of Education, 34, 247–260.
  • Raudenbush, S. (2005). Learning from attempts to improve schooling: The contribution of methodological diversity. Educational Researcher, 34, 25–31.
  • Reinking, D., & Bradley, B. (2008). Formative and design experiments. New York: Teachers College Press.
  • Rohrbach, L., Grana, R., Sussman, S., & Valente, T. (2006). Type II translation. Evaluation & the Health Professions, 29, 302–333.
  • Spiegelhalter, D., Abrams, K., & Myles, J. (2003). Bayesian approaches to clinical trials and health‐care evaluation. Chichester, UK: John Wiley & Sons.
  • Thomas, G. (2004). Introduction: Evidence and practice. In G. Thomas & R. Pring (Eds.), Evidence‐based practice in education (pp. 1–20). Buckingham, UK: Open University Press.
  • Thomas, G. (2009). ‘What works’ as a sublinguistic grunt, with lessons from catachresis, asymptote, football and pharma. Research Intelligence, 106, 20–22.
  • Toroyan, T., Oakley, A., Laing, G., Roberts, I., Mugford, M., & Turner, J. (2004). The impact of day care on socially disadvantaged families: An example of the use of process evaluation within a randomised controlled trial. Child: Care, Health and Development, 30(6), 691–698.
  • Walker, H.M. (2004). Commentary: Use of evidence‐based interventions in schools: Where we’ve been, where we are, and where we need to go. School Psychology Review, 33, 398–407.
  • Whitty, G. (2006). Education(al) research and education policy making: Is conflict inevitable? British Educational Research Journal, 32, 159–176.
