Editorial

Evidence-informed practice in education: meanings and applications

Julie Nelson & Carol Campbell

The term ‘evidence-informed practice’ (EIP) attracts much attention, with many arguing that evidence-informed schools and colleges are an essential feature of effective education systems (see, for example, Mincu 2014; and Greany 2015). This focus on EIP is not new (see Weiss 1979; and Hargreaves 1996). A variety of programmes has been developed, over the decades, to improve the quality of evidence, its comprehensibility, and its impact on teaching and learning (see Borg 2010; Bevins, Jordan, and Perry 2011; Haslam 2011; Gough 2013; and Hulme 2013). Examples include the Teaching and Learning Research Programme in the UK (Parsons and Burkey 2011; Pollard 2011), changes in educational accountability policies in the USA, which have influenced the nature of educational research and data (Slavin 2004; Easton 2010), and a comprehensive Education Research and Evaluation Strategy in Ontario, Canada (Campbell 2014).

In spite of such developments, and examples of effective practice, it has proven difficult to achieve EIP at a system level (Bryk, Gomez, and Grunow 2011; Durbin and Nelson 2014), and the debate about how to achieve this continues. This is partly due to a lack of consensus about the ‘meaning’ of EIP, with key questions still requiring resolution. For example: ‘What constitutes reliable evidence?’; ‘Is experimental research always the “gold standard”?’; and ‘What is the status of educator/practitioner-led research?’ Further investigation is also needed to understand better the ‘mediating processes’ that connect evidence and practice.

A recent development has been an apparent growth in grassroots demand for evidence among the educator community. In the UK, this has been seen, for example, through the teacher-led ResearchED movement and the launch of an evidence-informed Chartered College of Teaching. Internationally, there have been calls to ‘Flip the System’ (Evers and Kneyber 2015) by teachers themselves seeking to lead educational change with an emphasis on their professional judgement rather than top-down policies. There is also increasing recognition that EIP is not a simple matter of improving the supply of research, or increasing the demand for it (Nelson and O’Beirne 2014), but rather that key preconditions must be in place so that educators are ‘ready’ to critique, implement and adapt evidence as they encounter it (Roberts 2015). Evidence needs to be planted in ‘fertile ground’ if it is to take root and grow.

This special issue of Educational Research invited contributors to provide analysis and critique around many of these issues. Authors were guided by a series of conceptual and application questions, which may be summarised as follows.

Conceptual questions

(1) How is the term EIP defined and understood by different stakeholders?

(2) What is the relationship between EIP and positive outcomes for different groups? What constitutes a positive outcome in the context of EIP?

(3) How can EIP be measured effectively? What challenges are associated with measurement?

Application questions

(4) How do evidence-informed schools or teachers undertake their practice? What conditions facilitate progress?

(5) What strategies enable effective ‘knowledge mobilisation’? What enables and inhibits success?

The issue includes papers from researchers and educators across Australia, Canada, England, the Netherlands and the USA. These provide rich and varied conceptualisations of EIP as well as insights into the application of evidence in practice. We introduce these papers through our editorial, which is structured around three key themes: Definitions of EIP (Question 1); the Application of EIP – conditions for effective knowledge mobilisation (Questions 4 and 5); and Relevant outcomes and the challenge of measurement (Questions 2 and 3).

Definitions of EIP

Securing a precise definition of EIP is challenging. There are a number of semantic disparities that, on the one hand, appear to detract from the task of achieving systemic change but, on the other, strike at the heart of issues such as belief, ownership and relevance, and the likelihood of evidence gaining traction with the teaching profession. Contentious questions include: are ‘research’ and ‘evidence’ one and the same (Nelson 2014)? Are ‘evidence-based’ and ‘evidence-informed’ practices fundamentally different (McFarlane 2015)? And, perhaps the most intensely debated, whose evidence counts?

Whose evidence counts?

In the UK, Ben Goldacre recently argued that education is not evidence based, citing a paucity of robust, ‘what works’ evidence gathered through randomised controlled trials (RCTs) (Goldacre 2013). Partly in response, the UK has seen the development of ‘What Works Centres’ seeking to apply robust evidence to improve public services. Similarly, the USA has witnessed an increase in Federal Government-endorsed ‘what works’ methodologies (Slavin 2004; Easton 2010). There is much value in understanding the impact of policies and practices on outcomes, but there is also a recognised challenge in embedding such learning across education systems (Becheikh et al. 2009; CUREE 2011; Gough 2013; Sharples 2013). Additionally, not all scholars regard this kind of scientific approach to measuring outcomes as the solution to improving practice. Bredo (2006), for example, expressed concern about a narrowing of education research to experimental studies of ‘what works?’ to the exclusion of broader questions such as ‘what matters?’ (see also Nutley, Powell, and Davies 2013). EIP is not simply a technical activity; it is influenced by personal and professional values and beliefs, and affected by wider political and educational contexts, policies, and changes.

Evidence-based, research-based or evidence-informed practice?

We named this special issue ‘Evidence-informed practice in education’ to reflect our view that evidence is just one of a number of factors that influence educational decisions, with educators needing to apply professional judgement rather than being driven solely by research evidence or data. Many stakeholders use the terms ‘evidence-based’ and ‘evidence-informed’ interchangeably. Jonathan Sharples, in his description of what evidence-based practice is not, makes a strong case for evidence-informed practice:

Evidence-based practice is not ‘cook book’ teaching or policing, nor should it be about prescribing what goes on from a position of unchallenged authority. It is about integrating professional expertise with the best external evidence from research to improve the quality of practice. (Sharples 2013, 7)

The papers in this special issue provide a variety of lenses on EIP. However, they also promote a common message – that EIP is not a one-dimensional concept. The papers by Danielle LaPointe-McEwan, Christopher DeLuca and Don Klinger (Queen’s University, Ontario, Canada), and Chris Brown, Kim Schildkamp and Mireille Hubers (University College London, England, and the University of Twente, the Netherlands) argue that EIP must be seen as the integration of professional judgement, system-level data, classroom data and research evidence.

A related point is that ‘evidence-informed’ and ‘research-informed’ practice are not one and the same, although research evidence is an integral piece of the evidence-informed jigsaw. The question of what constitutes ‘research evidence’, of course, still divides researchers and educators. Not only is there ongoing debate about the relative robustness of different academic research methodologies (Bredo 2006; Goldacre 2013; Nutley, Powell, and Davies 2013); there is also disagreement about the value of research that is generated by educators. Such research is often dismissed as small-scale, anecdotal or non-replicable (Borg 2010; CUREE 2011; Enthoven and de Bruijn 2010; Wilkins 2012). However, Bryk (2015) argues that there is a place for what he terms ‘practice-based evidence’ (as opposed to evidence-based practice). His view is that fine-grained, practice-relevant knowledge, generated by educators, can often be applied formatively to support professional learning and student achievement.

One of the papers focuses less on research and more on classroom data as a form of evidence. LaPointe-McEwan and colleagues remind us that, just as there is a hierarchy of research evidence, so classroom data is perceived as hierarchical, with formal summative assessments dominating decision making. Their research illustrates the importance of educators taking a broad view of classroom evidence, developing data-literacy skills, and learning to ‘triangulate’ qualitative and quantitative sources to inform well-balanced decisions. Brown and colleagues consider the worlds of data-based decision making (DBDM) and research-informed teaching practice (RITP). They argue that in an evidence-informed system, these approaches should be integrated, yet rarely are. Their paper proposes a model for ‘evidence-informed school and teacher improvement’, taking best practices in DBDM and RITP and applying these through a systematic school enquiry cycle.

An interesting perspective on EIP is provided by Mark Rickinson, Kate de Bruin, Lucas Walsh and Matthew Hall (Monash University, Victoria, Australia, and Ministry for the Environment, New Zealand) in another paper in this special issue. They argue that educational practice has much to learn from educational policy. Indeed, they present policymaking as a form of practice, and posit a number of similarities in the types and variety of evidence utilised by policymakers and educators. Their research illustrates that in both policy and practice, although a variety of sources are used, there is a tendency to rely on evidence that is well known rather than on that which is necessarily appropriate to the task. The authors conclude that the distinction between ‘available’ and ‘appropriate’ evidence (Earl and Timperley 2009) is an issue that applies across both practice and policy. They also note that the notion of ‘appropriateness’ must be considered in relation to context, and not prescribed.

The application of EIP – conditions for effective knowledge mobilisation

Several of the papers attempt to identify, critique and evaluate what evidence actually exists about the applications of EIP. EIP can relate to individuals or groups throughout the education system engaging in or with research (CUREE 2011; Nelson and O’Beirne 2014). For example, LaPointe-McEwan and colleagues identify the importance of ‘middle leaders’ in supporting collaborative inquiry networks within and across schools; Amanda Cooper, Don Klinger and Patricia McAdie (Queen’s University, Ontario, Canada, and Elementary Teachers’ Federation of Ontario (retired), Canada) focus on teachers but also identify important roles for school and district leaders, professional development providers and teachers’ unions; and Carol Campbell, Katina Pollock, Patricia Briscoe, Shasta Carr-Harris and Stephanie Tuters (Ontario Institute for Studies in Education, University of Toronto, and Western University, Ontario, Canada) provide an example of a government-university partnership to support research and practice projects with multiple stakeholders, intended to mobilise research across an entire education system. As Andy Hargreaves and colleagues have argued about educational change in general, the purpose, goals and scales of activity have become ‘bigger’ (Hargreaves et al. 2010, xii) in intended impact and outcomes. EIP can be multi-level and multi-layered, engaging individuals, groups, organisations, networks and entire education systems.

A key debate, therefore, as proposed by Campbell and colleagues in this issue, is ‘how to get evidence into practice and vice versa.’ Six years ago, a systematic review published by the Centre for Use of Research and Evidence in Education (CUREE 2011) concluded that ‘Practitioner engagement in and/or with research, in terms of engagement with outputs and in processes, is becoming an increasingly prevalent feature of professional learning and development in England’ (p. 6), with potential benefits for teaching practices and students’ learning; yet the same review also concluded that ‘there is still a long way to go’ (CUREE 2011, 6). This concern to further develop practitioner engagement in and with research is not unique to England; for example, the Organisation for Economic Co-operation and Development has conducted reviews of how to improve the use of research by practitioners and policy-makers in Denmark, Mexico, New Zealand, and Switzerland, as well as England. As indicated, for example, in the Cooper et al. paper, developing EIP requires the producers of research (as well as the users) to develop their capacity to communicate, connect and apply evidence and practice in new ways. There are many conceptions of how EIP processes could be developed and applied. The term ‘knowledge mobilisation’ (KMb) has risen in prominence in recent years, particularly in education in Canada, as reflected in Cooper et al.’s paper in this issue, which explores ‘the process through which research and data become integrated (or fail to become integrated) into educational policies and practices.’ For Campbell and colleagues, ‘Mobilisation implies social interaction and iterative processes of co-creating knowledge through collaboration between and among researchers, decision-makers and practitioners’ to support the sharing, creating and using of evidence. Across the papers, four main themes concerning KMb strategies, processes and outputs emerge: communication and dissemination; capacity building; partnerships and networks; and systemic approaches.

The notion that research use involves the dissemination of research publications and findings is long-standing (Weiss 1979). Nevertheless, the effective communication of evidence remains a central concern in advancing EIP. In an article exploring EIP from a unique perspective, Nathalie Carrier (Ontario Institute for Studies in Education, University of Toronto) investigates how popular educational innovations may be promoted and gain appeal, irrespective of their relationship with the evidence base. Her findings draw attention to the use of persuasive communication strategies that can potentially be highly influential. Taking a different starting point, Cooper and colleagues investigate how teachers seek out information concerning effective classroom assessment (an area of educational practice where there is considerable evidence). Their findings indicate that ‘teachers predominantly acquire information about assessment practices from other teachers.’ Even when attention is paid to developing teachers’ use of evidence, as in the LaPointe-McEwan et al. study, the findings suggest that use of evidence from practice is more prevalent than use of original research studies. Therefore, knowledge mobilisation needs to attend to the capacity to access, understand, share and act on many forms of evidence, including research. Carrier calls for educators to have opportunities to develop an ‘analytical stance’ and ‘evaluative skills’ in order to understand a range of evidence. Brown and colleagues explain the importance of both research and data literacy in bringing together evidence for professional inquiry and school improvement.

This special issue also highlights the need for researchers to improve their capacity to mobilise their work more clearly, accessibly and effectively. A finding from the Knowledge Network for Applied Education Research (KNAER) case study, presented by Campbell and colleagues in this issue, is that both producers and users of evidence often do not know how, in practice, to ‘mobilise knowledge’ for EIP. In this case, a system-wide strategy with specific tools, resources and capacity building for research-practice partnerships developed over time. Indeed, the development of partnerships and networks within, between and among individuals and organisations is considered an important way to advance EIP. Opportunities for collaboration, co-creation, sharing and application of professional knowledge and external evidence can be beneficial. Individuals, such as school and system leaders, and organisations, such as professional associations and research institutions, can play an important intermediary role in mediating what evidence is communicated and connected to practice. Furthermore, in light of persisting challenges of time and resources to access, understand and use evidence, attention to developing a system-wide culture and infrastructure for EIP is also important. This is indicated by Campbell and colleagues (this issue), who conclude that ‘Blending the importance of quality products, collaborative relationships and commitment to developing capacity and addressing challenges system-wide are crucial to the mobilisation of research and professional knowledge genuinely for evidence-informed practice.’

EIP requires multiple strategies, processes and activities, which will vary depending on the purposes to be achieved, the contexts of practice, the availability of evidence, and the individuals and/or organisations involved. The approaches will also vary over time, as needs, challenges and successes emerge. However, the question of which specific combinations of EIP strategies, processes and activities have the greatest impact for particular outcomes requires further attention.

Relevant outcomes and the challenge of measurement

When we launched the Call for Papers for this special issue, we were aware that there were two key knowledge gaps. These were: (a) knowing how to measure, and therefore understand, the extent of EIP amongst the teaching profession; and (b) understanding the impact, if any, that EIP has on teaching practice and learner outcomes.

The measurement of EIP is challenging, not least because it relies on clarity of definition, or at least some decisions about the features of EIP that should, or can, be measured. It also requires decisions to be made about the evidence needed to judge whether or not EIP has been achieved, and to what ends. Dagenais et al. (2012) found that evidence about the impacts (and therefore benefits) of EIP was scant. Although there is some research indicating that EIP can contribute to school improvement (see, for example, CUREE 2011; Greany 2015; and Schleicher 2011), there is still a need for more rigorous evaluation, both qualitative and quantitative.

A number of researchers are currently considering the challenge of measurement and are devising strategies and instruments to understand better a range of outcomes. Sophisticated work is underway, but most of this is under development and not yet ready for reporting.

In the USA, the Center for Research Use in Education (CRUE) and the National Center for Research in Policy and Practice (NCRPP) are developing suites of survey instruments to measure research use in schools and school districts at a variety of outcome levels.

In Scotland, the Research Unit for Research Utilisation (RuRu) at the University of St Andrews is considering the issue of measurement from a cross-sector perspective, including education.

In England, the Education Endowment Foundation has funded a number of collaborative projects that are working to improve the mobilisation of research information. These are being evaluated to assess the relative effectiveness of different strategies. The National Foundation for Educational Research (NFER) has developed the measurement survey that is being used by the various evaluation teams (see Nelson et al. 2017; Poet, Mehta, and Nelson, in press).

In the knowledge that there is still much to learn about the impacts of EIP, we invited contributors to this special issue to consider challenges and solutions in defining appropriate outcomes and measuring impact. We did not receive many papers that considered these issues in detail. The paper by Laura Wentworth, Christopher Mazzeo and Faith Connolly (California Education Partners, Education Northwest, and Baltimore Education Research Consortium, USA) is one contribution in this special issue that considers this area in depth.

Wentworth and colleagues consider the conceptual and practical challenges in developing a survey to quantify the perceived impact of Research Practice Partnerships (RPPs) on educators’ evidence-based decision making in the USA. Drawing on the work of Cynthia Coburn and colleagues at the NCRPP, they highlight the importance of teasing out the outcomes that contribute to impact. Their paper signals the need to consider intermediate outcomes (for example, educator behaviours or ‘mindsets’) as well as the long-term outcomes identified by NCRPP academics: impacts on practices (instrumental outcomes); impacts on thinking (conceptual outcomes); and the use of evidence to legitimise an approach or persuade others of its value (symbolic outcomes). The authors indicate that the survey is to be developed further; it will be interesting to see how it enhances understanding of the benefits of middle-tier systems, such as RPPs, for EIP. They remind us that context is critical when interpreting outcomes.

Conclusions – further developing EIP

From the origins of work on research use (Weiss 1979), there have been shifting debates about, and attention to, the conceptualisation, application and impact of EIP within education and across public policy sectors (for example, Nutley, Walter, and Davies 2007). In the papers presented in this special issue of Educational Research, there is a general consensus that ‘evidence’ constitutes a range of types and sources of knowledge and information, including professional expertise and judgement, as well as data and research. Indeed, despite the considerable debate about ‘gold standards’ of research methodologies, the most frequently used sources of ‘evidence’ are often derived from professional experiences and colleagues rather than from original research studies. While EIP should not be exclusively about researchers pushing their findings out (Lavis et al. 2003), and while practice-informed evidence from educators is vital (Tseng 2012), we need to consider carefully the accessibility, appeal and capacity to use a range of evidence from, in and for practice.

The process of being evidence-informed requires both rigorous evidence and a rigorous process of professional judgement (Campbell 2016). The papers in this special issue indicate considerable attention to, and activity for, EIP, with promising examples involving students, teachers, school and system leaders, researchers, intermediary organisations including professional associations/unions, governments and a range of partners. At the same time, common and persisting challenges require further attention: access to quality evidence; time for professional engagement and inquiry; professional development and capacity for all involved to understand, share, (co)develop and apply evidence in and for practice; and, vitally, approaches for evaluating the strategies, processes, activities and outcomes of EIP. Ironically, perhaps, we continue to need more evidence about the application and outcomes of EIP in practice.

This takes us back to the questions: ‘What constitutes a positive outcome in the context of EIP?’ and ‘How can EIP be measured effectively?’ We need to be mindful that these are not simple questions with easy answers. As Wentworth and colleagues remind us, outcomes are likely to be context-specific and to occur at different levels, according to the extent and depth of practice. However, if the educator community is to engage in and with EIP, its benefits need to be apparent and clearly articulated. This means that researchers and educators, in partnership, must seek to understand the quality of different knowledge mobilisation (KMb) strategies (with reference to well-developed theories of change), and to measure educators’ varied uses of educational research and data, their production of knowledge, and the impacts of these activities on professional learning and, ultimately, learner outcomes. This will require multiple quantitative measures alongside rich, descriptive data. EIP is a dynamic process; we must not fall into the trap of seeking to understand only those components that can easily be quantified.

Julie Nelson
National Foundation for Educational Research, England
[email protected]
Carol Campbell
Leadership and Educational Change, Ontario Institute for Studies in Education, University of Toronto, Canada
[email protected]

