Editorial

Data use in education: alluring attributes and productive processes

In recent years, governing regimes in education that emphasize ‘data’ and ‘evidence’ as a basis for decision making, performance management and accountability have been introduced in several countries, including the Nordic countries. Data use has often been referred to as the centrepiece of so-called evidence-based governing regimes, where student performance data are considered to form an ideal basis for coordinating decisions and activities on different levels in the school system (Ozga, 2009). Various types of assessment tools that produce data on student performance provide a basis for generating the information used for policy making and initiating and legitimating change in education.

Data use practices are usually defined as what takes place when individuals interact with test scores, grades and other forms of assessment data in their work (Coburn & Turner, 2011; Spillane, 2012). Data in themselves are just raw material; they need interpretation and rely on actions performed by different actors in order to become evidence (Little, 2012). This means that knowledge about the various ways in which data are used by authorities, school leaders and teachers is crucial for critically assessing and discussing the possible consequences of these developments for the governing of education (e.g. Jennings, 2012; Racherbäumer, Funke, van Ackeren, & Clausen, 2013; Schildkamp, Karbautzki, & Vanhoof, 2014). Such processes have, to a limited extent, been empirically researched within European and Nordic contexts (Prøitz, Mausethagen, & Skedsmo, 2017). At the same time, the multitude of data use levels, perspectives and practices demonstrates the strong presence and impact of data on all areas of contemporary education, including the fields of policy, practice and research.

In particular, data on student performance are increasingly being used for accountability purposes. Such data use often includes a double aim with an embedded tension: the control and monitoring of professional practice on the one hand, and organizational and professional development on the other. This tension is often under-communicated in policy and research, and it may create dilemmas for practitioners (e.g. Skedsmo, 2009). Therefore, knowledge about how these new forms of data are used by different actors, at different organizational levels, and in different countries can provide insights into the characteristics of new regimes of school governance; furthermore, it can address the possible constraints involved, along with the potential for learning and development that can take place.

Data in themselves are often considered to be efficient, standardized, uniform and intuitive measures that are productive for use in a range of processes for developing the educational system, teaching and learning (Porter, 1995). However, the very same attributes can lead to exaggerated expectations of what can be achieved based on data and data use, and to the simplification of complex education processes. Consequently, what we here describe as the alluring attributes of data and data use (i.e. efficient, standardized, uniform and intuitive) might mask important aspects of knowledge and nuances in education processes that are important for productive developments in education. At the core of this discussion also lies a question about educational values: monitoring student progress and outcomes can inform policy makers, administrators, teachers and school leaders so that they may better support and help all students to meet their learning goals. Nonetheless, testing and the use of data can also lead to the marginalization of students, decrease motivation and narrow the broader educational goals of inclusion, well-being and formation.

This special issue presents research that discusses data and data use in education from a broad range of perspectives: in practice, in research and in theory, and as transnational, international and national policy. It also discusses data and data use as a basis for change for and by actors on the individual, local, regional, national and international levels of the education system. The contributions represent positive as well as more critical outlooks on data and data use in education. We find it important to present and discuss data and data use from various perspectives and research traditions and in different contexts for the development of research and knowledge production in this area. Although different theoretical frameworks, research methods and analytical approaches are applied in the studies, there is a common message related to the need for further investigations into the tensions inherent in data use in education.

The articles presented in this special issue highlight numerous issues embedded in concepts related to data and data use – all of which underscore important questions regarding what constitutes data and data use, and the magnitude of associations, both negative and positive, that these concepts produce. Underscoring the complexity of understanding data and data use, the articles emphasize the difficult balance that must be struck between data use for development purposes and data use for control purposes. The articles also have in common that they contribute to the field by nuancing and refining our understanding of the different contexts in which data and data use appear in education. In doing so, they reinforce how much of the educational sphere is currently occupied by data and data use, and how the phenomenon seems to be more or less borderless. Moreover, data use is situated as both the problem and the solution in education, in terms of policy, practice and research. This not only leads to questions about the consequences of such an expansion – of data being everywhere – for the core education project, namely student learning, but also raises several epistemological questions about data use as a research subject. This notion of data and data use being everywhere in education can become an investigative problem if everything counts as data and data use.

Focusing on research in particular, the characteristically borderless and open nature of data and data use may reflect a poorly theorized concept for investigative purposes, indicating a need for further studies on the theoretical and conceptual foundations of data use in education. One suggestion could be to develop more fine-grained concepts for what we are studying, to define it more explicitly, and to use specific terms such as student grades, national tests and standardized testing. This presents a challenge, as such information is rarely used alone, which illustrates the complexity of educational decision-making processes at all levels. For example, what characterizes professional knowledge is that different elements in the knowledge base are interrelated because they are necessary to perform specific tasks, or because they concern particular cases (Grimen, 2008; Shulman, 1987). From this perspective, a number of knowledge sources have to be used when making educational decisions. For instance, we know that teachers draw upon a range of knowledge sources when deciding upon their instructional strategies, such as informal data from unstructured classroom observations, subject-didactic knowledge, relational knowledge, formal data from assessment results, research results and research reviews (e.g. Heitink, Van der Kleij, Veldkamp, Schildkamp, & Kippers, 2016; Mausethagen, Prøitz, & Skedsmo, 2017; Schildkamp et al., 2014). However, teachers often struggle with how to use and integrate these sources; in addition, policy makers and administrators tend to draw upon somewhat different knowledge bases than teachers and, in part, school leaders (Labaree, 2005; Møller, 2015). Thus, one could argue that the integration of knowledge sources and professional discretion is becoming an increasingly pressing issue as the amount of data increases. A significant implication of a stronger focus on data use is therefore the need to maintain a keen awareness of the complexity of educational knowledge and processes as a strong asset for development, rather than accepting simplified problems and solutions as framed by the data.

In the first article in this special issue, Altrichter and Gamsjäger analyse and discuss how performance standard policies are included in evidence-based governance in education in Austria. These policies, introduced in 2008, comprise the communication of competence-based output standards, the provision of support material (e.g. competence-based assignments and diagnostic tests) and in-service training opportunities, nationwide comparative competence tests (at the end of the primary and lower secondary cycles of schooling), and data feedback of assessment results to students, teachers, schools and administrative authorities. The authors aim to develop a conceptual model for research into the processes and effects of performance standard policies. Official documents are analysed in order to formulate the ‘programme theory’ underpinning the policies (i.e. their intended effects) and their processes and intermediary mechanisms, which are outlined in a conceptual model that may be used to organize and orchestrate research into performance standard policies. The results show that five major intermediary processes are meant to organize the pathways from ‘policy elements’ to ‘intended effects’ by aligning relevant actors to specific ways of organizing and coordinating their actions: ‘setting expectations, stimulating by data feedback, alignment by support, involving stakeholders and alignment by in-school coordination’. The authors conclude that further research should explore the unintended effects of these policies.

Huguet, Allen, Coburn, Farrell, Kim and Penuel’s paper makes a methodological contribution to micro-level investigations of data use. As they point out, while there is an abundance of data use literature available, there is a need to further develop methodological approaches for studying naturally occurring data use in decision-making processes over time. The authors present a strategy for understanding data use via long-term observations of policy-making deliberations among educational leaders. Through the use of longitudinal and observational data, the authors create ‘decision trajectories’ that enable them to trace micro-processes of deliberation around specific decisions over time. By means of this methodological approach, the authors are able to address data use as it arises in the context of longitudinal observations. Such approaches can provide deeper insight into how data are used to inform, frame and justify educational decisions: ‘Participants talked about a variety of topics, often in a non-linear way, with some topics being connected to and interdependent with other topics. Creating a decision trajectory out of episodes of topical deliberation helped us see how education leaders made sense of specific issues over time and decided on next steps in ways that shaped future work […] looking beyond a single observation or meeting made it possible to understand how invocations of data fit in the larger context’.

Petterson, Popkewitz and Lindblad elaborate on a systematic research review of international large-scale assessment (ILSA) research. The authors identify several activities operating under the ‘formal radar’ of science and governmental policy, which they analytically name ‘grey-zone’ activities. These activities are historicized, presented and discussed. In their study, three different reasons for performing the activities are given, based on an analytical division into entrepreneurial policy, entrepreneurial profit and appurtenance. This division contributes towards highlighting some of the actors in the educational grey zone, such as McKinsey and Pearson. The article contains examples of the activities that can be found in such a grey zone, and the authors argue that ‘grey-zone activities are involved in creating a “neutral” vision of education and a “neutral” vision of what education should be like, which is in itself reason enough to further investigate the increasing number of grey-zone activities involved in forming today’s educational policy’.

Prøitz, Mausethagen and Skedsmo report on the findings of their literature review of research on data use in education written in English, German and Scandinavian languages and published between 2000 and 2014. The review is inspired by methods for systematic mapping. The analysis illustrates how the characteristics of the total corpus of 129 articles on data use in education vary across different contexts, countries and regions. In all contexts, the studies primarily investigate structures and systems around data use. While the Anglophone studies are mainly empirical and often concerned with implementation and effectiveness in terms of data use, the studies published in German and Scandinavian languages focus more heavily on discussions and analytical reflections upon the developments of data use in education. Six investigative modes of study on data use in education (overlapping and not mutually exclusive) are identified, defined, presented and discussed, which can contribute towards creating a more nuanced understanding of research in this area: ‘implementation studies, explorative studies, overview studies, discussion studies, methodological studies and system critical studies’.

Lundahl, Hultén and Tveit use the Swedish school system as an example to demonstrate how teacher-assigned grades have a major role in performance management and accountability. They study how politicians view and legitimize the strengths of grading in an outcome-based accountability system. Based on a two-part analysis, the authors show how grades, through complex processes of legitimation, have acquired and retained a central position in governing the overall quality of the educational system in Sweden. They argue that in the Swedish system, grades are used in an administrative, rather than a pedagogical, capacity and thus function as a shorthand that effectively reduces the complexity of communication between various actors with regard to what students learn and accomplish in education. Grades then become legitimate in terms of their communicative rationality. The authors conclude that in order to turn grading into an instrument that can moderate some of the downsides of testing regimes, a broader view of what constitutes outcomes in education needs to follow: ‘Even though grades, as compared to external tests, have unique potential in an accountability system, when used as a quick language of comparisons and competitions, some of the finer nuances of grading are lost, e.g. how they express teacher trust; longitudinal observation of children’s development; and how they reveal the interconnection between curriculum, teaching and evaluation’.

The special issue continues with two empirical studies on teachers’ perceptions of the use of standardized test results, derived from two different European contexts. Werler and Klepstad Færevaag report on a study of Norwegian teachers’ use of data from the results of national tests. They take as a starting point that national tests are part of the Norwegian school system’s top–down accountability, and that – according to official regulations – teachers have to use the test results to improve learning outcomes even if the testing system is not able to deliver the necessary data. By drawing on Bernstein’s concept of pedagogical devices, the authors find that the data from the national tests rule both the work of teachers in the classroom and the content provided to the pupils. They argue that the very existence of the national tests seems to challenge teacher autonomy, restrict teachers’ practice and reinforce the impact of unfair structures on pupils’ learning. Yet, the results of the study also point to how teachers are in what the authors describe as a state of data illiteracy towards complex item response theory tests, leading to challenges in terms of how to handle data use: ‘On the one hand, teachers evaluate national tests as a professionally non-reliable source of information. On the other hand, they do not criticize its psychometric approach. The resulting contradiction suggests that teachers value their experience as well as their beliefs as sources of reliable data’. Thus, the authors also ask whether the tests have the power to confuse teachers about the knowledge they should focus on in their work, as the teachers do not necessarily know how they can help pupils to achieve better test results.

Demski and Racherbäumer consider how poor performance in international student assessments led to calls to enhance evidence-based practice in the German educational system. Based on questionnaire and interview data from three different studies, and comparing schools in different circumstances, the authors examine the perceived usefulness and the application of 13 different sources of information that can inform the practice of teachers and school leaders. Their results show that practitioners attributed little usefulness to standards-based reform and consequently hardly used the data from standardized testing following the reform. Instead, practitioners preferred process-oriented and more informal information sources, such as student feedback. A comparison of the different samples indicated that data use may be lower in schools in challenging circumstances. In face-to-face interviews, a considerable proportion of the interviewees explained that they made little use of data because of a lack of time. They also described problems related to recontextualizing data and evidence and adapting them to suit their own needs. The authors point to a couple of possible explanations for this: ‘The fact that teachers do not receive immediate feedback about their work, but should derive implications for actions from feedback concerning their students’ performance, can explain why practitioners in our studies had problems in recontextualizing evidence and why they favoured information sources such as student feedback or collaborative measures. A further problem (…) might derive from the fact that setting standards and imposing data use in top–down processes goes together with little intrinsic motivation in most cases’. The authors conclude with a call for more micro-level investigations to disclose the influence of the school context on data use.

The special issue concludes with two articles that discuss methodological shortcomings and possibilities concerning the use of aggregated test data and register data in education. Hovdhaugen, Vibe and Seland address the publication of results from national tests in primary and lower secondary schools by Norwegian national authorities. The authors point out that the aggregated test results are meant to provide information on school quality for local government, as well as to be used for school development. However, how the data are presented influences their usability, and this is further affected by the fact that many municipalities and the majority of schools in Norway are quite small. Consequently, the authors argue that in many instances, the information that can be retrieved from aggregated test results at the school or municipal level is of little or no value to the users: ‘A trained user of statistical information could easily interpret the data presented, and would be able to see the limitations of the aggregated test results presented, as well as implications for the work carried out in school leadership and administration. Our concern, however, is that the actual users of this information – who could be principals, teachers or educational administrators at the municipality level, as well as parents and local newspaper journalists – may not always possess the skills needed to obtain a full understanding of what the presented aggregated test results actually mean, and the limits of their utility’.

In the last article in this special issue, Mellander explores the potential of using register data in educational research. Although register data have been used extensively for quite some time in many social sciences, such as sociology, economics and political science, their potential has thus far been insufficiently exploited in educational research related to pedagogy and didactics. The author points to the fact that register data are fundamentally a Nordic phenomenon, since many countries do not have available data about their entire population that can also be linked to other registers. Furthermore, two specific features of register data are considered: their panel data nature, implying that register data analyses can, under certain conditions, account for factors on which the registers themselves are not informative, and the intergenerational links that these data contain, which facilitate the separation of genetic and environmental influences on learning. It is observed that while register data do not contain direct links between students and teachers, this shortcoming can be overcome by merging register data with survey data on these links. The author shows how register data can be used in combination with other types of data and thereby provide a fuller picture of the research area: ‘Often, the best way to promote a new instrument – such as register data for (most) educational scientists – is to show, with concrete examples, what it can do’.
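To make the linkage and panel ideas above concrete, the following is a minimal sketch, not taken from Mellander’s article, of how a register-based student panel might be merged with survey data on student–teacher links; every identifier, column name and value is a hypothetical assumption used purely for illustration.

```python
import pandas as pd

# Hypothetical register extract: one row per student per year (panel structure),
# with an outcome measure and an intergenerational link via a parent identifier.
# All names and values below are invented for this illustration.
register = pd.DataFrame({
    "student_id": [1, 1, 2, 2],
    "year": [2014, 2015, 2014, 2015],
    "grade_avg": [3.9, 4.1, 4.5, 4.6],
    "parent_id": [10, 10, 11, 11],
})

# Hypothetical survey extract: supplies the student–teacher link that
# register data typically lack.
survey = pd.DataFrame({
    "student_id": [1, 2],
    "year": [2015, 2015],
    "teacher_id": [101, 102],
})

# Merge the two sources on their shared keys (student and year).
merged = register.merge(survey, on=["student_id", "year"], how="left")

# The panel structure (repeated observations per student) permits within-student
# comparisons over time, e.g. year-to-year change in outcomes, which is one way
# analyses can net out stable characteristics that the registers do not record.
merged = merged.sort_values(["student_id", "year"])
merged["grade_change"] = merged.groupby("student_id")["grade_avg"].diff()

print(merged)
```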

This special issue highlights the striking variation in what falls under the umbrella of data and data use in education. The articles included reflect this variability in terms of their theoretical frameworks, methodological designs, focus and national contexts. While the research field in general tends to focus on the structural and organizational aspects of data use and how to encourage practitioners to use data, less attention has been paid to data use in practice. Consequently, and as the articles included in this special issue clearly demonstrate, there is a need for further studies on what data and data use are conceptually, as well as how and to what extent data use can be useful in education practice, education policy and education research. In order to do this, there is a need for research that makes use of different research designs and analytical frameworks – and for the creation of channels through which various research traditions can communicate.

Acknowledgements

We would like to express our appreciation to the authors, the reviewers and the editorial board for making this special issue on data use in education possible.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Coburn, C., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9(4), 173–206.
  • Grimen, H. (2008). Profesjon og kunnskap [Profession and knowledge]. In A. Molander & L. I. Terum (Eds.), Profesjonsstudier [Professional studies] (pp. 71–86). Oslo: Universitetsforlaget.
  • Heitink, M. C., Van der Kleij, F. M., Veldkamp, B. P., Schildkamp, K., & Kippers, W. B. (2016). A systematic review of prerequisites for implementing assessment for learning in classroom practice. Educational Research Review, 17, 50–62. doi:10.1016/j.edurev.2015.12.002
  • Jennings, J. L. (2012). The effects of accountability system design on teachers’ use of test score data. Teachers College Record, 114(11), 1–23.
  • Labaree, D. F. (2005). Progressivism, schools and schools of education: An American romance. Paedagogica Historica. International Journal of the History of Education, 41(1–2), 275–288. doi:10.1080/0030923042000335583
  • Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118(2), 143–166. doi:10.1086/663271
  • Mausethagen, S., Prøitz, T., & Skedsmo, G. (2017). Nye styringsformer og kunnskapskilder i sam(spill). Eksempler fra skolefeltet [Interplay between new governance forms and knowledge sources. Examples from the field of education]. In S. Mausethagen & J.-C. Smeby (Eds.), Kvalifisering til profesjonell yrkesutøvelse [Qualification for professional practice]. Oslo: Universitetsforlaget.
  • Møller, J. (2015). Norway: Researching Norwegian principals. In H. Arlestig, C. Day, & O. Johansson (Eds.), A decade of research on school principals: Cases from 24 countries (Studies in Educational Leadership, Vol. 21, pp. 77–101). Switzerland: Springer International Publishing.
  • Ozga, J. (2009). Governing education through data in England: From regulation to self‐evaluation. Journal of Education Policy, 24(2), 149–162. doi:10.1080/02680930902733121
  • Porter, T. M. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton, NJ: Princeton University Press.
  • Prøitz, T. S., Mausethagen, S., & Skedsmo, G. (2017). Investigative modes in research on data use in education. Nordic Journal of Studies in Educational Policy, 2016(1).
  • Racherbäumer, K., Funke, C., van Ackeren, I., & Clausen, M. (2013). Datennutzung und Schulleitungshandeln an Schulen in weniger begünstigter Lage. Empirische Befunde zu ausgewählten Aspekten der Qualitätsentwicklung [Data use and school leadership in schools located in areas with low SES. Empirical findings on selected aspects of quality development]. Die Deutsche Schule, 13(12), 226–254.
  • Schildkamp, K., Karbautzki, L., & Vanhoof, J. (2014). Exploring data use practices around Europe: Identifying enablers and barriers. Studies in Educational Evaluation, 42, 15–24. doi:10.1016/j.stueduc.2013.10.007
  • Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–23. doi:10.17763/haer.57.1.j463w79r56455411
  • Skedsmo, G. (2009). School governing in transition. Perspectives, purposes and perceptions of evaluation policy (Doctoral thesis). University of Oslo, Oslo.
  • Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118(2), 113–141. doi:10.1086/663283