
Profiles of teachers’ assessment techniques and their students’ involvement in assessment

Jitske de Vries, Roos Van Gasse, Marieke van Geel, Adrie Visscher & Peter Van Petegem
Pages 369-388 | Received 07 Sep 2022, Accepted 07 May 2024, Published online: 20 May 2024

ABSTRACT

Two aspects of formative assessment practices in the Dutch and Flemish educational context were explored: the degree to which secondary mathematics teachers implement a variety of assessment techniques in their classrooms, and the extent to which their students are involved in assessment practices. By developing profiles based on the combination of these aspects of formative assessment, we were able to distinguish various developmental stages in teachers’ implementation of formative assessment. Compared to their Flemish colleagues (n = 83), Dutch teachers (n = 120) used a wider variety of assessment techniques and stimulated student involvement in assessment more. Features of the educational context, such as the availability of teacher professional development for formative assessment, possibly influence development towards use of formative assessment practices in the classroom. The profiles can be used to inform teacher professional development initiatives, as they give insight into the current status of teachers’ assessment practices.

Introduction

Assessments can play a key role in teaching and learning, because data resulting from assessments can inform both teachers and students about students’ current status in relation to achievement of the learning goals (Hattie and Timperley Citation2007). In the past, assessments were often isolated activities to test whether students had mastered the desired knowledge, for example, at the end of a course (i.e. summative purposes). Nowadays, the use of assessments for formative purposes is being increasingly emphasised; an essential part of formative assessment is answering the question, ‘How am I going?’ (Brooks et al. Citation2021; Hattie and Timperley Citation2007). In formative assessment, it is expected that through a variety of both quantitative and qualitative assessment techniques, teachers and students can make better-informed decisions on how to (re-)adapt their teaching and learning efforts (Wiliam and Leahy Citation2016). The assessment results are considered to be a starting point for follow-up teaching and learning. For example, teachers can use assessment results for providing students with timely feedback or for posing diagnostic questions to students that can illuminate (the causes of) their misconceptions (Wiliam Citation2011). The role of assessor can also be fulfilled by students themselves, by investigating their own learning process and using checklists or rubrics (e.g. a tool that includes the learning goals and parameters for success in achieving those learning goals) to assess their success with respect to the learning goals (Wiliam Citation2011). The combined perspectives of both teacher and students on students’ learning can provide rich evidence for (the lack of) learning progress and for how best to continue that process (Mandinach and Schildkamp Citation2021).

Currently, however, there is little knowledge on how teachers and students use assessment techniques to improve learning and teaching, despite (national) policy asserting the importance of formative assessment (Dutch Ministry of Education, Culture and Science Citation2020; Flemish Inspectorate of Education Citation2021). For example, government-funded pamphlets are shared on what kinds of assessment techniques exist, how they can be used and how students can be involved (Curriculum Development Foundation Citation2018). We do know teachers’ views on using assessment techniques and involving students in assessment: they acknowledge the importance of combining assessment techniques, of using them frequently, and of pairing them with higher student involvement, such as peer assessment (Kippers et al. Citation2018). However, how these views influence their actual educational practice, and how their practice is influenced by policy efforts such as teacher professional development, is unknown. It is important to know how teachers’ actual practice is influenced by these policy-driven initiatives, as this can help increase the focus and efficiency of such efforts.

Assessment techniques and student involvement

The idea of using multiple assessment techniques and combining data from various assessment sources is not new and can be compared with data triangulation in research, which refers to ‘(…) combining different sorts of data against the background of the theoretical perspectives that are applied to the data. […] At the same time, triangulation (of different methods or data sorts) should allow a surplus of knowledge’ (Flick Citation2018, 23). In a classroom setting, the use of multiple assessment techniques could, for example, mean using diagnostic exercises or questions on students’ (mis)conceptions, and complementing this data by having students orally explain how they solved a certain problem. Of course, results from different assessment techniques can contradict each other. When the teacher investigates the cause(s) of these contradictions, this may further solidify the validity of the picture of the student’s understanding (Mortelmans Citation2013). By combining such quantitative and qualitative assessment techniques, it may also be possible, for example, to distinguish between a cognitive problem (e.g. ‘Did the student not have enough prior knowledge to do this exercise?’) and a metacognitive problem (e.g. ‘Did the student not read the exercise carefully?’). The recommendation to use a variety of assessment techniques seems to be especially emphasised in formative assessment (as opposed to summative assessment): ‘In the process of formative assessment, teachers elicit evidence about student learning using a variety of methods and strategies – for example, observation, questioning, dialogue, demonstration, and written response’ (Heritage et al. Citation2009). The use of multiple modalities in assessment techniques, such as oral in combination with written assessment, is considered to be part of advanced formative assessment practices (Gotwals and Cisterna Citation2022). Moreover, in these more advanced practices, students are highly involved in assessment activities, for example, by regularly assessing each other’s work.

Assessment techniques can take many forms, and can be categorised as written assessment techniques, oral assessment techniques, and performance assessment techniques (Christoforidou et al. Citation2014). Written assessments are all assessments in which students are required to provide their answer in a written form (either digital or paper-and-pencil). All assessments in which students are asked to give their answer orally (e.g. in group discussions, or when answering teacher questions) are defined as oral assessments. Performance assessments are all assessments in which students are asked to physically demonstrate their knowledge and skills. For example, this can take the form of students demonstrating a worked-out example of a specific exercise on the blackboard in front of the class, or on paper. Along with using a variety of suitable assessment techniques, another important aspect of formative assessment is the frequency of assessment. Obtaining a more valid representation of students’ learning progress requires multiple assessments, even during one lesson (Wiliam and Leahy Citation2016). In the case of assessment for summative purposes, teachers are often inclined to use more traditional, written assessment techniques, which are primarily paper-and-pencil tests (Cauley and McMillan Citation2010). Most often, these assessments are administered with a low frequency, only once per chapter or per quarter (Harlen and James Citation1997).

Assessments can be performed not only by teachers, but also by students themselves or their peers (i.e. ‘agents’ other than the teacher). Involving students in assessment practices is a way for students to gain insight into the desirable next steps in their learning process, and is often called peer and self-assessment (Nicol and MacFarlane-Dick Citation2006). Including students and their peers in assessment can make students’ reasoning and sense-making more visible to students and teachers, which, in turn, can allow for more adequate follow-up actions (Suurtam Citation2012). In addition, involving students in the assessment process has been shown to be related to increased student motivation and better student achievement (Cauley and McMillan Citation2010). Teachers can foster students’ degree of involvement in the assessment process by including activities in the lessons in which students need to regulate their own learning (Carless and Boud Citation2018; Nicol and MacFarlane-Dick Citation2006). A strategy often mentioned in reference to formative assessment is self-assessment (or peer assessment), which can be defined as:

A process of formative assessment during which students reflect on and evaluate the quality of their work [or that of their peers] and their learning, judge the degree to which they reflect explicitly stated goals or criteria, identify strengths and weaknesses in their work, and revise accordingly. (Andrade and Du Citation2007, 160)

Typically, strong student involvement is seen exclusively in assessment for formative purposes and not in assessment for summative purposes, as the former often concerns more low-stakes decisions, which are more focused on the student’s learning process (National Foundation for Educational Research Citation2007). This is particularly true for classes in which teachers are further advanced as users of formative assessment, as student involvement in assessment is considered a more difficult teacher skill (Christoforidou and Kyriakides Citation2021; Christoforidou et al. Citation2014).

Educational contexts of formative assessment in the Netherlands and Flanders

Initiatives to instigate the development of formative assessment practices in the classroom have become common in some Western countries since Inside the Black Box by Black and Wiliam (Citation1998) was published. In this study, we compared formative assessment practices in two educational contexts: the Netherlands and Flanders (i.e. the Dutch-speaking region of Belgium). Since multiple studies (Heitink et al. Citation2016; Tang, Cheng, and So Citation2006; Wallace and Priestley Citation2011) have already shown that teachers are more willing to implement formative assessment when they feel supported by their school and government, we have identified similarities and differences between the two contexts in policy and school support for formative assessment.

At the national policy level, both the Dutch and the Flemish government have emphasised the value of (elements of) formative assessment. The Dutch Inspectorate of Education wants to put ‘ … more emphasis on formative evaluation with the aim of giving feedback and feedforward to students, and as a learning moment for teachers’ (translated; Dutch Inspectorate of Education Citation2018). The Flemish Inspectorate of Education (Citation2021) addressed some concepts associated with formative assessment, such as ‘feedback’ and ‘differentiation’, when evaluating teaching quality. It is also clear that both governments support the development of formative assessment practice in the competency goals for starting teachers, which were recently (re-)formulated by the governments of the two countries. In the competency goals in the Netherlands, it is stated that: ‘The teacher can collect useful and reliable student progress information and analyse it, and on that basis can adjust his teaching where necessary’ (translated; Dutch Ministry of Education, Culture and Science Citation2020). In the Flemish context, beginning teachers are expected to be able to:

Prepare and carry out observation and evaluation with a view to adjustment and remediation as part of the learning process of (a) learner(s) and can use that observation and evaluation data to question his own didactic actions and adjust where necessary. (translated; Flemish Ministry of Education and Training Citation2018)

Such policy directions by governments may help to create a more positive attitude regarding formative assessment among teachers, and may also stimulate schools to provide support to implement formative assessment, such as teacher professional development (Birenbaum, Kimron, and Shilton Citation2011; Yan et al. Citation2021). Furthermore, support at the school level plays an important role in teachers’ willingness to implement formative assessment practices (Heitink et al. Citation2016; Tang, Cheng, and So Citation2006; Wallace and Priestley Citation2011). At the school level, there is a difference between the two contexts: Dutch school policies are seemingly more focused on formative assessment than Flemish school policies (Nusche et al. Citation2014, Citation2015). School boards across the Netherlands point to the importance of the development of their teachers regarding formative assessment, often under the assumption that this will reduce the pressure from high-stakes, summative assessments (Vermeulen et al. Citation2021). In Flanders, educational policies in the majority of the schools do not meet the expectations of the Educational Inspectorate regarding assessment and feedback, and are not specifically directed at incorporating elements of formative assessment in their classrooms (Flemish Inspectorate of Education Citation2021).

Both the Netherlands and Flanders have decentralised school structures, meaning that schools and teachers in both countries are given considerable autonomy with respect to, for example, how to shape their professional development (Hempen and Vanleke Citation2013; Nieveen and Kuiper Citation2012; OECD Citation2018). Although schools are given the autonomy to develop more towards formative assessment, they are often still heavily focused on summative assessment (Kippers et al. Citation2018; Van Gasse et al. Citation2017; Yin and Buck Citation2019). The accountability aspect of summative assessment can hinder teachers’ development of formative assessment (Sach Citation2013; Yan and Brown Citation2021). In the Netherlands, central exams are used as a summative assessment for accountability purposes. As of 2023, central exams will also be implemented in Flanders. These exams provide teachers with a framework for what should be taught. Exam scores are also used by the Educational Inspectorates for determining school quality (Dutch Inspectorate of Education Citation2018; Flemish Inspectorate of Education Citation2021).

In general, in both contexts formative assessment is valued at the policy level, despite the current emphasis on summative assessment in schools, but at the school level it appears that support for formative assessment is stronger in the Netherlands than in Flanders. This might influence how teachers perceive the importance of the integration of formative assessment, and indeed, the frequent use of multiple assessment techniques and student involvement in their classrooms.

The current study

Even though the Netherlands and Flanders both value formative assessment at a policy level, it is unclear whether this has actually led to development of assessment practices that are typical of formative assessment. By including a comparison between these regions in our study, we hope to explore the contribution of a policy focus on formative assessment to teachers’ actual implementation of formative assessment practices. As mentioned, we do know that teachers can value aspects of formative assessment practices, such as the use of a broader range of assessment techniques and the involvement of students, but it is unclear how recent policy efforts might have influenced teachers to actually change their educational practices in line with these views. In addition, how teachers combine these two aspects of formative assessment practices can provide further information on how teachers answer the ‘How am I going?’ question of formative assessment (Hattie and Timperley Citation2007). Knowledge of the current situation may, for example, eventually inform initiatives to stimulate and facilitate a balance between assessment for formative and summative purposes (Christoforidou et al. Citation2014). The aim of this study was to identify teacher profiles regarding assessment techniques and student involvement in two countries with policy-level similarities and school-level differences, the Netherlands and Flanders. Our research questions are:

  1. What profiles can be identified for Dutch and Flemish teachers’ use of assessment techniques?

  2. What profiles can be identified for Dutch and Flemish teachers’ use of student involvement in assessment?

  3. What profiles can be identified for Dutch and Flemish teachers regarding the combination of assessment techniques and student involvement in assessment?

Based on policy efforts in both countries in recent years, we expected teachers in both countries to have started developing formative assessment practices, especially since elements of formative assessment have become part of the required teacher competences in both countries. However, given the stronger emphasis on formative assessment in Dutch schools, including a strong focus on the involvement of students, and the wider availability of formative assessment-related professional development in the Netherlands, we expected Dutch teachers to show stronger and more advanced implementation of formative assessment, including profiles with a broader range and more frequent use of multiple assessment techniques and more frequent involvement of students.

Method

Respondents

The target population consisted of Dutch and Flemish mathematics teachers in secondary education, in both lower and higher grades. Teachers were approached through online and offline mathematics teacher networks, such as a newsletter addressed to a large number of mathematics teachers in the Netherlands and Flanders, and they participated voluntarily in the online survey (convenience sampling).

A total of 306 mathematics teachers responded to the survey. For the analyses, we excluded data from incomplete surveys and teachers who did not indicate their educational context (i.e. Flemish or Dutch). Given the overrepresentation of Dutch teachers in the total sample (n = 203), we took a random teacher sample from the Dutch data, to balance the ratio of Flemish (n = 83) and Dutch teachers (n = 120) in our sample to match the ratio in the target population (1:1.4). As a result, we ran the analyses for this study on a sample of 203 mathematics teachers in total. The demographic characteristics of these teachers can be found in Table 1.

Table 1. Demographic characteristics of participating teachers in the Netherlands and Flanders.

Some of the demographic differences between these two groups were statistically significant. This was not the case for gender, χ2(4, 203) = 0.237, p = .626, but it was true for years of experience, χ2(4, 202) = 17.966, p = .001, and for teaching in lower and/or higher grades, χ2(6, 203) = 18.014, p = .006.
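The group comparisons above are standard chi-square tests of independence on cross-tabulated frequencies. As an illustration only, a minimal sketch of such a test in Python is given below (the original analyses were run in SPSS 28); the DataFrame and the column names ‘context’ and ‘experience’ are hypothetical.

import pandas as pd
from scipy.stats import chi2_contingency

def compare_by_context(df: pd.DataFrame, characteristic: str) -> None:
    # Contingency table of educational context (Dutch/Flemish) by a background characteristic.
    table = pd.crosstab(df["context"], df[characteristic])
    chi2, p, dof, _ = chi2_contingency(table)
    n = int(table.values.sum())
    print(f"{characteristic}: chi2({dof}, N={n}) = {chi2:.3f}, p = {p:.3f}")

# Hypothetical usage for the comparison of years of experience:
# compare_by_context(df, "experience")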

Instrument

To collect information on teachers’ assessment purposes, their use of the various assessment techniques, and the degree of student involvement in assessment in their classrooms, a digital questionnaire was completed by the teachers. We used parts of the questionnaire developed and validated by Christoforidou et al. (Citation2014), which was developed to assess teachers’ assessment skills. For the purpose of the current study, three sections from this questionnaire, all including multiple items, were used (examples of items have been translated from Dutch):

  1. The balance between a more formative or a more summative-oriented purpose of assessment in the classroom. Teachers were asked to rank, from more to less important, the following three statements:

    1. Assessment for formative purposes: ‘I assess to detect my students’ learning needs and to tailor my teaching accordingly’.

    2. Assessment for formative purposes: ‘I use assessment results to evaluate the results of my teaching’.

    3. Assessment for summative purposes: ‘I use assessment results to rank my students by giving them a grade’.

  2. Teachers’ use of different assessment techniques in the classroom. These items were rated by teachers on a 5-point Likert scale: 1 – Never, 2 – Sometimes, 3 – Regularly, 4 – Often, 5 – Always. Examples were given for each of the items.

    1. Written assessment: ‘To assess students’ learning in mathematics, I use written tests’.

    2. Written exercises: ‘To assess students’ learning in mathematics, I use written assessment activities’.

    3. Oral assessment: ‘To assess students’ learning in mathematics, I use oral assessment’.

    4. Performance assessment: ‘To assess students’ learning in mathematics, I use performance assessment’.

  3. Teachers’ distribution of the involvement of different agents, including students, in assessment. For each item, teachers were asked to indicate the frequency of involvement of themselves, students and students’ peers in the assessment of the students’ learning process: 1 – Never, 2 – Sometimes, 3 – Regularly, 4 – Often, 5 – Always.

    1. Teacher assessment: ‘In my classroom, the person responsible for assessing students’ learning is me (the teacher)’.

    2. Self-assessment: ‘In my classroom, the person responsible for assessing students’ learning is the student themself’.

    3. Peer assessment: ‘In my classroom, the person responsible for assessing students’ learning is a peer (i.e. classmate)’.

Analyses

To obtain insight into (1) the assessment techniques used by teachers and (2) the distribution of the involvement of different agents (including students) in mathematics assessments, we used hierarchical cluster analysis (Ward method). In the first step, exploratory analyses were run that resulted in three to five potential clusters for both groups of items (i.e. assessment techniques and student involvement). We conducted an ANOVA to assess the internal validity of the potential cluster solutions. Higher η2 values implied that the variance was better explained by a specific cluster solution. Throughout the study, we used a cut-off p-value of .01, as this reduces the probability of rejecting the null hypothesis when it is in fact correct (a Type I error).
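To illustrate this clustering step, the sketch below shows Ward hierarchical clustering of Likert-scale items followed by an η2 calculation for candidate three- to five-cluster solutions. It is written in Python rather than SPSS, and the item matrix is an assumption, not the study data.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def eta_squared(values: np.ndarray, labels: np.ndarray) -> float:
    # Proportion of an item's variance explained by cluster membership.
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        (labels == c).sum() * (values[labels == c].mean() - grand_mean) ** 2
        for c in np.unique(labels)
    )
    return float(ss_between / ss_total)

def evaluate_cluster_solutions(items: np.ndarray, k_range=range(3, 6)) -> dict:
    # `items` is a hypothetical (n_teachers x n_items) matrix of Likert-scale responses.
    tree = linkage(items, method="ward")  # Ward hierarchical clustering
    results = {}
    for k in k_range:
        labels = fcluster(tree, t=k, criterion="maxclust")
        # Mean eta-squared across items for the k-cluster solution.
        results[k] = float(np.mean([eta_squared(items[:, j], labels) for j in range(items.shape[1])]))
    return results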

Besides the statistical grounding of the cluster analysis, we evaluated to what extent profile membership reflected assessment practices that were more indicative of either formative (i.e. greater variety of assessment techniques and more student involvement) or summative (i.e. less variety of assessment techniques and more teacher assessment ownership) assessment practices. The same was done for potential profiles regarding student involvement. In that case, stronger involvement of self- and peer assessment implied profiles more indicative of formative assessment practices. This resulted in a five-profile solution for teachers’ assessment practices and a four-profile solution for student involvement in assessment. We interpreted and ordered the profiles from the least to the most formative assessment behaviour. The validity of this proposed order was checked by means of cross-tabulation analyses against teachers’ primary assessment purpose. These analyses showed that for both types of profile solutions, teachers in profiles that we interpreted as reflecting a more formative approach also more often indicated that their primary assessment purpose was of a formative kind (p < .001).

After establishing the profiles for assessment techniques and the profiles for student involvement, we first explored the mutual relation between the different profiles related to formative assessment practices (i.e. profiles for assessment techniques and student involvement). We expected that we would find a stronger combination of assessment techniques and student involvement, including both self- and peer assessment and other types of assessment techniques besides written tests. In addition, we expected a higher frequency of assessment in classrooms of teachers with assessment practices that were more aligned with formative purposes. To check whether the relations found were indeed due to teachers’ motivations to assess formatively, and not explainable by their other background characteristics, we ran cross-tabulation analyses for all background characteristics in combination with the profiles for assessment techniques and student involvement. Unfortunately, we did find a relationship between the teaching assignment of teachers and their profiles for assessment techniques, χ2(24, 203) = 42.827, p = .01.

Second, we explored the relation between the assessment techniques profiles, the student involvement profiles and the educational context. Based on some differences in these contexts, we expected this to possibly reveal further insights into the impact of educational factors on teachers’ assessment practices. These analyses were run via cross-tabulation analysis. The software used for all analyses was SPSS 28.
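The cross-tabulation analyses, with a chi-square test and Cramer’s V as an effect-size measure, could be reproduced along the lines of the sketch below (again in Python rather than SPSS; the column names ‘technique_profile’, ‘involvement_profile’ and ‘context’ are hypothetical).

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def crosstab_with_cramers_v(df: pd.DataFrame, row: str, col: str):
    # Cross-tabulate two categorical variables and report chi-square and Cramer's V.
    table = pd.crosstab(df[row], df[col])
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.values.sum()
    k = min(table.shape) - 1  # smaller table dimension minus one
    cramers_v = float(np.sqrt(chi2 / (n * k)))
    return table, chi2, p, cramers_v

# Hypothetical usage, e.g. assessment techniques profiles by educational context:
# table, chi2, p, v = crosstab_with_cramers_v(df, "technique_profile", "context")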

Results

Assessment techniques

The five-profile solution for assessment techniques, as presented in Table 2, was found to best fit the data. These profiles range from teachers using mostly paper-and-pencil tests (i.e. summative assessment) to teachers using more varied assessment techniques (i.e. formative assessment).

Table 2. Teacher profiles with regard to assessment techniques.

In general, written assessment was the most prevalent assessment technique. In Profiles 1, 2 and 3, this was the most often used assessment technique. Respondents in Profile 1 (79 teachers, 38.9% of the total) represented more traditional assessment practices, relying almost entirely on written assessment. In the following profiles, more varied use of assessment techniques was shown. For example, respondents assigned to Profile 2 (21 teachers, 10.3%) used written exercises, such as Kahoot or online quizzes, whereas respondents in Profile 3 (41 teachers, 20.2%) monitored student learning through oral assessment techniques in addition to written assessments.

The frequency of use of each individual assessment technique decreased as the variety of different assessment techniques used increased. This became clear in Profile 4 (32 teachers, 15.8%), where teachers indicated using all techniques, but not very frequently, and also in Profile 5 (30 teachers, 14.8%), in which teachers indicated the most varied and regular use of all different assessment techniques. This progression was validated by cross-tabulating the profiles against the relative importance teachers assigned to different purposes of assessment, which indicated a moderate relationship with more formative purposes (Cramer’s V = 0.24).

Student involvement in assessment

In the survey, we asked teachers about the frequency of involvement in assessment across the different agents in the classroom (i.e. the teacher, the students, and the students’ peers). A four-profile solution emerged as the best fit for the data, as shown in Table 3. We ordered these profiles from one with teachers as the main owners of assessment (i.e. summative assessment), to one in which involvement was more distributed among several agents (i.e. formative assessment).

Table 3. Teacher profiles with regard to student involvement.

As can be seen in Table 3, assessment ownership lay mostly with the teacher in our sample. In both Profile 1 (90 teachers, 44.3%) and Profile 2 (62 teachers, 30.5%), teachers indicated that assessment ownership lay with the teachers themselves. While in Profile 1, assessment was carried out exclusively by teachers, in Profile 2, students sometimes were responsible for assessing their own learning as well. In Profile 3 (27 teachers, 13.3%), the responsibility was more evenly distributed across teachers and students. Peer assessment was only common in the classrooms of teachers assigned to Profile 4 (24 teachers, 11.8%). This profile showed the highest distribution of student involvement, as all agents in the classroom (the teacher, the student and peers) were frequently responsible for assessment in this profile. Again, the progression from more summative to more formative assessment profiles could be validated by cross-tabulating these profiles with the relative importance teachers gave to formative purposes for assessment (Cramer’s V = 0.25).

Combining assessment techniques and student involvement

Since formative assessment requires the frequent and varied use of assessment techniques on the one hand, and distributed student involvement on the other hand, we cross-tabulated the profiles for both kinds of assessment practices, as shown in Table 4.

Table 4. Cross-tabulation of assessment techniques profiles and student involvement profiles.

At the upper left, we see that 49 teachers (24% of the total) were primarily using written assessments, and assessment responsibility there lay at the teacher level. This aligns with a traditional approach to assessment. At the bottom right, we see the combination of profiles that could be considered most formative, since teachers in this profile combination showed regular use of a variety of assessment techniques, with more student involvement in responsibility for assessment. Of the total sample, 8 teachers (3.94%) were assigned to this combination of profiles. The rankings for the two types of profile had some correspondence with each other, as can be seen from the diagonal cells of the cross-tabulation.

Comparing assessment techniques profiles and student involvement in assessment profiles in the Netherlands and Flanders

In the last step, we studied the relationship between the assessment techniques profiles, student involvement profiles and the educational context. The educational contexts were the Netherlands, where teachers were slightly more focused on the formative purposes of assessment (55.8%), and Flanders, where teachers were slightly more focused on the summative purposes of assessment (55.4%). The distribution of the Dutch and Flemish teachers across the assessment techniques profiles can be found in Table 5.

Table 5. Cross-tabulation of assessment techniques profiles with educational context.

The teachers in the Netherlands were more dispersed across the profiles. They seemed to vary their assessment techniques to a larger extent. Teachers in Flanders (63.9%) still predominantly indicated using traditional paper-and-pencil tests. These findings are in line with the results of the cross-tabulation of the student involvement profiles and the educational context; see Table 6. Teachers in the Netherlands were typically not the sole owner of the assessment process in their classroom (25.8%), possibly indicating more student-centred assessment in those classrooms. In Flanders, a more teacher-centred approach to assessment was found, in which teachers most often took the role of assessor in the classroom (71.1%).

Table 6. Cross-tabulation of student involvement profiles with educational context.

Conclusion and discussion

There is growing emphasis on formative assessment, in both educational research and educational practice. This has influenced countries to adopt policies that more strongly emphasise the importance of formative assessment practices, by including them in teacher competences, but also through increased teacher professional development efforts regarding formative assessment (Dutch Ministry of Education, Culture and Science Citation2020; Flemish Inspectorate of Education Citation2021).

We know that two aspects are important in formative assessment: a) the frequent use of multiple assessment techniques in line with the learning goals to investigate learning progress, and b) involving students in the assessment process. Although the success and degree of implementation of these two aspects may depend on the educational context in which teachers are situated, and educational policies may indeed have a facilitating or hindering effect on the development of formative assessment (Yan et al. Citation2021), cross-contextual research into the implementation of assessment for formative purposes has been scarce. In addition, little is known about teacher development in the use of assessment techniques, students’ involvement in assessment and how these two phenomena are related to the educational context. To obtain more insight into these issues, in this study, teachers’ assessment practices in their classrooms were investigated in two different educational contexts: the Netherlands and Flanders.

In interpreting the results of our study, one should take note of the limitations of the sampling method. The teachers who participated in this study were contacted in, among other places, social media groups focused especially on formative assessment and/or mathematics. This group of teachers might not be representative of the entire population of Dutch and Flemish mathematics teachers and could have a different attitude towards formative assessment. In addition, this study made use of a questionnaire that was used to measure teachers’ own perceptions of their assessment behaviour, which may have given a more positive view than when, for example, the data are collected by means of lesson observations and/or student perception questionnaires.

Profiles of assessment techniques and student involvement

In our sample, most teachers in both the Netherlands and Flanders belonged to assessment techniques and assessment-involvement profiles in which traditional paper-and-pencil tests were administered by teachers, to assess where learners are in their learning process. Nevertheless, there also seems to be a trend towards the use of oral assessment, performance assessment and alternative written exercises (i.e. all written assessments that are not tests), to verify where learners are. Notably, for 30.8% of the teachers involved, another assessment technique was also dominant in the classroom: oral assessment (M = 3.44) for Profile 3 and written exercises (M = 3.57) for Profile 2. A clear ordering from less to more formative assessment behaviours was found when organising the profiles. The same range of clusters from more traditional (i.e. emphasis on summative assessment, teacher-centred) to more progressive (i.e. emphasis on formative assessment, student-centred) was found for student involvement. Most teachers still felt in charge of the administration of assessment (44.3%). However, in other profiles, teachers indicated increasingly less ownership of assessment for themselves, and increasingly more for their students. A similar pattern emerged from the cross-tabulation of the profiles for assessment techniques and student involvement. In classrooms in which written assessment was mainly used, teachers were the main assessment owners, whereas in classrooms in which more varied assessment practices were applied, student involvement was more distributed. Our findings indicate that teacher assessment practices and students’ degree of involvement are strongly related, from less (written assessment, teacher assessment ownership) to more (varied techniques, distributed student involvement) formative approaches.

Profiles of assessment techniques and student involvement across educational contexts

The results showed clear differences between the Netherlands and Flanders when it comes to formative assessment in the classroom. Among Flemish teachers, the assessment-technique profile in which written paper-and-pencil testing is most frequent was prevalent. Dutch teachers were more dispersed across profiles, and were assigned to assessment-technique profiles that showed use of a greater variety of assessment techniques more often than Flemish teachers were. And although teacher assessment was most frequently used in teachers’ classrooms in both countries, Dutch teachers were often assigned to profiles in which both teacher and students play a significant role. In contrast, teachers from Flanders in this sample were assigned to less formatively oriented assessment profiles. Since teacher education may only recently have started to prepare teachers for formative assessment practices (cf. Joosten-ten Brinke et al. Citation2022), a possible explanation for this may lie in the difference in teaching experience: the great majority (77.1%) of the Flemish respondents had more than 10 years of teaching experience, as compared to just under half (49.1%) of Dutch respondents.

The results of this study lead to two main considerations. The first is that we see reasonable coherence in the more summative profiles (i.e. traditional assessment techniques and little student involvement) and the more formative profiles (i.e. a broad variety of assessment techniques and greater student involvement). Teachers using a more varied palette of assessment techniques also reported including students more strongly in the assessment process. This was also found in other studies (Christoforidou and Kyriakides Citation2021; Gotwals and Cisterna Citation2022), which indicated that the use of multiple modalities of assessment techniques and an increase of student involvement in assessment are seen in more advanced formative assessment practices.

However, many teachers were assigned to profiles showing a traditional, teacher-centred approach to assessment. This is in line with findings that suggest that teachers are hesitant to change their classroom practice and, for example, involve their students in the assessment process (Kippers et al. Citation2018; Vattøy, Gamlem, and Rogne Citation2021). Both the Dutch Inspectorate of Education (Citation2018) and the Flemish Inspectorate of Education (Citation2021) found that most teachers still do not use assessments systematically to address students’ needs. At the same time, our study shows that a small number of teachers have indeed started to develop a more formative, student-centred approach to assessment in which various assessment techniques complement each other and in which students are involved in assessing their own learning process. Regarding the distribution of the teachers across the profiles, it could be that teachers are gradually moving towards more formative assessment behaviours, and that this change is possibly linked with their years of experience. Interestingly, when teachers seem to move to profiles more indicative of formative assessment, the frequency of assessment initially seems to decrease. This can possibly be explained by the complexity of formative assessment. The development that can be observed in these profiles, moving from assessment practices that are more characteristic of summative purposes to assessment practices that are more typical of formative purposes, can be important input for teacher professional development for formative assessment. Teachers with a more traditional profile showing more summative assessment behaviour might benefit from a different professional development approach than teachers who are already using a variety of assessment techniques in their classrooms.

The second consideration is that the characteristics of educational systems seem to matter for implementing formative assessment. The emphasis on formative assessment seems to be greater in Dutch government policy than in Flemish government policy, which could, in turn, have led to greater focus on the development of formative assessment in Dutch schools. Further explanations for the finding that the teachers in the Netherlands reported exhibiting more formative assessment behaviours may be found at the school level. For example, almost all secondary schools in the Netherlands reported having a vision of assessment in which the formative character of learning and assessment is emphasised (Nusche et al. Citation2015), which could have resulted in more teacher professional development for formative assessment. The stronger focus on formative assessment by schools in the Netherlands, which was seemingly lacking in Flanders, may have stimulated teachers’ formative actions in the classroom (e.g. Heitink et al. Citation2016; Schildkamp et al. Citation2020).

Suggestions for research and practice

Before this study, only limited information was available on teachers’ current formative assessment practices. This study complements previous findings that formative assessment is, according to teachers’ own reports, still rarely implemented in classrooms, at least in Western countries such as the Netherlands and Flanders (cf. Kippers et al. Citation2018). Even though the focus on formative assessment in policy efforts has not yet led to the intended integration of formative assessment by teachers, the current study also shows that some policy directions may have promise in their ability to fast-track teachers’ formative assessment practices (Yan et al. Citation2021), such as financing teacher professional development interventions to promote formative assessment in classrooms.

In addition, the current study shows that it is possible to collect information on teachers’ (starting) levels regarding formative assessment, albeit very general information, with the brief questionnaire used in this study. This might make it feasible to use the questionnaire on a larger scale, for example, by administering it to all teachers at one school. Moreover, this study resulted in only a ‘snapshot’ of teachers’ formative assessment practices at that time. By administering such a questionnaire frequently (at least once or twice per year), it will also be possible to reveal whether the development from more summative to more formative assessment profiles as proposed here is, in fact, an accurate representation of how teachers develop in the use of these formative assessment practices. Influential factors in the on-going development of formative assessment, such as teacher professional development, can be observed in more detail. The information gained can be used for designing teacher professional development trajectories, as it shows how teachers can have different starting points and therefore needs regarding formative assessment. Schools and teachers can benefit from assessing how they shape their formative (and summative) assessment practices, as this can impact whether they will be able to improve (Christoforidou et al. Citation2014).

Acknowledgments

We would like to thank the teachers who participated in this study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the Erasmus+ [2017-3118/001-001].

Notes on contributors

Jitske de Vries

Jitske de Vries MSc. is a post-doctoral researcher at the Section of Teacher Development of the University of Twente in The Netherlands. She is involved in projects on teacher professional development for Assessment for Learning and has conducted multiple impact studies of teacher professional development programmes.

Roos Van Gasse

Roos Van Gasse is a post-doctoral researcher at the Department of Education and Training Sciences (Faculty of Social Sciences) of the University of Antwerp. Her research activities focus on the interface between quality assurance, assessment and professional development in schools. The use of (output) data has always been the common thread.

Marieke van Geel

Marieke van Geel is an assistant professor at the Department of Teacher Development at the University of Twente in the Netherlands. Her research focuses on how teachers and school leaders can use a wide variety of data for decision making and differentiated instruction, and on professional development in these areas. She is especially interested in the knowledge and skills teachers need to make sense and use of all data available to them.

Adrie Visscher

Adrie Visscher is a full professor at the University of Twente in the Section of Teacher Development. In his research he investigates how the provision of various types of feedback to students, teachers and schools can support the optimisation of the quality of classroom teaching and student learning. His research also focuses on the characteristics of effective teacher professionalisation.

Peter Van Petegem

Peter Van Petegem is full professor of Educational Sciences at the University of Antwerp and associated with the Department of Educational and Educational Sciences of the Faculty of Social Sciences. His research interests are in evaluation research, mainly at the macro level (education policy, international comparative research) and meso level (educational innovation, school policy, quality assurance).

References

  • Andrade, H., and Y. Du. 2007. “Student Responses to Criteria‐Referenced Self‐Assessment.” Assessment & Evaluation in Higher Education 32 (2): 159–181. https://doi.org/10.1080/02602930600801928.
  • Birenbaum, M., H. Kimron, and H. Shilton. 2011. “Nested Contexts That Shape Assessment for Learning: School-Based Professional Learning Community and Classroom Culture.” Studies in Educational Evaluation 37 (1): 35–48. https://doi.org/10.1016/j.stueduc.2011.04.001.
  • Black, P. J., and D. Wiliam. 1998. Inside the Black Box: Raising Standards Through Classroom Assessment. London, UK: King’s College.
  • Brooks, C., R. Burton, F. van der Kleij, C. Ablaza, A. Carroll, J. Hattie, and S. Neill. 2021. “Teachers Activating Learners: The Effects of a Student-Centred Feedback Approach on Writing Achievement.” Teaching and Teacher Education 105:103387. https://doi.org/10.1016/j.tate.2021.103387.
  • Carless, D., and D. Boud. 2018. “The Development of Student Feedback Literacy: Enabling Uptake of Feedback.” Assessment & Evaluation in Higher Education 43 (8): 1315–1325. https://doi.org/10.1080/02602938.2018.1463354.
  • Cauley, K. M., and J. H. McMillan. 2010. “Formative Assessment Techniques to Support Student Motivation and Achievement.” The Clearing House: A Journal of Educational Strategies, Issues & Ideas 83 (1): 1–6. https://doi.org/10.1080/00098650903267784.
  • Christoforidou, M., and L. Kyriakides. 2021. “Developing Teacher Assessment Skills: The Impact of the Dynamic Approach to Teacher Professional Development.” Studies in Educational Evaluation 70:101051. https://doi.org/10.1016/j.stueduc.2021.101051.
  • Christoforidou, M., L. Kyriakides, P. Antoniou, and B. P. Creemers. 2014. “Searching for Stages of teacher’s Skills in Assessment.” Studies in Educational Evaluation 40:1–11. https://doi.org/10.1016/j.stueduc.2013.11.006.
  • Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Mahwah, New Jersey, USA: Erlbaum.
  • Curriculum Development Foundation. 2018. “Hoe weet ik waar mijn leerlingen staan [How do I know what the learning process of my students is]?” https://www.slo.nl/publish/pages/11974/fe_hoe_weet_ik_waar_zij_staan.pdf.
  • Dutch Inspectorate of Education. 2018. De Staat van het Onderwijs: Onderwijsverslag 2016/2017. [The Status of Education: Educational Report 2016/2017]. Inspectie van het Onderwijs. https://www.onderwijsinspectie.nl/documenten/rapporten/2018/04/11/rapport-de-staat-van-het-onderwijs.
  • Dutch Ministry of Education, Culture and Science. 2020. “Besluit bekwaamheidseisen onderwijspersoneel [Decree on Competence Requirements for Teaching Staff].” https://wetten.overheid.nl/BWBR0018692/2020-08-01.
  • Flemish Inspectorate of Education. 2021. Jaarlijks rapport van de onderwijsinspectie: Onderwijsspiegel 2021. [Annual Report of the Education Inspectorate: Education Mirror 2021]. https://www.onderwijsinspectie.be/sites/default/files/atoms/files/OS2021-web.pdf.
  • Flemish Ministry of Education and Training. 2018. “Besluit van de Vlaamse Regering betreffende de basiscompetenties van de leraren [Decision of the Flemish Government Regarding the Basic Competences of Teachers].” https://codex.vlaanderen.be/Zoeken/Document.aspx?DID=1029484m=inhoud.
  • Flick, U. 2018. “Triangulation in Data Collection.” In The SAGE Handbook of Qualitative Data Collection, edited by U. Flick, 527–544. London, UK: Sage.
  • Gotwals, A. W., and D. Cisterna. 2022. “Formative Assessment Practice Progressions for Teacher Preparation: A Framework and Illustrative Case.” Teaching and Teacher Education 110:103601. https://doi.org/10.1016/j.tate.2021.103601.
  • Harlen, W., and M. James. 1997. “Assessment and Learning: Differences and Relationships Between Formative and Summative Assessment.” Assessment in Education 4 (3): 365–380. https://doi.org/10.1080/0969594970040304.
  • Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81–112. https://doi.org/10.3102/003465430298487.
  • Heitink, M. C., F. M. Van der Kleij, B. P. Veldkamp, K. Schildkamp, and W. B. Kippers. 2016. “A Systematic Review of Prerequisites for Implementing Assessment for Learning in Classroom Practice.” Educational Research Review 17:50–62. https://doi.org/10.1016/j.edurev.2015.12.002.
  • Hempen, B., and M. Vanleke. 2013. “Education in Flanders.” In Exploring Childhood in a Comparative Context: An Introductory Guide for Students, edited by M. A. Brown and J. White, 16–27. New York, NY, USA: Routledge.
  • Heritage, M., J. Kim, T. Vendlinski, and J. Herman. 2009. “From Evidence to Action: A Seamless Process in Formative Assessment?” Educational Measurement Issues & Practice 28 (3): 24–31. https://doi.org/10.1111/j.1745-3992.2009.00151.x.
  • Joosten-ten Brinke, D., R. Bartman, J. Gulikers, G. Van Silfhout, L. Baartman, M. Leenknecht, J. Arts, C. De Koster, and N. Kok. 2022. “Formatief evalueren in de lerarenopleidingen. Een analyse van de kennisbases [Formative Evaluation in Teacher Education. An Analysis of the Knowledge Bases].” Tijdschrift voor Lerarenopleiders 43 (2). www.velon.nl.
  • Kippers, W. B., C. H. D. Wolterinck, K. Schildkamp, C. L. Poortman, and A. J. Visscher. 2018. “Teachers’ Views on the Use of Assessment for Learning and Data-Based Decision Making in Classroom Practice.” Teaching and Teacher Education 75:199–213. https://doi.org/10.1016/j.tate.2018.06.015.
  • Mandinach, E. B., and K. Schildkamp. 2021. “Misconceptions About Data-Based Decision Making in Education: An Exploration of the Literature.” Studies in Educational Evaluation 69:100842. https://doi.org/10.1016/j.stueduc.2020.100842.
  • Mortelmans, D. 2013. Handboek kwalitatieve onderzoeksmethoden [Qualitative research methods handbook]. The Hague, The Netherlands: Acco.
  • National Foundation for Educational Research. 2007. “National Foundation for Educational Research Position Paper on Assessment.” https://www.nfer.ac.uk/nferposition-paper-on-assessment-2007.
  • Nicol, D., and D. MacFarlane-Dick. 2006. “Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218. https://doi.org/10.1080/03075070600572090.
  • Nieveen, N., and W. Kuiper. 2012. “Balancing Curriculum Freedom and Regulation in the Netherlands.” European Educational Research Journal 11 (3): 357–368. https://doi.org/10.2304/eerj.2012.11.3.357.
  • Nusche, D., H. Braun, G. H. Halász, and P. Santiago. 2014. OECD Reviews of Evaluation and Assessment in Education: Netherlands 2014. Paris, France: OECD Publishing.
  • Nusche, D., G. Miron, P. Santiago, and R. Teese. 2015. OECD Reviews of School Resources: Flemish Community of Belgium 2015. Paris, France: OECD Publishing.
  • OECD. 2018. “How Decentralised Are Education Systems, and What Does it Mean for Schools?” Education Indicators in Focus, No. 64. OECD Publishing. https://doi.org/10.1787/e14575d5-en.
  • Sach, E. 2013. “An Exploration of Teachers’ Narratives: What Are the Facilitators and Constraints Which Promote or Inhibit ‘Good’ Formative Assessment Practices in Schools?” International Journal of Primary, Elementary and Early Years Education 43 (3): 322–335. https://doi.org/10.1080/03004279.2013.813956.
  • Schildkamp, K., F. M. van der Kleij, M. C. Heitink, W. B. Kippers, and B. P. Veldkamp. 2020. “Formative Assessment: A Systematic Review of Critical Teacher Prerequisites for Classroom Practice.” International Journal of Educational Research 103:101602. https://doi.org/10.1016/j.ijer.2020.101602.
  • Suurtam, C. 2012. “Assessment Can Support Reasoning & Sense Making.” The Mathematics Teacher 106 (1): 28–33. https://doi.org/10.5951/mathteacher.106.1.0028.
  • Tang, S. Y. F., M. M. H. Cheng, and W. W. M. So. 2006. “Supporting Student teachers’ Professional Learning with Standards-Referenced Assessment.” Asia-Pacific Journal of Teacher Education 34 (2): 223–244. https://doi.org/10.1080/13598660600720629.
  • Van Gasse, R., K. Vanlommel, J. Vanhoof, and P. Van Petegem. 2017. “The Impact of Collaboration on teachers’ Individual Data Use.” School Effectiveness and School Improvement 28 (3): 489–504. https://doi.org/10.1080/09243453.2017.1321555.
  • Vattøy, K. D., S. M. Gamlem, and W. M. Rogne. 2021. “Examining students’ Feedback Engagement and Assessment Experiences: A Mixed Study.” Studies in Higher Education 46 (11): 2325–2337. https://doi.org/10.1080/03075079.2020.1723523.
  • Vermeulen, W., T. Schwartz, S. Hoekstra, and M. Kleinjan. 2021. “Druk in het voortgezet onderwijs: onderzoek naar schooldruk in het voortgezet onderwijs [Pressure in secondary education: research into school pressure in secondary education].” https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/rapporten/2021/06/30/druk-in-het-voortgezet-onderwijs/druk-in-het-voortgezet-onderwijs.pdf.
  • Wallace, C. S., and M. Priestley. 2011. “Teacher Beliefs and the Mediation of Curriculum Innovation in Scotland: A Socio-Cultural Perspective on Professional Development and Change.” Journal of Curriculum Studies 43 (3): 357–381. https://doi.org/10.1080/00220272.2011.563447.
  • Wiliam, D. 2011. “What Is Assessment for Learning?” Studies in Educational Evaluation 37 (1): 3–14. https://doi.org/10.1016/j.stueduc.2011.03.001.
  • Wiliam, D., and S. Leahy. 2016. Embedding Formative Assessment. Cheltenham VIC, Australia: Hawker Brownlow Education.
  • Yan, Z., and G. T. L. Brown. 2021. “Assessment for Learning in the Hong Kong Assessment Reform: A Case of Policy Borrowing.” Studies in Educational Evaluation 68:100985. https://doi.org/10.1016/j.stueduc.2021.100985.
  • Yan, Z., Z. Li, E. Panadero, M. Yang, L. Yang, and H. Lao. 2021. “A Systematic Review on Factors Influencing teachers’ Intentions and Implementations Regarding Formative Assessment.” Assessment in Education Principles, Policy & Practice 28 (3): 228–260. https://doi.org/10.1080/0969594X.2021.1884042.
  • Yin, X., and G. Buck. 2019. “Using a Collaborative Action Research Approach to Negotiate an Understanding of Formative Assessment in an Era of Accountability Testing.” Teaching and Teacher Education 80:27–38. https://doi.org/10.1016/j.tate.2018.12.018.