
Characteristics of improved formative assessment practice


ABSTRACT

An earlier study showed that the changes in teachers’ classroom practice, after participation in a professional development program in formative assessment, significantly improved student achievement in mathematics. The teachers in that study were a random selection of Year 4 teachers in a Swedish mid-sized municipality. In the present study, we analyse and describe the characteristics of these changes in classroom practice, which were based on a combination of various strategies for formative assessment. Data were collected through teacher interviews and classroom observations. The teachers implemented many new activities that strengthened a formative classroom practice based on identifying student learning needs and modifying the teaching and learning accordingly. The characteristics of the changes the teachers made reveal the complexity of this formative assessment practice and why such developments of practice are likely to require major changes in most teachers’ practices. We also discuss how such changes in practice afford new learning opportunities.

Introduction

Background

Several research reviews have demonstrated that formative assessment can substantially improve student achievement (e.g. Black and Wiliam 1998; Hattie 2009, 297). There is no clearly agreed-upon definition of formative assessment that all researchers adhere to (Dunn and Mulvenon 2009; Filsecker and Kerres 2012; Good 2011), but the following conceptualisation by Black and Wiliam (2009) captures the meaning of many definitions found in the literature:

Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about next steps in instruction that are likely to be better, or be better founded, than the decisions they would have taken in the absence of evidence that was elicited. (Black and Wiliam 2009, 9)

Figure 1. The relation between key strategies, instructional processes, and agents in the classroom (after a table in Wiliam and Thompson 2008, 63; see Note 1).

This definition allows for several different ways of carrying out formative assessment, and indeed different approaches to formative assessment described in the research literature have had different foci. Some researchers focus on the teacher using tests to gather evidence of student learning, with subsequent adjustment of instruction. Some focus on teachers’ feedback given to students based on the gathered evidence on student learning. Others focus on the students’ role in the formative assessment process. This role might be that of self-regulated learners, which includes self-assessment and subsequent actions to attain the learning goals (Zimmerman 2002). The student role might also be to support each other’s learning, which involves peer-assessment and subsequent suggestions to peers on how they might act to reach their learning goals. Black and Wiliam’s review (1998) indeed included publications investigating the impact of different strategies for formative assessment. Research reviews focusing on each of these strategies have also confirmed their potential for enhancing student achievement. The reviews show strong relationships between student achievement and formative assessment strategies such as teachers’ adjustment of teaching based on collected evidence of student learning (Yeh 2009), feedback (Hattie and Timperley 2007), self-regulated learning (Dignath and Büttner 2008), self-assessment using rubrics (Panadero and Jönsson 2013), and peer-assisted learning (Rohrbeck et al. 2003). It should be noted that research about formative assessment is not always conducted using the term “formative assessment”. Some scholars researching these aspects of formative assessment use the term formative assessment (or assessment for learning), while others use terms specifying their particular focus, for example, “feedback”.

All of these approaches share the defining characteristic of formative assessment that agents in the classroom (teacher, peer and learner) collect evidence of student learning and, based on this information, adjust teaching and/or learning. However, the variation in conceptualisation and implementation of formative assessment has led to controversies about the available evidence for large achievement gains from formative assessment (Bennett 2011; Briggs et al. 2012; Dunn and Mulvenon 2009; Filsecker and Kerres 2012; Kingston and Nash 2011; McMillan, Venable and Varier 2013), and it has proved difficult to identify best practices related to formative assessment (Dunn and Mulvenon 2009). Therefore, when referring to the effects of formative assessment it is vital to be clear about the way formative assessment is conceptualised and implemented.

Thus, (1) there is empirical evidence that each of the above-mentioned strategies has the power to improve student achievement, (2) the strategies all have a common core, and (3) the strategies focus on different agents for, and processes of, assessment and learning. This implies that learning might be further improved by integrating several of these strategies into classroom practice. Wiliam and Thompson (2008) suggested a framework specifying how these different strategies can be integrated to make up a unity. However, as shown by Vingsle (2014), such a practice can be very complex, and few published empirical studies have investigated the impact of such an integrated practice on students’ achievement. One prominent example is the study of teachers participating in The King’s-Medway-Oxfordshire Formative Assessment Project (KMOFAP) and how they developed their practice in four areas that were also included in the framework by Wiliam and Thompson (2008). Significant positive outcomes in student achievement were reported from this change in practice (Black et al. 2003; Black and Wiliam 2003; Wiliam et al. 2004). A later study by Andersson and Palm (2017) investigated the outcomes of a professional development programme (PDP), the content of which was based on the whole framework by Wiliam and Thompson. The mathematics teachers participating in the PDP were randomly selected from a middle-sized municipality in Sweden. After controlling for performance on a pre-test at the beginning of the year, the Year 4 students of the teachers who had participated in the PDP significantly outperformed the students of the teachers in a control group after one school year of implementation of formative assessment (Andersson and Palm 2017). Thus, the area has much potential but suffers from a severe shortage of studies investigating both improvements in student achievement and differences or changes in teachers’ classroom practices. Such studies are necessary to understand the impact, and to further develop the theory, of formative assessment. To develop the theory, both evidence of the impact of a certain approach to formative assessment and the identification of the characteristics and components of this particular approach are needed. Furthermore, such evidence needs to be accompanied by postulates about how these characteristics and components work together to create the impact, and by concrete illustrations of what formative assessment built on this approach looks like and how it might work in a real-life setting (Bennett 2011).

The study reported in the present article investigated the changes in formative assessment practice made by the teachers whose students, in the study by Andersson and Palm (2017), were shown to have improved their achievement in comparison with a control group. This assessment practice is conceptualised here as an integration of several strategies of formative assessment. The present study contributes to the current research literature by investigating the following research question: What are the characteristics of the changes in formative assessment practice that these teachers made? In addition, we will discuss the connection between the characteristics of these changes and new learning opportunities for the students in an attempt to shed light on possible reasons for the enhanced student achievement.

Formative assessment as an integration of different strategies

The definition of formative assessment by Black and Wiliam (2009) quoted above emphasises that formative assessment might pertain to, and be inherent in, the whole classroom practice rather than exist as separate activities conducted by different individuals. Thus, the definition constitutes an overarching description of the idea of formative assessment as a unified practice of integrated strategies, and we will use the terms formative assessment and formative classroom practice interchangeably. In addition, Wiliam and Thompson (2008) operationalised the definition in a form that facilitates the learning, and practical use, of formative assessment in the classroom. This framework was used to form the content of the PDP for the teachers participating in the present study, as well as to structure the analysis of the formative assessment carried out in the classrooms of the participating teachers. The framework comprises the “big idea” of using evidence of student learning to adjust instruction to better meet the identified student learning needs and the following five key strategies (KS) (Wiliam and Thompson 2008):

KS 1. Clarifying, sharing, and understanding learning intentions and the criteria for success

KS 2. Engineering effective classroom discussions, questions, and tasks that elicit evidence of learning

KS 3. Providing feedback that moves learners forward

KS 4. Activating students as instructional resources for one another

KS 5. Activating students as the owners of their own learning

Figure 1 shows how the key strategies can be connected to the defining characteristics of formative assessment, as well as how they are related to each other and are integrated into a unified framework of formative assessment. The figure connects the key strategies to the definition of formative assessment in two dimensions. The first dimension constitutes the following three key processes in teaching, learning, and formative assessment: establishing where the learners are going in their learning, establishing where the learners are right now in their learning, and establishing what needs to be done to get where they are going (Wiliam and Thompson 2008). The three processes constitute the defining characteristics of formative assessment inherent in the definition above (as well as the “big idea” in the framework). In addition to acknowledging the importance of teachers and students reaching a common understanding of the specific learning goals that are aimed for, the processes focus on eliciting, interpreting, and using information to make decisions about the next steps in instruction or learning in order to reach these goals.

The second dimension involves the three agents in the classroom referred to in the definition (teacher, peer, and learner) who may participate in the three formative assessment processes. All students are both learners and peers, and the inclusion of both terms makes the point that the students might help each other as well as themselves in the formative assessment process. The teacher, in turn, can support students’ motivation and skills to take an active part in the processes as instructional resources for one another as well as becoming self-regulated learners. When all of the strategies are used as an integrated classroom practice, they support each other to facilitate student engagement and learning. In addition, the interactions between the teachers and students support learning during all three key processes.

A third dimension that differentiates between different ways of conducting formative assessment is the time frame of the adjustment cycle, which includes the length and frequency of the cyclic formative assessment process. The length of the adjustment cycle is the length of time from when the information about student learning is elicited to the actual use of this information through feedback or adjusted instruction (Wiliam and Thompson 2008). For example, a teacher might respond to elicited information about students’ learning by adjusting instruction during the same lesson that the information is elicited, and a student might adjust their learning behaviour immediately after receiving feedback from the teacher. In these cases the length of the adjustment cycle would be a few seconds or minutes. However, the teacher might also wait until the next lesson, instructional unit, or term before using the elicited information to make changes to their instruction. In these instances, the cycle length would be in terms of days or months. The frequency of the adjustment cycle refers to how often teaching or learning is adjusted based on elicited information about student learning. These three dimensions of formative assessment will be used, in addition to the key strategies, to structure the analysis of the teachers’ formative classroom practice and to connect this practice to students’ learning opportunities.
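To make the structure of the framework concrete, the following is a minimal illustrative sketch in Python of how a single classroom activity could be characterised along the three dimensions described above (key strategies, agents and processes, and adjustment cycle). The data model and the example values are our own illustration, not part of Wiliam and Thompson’s framework or of the study’s analysis.

```python
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum


class Agent(Enum):
    TEACHER = "teacher"
    PEER = "peer"
    LEARNER = "learner"


class Process(Enum):
    WHERE_GOING = "where the learners are going"
    WHERE_NOW = "where the learners are right now"
    HOW_TO_GET_THERE = "what needs to be done to get there"


@dataclass
class Activity:
    """One classroom activity characterised along the three dimensions."""
    name: str
    key_strategies: set[int]   # 1-5, the key strategies listed above
    agents: set[Agent]         # who elicits and/or uses the information
    processes: set[Process]    # which key process(es) the activity serves
    cycle_length: str          # e.g. "minutes", "days", "months"
    cycle_frequency: str       # e.g. "every lesson", "about once a week"


# Hypothetical example: exit passes whose information is used in the next lesson
exit_passes = Activity(
    name="exit passes",
    key_strategies={2},
    agents={Agent.TEACHER},
    processes={Process.WHERE_NOW, Process.HOW_TO_GET_THERE},
    cycle_length="days (information used in the next lesson)",
    cycle_frequency="about once a week",
)
```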

The professional development programme in this study

The teachers whose practice was analysed in this study all participated in a professional development programme (PDP) in formative assessment. The content of the PDP was formative assessment conceived of as a unity of integrated strategies, which was operationalised through the framework consisting of the “big idea” and five key strategies by Wiliam and Thompson (2008). The PDP was led by the second author. The 22 participating teachers and the programme leader met at the university for about six hours once a week over one term (144 hours in total). In addition, the teachers had another 72 hours available for reading literature, planning, and reflecting on new formative assessment activities. Between the meetings, the teachers were also supposed to put theory into practice by trying out formative assessment activities that had been introduced and discussed at the meetings. To make it possible for the teachers to participate in the PDP, the teachers’ workload at their schools was decreased by the same number of hours they were expected to be engaged in the PDP. This was accomplished by the schools arranging for substitute teachers to perform some of the teaching duties. A typical university meeting comprised lectures presenting the theory of formative assessment and concrete activities for its implementation in the classroom, group discussions about the content and implementation, and discussions about experiences from the previous week’s implementation. In the latter discussions, the teachers evaluated the implementations, shared experiences of success or failure, and discussed how they could overcome obstacles and develop the use of particular activities. The second author supported these discussions and intervened with suggestions when deemed useful. Generally, the PDP had a formative and process-oriented character. At the final meeting, time was put aside for the teachers to plan, individually or in small groups, for their use of formative assessment in their teaching in the next term.

Generally, the teachers were satisfied with the conditions they had during the PDP, but several teachers found mixed age groups, large class sizes, or groups with a large spread in achievement (including students who were disruptive in the classroom) to be aggravating factors when developing their classroom practice. Almost all of the teachers also found that, during the implementation of their formative classroom practice in the school year after the PDP, a heavy workload made it a challenge to free up time for developing and practising this new way of teaching (Andersson and Palm 2015).

Method

Participants

The 22 teachers (14 females and 8 males) who participated in the study were selected by a stratified random sampling procedure from the population of all teachers who were going to teach mathematics in Year 4 in a middle-sized Swedish city in the 2011–2012 school year. In the schools with one Year 4 class, the teacher was randomly selected to participate or not participate in the PDP in formative assessment during the spring of 2011. For schools with two Year 4 classes, one of the teachers was randomly selected to participate in the PDP. In the three schools with three Year 4 classes, one or two teachers were randomly selected to participate in the PDP. Most of the teachers had only Year 4 students in their class, but four of them had a mixed-age class, and the socio-economic and cultural backgrounds of their students were diverse. Six of the teachers declined to participate in the PDP. The reasons for declining included upcoming retirement, already feeling proficient in formative assessment, and other school priorities.
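As an illustration of the sampling procedure described above, the following Python sketch mimics the school-size-stratified random selection. The exact selection probabilities (the coin flip for one-class schools and the choice between one and two teachers in three-class schools) are assumptions made for illustration; they are not specified in the article, and the school and teacher names are placeholders.

```python
import random


def select_pdp_teachers(schools, seed=None):
    """Randomly select teachers for the PDP, stratified by school size.

    `schools` maps a school name to its list of Year 4 mathematics teachers.
    The rules follow the procedure described above:
    - one Year 4 class:   the teacher is randomly assigned to the PDP or not
    - two Year 4 classes: one of the two teachers is selected
    - three Year 4 classes: one or two teachers are selected
    """
    rng = random.Random(seed)
    selected = []
    for school, teachers in schools.items():
        n = len(teachers)
        if n == 1:
            if rng.random() < 0.5:          # assumed 50/50 assignment
                selected.extend(teachers)
        elif n == 2:
            selected.append(rng.choice(teachers))
        else:                               # three Year 4 classes
            k = rng.choice([1, 2])          # assumed equal chance of 1 or 2
            selected.extend(rng.sample(teachers, k))
    return selected


# Placeholder data, for illustration only
schools = {
    "School A": ["Teacher 1"],
    "School B": ["Teacher 2", "Teacher 3"],
    "School C": ["Teacher 4", "Teacher 5", "Teacher 6"],
}
print(select_pdp_teachers(schools, seed=1))
```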

Procedure for data collection

Data collection was performed using classroom observations and teacher interviews. Each teacher’s classroom practice was observed during the 2011–2012 school year after they had attended the PDP. Interviews were then carried out with all teachers in the spring of 2012 after the classroom observations had been completed. Data were sought about their formative classroom practices with the intent to identify all activities pertaining to such practices. The teachers’ practices, with respect to formative assessment, were compared with their practices before participating in the PDP, which had been analysed and described in an earlier study (Andersson, Boström and Palm 2017). The research project was guided by the Ethical Review Act with later additions (SFS 2008:192) and ethical guidelines from the Swedish Research Council (Vetenskapsrådet 2002; 2011). For the type of research conducted in this study, it is not necessary to submit an application for ethical evaluation to the university research ethics body. The teachers agreed willingly to participate in the research, and even though the research focus was on the teachers, the students and their parents were informed of the study. The students were specifically informed that the teacher, not the students themselves, was the object of study.

Classroom observations

The teachers were informed that they were going to be visited at least twice during the school year. However, to increase the probability of observing regular lessons, agreements were made that they were only going to be notified of the exact day and time the evening before the visit. An observation schema based on the framework comprising the “big idea” and the five key strategies structured the observations. During the observed lessons, all identified formative assessment activities were recorded and described in field notes.

Teacher interviews

The interviews were semi-structured and lasted approximately 1½ hours, and they were directed by an interview guide. In the interviews, the teachers were explicitly asked about the changes they had made in their classroom practice after the PDP. Follow-up questions were asked to obtain clarification, examples, and more details. For example, if the teacher said they had used an activity very frequently, we asked them what this meant in practice. All interviews were recorded and transcribed by the first author shortly after the interview.

The interview guide had three parts. In the first part, the teachers were asked to describe their teaching after the PDP without having to go into detail. One kind of follow-up question prompted the teachers to describe a bit more about the changes made, for example, “What distinguishes the current teaching from the past?” Another kind of follow-up question asked about the frequency of the use of certain new activities mentioned by the teacher. This information was important because only activities that were used regularly were considered to be a change in this study (see the section on data analysis below). An example of such a question was “How often would you say you do so?” In the second part, following the structure of the five key strategies, the teacher was asked to elaborate on the changes made for each key strategy. The questions pertaining to each strategy followed after the interviewer had referred to changes mentioned in the first part of the interview or had introduced each key strategy by saying, “This key strategy is about …, and you may have changed some things and not others. What are you doing now compared to before? What has changed?” For the purpose of another study, the third part of the interview focused on questions about the teachers’ experiences of the implementations and the reasons for the specific instructional decisions that they had made. Similar follow-up questions were also asked in the second part of the interview in direct connection with descriptions of the teachers’ practice.

Data analysis

The main purpose of the data analysis was to identify the characteristics of the changes in the teachers’ formative classroom practices. In the first step of this analysis, the formative assessment activities that were regularly implemented in each teacher’s practice were identified in the transcripts from the interviews and in the field notes from the classroom observations. The framework by Wiliam and Thompson (2008) was used in the analysis of the teachers’ practices, and we defined formative assessment activities as activities used in classroom practice that have the potential to contribute to the attainment of the goal of at least one of the key strategies and to the “big idea” of formative assessment. Each activity was classified in relation to the five key strategies. The “big idea” permeates the key strategies, so for analytical reasons we have not classified any single activity as belonging to a ‘big idea category’. However, the requirement of contributing to the purpose of the big idea means that, for example, gathering information about student learning is only regarded as a formative assessment activity (pertaining to Key Strategy 2, KS 2) if the information is used to modify teaching or learning. From a student perspective, the “big idea” is included in KS 4 and KS 5. From a teacher perspective, the “big idea” pertains to KS 1–3, but teachers’ adjustment of instructional activities is not included in any of the key strategies in the framework. Therefore, we have included Adjusted Teacher Instruction (ATI) as a new ‘strategy’.

For an activity to be regarded as a new formative assessment activity in the teachers’ practice, in other words, to be a change, it would have to be an activity not used before or used to a lesser extent before. Thus, new activities could include activities that were presented in the PDP and then implemented in the classroom as presented in the PDP or in a modified form. An activity could also be considered new if an old activity was modified or used more frequently. In addition, to be regarded as a new activity in their classroom practice, it also had to be used regularly as a normal part of the practice. What exactly should be counted as “regular” is, of course, a matter of judgement and differs between types of activities. The data about how frequently a teacher used the activities came from the teacher interviews. Utterances such as “for each chapter”, “every week”, “often”, and “that is how we do it now” would count as describing sufficient regularity. Utterances such as “seldom” or “just tested” would exclude the activity from being regarded as a change in practice.

The last requirement for an activity to be regarded as a regular part of a new practice was that the teacher in the interview either provided details or examples from the use of the activity (it was not sufficient that the teachers only said that they used the activity), or that the teacher mentioned it in the interview and it could be concluded from the classroom observations that it was used regularly. Examples of such indications from the classroom observations were when students seemed used to, talked about, or asked for the activity. Thus, the interviews were the leading source of information about the teachers’ practice, and the classroom observations served to validate the conclusions drawn from the interviews. Difficult classification decisions were made by both authors in consensus.

The following is an example of an analysis concluding that a teacher had implemented a regular use of exit passes (described in the Results section). In the interview, she said: “Exit passes, I have usually used them for lessons just before lunch or a break… I stand by the door and receive them when they leave the classroom… approximately once a week… in exit passes I can use more extensive tasks, compared to when using whiteboards on which you usually only get one short answer. Exit passes provide more exhaustive information”. In the analysis, the use of exit passes was considered a new activity because the analysis of her practice before the PDP showed that she did not use them then. The utterance “once a week” supported the interpretation that she used the exit passes regularly in her new practice. She also provided sufficient details on how the activity had been used. Finally, the teacher’s use of exit passes was considered a formative assessment activity from the following utterance, indicating that the information gathered from the exit passes was used to adjust her teaching: “[In the exit passes] you can also see this, Oops, that’s something we need to go through one more time, and then I usually do that”.
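The decision rules described above can be summarised in a short sketch. The function below is our own simplified rendering of the criteria (new or more extensively used than before, regular use indicated by the teacher’s utterances, and corroboration through details or observation); the keyword lists and parameter names are illustrative and do not reproduce the study’s coding scheme.

```python
# Utterances the article gives as indicating sufficient regularity (or not)
REGULAR = {"for each chapter", "every week", "often",
           "that is how we do it now", "once a week"}
NOT_REGULAR = {"seldom", "just tested"}


def counts_as_new_regular_activity(new_or_extended: bool,
                                   frequency_utterance: str,
                                   gave_details_or_examples: bool,
                                   observed_regular_use: bool) -> bool:
    """Simplified rendering of the criteria for counting an activity as a change.

    new_or_extended: not used before the PDP, or used to a lesser extent
                     before (possibly now used in a modified form).
    frequency_utterance: the teacher's own description of how often it is used.
    gave_details_or_examples: details or examples were given in the interview.
    observed_regular_use: classroom observations corroborated regular use.
    """
    if not new_or_extended:
        return False
    if frequency_utterance in NOT_REGULAR:
        return False
    regular = frequency_utterance in REGULAR
    corroborated = gave_details_or_examples or observed_regular_use
    return regular and corroborated


# The exit-pass example above: new, "once a week", with details given
print(counts_as_new_regular_activity(True, "once a week", True, False))  # True
```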

Based on the analysis of the teachers’ practices described above, a list was made for each teacher of every new formative assessment activity that had been implemented on a regular basis. These lists provided material for describing, on a group level, the type of formative assessment activities that were most commonly implemented. Such a compilation is provided in Table 2 in the Results section.
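The following sketch shows how such per-teacher lists can be summarised into the group-level compilations reported in Tables 1 and 2 (the number of new activities per teacher, and the number of teachers per activity). The activity lists below are invented placeholders; the study’s actual figures are given in the tables.

```python
from collections import Counter
from statistics import median

# Invented placeholder lists; one entry per teacher in the real analysis
new_activities = {
    "Fredric": ["mini-whiteboards", "exit passes", "comments instead of scores"],
    "Helen": ["mini-whiteboards", "exit passes", "random selection of responders",
              "clarified learning goals", "peer comparison of homework"],
}

# Table 1 style summary: how many new activities each teacher implemented
counts_per_teacher = {teacher: len(acts) for teacher, acts in new_activities.items()}
print("range:", min(counts_per_teacher.values()), "to", max(counts_per_teacher.values()))
print("median number of new activities:", median(counts_per_teacher.values()))

# Table 2 style summary: number of teachers implementing each activity
teachers_per_activity = Counter(a for acts in new_activities.values() for a in set(acts))
for activity, n_teachers in teachers_per_activity.most_common():
    print(f"{activity}: {n_teachers} teacher(s)")
```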

Table 1. The number of teachers implementing a certain number of new formative assessment activities

Table 2. The most commonly implemented new formative assessment activities (A1–A21), each activity’s relation to the key strategies (KS) or to the “big idea” in terms of adjusted teacher instruction (ATI), and the number of teachers implementing each activity (NoT).

In addition, comprehensive descriptions of each teacher’s changes in their formative assessment practice were made. The descriptions were structured along the three dimensions and three main processes of formative assessment defined above, and connections to the five key strategies and ATI were made throughout the descriptions. The descriptions began with the identified activities in the first dimension, related to the teacher as an agent, and in the order of the three main processes in formative assessment (see above). This corresponded to part of KS 1, KS 2–3, and the teachers’ ATI. With regard to the second dimension, the activities focusing on the students as agents were described in order of the three processes. This corresponded to other parts of KS 1 and to KS 4 and 5. The descriptions then ended with an analysis of how the new activities affected the frequency and length of the adjustment cycle (the third dimension).

Analyses of the descriptions of the changes in formative assessment practice of two teachers are provided in the Results section. The purpose of presenting these analyses is to complement the compilation of the most common new activities provided in Table 2 with examples of how these different formative assessment activities are used together and thereby make up a formative classroom practice as an integration of strategies. The two analyses pertain to the teacher who implemented the fewest new activities and the teacher who implemented the most new activities. This choice was made to mirror the continuum of the extent to which a new classroom practice was implemented in the group of teachers analysed in the study. The analyses are also used in the Discussion section to discuss the extended learning opportunities opened up by these new practices.

Results

In this section we describe the changes the teachers made in their classroom practice with regard to formative assessment. We first present analyses of the formative classroom practice of the teachers who implemented the fewest and the most new activities. We then provide information about the practice pertaining to the whole group of teachers. Table 1 shows how many new formative assessment activities the teachers regularly used as an inherent part of their normal classroom practice after participating in the PDP, and Table 2 specifies those activities that were most commonly implemented. Together, these different types of results provide a picture of the formative classroom practices that were implemented by the group of teachers analysed in this study.

Analysis of Fredric’s formative classroom practice (see Note 2)

Fredric is the teacher who implemented the fewest new formative assessment activities in his classroom. With respect to the first dimension of conducting formative assessment (the three processes of identifying the learning goals, the current learning needs, and how to attain the goals), Fredric did not strengthen the first process, which corresponds to KS 1. The other two processes, however, were significantly strengthened. To elicit evidence of students’ learning (KS 2), he began to use mini-whiteboards and exit passes. He also assigned one or two tasks targeted to specific learning goals to individual students during independent work, and he consciously acted to create a classroom climate where mistakes and questions were accepted. When mini-whiteboards are used, the students write their answers to the teacher’s questions on them, and then, all at the same time, hold them up for the teacher to view. Thus, the use of mini-whiteboards is an all-response system that provides the teacher with information about every student’s learning during a lesson and thus solves the problem of acquiring information about every individual student in large classes. Based on an interpretation of the thinking and skills underlying the students’ responses, the teacher can adjust the instructional activities during the same lesson to better meet the students’ learning needs that have just been identified. When using exit passes, the students write their responses to the teacher’s questions on a piece of paper at the end of the lesson and give it to the teacher when leaving the classroom. Based on these responses, instruction in the next lesson is adjusted to fit the students’ understanding. This means that information is collected more often about every student’s specific learning needs, not only about the needs of those who have raised their hands to voluntarily answer the teacher’s questions. Thus, Fredric’s adjustments of instructional activities (ATI) and feedback (KS 3) were better founded and could better serve each individual student. In addition, he was able to use this information to choose or adjust the tasks for the individual students to work with (instead of just working with all of the tasks in the textbook), and this was often done in dialogue with the student. He also made a conscious selection of tasks after a diagnosis had been made or when information about student learning had been gathered in other ways. The initiative to make a selection of tasks sometimes came from a student and sometimes from himself. Fredric also added additional ways of providing feedback (KS 3) as a way of acting on information about students’ identified learning needs. For example, he replaced scores with comments on homework and diagnostic tests. His written feedback most commonly communicated two things the students had done well and one specific suggestion about how to improve (called “two stars and a wish” (Wiliam 2011)).

In the second dimension, concerning the involvement of all agents in the classroom, Fredric started to encourage the students to help each other (KS 4), but he did not describe for them, or train them in, how to use formative assessment to best support each other. Regarding KS 5, he implemented a system for how the students should act during individual work when they did not understand how to solve a task, but he did not describe, or let them practise, how to self-regulate their learning more generally. Thus, in Fredric’s classroom, formative assessment was still very much the responsibility of the teacher. In the third dimension, Fredric had increased the frequency of activities involving the formative assessment adjustment cycle. The implementation of tasks to be answered on the mini-whiteboards and exit passes extended his gathering of information about each individual student’s learning to at least once a week. The lengths of the adjustment cycles were often short because feedback or adjusted instructional activities followed during the same or the next lesson.

Analysis of Helen’s formative classroom practice

Helen was the teacher who implemented the most new activities. In regard to the first dimension, Helen had a much stronger focus on learning goals. This manifested itself in all of the activities that were conducted to strengthen the first process of clarifying the learning intentions and criteria for success (KS 1). Helen’s lessons often started with: “After this lesson you should know…”, together with exemplars to clarify the sub-goals of the learning intentions. The written suggestions for learning tasks that Helen gave to the students were also structured according to relevant sub-goals and included exemplars of what it means to attain the sub-goals.

With regard to the second process, Helen more often gathered information about her students’ learning. She used mini-whiteboards several times every week to collect evidence about students’ learning in order to adapt instruction during the same lesson. She also used exit passes at least every second week in order to make instructional decisions about the following lessons (KS 2). Her increased focus on the learning goals was manifested in her questions and tasks, which were based on her knowledge about common misconceptions and consciously aimed at specific learning targets that were necessary for further learning. She used the mini-whiteboards both to create engagement and learning among the students and to elicit evidence of learning. Helen also gathered information about student learning by using questions for students to discuss in pairs or groups. She also developed two diagnoses per content area (approximately once a month) and some small tests for checking specific skills. She no longer used the diagnoses from the textbook, and she spent less time correcting students’ notebooks. To facilitate learning and the gathering of valid information about every student’s learning, she supported the engagement of all students in thinking. For example, choosing individual students to answer her questions in whole-class situations was based on a random selection procedure, and before revealing which student was selected to respond, time was allocated for all students to think about the question. All students also needed to be prepared to follow up on each other’s responses. Through feedback, Helen fostered a classroom climate that focused on learning and in which students were willing to ask and answer questions as well as participate in discussions and share ideas. Her message was that wrong answers and revealed misconceptions are positive opportunities for learning. For example, she often repeated: “In this class we do not focus on having the right answer from the beginning – it is quite all right to change one’s mind – and wrong answers have more potential for learning than right answers”.

Helen made different kinds of adjustments to her instruction (ATI) based on the information gathered about student learning. Information from newly implemented small tests was used for decisions about which tasks were most appropriate for individual students to work with. Information from mini-whiteboards and exit passes was used for instructional adjustment during the same or the next lesson, and such instructional adjustment came in the form of lectures, tasks, or other learning activities. If many students showed similar misconceptions, the whole group received the same instruction; otherwise, learning activities were modified for smaller groups or individuals. Extra lectures for groups of students or individual students had been used prior to her participation in the PDP, but afterwards the instruction in these lessons was based on the specific misconceptions that were identified in her assessments. Although Helen’s long-term planning included learning tasks for all students, tasks for both independent seatwork and homework could be added or removed, either on the student’s or Helen’s initiative. She also used summarised information from several assessment sources to adjust the number of weeks spent on a textbook chapter.

Helen significantly developed her way of giving feedback (KS 3). Before the PDP, her oral feedback was mostly concerned with praising students for providing correct answers and with correcting faulty answers, and feedback on tests and diagnoses was in the form of right or wrong answers. After the PDP, she directed her feedback towards the learning goals and, in line with “two stars and a wish”, structured the feedback to comprise specifics about what the student already understood and guidance about what the next step in learning should be. She replaced scores on tests and diagnoses with comments. In addition, Helen set aside time for, and structured the students’ use of, her written feedback, and she followed up on their use of her feedback.

With regard to the second dimension, Helen significantly involved all agents in all three instructional processes. She involved the students in interpreting the learning goals (first process, KS 1) by organising the instruction so that students took part in the work towards reaching a mutual understanding of the learning goals. For example, the students worked in groups of 4 or 5 to exemplify sub-goals by assessing various student solutions and arguing for their judgement in relation to criteria for attaining the goals at different levels. With regard to KS 4 (which pertains to both the second and third processes), Helen described, evaluated, and discussed with her students their use of formative assessment activities. For example, she gave her students direct support in the form of examples of good questions to use in their roles both as help seekers and help givers, and she let the students compare and correct their homework in groups of 3–6 students each week. To support the students, Helen specified what aspects of the solutions the students should focus their comparisons on. This was also the case for the activities used for KS 5, which also pertained to the second and third processes. The students often received some form of written suggestions, which supported their autonomy. For example, when students were stuck on a task or otherwise felt a lack of understanding, they were to first self-reflect on what they already knew, what they had already tried, and what the next step might be (this was called “wise decisions”). If this did not help, they were to ask their peers for help (called “wise peers”), and as a last resort they were to ask the teacher. When Helen found that students did not use the classroom time efficiently, she let the students monitor, document, and reflect on their use of time. In practice, this might be done by students setting a goal for how many minutes of a lesson should be spent on focused learning, noting for every ten-minute period how many minutes had been used well, and then assessing at the end of the lesson whether the goal was reached and why (or why not), and making a plan for the next lesson. To engage the students in, and make them reflect on, their learning, Helen also let her students self-assess their learning in relation to different goals. At the end of a lesson they might, for example, evaluate whether they had attained a learning goal for that lesson. Helen supported this goal-oriented self-assessment, for example, by providing comments on students’ performance on diagnoses as models for self-assessment.

In the third dimension, her timely actions based on her frequently collected information about student learning, as well as her support of students’ own regulation of learning, increased the frequency and shortened the length of the adjustment cycle in her practice.

Implemented formative assessment activities

During the school year after attending the PDP, all of the teachers implemented new formative assessment activities that were regularly used in their classroom practice. Table 1 shows that the number of new activities varied between 8 and 34, and the median number of new activities was 20.

Table 2 shows the formative assessment activities that were most commonly implemented by the teachers. Some of them were exemplified in the analyses of Fredric’s and Helen’s formative classroom practices above, and some will be exemplified below. As can be seen in the table, the teachers implemented new activities related to all key strategies as well as to the category of the teacher adjusting instruction in light of the collected evidence of student learning (ATI). The ATI category pertains to the “big idea” from a teacher’s perspective.

The most common activities that the teachers implemented involved new ways of collecting evidence about students’ thinking and skills (activities pertaining to KS 2), which they used to adjust instruction (activities pertaining to ATI) to better meet the identified learning needs. In the following, we will describe the use of the most commonly implemented activities. All but one teacher extended their repertoire of assessment activities by implementing the use of student mini-whiteboards (on which every student responds to the teacher’s questions) to elicit evidence of student learning as a regular activity in their classrooms (KS 2). Of these 21 teachers, 18 also used the mini-whiteboards for the purpose of creating student engagement in learning activities. Seventeen of the 22 teachers also used “exit passes” (see the analysis of Fredric’s practice above) at the end of the lessons to gather evidence of the students’ understanding. The teachers used both the mini-whiteboards and the exit passes at least every second week. The most frequent users of mini-whiteboards used them almost every lesson, while the most frequent users of exit passes used them 2–3 times a week. Other common new assessment activities were to use tests for formative purposes, to encourage students to reveal uncertainties in their understanding, and to replace hand-raising as the system for choosing who will respond to the teacher’s questions with an all-response or random-choice system.

The most common adjustment (made by 19 teachers) was to modify instruction for the whole class, while 9 teachers provided extra or adjusted instructional activities for smaller groups of students. Thus, a little less than half of the teachers adjusted instruction in both ways. Other common ways of modifying instruction were to modify the time spent on a mathematical unit, and – in dialogue with individual students – to make sure that each student worked with tasks appropriate to their current mathematical understanding. All of these adjustments became better grounded because of the teachers’ new activities for gathering information about each individual student’s learning.

A shift was also made in the teachers’ practice in relation to KS 1. Most teachers broke down the learning goals into sub-goals and the specific criteria for achieving these goals at different levels, and these sub-goals and criteria were clarified in discussions with the students. However, most of the rubrics, describing various levels of attainment of learning objectives, were those developed during the PDP. Only a few of the teachers created new rubrics or modified the rubrics developed during the PDP to match the needs of new specific learning areas. In their communication with their students, about half of the teachers replaced a focus on the number of tasks to be solved with a focus on emphasising and clarifying the intended learning from the lesson. The teachers’ use of feedback (KS 3) developed considerably after the PDP. Most of the teachers were more conscious about the characteristics of the feedback they gave to students, and 13 of the teachers only gave feedback in the form of comments and not as points or grades. Their feedback most commonly communicated two things the students had done well and one specific suggestion about how to improve (i.e. feedback in the form of “two stars and a wish”).

To a large extent, the teachers tried to activate the students in the formative classroom practice to become instructional resources for one another (KS 4) and owners of their own learning (KS 5). Most teachers encouraged their students to help each other, and about half of the teachers also described for them how to seek and provide help. Fifteen of the 22 teachers supported their students in becoming more autonomous learners by describing how to act to regulate their own learning in a formative classroom practice.

Discussion

Extended learning opportunities

The teachers’ changes in formative assessment practice ranged from complementing previous teaching with new activities pertaining to the “big idea” of formative assessment (e.g. Fredric), to classroom practices that were radically altered at their very foundation (e.g. Helen). The teachers’ development, along the three dimensions of a practice based on the integration of different strategies for formative assessment, provided extended learning opportunities. The strengthening of each of the processes in the first dimension enhanced the benefit of the other processes and of using all three of them. For example, from the perspective of the teacher as an agent in the teaching and learning processes, clarification of the goals for the students also forced the teachers to specify and raise their own awareness of their personal interpretation of the goals. Clearer goals facilitated the assessment of the most relevant knowledge and skills, which in turn provided more valid and reliable data about students’ learning needs and thus the possibility to make more valid and reliable decisions about how to act to support all students’ learning towards the learning goals. Indeed, the teachers used this information both for extended explanations and for feedback that was adapted to the identified learning needs. From the perspective of the students as agents in the teaching and learning processes (the second dimension of practice), a clear and mutual understanding of the learning goals would have facilitated the quality of the students’ collaborative learning processes as well as self-regulated learning processes in which all three key processes were involved. A mutual understanding of the learning goals would have facilitated, for example, the provision of useful feedback and the interpretation of feedback between peers. In addition, the learning goals guided the students, as well as provided standards for self-assessment, in self-regulated learning processes (Zimmerman 2002). Hence, the activities clarifying these goals would also have provided better learning opportunities through the empowerment of higher-quality peer-assisted and self-regulated learning (Dignath and Büttner 2008; Rohrbeck et al. 2003).

The strengthened use of the three key processes and the use of all agents as resources in the classroom also contributed to an increased frequency of activities involving the adjustment cycle (the third dimension of practice), which meant that the time in the classroom could be used more efficiently for learning. Indeed, several teachers expressed that it did not feel acceptable not to act on the explicit information about student learning that they now frequently possessed. For example, the teachers’ use of questions answered on mini-whiteboards during a lesson and their use of exit passes at the end of a lesson, along with adjustments of the same or the next lesson, continuously provided students with learning activities that were adapted to their actual learning needs. Less time was spent on activities that were less optimal for their learning. The time from assessment to action was also significantly reduced when the students sought high-quality support from, and provided it to, each other and activated themselves in self-regulated learning processes, processes that were supported by most of the teachers in the study. The students spent less time waiting for help from the teacher because they were less dependent on the teacher. Previous research has provided evidence of positive effects of formative assessment on student achievement for shorter cycle lengths of minutes up to days, but not for longer cycle lengths (Wiliam 2010).

Final remarks

The definition of formative assessment by Black and Wiliam (2009) that was used in this article was part of the development of the theory of formative assessment beyond the stage reached in their seminal earlier work (Black and Wiliam 1998) that “did not start from any pre-defined theoretical base but instead drew together a wide range of research findings relevant to the notion of formative assessment” (Black and Wiliam 2009, 5). However, as Bennett (2011) argues, further development of the theory of formative assessment is needed in order to realise the maximum benefit from the use of this concept. Such developments require research that provides instantiations illustrating the characteristics of formative assessment practice, how the components of such an approach might work in a real setting, and how they might impact student achievement (Bennett 2011). Such an instantiation is the main contribution of the present study. The study identified the changes in formative classroom practice made by a random selection of Year 4 mathematics teachers, and it exemplifies how these changes might afford new learning opportunities. That the students improved their achievement when these teachers changed their practice has been shown elsewhere (Andersson and Palm 2017). The present study complements the studies in the KMOFAP project (Black et al. 2003) in which the participating teachers were not randomly selected. Many implemented activities were similar in the two projects, which is not surprising because the PDP in the present project was inspired by the KMOFAP project and used a similar framework. However, comparing the average implementation of 20 new activities per teacher with the average of 6 new activities in the KMOFAP project (Black et al. 2003, 22–23), it seems that the teachers in the present project made quantitatively greater changes in their practice.

The identified characteristics of the changes in practices of this group of teachers show, as in the case study of one teacher by Vingsle (2014), how complex this classroom practice is, and thus, as anticipated by Black and Wiliam (1998), major changes are likely to be required in most teachers’ practice in order to attain this kind of formative assessment use. Therefore, most teachers would benefit from substantial support in their endeavour to develop their use of formative assessment, and when such support is provided it might result in improved classroom practice that enhances student achievement.

Acknowledgement

The second author’s participation in this study was made possible through a grant from the Swedish Riksbankens Jubileumsfond.

Additional information

Notes on contributors

Catarina Andersson

Catarina Andersson, PhD, is a member of Umeå Mathematics Education Research Centre (UMERC). She works as a Special Educator in Skellefteå and as a researcher at the Department of Science and Mathematics Education, Umeå University. Her research interests concern professional development, formative assessment, and mathematics education, complemented by special education in general and assessment in inclusive classroom practice in particular.

Torulf Palm

Torulf Palm is an associate professor in pedagogical work and a member of Umeå Mathematics Education Research Centre (UMERC). He works at the Department of Science and Mathematics Education, Umeå University. His main research interests are formative assessment and teacher professional development.

Notes

1. With kind permission of Dylan Wiliam 150320

2. For anonymity reasons, the names used in the article are not the teachers’ real names.

References

  • Andersson, Catarina, Boström, Erika and Palm, Torulf. 2017. Formative assessment in Swedish mathematics classroom practice. Nordic Studies in Mathematics Education, forthcoming.
  • Andersson, Catarina and Palm, Torulf. 2015. Reasons for teachers’ successful development of a formative assessment practice through professional development – a motivation perspective. In Catarina Andersson, Professional development in formative assessment: effects on teacher classroom practice and student achievement (Doctoral thesis, Department of Science and Mathematics Education). Umeå, Sweden: Umeå University.
  • Andersson, Catarina and Palm, Torulf. 2017. The impact of formative assessment on student achievement: a study of the effects of changes to classroom practice after a comprehensive professional development programme. Learning and Instruction 49: 92–102. doi:10.1016/j.learninstruc.2016.12.006
  • Bennett, Randy. 2011. Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice 18 (1): 5–25. doi:10.1080/0969594X.2010.513678
  • Black, Paul, Harrison, Christine, Lee, Clare, Marshall, Bethan and Wiliam, Dylan. 2003. Assessment for learning: Putting it into practice. Buckingham: Open University Press.
  • Black, Paul and Wiliam, Dylan. 1998. Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice 5 (1): 7–74. doi:10.1080/0969595980050102
  • Black, Paul and Wiliam, Dylan. 2003. ‘In praise of educational research’: formative assessment. British Educational Research Journal 29 (5): 623–37.
  • Black, Paul and Wiliam, Dylan. 2009. Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability 21 (1): 5–31. doi:10.1007/s11092-008-9068-5
  • Briggs, Derek C., Ruiz-Primo, Maria Araceli, Furtak, Erin, Shepard, Lorrie and Yin, Yue. 2012. Meta-analytic methodology and inferences about the efficacy of formative assessment. Educational Measurement: Issues and Practice 31 (4): 13–17. doi:10.1111/j.1745-3992.2012.00251.x
  • Dignath, Charlotte and Büttner, Gerhard. 2008. Components of fostering self-regulated learning among students. A meta-analysis on intervention studies at primary and secondary school level. Metacognition and Learning 3 (3): 231–264. doi:10.1007/s11409-008-9029-x
  • Dunn, Karee E. and Mulvenon, Sean W. 2009. A critical review of research on formative assessment: The limited scientific evidence of the impact of formative assessment in education. Practical Assessment, Research & Evaluation 14 (7). Retrieved from: http://pareonline.net/getvn.asp?v=14&n=7.
  • Filsecker, Michael and Kerres, Michael. 2012. Repositioning formative assessment from an educational assessment perspective: A response to Dunn & Mulvenon 2009. Practical Assessment, Research & Evaluation 17 (16). Retrieved from: http://pareonline.net/getvn.asp?v=17&n=16.
  • Good, Robert. 2011. Formative use of assessment information: It’s a process, so let’s say what we mean. Practical Assessment, Research & Evaluation 16 (3). Retrieved from: http://pareonline.net/getvn.asp?v=16&n=3.
  • Hattie, John. 2009. Visible learning: a synthesis of over 800 meta-analyses relating to achievement. London: Routledge.
  • Hattie, John and Timperley, Helen. 2007. The power of feedback. Review of Educational Research 77 (1): 81–112. doi:10.3102/003465430298487
  • Kingston, Neil and Nash, Brooke. 2011. Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice 30 (4): 28–37.
  • McMillan, James H., Venable, Jessica C. and Varier, Divya. 2013. Studies of the effect of formative assessment on student achievement: So much more is needed. Practical Assessment, Research & Evaluation 18 (2). Retrieved from: http://pareonline.net/getvn.asp?v=18&n=2.
  • Panadero, Ernesto and Jönsson, Anders. 2013. The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002
  • Rohrbeck, Cynthia A., Ginsburg-Block, Marika D., Fantuzzo, John W. and Miller, Traci R. 2003. Peer-assisted learning interventions with elementary school students: A meta-analytic review. Journal of Educational Psychology 95 (2): 240–257.
  • SFS 2008:192 Lag om ändring i lagen (2003:460) om etikprövning av forskning som avser människor [Act amending the Act (2003: 460) on Ethical Review of Research Involving Humans]. Stockholm: Utbildningsdepartementet. Retrieved from http://www.lagboken.se/Views/Pages/GetFile.ashx?portalId=56&cat =27526&docId=181354&propId=5
  • Vetenskapsrådet. 2002. Forskningsetiska principer inom humanistisk-samhällsvetenskaplig forskning [Research Ethical Principles]. Retrieved from http://www.codex.vr.se/texts/HSFR.pdf
  • Vetenskapsrådet. 2011. God forskningssed [Good Research Practice] (Vol. 3:2011). Retrieved from http://www.vr.se/download/18.3a36c20d133af0c1295800030/1340 207445948/Good+Research+Practice+3.2011_webb.pdf
  • Vingsle, Charlotta. 2014. Formative assessment: Teacher knowledge and skills to make it happen. Licentiate thesis, Department of Science and Mathematics Education, Umeå University. Retrieved from http://umu.diva-portal.org/smash/get/diva2:735415/FULLTEXT01.pdf
  • Wiliam, Dylan. 2010. An integrative summary of the research literature and implications for a new theory of formative assessment. In Heidi L. Andrade. & Gregory J. Cizek (Eds.), Handbook of formative assessment. Abingdon: Routledge, 18–40.
  • Wiliam, Dylan. 2011. Embedded formative assessment. Bloomington, Indiana: Solution Tree Press.
  • Wiliam, Dylan, Lee, Clare, Harrison, Christine and Black, Paul. 2004. Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice 11 (1): 49–65. doi:10.1080/0969594042000208994
  • Wiliam, Dylan and Thompson, Marnie. 2008. Integrating assessment with learning: what will it take to make it work? In Carol A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning. Mahwah: Lawrence Erlbaum Associates, 53–82.
  • Yeh, Stuart S. 2009. Class size reduction or rapid formative assessment?: A comparison of cost-effectiveness. Educational Research Review 4 (1): 7–15. doi:10.1016/j.edurev.2008.09.001
  • Zimmerman, Barry J. 2002. Becoming a self-regulated learner: An overview. Theory Into Practice 41 (2): 64–70.