
Fostering student teachers’ research-based knowledge of effective feedback

Thomas Bock, Eva Thomm, Johannes Bauer & Bernadette Gold
Pages 389-407 | Received 18 Oct 2022, Accepted 21 Mar 2024, Published online: 05 Apr 2024

ABSTRACT

Properly designed feedback can be highly conducive to students’ learning. Therefore, teacher education needs to equip future teachers with research-based knowledge of how to provide effective feedback. The present study reports the implementation and quasi-experimental evaluation (a pre-post control group design; N = 141) of a four-week intervention programme that aimed to enhance student teachers’ knowledge of effective feedback and their ability to provide it to students. As a secondary objective, we also tested whether the experience of applying research-based knowledge about feedback improved participants’ attitudes towards educational research. The results showed a substantial increase in knowledge about effective feedback. Moreover, in-depth analysis of written feedback indicated an improvement in participants’ ability to provide high-quality feedback. However, there was no additional effect on their attitudes towards the usability of knowledge from educational research. We discuss the implications for teacher education and teaching about effective feedback.

Introduction

Feedback is one of the most influential factors in learning (Alfieri et al. Citation2011; Hattie Citation2009; Wisniewski, Zierer, and Hattie Citation2019). This is especially true when it relates to individuals’ performance, goes beyond the immediate task and includes advice on self-regulatory strategies related to the learning objective. However, feedback can also be ineffective or even harmful for learning, such as when it focuses solely on the correctness of an answer and provides no specific information on the performance or learning process (Bangert-Drowns et al. Citation1991; Hattie and Timperley Citation2007; Kluger and DeNisi Citation1996; Shute Citation2008). In fact, teachers often seem to give ineffective feedback that is too unspecific (Voerman et al. Citation2012) or focuses on praise or summative assessments rather than providing specific constructive suggestions for improvement (Drake and Nelson Citation2021; Hattie and Clarke Citation2019; Wisniewski, Zierer, and Hattie Citation2019). In addition, feedback that is constrained to the task provides little guidance on self-regulatory processes and learning goals (van den Bergh, Ros, and Beijaard Citation2013).

Two reasons may explain why teachers frequently fail to give productive feedback. First, they often rely on everyday knowledge of or misconceptions about how to formulate effective feedback (Hess, Werker, and Lipowsky Citation2017). Second, they overestimate the effectiveness of their day-to-day feedback. Research shows significant discrepancies between teachers’ own assumptions about how to formulate effective feedback, their own feedback behaviours and their learners’ perceptions of the feedback given (Carless Citation2006; Lee Citation2009). To counteract these shortcomings, it is important that teachers acquire research-based knowledge about effective feedback in initial and continuing teacher training. Such knowledge can help them recognise the impact and complexity of effective feedback and acquire the knowledge and skills to provide it. Beyond improving everyday feedback practice in schools, learning about the principles of effective feedback may also be beneficial to teachers’ adoption and use of research knowledge for their own practice. Despite frequent calls to foster evidence-based practice in teaching (Bauer and Prenzel Citation2012; Detrich and Lewis Citation2013; Diery et al. Citation2020; Rousseau and Gunia Citation2016; Scheeler, Budin, and Markelz Citation2016), teachers rarely seem to refer to research to inform their teaching (van Schaik et al. Citation2018). Hence, showing that research can actually provide practical insights that transfer easily to classroom practice may help foster favourable attitudes towards educational research (Datnow and Hubbard Citation2016; Kippers et al. Citation2018). The research base on feedback is both of high practical relevance and scientifically solid (Hattie and Timperley Citation2007).

In line with these arguments, this contribution presents an intervention study that was embedded in the internship phase of teacher training in a master’s programme. The intervention had two aims: First, it was designed to enable student teachers to gain and apply educational research knowledge about the effects and provision of effective feedback. Second, by exemplifying the application of research knowledge, it aimed to foster positive attitudes towards educational research use. In our empirical study, we aimed to evaluate whether this intervention increased knowledge (Research Question 1a) and abilities (Research Question 1b) regarding the formulation of effective feedback and improved participants’ attitudes towards educational research findings (Research Question 2).

Below, first, we provide a short overview of the principles of effective feedback. Second, we outline the role of prior assumptions about feedback in learning to provide effective feedback, drawing on research on knowledge integration and revision. Third, we explain how acquiring educational research knowledge about feedback may serve to improve student teachers’ attitudes towards educational research.

Characteristics of effective feedback

According to the well-known model of effective feedback by Hattie and Timperley (Citation2007), feedback can be understood as “information provided by an agent (e.g. teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding” (p. 81). Effective feedback provides guidance to close gaps that are observed between the current and intended performance or understanding of learners. Specifically, Hattie and Timperley (Citation2007) highlight the relevance of different levels of feedback that should be addressed and detail feedback questions that direct learners to improve their performance or understanding, as discussed below.

Feedback can address four different levels: the task, the process, self-regulation or the learner’s self. Depending on the level of feedback, it can be more or less informative for learners. Feedback at the task level refers to task completion and is one of the most common forms of feedback (van den Bergh, Ros, and Beijaard Citation2013). It provides information about the correctness of a solution as a whole and sometimes about which parts of the task processing were correct or incorrect (Henderson et al. Citation2019). The effectiveness of task feedback can be enhanced when it includes additional cues addressing the process or self-regulatory levels (Brooks et al. Citation2019). Process-level feedback entails detailed suggestions and instructions on how to perform better on similar tasks. Feedback at the self-regulatory level provides recommendations concerning self-regulatory strategies, such as optimising time management and monitoring one’s learning process (Nicol and Macfarlane‐Dick Citation2006). In contrast, feedback at the level of the self has been shown to be less effective as it is not directly related to the specific performance or learning goal at stake (Hattie and Timperley Citation2007; Kluger and DeNisi Citation1996; Wisniewski, Zierer, and Hattie Citation2019). Commonly, such feedback refers to attributes of the person (‘You are a great student’, ‘That’s an intelligent response, well done’) and does not provide constructive cues on how to improve.

Each of these four levels can address present, past and future performances and thus answer three specific feedback questions (Hattie and Timperley Citation2007): ‘Where am I going?’, ‘How am I going?’, and ‘Where to next?’. These questions inform learners about the current level of performance in relation to the intended learning goal (feed up), to a previous level of performance (feed back) and to future challenges (feed forward). Feedback at the task level can refer to a past performance, point to the current goal achievement and include future goals (e.g., ‘Your underlining is now better implemented [feed back]. You marked all the important words in the text [feed up]. In the future, the goal will be to generate single paragraphs of meaning from your underlining [feed forward]’).

In summary, the discussed model highlights the importance of defining precise learning goals to guide learners (Brooks et al. Citation2019; Hattie and Timperley Citation2007; Shute Citation2008). In contrast, mere praise often fails to provide such informative feedback as it commonly represents a general form of evaluation without any goal orientation (‘You’re a good student!’). Such assessment primarily targets the level of the self and sometimes the level of the task (‘Good job!’). It is therefore less productive than precise feedback in guiding learners to improve their performance and learning (Hattie and Timperley Citation2007; Kluger and DeNisi Citation1996).

Learning about effective feedback: a case of knowledge integration and revision

To provide effective feedback, (future) teachers need to be aware of the conditions that make feedback informative to learners. Yet they seldom start with blank slates when it comes to feedback. Therefore, learning about effective feedback needs to be framed from a knowledge integration and revision perspective that addresses learners’ (potentially mistaken) prior beliefs and knowledge (Britt and Sommer Citation2004; Chi Citation2008; Kendeou et al. Citation2019; Lehmann, Rott, and Schmidt-Borcherding Citation2019). Because feedback is an everyday phenomenon, student teachers often have some prior assumptions about alleged feedback rules, such as the sandwich feedback method (i.e. critical feedback should be embedded between two instances of positive feedback; von Bergen, Bressler, and Campbell Citation2014). They may have also picked up and adopted feedback practices from in-service teachers during the internship phases of their teacher training or from university lecturers (Hobson et al. Citation2009; Lofthouse Citation2018). However, these ‘feedback rules’ do not necessarily correspond with knowledge from educational research (Gigante, Dell, and Sharkey Citation2011; James Citation2015) and may not lead to the desired effects (Molloy et al. Citation2020). Furthermore, numerous misconceptions exist about how to provide effective feedback, such as the assumption that non-specific praise (e.g., ‘You’re so smart’) has positive effects on children’s motivation and self-esteem (Brummelman, Crocker, and Bushman Citation2016). In fact, children seem to attribute non-specific praise internally, leading them to choose easier tasks to avoid internal attribution of failure (Baumeister, Tice, and Hutton Citation1989; Pomerantz and Kempner Citation2013). Accordingly, student teachers can be expected to have widely varying levels of more or less profound knowledge about feedback. Interventions to foster knowledge and skills to provide effective feedback, therefore, need to take into account and potentially correct such partial and possibly questionable understandings (see Chi Citation2008). To this end, we draw on theoretical approaches to knowledge integration and knowledge revision (Britt and Sommer Citation2004; Chi Citation2008; Kendeou et al. Citation2019; Lehmann, Rott, and Schmidt-Borcherding Citation2019), which acknowledge the need to incorporate and/or revise prior knowledge and held assumptions.

Updating existing knowledge about feedback with new information can be seen as a process of knowledge integration. To foster knowledge integration, the targeted activation of prior assumptions through macro-structure focusing tasks has proven to be effective (Britt and Sommer Citation2004). Macro-structure tasks focus on information in a text that describes global relations and reasoning contexts instead of on detailed information (Britt and Sommer Citation2004). Therefore, they pose elaborative questions, such as what and why something happened, instead of narrower ones that can be answered with a single word or number. Britt and Sommer (Citation2004) argued that structuring a text through macro-structure focusing tasks facilitates the integration of new knowledge because it can be grasped in a more structured and efficient manner. Knowledge integration can further be supported by integration prompts (Lehmann, Rott, and Schmidt-Borcherding Citation2019) that provide learners with explicit cues to look for possible overlaps, complements and reasoning connections between different bodies of knowledge. If students’ prior knowledge and assumptions about feedback are incompatible with the new knowledge, a process of knowledge revision is required, which takes a form similar to knowledge integration (Kendeou et al. Citation2019). According to Kendeou et al. (Citation2019), three conditions foster the initiation of knowledge revision. First, prior assumptions must be activated simultaneously with the correct new information (coactivation) to highlight inconsistencies between them (Kendeou and van den Broek Citation2007; Kendeou, Muis, and Fulton Citation2011; van den Broek and Kendeou Citation2008). Second, the new information must be integrated with the previously misconceived knowledge to update its long-term memory representation (integration; O’Brien, Cook, and Guéraud Citation2010; Zwaan and Madden Citation2004). Third, the frequent activation of the new, correct information gradually weakens the propensity to activate false prior assumptions (competing activation; Kendeou, Smith, and O’Brien Citation2013, Citation2014; McNamara and McDaniel Citation2004; van Boekel et al. Citation2017). In the present study, we applied these instructional concepts to foster students’ learning of effective feedback principles.

Learning about effective feedback as an exemplar of using research for teaching

Though the primary purpose of our study was to enhance student teachers’ knowledge and ability to provide effective feedback, we were also interested in whether learning research-based knowledge of this topic would enhance participants’ positive attitudes towards educational research and its use for teachers. In the literature on evidence-based practice and research-based teacher education, favourable attitudes towards educational research and its perceived usefulness have been highlighted as crucial drivers of its use (e.g., Diery et al. Citation2020; Heitink et al. Citation2016; Kippers et al. Citation2018; Thomm, Sälzer, et al. Citation2021; van Schaik et al. Citation2018). First, conceptions of research-based teacher education rely on the notion that the contents of teacher education need to be supported by pertinent theory and evidence. This includes addressing misconceptions and questionable beliefs about teaching and learning that students may carry by confronting them with research-based knowledge. Second, engaging with research-based knowledge may foster pre-service teachers’ knowledge, skills and attitudes in becoming competent and critical recipients of research-based knowledge as a prerequisite of lifelong professional development. These two perspectives also provide a basis for teachers to take a more active role in enriching their practice on the grounds of their own classroom-based research (Hammersley Citation1993; Stenhouse Citation1975; Westbroek et al. Citation2022). Though this latter teacher-as-researcher perspective provides a more encompassing view and is prominent in some countries (e.g., Niemi Citation2008), the focus of the present paper is more strongly on (pre-service) teachers as recipients and users of research-based knowledge.

Unfortunately, students and in-service teachers often perceive research-based knowledge as irrelevant to their practice (Cain Citation2016; van Schaik et al. Citation2018). Moreover, they tend to dismiss research when its results conflict with their prior assumptions (Thomm, Gold, et al. Citation2021). Teaching and teacher education have long been criticised for being based more strongly on tradition, ideology, conventional wisdom and personal experience and belief than on scientific knowledge about learning and teaching (e.g., Cain Citation2016; Schildkamp and Kuiper Citation2010; van Schaik et al. Citation2018). To increase the likelihood that individuals engage in a particular activity (i.e. the use of research-based knowledge in the present case), it is vital to increase the perceived value of that activity (Eccles and Wigfield Citation2002). In this sense, utility-value interventions aim to improve one’s attitude towards an object by making its usefulness for one’s everyday practice salient (Hulleman and Harackiewicz Citation2021; Rosenzweig et al. Citation2019). If student teachers experience that scientific knowledge is superior to their own presuppositions or conventional knowledge, it can be assumed that the aforementioned negative tendencies will be counteracted. That is, if student teachers experience that research knowledge can offer helpful insights into issues relevant to their teaching, they may (re)consider the relevance and usefulness of educational research in relation to their own professional practice (Eccles and Wigfield Citation2002). Even short-term utility-value interventions can be expected to have an impact on attitudinal variables (Hulleman et al. Citation2010; Rochnia and Gräsel Citation2022; Zeeb et al. Citation2019). As argued above, we believe that feedback research provides a particularly suitable example in this regard as knowledge of effective feedback is directly applicable to teachers’ everyday classroom practice. Moreover, the principles derived from feedback research can be used to analyse and improve existing practice (Kirkhart Citation2000; Weiss Citation1998). Even if research-based knowledge about feedback may conflict with students’ prior assumptions, it is relatively easy to recognise its superiority to common simple feedback rules – a crucial condition for knowledge revision (Chi Citation2008). Therefore, as a second aim, we explored whether the designed intervention positively affects student teachers’ attitudes towards educational research and its usefulness.

The present study

The present contribution describes the development and evaluation of an intervention that followed the discussed instructional principles to promote student teachers’ knowledge of and ability to formulate effective feedback. As a secondary objective, the intervention aimed to improve student teachers’ attitudes towards educational research by illustrating its usefulness. The intervention was embedded as a companion course to a compulsory half-year school internship in a master’s teacher training programme. In this context, student teachers could learn about effective feedback and relate it to their internship. By means of a quasi-experimental evaluation design, we aimed to test whether the intervention increased participants’ knowledge about effective feedback (Research Question 1a) and their ability to provide it to students (Research Question 1b). We assumed that participants would more often formulate feedback conducive to learning (i.e. refer more often to the different levels and feedback questions and make references to learning objectives). Furthermore, we expected the students to make less use of praise and feedback for the learner’s self. We also looked at whether the intervention improved the participants’ attitudes towards educational research findings (Research Question 2) and assumed it would do so.

Methods

Sample and study design

Overall, N = 141 student teachers (78.0% female; age: M = 24.08 years, SD = 1.70) participated in the study. All participants were in the last semester of their master’s degree programme for teaching in primary or secondary schools. We implemented a quasi-experimental pre-post control group design. Overall, n = 63 participants were placed in the intervention group (74.6% female; age: M = 24.46 years, SD = 2.13) and n = 78 participants in the control group (80.8% female; age: M = 23.77 years, SD = 1.18). The control group participated in an alternative learning environment that did not offer any content on the topic of feedback during the same period as the intervention group. Dependent variables were (i) participants’ knowledge about effective feedback, (ii) their ability to provide effective feedback and (iii) their attitudes towards educational research findings. Note that, for practical reasons, only participants in the intervention group could be asked to provide written feedback on a student’s performance (i.e. our ability measure). Therefore, the design did not allow for a comparison of the growth in ability between the experimental groups. Nevertheless, the written feedback enabled us to analyse how participants implemented features of effective feedback and how this changed over the course of the intervention.

Intervention

The intervention was implemented as a four-week course that accompanied the internship (delivered online because of the COVID-19 pandemic). Each week, participants attended their assigned school for four days and the online course for one day. Each online session lasted about 90 minutes, and students received additional coursework to be done at home. Below is a detailed description of the course programme; Table 1 summarises the goals and tasks for each week.

Table 1. Overview of goals and tasks for each week.

Week 1

The aim of the first week was to help participants activate and structure their prior assumptions about effective feedback. They were tasked with writing a first draft of feedback on a student’s performance and instructed to reflect on and explicate why they had formulated their feedback in this specific manner.

Following the idea of macro-structure focusing tasks (Britt and Sommer Citation2004), the guiding questions for this reflection were intended to support participants in structuring their prior assumptions and integrating newly read content into their knowledge structures. As a result, participants were expected to be prepared to grasp the subsequent research-based materials on effective feedback in a more focused and efficient manner. This activity also served as a preparatory task for the integration prompts and the coactivation of prior (possibly false) assumptions and new knowledge in week 4.

Week 2

The aim of the second week was to build up and ensure a shared understanding of the educational research findings presented in two complementary educational research textbooks (Lipowsky Citation2015; Weckend, Schatz, and Zierer Citation2019).

Week 3

The third week aimed to encourage participants to use their new knowledge to provide effective feedback. We asked participants to write another feedback response and provide a rationale for it, as we did in week 1. This time, participants were instructed to draw on the research they had read to formulate their feedback.

Week 4

We designed the final session to help participants reflect on and integrate their prior assumptions with the new research knowledge. As can be seen in Table 1, we used three integration prompts (Lehmann, Rott, and Schmidt-Borcherding Citation2019) to help participants identify connections and conflicting information between their prior assumptions and the research knowledge concerning effective feedback.

These prompts supported participants with correct prior assumptions in identifying overlaps and additional knowledge about effective feedback (Lehmann, Rott, and Schmidt-Borcherding Citation2019) and allowed participants with incorrect prior assumptions to directly contrast (coactivation, Kendeou et al. Citation2019) and update (integration) these assumptions with correct knowledge.

Measures

Knowledge about effective feedback

To measure the participants’ knowledge about effective feedback, we constructed a test with 15 single- or multiple-choice items, including essential components of the model developed by Hattie and Timperley (Citation2007). These items covered knowledge about the levels of feedback, feedback questions, role of praise and importance of learning objectives in feedback. Points were awarded for each correct answer (i.e. two points for a multiple-choice task with two correct answers), with a maximum possible score of 27 points. Though five items proved to be quite easy, we decided to retain them to ensure that the test covered all theoretical aspects of effective feedback. Readers should note that, similar to many other knowledge tests, our test cannot be considered to measure a homogeneous construct; instead, it was designed to represent the most important aspects of the knowledge domain of interest (cf. Stadler, Sailer, and Fischer Citation2021; Taber Citation2018). Because it has been highlighted that reliability indices, such as Cronbach’s alpha, are inappropriate in such cases (Stadler, Sailer, and Fischer Citation2021), we do not report them here.
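To make the described scoring rule concrete, a minimal, hypothetical sketch is given below (not the authors’ scoring script): one point is counted per correctly selected option, so a multiple-choice item with two correct options contributes up to two points. The item structure and option labels are invented for illustration.

```python
def score_item(selected, correct):
    """Count the correctly selected options for one item."""
    return len(set(selected) & set(correct))

def score_test(answers, key):
    """Sum points over all items; `answers` and `key` map item ids
    to the sets of selected and correct options, respectively."""
    return sum(score_item(answers.get(item, set()), correct)
               for item, correct in key.items())

# Hypothetical two-item example: item 1 is single choice, item 2 is
# multiple choice with two correct options.
key = {1: {"b"}, 2: {"a", "c"}}
answers = {1: {"b"}, 2: {"a"}}
print(score_test(answers, key))  # 2 of 3 possible points
```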

Ability to provide effective feedback

We used participants’ written feedback from weeks 1 and 3 as a measure of their ability to provide effective feedback. As not every participant gave permission to analyse their written feedback, 42 feedback responses per time of measurement were analysed (overall 84 documents). Following the model by Hattie and Timperley (Citation2007), we developed a coding scheme (see Supplement) that contained four top-level categories: level of feedback, feedback question, learning goal focus, and praise. The categories level of feedback and feedback question were separated into subcategories related to the different levels of feedback (task, process, self-regulatory and self) and different feedback questions (feed back, feed up and feed forward), respectively.

When a feedback response entailed at least one mention of a respective category, we coded 1; if there was no mention in the whole feedback text, we coded 0 for that category. Three independent raters coded 30 of the 84 feedback texts (35.7%). In a first step, the coding segments were determined. Second, each rater assigned these segments to the categories described above. Third, the rules for when a segment should be assigned to each respective category were discussed and adjusted where necessary. The process resulted in an average chance-adjusted inter-rater agreement of κ = .78 (Brennan and Prediger Citation1981) between the three raters.
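As an illustration of this agreement statistic, the following sketch computes Brennan and Prediger’s (Citation1981) chance-adjusted kappa for binary presence/absence codes and averages it over all rater pairs. The data and function names are hypothetical, not the study’s materials.

```python
from itertools import combinations

def brennan_prediger(codes_a, codes_b, q=2):
    """Brennan & Prediger (1981) kappa: (P_o - 1/q) / (1 - 1/q),
    where q is the number of categories (2 for presence/absence)."""
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)
    return (p_o - 1 / q) / (1 - 1 / q)  # chance level is uniform: 1/q

def average_pairwise_kappa(ratings, q=2):
    """Average kappa over all rater pairs; `ratings` holds one list of
    category codes per rater, coding the same segments in order."""
    pairs = list(combinations(ratings, 2))
    return sum(brennan_prediger(a, b, q) for a, b in pairs) / len(pairs)

# Hypothetical example: three raters, binary codes for ten segments.
r1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
r2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
r3 = [1, 1, 1, 1, 0, 1, 0, 1, 1, 0]
print(round(average_pairwise_kappa([r1, r2, r3]), 2))  # -> 0.73
```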

Attitudes towards educational research findings

We measured attitudes towards the use of educational research findings at three levels, as proposed by Weiss (Citation1998). We asked participants whether educational research findings help change, understand and justify teaching practices and pedagogical phenomena. Two scales with five items on changing and justifying teaching practices and pedagogical phenomena were adapted from Haberfellner (Citation2016). Moreover, we constructed a scale for understanding teaching practices and pedagogical phenomena. All items were rated on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Table 2 shows the sample items and reliability values.

Table 2. Sample items and reliability values of the attitudes towards educational research findings scales.

Results

Knowledge about effective feedback and ability to provide it

Regarding Research Question 1a, Table 3 summarises the descriptive statistics for the test of knowledge about effective feedback. Descriptively, the intervention group scored higher than the control group. We tested this in a mixed repeated-measures ANOVA with the within-subjects factor time (pre-test vs. post-test) and the between-subjects factor intervention (intervention group vs. control group). Indeed, the statistically significant interaction, F(1, 139) = 49.81, p < .001, η2 = .08, revealed that the intervention group gained more knowledge than the control group. A Cohen’s d of 1.21 indicates that this is a large effect. We also observed a statistically significant main effect of time, F(1, 139) = 62.69, p < .001, η2 = .10, which is almost entirely due to the knowledge gains in the intervention condition. The main effect of the intervention was non-significant, F(1, 139) = 2.48, p = .118, η2 = .01.

Table 3. Descriptive statistics of the feedback test.
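For readers who want to run this type of analysis, the following sketch shows one way a 2 (time: pre vs. post) × 2 (group: intervention vs. control) mixed repeated-measures ANOVA can be computed with the Python package pingouin. The data file and column names are assumptions for illustration, not the study’s actual analysis code.

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant and measurement occasion.
# Assumed columns: 'subject' (id), 'group' ('intervention'/'control'),
# 'time' ('pre'/'post'), 'score' (knowledge test, 0-27 points).
df = pd.read_csv("feedback_knowledge.csv")  # hypothetical file

# Within-subjects factor: time; between-subjects factor: group.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group")
# The time x group interaction row corresponds to the intervention effect.
print(aov[["Source", "F", "p-unc", "np2"]])
```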

To investigate learning gains in the ability to provide effective feedback in the intervention group (Research Question 1b), we analysed which characteristics of effective feedback participants considered in the written feedback activities in weeks 1 and 3. We expected an increase in the levels of feedback, feedback questions and references to learning goals. The intervention was also expected to reduce feedback at the level of the self and the amount of praise. Table 4 shows the average relative frequencies as well as the results of dependent-sample t-tests.

Table 4. Characteristics of the written feedback.
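The dependent-sample t-tests in Table 4 compare, per category, the 0/1 codes of the same participants’ week 1 and week 3 feedback. A minimal sketch with invented indicator data might look as follows; the variable names and values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 indicators per participant: does the written feedback
# mention a learning goal in week 1 vs. week 3?
goal_week1 = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
goal_week3 = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 0])

# Paired t-test on the same participants across the two occasions.
t, p = stats.ttest_rel(goal_week3, goal_week1)
print(f"t = {t:.2f}, p = {p:.3f}")
```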

We first analysed the extent to which participants considered the different levels of feedback. As Table 4 shows, on both occasions, participants mainly focused their feedback on the task and process levels. For example, 98% of the participants’ feedback responses addressed the task level. The results also showed that the relative focus on the different levels remained largely unchanged across the intervention. The descriptively apparent reduction in self-related feedback and the slight increase in mentions of self-regulation were non-significant.

In contrast, we found changes in how the participants utilised the feedback questions. There were statistically significant increases in the frequencies of references to students’ past performances (feed back) and of mentioning learning goals. There was also a slight, though non-significant, decrease in the use of general praise. Feed-up information on the comparisons between the student’s current and intended performance was present in almost all written feedback on both occasions. Finally, feed-forward information was present in about a third of the feedback texts, and this proportion did not change over the course of the intervention.

Attitudes towards educational research knowledge

Table 5 shows descriptive statistics as well as the results of mixed repeated-measures ANOVAs on the three scales measuring attitudes towards research use. Overall, the participants reported relatively positive attitudes, with mean values in the upper half of the answer scale. Across time, only very small differences occurred in these values. With the exception of the effect of time on the scale measuring the usefulness of educational research knowledge in justifying teaching practices, none of the ANOVA test results were statistically significant.

Table 5. Means, standard deviations, and mixed repeated-measures ANOVAs on the attitudes towards educational research knowledge.

Discussion

As feedback is one of the most influential factors in supporting learning processes (Hattie and Timperley Citation2007), it is important that student teachers learn what makes feedback effective. The present study evaluated an intervention that aimed to impart knowledge about feedback and to improve student teachers’ attitudes towards educational research findings in general. In the following, we discuss our findings in relation to the two research questions and reflect on the limitations of the study.

Fostering knowledge about feedback and the ability to provide effective feedback (RQ 1)

The results for research question 1 showed, first, that knowledge of effective feedback significantly increased in the intervention group in contrast to the control group. Hence, educational research knowledge can be an important source for student teachers’ learning about pedagogical phenomena during an internship, supplementing the practical knowledge of the supervising teachers (Hobson et al. Citation2009; Lofthouse Citation2018). Even though providing feedback is a commonplace pedagogical activity (Drake and Nelson Citation2021), the stability of the control group’s declarative knowledge indicates that knowledge gains cannot necessarily be expected from internship activities alone. Rather, the results indicate that student teachers need a purposeful and research-based learning context to increase their knowledge of effective feedback. These results are also in line with our proposition that student teachers already possess varying degrees of prior knowledge about effective feedback that they bring to the intervention. The relatively high baseline performance of both groups on the feedback test indicated that participants – either informally or through other courses – had already acquired some knowledge of effective feedback that aligned with the existing body of research. For example, almost all participants were aware that feedback should not primarily refer to learners’ weaknesses and should be formulated as specifically as possible. However, we also found non-negligible use of self-related feedback and praise, which research has shown to be less effective than other feedback forms. This finding indicates that when teaching about feedback, considering participants’ prior assumptions is crucial to facilitating the integration of new knowledge (Britt and Sommer Citation2004) and overcoming false prior assumptions (Kendeou et al. Citation2019). As this study shows, participants’ explication of prior knowledge and assumptions as well as comparison to research-based knowledge are helpful instructional strategies for this purpose.

Consistent with other research (van den Bergh, Ros, and Beijaard Citation2013), our analyses of the participants’ written feedback suggested that they mainly addressed the task and process levels. Because both levels showed ceiling effects in the pre-test, there was little potential for further improvement through the intervention. Feedback related to self-regulatory strategies occurred relatively rarely and did not change over the intervention. The reason could be that to recognise and support self-regulation, teachers need specific pedagogical knowledge that the intervention did not address, for example about metacognition, self-regulation and direct self-regulation strategies (Dignath-van Ewijk and Van der Werf Citation2012; Geduld Citation2019). Thus, pre-service teachers would need more information or special training on self-regulation strategies (e.g., Dignath Citation2021) to get ideas for their feedback on the self-regulation level. Similarly, in terms of feedback questions, the amount of feed-forward information did not significantly change; however, pre-service teachers addressed the feed-back perspective significantly more often. Feedback on students’ progress may be easier to provide than advice on how students can proceed with the next task, their learning process or their self-regulation, for which specific pedagogical and pedagogical content knowledge is necessary.

In addition, there was a clear focus on feed-up, with almost all pre-service teachers incorporating it into their feedback in the pre-test. In contrast, none of the week 1 feedback responses contained any reference to learning goals, even though such feedback is crucial to guiding students’ attention to what they are expected to achieve. While our intervention effectively increased student teachers’ awareness of the value of including references to learning goals, only about 20% of the feedback texts included such information. Overall, the intervention successfully enhanced students’ ability to include important aspects of effective feedback but left substantial room for improvement. Potentially, a longer-term intervention that repeats the cycles of writing and reflecting on feedback from the perspective of feedback models would be more effective than the current intervention in this regard.

Regarding questionable prior assumptions, the intervention only led to a descriptive tendency to reduce misconceptions, for example about the use of global praise and self-related feedback. This may be seen as surprising given that the intervention explicitly conveyed that such practices do not contribute to effective feedback (Brummelman, Crocker, and Bushman Citation2016). However, it is also in line with evidence from research on many topics showing that misconceptions may be hard to change, even if they are expressly debunked (Kendeou et al. Citation2019). In most cases, misconceptions cannot simply be replaced by new, correct knowledge (Chi Citation2008); they continue to exist and can be activated simultaneously with new knowledge (Kendeou et al. Citation2019). For example, our participants apparently still drew on their own prior assumptions about the importance of praise when writing feedback (Dagenais et al. Citation2012; Schildkamp and Kuiper Citation2010). Potentially, they still believed that including such information does not hurt, even though it does not enhance feedback efficacy, and therefore adhered to common ‘rules’ that feedback should contain positive aspects. Hence, it may be worthwhile to state more explicitly that praise and self-related feedback can indeed harm the effectiveness of feedback, as they draw students’ attention to aspects that are not conducive to improving their learning.

Fostering attitudes towards educational research knowledge (RQ 2)

Concerning Research Question 2, we assumed that a research-based feedback intervention could provide a Trojan horse approach to fostering student teachers’ attitudes towards the usefulness of educational research for their classroom practice. As discussed above, research shows that it is notoriously difficult to increase teachers’ research use and related attitudes (e.g. van Schaik et al. Citation2018), despite many calls to make teaching more evidence-based (e.g. Rousseau and Gunia Citation2016). On the basis of research on utility-value interventions (Hulleman and Harackiewicz Citation2021; Rosenzweig et al. Citation2019), we had hoped that offering student teachers a positive experience of using research-based knowledge to provide better feedback would help demonstrate the usefulness of research and, thus, improve participants’ attitudes towards it. However, this attempt clearly failed, as there were no effects on any of the investigated aspects of attitudes towards research use. One potential reason may be that the student teachers already had relatively positive attitudes towards educational research findings (cf. Thomm, Gold, et al. Citation2021), so the single experience of the intervention did not make a salient difference. Alternatively, student teachers may not have transferred the experience of the intervention to the usefulness of educational research findings in general, or they may have seen the topic of feedback as an exception. Even though the described approach did not work in the present study, we still assume that providing positive experiences of applying research knowledge to school-related problems can help foster student teachers’ orientations towards educational research (cf. Hulleman et al. Citation2010; Rochnia and Gräsel Citation2022; Zeeb et al. Citation2019). Future studies might make such experiences more explicit and dedicate more time to reflective activities.

Limitations

Beyond the already-mentioned shortcomings, we acknowledge several limitations of our study. First, in the framework of Kendeou et al. (Citation2019), competing activation is one of the necessary conditions for correcting false prior assumptions and involves new knowledge being activated more frequently in future situations than prior incorrect assumptions. Unfortunately, it was not possible in this study to re-examine the participants in a follow-up session. Thus, it remains unclear whether the learned content led to a longer-term correction of false prior assumptions. It would be interesting for future studies to determine the lasting impact of knowledge revision in an internship context.

Second, despite the important role of supervising teachers in student teachers’ internship experiences (Hobson et al. Citation2009; Lofthouse Citation2018), we could not take their assumptions concerning effective feedback into account. In the intervention, we only contrasted individual prior assumptions with educational research knowledge. It would be interesting to examine the extent to which changes in student teachers’ prior assumptions are moderated by the assumptions of supervising teachers, as feedback is an everyday practice for teachers and, therefore, likely to be a topic of discussion during an internship.

Third, the ability to provide effective feedback could only be measured in the intervention group. It is possible that the changes in ability would also have occurred in the control group. However, given that the control group did not demonstrate any changes in knowledge about feedback, we consider this possibility unlikely.

Despite these limitations, we are convinced that the present study contributes to a better understanding of how intervention designs can foster (student) teachers’ knowledge and ability to provide effective feedback to their students (Ha and Murray Citation2021; Knochel et al. Citation2022; Mrachko, Kostewicz, and Martin Citation2017). In particular, we believe that knowledge integration and revision approaches (Kendeou et al. Citation2019) offer a theoretical perspective that could be incorporated more systematically into this field of enquiry.

Ethics compliance

The authors declare that the work described has been carried out in accordance with The Code of Ethics of the American Psychological Association and the World Medical Association (Declaration of Helsinki) for research involving humans. No formal approval from an ethics board was required at the time of the data collection.


Disclosure statement

No potential conflict of interest was reported by the authors.

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/02619768.2024.2338841

Additional information

Funding

This work is part of the “Qualitätsoffensive Lehrerbildung”, a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training. The programme is funded by the Federal Ministry of Education and Research. The authors are responsible for the content of this publication [grant number 01JA1904].

Notes on contributors

Thomas Bock

Thomas Bock is a research associate at the University of Erfurt, Germany. His current research interests include teacher education research, student teachers’ attitudes and research-based practice in teaching.

Eva Thomm

Eva Thomm is a postdoctoral researcher at the chair of Educational Research and Methodology, University of Erfurt, Germany. Her research focuses on scientific reasoning, argumentation, sourcing and science communication. Particularly, she is interested in investigating how to support non-experts in understanding and engaging with scientific evidence in their daily lives and in professional contexts.

Johannes Bauer

Johannes Bauer is full professor of educational research and methodology at the University of Erfurt, Germany. His research interests include the communication and reception of scientific knowledge, research on teachers and teacher education, medical education, and digitalisation in higher education.

Bernadette Gold

Bernadette Gold is professor for school pedagogy and general didactics for primary and lower secondary school at TU Dortmund University, Germany. Her main research interests are teachers’ pedagogical-psychological competencies and how they can be promoted in teacher education with the use of classroom videos. Furthermore, her research focuses on student-teacher interactions and research-based teacher education.

References

  • Alfieri, L., P. J. Brooks, N. J. Aldrich, and H. R. Tenenbaum. 2011. “Does Discovery-Based Instruction Enhance Learning?” Journal of Educational Psychology 103 (1): 1–18. https://doi.org/10.1037/a0021017.
  • Bangert-Drowns, R. L., C. L. C. Kulik, J. A. Kulik, and M. Morgan. 1991. “The Instructional Effect of Feedback in Test-Like Events.” Review of Educational Research 61 (2): 213–238. https://doi.org/10.3102/00346543061002213.
  • Bauer, J., and M. Prenzel. 2012. “Science Education. European Teacher Training Reforms.” Science 336 (6089): 1642–1643. https://doi.org/10.1126/science.1218387.
  • Baumeister, R. F., D. M. Tice, and D. G. Hutton. 1989. “Self-Presentational Motivations and Personality Differences in Self-Esteem.” Journal of Personality 57 (3): 547–579. https://doi.org/10.1111/j.1467-6494.1989.tb02384.x.
  • Brennan, R. L., and D. J. Prediger. 1981. “Coefficient Kappa: Some Uses, Misuses, and Alternatives.” Educational and Psychological Measurement 41 (3): 687–699. https://doi.org/10.1177/001316448104100307.
  • Britt, M. A., and J. Sommer. 2004. “Facilitating Textual Integration with Macrostructure Focusing Tasks.” Reading Psychology 25 (4): 313–339. https://doi.org/10.1080/02702710490522658.
  • Brooks, C., A. Carroll, R. M. Gillies, and J. Hattie. 2019. “A Matrix of Feedback for Learning.” Australian Journal of Teacher Education 44 (4): 14–32. https://doi.org/10.14221/ajte.2018v44n4.2.
  • Brummelman, E., J. Crocker, and B. J. Bushman. 2016. “The Praise Paradox: When and Why Praise Backfires in Children with Low Self-Esteem.” Child Development Perspectives 10 (2): 111–115. https://doi.org/10.1111/cdep.12171.
  • Cain, T. 2016. “Research Utilisation and the Struggle for the Teacher’s Soul: A Narrative Review.” European Journal of Teacher Education 39 (5): 616–629. https://doi.org/10.1080/02619768.2016.1252912.
  • Carless, D. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31 (2): 219–233. https://doi.org/10.1080/03075070600572132.
  • Chi, M. T. H. 2008. “Three Types of Conceptual Change: Belief Revision, Mental Model Transformation, and Categorical Shift.” In Handbook of Research on Conceptual Change, edited by S. Vosniadou, 61–82. Hillsdale, NJ: Erlbaum.
  • Dagenais, C., L. Lysenko, P. C. Abrami, R. M. Bernard, J. Ramde, and M. Janosz. 2012. “Use of Research-Based Information by School Practitioners and Determinants of Use: A Review of Empirical Research.” Evidence & Policy: A Journal of Research, Debate & Practice 8 (3): 285–309. https://doi.org/10.1332/174426412x654031.
  • Datnow, A., and L. Hubbard. 2016. “Teacher Capacity for and Beliefs About Data-Driven Decision Making: A Literature Review of International Research.” Journal of Educational Change 17 (1): 7–28. https://doi.org/10.1007/s10833-015-9264-2.
  • Detrich, R., and T. Lewis. 2013. “A Decade of Evidence-Based Education: Where Are We and Where Do We Need to Go?” Journal of Positive Behavior Interventions 15 (4): 214–220. https://doi.org/10.1177/1098300712460278.
  • Diery, A., F. Vogel, M. Knogler, and T. Seidel. 2020. “Evidence-Based Practice in Higher Education: Teacher educators’ Attitudes, Challenges, and Uses.” Frontiers in Education 5:62. https://doi.org/10.3389/feduc.2020.00062.
  • Dignath, C. 2021. “For Unto Every One That Hath Shall Be Given: Teachers’ Competence Profiles Regarding the Promotion of Self-Regulated Learning Moderate the Effectiveness of Short-Term Teacher Training.” Metacognition and Learning 16 (3): 555–594. https://doi.org/10.1007/s11409-021-09271-x.
  • Dignath-van Ewijk, C., and G. Van der Werf. 2012. “What Teachers Think About Self-Regulated Learning: Investigating Teacher Beliefs and Teacher Behavior of Enhancing students’ Self-Regulation.” Education Research International 2012:1–10. https://doi.org/10.1155/2012/741713.
  • Drake, K. R., and G. Nelson. 2021. “Natural Rates of Teacher Praise in the Classroom: A Systematic Review of Observational Studies.” Psychology in the Schools 58 (12): 2404–2424. https://doi.org/10.1002/pits.22602.
  • Eccles, J. S., and A. Wigfield. 2002. “Motivational Beliefs, Values, and Goals.” Annual Review of Psychology 53 (1): 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153.
  • Geduld, B. 2019. “A Snapshot of teachers’ Knowledge and Teaching Behaviour to Develop Self-Regulated Learning.” Journal of Education 77 (77): 60–78. https://doi.org/10.17159/2520-9868/i77a04.
  • Gigante, J., M. Dell, and A. Sharkey. 2011. “Getting Beyond “Good Job”: How to Give Effective Feedback.” Pediatrics 127 (2): 205–207. https://doi.org/10.1542/peds.2010-3351.
  • Haberfellner, C. 2016. Der Nutzen von Forschungskompetenz im Lehramt. Eine Einschätzung aus der Sicht von Studierenden der Pädagogischen Hochschulen in Österreich. Bad Heilbrunn: Klinkhardt.
  • Hammersley, M. 1993. “On the Teacher as Researcher.” Educational Action Research 1 (3): 425–445. https://doi.org/10.1080/0965079930010308.
  • Ha, X. V., and J. C. Murray. 2021. “The Impact of a Professional Development Program on EFL teachers’ Beliefs About Corrective Feedback.” System 96:102405. https://doi.org/10.1016/j.system.2020.102405.
  • Hattie, J. 2009. Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. London: Routledge.
  • Hattie, J., and S. Clarke. 2019. Visible Learning: Feedback. Routledge. https://doi.org/10.4324/9780429485480.
  • Hattie, J., and H. Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77 (1): 81–112. https://doi.org/10.3102/003465430298487.
  • Heitink, M. C., F. M. Van der Kleij, B. P. Veldkamp, K. Schildkamp, and W. B. Kippers. 2016. “A Systematic Review of Prerequisites for Implementing Assessment for Learning in Classroom Practice.” Educational Research Review 17:50–62. https://doi.org/10.1016/j.edurev.2015.12.002.
  • Henderson, M., M. Phillips, T. Ryan, D. Boud, P. Dawson, E. Molloy, and P. Mahoney. 2019. “Conditions That Enable Effective Feedback.” Higher Education Research & Development 38 (7): 1401–1416. https://doi.org/10.1080/07294360.2019.1657807.
  • Hess, M., K. Werker, and F. Lipowsky. 2017. “Was wissen Lehramtsstudierende über gutes Feedback? Zur Erfassung konzeptuellen Wissens und zu dessen Zusammenhang mit der Selbsteinschätzung der Studierenden.” Jahrbuch für Allgemeine Didaktik 11–29.
  • Hobson, A. J., P. Ashby, A. Malderez, and P. D. Tomlinson. 2009. “Mentoring Beginning Teachers: What We Know and What We Don’t.” Teaching and Teacher Education 25 (1): 207–216. https://doi.org/10.1016/j.tate.2008.09.001.
  • Hulleman, C. S., O. Godes, B. Hendricks, and J. M. Harackiewicz. 2010. “Enhancing Interest and Performance with a Utility-Value Intervention.” Journal of Educational Psychology 102 (4): 880–895. https://doi.org/10.1037/a0019506.
  • Hulleman, C. S., and J. M. Harackiewicz. 2021. “The Utility-Value Intervention.” In Handbook of Wise Interventions, edited by G. M. Walton and A. J. Crum, 100–125. New York: Guilford Press.
  • James, I. A. 2015. “The Rightful Demise of the Sh*t Sandwich: Providing Effective Feedback.” Behavioural and Cognitive Psychotherapy 43 (6): 759–766. https://doi.org/10.1017/s1352465814000113.
  • Kendeou, P., R. Butterfuss, J. Kim, and M. van Boekel. 2019. “Knowledge Revision Through the Lenses of the Three-Pronged Approach.” Memory & Cognition 47 (1): 33–46. https://doi.org/10.3758/s13421-018-0848-y.
  • Kendeou, P., K. R. Muis, and S. Fulton. 2011. “Reader and Text Factors in Reading Comprehension Processes.” Journal of Research in Reading 34 (4): 365–383. https://doi.org/10.1111/j.1467-9817.2010.01436.x.
  • Kendeou, P., E. R. Smith, and E. J. O’Brien. 2013. “Updating During Reading Comprehension: Why Causality Matters.” Journal of Experimental Psychology: Learning, Memory, and Cognition 39 (3): 854–865. https://doi.org/10.1037/a0029468.
  • Kendeou, P., and P. van den Broek. 2007. “The Effects of Prior Knowledge and Text Structure on Comprehension Processes During Reading of Scientific Texts.” Memory & Cognition 35 (7): 1567–1577. https://doi.org/10.3758/bf03193491.
  • Kendeou, P., E. K. Walsh, E. R. Smith, and E. J. O’Brien. 2014. “Knowledge Revision Processes in Refutation Texts.” Discourse Processes 51 (5–6): 374–397. https://doi.org/10.1080/0163853x.2014.913961.
  • Kippers, W. B., C. H. Wolterinck, K. Schildkamp, C. L. Poortman, and A. J. Visscher. 2018. “Teachers’ Views on the Use of Assessment for Learning and Data-Based Decision Making in Classroom Practice.” Teaching and Teacher Education 75:199–213. https://doi.org/10.1016/j.tate.2018.06.015.
  • Kirkhart, K. E. 2000. “Reconceptualizing Evaluation Use: An Integrated Theory of Influence.” New Directions for Evaluation 2000 (88): 5–23. https://doi.org/10.1002/ev.1188.
  • Kluger, A. N., and A. DeNisi. 1996. “The Effects of Feedback Interventions on Performance: A Historical Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory.” Psychological Bulletin 119 (2): 254–284. https://doi.org/10.1037/0033-2909.119.2.254.
  • Knochel, A. E., K. S. C. Blair, D. Kincaid, and A. Randazzo. 2022. “Promoting Equity in teachers’ Use of Behavior-Specific Praise with Self-Monitoring and Performance Feedback.” Journal of Positive Behavior Interventions 24 (1): 17–31. https://doi.org/10.1177/1098300720951939.
  • Lee, I. 2009. “Ten Mismatches Between teachers’ Beliefs and Written Feedback Practice.” ELT Journal 63 (1): 13–22. https://doi.org/10.1093/elt/ccn010.
  • Lehmann, T., B. Rott, and F. Schmidt-Borcherding. 2019. “Promoting Pre-Service teachers’ Integration of Professional Knowledge: Effects of Writing Tasks and Prompts on Learning from Multiple Documents.” Instructional Science 47 (1): 99–126. https://doi.org/10.1007/s11251-018-9472-2.
  • Lipowsky, F. 2015. “Unterricht.” In Pädagogische Psychologie, edited by E. Wild and J. Möller, 69–101. Heidelberg: Springer.
  • Lofthouse, R. M. 2018. “Re-Imagining Mentoring As a Dynamic Hub in the Transformation of Initial Teacher Education: The Role of Mentors and Teacher Educators.” International Journal of Mentoring & Coaching in Education 7 (3): 248–260. https://doi.org/10.1108/ijmce-04-2017-0033.
  • McNamara, D. S., and M. McDaniel. 2004. “Suppressing Irrelevant Information: Knowledge Activation or Inhibition?” Journal of Experimental Psychology: Learning, Memory, and Cognition 30 (2): 465–482. https://doi.org/10.1037/0278-7393.30.2.465.
  • Molloy, E., R. Ajjawi, M. Bearman, C. Noble, J. Rudland, and A. Ryan. 2020. “Challenging Feedback Myths: Values, Learner Involvement and Promoting Effects Beyond the Immediate Task.” Medical Education 54 (1): 33–39. https://doi.org/10.1111/medu.13802.
  • Mrachko, A. A., D. E. Kostewicz, and W. P. Martin. 2017. “Increasing Positive and Decreasing Negative Teacher Responses to Student Behavior Through Training and Feedback.” Behavior Analysis: Research and Practice 17 (3): 250–265. https://doi.org/10.1037/bar0000082.
  • Nicol, D. J., and D. Macfarlane‐Dick. 2006. “Formative Assessment and Self‐Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218. https://doi.org/10.1080/03075070600572090.
  • Niemi, H. 2008. “Research-Based Teacher Education for Teachers’ Lifelong Learning.” Lifelong Learning in Europe 13 (1): 61–69.
  • O’Brien, E. J., A. E. Cook, and S. Guéraud. 2010. “Accessibility of Outdated Information.” Journal of Experimental Psychology: Learning, Memory, and Cognition 36 (4): 979–991. https://doi.org/10.1037/a0019763.
  • Pomerantz, E. M., and S. G. Kempner. 2013. “Mothers’ Daily Person and Process Praise: Implications for children’s Theory of Intelligence and Motivation.” Developmental Psychology 49 (11): 2040–2046. https://doi.org/10.1037/a0031840.
  • Rochnia, M., and C. Gräsel. 2022. “Can the Utility Value of Educational Sciences Be Induced Based on a Reflection Example or Empirical Findings—Or Just Somehow?” Frontiers in Education 7:1006079. https://doi.org/10.3389/feduc.2022.1006079.
  • Rosenzweig, E. Q., J. M. Harackiewicz, S. J. Priniski, C. A. Hecht, E. A. Canning, Y. Tibbetts, and J. S. Hyde. 2019. “Choose Your Own Intervention: Using Choice to Enhance the Effectiveness of a Utility-Value Intervention.” Motivation Science 5 (3): 269–276. https://doi.org/10.1037/mot0000113.
  • Rousseau, D. M., and B. C. Gunia. 2016. “Evidence-Based Practice: The Psychology of EBP Implementation.” Annual Review of Psychology 67 (1): 667–692. https://doi.org/10.1146/annurev-psych-122414-033336.
  • Scheeler, M. C., S. Budin, and A. Markelz. 2016. “The Role of Teacher Preparation in Promoting Evidence-Based Practice in Schools.” Learning Disabilities: A Contemporary Journal 14 (2): 171–187.
  • Schildkamp, K., and W. Kuiper. 2010. “Data-Informed Curriculum Reform: Which Data, What Purposes, and Promoting and Hindering Factors.” Teaching and Teacher Education 26 (3): 482–496. https://doi.org/10.1016/j.tate.2009.06.007.
  • Shute, V. J. 2008. “Focus on Formative Feedback.” Review of Educational Research 78 (1): 153–189. https://doi.org/10.3102/0034654307313795.
  • Stadler, M., M. Sailer, and F. Fischer. 2021. “Knowledge as a Formative Construct: A Good Alpha Is Not Always Better.” New Ideas in Psychology 60:100832. https://doi.org/10.1016/j.newideapsych.2020.100832.
  • Stenhouse, L. 1975. An Introduction to Curriculum Research and Development. London: Heinemann.
  • Taber, K. S. 2018. “The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education.” Research in Science Education 48 (6): 1273–1296. https://doi.org/10.1007/s11165-016-9602-2.
  • Thomm, E., B. Gold, T. Betsch, and J. Bauer. 2021. “When Student teachers’ Prior Beliefs Contradict Evidence from Educational Research.” British Journal of Educational Psychology 91 (3): 1055–1072. https://doi.org/10.1111/bjep.12407.
  • Thomm, E., C. Sälzer, M. Prenzel, and J. Bauer. 2021. “Predictors of teachers’ Appreciation of Evidence-Based Practice and Educational Research Findings.” Zeitschrift Für Pädagogische Psychologie 35 (2–3): 173–184. https://doi.org/10.1024/1010-0652/a000301.
  • van Boekel, M., K. Lassonde, E. J. O’Brien, and P. Kendeou. 2017. “Source Credibility and the Processing of Refutation Texts.” Memory & Cognition 45 (1): 168–181. https://doi.org/10.3758/s13421-016-0649-0.
  • van den Bergh, L., A. Ros, and D. Beijaard. 2013. “Teacher Feedback During Active Learning: Current Practices in Primary Schools.” British Journal of Educational Psychology 83 (2): 341–362. https://doi.org/10.1111/j.2044-8279.2012.02073.x.
  • van den Broek, P., and P. Kendeou. 2008. “Cognitive Processes in Comprehension of Science Texts: The Role of Co‐Activation in Confronting Misconceptions.” Applied Cognitive Psychology 22 (3): 335–351. https://doi.org/10.1002/acp.1418.
  • van Schaik, P., M. Volman, W. Admiraal, and W. Schenke. 2018. “Barriers and Conditions for teachers’ Utilisation of Academic Knowledge.” International Journal of Educational Research 90:50–63. https://doi.org/10.1016/j.ijer.2018.05.003.
  • Voerman, L., P. C. Meijer, F. A. Korthagen, and R. J. Simons. 2012. “Types and Frequencies of Feedback Interventions in Classroom Interaction in Secondary Education.” Teaching and Teacher Education 28 (8): 1107–1115. https://doi.org/10.1016/j.tate.2012.06.006.
  • von Bergen, C. W., M. S. Bressler, and K. Campbell. 2014. “The Sandwich Feedback Method: Not Very Tasty.” Journal of Behavioral Studies in Business 7:1–13.
  • Weckend, D., C. Schatz, and K. Zierer. 2019. “Ich gebe und fordere Rückmeldung. – Feedback in der Unterrichtspraxis.” In Feedback in der Unterrichtspraxis, edited by M. Vierbuchen and F. Bartels, 19–39. Stuttgart: Kohlhammer.
  • Weiss, C. H. 1998. “Have We Learned Anything New About the Use of Evaluation?” American Journal of Evaluation 19 (1): 21–33. https://doi.org/10.1177/109821409801900103.
  • Westbroek, H., F. Janssen, I. Mathijsen, and W. Doyle. 2022. “Teachers as Researchers and the Issue of Practicality.” European Journal of Teacher Education 45 (1): 60–76. https://doi.org/10.1080/02619768.2020.1803268.
  • Wisniewski, B., K. Zierer, and J. Hattie. 2019. “The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research.” Frontiers in Psychology 10:3087. https://doi.org/10.3389/fpsyg.2019.03087.
  • Zeeb, H., F. Biwer, G. Brunner, T. Leuders, and A. Renkl. 2019. “Make it Relevant! How Prior Instructions Foster the Integration of Teacher Knowledge.” Instructional Science 47 (6): 711–739. https://doi.org/10.1007/s11251-019-09497-y.
  • Zwaan, R. A., and C. J. Madden. 2004. “Updating Situation Models.” Journal of Experimental Psychology: Learning, Memory, and Cognition 30 (1): 283–288. https://doi.org/10.1037/0278-7393.30.1.283.