
Impact of feedback request forms and verbal feedback on higher education students’ feedback perception, self-efficacy, and motivation

Pages 6-25 | Received 25 May 2018, Accepted 24 Oct 2019, Published online: 11 Nov 2019

ABSTRACT

In higher education, students often misunderstand teachers’ written feedback. This is worrisome, since written feedback is the main form of feedback in higher education. Organising feedback conversations, in which feedback request forms and verbal feedback are used, is a promising intervention to prevent misunderstanding of written feedback. In this study, a 2 × 2 factorial experiment (N = 128) was conducted to examine the effects of a feedback request form (with vs. without) and feedback mode (written vs. verbal feedback). Results showed that verbal feedback had a significantly higher impact on students’ feedback perception than written feedback; it did not improve students’ self-efficacy or motivation. Feedback request forms did not improve students’ perceptions, self-efficacy, or motivation. Based on these results, we conclude that students have positive feedback perceptions when teachers communicate their feedback verbally, and that more research is needed on the use of feedback request forms.

Introduction

In higher education, it is common practice that students receive a lot of written feedback on their work (Higgins, Hartley, & Skelton, 2002). Teachers in higher education spend much of their time writing comments on assignments (Carless, 2006). Feedback given as one-way written comments is often ineffective (Carless, Salter, Yang, & Lam, 2011). Many students, for example, have difficulty understanding written teacher feedback and are disappointed and frustrated when the feedback is unclear, too brief, or unhelpful in terms of future learning (Ferguson, 2011; Hounsell, McCune, Hounsell, & Litjens, 2008; Hyland, 2013). In general, for feedback to be effective it is essential that students have positive perceptions of teacher feedback (Van der Schaaf, Baartman, Prins, Oosterbaan, & Schaap, 2011). Students’ perception of feedback refers to the extent to which students perceive the feedback to be supportive of their learning (Gibbs & Simpson, 2003). Students who perceive feedback positively tend to have high self-efficacy; they have confidence to complete similar tasks after their efforts have been successful (Caffarella & Barnett, 2000; Pajares, 2012). Students with high self-efficacy are often also highly motivated to approach difficult tasks as challenges to be mastered (Pajares, 2012). Current definitions of feedback all involve the provision of information to a student to foster learning (Kluger & DeNisi, 1996; Ramaprasad, 1983; Sadler, 1989; Shute, 2008). Several definitions also include the interaction between teachers and students; for example, Carless et al. (2011) defined feedback as ‘all dialogue to support learning in both formal and informal situations’ (p. 396). In this paper, we investigated feedback and conceptualised it as a dialogue between students and their teachers (Carless et al., 2011).

Providing effective feedback is complicated: the relation between form, timing, and effectiveness of feedback is complex and variable (Price, Handley, Millar, & O’Donovan, 2010; Sadler, 2010). The effectiveness of feedback can be improved when students have the opportunity to share their feedback preferences in advance. These preferences can be expressed using feedback request forms, in which students are asked to identify particular aspects of their work on which they would like to receive feedback (Bloxham & Campbell, 2010; Elbow & Sorcinelli, 2011; Gielen & De Wever, 2015). Furthermore, for feedback to be effective, students have to understand the feedback, and communication is the key factor for that success (Higgins, Hartley, & Skelton, 2001). Van der Schaaf et al. (2011) showed that students who have feedback conversations with their teacher perceive teacher feedback as more useful. We consider feedback request forms and feedback conversations with one-on-one teacher–student interaction a possible solution to students’ above-mentioned lack of understanding of feedback. We examined the impact of feedback request forms and the impact of feedback mode (verbal vs. written feedback) on students’ feedback perception, self-efficacy, and motivation.

Feedback and student perceptions of feedback

Providing teacher feedback on students’ assessment tasks is regarded as important and beneficial (Hattie, 2012). Many studies have found evidence of the impact of feedback on learning (Black & Wiliam, 1998; Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Shute, 2008). Still, much of this feedback is sent but not processed (Hattie, 2012), and can have unintended effects on students (Lizzio & Wilson, 2008). Vague and ambiguous feedback tends to result in students’ frustration, dissatisfaction, and a feeling of uncertainty (Price et al., 2010). Students sometimes do not understand or interpret teacher feedback accurately (Higgins et al., 2002; Hyatt, 2005), and rarely feel encouraged to think about the feedback (Duijnhouwer, 2010). When students receive feedback, the first step in the feedback process consists of perceiving the feedback, before even accepting or acting upon it (de Kleijn, Mainhard, Meijer, Brekelmans, & Pilot, 2013). As feedback is one of the most effective interventions teachers can use, fostering positive student perceptions of feedback should be a primary goal of teachers (Ekholm, Zumbrunn, & Conklin, 2015). How students interpret feedback and deal with it is critical for subsequent learning (Poulos & Mahony, 2008). In order for students to benefit from feedback, they should have positive perceptions of it. Student perceptions of feedback are significant in higher education, as students perceive feedback as a guide towards success, as a means of academic interaction, and as a sign of respect and caring (Rowe, 2011). Since students’ understanding of the feedback is often not consistent with the intention of the teacher (Van der Schaaf et al., 2011), insight into students’ perceptions of feedback is important.

Feedback and students’ self-efficacy

Self-efficacy refers to people’s beliefs about their capabilities to exercise control over their own level of functioning (Bandura, 1993). Students with low self-efficacy have no confidence in their own abilities; they often will not focus on opportunities to improve, or will not use the provided feedback (Wingate, 2010). When students are provided with frequent and immediate feedback, self-efficacy increases (Schunk, 1983). When feedback is difficult to understand or to act upon, students can develop low self-efficacy and have low expectations of being successful in a task (Wingate, 2010). Students who perceive feedback as constructive have higher self-efficacy with regard to their own writing skills (Caffarella & Barnett, 2000). Schunk and Zimmerman (2007) argue that students with high self-efficacy participate more readily, work harder, and persist longer when they encounter difficulties (p. 9). Self-efficacy can be measured using self-assessment instruments; the Motivated Strategies for Learning Questionnaire (MSLQ) is one such instrument and has been widely used in educational research. In the MSLQ, self-efficacy is measured as part of three expectancy components (Pintrich, Smith, Garcia, & McKeachie, 1991). We know teacher feedback can influence self-efficacy (Duijnhouwer, Prins, & Stokking, 2010), and positive correlations between self-efficacy and academic achievement have been found (Pintrich & De Groot, 1990).

Feedback and students’ motivation

Self-efficacy and motivation are strongly connected. When students have high self-efficacy and believe that their actions can produce the outcomes they desire, they are also motivated to act when facing difficulties (Pajares, 2012). As feedback can affect persistence and performance through its effect on students’ self-efficacy and motivation (Butler & Winne, 1995; Duijnhouwer et al., 2010; Kluger & DeNisi, 1996), insight into the effect of feedback on students’ self-efficacy and motivation is important. Students can be either extrinsically motivated to understand and act on feedback (e.g. there is a reward) or intrinsically motivated (e.g. motivated to learn) (Ryan & Deci, 2000). For a student to remain motivated, there must be alignment between the student’s goals and the expectation that these goals are attainable. Students’ reasons for engaging in a specific learning task can be measured with the concept of goal orientation (Pintrich et al., 1991). Students who apply an intrinsic goal orientation will participate in a task for reasons such as challenge, curiosity, and mastery (Pintrich et al., 1991). They will have the desire to increase their competence by developing new skills and mastering new situations, which enhances their intrinsic motivation (Dweck, 1986; Dweck & Leggett, 1988; Shute, 2008). Students who apply an extrinsic goal orientation will participate in tasks for reasons such as grades, awards, and performance (Pintrich et al., 1991). They will focus on demonstrating competence to others and on being evaluated positively by others (Dweck, 1986; Shute, 2008); such an extrinsic goal orientation enhances extrinsic motivation (Dweck & Leggett, 1988). Ideally, students receive feedback about whether these goals are attained (Shute, 2008).

Feedback request forms to foster feedback effectiveness

More focus on students as feedback receivers is important, as students are not passive receivers of information but are expected to actively take up feedback (Zimmerman, 1989). Structured feedback request forms enhance students’ role in the feedback process by letting them express their feedback preferences (Prins, Sluijsmans, & Kirschner, 2006). The use of feedback request forms aims at raising the quality of the feedback and students’ responses to it (Gielen, Tops, Dochy, Onghena, & Smeets, 2010). Feedback request forms can be collected together with a student’s work and allow students to formulate their feedback needs. Assessors combine the assessment criteria in a rubric-scoring sheet with the student’s feedback request form to address these needs in the feedback (Gielen, Peeters, Dochy, Onghena, & Struyven, 2010). When feedback request forms are used, students perceive the feedback as more personally addressed and are more likely to use it (Gielen et al., 2010). Gielen and De Wever (2015) used a feedback request form in their study and asked students to indicate first the criteria, and second the kind of feedback they expected. They found that students who used the feedback request form and received feedback were actively engaged in the assessment activity, and that the quality of peer feedback was raised. Bloxham and Campbell (2010) also used feedback request forms, in which students posed questions the assessors could address. When using the forms, students became more engaged in the feedback process and wanted the question-and-feedback process to develop into more of a dialogue with the assessor. Elbow and Sorcinelli (2011) argued that giving feedback on draft or final assignments becomes easier and more productive when students write a feedback request form with specific questions. The feedback request form should pose questions such as ‘Which parts feel strong and weak to you?’ and ‘What questions do you have for me as a reader?’ We consider feedback request forms to have a positive impact on students’ feedback perception, self-efficacy, and motivation when assessors are able to address students’ feedback preferences in their feedback comments.

Advantages of feedback conversations

Feedback is often seen as the linear transfer of information from the sender of a message (the tutor) to a recipient (the student), usually via written comments (Higgins et al., 2001). A narrow view of learning occurs when feedback is only considered as something that is given to a student (Ajjawi & Boud, 2017). It cannot be assumed that simply providing written feedback automatically leads to students’ understanding, or that they can use the feedback in subsequent work (Havnes, Smith, Dysthe, & Ludvigsen, 2012). Direct comments with simple vocabulary and familiar expressions can help students to know how to improve their work (Bruno & Santos, 2010). We stress that one-way written comments are considered feedback as well; we argue, however, that interaction during the feedback exchange may increase the effectiveness of feedback. As written feedback is often misinterpreted and misunderstood, verbal feedback seems to be a solution for the problems associated with written feedback. Merry and Orsmond (2008) and Van der Schaaf et al. (2011) showed that students respond more positively to verbal feedback, seeing it as closer to dialogue; students perceived verbal feedback as a more natural dialogue than written feedback. With the understanding that dialogue is a two-way process, students can learn from feedback comments through interaction (Nicol, 2010). Feedback as dialogue will increase the effectiveness of feedback because students do not only receive initial feedback information, but also have the opportunity to engage the teacher in discussion about that feedback (Nicol & Macfarlane-Dick, 2006). Feedback conversations give teachers and students the opportunity for this interaction; students can adopt a more active role by asking for particular types of feedback, verifying their interpretation of the feedback, determining whether the feedback is clear to them and whether they agree with it, and requesting suggestions for improvement (Prins et al., 2006).

Assessment task with a simulated patient

This study was conducted in the context of a standardised simulated patient assessment task in which dietetic students’ behaviour and communication skills were assessed. Undergraduate students in university health education programs, for example nutrition and dietetics, are prepared for their internship with training in communication skills. Simulated patients and role-play are frequently used in teaching communication skills (Lane & Rollnick, 2007). Simulated patients are used to provide realistic and effective training (Beshgetoor & Wade, 2007) and to help bridge the gap between academic study and practice (Gibson & Davidson, 2016). These simulated patients are often actors who play a patient role (Beshgetoor & Wade, 2007; Gibson & Davidson, 2016). The actors are coached to play a standardised patient, and because the patient really exists, or existed, the entire medical history can be used to create an authentic simulated patient role (Hampl, Herbold, Schneider, & Sheeley, 1999). Using simulated patients is an effective strategy for nutrition counselling curricula: no significant differences in dietetic students’ communication and behaviour-change skills were found between encounters with a real patient and a standardised patient (Schwartz, Rothpletz-Puglia, Denmark, & Byham-Gray, 2015). Todd, McCarroll, and Nucci (2016) even showed that the use of simulated patients could increase students’ self-efficacy before they start their clinical practice.

Research questions

We investigated the impact of feedback request forms (with or without) and feedback mode (written vs. verbal) on students’ perceptions of teacher feedback, their self-efficacy and motivation after receiving teacher feedback during an assessment task with a simulated patient. The following research questions were addressed:

  1. What is the impact of a feedback request form on students’ feedback perception, self-efficacy, and motivation?

  2. What is the impact of verbal feedback on students’ feedback perception, self-efficacy, and motivation?

First, it was expected that students who used feedback request forms would be more positive about the feedback, would have higher self-efficacy, and would be more motivated, because these students could receive feedback adapted to their needs. Second, it was expected that students who received verbal feedback would be more positive about the feedback, would be more motivated, and would have higher self-efficacy, because students in the verbal feedback condition could interact more with their teacher. In addition to the two research questions concerning the main effects, we explored whether there was an interaction effect between the use of feedback request forms and feedback mode on students’ perception, motivation, and self-efficacy.

Method

Design

An experimental study was conducted with a two (feedback request form) by two (feedback mode) factorial design. The independent variable feedback request form had two levels: a condition in which students could not express a preference about which parts of the assessment they would like the assessor to focus the feedback on, and a condition in which the feedback request form was used. The independent variable feedback mode consisted of written feedback and verbal feedback. Written feedback was given on an assessment form and handed to the student without verbal comments; verbal feedback was given in a one-to-one feedback dialogue between student and assessor. This led to four conditions: (1) no form, written feedback (NW); (2) request form, written feedback (RW); (3) no form, verbal feedback (NV); and (4) request form, verbal feedback (RV).

Participants

Data were gathered in a 4-year undergraduate nutrition and dietetics program at the University of Arnhem and Nijmegen, The Netherlands. The participants were 128 students in their second year of this bachelor of health program and two assessors (teachers) who assessed the students. Randomisation was applied using a ‘blocked design’, in which participants are randomly assigned within a block of trials while keeping sample sizes equal across conditions (De Vaus, 2001). All students were ranked by student number. Thirty-two sets of four unique numbers (one per condition) were computed and assigned to the 128 students (see Appendix A): the participants were divided into 32 blocks of four participants each, and within each block every participant was assigned to one of the four experimental conditions. Eight of the 128 students did not show up for their assessment, and for five students the video recording of their performance failed. The remaining 115 students received feedback and their data were used for further analysis. Two independent assessors assessed the students across all four conditions; the characteristics of the participants are presented in Table 1.
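The blocked assignment described above can be sketched in a few lines of Python. This is our own illustrative reconstruction, not the study's actual procedure or code; the function and variable names are ours, and we assume one slot per condition within each block of four students:

```python
import random

def blocked_assignment(student_ids, conditions, seed=0):
    """Assign students to conditions in blocks so group sizes stay equal.

    Each block holds one slot per condition; within a block the condition
    order is shuffled, mirroring a blocked randomised design.
    """
    assert len(student_ids) % len(conditions) == 0
    rng = random.Random(seed)
    assignment = {}
    for start in range(0, len(student_ids), len(conditions)):
        block = student_ids[start:start + len(conditions)]
        shuffled = conditions[:]
        rng.shuffle(shuffled)  # random condition order within this block
        for student, condition in zip(block, shuffled):
            assignment[student] = condition
    return assignment

# 128 students ranked by student number, four conditions as in the study
students = list(range(1, 129))
groups = ["NW", "RW", "NV", "RV"]
assignment = blocked_assignment(students, groups)
counts = {g: sum(1 for c in assignment.values() if c == g) for g in groups}
print(counts)  # each condition receives exactly 32 students
```

The key property of the design, equal cell sizes regardless of how the shuffles fall, follows because every block contributes exactly one student to each condition.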

Table 1. Characteristics of participating students (n = 115) and assessors (n = 2)

Materials and procedure

Course

This study was carried out in the context of a six-week skills course within a ten-week higher education module called ‘Lifestyle Diseases’. During the course, students from seven different classes received classroom instruction in the professional role of a practitioner with the responsibilities of a dietitian. After instruction, the students practiced their skills in nutritional assessment, dietary diagnosis, and treatment planning with simulated patients. At the end of the skills course, students’ performance was assessed via an assessment task. Providing one-way written feedback was the standard procedure that assessors applied to this assessment task.

Assessment task

The assessment task of the course consisted of a student’s individual conversation with a simulated patient. This simulated patient was an actor trained to act as a real diabetes patient in order to simulate a set of symptoms and problems. The actors received a detailed description of the simulated patient case and of how to react to the student’s answers and questions (see Appendix B for a summary of the description). Students had twenty minutes to prepare for the assessment task and then held the counselling conversation. All students videotaped their conversation. After the performance, students sent the videotaped conversation on a secure digital memory card to the first author, who sorted the memory cards by condition and divided them between the two assessors.

Rubric-scoring sheet

In all four conditions, the assessors used the same rubric-scoring sheet with 10 assessment criteria (see Appendix C). The criteria were formulated in a rubric with three levels per criterion: unsatisfactory, proficient, or outstanding. Students were familiar with the scoring sheet, as they had practiced with the criteria during the course.

Assessor training

The first author trained both assessors with worked-out video examples to practise giving feedback on students’ performances. The assessors were experienced dietitians and had not taught the skills course. The objectives of the assessor training were to increase the shared understanding of the assessment criteria between both assessors and to practise the formulation of verbal and written feedback. A final objective of the training was to get acquainted with the feedback request form.

Feedback-request form

A week before the assessment task, the 59 students in the two request form conditions (written and verbal feedback) filled out the feedback request form (see Appendix D). The students were asked to identify particular aspects of their performance on which they would like to receive feedback. The feedback request form consisted of three questions: (1) ‘In the diagnostic phase, I prefer to receive feedback on … ’; (2) ‘In the treatment phase, I prefer to receive feedback on … ’; and (3) ‘During the feedback conversation I prefer to receive feedback on the following aspects of my attitude/communication/structure … ’.

Assessment room

The assessors each sat in a separate assessment room, behind a laptop, with all memory cards containing the videotaped performances, the rubric-scoring sheets, and the feedback request forms.

Feedback

Assessors had approximately 30 minutes per student to assess each student’s performance from the memory card. The first 15 minutes were used to assess the student’s performance by observing the videotape and to score each of the 10 criteria on the rubric-scoring sheet. The other 15 minutes were used to formulate the feedback: in the two verbal conditions, feedback was given orally in a one-to-one conversation with the student; in the two written conditions, feedback was written down and handed over to the student. Students who had filled out the feedback request form received feedback specifically aimed at the issues mentioned in their form.

Measures

Feedback and assessment perception questionnaire

After receiving the feedback, students were asked to fill out the Feedback and Assessment Perception Questionnaire (FAPQ). The FAPQ was developed on the basis of the Assessment Experience Questionnaire (AEQ) of Gibbs and Simpson (2003, 2004). Students’ perception was measured using four scales of the AEQ (Gibbs & Simpson, 2003, 2004): (1) perceived quality of the feedback (six items; e.g. ‘The feedback helps me to understand things better’); (2) perceived use of the feedback (eight items; e.g. ‘I use the feedback to go back over what I have done in the assessment’); (3) perceived quantity and timing of the feedback (six items; e.g. ‘I received plenty of feedback’); and (4) perceived examination and learning, which measured the quality of the assessment task (eight items; e.g. ‘I learnt new things as a result of the performance’). In addition to the 28 items of the four AEQ scales, a fifth scale was added to the final FAPQ: (5) perceived usefulness of the feedback, which emphasised how useful the feedback is (16 items; e.g. ‘The feedback is very easy to understand’). In total, the FAPQ consisted of 44 items, scored on a five-point Likert-type scale from 1 (strongly disagree) to 5 (strongly agree) (see Appendix E). Reliability analyses were conducted on all scales of the FAPQ; items with an item-rest correlation rir < .30, or whose removal clearly increased Cronbach’s alpha (‘Alpha if item deleted’), were removed. Nine items were deleted from the original FAPQ. The FAPQ perception scale examination and learning showed low reliability (alpha = .58). This result fitted the reliability analysis of Gibbs and Simpson (2003) when they designed the examination and learning scale (alpha = .54). The other four scales were found to be reliable (Cronbach’s alpha > .70).
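The reliability analyses above rely on Cronbach's alpha. As a minimal sketch of our own (not the authors' analysis code), alpha can be computed directly from the item scores: each inner list below holds one item's scores across respondents, and names are illustrative:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    sum_item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Three perfectly consistent items yield an alpha of 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

In practice, scale construction would also inspect the item-rest correlations and 'alpha if item deleted' values mentioned above before dropping an item.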

Motivated strategies for learning questionnaire

Before students started preparing for the assessment task, and again after receiving or reading the feedback, students were asked to fill out the Motivated Strategies for Learning Questionnaire (MSLQ). The MSLQ, a self-report instrument, was used to assess students’ motivational orientations (Pintrich, Smith, Garcia, & McKeachie, 1993). The motivation section of the MSLQ consists of 31 items and six scales. With the first three scales, we measured students’ motivation for the task of practicing dietetic skills with a simulated patient: (1) intrinsic goal orientation measured students’ perception of participating in the task for reasons such as challenge, curiosity, and mastery (four items; e.g. ‘I prefer a performance that really challenges me to learn new things’); (2) extrinsic goal orientation measured students’ perception of participating in the task for reasons such as grades and rewards (four items; e.g. ‘Getting a good grade for the performance is the most satisfying thing’); and (3) task value measured students’ evaluation of how interesting and important the task is (six items; e.g. ‘I think the knowledge and skills assessed in this performance are useful’). With the other three scales, we measured students’ expectancy of accomplishing the task successfully, including self-efficacy: (4) control of learning beliefs measured students’ perception that their learning efforts result in positive outcomes (four items; e.g. ‘If I try hard enough, then I will understand the knowledge and skills required for this performance’); (5) self-efficacy measured students’ expectancy of success and their appraisal of their own ability to master the task (eight items; e.g. ‘I’m confident I can do an excellent job on this performance’); and (6) test anxiety measured students’ negative thoughts that disrupt performance (five items; e.g. ‘When I am doing a performance with a simulated patient I think about how poorly I am doing compared with other students’).
The 31 MSLQ items were reformulated with regard to the skills course and the simulated patient assessment task; items were scored on a seven-point Likert-type scale from 1 (not at all true of me) to 7 (very true of me) (see Appendix F). Reliability analyses were conducted on all scales of the MSLQ; items with an item-rest correlation rir < .30, or whose removal clearly increased Cronbach’s alpha (‘Alpha if item deleted’), were removed. Three items were deleted from the 31 items of the original MSLQ (pre-test and post-test). The MSLQ scales of intrinsic goal orientation (pre-test; alpha = .68), extrinsic goal orientation (post-test; alpha = .69), and control of learning beliefs (pre-test; alpha = .62) showed moderate reliability. These results fitted the reliability analyses of Pintrich et al. (1993) when designing the MSLQ; they argued the scales to be a reasonable representation of the data (p. 808), with Cronbach’s alphas for intrinsic goal orientation (.74), extrinsic goal orientation (.62), and control of learning beliefs (.68). The other nine scales (pre-test and post-test) were found to be reliable (Cronbach’s alpha > .70). See Figure 1 for an overview of the study and data gathering.

Figure 1. Overview of study and data gathering


Data analysis

Feedback perception

We used two-by-two between-subjects analyses of variance (ANOVA) to analyse the main effects of feedback request forms and feedback mode, and the interaction effect between feedback request forms and feedback mode, on students’ perceptions.

Motivation and self-efficacy

We used two-by-two between-subjects analyses of covariance (ANCOVA) to analyse the main effects of feedback request forms and feedback mode, and the interaction effect between feedback request forms and feedback mode, on students’ self-efficacy and motivation. The pre-test scores of intrinsic goal orientation, extrinsic goal orientation, task value, control of learning beliefs, self-efficacy, and test anxiety were included as covariates. Each pre-test score was used as the covariate for its own dependent variable, to control for pre-existing differences on that variable; for example, the pre-test score of control of learning beliefs served as the covariate for the post-test score of control of learning beliefs. Following recommendations by Lakens (2013), partial eta-squared (ηp²) was used as a measure of effect size. Effect sizes were qualified as small (.01), medium (.06), or large (.14) (Cohen, 1988).
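For a one-degree-of-freedom effect, partial eta-squared can be recovered directly from the F-ratio and its degrees of freedom. The helper below is our own illustration, not part of the study's analysis scripts; it reproduces one of the effect sizes reported in the Results section:

```python
def partial_eta_squared(f_ratio, df_effect, df_error):
    """Partial eta-squared recovered from an F-ratio and its degrees of
    freedom: eta_p^2 = (F * df_effect) / (F * df_effect + df_error).
    """
    return (f_ratio * df_effect) / (f_ratio * df_effect + df_error)

# The reported F(1, 111) = 40.49 for quantity and timing of feedback
# corresponds to a large effect by Cohen's (1988) benchmarks:
print(round(partial_eta_squared(40.49, 1, 111), 2))  # 0.27
```

This identity holds because, for a single effect, F = (SS_effect / df_effect) / (SS_error / df_error), and partial eta-squared is SS_effect / (SS_effect + SS_error).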

Error inflation correction

As we proposed 11 tests in our analysis (five ANOVAs and six ANCOVAs), a multiple testing correction was needed and the standard alpha level of .05 could not be applied to every test. To correct for error inflation we applied the False Discovery Rate (FDR) procedure of Benjamini and Hochberg (1995), as it maintains power while controlling the rate of false positives (type I errors). The FDR procedure leads to an adjusted alpha level based on the number of tests conducted, called the Benjamini-Hochberg (B-H) critical value. Within the FDR procedure, the 11 computed statistics (five ANOVA F-ratios and six ANCOVA F-ratios) with their p-values were ranked from low to high, and the i-th ranked p-value was compared with its FDR-adjusted alpha level of (i/11) × .05, yielding 11 thresholds from .0045 (1/11 × .05) to .05 (11/11 × .05). The highest ranked p-value that was lower than its FDR-adjusted alpha level was considered the B-H critical value; all p-values at or below the B-H critical value were considered significant.
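The standard Benjamini-Hochberg step-up procedure can be sketched as follows. This is our own illustrative implementation; the six non-significant p-values in the example are invented for illustration, since the paper reports only the significant ones:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Standard Benjamini-Hochberg step-up procedure.

    Ranks the m p-values from low to high, compares the i-th (1-based)
    with (i / m) * alpha, and flags as significant every test whose
    p-value does not exceed the largest p-value meeting its threshold.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda idx: p_values[idx])
    cutoff = 0.0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            cutoff = p_values[idx]  # largest p-value below its threshold
    return [p <= cutoff for p in p_values]

# The five significant p-values reported in the Results section, padded
# with six illustrative non-significant ones (not reported in the paper):
p_vals = [0.001, 0.001, 0.002, 0.005, 0.015, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
print(sum(benjamini_hochberg(p_vals)))  # 5 tests survive the correction
```

With m = 11 tests, the fifth-ranked threshold is (5/11) × .05 ≈ .0227, so the reported p = .015 for control of learning beliefs remains significant under the correction.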

Results

Main effect of feedback request form

Means and standard deviations are shown in Table 2, together with the pre- and post-test scores of the six MSLQ scales, the post-test scores of the five FAPQ scales, and the reliability results. Contrary to our expectations, there was no significant main effect of the feedback request form on students’ perceptions, self-efficacy, or motivation.

Table 2. Reliability analysis of subscales of the MSLQ (pre-test), FAPQ, and MSLQ (post-test)

Main effect of feedback mode on feedback perception

The error inflation correction following the Benjamini-Hochberg procedure led to a B-H critical value of .0227 (see Appendix G). Analyses showed significant results on four of the five feedback perception scales (see Table 3). Students who received verbal feedback during the feedback dialogue perceived the quality of feedback, the use of feedback, the quantity and timing of feedback, and the usefulness of feedback to be higher than students who received written feedback. There was a significant main effect of feedback mode (written vs. verbal) on the perceived quantity and timing of feedback, F(1,111) = 40.49, p < .001, with a large effect size of ηp² = .27; on the perceived quality of feedback, F(1,111) = 27.10, p < .001, with a large effect size of ηp² = .20; on the perceived use of feedback, F(1,109) = 10.36, p = .002, with a medium effect size of ηp² = .09; and, finally, on the perceived usefulness of feedback, F(1,110) = 8.16, p = .005, with a medium effect size of ηp² = .07. Contrary to our expectations, the analyses showed no significant effect of feedback mode on students’ perceived examination and learning. These results indicate that students who received verbal feedback perceived all aspects of teacher feedback more positively than students who received written feedback, except for perceived examination and learning.

Table 3. Means, standard deviations, and two-way (feedback mode and request form) Analysis of Variance (ANOVA) for the quantity and timing of feedback, quality of feedback, use of feedback, usefulness of feedback, and examination and learning

Main effect of feedback mode on self-efficacy and motivation

After controlling for the effect of pre-test control of learning beliefs, students who received verbal feedback during the feedback dialogue had significantly higher control of learning beliefs than students who received written feedback (see Table 4). There was a significant main effect of feedback mode (written vs. verbal) on students’ control of learning beliefs, F(1,110) = 6.07, p = .015, with a small effect size of ηp2 = .05. The covariate, pre-test control of learning beliefs, was significantly related to the post-test control of learning beliefs, F(1,111) = 72.69, p < .001, with a large effect size of ηp2 = .40. This significant result indicates that students who receive verbal feedback have stronger beliefs that their efforts will result in positive outcomes than students who receive written feedback. Contrary to our expectations, the analyses showed no significant effect of feedback mode on students’ intrinsic goal orientation, extrinsic goal orientation, task value, self-efficacy, and test anxiety. No significant interaction effects were found between feedback request forms and feedback mode on students’ perceptions, self-efficacy, and motivation.
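For a one-degree-of-freedom effect, the partial eta-squared values reported in this and the previous subsection can be recovered directly from the F statistic and its degrees of freedom, since ηp² = SS_effect/(SS_effect + SS_error) = F·df1/(F·df1 + df2). A quick sketch that checks the reported effect sizes against the reported F values (rounding follows the text):

```python
def partial_eta_squared(f, df1, df2):
    """Partial eta squared from an F statistic:
    SS_effect / (SS_effect + SS_error), rewritten via
    F = (SS_effect / df1) / (SS_error / df2)."""
    return f * df1 / (f * df1 + df2)

# F statistics, degrees of freedom, and reported eta_p^2 from the Results
reported = [
    ("quantity and timing",          40.49, 1, 111, 0.27),
    ("quality",                      27.10, 1, 111, 0.20),
    ("use",                          10.36, 1, 109, 0.09),
    ("usefulness",                    8.16, 1, 110, 0.07),
    ("control of learning beliefs",   6.07, 1, 110, 0.05),
    ("covariate (pre-test beliefs)", 72.69, 1, 111, 0.40),
]
checks = [round(partial_eta_squared(f, d1, d2), 2) == eta
          for _, f, d1, d2, eta in reported]
```

All six reported effect sizes are consistent with their F statistics under this formula.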

Table 4. Means, standard deviations, and two-way (feedback mode and request form) Analysis of Covariance (ANCOVA) for the control of learning beliefs, task value, test anxiety, self-efficacy, intrinsic goal orientation, and extrinsic goal orientation

Discussion

The present study aimed to investigate the impact of feedback request form and feedback mode, as well as the interaction between both variables on students’ perception, self-efficacy, and motivation during teacher-student feedback conversations in higher education.

The first research question examined whether the use of feedback request forms had a positive impact on students’ perception, self-efficacy, and motivation. No significant impact of the feedback request forms was found on any of these variables. Other studies have shown that feedback request forms engage students more in the feedback process (Bloxham & Campbell, Citation2010; Gielen & De Wever, Citation2015). Gielen et al. (Citation2010) showed that their students appreciated feedback more as a result of using feedback request forms. Our students might not have produced high-quality requests, and/or assessors may not have paid enough attention to this individualised part of the feedback. More detailed instruction and explanation to students and assessors could have increased the effect of the feedback request form. Elbow and Sorcinelli (Citation2011) argued that students write better feedback requests when the forms are completed in class and a few examples are discussed. This could have increased the quality of the requests, stimulated students to fill out the feedback request forms correctly, and motivated them to use the form to strengthen their own learning. We should also consider the possibility that feedback request forms are not effective at all, as students are not always in the best position to judge what is educationally preferable (Huxham, Citation2007). When students are asked which strategies work best, some report effective strategies that contribute to their achievement. However, many students not only use relatively ineffective strategies (e.g. rereading), but also believe that these are relatively effective (Bjork, Dunlosky, & Kornell, Citation2013). In sum, the results of our study indicate no significant effect of the feedback request forms on students’ perception of feedback, self-efficacy, and motivation.

The second research question examined whether verbal feedback had a higher impact on students’ perception, self-efficacy, and motivation than written feedback. Students who received verbal feedback perceived the feedback to be better in terms of quality, use, quantity and timing, and usefulness compared to students who received written feedback. These results correspond with findings that feedback is perceived in a more positive way when learner-centred methods are used (Pereira, Flores, Simão, & Barros, Citation2016) and with findings that students perceive high-quality feedback when it does not only judge their work, but also fosters dialogue (Beaumont, O’Doherty, & Shannon, Citation2011). These results can be explained by the differences in opportunities for teachers and students to interact during the feedback conversations in which assessors communicated their feedback verbally. These differences can lead to more questioning and answering by students and assessors and to better understanding and interpretation, which results in the students appreciating the feedback. Students in the written feedback condition did not have the opportunity to receive more explanation and discussion to understand the feedback properly and be able to improve their performance based on the feedback. The sometimes unclear, too brief, and/or unhelpful written feedback could lead to frustration and dissatisfaction (Ferguson, Citation2011; Hounsell et al., Citation2008; Price et al., Citation2010; Weaver, Citation2006). Furthermore, it is possible that students’ perceptions of feedback depend on their prior knowledge and experience with feedback. Prior to the experimental conditions of this study, students were used to receiving only written feedback on their summative performance assessments and might have had negative experiences with written feedback in the past. 
Based on these results, we conclude that verbal feedback had a significant positive effect on students’ perception of the quality, use, quantity and timing, and usefulness of feedback.

Verbal feedback did not have a positive impact on the three motivation scales of intrinsic goal orientation, extrinsic goal orientation, and task value. Delayed rather than immediate effects of the intervention could have played a role: there was only a short time lapse of approximately five minutes between students receiving verbal or written feedback and filling out the FAPQ (feedback perception) and the MSLQ (motivation and self-efficacy). Students who received verbal feedback did have significantly higher control of learning beliefs than students who received written feedback. If students believe that their efforts to study make a difference in their learning, they study in more appropriate ways (Pintrich et al., Citation1993). Verbal feedback appears to influence these beliefs more than written feedback does, so students will probably study in more appropriate ways when feedback is communicated verbally. Based on the results of this study, we conclude that verbal feedback improved the control of learning beliefs significantly more than written feedback did; it did not improve motivation, self-efficacy, or test anxiety.

Third, we examined the interaction effect between feedback request form and feedback mode on students’ perception, self-efficacy, and motivation. We found no significant interaction effects on the feedback perception, self-efficacy, or motivation variables. Based on these results, we cannot conclude that feedback request forms influence students’ perceived feedback, self-efficacy, and motivation more strongly when feedback between teacher and student is communicated verbally.

Limitations

This study is subject to some limitations. It relies mainly on students’ self-reported perceptions of feedback. When interpreting the results, consideration must be given to the quality and reliability of student responses: perceptions can be inaccurate or biased, as students might, for example, have ticked boxes in a superficial manner to complete the questionnaires quickly. It must also be noted that students were asked to consider the feedback they had received during the current course. The central task carried out by our students was very specific: practising with simulated patients is clearly connected to the domain of health studies, and the nature of this assessment task may differ from that of other university courses or contexts. The perception of feedback by any student will depend on a complex interaction between the personalities of the student and the teacher, as well as the broader teaching environment and the history of interaction between the two (Huxham, Citation2007). However, in our view, the results can be generalised to other studies within the domain of health studies. As students have difficulty reading and understanding written feedback on all kinds of tasks (e.g. writing essays or undergraduate dissertations), the results may generalise to other tasks as well. Finally, many potential mechanisms could have caused the effect of verbal feedback. As we conducted a naturalistic experiment comparing realistic feedback conditions, the mechanisms involved are not entirely clear. For example, there were many differences between the conditions, such as the time spent engaging with the feedback, that may also have contributed to the effects.

Practical implications and further research

This study underlines the importance of communicating assessment feedback verbally during teacher-student feedback conversations. As students understood verbal feedback better, it should be the preferred mode for teachers to communicate feedback to their students. Although verbally communicated feedback was better understood, it does not necessarily result in higher motivation. Feedback conversations in one-to-one settings and small classes are desirable and feasible; implementing individualised verbal feedback in larger classes, with full integration of feedback conversations into daily educational practice, remains challenging.

Feedback request forms can be used in practice, but more research is needed to demonstrate an effect. Future research could focus on students’ use of feedback request forms after training them with worked-out examples showing how to use the form. The quantitative findings in this study could also inform a more qualitative approach to the feedback process, focusing on students’ uptake of the feedback. It would furthermore be interesting to investigate the long-term impact of verbal feedback and feedback request forms on self-efficacy, motivation, and future performance in a longitudinal design in which multiple feedback cycles are examined. In the end, feedback conversations are complex interactive processes in both students’ and teachers’ learning.

Supplemental material

Appendix A: Randomization results
Appendix B: Summary of simulated patient case description
Appendix C: Rubric scoring sheet with assessment criteria
Appendix D: Feedback request form
Appendix E: FAPQ items
Appendix F: MSLQ items
Appendix G: False discovery rate

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplementary material

Supplemental data for this article can be accessed here.

Additional information

Notes on contributors

Bas T. Agricola

Bas T. Agricola is an educational researcher at Amsterdam University of Applied Sciences. He finished his PhD in 2019 at Utrecht University, Department of Education & Pedagogy of the Faculty of Social Sciences. His PhD project encompasses scaffolded feedback in higher education. His research interest encompasses feedback conversations that enable interaction. Students often receive verbal feedback from teachers during feedback conversations, but are rather passive. The aim of this project was to examine how interventions aimed at teachers’ support can influence students’ active roles.

Frans J. Prins

Frans J. Prins is an associate professor at Utrecht University, Department of Education & Pedagogy of the Faculty of Social and Behavioural Sciences. He graduated in 1991 as a developmental psychologist at the University of Amsterdam. After working four years as a teacher and researcher at the Faculty of Psychology (UvA), he left for the University of Leiden in 1996 for PhD research on the role of metacognition and intellectual ability in inquiry-based learning using computer simulations. His research interest encompasses assessment, feedback, and motivation.

Dominique M. A. Sluijsmans

Dominique M. A. Sluijsmans is an associate professor at Zuyd Hogeschool and Maastricht University. Her research interest encompasses sustainable assessment. She finished her PhD-project entitled ‘Student involvement in assessment: the training of peer assessment skills’. After her PhD-project, she further investigated the complexity of assessing one’s own and peers’ work in the context of teacher- and medical education.

References

  • Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: An interactional analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252–265.
  • Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117–148.
  • Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education, 36(6), 671–687.
  • Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57, 289–300.
  • Beshgetoor, D., & Wade, D. (2007). Use of actors as simulated patients in nutritional counseling. Journal of Nutrition Education and Behavior, 39(2), 101–102.
  • Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
  • Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education, 35(3), 291–300.
  • Bruno, I., & Santos, L. (2010). Written comments as a form of feedback. Studies in Educational Evaluation, 36(3), 111–120.
  • Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.
  • Caffarella, R. S., & Barnett, B. G. (2000). Teaching doctoral students to become scholarly writers: The importance of giving and receiving critiques. Studies in Higher Education, 25(1), 39–52.
  • Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219–233.
  • Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  • de Kleijn, R. A. M., Mainhard, M. T., Meijer, P. C., Brekelmans, M., & Pilot, A. (2013). Master’s thesis projects: Student perceptions of supervisor feedback. Assessment and Evaluation in Higher Education, 38(8), 1012–1026.
  • Duijnhouwer, H. (2010). Feedback effects on students’ writing motivation, process, and performance (Doctoral dissertation, Utrecht University). Retrieved from https://dspace.library.uu.nl/handle/1874/43968
  • Duijnhouwer, H., Prins, F. J., & Stokking, K. M. (2010). Progress feedback effects on students’ writing mastery goal, self-efficacy beliefs, and performance. Educational Research and Evaluation, 16(1), 53–74.
  • Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41(10), 1040–1048.
  • Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256–273.
  • Ekholm, E., Zumbrunn, S., & Conklin, S. (2015). The relation of college student self-efficacy toward writing and writing self-regulation aptitude: Writing feedback perceptions as a mediating variable. Teaching in Higher Education, 20(2), 197–207.
  • Elbow, P., & Sorcinelli, M. D. (2011). 16 using high-stakes and low-stakes writing to enhance learning. In W. J. McKeachie & M. Svinicki (Eds.), McKeachie’s teaching tips. Strategies, research, and theory for college and university teachers (13th ed., pp. 213–234). Wadsworth: Cengage Learning.
  • Ferguson, P. (2011). Student perceptions of quality feedback in teacher education. Assessment and Evaluation in Higher Education, 36(1), 51–62.
  • Gibbs, G., & Simpson, C. (2003, September 1–3). Measuring the response of students to assessment: The assessment experience questionnaire. Paper presented at the 11th Improving Student Learning Symposium (pp. 1–12). Hinckley, England. Retrieved from https://www.open.ac.uk/fast/pdfs/Gibbs&Simpson_03.pdf
  • Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3–31. Retrieved from https://www.open.ac.uk/fast/pdfs/Gibbs%20and%20Simpson%202004-05.pdf
  • Gibson, S., & Davidson, Z. (2016). An observational study investigating the impact of simulated patients in teaching communication skills in preclinical dietetic students. Journal of Human Nutrition and Dietetics, 29(4), 529–536.
  • Gielen, M., & De Wever, B. (2015). Scripting the role of assessor and assessee in peer assessment in a wiki environment: Impact on peer feedback quality and product improvement. Computers & Education, 88, 370–386.
  • Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20(4), 304–315.
  • Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2010). A comparative study of peer and teacher feedback and of various peer feedback forms in a secondary school writing curriculum. British Educational Research Journal, 36(1), 143–162.
  • Hampl, J. S., Herbold, N. H., Schneider, M. A., & Sheeley, A. E. (1999). Using standardized patients to train and evaluate dietetics students. Journal of the American Dietetic Association, 99(9), 1094–1097.
  • Hattie, J. (2012). Know thy IMPACT. Educational Leadership, 70(1), 18–23. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=82055857&lang=nl&site=ehost-live
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
  • Havnes, A., Smith, K., Dysthe, O., & Ludvigsen, K. (2012). Formative assessment and feedback: Making learning visible. Studies in Educational Evaluation, 38(1), 21–27.
  • Higgins, R., Hartley, P., & Shelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53–64.
  • Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269–274.
  • Hounsell, D., McCune, V., Hounsell, J., & Litjens, J. (2008). The quality of guidance and feedback to students. Higher Education Research & Development, 27(1), 55–67.
  • Huxham, M. (2007). Fast and effective feedback: Are model answers the answer?. Assessment & Evaluation in Higher Education, 32(6), 601–611.
  • Hyatt, D. F. (2005). ‘Yes, a very good point!’: A critical genre analysis of a corpus of feedback commentaries on master of education assignments. Teaching in Higher Education, 10(3), 339–353.
  • Hyland, K. (2013). Student perceptions of hidden messages in teacher written feedback. Studies in Educational Evaluation, 39(3), 180–187.
  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminarily feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
  • Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863.
  • Lane, C., & Rollnick, S. (2007). The use of simulated patients and role-play in communication skills training: A review of the literature to august 2005. Patient Education and Counseling, 67(1–2), 13–20.
  • Lizzio, A., & Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education, 33(3), 263–275.
  • Merry, S., & Orsmond, P. (2008). Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education, 11(11), 1–11.
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517.
  • Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
  • Pajares, F. (2012). Motivational role of self-efficacy beliefs in self-regulated learning. In D. H. Schunk & B. J. Zimmerman (Eds.), Motivation and self-regulated learning. Theory, research, and applications (1st ed., pp. 111–139). New York: Routledge.
  • Pereira, D., Flores, M. A., Simão, A. M. V., & Barros, A. (2016). Effectiveness and relevance of feedback in higher education: A study of undergraduate students. Studies in Educational Evaluation, 49(SupplementC), 7–14.
  • Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33.
  • Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the motivated strategies for learning questionnaire (MSLQ). 1–88. Retrieved from https://files.eric.ed.gov/fulltext/ED338122.pdf
  • Pintrich, P. R., Smith, D. A. F., García, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.
  • Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment & Evaluation in Higher Education, 33(2), 143–154.
  • Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289.
  • Prins, F. J., Sluijsmans, D. M. A., & Kirschner, P. A. (2006). Feedback for general practitioners in training: Quality, styles and preferences. Advances in Health Sciences Education, 11(3), 289–303.
  • Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13.
  • Rowe, A. (2011). The personal dimension in teaching: Why students value feedback. International Journal of Educational Management, 25(4), 343–360.
  • Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25(1), 54–67.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
  • Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
  • Schunk, D. H. (1983). Developing children’s self-efficacy and skills: The roles of social comparative information and goal setting. Contemporary Educational Psychology, 8(1), 76–86.
  • Schunk, D. H., & Zimmerman, B. J. (2007). Influencing children’s self-efficacy and self-regulation of reading and writing through modeling. Reading & Writing Quarterly, 23(1), 7–25.
  • Schwartz, V. S., Rothpletz-Puglia, P., Denmark, R., & Byham-Gray, L. (2015). Comparison of standardized patients and real patients as an experiential teaching strategy in a nutrition counseling course for dietetic students. Patient Education and Counseling, 98, 168–173.
  • Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
  • Todd, J. D., McCarroll, C. S., & Nucci, A. M. (2016). High-fidelity patient simulation increases dietetic students’ self-efficacy prior to clinical supervised practice: A preliminary study. Journal of Nutrition Education and Behavior, 48, 563–567.e1.
  • Van der Schaaf, M. F., Baartman, L. K. J., Prins, F. J., Oosterbaan, A., & Schaap, H. (2011). Feedback dialogues that stimulate students’ reflective thinking. Scandinavian Journal of Educational Research, 57(3), 227–245.
  • De Vaus, D. A. (2001). Research design in social research. London: Sage.
  • Weaver, M. R. (2006). Do students value feedback? student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379–394.
  • Wingate, U. (2010). The impact of formative feedback on the development of academic writing. Assessment & Evaluation in Higher Education, 35(5), 519–533.
  • Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329.