
Exploring Students’ Initial Reactions to the Feedback They Receive on Coursework

Pages 3-21 | Published online: 15 Dec 2015

Abstract

Understanding students’ reactions to the feedback they receive on coursework is crucial to delivering feedback which motivates them and helps them to do better. This study focused on undergraduate bioscience students on a variety of degree programmes, across all years of study, at three UK universities. A questionnaire was completed by students as they first read their feedback, thereby capturing their initial reactions to the comments they received. Focus groups assisted in the analysis of these initial reactions and also enabled discussion of how the students felt about their feedback. Our findings suggest that, although many students value feedback irrespective of their emotional response to it, others are clearly motivated or de-motivated by specific factors within the feedback that they receive. We suggest that this initial emotional reaction is fundamental to the student’s subsequent engagement with feedback, and that feedback which immediately de-motivates a student is likely to be of very limited value to the ongoing learning process. Recommendations to improve feedback include the need to offer positive, constructive comments, meaningful annotations and comments which justify the given mark.

Introduction

Delivering good quality and timely feedback is crucial to student learning, and improving feedback has become central to improving the student experience as a whole (Price et al. 2010, Orsmond & Merry 2011). Many reports on the student experience focus on feedback as a key issue. In the National Union of Students (NUS) Student Experience Report (2008), students were asked about the feedback they received, with particular reference to whether its format matched what they wished to receive. For example, students said that they would prefer to receive feedback through an individual meeting. However, students were unlikely to seek such verbal feedback unless their mark was lower than they expected. Ramsden’s report (2008) to the Department for Innovation, Universities and Skills outlined many changes that could be made to improve the student experience, including the need to engage students in the feedback process. Jones et al. (2009) showed how students’ perceptions, based on their experiences at school, could affect how they evaluated the feedback process at university. Orsmond and Merry similarly showed how students utilized feedback based on their own expectations of what that feedback is designed to provide (see Orsmond & Merry 2011) and their own learning styles and ability (see Orsmond & Merry 2012). These authors warn that the expectations of how feedback should be used are often at odds between the student and the academic who is providing that feedback. However, students and staff have also been shown to share ideas on what constitutes good feedback (Lilly et al. 2010).

Improving the quality of feedback has been a subject of concern for many years. In a study of a large number of first-year undergraduates, Rhodes & Nevill (2004) found that the ‘quality of feedback on my work’ was rated deeply unsatisfactory. The manner in which feedback is provided is fundamental in encouraging the student to make effective use of it. Students have been found to dislike a lack of information, information that was too impersonal and general, poor handwriting, a lack of dialogue with the marker and feedback that did not feed forward into future assignments (Higgins et al. 2002, Crook et al. 2006, Orsmond & Merry 2011). Feedback is part of an overall learning strategy that can also make use of self and peer assessment, which can encourage students to take an active role in their own learning (Nicol & Macfarlane-Dick 2006). Students value feedback that relates to deep rather than surface learning (Higgins et al. 2002), and assessment, feedback and marking criteria are most powerful when the student is engaged in a dialogue with tutors about them (Rust et al. 2005).

Many studies have suggested ways of improving students’ experiences with feedback, and it is clearly valuable to ask students to look back over their experiences and respond to questions about how they felt about the feedback they received (e.g. Weaver 2006, Hounsell et al. 2008, Lizzio & Wilson 2008). From such studies, Weaver (2006) concluded that guidance and motivation were the most crucial aspects of feedback from the student perspective. She also recommended a better balance between critical and positive feedback. This emphasis on guidance was also shown by a large-scale study of bioscience students (Hounsell et al. 2008). However, Stobbart (2006) argues that while feedback that praises a student can be used as a tool to increase motivation, it may hinder deep learning by encouraging students to seek praise by the easiest methods.

In a study by Sargeant et al. (2009), responses to feedback were investigated in a small group of family physicians. Responses were divided into a series of stages. The first was to consider whether or not the feedback was consistent with their self-perceptions. If the tone of the feedback matched these preconceptions, the reflection period which followed was relatively short; if it did not, a longer period of reflection resulted. The authors developed a model which maps out reflective and emotional responses: feedback that was consistent with the subjects’ self-perceptions was associated with positive feedback, while inconsistent feedback was assumed to be negative. This would suggest that the subjects of this study had strong positive self-perceptions, which may not necessarily be the case with undergraduate students. The emotional response to positive or negative feedback may be more consistent between different study groups, although this is very hard to study, being highly subjective (Agarwal & Meyer 2009). Agarwal & Meyer (2009) stress the importance of studying the relevance of emotional responses, as receiving feedback is a complex and emotional issue.

Existing studies, including that of Sargeant et al. (2009), focus on the response to feedback after a time of reflection, whereas in the study presented in this paper students were asked to complete a questionnaire on their response to feedback while they were first reading it. The questionnaires therefore focused on the highly emotional stage of receiving feedback, where students see the mark and comments for the first time and may be immediately encouraged or de-motivated. The objectives of the study were to:

  1. Investigate the immediate response of students to the feedback they received, and how these responses, in terms of happiness and perceptions of fairness, related to motivation.

  2. Relate the findings to the model of reflection and decision making of Sargeant et al. (2009). This model came out of work with family physicians, and this paper explores whether it could be more widely applicable.

  3. Elicit recommendations for improvements to feedback to students.

Methods

Three universities took part in this study, designated Universities A, B and C. The study focused on undergraduate bioscience students taking a variety of degree programmes, in all years of study, in the 2009–2010 academic year. A questionnaire was designed (Appendix 1) to assess how students felt as they first read the feedback on their coursework. Coursework came from a variety of modules, and students were asked to fill out the questionnaire as they were first reading their feedback. The voluntary and anonymous questionnaire comprised a range of questions which asked students to assess how they were feeling on a scale from –4 to +4, with negative numbers indicating a negative, unhappy feeling or one of unfairness, and positive numbers indicating a positive, happy feeling or one of fairness. The zero in the middle of the scale was taken as a neutral feeling. There were also boxes where students could write about specific comments which motivated or did not motivate them; these responses were put into categories, the criteria for which are outlined in Appendix 2. The questionnaires were made available to students where they picked up their coursework. Ethical approval for the study was obtained from the School of Education’s Ethics Committee at the University of East Anglia. The number of questionnaires filled out by students at each of the universities, and the number of modules these questionnaires came from, is detailed in Table 1.

Table 1 Number of completed questionnaires and number of modules questionnaires came from, for each university

Groups of students from different years, selected to represent a range of abilities, were invited to join focus groups to address specific areas of feedback and motivation. These were semi-structured interviews where students were asked about the types of feedback, and specific comments within feedback, which motivated or did not motivate them. Students were encouraged to suggest a series of recommendations for improvements to feedback. At University A groups were held for years one, two and three, and at University C groups were held for years one and three. No focus groups were held at University B.

Consideration had to be given to how data were grouped for analysis. The response rate was very variable, with some modules providing only a small proportion of returns, relative to the class size, for each piece of coursework. A limited response rate had the potential to bias the results if the responses came predominantly from a small subset of potential respondents. The proportions of students within specific mark ranges for the modules as a whole were therefore compared to the range of marks for respondents. Overall, the total number of returned forms for the three universities was 518, from a total of 19 modules. Responses came from a wide range of modules, but represented a minority of those available, and work in different modules was marked by many different people with different feedback styles. Taken individually, this became a very complex matrix of caveats, so results could not be taken as fully representative of each university or of individual modules. Combining universities, however, gave a sample size of 518 returns, allowing a wide level of general representation to be claimed. In addition, there was no significant difference in the levels of satisfaction, motivation and acceptance of fairness between the different universities. Therefore, for all results reported later, universities were combined.
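
To make the pooling decision above concrete, the following is a minimal sketch (not the authors’ code, and using entirely hypothetical counts) of how response distributions for the three universities could be compared with a chi-square test before combining them; a non-significant result would support pooling, as reported above.

```python
# Minimal sketch, assuming hypothetical counts: rows are Universities A-C,
# columns are questionnaire scores collapsed into negative / neutral / positive.
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([
    [12, 30, 158],   # University A (hypothetical): negative, neutral, positive
    [10, 25, 140],   # University B (hypothetical)
    [ 8, 20, 115],   # University C (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A non-significant p-value here would justify combining universities,
# as was done for all results reported below.
```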

Findings

Satisfaction and motivation felt by students split by gender, year and mark awarded

Responses were analysed to look for differences between gender and year groups (Figures 1 and 2). There was no significant difference in the general level of motivation (χ2 = 10.85, df = 8, ns; Figure 1a) nor in contentedness (χ2 = 10.51, df = 8, ns; Figure 1b) felt by male and female students. When first seeing the feedback, year two students were less happy than year one students (χ2 = 33.17, df = 8, p < 0.001; Figure 2). This difference in initial feeling about the work translated into significant differences in motivation (χ2 = 19.14, df = 8, p < 0.05; data not shown). There were only 23 returns from year three students, so they were not included in this part of the analysis.
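
As an illustration only (the questionnaire data themselves are not reproduced here), the sketch below shows the form of chi-square test used for these comparisons: counts of responses at each point of the –4 to +4 scale are cross-tabulated by gender, giving df = 8 for a 2 × 9 table. All counts are hypothetical.

```python
# Minimal sketch with hypothetical counts of -4..+4 motivation scores by gender.
import numpy as np
from scipy.stats import chi2_contingency

# One count per scale point, from -4 (left) to +4 (right).
female = [2, 3, 5, 8, 20, 35, 60, 70, 40]   # hypothetical
male   = [3, 2, 6, 7, 18, 30, 55, 65, 35]   # hypothetical

chi2, p, dof, _ = chi2_contingency(np.array([female, male]))
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # df = (2-1) x (9-1) = 8
```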

Figure 1 How a general feeling about feedback and motivation relates to gender. Response split by gender for the question ‘Having read your feedback, how motivated do you feel to try harder next time?’ (1a) and ‘Compared to how you feel in general, how do you feel as you read the feedback you have been given?’ (1b), female (white), male (black)

Figure 2 A general response to feedback relative to students’ year of study. Response split by year, year one (yellow) and year two (green), to the question: ‘Compared to how you usually feel in general, how do you feel as you read the feedback you have been given?’

The responses to the questions were also separated according to the mark each student was awarded (Figure 3). For marks less than 50% there were only nine returned questionnaires, and for 50–59% only 35 (1.7% and 6.7% of the total respondents, respectively), so these were combined into a single <60% mark bracket. Whilst this imbalance could introduce a bias in these findings towards higher-achieving students, it is an interesting observation in itself, especially when compared to the proportion of responses from higher-achieving students (see Table 2). Statistically, there was a significant difference between the contentment of those receiving a low grade (<60%) and those receiving a middle grade (60–69%) (χ2 = 180.38, df = 8, p < 0.001), and also in their motivation (χ2 = 128.41, df = 8, p < 0.001). Students receiving the lowest grades (<60%) were less content (χ2 = 24.74, df = 8, p < 0.01), but not statistically less motivated (χ2 = 13.74, df = 8, ns), than those in the middle (60–69%) and high (>69%) grade brackets. The percentage of students gaining a mark above 69% for their coursework overall was lower than the percentage of students returning questionnaires who had received a mark above 69% (Table 2); that is, students receiving a mark above 69% were more likely to fill in questionnaires than those who had received lower marks.

Figure 3 How the mark awarded relates to a general feeling about feedback and motivation. Responses to the questions ‘Compared to how you feel in general, how do you feel as you read the feedback you have been given?’ (3a) and ‘Having read your feedback, how motivated do you feel to try harder next time?’ (3b), with results split by the mark awarded for the assignment: <60% (blue); 60–69% (black); >69% (red)

Table 2 Comparing the percentage of all students in each mark category across all modules in the study and for returned questionnaires

The most interesting results were found when the responses were split by whether marks were lower than expected, as expected, or higher than expected (Figure 4). For the question ‘Compared to how you usually feel in general, how do you feel as you read the feedback you have been given?’, there was a significant difference (χ2 = 114.26, df = 16, p < 0.001) in how students felt when they first saw feedback on their work, according to how their mark met their expectations. There was less difference between their motivation to try harder next time and how their mark met their expectations, although the difference was still significant (χ2 = 51.47, df = 16, p < 0.001). If the mark was lower than expected, there was a significant difference between students’ general feeling and their motivation, such that they felt unhappy but were motivated (Mann-Whitney U = 537.5, p = 0.002). This was also true if the mark was as expected (Mann-Whitney U = 479.5, p = 0.011). However, if their mark was higher than expected, students felt happy and were motivated, and there was no significant difference between the two values (Mann-Whitney U = 509, p = 0.55). Regardless of their feelings and motivation, the majority of students felt that their feedback was fair, even if the mark did not agree with their expectation. The essential message, however, was that, regardless of the mark offered in relation to expectation, students were motivated by feedback.
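
The within-group comparisons above contrast two sets of scale scores (general feeling versus motivation) for the same expectation group. A minimal sketch of such a Mann-Whitney test is given below; the scores are invented purely to show the shape of the comparison, not the study data.

```python
# Minimal sketch: comparing "general feeling" and "motivation" scores (-4..+4)
# within the group whose mark was lower than expected. All values are hypothetical.
from scipy.stats import mannwhitneyu

general_feeling = [-3, -2, -2, -1, -1, 0, 0, 1, -2, -1]   # hypothetical
motivation      = [ 1,  2,  0,  3,  2, 1, 2, 3,  1,  2]   # hypothetical

u, p = mannwhitneyu(general_feeling, motivation, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
# A significant difference with low feeling scores but high motivation scores
# mirrors the pattern reported above: unhappy, yet still motivated.
```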

Figure 4 Relationship between general feeling and motivation with respect to expectations of the mark awarded. Mean response of all students on a scale of –4 to +4 to the questions: ‘Compared to how you usually feel in general, how do you feel as you read the feedback you have been given?’ (blue; where –4 is less happy than usual and +4 is more happy than usual); ‘Do you feel the feedback you have been given is fair?’ (red; where –4 is very unfair and +4 is very fair) and ‘Having read your feedback, how motivated do you feel to try harder next time?’ (black; where –4 is very discontented/unmotivated and +4 is very contented/motivated)

Satisfaction and motivation felt by students related to fairness

In general, most students felt feedback was fair and were motivated by it (Figure 5). There was little difference between whether this feeling was elicited by the mark or by the comments provided (not shown). No correlation, nor any significant difference, was found when comparing whether general feelings (Mann-Whitney U = 0.00001, p = 0.604) or feelings of motivation (U = 0.00001, p = 0.473) came from the marks or from the comments. In other words, students gave no clear indication that either the mark or the comments had the greater effect on their level of happiness or motivation when first seeing their feedback.

Figure 5 General responses of all students relating to motivation and fairness. Response to the question ‘Having read your feedback, how motivated do you feel to try harder next time?’ (5a), where –4 is very unmotivated and +4 is very motivated and to the question ‘Do you feel that the feedback you have been given is fair?’ (5b) where –4 is very unfair and +4 is very fair

In looking for a relationship between a general feeling of happiness and fairness, most student responses were in the positive area of the graph, with few students unhappy and few who felt the feedback was unfair (Figure 6; Spearman’s rank correlation coefficient = 0.62, n = 57). This was also true for the relationship between fairness and motivation (Figure 7; Spearman’s rank correlation coefficient = 0.69, n = 51), showing that, on the whole, students felt motivated and felt that feedback was fair. When students felt that feedback was fair, they mostly felt content, whereas if they felt that feedback was unfair, they were inclined to be unhappy. However, regardless of fairness, any sort of feedback tended to result in students being better motivated. Students were motivated by feedback in whatever form, although the positive effect of feedback could be improved by considering the free-form comments offered by the students. On all questionnaires, students were asked to list key feedback comments which either motivated or did not motivate them; these answers were then categorized (Tables 3 and 4).
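
For readers who wish to reproduce this style of analysis on their own data, the following sketch (hypothetical paired responses, not the study dataset) shows how a Spearman rank correlation between fairness and general feeling could be computed.

```python
# Minimal sketch: Spearman rank correlation between perceived fairness and
# general feeling, both on the -4..+4 questionnaire scale. Values are hypothetical.
from scipy.stats import spearmanr

fairness        = [4, 3, 3, 2, 4, 1, 0, -1, 2, 3, 4, 2]
general_feeling = [3, 3, 2, 2, 4, 0, 0, -2, 1, 2, 3, 1]

rho, p = spearmanr(fairness, general_feeling)
print(f"Spearman's rho = {rho:.2f} (n = {len(fairness)}), p = {p:.3f}")
# A strong positive rho corresponds to the pattern in Figure 6: students who felt
# the feedback was fair also tended to feel happier about it.
```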

Figure 6 The relationship between fairness and a general feeling about feedback. Comparing responses to the question ‘Do you feel that the feedback you have been given is fair?’ (fairness) and the question ‘Compared to how you usually feel in general, how do you feel as you read the feedback you have been given?’ (general feeling). In the bubble plot, larger circles indicate greater numbers of data points

Figure 7 The relationship between fairness and motivation. Comparison of responses to the question ‘Do you feel that the feedback you have been given is fair?’ (fairness) and to the question ‘Having read your feedback, how motivated do you feel to try harder next time?’ (motivation)

Table 3 Categorization of feedback comments from free text responses students found motivating (see Appendix 2 for explanation of categories)

Table 4 Categorization of feedback comments from free text responses students found de-motivating (see Appendix 2 for explanation of categories)

In keeping with the view that students mostly see any feedback as positive, the total number of comments noting motivational feedback was more than twice the number noting feedback that was de-motivating. Indeed, it was interesting to find that a number of students either declared that ‘any feedback comment was motivating’ (Table 3) or, conversely, that ‘no feedback comments were considered to be de-motivating’ (Table 4). The most motivating feedback was, unsurprisingly, praise words such as ‘very good’ or ‘excellent’ (accounting for 41% of the motivational factors). By comparison, a quarter of that number of students noted that negative comments, such as ‘clutching at straws’ or ‘not enough detail’, actively de-motivated them. A total of 20% of the de-motivating factors focused on feedback that was either lacking in comments, or where those comments were illegible or unclear; in these cases students were not receiving any detailed feedback because it was either not provided or could not be read. Only one student commented that they felt demoralized by late feedback.

Several of the factors governing students’ immediate reactions to feedback, identified in the free-text comments, were also raised in the student focus groups. From these discussions the students came up with a number of recommendations for improving feedback in terms of aiding motivation and encouraging the active use of the feedback provided. Many recommendations were common across years and across universities, while others came from just one cohort of students. The significant recommendations are listed in Table 5.

Table 5 Key recommendations from student focus groups to improve feedback on coursework

Interestingly, when students said they needed more detailed comments on scripts, they remarked that having lots of comments on the script made them revisit it over a period of time after it was returned to them. Students also noted that they felt there was no such thing as too much feedback, supporting the findings of the free-text sections of the questionnaire. Students also felt that the best feedback included positive comments and specific guidance on how to improve. Regarding the physical presentation of the comments, legibility was mentioned, but a key issue was the colour of the ink used for written comments. Students specified that this should be a colour other than red, and a colour clearly different from the script; their initial reactions to comments written in red were very negative. Students commented that red was ‘associated with failure’, that it ‘means it is wrong’ and that if red was ‘scribbled on the page you assume it is a bad comment’.

Discussion

The majority of studies on students’ reactions to their feedback focus on reflective responses. Here the aim was to capture immediate responses, as students were first reading their feedback. The findings of this study show that first year students were generally happier with their feedback than second year students. Lilly et al. (2010) found that students were better able to accept constructive criticism as they progressed, which runs counter to the present finding that year two students were less happy with their feedback than year one students. This increased unhappiness, therefore, may not reflect a continued inability to cope with criticism. For all universities in this study, year one did not count towards the final degree classification, which may help explain this difference. There are many issues surrounding the year one to year two transition. First year teaching has generally been modified to cope with the school to university transition and all that that entails (Fee et al. 2009, Jones et al. 2009), and it is possible that this has resulted in a more marked transition from first to second year, which is being picked up in this study, with second year students less happy about their feedback than first year students. In general, students receiving lower grades were less happy and less motivated than students receiving higher grades. In addition, students who were awarded higher grades were more likely to fill in questionnaires than those receiving lower grades. This may be linked to the emotional response of receiving feedback; the lower the grade, the less happy a student might be and the less inclined to engage in an activity relating to the returned coursework. It has been suggested previously that students pay less attention to feedback when it is accompanied by a mark or grade (Rust 2002, Black et al. 2003). The present study would suggest, however, that it is less the attention they pay to their feedback, and more their feelings when they see their grade, which determines what they go on to do with their feedback.

When the data were explored according to whether marks were lower than, higher than, or as expected, and how this related to fairness, how students felt and their motivation (Figure 4), it was found that students given marks lower than expected were more likely to feel unhappy about their feedback, although most of these felt that the feedback itself was fair. It is important to consider that this reaction was an initial response. Given time to consider the mark and feedback, and to discuss this with peers and staff, these students may have formed a different opinion after a period of reflection. However, as Agarwal & Meyer (2009) state, it is the relevance of the emotional response which is important, and here this initial, highly emotional response is relevant to what students might do with the script subsequently. To be able to think beyond the mark awarded is crucial in entering into the process of reflection about the coursework (Sargeant et al. 2009). It is interesting to note that students with marks higher than expected were not significantly happier than those whose mark was as expected. This contrasts with the significant effect of a lower than expected mark on students’ happiness. By contrast, a very important finding is that, even where some students considered the feedback unfair and/or the mark lower than expected, the majority were still motivated to try harder next time. When the mark was lower than expected, more students were unhappy than neutral or happy, but this did not translate into negative feelings of motivation and fairness. The pattern for students with marks as expected, or higher than expected, was very similar, with most students happy, motivated and believing their feedback to be fair.

Nicol & Macfarlane-Dick (2006), in their seven principles of good feedback practice, highlight feedback which encourages positive motivational beliefs and self-esteem. Comments which students in this study found motivating were those which praised them and gave constructive advice (Table 3), whereas a large number of students noted that no feedback comments were considered de-motivating (Table 4). Conversely, negative words ranked as the most common form of feedback that students found de-motivating (Table 4). This suggests that student responses to comments are quite variable and may be difficult to predict. The dilemma is that feedback often needs to tell students where they went wrong, which is by default a criticism, and therefore negative. Students have identified a balance between positive and negative comments to be important (Weaver 2006). Weaver’s study (2006) highlighted that negative comments tend to be more detailed, whereas positive comments tend to be vague and short. Students can take critical comments if they are made constructively (Buswell & Matthews 2004). A negative comment may initiate a lack of motivation and a feeling of unhappiness, irrespective of whether the comment is deserved or not. Feedback which criticises the student can have a negative impact on performance, and it is important that students understand feedback is an evaluation of their work, not of themselves (Nicol & Macfarlane-Dick 2006). Lilly et al. (2010) recommend writing feedback in the third person to divorce feedback on the work from personal criticism of the student. In focus groups students commented that positive comments helped, but constructive negative ones were also acceptable. Orsmond & Merry (2011) found students wanted feedback which both criticised and praised.

Praise may have had a stronger impact upon students’ feelings about feedback than criticism. While the use of specific criticisms of their work was seen by several students as de-motivating (the second largest ‘negative’ category), the second largest category of motivational factors was ‘constructive comments’, where a criticism is couched in terms of advice or guidance, a so-called ‘softened negative’ (Read et al. 2005). While this feedback was a direct criticism of the student’s work, its ‘feed-forward’ aspect was noted by the students as being an encouragement. This positive view of a softened negative comment possibly reflects the general wish for feedback to show students clearly how to develop their work to attain a better mark (Orsmond & Merry 2011). Another major class of factors that promoted motivation was the identification of key points where the students had shown analytical or synthetic ability, or had referred to key research; these were key indicators for the higher marking bands in the marking criteria of all of the assessments. A large number of motivating comments were concerned with guidance over structure and format, indicating that students found direct advice and guidance reassuring. While negative comments can be de-motivating in the short term, this is by no means universal, and may only be a short-lived effect, with students taking a less emotive view of feedback in the longer term. The present findings must be seen in the context of the study, as initial responses to feedback comments, without a period of reflection.

Only one student cited late feedback as a de-motivating factor. The importance of promptness in feedback provision has been argued (Rust 2002, Nicol & Macfarlane-Dick 2006), but it did not appear to be a crucial aspect of feedback in terms of immediate response. However, students in some focus groups did note that, if work were to be returned late, it was important to state the time at which feedback would be returned, and to fulfil that deadline. The amount of feedback provided must also justify the lateness of return. More importantly, students felt that comments written in feedback should reflect the mark given. This is significant for the initial impact of the feedback, and its credibility to the student. However, this also emphasises that, without an active dialogue with the marker, students may not automatically be able to divorce constructive feedback from the fulfilment of marking criteria (Rust 2002, Rust et al. 2005).

While questionnaires provided evidence of students’ immediate reactions to feedback, focus groups accessed the ‘reflective reaction’. Students suggested they accepted negative comments if they were constructive and indicated where they could improve. This concurs with the free-text responses, which showed students wanting, and responding well to, comments which showed how they could improve. It has been recommended that markers should write balanced comments encompassing both the strengths and weaknesses of the student’s work, and annotations which close the gap between current and desired performance (Nicol & Macfarlane-Dick 2006, Ball 2009). The focus groups also highlighted the students’ desire for the use of model answers in feedback. Orsmond & Merry (2011) also suggest the use of model answers, but note that they should reflect the variation in students’ work. It may be that such an approach does not promote the student’s development of analysis and criticism of their own work; however, the minimal guidance approach has been criticised as being ineffective (Kirschner et al. 2006).

In a study of annotations on essay scripts, Ball (2009) found that the way comments were written was crucial to how they were interpreted by the student. Her study suggested students found too much annotation to be overwhelming, whilst in the present study students commented that there was no such thing as too much feedback. Students, both in their initial responses to feedback and in the focus groups discussing their reflected responses, did not like single words or ambiguous marks such as ticks and question marks without accompanying explanations. Students can be frustrated by single word comments (Orsmond & Merry 2011) and can respond negatively to feedback which does not provide enough information to be helpful (Higgins et al. 2002). Price et al. (2010) found that one of the most important aspects of feedback was an understanding of what was required and what was meant by feedback comments, with a discrepancy between what students wanted and what staff wished the students to achieve. More of a dialogue is required between the marker and the student (Price et al. 2010, Orsmond & Merry 2011). Personalised feedback appears to be an ideal, and markers need to identify methods of providing personalisation within the strictures of their assessment procedures. Use of the student’s name was identified as a motivational form of feedback in the present study. However, Lilly et al. (2010) suggest the opposite: that students like feedback written in the third person, to remove the personal aspect of it. The use of methods such as elective feedback and audio-recordings of feedback enables the personalisation of the feedback provided, whilst still retaining anonymity during the marking process (Merry & Orsmond 2008).

What constitutes good feedback has been found to be common across a wide range of disciplines (Lilly et al. 2010); relating the present study to that of Sargeant et al. (2009) should therefore find some common ground. Sargeant et al. (2009) investigated whether or not feedback was consistent with self-perceptions, which is similar to the present study investigating whether or not students found their feedback to be fair. Sargeant et al.’s model (2009) suggests that a student’s actions following feedback are not determined by whether or not the feedback met expectations; the biggest variable suggested by their model is the length of the period of reflection. The findings in this study look at whether the mark was higher or lower than students expected, and relate this to their emotional response to their feedback. In this way they mirror the beginning stages of the model of reflection and decision making of Sargeant et al. (2009). The initial reading of the feedback is used by the student in the light of their perceived ideas as to how they should have done. This elicits the emotional response which, according to the model, is determined by whether feedback is consistent or inconsistent with their self-perceptions. In Sargeant et al.’s model (2009), if feedback is inconsistent with self-perceptions, then the feedback is assumed to be negative. This observation is supported by Orsmond & Merry (2012), who revealed differences in students’ engagement with feedback based on their academic ability, their personal approaches to learning, their degree of metacognition and their ability to self-assess. Price et al. (2010) suggest that students’ value judgements of feedback are relative to their assessment of its immediate or short-term effectiveness. In the present study it is interesting to note that the majority of students felt that their feedback was fair, even where they received a mark that was lower than expected, and yet there was a differential emotive response according to how the mark achieved matched their expectations. As one might expect, both a mark that was as expected or higher, and feedback that was considered fair, elicited a positive emotive response, and such feedback is therefore likely to be utilised by the student more effectively. However, irrespective of fairness or happiness, most students were motivated by the feedback to do better. Each study reveals complex emotional links between feedback, expectations, fairness and utility. Although Sargeant et al.’s study (2009) focused on family physicians, the model is relevant to a wider focus. Self-perceptions may differ, but aspects of the model can contribute to unravelling feedback, self-perceptions and reflection in undergraduate students.

Recommendations made by students here generally reflect those from other studies (e.g. Carless 2006, Weaver 2006, Hounsell et al. 2008). Students want feedback which is constructive and shows them how to improve. Essentially, from the findings in this report, students’ recommendations related to feedback which they perceived to be fair, and feedback that would motivate them. Investigating their initial response to feedback highlighted the need to avoid red pen. Red was a colour associated with negative comments, and the initial reaction to a script covered with red would be that much of what the student had written was essentially wrong. It is important that this initial reaction is avoided so that students are able to take a more considered approach to their feedback.

The findings of this study indicate that students do have an immediate emotive response to the feedback that they receive. This response may have an impact on how they will utilise the feedback for future study, and therefore should be considered carefully when the feedback is provided. Feedback should be presented in an encouraging manner, and framed in constructive terms since, even though many students feel that any feedback is useful, there is a clear suggestion that positive feedback (even where constructive criticisms are warranted) is preferable to obliquely negative comments. However, the key finding of this study was that feedback was motivating even when the mark was less than expected and when accompanied by criticism. The more motivated a student is by the receipt of supportive and useful feedback, the more they are likely to use that feedback to feed forward in their learning and therefore develop the independence and deep-learning approaches essential to higher education.

Acknowledgements

The authors thank the HEA Centre for Biosciences for supporting this project and the anonymous referees for their useful comments.

References

  • Agarwal, A. Meyer, M. (2009) Beyond usability: evaluating emotional response as an integral part of the user experience. In CHI ’09 Proceedings of the 27th international conference. New Usability Metrics and Methods. Boston, MA, USA. doi: 10.1145/1520340.1520420.
  • Ball, E. (2009) A participatory action research study on handwritten annotation feedback and its impact on staff and students. Systematic Practice and Action Research 22, 111–124.
  • Black, P. Harrison, C. Lee, C. Marshall, B. Wiliam, D. (2003) Assessment for learning: putting it into practice. Maidenhead, UK: McGraw-Hill Education, Open University Press.
  • Buswell, J. Matthews, N. (2004) Feedback on feedback! Encouraging students to read feedback: a University of Gloucestershire case study. Journal of Hospitality, Leisure, Sport and Tourism Education 3, 61–67.
  • Carless, D. (2006) Differing perceptions in the feedback process. Studies in Higher Education 31, 219–233.
  • Crook, C. Gross, H. Dymott, R. (2006) Assessment relationships in higher education: the tension of process and practice. British Educational Research Journal 32 (1), 95–114.
  • Fee, H. Greenan, K. Wall, A. (2009) An investigation into secondary school exit standards: implications for university lecturers. International Journal of Management Education 8, 43–52.
  • Higgins, R. Hartley, P. Skelton, A. (2002) The conscientious consumer: reconsidering the role of assessment feedback in student learning. Studies in Higher Education 27 (1), 53–64.
  • Hounsell, D. McCune, V. Hounsell, J. Litjens, J. (2008) The quality of guidance and feedback to students. Higher Education Research and Development 27 (1), 55–67.
  • Jones, H.L. Bavage, A. Gilbertson, A. Gorman, M. Lodge, J. Phillips, K. Yeoman, K. (2009) Increasing the quality of feedback on assignments while altering student perceptions of good feedback based on their school experience. www.heacademy.ac.uk/resources/detail/evidencenet/Jones_Final_Report (accessed 3 September 2010).
  • Kirschner, P.A. Sweller, J. Clark, R.E. (2006) Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist 41 (2), 75–86.
  • Lilly, J. Richter, U.M. Rivera-Macias, B. (2010) Using feedback to promote learning: student and tutor perspectives. Practitioner Research in Higher Education 4, 30–40.
  • Lizzio, A. Wilson, K. (2008) Feedback on assessment: students’ perceptions of quality and effectiveness. Assessment and Evaluation in Higher Education 33 (3), 263–275.
  • Merry, S. Orsmond, P. (2008) Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education e-journal 11-3. www.bioscience.heacademy.ac.uk/journal/vol11/beej-11-3.pdf (accessed 14 October 2010).
  • Nicol, D.J. Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2), 199–218.
  • National Union of Students (NUS) (2008) Student Experience Report. www.nus.org.uk/PageFiles/4017/NUS_StudentExperienceReport.pdf (accessed 5 September 2010).
  • Orsmond, P. Merry, S. (2011) Feedback alignment: effective and ineffective links between tutors’ and students’ understanding of coursework feedback. Assessment and Evaluation in Higher Education 35, 1–12.
  • Orsmond, P. Merry, S. (2012) The importance of self-assessment in students’ use of tutors’ feedback: a qualitative study of high and non-high achieving biology undergraduates. Assessment & Evaluation in Higher Education. doi: 10.1080/02602938.2012.697868.
  • Price, M. Handley, K. Millar, J. O’Donovan, B. (2010) Feedback: all that effort, but what is the effect? Assessment and Evaluation in Higher Education 35 (3), 277–289.
  • Ramsden, P. (2008) The future of higher education. Teaching and the student experience. Report for the Department for Innovation, Universities and Skills. www.dius.gov.uk (accessed 20 September 2010).
  • Read, B. Francis, B. Robson, J. (2005) Gender, ‘bias’, assessment and feedback: analyzing the written assessment of undergraduate history essays. Assessment & Evaluation in Higher Education 30 (3), 241–260.
  • Rhodes, C. Nevill, A. (2004) Academic and social integration in higher education: a survey of satisfaction and dissatisfaction within a first-year education studies cohort at a new university. Journal of Further and Higher Education 28 (2), 179–193.
  • Rust, C. (2002) The impact of assessment on student learning: how can the research literature practically help to inform the development of departmental assessment strategies and learner-centred assessment practices? Active Learning in Higher Education 3, 145–157.
  • Rust, C. O’Donovan, B. Price, M. (2005) A social constructivist assessment process model: how the research literature shows us this could be best practice. Assessment and Evaluation in Higher Education 30, 231–240.
  • Sargeant, J.M. Mann, K.V. van der Vleuten, C.P. Metsemakers, J.F. (2009) Reflection: a link between receiving and using assessment feedback. Advances in Health Sciences Education 14, 399–410.
  • Stobbart, G. (2006) The validity of formative assessment. In Assessment and learning (ed. J. Gardner), pp133–146. London, UK: Sage Publications.
  • Weaver, M.R. (2006) Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education 31 (3), 379–394.

Appendix 1.

The Questionnaire Completed by Respondents

Module this form relates to:__________________________________________________

For the following questions, please circle the number on the scale which best represents how you feel.

1. Compared to how you usually feel in general, how do you feel as you read the feedback you have been given?

2. a) Please indicate how much of this feeling came from seeing the mark?

   b) Please indicate how much of this feeling came from reading the comments?

3. Having read your feedback, how motivated do you feel to try harder next time?

4. a) Please indicate how much of this motivation/de-motivation came from seeing the mark?

   b) Please indicate how much of this motivation/de-motivation came from reading the comments?

5. Do you feel the feedback you have been given is fair?

6. a) Which comments on the feedback, if any, made you feel happy/motivated?

   b) Which comments on the feedback, if any, made you feel unhappy/de-motivated?

In the following questions please tick where appropriate.

7. In which category is your mark? 39% or less / 40–49% / 50–59% / 60–69% / 70% or more

8. Was this mark as you expected it to be, or was it higher or lower than you expected?

What is your age?

Are you male or female?

What is your year of study?

We welcome all positive and negative comments you may wish to add about the feedback you are reading, so if you would like to make further comments, such as listing words or phrases in your feedback which you did not understand, please do so on the back of this form.

Appendix 2

Table A1 Categories for free text responses for comments which motivated students

Table A2 Categories for free text responses for comments which did not motivate students
