
‘It'd be useful, but I wouldn't use it’: barriers to university students’ feedback seeking and recipience


Abstract

For feedback to be effective, it must be used by the receiver. Prior research has outlined numerous reasons why students’ use of feedback is sometimes limited, but there has been little systematic exploration of these barriers. In 11 activity-oriented focus groups, 31 undergraduate Psychology students discussed how they use assessment feedback. The data revealed many barriers that inhibit use of feedback, ranging from students’ difficulties with decoding terminology, to their unwillingness to expend effort. Thematic analysis identified four underlying psychological processes: awareness, cognisance, agency, and volition. We argue that these processes should be considered when designing interventions to encourage students’ engagement with feedback. Whereas the barriers identified could all in principle be removed, we propose that doing so would typically require – or would at least benefit from – a sharing of responsibility between teacher and student. The data highlight the importance of training students to be proactive receivers of feedback.

1. Introduction

Feedback can have powerful effects on students’ learning and skill development (Hattie and Timperley 2007). Indeed, Laurillard (2002, 55) argues that ‘action without feedback is completely unproductive for a learner’; yet it is increasingly apparent that feedback without action is equally unproductive. That is to say, if learning gains are to occur, students must participate actively in the feedback process and act upon the feedback they receive (Delva et al. 2013). Relative to the wealth of research on giving feedback, however, the education literature has paid considerably less attention to the process of receiving feedback (e.g. Burke 2009). In this paper we explore the barriers that prevent university students from effectively implementing assessment feedback.

We can think of the giving and receiving of feedback as a communicative event. Johnson and Johnson's (1994) Interpersonal Communication Model highlights that when messages are transmitted from a sender to a receiver, the receiver's role is as crucial as the sender's, and involves decoding, interpreting, and responding to the message. In this framework, various sources of noise can prevent clear messages from being transmitted; this noise might originate from the sender, for example, via a lack of message clarity, or from the receiver, for example, via a lack of attention. In the education literature, several theoretical accounts similarly conceptualise feedback as a communicative exchange or dialogue. For instance, Nicol (2010) argues that feedback can only be effective if treated as a two-way communicative process. For Nicol, this means that written feedback should be received within a context where staff and students discuss feedback, students pose questions and reflect, and peer-feedback processes serve to provide additional dialogue. Likewise, Beaumont, O'Doherty, and Shannon (2011) represent the feedback process as a ‘dialogic cycle’ wherein teacher and student have multiple opportunities to engage. In their model, receiving feedback is not seen as a single event, but as a process that begins with teacher-student dialogue when the task is set, through guidance whilst the task is undertaken, to performance feedback accompanied by a verbal discussion informing the process of action planning. The student therefore has a key role in deciding where and when to seek feedback and engage in dialogue.

In short, conceptual models of academic feedback often recognise that students’ involvement is crucial, but do students actually engage in this way? The Higher Education literature often paints a pessimistic picture, highlighting students’ weak implementation of feedback comments (Hyland 1998), skim reading (Gibbs and Simpson 2004), and failing to even collect feedback (Hounsell 2007; Sinclair and Cleland 2007). Withey (2013) highlights a so-called feedback paradox, whereby students clearly recognise the importance of feedback, and frequently complain about the quality of feedback they receive, yet also make limited use of it. It is of course true that many students engage well with assessment feedback (Higgins, Hartley, and Skelton 2002); nevertheless, if we wish students to be active receivers of feedback, it is essential that we ask why they may be unable or unwilling to do so.

1.1. Barriers to using feedback

In general, the content of feedback undoubtedly influences the quality of students’ engagement. The extent to which feedback supports the development of self-regulation skills is especially fundamental, and appears among Nicol and Macfarlane-Dick's (2006) seven principles of good feedback practice. In one study, Orsmond and Merry (2011) found that relatively few of the tutors’ comments within a sample of genuine written feedback were designed to encourage active engagement (e.g. engaging students in thinking, suggesting approaches to future assignments); it is easy to see why students might fail to engage with feedback that has little developmental emphasis. Complementing the research literature on how engagement with feedback can depend on its content, many studies have outlined interventions for making the content or delivery format of feedback more useable. For example, Hughes (2011) describes an ipsative approach of focusing feedback on the learner's individual improvement, rather than on their performance relative to grading criteria. Recent evidence suggests that this form of feedback could better encourage students to engage and subsequently act (Hughes, Wood, and Kitagawa 2014), particularly in modularised programmes where the timing and focus of feedback comments often make it difficult for students to relate them from one assignment to the next (Hughes, Smith, and Creese 2015). Focusing instead on delivery format, Hepplestone et al. (2011) describe various ways in which students’ engagement with feedback might be improved when it is provided via technological means, such as virtual learning environments. As one example, learning technologies make it simple to restrict students’ access to their grade until they have responded to the written feedback. Overall, by focusing on how to improve students’ engagement with feedback, interventions such as these seem more likely to reap rewards than would interventions focusing solely on delivering more feedback.

Taking a broader overview, Jonsson (2013) recently identified five reasons why, in different circumstances, students may not use feedback: (1) it may not be useful; (2) it may not be sufficiently individualised; (3) it may be too authoritative; (4) students may lack strategies for using feedback; and (5) students may not understand the terminology used. It is noteworthy that at first glance most of Jonsson's (2013) factors seem to place primary responsibility on the feedback sender, rather than the receiver, in ensuring that feedback is effectively used. In contrast, Handley, Price, and Millar (2011) discussed how students’ ‘readiness to engage’ is also crucial, incorporating factors such as their motivation to receive feedback, and their emotional response. Further barriers, such as students’ weak assessment literacy skills, may also contribute to the lack of engagement with feedback, and these kinds of barrier will undoubtedly require investment from both sender (teacher) and receiver (student) to resolve. For example, students’ (mis)understandings of the difference between formative and summative assessment, and their respective functions, can play a role in how they interpret and act upon feedback they receive (Price et al. 2010).

In sum, these studies highlight that implementing feedback effectively is a responsibility shared between sender and receiver. However, finding optimal ways to support students’ use of feedback is difficult without first understanding the barriers to this kind of engagement. The studies reviewed above illustrate that numerous barriers have been discussed in the literature, yet it would be valuable to explore the diversity of barriers more systematically, with the goal of identifying their shared underlying psychological processes. Such understanding would facilitate the design of interventions that target not only the behavioural manifestations of poor engagement, but also the psychological processes that underlie those behaviours. With this goal in mind, the present study systematically explored students’ perceptions of the barriers that limit their effective use of feedback, through focus groups with students in our own discipline of Psychology.

2. Method

2.1. Participants

A total of 31 undergraduate psychology students consented to participate in exchange for either course credit or £10. This study was conducted as part of a broader consultation exercise on feedback, and so our sample size was determined solely by the number of students who wished to and were able to take part within a specified time-frame, rather than by the number required to achieve saturation. Because our focus groups were activity-oriented (see below), we expected that the interactions would be far more hands-on and animated than is typical in focus groups (and this was indeed the case). The use of activities also meant that our schedule relied less on standardised questioning than does a typical focus group schedule, which can warrant conducting a higher than usual number of focus groups (Morgan 1996). For these reasons we conducted a greater number of groups with fewer participants in each, rather than the more usual setup of fewer groups containing more participants. Our participants therefore formed 11 focus groups, each containing 2–4 students.

Most participants were in their first year (n = 12) or final year (n = 13), with fewer from the second year (n = 5) and the professional training year (n = 1). Most were female (n = 28), broadly representative of the gender demographic of British psychology undergraduates. This relatively homogeneous sample has both advantages and disadvantages. Using participants from a single discipline creates a ‘bounded setting’ (Jazvac-Martek 2009), which can be beneficial for providing a shared understanding of concepts, experiences, and terminology in the emerging discussion. Yet homogeneity also makes the generalisability of the findings uncertain. It is plausible that different student demographics would identify different or additional barriers; nevertheless, we hoped that this concern would be somewhat mitigated by our theoretical emphasis on the underlying psychological processes that the barriers held in common.

2.2. Materials and procedure

A research assistant – a graduate who was unknown to participants – conducted the focus groups, which ranged from 47 to 83 minutes (M = 67.81; SD = 12.20). Following Kitzinger (1994), the sessions began with rapport-building through mutual introductions and conversation, then followed a semi-structured format; the researcher's involvement was minimal other than asking standardised introductory questions, explaining the activities, and keeping the discussions on topic. The researcher first asked the group about the type of feedback they normally receive, what they and other students do with feedback, how they think lecturers expect them to use feedback, and whether they might do anything specific to make better use of their feedback. Participants were also asked whether they could describe any interventions that might support their use of feedback.

Next, the groups undertook two self-paced activities. Activity-oriented focus groups are known to support participants in expressing their perspectives, and can elicit richer dialogue than questioning alone (Colucci 2007; Winstone et al. 2014). In this style of focus group the activities themselves may often produce interesting task-related data, but their primary purpose is to stimulate discussion (Colucci 2007). This was true in our study: our focus was on the dialogue about barriers to using feedback that was elicited throughout the activities, rather than on the outcomes of the activities per se. It is important to note that we did not directly ask students about barriers; rather, we allowed and encouraged this discussion to emerge spontaneously.

2.2.1. Activity one: discussing exemplar feedback

In the first activity we showed each group 10 written comments, taken from genuine summative feedback scripts received by students in our own department (see Table S1, online supplemental materials). Participants were told that these were genuine, but not who had written them. For each comment, groups were asked to discuss what actions they might take in response to receiving that feedback. Two comments covered each of five themes: argument quality, using evidence, critical evaluation, writing, and structure. For example, one read ‘Your overall structure is clear but you need to work on your paragraph transitions. In many places, your argument can be difficult to follow because you move between different topics without clear signposting to the reader'. These comments were chosen not to represent instances of good or useful feedback, but rather, simply to represent the style of feedback that these students were accustomed to receiving on summative assessments. Of course, using different comments would undoubtedly influence how participants responded. However, our focus was on the more general dialogue elicited spontaneously about barriers, not on participants’ thoughts about the specific feedback comments.

2.2.2. Activity two: ranking interventions

In the second activity, participants read brief descriptions of 10 interventions for supporting students’ use of feedback, which were identified in a systematic literature review (Winstone et al. forthcoming, a). These were: developing an action plan; receiving resources on using feedback; self-assessment; peer-assessment; receiving feedback without a grade; keeping an assessment portfolio; attending a workshop on using feedback; engaging with the marking criteria; discussing feedback with teachers/lecturers; and communication of feedback via technology. The definitions that participants received are listed in Table S2 of the online supplemental materials.

Through discussion, groups rank-ordered the interventions according to both (a) how useful they perceived them to be in principle, and (b) how likely they would be to actually use them in practice. Once again, we were primarily interested in the dialogue spontaneously elicited about barriers whilst participants completed this activity, rather than in the results of the activity per se. However, because readers may be interested in the rankings in their own right, we report the data in Table S3 in the online supplemental materials.

2.3. Data analysis

All discussions were transcribed verbatim, and analysed in parallel using thematic analysis following Braun and Clarke (2006). We chose this method for its flexibility, and its usefulness for summarising large datasets and generating unanticipated ideas. We adopted a realist approach, iteratively and inductively searching for semantic themes within and across transcripts. All 11 transcripts were first read in depth to allow familiarisation with the data. During this process, initial codes were noted and gathered into themes. Themes were then reviewed against the entire dataset iteratively, refining until a final set of themes and subthemes had emerged. During this phase of analysis, any differences of opinion between the authors were resolved through further examination of the data and discussion with a student collaborator.

3. Results and discussion

In their discussions, participants consistently described barriers to understanding and implementing feedback. Analysis of these discussions revealed four main themes that captured different psychological processes underpinning these difficulties. Within each main theme, two subthemes emerged (Table 1). Although the different groups discussed quite different ideas and experiences, all four main themes and all eight subthemes were represented in all 11 transcripts, suggesting a strong convergence of data.

Table 1. Main themes (psychological processes) and subthemes (barriers).

3.1. Students’ awareness of what their feedback means, and what it is for

For feedback to be implemented, it needs first to be understood. Participants spoke of difficulties in decoding their feedback and the academic jargon it contained, and described how such difficulties limit its utility (see Note 1):

  1. H: I feel like, sometimes on the feedback, it's just a lack of understanding of what it really means –

    I: Yea.

    H: that holds you back from using it.

  2. G: But sometimes, they're a bit more confusing. Like, ‘Oh’, erm, ‘Be careful with your structure.’ And this, I can't quite process it, so I don't take it into consideration that much.

  3. Y: We've never been told what flair is.

    AB: [Laughs]

    Y: I haven't.

    Z: No.

    AB: No.

    Z: It's questioning the question or something like that.

    Y: A seventies pair of trousers, isn't it? I just wouldn't know. I just … I dunno what they mean by flair.

Feedback providers presumably expect that their comments can usually be easily decoded and used; yet students may require further intervention to decode complex messages and language (Carless 2006; Nicol and Macfarlane-Dick 2006). Participants expressed particular frustration about lecturers’ use of complicated language when commenting on the students’ own clarity of writing:

  • (4) EF: I wish they'd just communicate it in a more, like – well not friendly way, but just … a way that explains it, rather than all – like, the language that they use and everything. Like I … I kinda hear it in my head as like the posh Radio 4 lady saying it!

  • (5) I: I think it could've just been worded a lot simpler. I think sometimes … 

    H: Mm.

    I: … on reports back, the language use is quite confusing, which seems a bit contradictory. Cos they often say that my language is confusing!

Aside from students’ understanding of what their feedback means, another important type of understanding identified within this theme was of what feedback is for (Withey 2013). Participants revealed aspects of these ‘feedback mental models’, which in most cases appeared valid: by far the most common conception was that feedback serves to support skill improvement. Some participants, though, described more nuanced perceptions of the purpose and nature of feedback. These participants referred to functions beyond facilitating improvement, and broader understandings of the types of communication that can constitute feedback:

  • (6) H: I suppose it's to reflect on the work that you did as well. So, not just focusing on the next piece, but kind of, reflecting on your process of even just writing the coursework. And how, from the feedback, that can be improved. And looking over what … yea, what you did, really.

  • (7) T: Also you get, obviously, um, in class feedback from your lecturers. Like on-going, you know, every time you go to a lecturer, you're always getting some form of feedback from what they're kind of telling you, and answering questions and um I guess, even like, going to them after a lecture as well, kind of going in their open office hours and things like that, acts as feedback.

Some participants also showed awareness that feedback needs to be actively used in order to be purposeful, and were able to see their lecturers’ perspectives:

  • (8) U: I can imagine the lecturers get quite annoyed if you don't use the feedback, because they've spent loads of time, like, going through it. And then, if you don't use it, it's just pointless.

In sum, difficulty in using feedback can result from a lack of knowledge of what feedback means, so message senders have a vital role in conveying their comments in clear terms, avoiding or explaining academic jargon. Some students’ use of feedback might also be limited by narrow conceptions of the purpose of feedback, and so supporting students to broaden these conceptions might encourage better engagement.

3.2. Students’ cognisance of appropriate strategies for implementing feedback, and the opportunities available

To use feedback effectively, students need to be cognisant of behaviours and strategies that will be beneficial. In Jonsson's (2013) review, lack of such cognisance emerged as a possible explanation for students’ poor use of feedback. In contrast, our participants seemed aware of certain strategies they could adopt (quotes 9 and 10), but recognised that they could use these strategies better (quote 11):

  • (9) Y: Before we go hand our essays in, we'll always swap, proof read each other's, say like ‘Oh, I didn't quite understand this.’ cos normally we pick completely separate things.

  • (10) J: If it's like an essay, what I'll do is, I'll go through it and then I'll write down on a piece of paper, like … so I'll go through my essay with the points … the summary points, and see the things that I've done well and, like ‘Okay, make a note to keep doing that’, and the things that I haven't done so well, to say, erm, to improve on it.

  • (11) I: Um, I think ideally, I should go through all my feedback and kind of find the points of commonalities. Erm, and make a list of those and just be aware of those consciously but, again, when you've got five hundred things to do … 

    G: [Laughs]

    I: it's not really on the top of your priority list.

Thus, beyond those strategies they could adopt unassisted, students also need to be cognisant of opportunities for seeking further support in using feedback. In their discussions, participants showed that they were aware of how academic staff facilitate these opportunities:

  • (12) Z: I think the majority are quite open, inasmuch as, you know, ‘Come and discuss it and digest it and … ’ … you know, it's obviously why they write it, and they obviously don't do it for fun.

  • (13) EF: We have the privilege of being in this place with all these, like, really intelligent people. Um, and they're like, the best people that we could go to, to ask about these things.

    CD: Mm.

    EF: But I think it's just knowing how to use that kind of resource.

Yet whilst participants appeared to know support was available, they were aware that they often failed to take advantage of these opportunities:

  • (14) Y: I think technically, you have got the access to the marking criteria, before every single time. We assess … we've got the handbook with the, whatever it is, the grade descriptors in it.

    AB: That's right.

    Y: And I never look at the grade descriptors and compare it.

    Z: I haven't looked at them this year.

    Y: So the fact that those resources are there and I, personally, just never use it.

    Z: Yea.

    Y: I don't. I can't see what would encourage me to do it.

  • (15) S: Other people might be more active than me in seeking a feedback from … individual feedback from lecturers, face-to-face. I've never done that.

Alongside this awareness of certain opportunities, participants expressed relative ignorance of other opportunities, or showed that they required explicit prompting to engage with them:

  • (16) Y: Self-assessment.

    AB: I wouldn't use that. Well … 

    Y: I wouldn't use it.

    Z: Well, we've had a chance to use that, haven't we?

    AB: I dunno … 

    Y: We’ve got the resources to do it, and none of us do it.

  • (17) C: I think this was the first year that we were told, ‘Well if you want to go and get feedback on your actual exam, then you can, and this is the person you go to.’ … But before that, no we'd never really been told … we, … we knew we could, but no-one ever, like, took up the opportunity.

In short, these data highlight that making students aware of appropriate strategies and opportunities for implementing feedback is important, but insufficient. Our participants were aware of strategies they might adopt in principle, yet had difficulties appreciating what those strategies require in practice, and how to avail themselves of support. To make use of support, students sometimes need more direction than just an invitation (Price et al. 2010).

3.3. Students’ agency to implement strategies for using feedback

Many participants described a sense of disempowerment around using feedback, sometimes seeing ‘no point’. One example concerned students’ sense of learned helplessness, where they perceived that implementing feedback in the past had not paid off:

  • (18) P: Sometimes I've used all my feedback to write an essay, and I've gone ‘Right, I can't do this, I can't do this, I have to do this. This went well, this I like, they definitely loved this thing'. And then … and then I don't get the mark I think, like, that should reflect what changes I've made … in relation to the marks and feedback I've had before.

Students frequently report frustration about the transferability of feedback to future work (Gleaves, Walker, and Grey 2008), and this frustration can drive ‘behavioural dis-engagement’ (Handley, Price, and Millar 2011, 553). Participants’ sense of disempowerment here seemed also to stem from the modular structure common to many degree courses, whereby individual assignments are perceived as unrelated. Many talked of helplessness in using feedback for future assignments that differ from those already completed, or that will be marked by a different person:

  • (19) C: It's very subjective, depending on the lecturers you go to and the markers. So now it's, like, there's no point in even … well, you should look at your grade, but just take it with a pinch of salt.

  • (20) Z: I mean, if I was to write an essay for the same person, I would follow their feedback and just their feedback, cos I know that's what they want.

    Y: Yea.

    Z: Rather than doing it for myself and applying it to other essays because I don't think it's always applicable.

These quotations support claims that by focusing heavily on the subject content of assignments, many students fail to see their broader intended functions for skill development (Orsmond and Merry 2011). Participants appeared to desire immediate transfer of feedback, in terms of direct applicability and effect, rather than viewing the longer-term potential for feedback to develop academic literacy (Price et al. 2010). One prediction that deserves further scrutiny is that these unreasonable expectations would be less apparent within more dialogic feedback environments, and when feedback is designed to provide an ipsative function.

Participants also exhibited a sense of disempowerment in using feedback that concerned skills they perceived as fixed, and not modifiable:

  • (21) K: I'm not sure how far you could go to remedying that, because I think part of it is writing style. Which is something, erm, you know, that is difficult to alter.

  • (22) EF: Maybe I could use [feedback] better, but I don't know. I just feel like I kind of do my essays a certain way now, and I don't really know how to get out of that. Even if they give you pointers, I'll still end up … I'll still end up doing it in the same way.

If students believe a particular skill is fixed, then this belief will discourage them from proactively using their feedback to improve this skill. Also regarding agency, participants were clear that even when they understood their feedback, they did not always know how to translate it into action:

  • (23) S: I'd find it useful to know, um, if I … that um … to improve the sentence structure and language used, but I wouldn't necessarily know how.

  • (24) F: I think [lecturers] assume that you know what to do with it.

Students are rarely trained in how to use feedback (Burke 2009), and knowing what needs to be developed is quite different from knowing how to achieve that development. A tension therefore emerged concerning who holds the responsibility for translating feedback into action points:

  • (25) V: I always … always get feedback in my work that, like, ‘Your arguments aren't clear'. But it's like, ‘Okay, I understand that, like I've heard this … this comment a million times, but tell me where and how'. Like, say, ‘Maybe do this instead'.

  • (26) G: Yea like, tell me how to change a few things. I mean, we were commenting about the grammar, give me a course … a grammar course. Don't just tell me, and let me do it on my own.

  • (27) CD: Yea [the feedback] should be more clear and like a statement, rather than a question. Keep you wondering, I don't wanna wonder. Just tell me what to do!

Whereas lecturers might view students as responsible for deciding how to action their feedback, the data show that many students see it as the lecturer's responsibility to spell out what they should do next (Bing-You, Paterson, and Levine 1997).

3.4. Students’ volition to scrutinise feedback and implement strategies to use it

As noted earlier, for feedback to be put into action, students have to be ‘ready to engage’ (Handley, Price, and Millar 2011); our data show that this implementation can be impeded by a lack of proactivity and receptiveness. Many participants seemed aware that they need to be proactive in seeking and using feedback. However, whereas some showed eagerness to be proactive, the majority suggested that they prefer being reactive:

  • (28) A: I haven't actually gone to see them.

    B: Yea, that's the thing. That's the thing, I haven't gone to see them, and so I don't … yea.

    A: Maybe that's what you need to go and do.

  • (29) CD: Yea, I think a lot of people just can't … literally out of laziness, they just can't be bothered to go and find out their … their office hours, and then go see them and talk it through.

  • (30) O: I'm sure, like, if you really wanted to, you could go and see [Lecturer A]

    M: Mm.

    O: or something, but that would have to be instigated by you, probably.

Participants showed awareness that they often lack the volition to use helpful strategies. In some cases, a sense of apathy can limit openness to the feedback altogether:

  • (31) Q: Like, if you're someone who does actually sit down and look at it, and actually take it in, then you're gonna find it really useful. But I, personally, just put it in a folder. [Laughs]

  • (32) N: I think it'd be useful, but I probably wouldn't use it … as much as it would be useful.

    M: Yea I … I think I'd be the same.

  • (33) H: I suppose it's just the time … time-consuming to go through and try and read through it all and then find it for yourself. So, that'd be … I would … more likely to use it if someone just said to me, ‘Here's your bullet points of what you need to do’.

If students lack the volition to use feedback, then academics have limited ability to facilitate engagement. Students must have a ‘commitment to change’ (Bing-You, Paterson, and Levine 1997, 43) and, in contrast to many of these participants, academics often place the responsibility for using feedback primarily with students (Hernández 2012). It was also evident from our data that students’ volition to use feedback requires a state of receptiveness. Defensive behaviour, such as avoiding particular aspects of feedback, seemed to affect participants’ receptiveness. For example, participants seemed to have pre-existing ideas of what constitutes a good grade, and their volition to even look at written feedback often depended on how their achieved grade aligned with this standard:

  • (34) L:  … it does kind of depend on the grade, how … how much you use the feedback. Obviously, even if you're getting like ninety, you should still use feedback they give you, cos they've obviously given you some, but I think – I probably don't use it as much as you should.

  • (35) I: But I think most students, you get … you get your coursework back, you look at the mark. If it's really good, you probably won't read the feedback. If it's not so great, you probably will look at it, and that's about it.

This grade focus – whereby students often ignore the feedback altogether if they receive a ‘good’ grade – is well-documented (e.g. Hounsell 2007), and prevents dialogue between student and marker (Carless 2006). Participants’ engagement with feedback also seemed to depend on the valence of the comments, as some were more motivated to engage with either positive or negative feedback:

  • (36) T: If I'm honest, I don't really pay attention to the positive feedback that much, cos it doesn't tell me anything. It doesn't … doesn't say how I can improve. It's just saying, ‘Oh, well done'.

  • (37) I: I think you're more likely to ignore [negative comments]. [Laughs] To save yourself, kinda thing! Um, but if it doesn't point out any negative, you're not gonna learn, so, I think it's a balance between pointing out areas where you need to improve, whilst that to me seems quite negative. And I think that could be detrimental, to kind of, your engagement.

There is evidence that the most dramatic improvements in student work often occur after critical, rather than positive, comments (e.g. Higgins, Hartley, and Skelton 2002). Nevertheless, it is clear that grades and the accompanying narrative feedback can influence students’ sense of worth (Gleaves, Walker, and Grey 2008) and, in turn, their likelihood of engaging.

Finally, participants discussed how their receptiveness to feedback increases when they are in their ‘feedback space’ – an optimal physical and psychological environment. Some identified that the social influence or distraction associated with collecting feedback in public can prevent them engaging:

  • (38) P: You just know the people to avoid on feedback day.

    Q: Yea. And sometimes you … I've actually seen people just reading out their feedback if it's … 

    P: Yea.

    Q: … if they've got, like, a really high first.

    P: Mm.

    Q: And you just sort of think, ‘Okay. This is really demotivating'. Cos it just makes you not wanna read it.

  • (39) S: I found a couple of times, if I look at the mark and then I get distracted whilst I'm looking at feedback, I then won't bother and go and read over the feedback again. I'll just sort of leave it.

Students in Carless's (2006) study similarly spoke of discomfort when collecting written feedback alongside friends, either because of a reluctance to disclose their mark, or because they felt obliged to comfort others. This powerfully illustrates that students’ use of feedback operates within the ‘spheres of engagement’ in a student community (Handley, Price, and Millar 2011).

4. General discussion

For feedback to influence learning and development, it must be used (Jonsson 2013; Price et al. 2010), yet engaging well with feedback can be extremely challenging. Our data highlight various barriers that students believe prevent them from using feedback effectively; these were underpinned by four broad psychological processes, which we labelled awareness, cognisance, agency, and volition. Identifying such barriers and processes is important, because it allows us to foresee the kinds of interventions that might help students to take a share of responsibility for their own academic development. It is clear that some degree of responsibility-taking on students’ part would in most cases be beneficial, or even fundamental, to removing these barriers. Although this study was not designed to identify or generate evidence in support of specific interventions, we will consider here some approaches that the barriers we identified might point towards.

In a general sense, learning to take responsibility for using feedback effectively is a difficult but vital skill that underpins the development of self-regulation (Nicol and Macfarlane-Dick 2006). In this study, most participants were aware that feedback is intended to help them improve, and recognised that improvement can only happen through acting upon the feedback. However, because students typically desire feedback that specifies exactly what they should do (e.g. Winstone et al. forthcoming, b), educators have a responsibility to challenge these expectations, by encouraging practices that promote self-regulation rather than dependence on explicit instruction. Relatedly, participants here also described many past difficulties with decoding and understanding feedback. These difficulties could be minimised through interventions that ensure the lecturer's clarity of communication, but that also apportion responsibility to students by better preparing them to understand common academic terminology.

Participants typically appeared cognisant of appropriate strategies and available opportunities for making use of feedback. Yet they also highlighted that knowing about these strategies and opportunities is not the same as knowing how to use them effectively. In this respect, the challenge is to support students in transforming their cognisance into action, permitting them to take responsibility through interventions that make feedback-seeking both more accessible and more actively encouraged. Ensuring that students feel welcome to meet with staff to discuss feedback would be one such focus; however, other forms of dialogue, including peer feedback, are also vital, particularly when individual staff members are responsible for supporting large numbers of students.

The data suggest that students’ agency in using feedback can be impeded by a sense of helplessness, and by unrealistic expectations about how apparent and immediate the results of their efforts should be. As educators, we can take responsibility for nurturing students’ agency; for example, grading consistency between markers can be enhanced by explicitly linking feedback to assessment criteria and learning objectives (Price and Rust 1999). We can also encourage students to share this responsibility, and constructivist and dialogic interventions could foster the appreciation that self-generated goal-setting will benefit them more than being told exactly what to do (Pitts 2005). Moreover, such interventions, together with encouraging students to focus on their overall trajectory of improvement, should prevent them being demotivated by the absence of immediate pay-offs for their effort. By designing curricula in ways that emphasise coherence and continuity among assessment and learning objectives, irrespective of the specific subject content, it should become more straightforward for feedback to offer an ipsative, developmental function (Hughes 2011). Such feedback might in turn offer students greater opportunities to reflect on their trajectory of learning, and greater agency to act upon this self-reflection.

Finally, in terms of volition we observed many instances of participants’ reluctance to engage with feedback, and many attributions for this reluctance. A focus on grades is one such attribution, and it is difficult to overcome because most education systems themselves place a heavy emphasis on grades. However, educators can take their share of responsibility by making feedback comments clear and transparent in identifying actions to take. Other facets of volition, such as being in the appropriate ‘feedback space’ to engage meaningfully with feedback, could in some cases be supported through learning technologies that enable students to receive feedback in their own time, without immediate social pressures. Perhaps most importantly, students could be encouraged to take their share of responsibility if their learning environments expect, support and reward proactivity.

Together our findings indicate numerous barriers, some of which mean students ‘cannot’ use their feedback and some of which mean they ‘will not’. In the latter case, we suggest that students be supported in developing a mindset of proactive recipience, by which we mean taking the role of an active rather than passive receiver of feedback. This role, beyond simply recognising that effective feedback involves participating in dialogue, also requires students to take direct responsibility for acting upon feedback, and to appreciate the importance of being active in this way. Proactive recipience is thus part of being a self-regulated learner (e.g. Nicol and Macfarlane-Dick 2006), and the present data point towards ways in which both sides could facilitate this role. Nevertheless, before we can strive to nurture this mindset, it is important to identify and remove the former kind of barriers – the ‘cannots’. For instance, interventions that target students’ receptiveness to feedback are unlikely to reap rewards if those students are unable to first understand what their feedback even means. By focusing on the psychological processes that underlie these barriers, we have outlined a framework that could be easily translated across contexts. Educators who attempt to improve their students’ engagement with feedback should first identify not only the barriers hindering engagement, but also the processes underlying those barriers.

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplemental Data

Supplemental data for this article can be accessed at https://doi.org/10.1080/03075079.2015.1130032.



Funding

This work was supported by the Higher Education Academy [grant number GEN1024].

Notes

1 To protect their anonymity, all participants were assigned an alphabetic identifier during transcription.

References

  • Beaumont, Chris, Michelle O'Doherty, and Lee Shannon. 2011. “Reconceptualising Assessment Feedback: A Key to Improving Student Learning?” Studies in Higher Education 36: 671–87. doi: 10.1080/03075071003731135
  • Bing-You, Robert G., Jay Paterson, and Mark A. Levine. 1997. “Feedback Falling on Deaf Ears: Residents’ Receptivity to Feedback Tempered by Sender Credibility.” Medical Teacher 19: 40–44. doi: 10.3109/01421599709019346
  • Braun, Virginia, and Victoria Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3: 77–101. doi: 10.1191/1478088706qp063oa
  • Burke, Deirdre. 2009. “Strategies for Using Feedback Students Bring to Higher Education.” Assessment & Evaluation in Higher Education 34: 41–50. doi: 10.1080/02602930801895711
  • Carless, David. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31: 219–33. doi: 10.1080/03075070600572132
  • Colucci, Erminia. 2007. “‘Focus Groups can be Fun’: The Use of Activity-Oriented Questions in Focus Group Discussions.” Qualitative Health Research 17: 1422–33. doi: 10.1177/1049732307308129
  • Delva, Dianne, Joan Sargeant, Stephen Miller, Joanna Holland, Peggy Alexiadis Brown, Constance Leblanc, Kathryn Lightfoot, and Karen Mann. 2013. “Encouraging Residents to Seek Feedback.” Medical Teacher 35: e1625–31. doi: 10.3109/0142159X.2013.806791
  • Gibbs, Graham, and Claire Simpson. 2004. “Conditions under Which Assessment Supports Students’ Learning.” Learning and Teaching in Higher Education 1: 3–31.
  • Gleaves, Alan, Caroline Walker, and John Grey. 2008. “Using Digital and Paper Diaries for Assessment and Learning Purposes in Higher Education: A Case of Critical Reflection or Constrained Compliance?” Assessment & Evaluation in Higher Education 33: 219–31. doi: 10.1080/02602930701292761
  • Handley, Karen, Margaret Price, and Jill Millar. 2011. “Beyond ‘Doing Time’: Investigating the Concept of Student Engagement with Feedback.” Oxford Review of Education 37: 543–60. doi: 10.1080/03054985.2011.604951
  • Hattie, John, and Helen Timperley. 2007. “The Power of Feedback.” Review of Educational Research 77: 81–112. doi: 10.3102/003465430298487
  • Hepplestone, Stuart, Graham Holden, Brian Irwin, Helen Parkin, and Louise Thorpe. 2011. “Using Technology to Encourage Student Engagement with Feedback: A Literature Review.” Research in Learning Technology 19: 117–27. doi: 10.1080/21567069.2011.586677
  • Hernández, Rosario. 2012. “Does Continuous Assessment in Higher Education Support Student Learning?” Higher Education 64: 489–502. doi: 10.1007/s10734-012-9506-7
  • Higgins, Richard, Peter Hartley, and Alan Skelton. 2002. “The Conscientious Consumer: Reconsidering the Role of Assessment Feedback in Student Learning.” Studies in Higher Education 27: 53–64. doi: 10.1080/03075070120099368
  • Hounsell, Dai. 2007. “Towards More Sustainable Feedback to Students.” In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by David Boud and Nancy Falchikov, 101–13. London: Routledge.
  • Hughes, Gwyneth. 2011. “Towards a Personal Best: A Case for Introducing Ipsative Assessment in Higher Education.” Studies in Higher Education 36: 353–67. doi: 10.1080/03075079.2010.486859
  • Hughes, Gwyneth, Holly Smith, and Brian Creese. 2015. “Not Seeing the Wood for the Trees: Developing a Feedback Analysis Tool to Explore Feed Forward in Modularised Programmes.” Assessment & Evaluation in Higher Education 40: 1079–94. doi: 10.1080/02602938.2014.969193
  • Hughes, Gwyneth, Elizabeth Wood, and Kaori Kitagawa. 2014. “Use of Self-Referential (Ipsative) Feedback to Motivate and Guide Distance Learners.” Open Learning 29: 31–44. doi: 10.1080/02680513.2014.921612
  • Hyland, Fiona. 1998. “The Impact of Teacher Written Feedback on Individual Writers.” Journal of Second Language Writing 7: 255–86. doi: 10.1016/S1060-3743(98)90017-0
  • Jazvac-Martek, Marian. 2009. “Oscillating Role Identities: The Academic Experiences of Education Doctoral Students.” Innovations in Education and Teaching International 46: 253–64. doi: 10.1080/14703290903068862
  • Johnson, David W., and Frank P. Johnson. 1994. Joining Together: Group Theory and Group Skills. 5th ed. Englewood Cliffs, NJ: Prentice Hall.
  • Jonsson, Anders. 2013. “Facilitating Productive Use of Feedback in Higher Education.” Active Learning in Higher Education 14: 63–76. doi: 10.1177/1469787412467125
  • Kitzinger, Jenny. 1994. “The Methodology of Focus Groups: The Importance of Interaction between Research Participants.” Sociology of Health & Illness 16: 103–121. doi: 10.1111/1467-9566.ep11347023
  • Laurillard, Diana. 2002. Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies. 2nd ed. London: Routledge/Falmer.
  • Morgan, David L. 1996. “Focus Groups.” Annual Review of Sociology 22: 129–52. doi: 10.1146/annurev.soc.22.1.129
  • Nicol, David. 2010. “From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education.” Assessment & Evaluation in Higher Education 35: 501–17. doi: 10.1080/02602931003786559
  • Nicol, David J., and Debra Macfarlane-Dick. 2006. “Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31: 199–218. doi: 10.1080/03075070600572090
  • Orsmond, Paul, and Stephen Merry. 2011. “Feedback Alignment: Effective and Ineffective Links between Tutors’ and Students’ Understanding of Coursework Feedback.” Assessment & Evaluation in Higher Education 36: 125–36. doi: 10.1080/02602930903201651
  • Pitts, Stephanie E. 2005. “‘Testing, Testing … ’: How Do Students Use Written Feedback?” Active Learning in Higher Education 6: 218–29. doi: 10.1177/1469787405057663
  • Price, Margaret, Karen Handley, Jill Millar, and Berry O'Donovan. 2010. “Feedback: All That Effort, But What is the Effect?” Assessment & Evaluation in Higher Education 35: 277–89. doi: 10.1080/02602930903541007
  • Price, Margaret, and Chris Rust. 1999. “The Experience of Introducing a Common Criteria Assessment Grid across an Academic Department.” Quality in Higher Education 5: 133–44. doi: 10.1080/1353832990050204
  • Sinclair, Hazel K., and Jennifer A. Cleland. 2007. “Undergraduate Medical Students: Who Seeks Formative Feedback?” Medical Education 41: 580–82. doi: 10.1111/j.1365-2923.2007.02768.x
  • Winstone, Naomi, Corinne Huntington, Lisa Goldsack, Elli Kyrou, and Lynne Millward. 2014. “Eliciting Rich Dialogue through the Use of Activity-Oriented Interviews: Exploring Self-Identity in Autistic Young People.” Childhood 21: 190–206. doi: 10.1177/0907568213491771
  • Winstone, Naomi, Robert Nash, Michael Parker, and James Rowntree. Forthcoming, a. “Supporting Learners’ Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes.”
  • Winstone, Naomi, Robert Nash, James Rowntree, and Richard Menezes. Forthcoming, b. “What Do Students Want Most from Written Feedback Information? Distinguishing Necessities from Luxuries Using a Budgeting Methodology.” Assessment & Evaluation in Higher Education.
  • Withey, Carol. 2013. “Feedback Engagement: Forcing Feed-Forward amongst Law Students.” The Law Teacher 47: 319–44. doi: 10.1080/03069400.2013.851336