
Supporting Learners' Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes


Abstract

Much has been written in the educational psychology literature about effective feedback and how to deliver it. However, it is equally important to understand how learners actively receive, engage with, and implement feedback. This article reports a systematic review of the research evidence pertaining to this issue. Through an analysis of 195 outputs published between 1985 and early 2014, we identified various factors that have been proposed to influence the likelihood of feedback being used. Furthermore, we identified diverse interventions with the common aim of supporting and promoting learners' agentic engagement with feedback processes. We outline the various components used in these interventions, and the reports of their successes and limitations. Moreover, we propose a novel taxonomy of four recipience processes targeted by these interventions. This review and taxonomy provide a theoretical basis for conceptualizing learners' responsibility within feedback dialogues and for guiding the strategic design and evaluation of interventions.

Receiving feedback on one's skills and understanding is an invaluable part of the learning process, benefiting learners far more than does simply receiving praise or punishment (Black & Wiliam, 1998; Hattie & Timperley, 2007). Inevitably, the benefits of receiving feedback are not uniform across all circumstances, and so it is imperative to understand how these gains can be maximized. There is increasing consensus that a critical determinant of feedback effectiveness is the quality of learners' engagement with, and use of, the feedback they receive. However, studies investigating this engagement are underrepresented in academic research (Bounds et al., 2013), which leaves a “blind spot” in our understanding (Burke, 2009). With this blind spot in mind, the present work sets out to systematically map the research literature concerning learners' proactive recipience of feedback. We use the term “proactive recipience” here to connote a state or activity of engaging actively with feedback processes, thus emphasizing the fundamental contribution and responsibility of the learner (Winstone, Nash, Rowntree, & Parker, in press). In other words, just as Reeve and Tseng (2011) defined “agentic engagement” as a “student's constructive contribution into the flow of the instruction they receive” (p. 258), likewise proactive recipience is a form of agentic engagement that involves the learner sharing responsibility for making feedback processes effective.

BACKGROUND

The topic of assessment feedback has a long history in academic research, and today it is among the foremost concerns in education research and practice (Boud & Molloy, 2013a; Evans, 2013; Nicol, 2010). In the academic literature there has been a considerable emphasis on what educators should do in order to provide ideal written or oral feedback (Nicol & Macfarlane-Dick, 2006). For example, a strong and diverse evidence base has outlined factors that make feedback useable (e.g., Nicol, 2010), how it should ideally be delivered (e.g., Carless, Salter, Yang, & Lam, 2011), and what drives learners' (dis)satisfaction with the feedback they receive from educators (e.g., Weaver, 2006; Winstone, Nash, Rowntree, & Menezes, in press). This transmission-focused approach provides strong theoretical foundations for educators to develop and share effective feedback practices. However, this approach can often seem to apportion minimal responsibility to learners in the feedback process, characterizing them instead as passive recipients of advice. Indeed, although the term “feedback” is commonly used in a way that connotes passivity (Ball, 2010; Boud & Molloy, 2013a, 2013b; Parboteeah & Anwar, 2009), researchers increasingly acknowledge that “if information is simply stored in memory and never used, it is not feedback” (Orsmond, Merry, & Reiling, 2005, p. 381). As a result, a new focus is now emerging within the feedback literature, placing greater emphasis on learners' agentic engagement with feedback processes (e.g., Price, Handley, & Millar, 2011). Rather than characterizing the moment of receiving feedback as the end point of the process, it is increasingly characterized as the start point (e.g., Burke, 2009). It is important, though, to know what kinds of research are informing this shift in focus. Is the research base suitably diverse in terms of study disciplines, learner demographics, feedback sources (e.g., educators, peer-assessment, self-assessment), research methods, and data types? This was the first research question that we set out to address in the present review. The answer would provide an empirically grounded assessment of the generalizability and strength of current research evidence, which may direct future research.

The perspective that learners benefit little from being passive receivers of feedback is by no means new, and indeed research adopting this perspective has produced important empirical and theoretical developments. Butler and Winne (1995) noted that by playing active roles in the feedback process and engaging with the comments they receive, learners can develop the skills to self-regulate their learning, meaning they will not always be dependent on others for appraisal. The learner's role is also emphasized in theoretical frameworks that conceptualize feedback as a process of dialogue, rather than a one-way transmission of information (e.g., Beaumont, O'Doherty, & Shannon, 2011; Boud & Molloy, 2013a; Carless et al., 2011; Nicol, 2010).

Considering learners' role leads to a further important question: What kinds of psychological, pedagogical, or contextual factors might influence the extent to which learners engage proactively with feedback processes? This research question was the second to be tackled in the present review. When thinking about possible answers, it is useful to conceptualize the giving and receiving of feedback as a communicative event (Beaumont et al., 2011; Higgins, Hartley, & Skelton, 2001). As illustrated in Johnson and Johnson's (1994) interpersonal communication model, communication between a sender and a receiver is multidimensional. The sender (in this context, the source of feedback such as an educator or peer) is responsible for producing a message (the feedback) and for transmitting it to the receiver (the learner). In some cases receivers themselves might initiate this communication, by actively seeking information. The receiver must then decode the message and respond in a way that allows the sender to evaluate the message transmission. Of importance, various sources of “noise” can disrupt the communication; these can stem from sender processes (e.g., an educator's clarity of expression), receiver processes (e.g., a learner's openness to receiving advice), or the message and communication channel (e.g., the format or environment in which the feedback is conveyed).

Crucially, this communication framework implies equal importance of both sender and receiver in ensuring that communication occurs and is effective. In the present review, we draw upon this framework to synthesize factors that researchers have proposed to moderate proactive recipience. Although engaging effectively with feedback should in principle lead to improved learning outcomes (e.g., better grades and understanding), our intention was to ask what might influence learners' engagement with feedback and not to review the broader question of what makes feedback effective in general (see Evans, 2013; Hattie & Timperley, 2007, for reviews).

Improving Learners' Proactive Recipience

The importance of learners' engagement with feedback processes is clear, but how well do learners engage in practice? The literature highlights that whereas some learners do engage well (e.g., Higgins, Hartley, & Skelton, 2002), there are myriad examples of poor engagement, ranging from skim-reading comments (Gibbs & Simpson, 2004) to failing to collect feedback at all (Sinclair & Cleland, 2007). Such evidence leads educators to question how to nurture stronger engagement and how learners can become proactive receivers (and seekers) of feedback. It is important to note that apportioning greater weight to learners' role in the feedback process does not imply relieving educators of responsibility. Rather, if we wish to involve learners more in the feedback process, then it is useful to consider how educators might promote this involvement.

Despite many good-practice examples of how to create actionable feedback (e.g., Hysong, Best, & Pugh, 2006), there is limited information for educators on how to change learners' behavior such that they shift from being passive to active receivers and seekers of feedback. One reason is that the effectiveness of feedback is typically assessed by measuring changes in learners' grades or satisfaction, rather than changes in their behavior (Bounds et al., 2013). This is an important issue because the relation between feedback and learners' achievement is necessarily mediated by the more proximal factor of their engagement with that feedback. The lack of attention to learners' behavior in this sense leads to what Price et al. (2011) referred to as the “invisibility” of engagement (p. 882). Developing a more comprehensive understanding of how to improve this engagement could open up new avenues to understanding how to improve learners' achievement, satisfaction, and fundamental learning skills.

These considerations underpinned the third research question tackled in this review: What kinds of interventions have been tested to nurture proactive recipience, and with what success? Answering this question is key to establishing an evidence base to inform educational best-practice recommendations. However, equally important is to consider how different interventions might support proactive recipience. One way in which a systematic review might approach this question is by considering the theoretical rationales given to justify different interventions. What processes have researchers targeted in their efforts to strengthen learners' proactive recipience? This was the fourth and final research question addressed here.

Drawing the Literature Together

There is increasing consensus that to be effective, feedback must be used; yet learners' engagement with feedback processes is often poor. Research on this topic is therefore highly valuable, but it is underrepresented in the feedback literature, and somewhat disconnected. To confront this issue, here we report a systematic literature review designed to map the current state of knowledge concerning learners' proactive engagement with feedback processes, in what ways this engagement has been studied, and how it might be supported.

We know of only one prior attempt to look broadly at this kind of question. Jonsson's (2013) literature review uncovered five broad reasons why students may not engage with their feedback: (a) it may not be useful, (b) it may be insufficiently detailed or individualized, (c) it may be too authoritative in tone, (d) students may not know suitable implementation strategies, and (e) students may not understand the terminology used in feedback. As with much of the prior work, these findings offer a sense of how useable feedback can be crafted and delivered, but minimal information about the role or relevance of learners' behavior. The review was further restricted by a narrow focus on feedback provided by educators (i.e., it excluded feedback from sources such as peers, and feedback produced by the learners themselves through self-assessment and self-monitoring), solely in Higher Education (HE) contexts, and by a limited search strategy that involved “snowballing” from a small initial sample of articles published in 2009–2010. These factors potentially limit our appreciation of the extent of current knowledge on this topic, and thus the utility and application of the findings. In short, there remains a need for a more comprehensive and systematic review of this literature, paying greater attention to learners' behavior and recipience processes rather than focusing solely on feedback content and delivery.

To summarize, the aims of this review were fourfold. First, we aimed to describe the characteristics of this literature, in terms of the kinds of research and analytic methods used to study proactive recipience and the kinds of learners, learning environments, and sources of feedback studied. Second, we aimed to synthesize current theory and understanding of factors that might promote or inhibit learners' engagement with feedback processes. Third, we aimed to identify pedagogical initiatives and interventions for nurturing learners' engagement with feedback processes, and to examine reports of these initiatives' success and limitations. Fourth, we aimed to scrutinize and codify the recipience processes that these initiatives have targeted.

METHOD

In early 2014 we searched eight bibliographic databases: the Web of Science Core Collection, Scopus, PsycINFO, the Education Resources Information Center, the British Education Index, the Australian Education Index, the International Bibliography of the Social Sciences, and the Applied Social Sciences Index and Abstracts. Our initial search looked for outputs published in English for which the title, abstract, and/or keywords contained (a) both the terms educat* and assess*; (b) at least one of the terms feedback, feed-back, feedforward, or feed-forward; and (c) at least one of the terms student*, trainee*, pupil*, learner*, *graduate*, teacher*, lecturer*, professor*, instructor*, or tutor*. We included journal articles, reviews, surveys, book chapters, and conference proceedings but excluded books, notes, letters, editorials, dissertations, conference reviews, and reports. Because our aim was to map theoretical proposals in this literature as well as data-driven findings, we included nonempirical as well as empirical work. The database Scopus displayed only the first 2,000 hits; therefore in this database we sorted the hits by “relevance” and included the 2,000 most relevant.
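For illustration, the three criteria combine into a single Boolean query of roughly the following shape. This is our own reconstruction in Scopus-style TITLE-ABS-KEY syntax, not the authors' verbatim search string; the exact field codes differ across the eight databases, and some databases do not support leading wildcards such as *graduate*.

```
TITLE-ABS-KEY ( educat* AND assess* )
  AND TITLE-ABS-KEY ( feedback OR feed-back OR feedforward OR feed-forward )
  AND TITLE-ABS-KEY ( student* OR trainee* OR pupil* OR learner* OR *graduate*
                      OR teacher* OR lecturer* OR professor* OR instructor* OR tutor* )
```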

After removing duplicates obtained from more than one database, this initial search retrieved 4,862 outputs. Next we determined whether each met our inclusion and exclusion criteria. Specifically, we wished to include outputs that (a) discussed summative or formative feedback given in the context of education at any level, rather than in other contexts such as employment; (b) discussed feedback directed toward learners, rather than being provided by learners toward their teachers or professors (e.g., teaching evaluations); and (c) discussed learners' use of feedback, or its consequences for learners' behavior, rather than solely the effects on performance (although we did include publications that reported performance data alongside discussions of behavioral consequences). We included research irrespective of the source of feedback (including self- and peer-assessment) or the mode of assessment. We wished to exclude outputs wherein (a) feedback was discussed only as part of a broader intervention or teaching style; or (b) the feedback constituted simply a grade, or “correct/incorrect” responses (e.g., from multiple-choice tests). We excluded the latter kinds of output because, although grades and “correct/incorrect” feedback can be informative, in the absence of further guidance these simple kinds of feedback are not typically sufficient to be transformed into actions for improvement.

We began with a training process to ensure that our inclusion and exclusion criteria were sufficiently clear and concrete to implement reliably. To this end, the research team independently coded the first 50 hits, then through discussion of disagreements the criteria were clarified. This process was repeated with further batches of 50 hits until agreement was near perfect. At this point, one researcher scrutinized the titles and abstracts of all 4,862 hits to determine which appeared to meet these criteria. This process led us to retain 747 outputs. A second coder independently examined a random 10% of the titles and abstracts, blind to the first coder's judgments. The agreement on inclusion/exclusion between coders was 93.2% (Gwet's AC1 = .91); therefore the first coder's judgments were deemed reliable and accepted without further discussion of disagreements. Of the remaining 747 outputs, we were able to access the full text of 649 (87%); the majority of the unobtainable outputs were conference proceedings. The next step was to scrutinize the main texts to determine whether each output did indeed meet all the inclusion and exclusion criteria. As before, we conducted initial training by coding and discussing small samples of papers to identify and discuss potential problems and thereby ensure a high level of agreement. Once the agreement within these samples was satisfactory, one researcher repeated the examination process for the main text of all 649 outputs. This second stage reduced the number of documents to 168, primarily because many of the documents did not ultimately meet inclusion criterion (c), as their abstracts had implied they might. Again, a second coder examined a random 10% of the 649 outputs. Agreement was 89.2% (Gwet's AC1 = .82); therefore the first coder's judgments were accepted without further discussion.
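Gwet's AC1 is a chance-corrected agreement index that, unlike Cohen's kappa, remains stable when one category (here, “exclude”) heavily dominates. For two coders and two categories it takes the following form:

```latex
AC_1 = \frac{p_a - p_e}{1 - p_e},
\qquad p_e = 2\pi(1 - \pi),
\qquad \pi = \tfrac{1}{2}(\pi_1 + \pi_2)
```

where p_a is the raw proportion of agreement and π_k is the proportion of outputs that coder k marked for inclusion. The coder marginals are not reported here, but as a rough check of our own (using the overall title/abstract inclusion rate, 747/4,862 ≈ .154, as a stand-in for π): p_e ≈ 2 × .154 × .846 ≈ .261, so AC1 ≈ (.932 − .261)/(1 − .261) ≈ .91, consistent with the reported value.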

We extracted bibliographic and descriptive information for the 168 remaining outputs. Furthermore, we used the snowballing method by scrutinizing the references sections of those outputs that by our own judgment seemed most relevant. This snowballing identified 27 additional outputs that met our criteria. The combined 195 outputs represent the basis of our analysis (for a complete list, see Table S1 in the online supplementary material).

RESULTS

In the discussion that follows, we present the results of this literature review in three parts. The first part addresses our first research question by providing a descriptive analysis of the demographic and methodological characteristics of the reviewed literature. The second part addresses our second research question by reviewing the characteristics of learners, feedback providers, and learning environments that have been proposed in this literature as potential moderators of proactive recipience. The third part then addresses our third and fourth research questions. In that part we review the interventions that have been reported as means to improve proactive recipience, and we catalogue the recipience processes that these interventions have targeted. On the basis of that review, we present a novel taxonomy of recipience processes, and we summarize researchers' reports of the successes and limitations of these interventions.

Part 1: Descriptive Analysis of the Literature

Almost half of the 195 papers included in our review were published by lead authors based in the United Kingdom (n = 94, 48.2%). The remainder were led by authors from Australia (n = 26), the United States (n = 22), the Netherlands (n = 11), Canada (n = 5), Hong Kong (n = 5), South Africa (n = 5), Ireland (n = 4), New Zealand (n = 4), Belgium (n = 3), Finland (n = 3), Spain (n = 2), Sweden (n = 2), Taiwan (n = 2), Cyprus (n = 1), Germany (n = 1), Greece (n = 1), Israel (n = 1), Jordan (n = 1), Norway (n = 1), and Sri Lanka (n = 1). Although all the papers reviewed were published between 1985 and 2014, there has been a recent surge of interest in this topic, and the vast majority of papers (174 papers, 89%) were published in or after 2005.

Most papers (81.5%) contained some form of empirical data. The remaining 18.5% were theoretical or discussion papers that contributed to our understanding of factors that might potentially influence proactive recipience and are therefore discussed only in Part 2 of this analysis. Among the 159 empirical papers, the sample size ranged from 4 to 2,273 (Mdn = 62). Eighty-eight papers sampled only undergraduates (55%), whereas 11 sampled only postgraduates (7%) and 17 (11%) sampled university students without specifying their level of study. An additional 13 studies (8%) involved students in medical courses who were not explicitly identified as undergraduates or postgraduates. Only nine studies sampled secondary/high school students (6%), and only two sampled elementary school students (1%). Finally, two papers sampled teaching staff only (1%), and 15 papers (9%) sampled two or more of the groups described. Two studies (1%) did not state their participants' educational level. Clearly, this literature as a whole teaches us far more about proactive recipience among learners in HE than among learners at any prior stage of education: an important point of which readers should remain mindful when considering the empirical evidence described next.

Within these empirical papers, participants from many different study disciplines were represented. Learners from Social Sciences disciplines were the focus in 39 papers (25%), those from STEM disciplines in 36 papers (23%), and those from Health and Social Care disciplines in 35 papers (22%). Arts and Humanities were less well represented, with only 12 papers covering these disciplines (8%). A further 20 papers (13%) focused on more than one discipline type, and the remaining 17 papers (11%) did not focus on a specific discipline. Most papers (69%) did not give clear information about the gender of their participants. Of those that did (excluding papers that had only teacher participants), 28 studies contained more female than male participants, 18 contained more male than female participants, and three reported even numbers of male and female participants.

Most of the 159 empirical papers focused on feedback as provided by an educator, such as a teacher or professor (81%). However, some focused on different sources of feedback or on multiple sources (and hence the percentages sum to greater than 100%). Specifically, several focused on the process of giving (16%) or receiving (9%) peer-feedback. Only 13% focused on self-feedback processes, and 4% focused on the receiving of computer-automated feedback (which was of course typically coded/prepared by an educator).

What kinds of research methods have been used to study this topic? Many of the studies used more than one method, and so again the percentages add to more than 100%. The most common methods were surveys eliciting either open-ended or Likert-style agreement responses. These surveys were included in 55% of the 159 empirical papers. Many studies used focus groups (23%) and/or one-to-one interviews (21%) with participants. In total, 7% of studies used a psychometric approach. Our search uncovered eight papers (5%) that used quasi-experimental methods and seven (4%) that involved true experimental methods. A total of 32% used quantitative research methods not otherwise specified (e.g., analyzing test scores, or usage statistics from online feedback systems), and 21% used qualitative methods not otherwise specified (e.g., analyzing participants' written reflections).

Finally, what kinds of analytic methods were applied in these studies? The majority involved more than one form of analysis. A total of 26% reported quantitative tests of difference on particular outcome variables (e.g., t tests, analysis of variance), and 17% reported tests of association (e.g., correlation, regression). A substantial proportion (19%) reported content analyses. Many papers reported basic descriptive statistics on portions of their data (e.g., frequencies of participants) without subjecting these data to inferential analyses (52%). The most common qualitative analytic approach was thematic analysis (9%), whereas other specified qualitative approaches were more rarely used (8%; e.g., grounded theory). Of interest, 43% of papers reported qualitative data without specifying which analytic method was used.

In sum, this descriptive analysis shows that the issue of proactive recipience is enjoying increasing attention in educational research, thus underlining the timeliness and importance of the present review. Whereas the empirical research draws upon diverse research methods, a preponderance of studies focused on learners' views about their use of feedback, gathered via surveys and qualitative interviews, and far fewer studies assessed learners' actual behavior (by our judgment, just 19% of the empirical papers). Moreover, the participants were most typically undergraduate students and based in English-speaking countries. Our focus solely on research published in English likely adds to the latter bias, but the underrepresentation of learners from outside of HE is more surprising, and we consider this bias in the Discussion section.

Part 2: What Factors Might Influence Proactive Recipience?

Our next aim was to produce a narrative review of potential moderators of proactive recipience. To do so, we reviewed a subset of 90 papers from our review that either contained no empirical data (i.e., theoretical and discussion papers, n = 36), or that did contain empirical data but reported no direct intervention on learners' proactive recipience (n = 54). In short, these papers contained useful observations to support the potential relevance of different factors but did not report on the outcomes of any specific attempt to enhance learners' proactive recipience.

The narrative review approach is admittedly limited because it does not permit full and objective coverage of the entire literature. Nevertheless, we took this approach because both the literature itself and the number of potential moderators proposed were large, making full coverage of every potential moderator unfeasible. Rather than conducting thematic or similar analysis of these 90 papers to extract themes, we instead divide our review of these papers into four a posteriori subsections that correspond loosely to the four elements of Johnson and Johnson's (1994) interpersonal communication model: receiver variables, sender variables, variables that pertain to the message, and those that relate to the learning context. These subsections serve purely to organize and structure our overview, rather than to imply any groupings of data based on formal analysis. To preview the findings described next: although a large range of possible moderators has been proposed, for the vast majority the evidence remains limited in quantity and/or strength. The combined evidence base gives us strong cause to believe that each of the four elements (receiver, sender, message, and context) substantially moderates proactive recipience, but we recommend a cautious reading of the possible moderators within these four elements.

Characteristics and Behavior of the Receiver

Unless learners are motivated and equipped to use feedback productively, they may have limited potential to occupy a central role in the feedback process (Carless et al., 2011). Some researchers argue that a prerequisite for learners to implement feedback effectively is for them to understand the purpose of feedback (e.g., Nelson & Schunn, 2009). In this literature, learners in HE were characterized as having a relatively narrow understanding of the purpose of feedback, recognizing that it should facilitate their improvement (Bailey & Garner, 2010; Bevan, Badge, Cann, Wilmott, & Scott, 2008; Price, Handley, Millar, & O'Donovan, 2010) but recognizing less their own responsibility for actualizing this improvement (Price et al., 2011). In one case this was true even beyond HE. In Peterson and Irving's (2008) study, focus group data revealed that the secondary school students often externalized responsibility by blaming their teachers when they failed to improve.

Several papers from HE contexts, and a couple from pre-HE, focused on variations in the extent to which learners act upon feedback (Hyland, 1998) and are motivated to do so (Havnes, Smith, Dysthe, & Ludvigsen, 2012)—what we might call their “commitment to change” or “readiness to engage” (Bing-You, Paterson, & Levine, 1997; Handley, Price, & Millar, 2011). At a basic level, Turner and Gibbs's (2010) research points to possible gender differences. In their study, female undergraduates from different disciplines were more likely than male students to agree with survey items such as “I used the feedback I received to go back over what I had done in my work.” Other researchers focused on learner identity variables that could influence this commitment to change. For example, Baadte and Schnotz (2013) demonstrated that upon receiving feedback, German fifth graders whose academic self-concept was positive (i.e., those who are self-assured of their academic abilities) increased the self-reported effort they invested in learning, whereas those with negative self-concepts did not. Similarly, Handley et al. (2011) theorized that learners with higher self-efficacy (i.e., a greater belief in their ability to bring about desired outcomes) might be more willing to expend effort on engaging with feedback.

Another group of studies from HE contexts focused on learners' academic skills. For example, implementing feedback requires skilled self-regulation, and in principle those learners who are superior self-regulators should therefore have the potential to make better use of feedback (Nicol & Macfarlane-Dick, 2006). This self-regulation has been linked with learners' levels of achievement. In focus groups with biological science undergraduates, for instance, high-achieving students described engaging in greater self-regulatory behavior when receiving feedback, as compared with low-achieving students (Orsmond & Merry, 2013). In particular, higher achievers reported engaging in self-assessment and in setting themselves overall targets for improvement. In contrast, lower achievers reported that they tended to simply read the feedback several times and did not typically use it for self-assessment or to plan for future work. A study by Bounds et al. (2013), however, found conflicting results. In that study, medical students were prompted to generate written learning goals after receiving feedback. Analysis of these goals revealed that the high-achieving students were in fact less likely to have incorporated their feedback into their goals than were low-achieving students. Clearly the relation between achievement and proactive recipience is not always positive, but the mediators of this relation are unclear. Prior experience might be one such mediator. For example, learners may conclude from individual experiences that implementing their feedback does not pay off by improving their grades (Price et al., 2010), which Handley et al. (2011) theorized can lead to “behavioral disengagement” with subsequent feedback.

A body of evidence indicated that learners in pre-HE and HE may often focus heavily on the grades they receive, at the expense of their engagement with the accompanying qualitative feedback (Bailey & Garner, 2010; Crisp, 2007; Hernández, 2012; Higgins et al., 2001; Peterson & Irving, 2008). Several papers proposed effects of “expectation discrepancy,” whereby learners' engagement with qualitative feedback depends on the match between their expected and actual grades. Some researchers theorized that a learner's disappointment with a grade typically leads to higher levels of engagement (Hattie & Timperley, 2007), a view espoused by the undergraduate students in Poulos and Mahony's (2008) focus groups. Yet others theorized that the opposite is sometimes true and that disappointing grades sometimes lead learners to “kill” the feedback message in order to protect their positive self-view (MacDonald, 1991). Clearly, individual differences must play a role in determining whether grade satisfaction creates engagement or disengagement with qualitative feedback; however, this role is not well specified.

Characteristics and Behavior of the Sender

HE researchers have suggested that learners' perceptions of the people who give them feedback might shape the extent to which they are willing to engage with and act upon the feedback. In a study involving medical students, Bing-You et al. (1997) discussed several dimensions of the perceived credibility of the message sender, including perceptions of their characteristics (level of knowledge, experience) and behavior (attention, interpersonal skills). In interviews exploring feedback use and acceptance, medical residents described how they would be unlikely to engage with feedback if they believed the sender lacked these signals of credibility. Eva et al. (2012) obtained similar findings in their focus group study: Undergraduate and postgraduate students judged feedback as more accurate, and claimed they were more likely to use it, if it originated from an apparently credible source. In short, learners may need to trust the source of feedback before they will be prepared to act on it (Boud & Molloy, 2013b; Carless, 2006; Holmes & Papageorgiou, 2009). Theoretical contributions to this literature proposed that the imbalance of power between senders and receivers might force learners to adopt passive roles in the feedback process (e.g., Jonsson, 2013; Yang & Carless, 2013). Koen, Bitzer, and Beets's (2012) focus groups with final-year undergraduates raised the suggestion that this power differential can be communicated through gestures, actions, and facial expressions, and that learners' engagement with feedback can be limited when these signals convey a negative or indifferent attitude.

Characteristics of the Message

Quality assurance surveys might indicate that the key to improving learners' satisfaction with feedback is to increase the quantity delivered; however, some learners in HE report feeling overwhelmed by large amounts of feedback (Nicol & Macfarlane-Dick, 2006; Orsmond et al., 2005). Quality, in this case, should be more important than quantity, even though in at least one study no substantial relation was found between medical residents' ratings of the quality of their feedback and the extent to which they implemented it in their learning goals (Bounds et al., 2013). This particular finding notwithstanding, Nicol and Macfarlane-Dick (2006) theorized that high-quality feedback—through clarifying what good performance entails and providing opportunities to close the gap between current and desired levels of performance—influences learners' ability to self-regulate, which we noted earlier as a crucial determinant of feedback use.

At the most basic level, feedback is unlikely to be used effectively if it is unclear or insufficiently detailed (Beaumont et al., 2011; Burke, 2009; Jonsson, 2013). Moreover, which aspects of the work are commented on might also affect pre-HE and HE learners' engagement with their feedback. For example, teacher-education students who responded to Dowden, Pittaway, Yost, and McCarthy's (2013) survey claimed not to typically make use of feedback that focuses heavily on surface features of the assessed work, such as spelling and grammar (see also Hattie & Timperley, 2007). Theoretical contributions to this literature endorsed this perspective, suggesting that feedback is more likely to be used if it provides corrective advice, rather than only a judgment of whether the assessed work is “right” or “wrong” (Nicol & Macfarlane-Dick, 2006). Indeed, learners in several of the studies reported a strong preference for feedback that directly identifies the issues to be addressed (Bing-You et al., 1997; Havnes et al., 2012; Koen et al., 2012; Robinson, Pope, & Holyoak, 2013). A study by Nelson and Schunn (2009) provided some evidence for these effects. Those researchers directly compared the first drafts of essays written by history undergraduates to their second drafts following critical feedback. They found that students were more likely to put their feedback into practice when the problems had been clearly located in the essay, solutions were proposed, and a summary was presented.

It is noteworthy that pinpointing errors constitutes task-specific feedback—focusing on what has been done, rather than what could or should be done in future. Several contributors from HE theorized that future-oriented “process feedback”—especially feedback regarding the development of skills—has greater utility than does task-specific feedback (Carless, 2006; Hattie & Timperley, 2007; Norcini & Burch, 2007), whereas some suggested that a balance between task-specific feedback and process feedback is ideal (Parboteeah & Anwar, 2009; Sadler, 2010).

As well as the content and focus of the advice given, nuances in the wording might also influence learners' use of feedback. Ideas that emerged from the HE papers in this review included that feedback is unlikely to be acted upon if its tone is perceived as unmotivational (Hernández, 2012), unconstructive (Blair, Curtis, Goodwin, & Shields, 2013), or insensitive (Koen et al., 2012). Indeed, Schartel (2012) theorized that feedback that focuses on the person rather than on the work itself can lead to a decrease in self-efficacy, a variable that may itself predict the quality of learners' engagement with feedback, as previously noted. In this respect, the positive versus negative framing of feedback is widely discussed in this literature. For example, university students in Eva et al.'s (2012) focus groups reported that feedback has greater utility when positive comments give them a confidence boost. In another focus group study with medical students, Murdoch-Eaton and Sargeant (2012) found that the junior students seemed to engage more with feedback that was positive in tone, whereas the senior students seemed less dependent on the confidence boost gained from positive feedback.

Learners may not be able to engage with feedback at all when it is conveyed in the tacit language that educators often employ (Hounsell, 2007; Sadler, 2010). As such, difficulties in implementing feedback might arise when the sender's intended meaning is not the same meaning interpreted by the receiver (Bailey & Garner, 2010; Nicol, 2010). For quality assurance and transparency purposes, HE educators commonly use the language contained within formal grading policies and grade descriptors as the basis of their feedback. Crucially, though, several of the reviewed outputs concurred that learners often feel “bamboozled” by academic terminology (Dowden et al., 2013; Higgins et al., 2001; Jonsson, 2013; Parboteeah & Anwar, 2009; Weaver, 2006). University students across varying disciplines report that verbal feedback can help with the decoding process (Blair et al., 2013; see also Nicol & Macfarlane-Dick, 2006).

Characteristics of the Context

Several characteristics of the learning environment and curriculum also emerged, almost exclusively from research in HE contexts, as potential influences on learners' proactive recipience. For example, many papers emphasized a need to promote opportunities for face-to-face dialogue and peer-feedback activities (Blair et al., 2013; Koen et al., 2012; Orsmond, Maw, Park, Gomez, & Crook, 2013; Orsmond et al., 2005). One interesting perspective that emerged from survey studies was that learners believe they receive insufficient training in using feedback. For example, in a survey by Bevan et al. (2008), only 42% of 1st-year biological sciences undergraduates agreed that they had received adequate guidance on how to understand and use feedback. In another survey, only half of business and design undergraduates agreed that they had received guidance of this sort, the majority of which was gained during their pretertiary education (Weaver, 2006).

Assessment and curriculum design may play important roles in promoting or inhibiting proactive recipience (Evans, 2013; Yang & Carless, 2013). The common modular structure of many education programs was one concern that arose in the HE literature. In such programs, material on one topic is covered in depth, and learners' understanding is assessed toward the end of the module before moving on to a new, often unrelated, topic. This structure, some researchers argued, can inhibit learners' application of feedback to subsequent assessments (Holmes & Papageorgiou, 2009; Jonsson, 2013; Orsmond et al., 2005; Price et al., 2011). For instance, in Taylor and Burke da Silva's (2014) survey, university students from Humanities, Education, Law, and Biology disciplines differed in the extent to which they believed their feedback was useable. Biology students reported receiving the most-useable feedback, which the authors attributed to the greater overlap between consecutive assessments in this discipline, affording students opportunities to directly put their feedback into practice. Furthermore, in modular programs, assessments from different modules are often graded by different people. This might limit opportunities for ongoing dialogue regarding learners' development in response to prior feedback, especially given that learners often perceive the expectations to differ widely between graders (Blair et al., 2013; Robinson et al., 2013).

The timing of feedback delivery might also influence the extent to which it is used. According to the undergraduates in Poulos and Mahony's (2008) focus groups, when work is submitted toward the end of a module, this often means that any subsequent feedback seems “irrelevant” to them and cannot be acted upon constructively. Several other theoretical and empirical contributions provided an apparent consensus that when learners have to wait a long time for feedback, they typically engage with it less once it does arrive (Blair et al., 2013; Hernández, 2012; Koen et al., 2012; Nicol & Macfarlane-Dick, 2006; Yang & Carless, 2013).

Finally, institutional policies may indirectly affect HE learners' use of feedback. One example is the use of standardized checkbox pro formas. Although such forms can reduce the workload of providing feedback, Price et al. (2011) claimed that learners sometimes infer that these pro formas signify educators' disinterest and unwillingness to expend effort in giving feedback, which they argue could lead learners to disengage from their feedback.

Part 3: What Interventions Have Been Tested, and What Processes Have They Targeted?

The final part of our analysis focuses on the 105 papers not included in Part 2, all of which detailed the outcomes of empirical interventions or initiatives designed to shape learners' behavior in response to feedback. We examined these 105 papers and coded them in two distinct ways to address our two remaining research questions. Specifically, to address our third research question (i.e., to identify interventions for nurturing proactive recipience, and reports of their successes and limitations), we categorized the different components that these interventions have involved, and we scrutinized researchers' formal and informal accounts of the outcomes. To address our fourth research question (i.e., to identify and codify the recipience processes targeted), we coded researchers' implied or explicit rationales for their interventions. These two research questions are related, because it would be valuable to learn which recipience processes have been targeted by which kinds of interventions. Therefore, for ease of explanation, we first report the coding of recipience processes before describing the interventions themselves.

SAGE: A Taxonomy of Recipience Processes

Our review did not uncover any existing theoretical frameworks for categorizing proactive recipience processes. We therefore generated a data-driven taxonomy by examining all of the stated or implied rationales that the various authors gave for their interventions. Through discussion, we identified common rationales and used them to define and iteratively refine a coding framework. Many of the rationales overlapped in terms of the higher-order skills and processes that they appeared to target, and so through discussion we organized individual rationales into thematic clusters, which eventually formed our final coding framework of four distinct recipience processes: Self-appraisal, Assessment literacy, Goal-setting and self-regulation, and Engagement and motivation (SAGE), defined as follows.

Self-appraisal

Self-appraisal is defined here as the process of making judgments about oneself, one's traits, or one's behavior. Note that this is distinct from making academic judgments about one's work, which is discussed next. Self-appraisal should in principle support proactive recipience by enabling learners to become active agents in assessing their own malleable strengths and weaknesses, reducing reliance on the educator as an authoritative source of judgments (Quinton & Smallbone, 2010). Furthermore, self-appraisal can also help learners develop a questioning approach to their learning (Moon, 2002) and support the transfer of learning (Quinton & Smallbone, 2010), both of which should support proactive recipience.

Assessment literacy

Being “literate” in a given domain requires an individual to possess relevant knowledge, skills, and competencies. Assessment literacy is defined here as the processes of understanding the grading process and of applying this understanding to make academic judgments of one's work and performance. Assessment literacy should support proactive recipience by enabling the learner to (a) understand the relation between assessment and learning, and what is expected from him or her; (b) appraise one's own and others' work against implicit or explicit grading criteria; (c) understand the terminology and concepts used in feedback; and (d) know suitable techniques for assessing and giving feedback, and when to apply these techniques (Price, Rust, O'Donovan, Handley, & Bryant, 2012, pp. 10–11).

Goal-setting and self-regulation

Goal-setting is defined here as a process of explicitly articulating desired outcomes, such as achieving an A grade on the next assignment, or demonstrating better evidence of critical thinking. Fulfilling these desired outcomes typically requires a learner to adopt goal-directed behavior, such as increasing the time they spend studying, or discussing their assignment with a professor. Therefore, goal-setting contributes to the more general skill of self-regulation, defined here as an ongoing process of monitoring and evaluating one's own progress and strategic approaches to learning. Self-regulation involves learners continually updating the strategies they adopt or the resources they rely upon, in response to changes in their ongoing goals, needs, and abilities after they receive feedback. Goal-setting and self-regulation should support proactive recipience by enabling the learner to articulate areas of their skill base that require development, to translate these goals into action plans, and to review and adjust their behavior accordingly.

Engagement and motivation

The final construct, Engagement and motivation, is defined here as being enthusiastic about and open to receiving performance information. This recipience process first requires a state of pre-engagement involving being committed to change and develop: what Handley et al. (2011) called “readiness to engage.” Second, it involves actually paying attention to the feedback and being prepared to consider it, take it on board, and relate it to one's own process of learning (Price et al., 2011). Engagement and motivation should support proactive recipience by enabling learners to want to read and understand their feedback.

Two of the present authors jointly classified each of the papers into one or more of the four SAGE categories, and a third author independently classified 21 of the papers (20% of the subset). The three coders assigned identical coding for 17 of these papers (81%). Several of the papers were given more than one classification: The first two coders assigned 28 classifications to the 21 papers, whereas the third coder assigned 29 classifications. Of these, 27 were identical between coders. Given the high levels of agreement, the classifications reported next are those of the first two coders, and we did not discuss disagreements further. Note that these four processes undoubtedly draw upon shared cognitive and metacognitive skills such as the ability to reflect; nevertheless, our coding demonstrated support for the four being distinct. Specifically, for each of the 105 intervention papers, we can treat each of the four SAGE processes as “absent” (coded as 0) or “present” (coded as 1). Substantial overlap between any pair of SAGE processes would then be indicated by a strong and positive point-biserial correlation between the coding variables. However, the correlations for all pairs of SAGE processes were either negative or only very weakly positive (rs ranged from –.43 to .12). Put simply, no two SAGE processes co-occurred systematically within researchers' rationales.
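As a concrete illustration of this overlap check, the following sketch (ours, using placeholder random data rather than the review's actual coding sheet) computes the pairwise correlations between binary SAGE coding variables. With two dichotomous variables, the point-biserial coefficient reduces to the Pearson correlation computed on the 0/1 codes (i.e., the phi coefficient):

```python
import numpy as np

# Hypothetical 0/1 coding matrix: one row per intervention paper (105 here),
# one column per SAGE process; 1 = the process appears in the paper's rationale.
# Placeholder data only -- the review's real coding sheet is not reproduced here.
rng = np.random.default_rng(0)
coding = (rng.random((105, 4)) < 0.35).astype(int)

labels = ["Self-appraisal", "Assessment literacy",
          "Goal-setting/self-regulation", "Engagement/motivation"]

# For two binary variables, the point-biserial correlation equals the
# Pearson correlation on the 0/1 codes (the phi coefficient).
for i in range(4):
    for j in range(i + 1, 4):
        r = np.corrcoef(coding[:, i], coding[:, j])[0, 1]
        print(f"{labels[i]} x {labels[j]}: r = {r:+.2f}")
```

Strong positive pairwise values on this check would suggest redundant categories; the weak or negative values reported above are what support treating the four processes as distinct.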

Intervention Components

Scrutinizing the interventions that were reported across this literature led us to distinguish 14 intervention component categories, plus a minor category of “other.” Many of the reviewed studies used two or more intervention components in conjunction; for example, creating an online repository of feedback involved both (a) a portfolio and (b) technology. As such, any kind of intervention that educators might propose could comprise one or more of the individual components defined in Table 1.

TABLE 1 Descriptions of Each Intervention Component Type Observed in the Systematic Review

As reported in Part 1, the studies reviewed were diverse in terms of research methods and analytic approaches, as well as in terms of reporting standards. Indeed, a large proportion of the evidence on different interventions came only from descriptive analysis and from anecdotal reports from learners and from the researchers themselves. This meant that it was not plausible to conduct a quantitative meta-analysis or qualitative meta-synthesis to compare the effects of different intervention components. Instead, we present basic narrative summaries for each intervention component, simply illustrating the types of research evidence that exist in terms of reported successes and limitations. We also comment briefly on which of the SAGE recipience processes were targeted by each intervention component; these data are reported in full in Table 2. To draw attention to some conceptual similarities between the activities used in the different intervention components, we report them (with the exception of the “other” category) in four clusters, as follows.

TABLE 2 Number of Papers (From a Subset of 105) Involving Each Type of Intervention Component, and Targeting Each of the Four SAGE Recipience Processes

Internalizing and Applying Standards

Several of the intervention components shared a common activity of encouraging pre-HE and HE learners to become more familiar with the expected standards against which they should learn to appraise their own performance, and/or to practice applying those standards to gain insight into how somebody else might do the same.

Peer-assessment

In terms of the SAGE processes, interventions involving peer-assessment, as defined in Table 1, were often designed to target self-appraisal and assessment literacy; however, in some cases the reported rationale involved enhancing learners' motivation to engage with feedback. There was self-report evidence that many learners see the benefits of providing peer-assessment (Al-Barakat & Al-Hassan, 2009; Moore & Teather, 2013). Furthermore, across different studies using focus groups and other self-report methods, undergraduate and graduate students have reported positive outcomes of engaging in peer-feedback, including an improved ability to reflect (Al-Barakat & Al-Hassan, 2009), to take others' perspectives on their assignments (McDonnell & Curtis, 2014; Moore & Teather, 2013), and a better appreciation of grading criteria and expectations (Defeyter & McPartlin, 2007). Several months after completing a peer-assessment intervention, for example, Cartney's (2010) undergraduates described how they remained more proactive in seeking and applying feedback.

The reviewed studies highlighted limitations of using peer-assessment, in particular that it can be time-consuming for both learners and educators (Bedford & Legg, 2007; McDonnell & Curtis, 2014; Pain & Mowl, 1996), and that learners' engagement can be limited (Bloxham & West, 2007; Gielen, Tops, Dochy, Onghena, & Smeets, 2010). More substantively, peers do not always identify flaws in one another's work and can be less likely than experts to suggest amendments (Hovardas, Tsivitanidou, & Zacharia, 2014), perhaps because they find peer-assessment difficult and report low confidence in their ability to do it correctly (Bedford & Legg, 2007; Cartney, 2010; Defeyter & McPartlin, 2007; McDonnell & Curtis, 2014; Moore & Teather, 2013). Indeed, grades awarded by peers do not always align with those of expert graders, or with those awarded by other peer reviewers (Chen, 2010; Hovardas et al., 2014; Pain & Mowl, 1996).

Self-assessment

Most of the interventions involving self-assessment, unsurprisingly, targeted HE (and in one case pre-HE) learners' self-appraisal ability. However, some utilized self-assessment as a way of nurturing learners' assessment literacy, by giving them greater insight into the grading process and clarifying expectations. In two studies, focus groups of undergraduate students reported that self-assessment improved their capacity to question their own work (Wakefield, Adie, Pitt, & Owens, 2014) and developed their understanding of educators' tacit knowledge and the criteria used for assessment (McDonnell & Curtis, 2014; Wakefield et al., 2014). Indeed, 51% of geography undergraduates in Pain and Mowl's (1996) study reported via questionnaire responses that self-assessment helped their understanding of assessment, and some also described how it helped them to feel part of the assessment system.

As with peer-assessment, some researchers noted that self-assessment can be time-consuming (Bedford & Legg, 2007; Embo, Driessen, Valcke, & Van der Vleuten, 2010; McDonnell & Curtis, 2014; Pain & Mowl, 1996), that learners' engagement can be poor (Embo et al., 2010), and that the grades learners award themselves are often inconsistent with those assigned by expert graders (Chen, 2010; Pain & Mowl, 1996). Furthermore, not all learners believe that self-assessment supports their understanding of assessment (Pain & Mowl, 1996), and many feel out of their comfort zone (Bedford & Legg, 2007; McDonnell & Curtis, 2014).

Engaging with grading criteria

All papers involving engaging with grading criteria used this approach as a means to develop HE learners' assessment literacy, though some also gave additional rationales. Studies report learners as rating these interventions positively (Atkinson & Lim, 2013) and as seeing their importance (Orsmond, Merry, & Reiling, 2002). In one study these initiatives reportedly enhanced criminology undergraduates' obtained grades, and their self-reported awareness of the learning objectives (Case, 2007). Engaging with grading criteria seems to function well as a perspective-taking exercise, with undergraduates describing an increased appreciation of the assessment process and of the expectations upon them (Defeyter & McPartlin, 2007; Rust, Price, & O'Donovan, 2003). After implementing an intervention requiring undergraduates to engage with the grading criteria, Orsmond et al.'s (2002) students were subsequently more accurate in self-assessing their work (based on the concordance between the grades ascribed by learners vs. their professors). Furthermore, after engaging with the grading criteria, learners in at least three studies claimed they were more likely to consult them when completing subsequent work (Bloxham & West, 2004, 2007; Cartney, 2010).

Not all learners are positive about engaging with grading criteria (Bloxham & West, Citation2007), and in some studies there was minimal evidence of benefits to self-assessment accuracy (Rust et al., Citation2003). Of course, these interventions require learners to understand grading criteria, and some find the language used within these criteria difficult to decode (Cartney, Citation2010). Indeed, even if learners do come to understand the criteria, this does not mean they are automatically able to transfer this new tacit knowledge to their future work (Defeyter & McPartlin, Citation2007).

Dialogue/discussion

Typically, dialogue/discussion was utilized as a way of supporting pre-HE and HE learners in the SAGE process of goal-setting and self-regulation, or to facilitate their stronger engagement with feedback. Learners in these studies reported that they are particularly receptive to advice received during one-to-one feedback dialogue sessions (Duncan, Citation2007), seeing these as safe spaces within which to discuss their work (Cramp, Citation2011). In one quasi-experimental study, van der Schaaf, Baartman, and Prins (Citation2013) assigned secondary school students to receive either written feedback only on an assignment, or written feedback plus a face-to-face feedback dialogue. Those who received the additional dialogue subsequently rated their feedback as more useful on a validated scale measure (e.g., “I use the feedback to go back over what I have done in the assignment”).

Despite the advantages, there was evidence that learners' participation in one-to-one dialogue sessions is often limited, even when time is set aside for this purpose. In Duncan's (Citation2007) study, only 31% of undergraduates who were invited to participate in a feedback dialogue actually attended, and many who attended did not explicitly refer to the feedback they received. Similarly, van der Schaaf et al. (Citation2013) found asymmetry in feedback dialogue sessions, suggesting that the teachers often dominated the discussions.

Sustainable Monitoring

Another cluster of intervention components involved learners engaging in activities that required them to formally document and track how their performance and feedback change over time and to reflect on these changes as a means to direct their ongoing skill development. It is noteworthy that all of the evidence reviewed for these interventions came from HE contexts.

Action planning

A range of the recipience processes were targeted through action planning (as defined earlier), the most common being engagement with feedback, and goal-setting and self-regulation. There was evidence that encouraging, or requiring, learners to produce an action plan can facilitate their engagement with feedback. In Enomoto's (Citation2012) study, university-level language learners wrote personal reflections after completing skills-based action plans, and these reflections were qualitatively analyzed. One conclusion was that 52% of reflections were judged to contain evidence of deeper approaches to learning as a consequence of the intervention. Medical and dental students in two other studies believed that action planning was particularly effective in promoting their reflection, independence, and target-setting (Altahawi, Sisk, Poloskey, Hicks, & Dannefer, Citation2012; Dahllöf, Tsilingaridis, & Hindbeck, Citation2004), and there was also anecdotal evidence that producing action plans could promote learners' subsequent feedback seeking (Altahawi et al., Citation2012). One promising finding comes from Chang, Chou, Teherani, and Hauer (Citation2011), who asked medical students to prepare written learning goals and then thematically coded the focuses of these goals. Chang et al. found that the goals' focuses differed systematically between students of varying levels of ability, with higher-achieving students proposing significantly more advanced goals. The researchers noted that although lower-achieving students proposed weaker goals, these goals were appropriate to their level of performance. In at least some circumstances, then, students are able to effectively calibrate their goals against their own abilities. Despite the potential efficacy of action planning, there was some evidence of learners' limited engagement with this process (Duncan, Citation2007).

Portfolio

Researchers introduced portfolio interventions as a means to develop learners' skills of self-appraisal, and of goal-setting and self-regulation. There was some evidence that keeping a portfolio of assessed work is viewed positively by learners in HE contexts, and moreover that this positivity translates into their engagement in reflection (Quinton & Smallbone, Citation2010) and an appreciation of playing positive roles in their own academic development (Embo et al., Citation2010). In at least three studies, medical students and their tutors described a belief that feedback portfolios promote learners' independence (Dahllöf et al., Citation2004), reflection and target-setting (Altahawi et al., Citation2012; Dahllöf et al., Citation2004), dialogue with educators (Ajjawi, Schofield, McAleer, & Walker, Citation2013; Dahllöf et al., Citation2004), and feedback seeking (Altahawi et al., Citation2012). Based on undergraduates' written reflections, Quinton and Smallbone (Citation2010) concluded that keeping portfolios can provide learners with distance from their initial emotional responses to evaluative comments. Midwifery students in Embo et al.'s (Citation2010) focus groups—in particular, those in later stages of the course—claimed that a portfolio-style intervention promoted their intrinsic motivation to use feedback.

Again, interventions involving portfolios were said to be time-consuming and often underused by learners (Dahllöf et al., Citation2004; Embo et al., Citation2010). In some cases, this reluctance was attributed to the modular styles of teaching, discussed earlier, wherein learners did not always see reflection on feedback as being useful (Burr, Brodier, & Wilkinson, Citation2013). In other cases, the reluctance was suggested to stem from learners' defensiveness after receiving low grades (Geddes, Citation2009). It was noted that not all learners are able to develop action points from feedback, even if they are able to see recurring themes (Quinton & Smallbone, Citation2010). Furthermore, one important limitation is that to be able to reflect on feedback, learners first need to understand it, and a portfolio does not assist them in decoding it (Quinton & Smallbone, Citation2010).

Collective Provision of Training

Some intervention components involved educators supporting groups of learners collectively, by disseminating information and resources. These resources were designed to broaden learners' concepts of feedback and the processes by which it is produced, to help them to understand and use their feedback effectively, and/or to be better prepared for their own emotional responses to feedback. Again, it is noteworthy that all of the evidence reviewed for these interventions came from HE contexts.

Feedback workshop

Workshops, as defined earlier, were primarily used for the purposes of supporting students in their self-appraisal and developing their assessment literacy. Indeed, undergraduates in two focus group studies reported that participating in a feedback workshop enabled them to better understand the grading process and criteria (Cartney, Citation2010; Rust et al., Citation2003) and gave them insight into the tacit knowledge held by educators (Rust et al., Citation2003). Furthermore, undergraduates in Pain and Mowl's (Citation1996) survey commented that participating in feedback workshops helped them to feel part of the assessment system and that they put more effort into their writing as a result. Researchers noted that learners' attendance at these workshops can sometimes be poor unless it is made compulsory, and even when learners do attend, their participation can be limited (Cartney, Citation2010; Price, O'Donovan, & Rust, Citation2007). Moreover, given that preparing and running such workshops can be time-consuming (Pain & Mowl, Citation1996), it is noteworthy that their impact in these studies was not always obvious. Rust et al. (Citation2003) reported no gains in learners' self-assessment accuracy, as indexed by the correspondence between the grades ascribed by learners and teachers, and Price et al. (Citation2007) proposed that some learners resist implementing feedback even after attending a workshop.

Feedback resources

Educators have trialed different kinds of resources for supporting learners' proactive recipience. Reflecting this variety, skills of self-appraisal, assessment literacy, and engagement with feedback were all targeted with similar frequency. In Withey's (Citation2013) research, 80% of law students who used a feedback guide agreed that the guide made them engage more with their feedback than they normally would. Moreover, most believed that the guide helped them to understand assessment criteria, made them more likely to engage in self-assessment, and improved their grades across different modules. Likewise, in Defeyter and McPartlin's (Citation2007) study, focus groups of undergraduate students reported that having the opportunity to design their own feedback sheet promoted their engagement with the feedback. As with other intervention components, it was noted that generating such resources is time-consuming (Withey, Citation2013) and that learners' engagement with them is variable (Adcroft & Willis, Citation2013).

Exemplar assignments

A shared aim of all interventions in this category was to develop learners' assessment literacy. Giving learners access to model exemplars of completed assignments allegedly demystifies educators' expectations (Baker & Zuvela, Citation2013). Learners have in some cases been shown to engage with and appreciate this opportunity (Handley & Williams, Citation2011) and to show insight into its benefits (Hendry, Bromberger, & Armstrong, Citation2011; Orsmond et al., Citation2002). Yet there was only limited evidence, from questionnaires and anecdotal reports, that exemplars can aid feed-forward to learners' own assessments (Baker & Zuvela, Citation2013; Handley & Williams, Citation2011). Once again, it was reported that not all learners engage with exemplar assignments (Baker & Zuvela, Citation2013) or evaluate them in the same way as an expert grader would (Handley & Williams, Citation2011). Finally, Handley and Williams conjectured that exemplar assignments might promote a surface approach to learning.

Manner of Feedback Delivery

Finally, some intervention components focused on various alterations to how individual instances of feedback information were delivered to pre-HE and HE learners, in terms of the modality of the feedback, whether its function is formative or summative, or aspects of its content, presentation, or style.

Formative assessment/resubmission

By far the most common intended purpose of these interventions, as defined earlier, was to support HE learners' engagement and motivation. A few papers reported other purposes, including developing learners' skills of self-appraisal. The reviewed papers showed undergraduate students claiming that they engage strongly with formative assessment (Perera & Morgan, Citation2011) and that they subsequently engage in more proactive behaviors as means to improve (Wingate, Citation2010). Undergraduates in these studies believed that formative assessment supports their self-appraisal skills (Millar, Davis, Rollin, & Spiro, Citation2010) and proactive feedback-seeking (Cartney, Citation2010) and enhances the dialogue between learners and educators (Millar et al., Citation2010; Perera & Morgan, Citation2011). In two studies in which university students had the opportunity to resubmit a summative assessment after receiving feedback on a draft, they claimed to have read, understood, and applied the feedback given (Dube, Kane, & Lear, Citation2012), and there was some evidence that their resubmitted assignments were of a higher quality than the originals (Covic & Jones, Citation2008). Nevertheless, it was noted that because formative work carries no credit, some learners may submit work that is incomplete (Brearley & Cullen, Citation2012) and may minimize the effort they invest in preparing the first submission. Indeed, Covic and Jones (Citation2008) theorized that the option to resubmit work following feedback can be viewed by learners as a safety net and could thereby encourage a surface approach to learning. Perhaps for any of these reasons, not all learners improve after formative assessment; it is theorized that formative feedback could harm the self-efficacy of academically weaker learners, discouraging them from participating in such interventions (Wingate, Citation2010).

Feedback without a grade

Very few papers reported this form of intervention, but the typical intended purpose was to support HE learners' self-appraisal skills and promote stronger engagement with feedback. In one study, 62% of university-level history students agreed that withholding grades had made them take more notice of their tutors' feedback (Sendziuk, Citation2010).

Tailored feedback

A small number of papers reported tailored feedback interventions with HE learners, as defined earlier. The most common rationale was to improve learners' engagement with, and motivation to use, their feedback. One study, involving interviews with undergraduate students, reported that these learners believed they were more likely to follow the guidance contained in feedback they had specifically requested and that this tailored feedback was effective in promoting dialogue between them and their graders (Bloxham & Campbell, Citation2010).

Presentation of feedback

These interventions (used with both pre-HE and HE learners) were rather diverse, and included highlighting relevant grading criteria within grid templates or presenting feedback supposedly matched against individuals' “learning styles.” Most were intended to increase students' engagement with and motivation to use their feedback. In one study, feedback that was designed to match high school students' individual “learning styles” led to greater improvements in their work—as measured by test scores—than did standard written feedback or no feedback (Parvez & Blank, Citation2008). However, it is unclear what behavioral changes drove these improvements, or whether the “learning styles” feedback was simply more effective overall, irrespective of whether it was matched to students' supposed styles.

Technology

Whereas many papers reported using technology as a tool to support self-appraisal, the primary intended purpose of technology-based interventions was to enable pre-HE and HE learners to become more motivated to engage with feedback (see Hepplestone, Holden, Irwin, Parkin, & Thorpe, Citation2011, for a review from HE). Perhaps for this reason, the primary emphasis in many of these papers was on learners' satisfaction with—rather than their use of—their feedback. For example, learners in these papers typically appeared positive about receiving feedback via virtual learning environments (del Mar Sánchez-Vera, Fernández-Breis, Castellanos-Nieves, Frutos-Morales, & Prendes-Espinosa, Citation2012; Geddes, Citation2009; Nicol, Citation2009), electronic voting systems (Lymn & Mostyn, Citation2010), or automated feedback systems (Lipnevich & Smith, Citation2009) but were less enthusiastic about receiving feedback via Short Message Service (i.e., text messaging; Brett, Citation2011). Attitudes toward audio/video feedback were strongly polarized, with some learners and educators disliking this format strongly (Gleaves & Walker, Citation2013), whereas others saw it benefiting learning (O'Loughlin, Ní Chróinín, & O'Grady, Citation2013).

There was a perception among HE learners that graders elaborate more clearly on audio- or video-feedback advice than is possible in written feedback (Gleaves & Walker, Citation2013; Gould & Day, Citation2013). This perception might explain why many of the undergraduates and postgraduates in one survey reported paying greater attention to video feedback than to written feedback. Some of these participants also claimed that video feedback can make academic staff more identifiable, which fosters stronger interpersonal dialogue (Crook et al., Citation2012). Looking instead to feedback delivered via a virtual learning environment, business students in one study reported being significantly more likely to engage with this online feedback than to seek feedback directly from instructors or peers (Geddes, Citation2009). Of interest, usage statistics revealed that these students' level of engagement with their online feedback significantly predicted their eventual grade.
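Geddes's (Citation2009) usage-statistics finding amounts to a simple predictive relationship between logged engagement and attainment. Purely to illustrate the kind of analysis involved (the data, variable names, and the bivariate least-squares fit below are hypothetical assumptions, not the study's actual method), one could regress final grades on feedback-view counts:

```python
# An illustrative sketch, not Geddes's analysis: fit a least-squares
# line relating logged feedback views to final grades.
# All values below are hypothetical placeholders.
from statistics import mean

views = [2, 5, 1, 8, 4, 7, 3, 6]           # hypothetical view counts
grades = [55, 64, 50, 78, 62, 71, 58, 69]  # hypothetical final grades

mx, my = mean(views), mean(grades)
slope = (sum((x - mx) * (y - my) for x, y in zip(views, grades))
         / sum((x - mx) ** 2 for x in views))
intercept = my - slope * mx
print(f"predicted grade = {intercept:.1f} + {slope:.2f} * views")
```

Even a strong fit of this kind is, of course, correlational: more engaged students may simply be stronger students, which is one reason the behavioral and (quasi-)experimental designs discussed later remain important.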

A common reported limitation was the capacity for technical failures or difficulties when delivering feedback via technological means (Crook et al., Citation2012; Lees & Carpenter, Citation2012; O'Loughlin et al., Citation2013). Authors pointed to the ongoing need for training to become proficient in using these methods (Gould & Day, Citation2013), and there were several concerns about learners' actual implementation of feedback provided via technological means. For example, there was evidence that even if learners are positive about receiving audio feedback, they still desire written feedback alongside it (Atfield-Cutts & Jeary, Citation2013; Brearley & Cullen, Citation2012; Lees & Carpenter, Citation2012). Finally, feedback provided via electronic voting systems is not individualized; therefore, if a small minority of learners get an answer wrong, they may not receive corrective feedback (Cutts, Carbone, & van Haaster, Citation2004; Lymn & Mostyn, Citation2010).

Other

This diverse category included interventions such as keeping a reflective feedback diary (Gleaves, Walker, & Grey, Citation2008), incorporating space into feedback pro formas for learners to add reflection (Quinton & Smallbone, Citation2010), and providing feedback only to learners who request it (Jones & Gorra, Citation2013). Although some of these interventions would fit within one of the four post hoc clusters described previously, none fit any of the specific categories of intervention components.

DISCUSSION

Giving feedback to learners does not “magically” improve their skills or boost their grades without those learners acting (Boud & Molloy, Citation2013a). Rather, the relation between feedback and subsequent achievement is necessarily mediated by learners' agentic use of and engagement with feedback processes—what we have termed their proactive recipience. It is clear that the topic of proactive recipience has enjoyed a surge of research interest throughout the past decade, yet it is also clear that the research base remains highly fragmented and somewhat atheoretical. The present systematic review fulfills a timely need to synthesize the state of knowledge on this topic and to build stronger theoretical foundations for future work. The core concepts and constructs that emerged from our synthesis are summarized in Figure 1.

FIGURE 1 A descriptive model of key conceptual influences on learners' proactive recipience of feedback.

Characteristics of the Literature

To begin, we asked: What were the descriptive characteristics of this literature? What kinds of learners, learning contexts, and feedback sources have been studied, and which research methods and analytic approaches have been used? Our review shows, perhaps unsurprisingly, that the literature predominantly focuses on feedback received by learners from their educators, meaning that we know comparatively little about the behavioral impacts of engaging with peer- and self-feedback processes (it is possible that alternative search terms other than “feedback,” such as “self-monitoring,” might have captured a small additional amount of relevant literature on these topics). Nevertheless, our review points to potential benefits of engaging proactively with feedback received from any of these alternative sources.

The literature represents learners from across a broad range of study disciplines, yet most empirical studies involved learners in HE contexts, whereas other contexts were conspicuously underrepresented. Why might this bias exist? An informal analysis of the abstracts reviewed in Phase 1 of our search suggested that, even at that initial stage, HE contexts were overrepresented relative to pre-HE contexts, and to approximately the same extent as among our final 195 papers (i.e., by approximately 10:1). It therefore seems unlikely that the bias is a product of our inclusion and exclusion criteria beyond the search terms themselves (e.g., an emphasis on behavioral consequences). Logically, the relative absence of pre-HE research from this review must therefore be attributable to at least one of three other causes: (a) limitations in our search string, meaning that we did not discover relevant pre-HE research (e.g., researchers in pre-HE contexts tend to use different terminology); (b) feedback, as a topic, features more prominently within HE research compared to pre-HE research; and/or (c) educators in HE settings more typically fulfill simultaneous roles as both researcher and teaching practitioner—and as a result are more likely than their pre-HE counterparts to publish their ideas and interventions in academic outlets. Whatever the reason, this field of research would clearly benefit from greater representation beyond HE contexts.

The research methods represented in this review were diverse although, as is true in the broader feedback literature, there were very few experimental studies (Evans, Citation2013). The general diversity of research methods is in many ways a strength, but it did mean that it was impossible to precisely assess and compare the overall effects of the specific factors and interventions identified. Indeed, there was considerable variability in the quality of the study designs, the measures used, the data, and the authors' reporting of these. The studies used quite different outcome measures, most of which involved self-reported behavior, including a preponderance of data from focus groups and surveys that often applied unspecified qualitative analytic approaches. Clearly, it would be valuable to focus more on how learners actually behave when receiving feedback rather than principally on how they claim they behave. Future research should more frequently choose outcome variables that reflect this emphasis; a greater number of behavioral studies using observational and (quasi-)experimental methods would be particularly valuable.

Factors That May Influence Recipience

The second aim of this review was to draw together research and theory on various factors that might promote or inhibit learners' proactive recipience. The only previous review of these factors highlighted some such barriers, mainly relating to inadequacies in how feedback is delivered (Jonsson, Citation2013). By conceptualizing feedback as a communicative process, we have documented a far more comprehensive list of potential influences upon proactive recipience than did the earlier review, albeit most of these influences were proposed by researchers working in HE contexts. These potential influences include factors pertaining to the receiver, sender, the message, and the context in which the message is delivered. The literature proposes, for instance, that individual differences in skills, such as self-regulation (e.g., Nicol & Macfarlane-Dick, Citation2006; Orsmond & Merry, Citation2013), confidence, and academic self-concept (e.g., Baadte & Schnotz, Citation2013; Eva et al., Citation2012), might affect learners' engagement, irrespective of the content of feedback. A learner with superior self-regulation skills may well be better able to engage in self-appraisal and goal-setting (Nicol & Macfarlane-Dick, Citation2006)—two of the key processes that emerged in our SAGE taxonomy—and to view feedback as a means to progress toward their goals (Nicol & Macfarlane-Dick, Citation2006). In contrast, learner variables such as confidence and self-efficacy may increase learners' willingness to spend time and effort engaging with feedback (Handley et al., Citation2011) and may promote learners' belief that engaging in this way will lead to improvement.

Learners' perceptions of the credibility of the feedback sender might also moderate feedback's effectiveness (e.g., Bing-You et al., Citation1997), thus implying that promoting proactive recipience could be as much about building relationships and trust as about formulating the right message. Nevertheless, the message itself is important; confusing academic terminology and lack of specificity have been proposed as fundamental barriers to learners' engagement with feedback processes (e.g., Robinson et al., Citation2013; Weaver, Citation2006). It is interesting to note that the relative level of engagement with different kinds of feedback messages may hinge on other learner variables, such as their level of study. For instance, Murdoch-Eaton and Sargeant's (Citation2012) data suggest that engagement with positive feedback may be greater among more junior medical students, whereas engagement with negative feedback may be greater among more senior medical students. This kind of finding highlights an important point: We might in principle increase learners' engagement with feedback through tailoring the kinds of messages we send; however, if we want learners to genuinely benefit from the feedback, we may instead need to train them to better engage with different kinds of feedback messages.

Finally, the educational context within which feedback is delivered also plays a potential role; for example, we found proposals that the modular structure of many educational courses might minimize learners' opportunities and motivation to implement feedback (Price et al., Citation2011). Modularization could in principle affect proactive recipience for several reasons. In particular, modularized assessments regularly occur toward the end of modules, and different modules are typically assessed by different teachers whom learners perceive to have differing expectations and standards. These issues can lead learners to perceive there to be limited opportunity to transfer what they learned in one module to their subsequent learning and, therefore, benefit little from engaging with feedback (Price et al., Citation2011). These contextual factors are often difficult, if not impossible, for individual educators to control, but being aware of their potential implications makes it possible to consider mitigating actions. Other contextual influences on recipience, however, are far easier to control. One key example is ensuring that learners receive appropriate training in how to understand and implement feedback rather than making the flawed assumption that feedback literacy skills are obvious or intuitive (Bevan et al., Citation2008; Weaver, Citation2006).

The results of this analysis highlight that when learners fail to adequately engage with feedback processes, this failure could be attributed to many possible sources and not only (or even necessarily) to how the message is delivered. Indeed, interventions that hinge solely on changing the content or delivery of feedback may well be ineffective, and improving learners' proactive recipience will often require a sharing of responsibility by both educator and learner to identify and resolve the barriers (CitationWinstone, Nash, Rowntree, & Parker, in press).

Interventions and Recipience Processes

The third aim of the present work was to catalogue the various interventions that have been reported in the academic literature for supporting proactive recipience, and to examine the reports of their successes and limitations. We found diverse and novel interventions being used to this end, from feedback workshops and resources to the purposeful use of learning technologies. The publications that described these interventions frequently reported evidence of positive effects—again, mainly in HE contexts—on learners' behavior and their overall proactive engagement with feedback processes (or rather, in a large proportion of cases, their self-reported behavior and engagement). For instance, learners described outcomes such as improved attentiveness to feedback and more proactive feedback-seeking, increases in their ability to reflect and to engage in perspective taking, better understanding of grading criteria, and improved skills of self-assessment. In some cases, albeit not as often as one might hope, there were indications from direct behavioral data to support some of these claimed outcomes.

The publications also described difficulties with many of the interventions, some of which were common across different intervention types. For example, these endeavors were often time-consuming to set up and/or to implement. Moreover, many interventions were reported to be difficult for learners to use, and learners often engaged with them less than would be ideal. These limitations notwithstanding, the cumulative evidence of successful recipience interventions offers some interesting and concrete solutions to a challenging problem. However, weaknesses in the literature—beyond those already mentioned—mean there is still much to learn about how different interventions truly influence learners' behavior before we can draw confident recommendations. For example, most of the intervention components we identified were explored in only a few studies; we also know relatively little about the transferability of interventions' effects across different learning contexts and about their long-term effects (e.g., as might ideally be explored in randomized, longitudinal studies).

Of course, the intervention components we have reviewed and described here should by no means be taken as an exhaustive list of what is possible. However, what seems most important when planning and evaluating future interventions is to begin with a firm understanding of the skills or attributes that those interventions are intended to support. In this vein, the fourth aim of the present review was to overview the processes thought to underlie the effective recipience of feedback and to catalogue how different interventions have targeted these processes. The resulting taxonomy comprises four distinct processes (the SAGE recipience processes) that are believed to support being a proactive, agentic receiver of feedback: Self-appraisal, Assessment literacy, Goal-setting and self-regulation, and Engagement and motivation. It is evident from our mapping of intervention components onto these processes that, across the literature as a whole, every one of the components we identified has been used to target more than one of the four processes. Particularly striking, though, is that far more of the interventions targeted students' engagement with feedback and motivation to use it than targeted their sustainable skills such as goal-setting and self-regulation. Of course, the cell frequencies in this mapping should not be read to imply what is and is not theoretically plausible. For instance, the absence of studies that used action planning interventions to support learners' assessment literacy does not mean that these interventions could not serve such a function. Nevertheless, this mapping does provide an accessible overview of what has been attempted and of areas that may deserve further examination. As Figure 1 suggests, the SAGE processes are likely to be interrelated with many of the interpersonal communication variables described earlier. That is to say, it is plausible that receiver, sender, message, and context variables would moderate learners' development of SAGE processes and would be reciprocally influenced by the development of these processes.

The SAGE taxonomy is theoretically and practically important in its own right. Whereas the taxonomy is not a model, insofar as it does not provide specific theoretical predictions, it can nevertheless provide a stronger theoretical organization for existing and future research. Among the many intervention studies reviewed here, we frequently observed instances in which the authors gave no explicit theoretical account of how their endeavors might influence learners' cognition or behavior. Formulating this taxonomy, therefore, provides a foundation for researchers and practitioners to think more conceptually when identifying the problems to be addressed, and when planning possible solutions. For example, an educator whose students struggle to understand the language used in their feedback might reason that these students need opportunities to develop their assessment literacy. The present work helps to identify interventions that others have used for targeting this skill, as well as other interventions that might be considered, as sketched below. Moreover, the SAGE taxonomy could underpin more rigorous evaluations of interventions, by ensuring that the proposed theoretical mechanism is what informs the choice of outcome measures. These improvements should help the growing empirical literature to become more theoretically coherent and unified, such that future reviews might draw stronger conclusions about interventions' effectiveness than are warranted presently.
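To make this reasoning concrete, the following is a minimal sketch (our illustration, not a tool from any reviewed study) of how a SAGE-style mapping could be queried when planning an intervention. The mapping entries echo examples discussed above, but the structure and names are assumptions:

```python
# Illustrative sketch: organize intervention components by the SAGE
# recipience processes they have been used to target, then invert the
# mapping to retrieve candidate components for a diagnosed need.
# The entries are examples drawn from this review's narrative; they
# are not the review's full mapping table.
from collections import defaultdict

targets = {
    "action planning":      {"engagement and motivation",
                             "goal-setting and self-regulation"},
    "portfolio":            {"self-appraisal",
                             "goal-setting and self-regulation"},
    "feedback workshop":    {"self-appraisal", "assessment literacy"},
    "exemplar assignments": {"assessment literacy"},
}

# Invert: for each process, which components have targeted it?
by_process = defaultdict(set)
for component, processes in targets.items():
    for process in processes:
        by_process[process].add(component)

# An educator diagnosing weak assessment literacy retrieves candidates:
print(sorted(by_process["assessment literacy"]))
# -> ['exemplar assignments', 'feedback workshop']
```

Inverting the component-to-process mapping in this way mirrors the practical use case just described: start from the diagnosed need, then retrieve interventions that others have used to target it.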

With our attention tuned to the future development of this research field, several specific directions seem of particular importance. First, although we have already noted the need for more evidence on proactive recipience in pre-HE settings, it would also be valuable to see more empirical comparisons across multiple levels of study. Such comparisons are vital for understanding the long-term trajectories in the development of SAGE processes, and this understanding should in turn inform best practice on how and when to optimally target these processes throughout a learner's educational career. More generally, our review highlights many variables that could influence proactive recipience, yet it also shows that relatively little is known about the higher-order interactions between those variables. For example, as well as moderating proactive recipience itself, communication variables might also moderate the effects of specific interventions upon proactive recipience. An ambitious goal for future research will be to better understand these kinds of complex interactions, thus permitting more sophisticated accounts of the pathways that lead to effective engagement with feedback. Finally, whereas there is a clear need for stronger evidence on the efficacy of certain interventions, no single intervention is likely to resolve all the plausible barriers to proactive recipience. Therefore, future research should systematically explore how interventions can best be used in conjunction as a “toolkit,” to nurture learners' proactive recipience in a holistic rather than piecemeal manner. By placing primary emphasis on recipience processes in how we design, implement, and evaluate these feedback interventions or toolkits, future work may counter the invisibility of learners' engagement (Price et al., Citation2011), gaining a better understanding of what truly makes feedback effective.

SUPPLEMENTAL MATERIAL

Supplemental data for this article can be accessed on the publisher's website.


ACKNOWLEDGMENTS

We are grateful to Ian Kinchin, Darren Moore, Pete Reddy, and two anonymous reviewers for their helpful comments on this article.

Funding

This work was generously supported by funding from the Higher Education Academy (Grant GEN1024).

Notes

1 At first glance this framework applies less straightforwardly to self-feedback (i.e., where the learner is both the sender and receiver of feedback information, produced through introspective processes such as self-monitoring and self-assessment). Nevertheless we can equally think about self-feedback as a communicative event, in which, for example, clear proposals for improvement must be produced, and the learner must be receptive to these proposals.

2 Researchers such as Kluger and DeNisi (Citation1996) use the term “intervention” to refer to attempts to modify learners' behavior through the provision of feedback. In that context, feedback per se is the intervention. Here we were interested in different kinds of intervention: attempts to improve learners' engagement with feedback processes. In this context, simply giving feedback is not an intervention as Kluger and DeNisi and others would have it.

REFERENCES

  • Adcroft, A., & Willis, R. (2013). Do those who benefit the most need it the least? A four-year experiment in enquiry-based feedback. Assessment & Evaluation in Higher Education, 38, 803–815. doi:10.1080/02602938.2012.714740
  • Ajjawi, R., Schofield, S., McAleer, S., & Walker, D. (2013). Assessment and feedback dialogue in online distance learning. Medical Education, 47, 527–528. doi:10.1111/medu.12158
  • Al-Barakat, A., & Al-Hassan, O. (2009). Peer assessment as a learning tool for enhancing student teachers' preparation. Asia-Pacific Journal of Teacher Education, 37, 399–413. doi:10.1080/13598660903247676
  • Altahawi, F., Sisk, B., Poloskey, S., Hicks, C., & Dannefer, E. F. (2012). Student perspectives on assessment: Experience in a competency-based portfolio system. Medical Teacher, 34, 221–225. doi:10.3109/0142159X.2012.652243
  • Atfield-Cutts, S., & Jeary, S. (2013, September). Blended feedback: Delivery of feedback as digital audio on a computer programming unit. Paper presented at the BCS Quality Specialist Group's Annual INSPIRE conference, London, UK.
  • Atkinson, D., & Lim, S. L. (2013). Improving assessment processes in higher education: Student and teacher perceptions of the effectiveness of a rubric embedded in a LMS. Australasian Journal of Educational Technology, 29, 651–666.
  • Baadte, C., & Schnotz, W. (2013). Feedback effects on performance, motivation and mood: Are they moderated by the learner's self-concept? Scandinavian Journal of Educational Research, 58, 570–591. doi:10.1080/00313831.2013.781059
  • Bailey, R., & Garner, M. (2010). Is the feedback in higher education assessment worth the paper it is written on? Teachers' reflections on their practices. Teaching in Higher Education, 15, 187–198. doi:10.1080/13562511003620019
  • Baker, D. J., & Zuvela, D. (2013). Feedforward strategies in the first-year experience of online and distributed learning environments. Assessment & Evaluation in Higher Education, 38, 687–697. doi:10.1080/02602938.2012.691153
  • Ball, E. C. (2010). Annotation an effective device for student feedback: A critical review of the literature. Nurse Education in Practice, 10, 138–143. doi:10.1016/j.nepr.2009.05.003
  • Beaumont, C., O'Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education, 36, 671–687. doi:10.1080/03075071003731135
  • Bedford, S., & Legg, S. (2007). Formative peer and self feedback as a catalyst for change within science teaching. Chemistry Education Research and Practice, 8, 80–92. doi:10.1039/B6RP90022D
  • Bevan, R., Badge, J., Cann, A., Wilmott, C., & Scott, J. (2008). Seeing eye-to-eye? Staff and student views on feedback. Bioscience Education Electronic Journal, 12. doi:10.3108/beej.12.1
  • Bing-You, R. G., Paterson, J., & Levine, M. A. (1997). Feedback falling on deaf ears: Residents' receptivity to feedback tempered by sender credibility. Medical Teacher, 19, 40–44. doi:10.3109/01421599709019346
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 7–74. doi:10.1080/0969595980050102
  • Blair, A., Curtis, S., Goodwin, M., & Shields, S. (2013). What feedback do students want? Politics, 33, 66–79. doi:10.1111/j.1467-9256.2012.01446.x
  • Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education, 35, 291–300. doi:10.1080/02602931003650045
  • Bloxham, S., & West, A. (2004). Understanding the rules of the game: Marking peer assessment as a medium for developing student's conceptions of assessment. Assessment & Evaluation in Higher Education, 29, 721–733. doi:10.1080/0260293042000227254
  • Bloxham, S., & West, A. (2007). Learning to write in higher education: Students' perceptions of an intervention in developing understanding of assessment criteria. Teaching in Higher Education, 12, 77–89. doi:10.1080/13562510601102180
  • Boud, D., & Molloy, E. (2013a). Feedback in higher and professional education: Understanding it and doing it well. New York, NY: Routledge.
  • Boud, D., & Molloy, E. (2013b). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38, 698–712. doi:10.1080/02602938.2012.691462
  • Bounds, R., Bush, C., Aghera, A., Rodriguez, N., Stansfield, R. B., & Santeen, S. A. (2013). Emergency medicine residents' self-assessments play a critical role when receiving feedback. Academic Emergency Medicine, 20, 1055–1061. doi:10.1111/acem.12231
  • Brearley, F. Q., & Cullen, W. R. (2012). Providing students with formative audio feedback. Bioscience Education, 20, 22–36. doi:10.11120/beej.2012.20000022
  • Brett, P. (2011). Students' experiences and engagement with SMS for learning in higher education. Innovations in Education and Teaching International, 48, 137–147. doi:10.1080/14703297.2011.564008
  • Burke, D. (2009). Strategies for using feedback students bring to higher education. Assessment & Evaluation in Higher Education, 34, 41–50. doi:10.1080/02602930801895711
  • Burr, S. A., Brodier, E., & Wilkinson, S. (2013). Delivery and use of individualised feedback in large class medical teaching. BMC Medical Education, 13, 63. doi:10.1186/1472-6920-13-63
  • Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281. doi:10.3102/00346543065003245
  • Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31, 219–233. doi:10.1080/03075070600572132
  • Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36, 395–407. doi:10.1080/03075071003642449
  • Cartney, P. (2010). Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used. Assessment & Evaluation in Higher Education, 35, 551–564. doi:10.1080/02602931003632381
  • Case, S. (2007). Reconfiguring and realigning the assessment feedback processes for an undergraduate criminology degree. Assessment & Evaluation in Higher Education, 32, 285–299. doi:10.1080/02602930600896548
  • Chang, A., Chou, C. L., Teherani, A., & Hauer, K. E. (2011). Clinical skills-related learning goals of senior medical students after performance feedback. Medical Education, 45, 878–885. doi:10.1111/j.1365-2923.2011.04015.x
  • Chen, C.-H. (2010). The implementation and evaluation of a mobile self- and peer-assessment system. Computers & Education, 55, 229–236. doi:10.1016/j.compedu.2010.01.008
  • Covic, T., & Jones, M. K. (2008). Is the essay resubmission option a formative or a summative assessment and does it matter as long as the grades improve? Assessment & Evaluation in Higher Education, 33, 75–85. doi:10.1080/02602930601122928
  • Cramp, A. (2011). Developing first-year engagement with written feedback. Active Learning in Higher Education, 12, 113–124. doi:10.1177/1469787411402484
  • Crisp, B. R. (2007). Is it worth the effort? How feedback influences students' subsequent submission of assessable work. Assessment & Evaluation in Higher Education, 32, 571–581. doi:10.1080/02602930601116912
  • Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., … Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58, 386–396. doi:10.1016/j.compedu.2011.08.025
  • Cutts, Q., Carbone, A., & van Haaster, K. (2004, November). Using an Electronic Voting System to promote active reflection on coursework feedback. Paper presented at the International Conference on Computers in Education, Melbourne, Australia.
  • Dahllöf, G., Tsilingaridis, G., & Hindbeck, H. (2004). A logbook for continuous self-assessment during 1 year in paediatric dentistry. European Journal of Paediatric Dentistry, 5, 163–169.
  • Defeyter, M., & McPartlin, P. L. (2007). Helping students understand essay marking criteria and feedback. Psychology Teaching Review, 13, 23–33.
  • del Mar Sánchez-Vera, M., Fernández-Breis, J. T., Castellanos-Nieves, D., Frutos-Morales, F., & Prendes-Espinosa, M. P. (2012). Semantic web technologies for generating feedback in online assessment environments. Knowledge-Based Systems, 33, 152–165. doi:10.1016/j.knosys.2012.03.010
  • Dowden, T., Pittaway, S., Yost, H., & McCarthy, R. (2013). Students' perceptions of written feedback in teacher education: Ideally feedback is a continuing two-way communication that encourages progress. Assessment & Evaluation in Higher Education, 38, 349–362. doi:10.1080/02602938.2011.632676
  • Dube, C., Kane, S., & Lear, M. (2012). The effectiveness of students redrafting continuous assessment tasks: The pivotal role of tutors and feedback. Perspectives in Education, 30, 50–59.
  • Duncan, N. (2007). ‘Feed-forward’: Improving students' use of tutors' comments. Assessment & Evaluation in Higher Education, 32, 271–283. doi:10.1080/02602930600896498
  • Embo, M. P. C., Driessen, E. W., Valcke, M., & Van der Vleuten, C. P. M. (2010). Assessment and feedback to facilitate self-directed learning in clinical practice of Midwifery students. Medical Teacher, 32, e263–e269. doi:10.3109/0142159X.2010.490281
  • Enomoto, K. (2012). A study skills action plan: Integrating self-regulated learning in a diverse higher education context. In X. Song & K. Cadman (Eds.), Bridging transcultural divides: Asian languages and cultures in global higher education (pp. 101–130). Adelaide, Australia: University of Adelaide Press.
  • Eva, K. W., Armson, H., Holmboe, E., Lockyer, J., Loney, E., Mann, K., & Sargeant, J. (2012). Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Advances in Health Science Education, 17, 15–26. doi:10.1007/s10459-011-9290-7
  • Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83, 70–120. doi:10.3102/0034654312474350
  • Geddes, D. (2009). How am I doing? Exploring on-line gradebook monitoring as a self-regulated learning practice that impacts academic achievement. Academy of Management Learning & Education, 8, 494–510. doi:10.5465/amle.2009.47785469
  • Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, 1, 3–31.
  • Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2010). A comparative study of peer and teacher feedback and of various peer feedback forms in a secondary school writing curriculum. British Educational Research Journal, 36, 143–162. doi:10.1080/01411920902894070
  • Gleaves, A., & Walker, C. (2013). Richness, redundancy or relational salience? A comparison of the effect of textual and aural feedback modes on knowledge elaboration in higher education students' work. Computers & Education, 62, 249–261. doi:10.1016/j.compedu.2012.11.004
  • Gleaves, A., Walker, C., & Grey, J. (2008). Using digital and paper diaries for assessment and learning purposes in higher education: A case of critical reflection or constrained compliance? Assessment & Evaluation in Higher Education, 33, 219–231. doi:10.1080/02602930701292761
  • Gould, J., & Day, P. (2013). Hearing you loud and clear: Student perspectives of audio feedback in higher education. Assessment & Evaluation in Higher Education, 38, 554–566. doi:10.1080/02602938.2012.660131
  • Handley, K., Price, M., & Millar, J. (2011). Beyond ‘doing time’: Investigating the concept of student engagement with feedback. Oxford Review of Education, 37, 543–560. doi:10.1080/03054985.2011.604951
  • Handley, K., & Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36, 95–108. doi:10.1080/02602930903201669
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112. doi:10.3102/003465430298487
  • Havnes, A., Smith, K., Dysthe, O., & Ludvigsen, K. (2012). Formative assessment and feedback: Making learning visible. Studies in Educational Evaluation, 38, 21–27. doi:10.1016/j.stueduc.2012.04.001
  • Hendry, G., Bromberger, N., & Armstrong, S. (2011). Constructive guidance and feedback for learning: The usefulness of exemplars, marking sheets and different types of feedback in a first year law subject. Assessment & Evaluation in Higher Education, 36, 1–11. doi:10.1080/02602930903128904
  • Hepplestone, S., Holden, G., Irwin, B., Parkin, H. J., & Thorpe, L. (2011). Using technology to encourage student engagement with feedback: A literature review. Research in Learning Technology, 19, 117–127. doi:10.1080/21567069.2011.586677
  • Hernández, R. (2012). Does continuous assessment in higher education support student learning? Higher Education, 64, 489–502. doi:10.1007/s10734-012-9506-7
  • Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6, 270–274. doi:10.1080/13562510120045230
  • Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27, 53–64. doi:10.1080/03075070120099368
  • Holmes, K., & Papageorgiou, G. (2009). Good, bad and insufficient: Students' expectations, perceptions and uses of feedback. Journal of Hospitality, Leisure, Sport & Tourism Education, 8, 85–96. doi:10.3794/johlste.81.183
  • Hounsell, D. (2007). Towards more sustainable feedback to students. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education. Learning for the longer term (pp. 101–113). London, UK: Routledge.
  • Hovardas, T., Tsivitanidou, O. E., & Zacharia, Z. C. (2014). Peer versus expert feedback: An investigation of the quality of peer feedback among secondary school students. Computers & Education, 71, 133–152. doi:10.1016/j.compedu.2013.09.019
  • Hyland, F. (1998). The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7, 255–286. doi:10.1016/S1060-3743(98)90017-0
  • Hysong, S. J., Best, R. G., & Pugh, J. A. (2006). Audit and feedback and clinical practice guideline adherence: Making feedback actionable. Implementation Science, 1, 9. doi:10.1186/1748-5908-1-9
  • Johnson, D. W., & Johnson, F. P. (1994). Joining together: Group theory and group skills (5th ed.). Englewood Cliffs, NJ: Prentice Hall.
  • Jones, O., & Gorra, A. (2013). Assessment feedback only on demand: Supporting the few not supplying the many. Active Learning in Higher Education, 14, 149–161. doi:10.1177/1469787413481131
  • Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14, 63–76. doi:10.1177/1469787412467125
  • Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284. doi:10.1037/0033-2909.119.2.254
  • Koen, M., Bitzer, E. M., & Beets, P. A. D. (2012). Feedback or feed-forward? A case study in one higher education classroom. Journal of Social Sciences, 32, 231–242.
  • Lees, D., & Carpenter, V. (2012). A qualitative assessment of providing quality electronically mediated feedback for students in higher education. International Journal of Learning Technology, 7, 95–110. doi:10.1504/ijlt.2012.046868
  • Lipnevich, A. A., & Smith, J. K. (2009). “I really need feedback to learn”: Students' perspectives on the effectiveness of the differential feedback messages. Educational Assessment, Evaluation and Accountability, 21, 347–367. doi:10.1007/s11092-009-9082-2
  • Lymn, J. S., & Mostyn, A. (2010). Audience response technology: Engaging and empowering non-medical prescribing students in pharmacology learning. BMC Medical Education, 10, 73. doi:10.1186/1472-6920-10-73
  • MacDonald, R. B. (1991). Developmental students' processing of teacher feedback in composition instruction. Review of Research in Developmental Education, 8, 3–7.
  • McDonnell, J., & Curtis, W. (2014). Making space for democracy through assessment and feedback in higher education: Thoughts from an action research project in education studies. Assessment & Evaluation in Higher Education, 39, 932–948. doi:10.1080/02602938.2013.879284
  • Millar, J., Davis, S., Rollin, H., & Spiro, J. (2010). Engaging feedback? Brookes e-Journal of Learning and Teaching, 2.
  • Moon, J. A. (2002). Learning journals: A handbook for academics, students and professional development. London, UK: Kogan Page.
  • Moore, C., & Teather, S. (2013). Engaging students in peer review: Feedback as learning. Issues in Educational Research, 23 (Suppl.), 196–211.
  • Murdoch-Eaton, D., & Sargeant, J. (2012). Maturational differences in undergraduate medical students' perceptions about feedback. Medical Education, 46, 711–721. doi:10.1111/j.1365-2923.2012.04291.x
  • Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37, 375–401. doi:10.1007/s11251-008-9053-x
  • Nicol, D. (2009). Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34, 335–352. doi:10.1080/02602930802255139
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35, 501–517. doi:10.1080/02602931003786559
  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31, 199–218. doi:10.1080/03075070600572090
  • Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29, 855–871. doi:10.1080/01421590701775453
  • O'Loughlin, J., Ní Chróinín, D., & O'Grady, D. (2013). Digital video: The impact on children's learning experiences in primary physical education. European Physical Education Review, 19, 165–182. doi:10.1177/1356336X13486050
  • Orsmond, P., Maw, S. J., Park, J. R., Gomez, S., & Crook, A. C. (2013). Moving feedback forward: Theory to practice. Assessment & Evaluation in Higher Education, 38, 240–252. doi:10.1080/02602938.2011.625472
  • Orsmond, P., & Merry, S. (2013). The importance of self-assessment in students' use of tutors' feedback: A qualitative study of high and non-high achieving biology undergraduates. Assessment & Evaluation in Higher Education, 38, 737–753. doi:10.1080/02602938.2012.697868
  • Orsmond, P., Merry, S., & Reiling, K. (2002). The use of exemplars and formative feedback when using student derived marking criteria in peer and self-assessment. Assessment & Evaluation in Higher Education, 27, 309–323. doi:10.1080/0260293022000001337
  • Orsmond, P., Merry, S., & Reiling, K. (2005). Biology students' utilization of tutors' formative feedback: A qualitative interview study. Assessment & Evaluation in Higher Education, 30, 369–386. doi:10.1080/02602930500099177
  • Pain, R., & Mowl, G. (1996). Improving geography essay writing using innovative assessment. Journal of Geography in Higher Education, 20, 19–31. doi:10.1080/03098269608709341
  • Parboteeah, S., & Anwar, M. (2009). Thematic analysis of written assignment feedback: Implications for nurse education. Nurse Education Today, 29, 753–757. doi:10.1016/j.nedt.2009.02.017
  • Parvez, S. M., & Blank, G. D. (2008). Individualizing tutoring with learning style based feedback. Lecture Notes in Computer Science, 5091, 291–301. doi:10.1007/978-3-540-69132-7_33
  • Perera, A. M. B., & Morgan, J. E. (2011, December). Student perceptions of the value of formative assessment in their academic development. Paper presented at the International Horticultural Congress on Science and Horticulture for People, Lisbon, Portugal.
  • Peterson, E. R., & Irving, S. E. (2008). Secondary school students' conceptions of assessment and feedback. Learning and Instruction, 18, 238–250. doi:10.1016/j.learninstruc.2007.05.001
  • Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students' perspective. Assessment & Evaluation in Higher Education, 33, 143–154. doi:10.1080/02602930601127869
  • Price, M., Handley, K., & Millar, J. (2011). Feedback: Focusing attention on engagement. Studies in Higher Education, 36, 879–896. doi:10.1080/03075079.2010.483513
  • Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35, 277–289. doi:10.1080/02602930903541007
  • Price, M., O'Donovan, B., & Rust, C. (2007). Putting a social-constructivist assessment process model into practice: Building the feedback loop into the assessment process through peer review. Innovations in Education and Teaching International, 44, 143–152. doi:10.1080/14703290701241059
  • Price, M., Rust, C., O'Donovan, B., Handley, K., & Bryant, R. (2012). Assessment literacy: The foundation for improving student learning. Oxford, UK: Oxford Centre for Staff and Learning Development.
  • Quinton, S., & Smallbone, T. (2010). Feeding forward: Using feedback to promote student reflection and learning—A teaching model. Innovations in Education and Teaching International, 47, 125–135. doi:10.1080/14703290903525911
  • Reeve, J., & Tseng, C.-M. (2011). Agency as a fourth aspect of students' engagement during learning activities. Contemporary Educational Psychology, 36, 257–267. doi:10.1016/j.cedpsych.2011.05.002
  • Robinson, S., Pope, D., & Holyoak, L. (2013). Can we meet their expectations? Experiences and perceptions of feedback in first year undergraduate students. Assessment & Evaluation in Higher Education, 38, 260–272. doi:10.1080/02602938.2011.629291
  • Rust, C., Price, M., & O'Donovan, B. (2003). Improving students' learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28, 147–164. doi:10.1080/02602930301671
  • Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35, 535–550. doi:10.1080/02602930903541015
  • Schartel, S. A. (2012). Giving feedback—An integral part of education. Best Practice & Research Clinical Anaesthesiology, 26, 77–87. doi:10.1016/j.bpa.2012.02.003
  • Sendziuk, P. (2010). Sink or swim? Improving student learning through feedback and self-assessment. International Journal of Teaching and Learning in Higher Education, 22, 320–330.
  • Sinclair, H. K., & Cleland, J. A. (2007). Undergraduate medical students: Who seeks formative feedback? Medical Education, 41, 580–582. doi:10.1111/j.1365-2923.2007.02768.x
  • Taylor, C., & Burke da Silva, K. (2014). An analysis of the effectiveness of feedback to students on assessed work. Higher Education Research & Development, 33, 794–806.
  • Turner, G., & Gibbs, G. (2010). Are assessment environments gendered? An analysis of the learning responses of male and female students to different assessment environments. Assessment & Evaluation in Higher Education, 35, 687–698. doi:10.1080/02602930902977723
  • van der Schaaf, M., Baartman, L., & Prins, F. (2013). Feedback dialogues that stimulate students' reflective thinking. Scandinavian Journal of Educational Research, 57, 227–245. doi:10.1080/00313831.2011.628693
  • Wakefield, C., Adie, J., Pitt, E., & Owens, T. (2014). Feeding forward from summative assessment: The Essay Feedback Checklist as a learning tool. Assessment & Evaluation in Higher Education, 39, 253–262. doi:10.1080/02602938.2013.822845
  • Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors' written responses. Assessment & Evaluation in Higher Education, 31, 379–394. doi:10.1080/02602930500353061
  • Wingate, U. (2010). The impact of formative feedback on the development of academic writing. Assessment & Evaluation in Higher Education, 35, 519–533. doi:10.1080/02602930903512909
  • Winstone, N. E., Nash, R. A., Rowntree, J., & Menezes, R. (in press). What do students want most from written feedback information? Distinguishing necessities from luxuries using a budgeting methodology. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2015.1075956
  • Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (in press). ‘It'd be useful, but I wouldn't use it': Barriers to university students' feedback seeking and recipience. Studies in Higher Education. doi:10.1080/03075079.2015.1130032
  • Withey, C. (2013). Feedback engagement: Forcing feed-forward amongst law students. The Law Teacher, 47, 319–344. doi:10.1080/03069400.2013.851336
  • Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18, 285–297. doi:10.1080/13562517.2012.719154