
Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes

Naomi E. Winstone & David Carless
Pages 261-278 | Received 22 Apr 2020, Accepted 15 Apr 2021, Published online: 17 May 2021

ABSTRACT

In education systems across the world, teachers are under increasing quality assurance scrutiny in relation to the provision of feedback comments to students. This is particularly pertinent in higher education, where accountability arising from student dissatisfaction with feedback causes concern for institutions. Through semi-structured interviews with twenty-eight educators from a range of institution types, we investigated how educators perceive, interpret, and enact competing functions of feedback. The data demonstrate that educators often experienced professional dissonance where perceived quality assurance requirements conflicted with their own beliefs about the centrality of student learning in feedback processes. Such dissonance arose from the pressure to secure student satisfaction and avoid complaints. The data also demonstrate that feedback does ‘double duty’ through the requirement to manage competing audiences for feedback comments. Quality enhancement of feedback processes could profitably focus less on teacher inputs and more on evidence of student response to feedback.

Despite the potential power of feedback to influence learning and development, policy and practice in this area are rife with challenges, complexities, and contradictions. In this paper, we engage with one such complexity inherent to feedback processes: whilst the individual or team whose performance is being evaluated should be the primary audience for feedback comments, such information often serves multiple purposes and can be directed towards multiple audiences. For example, in the context of school education, comments form part of an evidence trail that is scrutinised as part of internal and external audit processes such as school inspection (Dann, 2018). In higher education, internal moderators and external examiners may scrutinise comments provided by educators. Even in the workplace, comments provided by an appraiser to an appraisee are often subject to scrutiny by more senior managers (Brown, 2019). Feedback givers, then, are often aware that the developmental advice they provide to the recipient may well be scrutinised by other stakeholders in terms of its quality, volume, and tone.

Feedback also has many functions. For learners, feedback can provide validation of effort and guidance for future development; it can provide information about how the grading decision was reached; and it can identify errors in their skills or understanding. For teachers, feedback is often the primary means of dialogue with individual learners, and for institutions, feedback is an important part of demonstrating academic quality. For all stakeholders, the most important function of feedback should be that it facilitates student learning, but this is often not the primary measure of impact (Henderson et al., 2019a). The essence of this challenge is captured by Watling and Ginsburg (2019, p. 2), who observe that ‘the emergence of learning from a cauldron of assessment and feedback can seem like alchemy’.

Processes such as internal moderation, external examining, inspections, audits, and enhancement activities bring accountability to the centre of feedback processes. Teachers have responded to the rising prominence of feedback on institutional agendas by committing increasing amounts of time and effort to providing what they believe is detailed and useful feedback (e.g. Independent Teacher Workload Review Group, 2016; Robinson et al., 2013; Tuck, 2017). However, the challenge of developing feedback practice under considerable time and workload constraints represents a ‘feedback conundrum’ (Carless, 2015, p. 196) that is difficult to resolve.

One solution to this conundrum, advocated by prominent scholars in the field, is to assign greater importance to the role of students in feedback processes (e.g. Boud & Molloy, 2013; Carless, 2015). Emphasis on unidirectional written comments is representative of a transmission-focused model of feedback, whereas a learning-focused model prioritises how students engage with and use feedback (Carless, 2015). In line with a learning-focused model, we define feedback as a process in which students make use of performance-relevant information to promote their learning (Henderson et al., 2019a). Emphasis on what the student rather than the teacher does implies that peer feedback, internally-generated feedback, the development of self-regulation, and evaluative judgement are important elements of a learning-focused approach (Nicol, 2020; Tai et al., 2018; Winstone & Carless, 2019). Such approaches also have the potential to open up discussion about how best to manage the workload challenges inherent in feedback processes. If the role of the student in feedback processes is minimised, responsibility lies with educators to spend time providing comments that may or may not be used. Learning-focused models of feedback place greater emphasis on shared responsibility: the effectiveness of feedback processes depends as much on the actions of students as on those of their teachers (Nash & Winstone, 2017; Winstone et al., 2020). Transmission-focused and learning-focused approaches can be seen as different feedback ‘cultures’, characterised by a continuum of practices from those that are more teacher-driven to those that are more student-led (Winstone & Boud, 2019b). These cultures are implemented within a complex ecology of practices, individual factors, and contextual constraints, with many interacting influences (Henderson et al., 2019b). An important and under-explored element of the ecology of feedback cultures is the role of quality assurance (QA) and accountability, which we discuss below.

Quality assurance, accountability, distrust, and professional dissonance

The conceptual framework guiding the present research is based on mutually interacting forces impacting on feedback processes: QA and accountability, and their potential to create a sense of distrust and professional dissonance. With their focus on rules, procedures, and performance indicators, QA processes aim to foster accountability, enhancement, and trust in systems. At their best, QA processes bring consistency to procedures, reduce idiosyncratic actions, and set agendas for quality enhancement. An important strand of QA is the role of external examiners in benchmarking academic standards as well as identifying and sharing good practices in teaching, learning and assessment.

Whilst QA mechanisms sometimes promote public confidence and may appear to improve educational outcomes, Brady and Bates (2016, p. 158) draw attention to a ‘standards paradox’, whereby heavy emphasis on QA drivers can actually subvert student learning through the ‘policing of academic processes’. Brady and Bates argue that QA and pedagogical quality are not one and the same thing, and caution that the accountability created by QA can lead to risk aversion and a focus on standardisation of academic processes at the expense of student learning. In this vein, Gibbs and Iacovidou (2004, p. 114) describe QA as resulting in a pedagogy of confinement in which teacher-student relationships are in a ‘static directives mode’. Under such circumstances, teachers often perceive QA as a bureaucratic imposition which relates more to monitoring and control than to enhancement (Cardoso et al., 2016).

Accountability in high-stakes school assessment regimes creates tensions, both through the requirement for detailed documentation of process (e.g. Hopfenbeck et al., 2015) and where teachers themselves are assessed on their students’ outcomes (e.g. Pratt, 2018). Dann (2018) warns against feedback ‘just being visible for no clear purpose than providing evidence for inspectors’ (p. 92). In higher education systems across the world, student satisfaction with feedback is an important marker of the quality of educational provision, and as a high-stakes metric it carries financial implications through its impact on league table positions. Assessment and feedback are often high on universities’ agendas, being commonly framed across countries as areas of weakness when compared with other indicators of teaching quality (Mulliner & Tucker, 2017). In environments such as these, distrust is accentuated because staff tend to be more interested in quality enhancement, whereas institutions and governments are more focused on quality assurance (Williams, 2016). In her classic work on trust, O’Neill (2002, 2013) warns that accountability is a source of rather than a remedy for distrust, and argues that cultures of accountability provide incentives for arbitrary and unprofessional choices, including ‘defensive teaching’ where avoiding challenge or complaints becomes a central goal (O’Neill, 2002, p. 50). Risk-taking and innovative approaches to feedback cannot thrive without atmospheres of trust (Carless, 2009).

Accountability and QA exert implicit or explicit pressure on teachers to enact feedback processes in particular ways. In her ethnographic account of academic writing practices from an academic literacies perspective, Tuck (2017) refers to teachers prioritising the accountability function of feedback rather than learning dialogues, through emphasis on compliance with regulations and self-protection from student challenge. This represents a source of tension if educators feel compelled to enact feedback practices that are not in accordance with their own beliefs about what is effective and important (Orrell, 2008). This challenge is persuasively represented by the concept of professional dissonance, where individuals experience conflict between their personal values and beliefs, and the nature of the working culture (Taylor & Bentley, 2005). This dissonance can be exacerbated when institutional requirements for feedback-giving are perceived as surveillance driven by distrust, and where feedback processes appear to be more strongly shaped by institutional pressures than driven by individual judgements about what would be beneficial to students’ learning (Tuck, 2012).

Whilst in a learning-focused approach the primary audience for feedback information is clearly students themselves, the viewing and auditing of assessment artefacts such as annotated scripts and feedback forms can mean that teachers are all too aware of the likelihood of their feedback comments being scrutinised as part of internal and external QA processes. Feedback practices become a balancing act in which there are conflicts between providing comments for student improvement and completing the required documentation to comply with institutional policies (Bailey & Garner, 2010). Adapting the well-known concept of assessment doing double duty (Boud, 2000), these multiple and competing functions of feedback have been labelled ‘double feedback duty’ (Carless, 2015, p. 191).

The present study

In this study, we explore the influence of QA, accountability, distrust, and professional dissonance on the enactment of feedback processes in the context of UK higher education. This context is a suitable space in which to explore the mutual interaction of these forces, given the strong presence of QA (through moderation, external examining, and quality audits) and accountability (through indices of student satisfaction). Pressure to enhance the quality of feedback is felt readily in response to national surveys of the student experience, such as the National Student Survey (NSS) in the UK and the Course Experience Questionnaire (CEQ) in Australia. The pressure to secure student satisfaction with their experience of feedback can lead institutions to focus on the provision rather than the utility of feedback information. For example, in the NSS, the framing of items (e.g. ‘I have received useful feedback’) aligns with a transmission-focused model of feedback that positions students as passive receivers, rather than proactive seekers and users, of feedback information (Winstone & Boud, 2019a).

The present study seeks to understand how learning-focused feedback processes might operate within a higher education context characterised by strong emphasis on surveillance of academic work, QA, and accountability. Through semi-structured interviews we addressed the following research question: How are feedback practices influenced by accountability and QA agendas?

Methods

Participants

Twenty-eight educators participated in this study and are represented in this paper by participant number (e.g. P1, P2). Our sample was self-selected; participants responded to an advertisement circulated via national email distribution lists for professional and educational organisations and societies in the UK. The study was described as focusing on ‘how academic staff experience the process of providing feedback to students on their learning and their work’. Our final sample consisted of educators from nine UK universities. Our sampling decisions were driven by a pragmatic qualitative research approach, which we selected due to its suitability for ‘seeking to discover and understand a phenomenon, a process or the perspectives and worldviews of the people involved’ (Merriam, 1998, p. 11). This approach is characterised by seeking to understand accounts of a focal phenomenon without the application of ethnography, phenomenography, or grounded theory (Savin-Baden & Howell Major, 2013). In line with this approach, rather than employing data saturation as a criterion for determining sample size, we sought variation in perspectives from different types of participants. We therefore ceased recruitment once we were satisfied that we had captured sufficient conceptual depth (Nelson, 2017) via a diverse mix of disciplines, experience levels, and institution types (see Table 1). According to Savin-Baden and Howell Major (2013), around 30 participants is common for a pragmatic qualitative study. Interviews ranged in length from 20 to 54 minutes (M = 31.89, SD = 9.48). All participants were given a £10 online shopping voucher in exchange for their participation.

Table 1. Participant details

Materials and procedure

Ethical approval was granted by the University of Surrey Ethics Committee (ref: UEC2017029DHE). Participants were sent a study Information Sheet prior to the scheduled time of their interview, and provided informed consent for their participation. Interviews were conducted in person or by telephone, were audio-recorded, and followed a semi-structured schedule (see Appendix A). The interview protocol explored how participants’ feedback practices were enacted, perceptions of how these had changed over time, and the perceived influence of QA and student satisfaction on feedback practices. Participants were not asked specifically about the perceived challenges associated with these processes, but we ended with a final question giving participants the opportunity to articulate their views on any other elements of assessment and feedback processes. Verbal content of the interviews was transcribed verbatim; non-verbal elements were not transcribed.

Data analysis

Reflexive thematic analysis was selected as a means of developing a detailed understanding of patterns in the dataset, and for its utility in drawing out similarities and differences in the perspectives of different research participants (Braun & Clarke, 2020). Data familiarisation involved listening to recordings whilst reading and re-reading transcripts, with memos used to document initial thoughts about the data. Codes were then assigned to the interview transcripts as part of the sense-making process through which we identified patterns in the data. The approach to coding was mainly inductive in that codes were directly linked to representing participants’ experiences (Braun & Clarke, 2006). The inductive process was nonetheless influenced by the aims of the research, the focus of the interviews, and our knowledge of relevant literature. All transcripts were coded in NVivo, and themes were constructed from the codes through a recursive process.

Understanding and representing people’s experiences requires interpretive activity that is inevitably informed by our own assumptions and values (Braun & Clarke, 2013). Whereas ‘coding reliability thematic analysis’ typically uses inter-coder reliability, ‘reflexive thematic analysis’ does not, because in reflexive thematic analysis meaning is understood as situated and contextual, with researcher subjectivity conceptualised less as a problem to be contained and more as a resource for producing insights (Braun & Clarke, 2020). Although we did not employ inter-coder reliability, we sought to develop credibility in our analysis through reflexivity, including dialogue with a ‘critical friend’ to challenge our analysis and encourage reflection upon multiple and alternative interpretations as these emerged in relation to the analysis and writing (Smith & McGannon, 2018). This process is similar to the ‘reviewing themes’ phase of thematic analysis described by Braun and Clarke (2006). The ‘critical friend’ was a researcher working in a broadly similar field to the authors. After reading a draft of our analysis, the ‘critical friend’ met the first author face-to-face and was invited to question our interpretations, as a way to stimulate dialogue about alternative possibilities. Whilst the themes themselves did not change as a result of this critical engagement, it did lead to further clarity over their meaning and distinction from one another. This stage was also crucial in establishing the conceptual depth of our analysis, which we assessed against criteria outlined by Nelson (2017). All codes were checked against the dataset to ensure that multiple examples of concepts were evident across a range of transcripts. This is an important element of demonstrating rigour, supporting the development of trustworthy conclusions firmly grounded in the data and linked to concepts in relevant literature.

Findings

Participants spoke about varying factors that had an impact on the enactment of feedback processes, including top-down dictates, student complaints, student satisfaction, and QA. In each case, these factors were a source of dissonance, with respondents discussing how they had to reconcile conflicting influences on their practice and manage the discomfort that these tensions produced. Three broad inter-related themes were identified through the analysis, each representing a source of dissonance. Under each theme, subthemes identify the influences on feedback processes giving rise to these tensions (see Table 2).

Table 2. Themes and subthemes

Dissonance 1: student satisfaction vs. student learning

This theme was evident in the transcripts of all 28 participants. A strong driver of feedback practices was the perceived need to focus on student satisfaction as assessed through internal (e.g. teaching evaluations) and external (e.g. the NSS) accountability processes. This contributed to a sense that in practice, many decisions about how to enact feedback processes came not from pedagogic reasoning pertaining to student learning, but from a desire to ‘game’ the metrics and increase scores:

I think perhaps a knock-on impact of measuring satisfaction is that we change our practices to try and make students happy … shift the focus away from learning and more towards satisfaction. (P16)

This was further represented by descriptions of senior managers being reactive in making changes to feedback policies and practice, ‘jumping on solutions’ (P1) as a ‘kneejerk reaction’ (P5). This perception of top-down direction means that even discourse around feedback at a departmental level is framed ‘in a sort of corporate sense’ (P12), where the University ‘thinks about feedback and thinks about the NSS almost in the same breath’ (P14).

This tension was also expressed as discomfort that institutional efforts to improve feedback processes primarily stem from a desire to increase scores on satisfaction metrics:

[Feedback] is a process which we’re trying to evolve and I think mostly because we’re measured on it and it impacts on our league table rankings. (P12)

I think feedback is a key component to what students learn. But the reason that I’ve made such a big fuss about it is because we do so badly on the NSS. (P14)

Within the data there was evidence of cynicism regarding the game-playing that can characterise institutional work on feedback, where what is portrayed as enhancement work to improve learning may represent an attempt to ‘just try and get good scores’ (P12). One participant spoke of actively resisting this approach: ‘I’m actually involved in the business of learning more than trying to play this game’ (P3).

It was evident that performativity may create pressure to focus on transmission-focused features of feedback, such as its quality, volume, and the speed of return of marked work. In this sense, a drive to improve student satisfaction may limit the extent to which feedback can promote student learning, by emphasising the teacher transmission of feedback rather than what students do with it. Discourse around ‘value for money’ was evident in discussion of the amount of feedback information students received, and was described as driving a focus on giving students greater volumes of comments. For example, the pressure to reduce turnaround times was described as a source of dissonance, where ‘a lot of conscientious academics are concerned … they want to ensure that they do the best they can, but they do feel the tension between quality and speed’ (P26). This tension has the potential to focus attention on the wrong elements of feedback processes:

The pure metric is ‘how quick is the feedback?’ not the quality of feedback. It seems to be one that’s easy to measure. So that’s as much a stick now to hit you with. (P15)

As well as driving attention towards the delivery rather than the reception of feedback, the influence of student satisfaction was also reported as potentially precluding adoption of practices where student engagement might be more of a focus, such as the use of audio feedback or one-to-one feedback dialogues. Such practices were seen as problematic because institutions were pushing for greater consistency of feedback practices across all modules and units, rather than giving individual educators the agency to design feedback processes according to their own beliefs and values. Within this theme, a notable finding was that whilst student satisfaction and effective learning ‘are not the same thing’ (P8), placing emphasis on the transmission of comments may prioritise student satisfaction. Students may learn from feedback processes that they evaluate positively, but arguably satisfaction should not be a central criterion for the effectiveness of feedback. Whilst a minority of participants expressed a belief that effective feedback is synonymous with feedback that students rate positively (‘I think for me the way I would know ideally that feedback has been good is if student evaluations says it was good’, P12), the majority struggled to reconcile student satisfaction with learning in the context of feedback:

The kind of feedback that we would like to provide is probably different to the kind of feedback students would like to receive. There’s a mismatch I think. (P12)

[feedback practice is] less driven by some pedagogical theory than the desire to please the customer, in a sense … it does shape the way we give feedback and it will do so increasingly. (P24)

A pattern in the data indicated that participants generally framed the dissonance between satisfaction and learning as one where satisfaction dominates in terms of the desire to ‘keep students happy’. This was driven by the high stakes associated with getting good scores, where ‘you can live or die on the NSS’ (P3). This pressure was described as a ‘threat over our heads’ (P17) which can impact job security, leading to practices that ‘pander to what’s in the NSS’ (P17). An alternative viewpoint involved participants putting their practice at the forefront and then questioning the (mis)alignment with student satisfaction: ‘I don’t think the NSS really taps into some of the more subtle aspects of feedback that we’re trying to improve on’ (P24).

It is clear that the push to ensure student satisfaction can promulgate practices which align with the metrics used to assess feedback practice, leading to an emphasis on turnaround time and volume of feedback. Whilst a minority of participants resisted the primacy of satisfaction over learning-focused practices, the majority perceived the stakes as too high to risk adopting learning-focused approaches that evaluation items, in their current framing, may not capture.

Dissonance 2: meeting QA requirements vs. feedback for learning

This theme was evident in the transcripts of 27 of the 28 participants. The heavy regulation around assessment and feedback processes, alongside QA regimens, was seen as encouraging a performance, almost a charade, of demonstrating that feedback has taken place and enacting the requirements of institutional policies. Feedback then becomes a process of ‘having all the ducks in a row and making sure that we file the right kinds of paperwork’ (P1). This can result in a sense of paying lip service to notions of quality, merely ‘demonstrating that I’m giving feedback and showing consistency’ (P15), and ‘meeting the expectations of academic managers’ (P26). Other descriptions focused on an ‘audit-centric’ culture, where practice was designed to ‘satisfy the requirements of audits and regulation’ (P23).

Within the data, there was evidence of a belief that existing QA measures were not fit for purpose in terms of enhancing practice, and did not necessarily capture elements of good practice aligned with learning-focused feedback, such as formative dialogues. The influence of QA was also perceived to lead to a focus on the mechanics of the feedback process, rather than its outcomes. This can be experienced as dissonance between facilitating an optimal learning experience for students and meeting the requirements for documenting and recording feedback processes. Such processes ‘drive practice and they’re driving it in a direction that is opposite to where I’d like to go if I was just assessing for the purposes of best pedagogy’ (P23). This provides a powerful illustration of how an individual’s beliefs and values can conflict with the drivers for practice at an institutional level.

When participants discussed influences on their feedback practices, a salient pattern in the data was a sense that, when writing comments, the student is just one of a multitude of audiences teachers have in mind. As well as internal moderation processes, the influence of external examiners, who may scrutinise a sample of marked assignments, impacted the way in which comments were framed. In some cases, this resulted in a more formal style:

Sometimes there’s a tendency to not only write feedback for your students but knowing that feedback may be seen by external examiners and quality assurance, you’re almost serving two purposes, in terms of making sure it’s okay for your students but also making sure it’s to the standard being looked for. I think it certainly impacts how we write feedback. I always try to make sure my feedback is personalised, but if you’re also trying to write for a different purpose, which might be an external examiner, then it has the potential to take away some of that personalised tone. There’s the potential that you write in a different way that students might not understand because you’ve got that dual thing going on. (P16)

This closely reflects what Carless (2015) describes as feedback doing double duty, where the same feedback serves multiple purposes, or is produced with different audiences in mind. There was a recognition that feedback doing double duty could limit the clarity of the learning advice contained within feedback, resulting in ‘a formality around feedback which sometimes makes it very inaccessible’ (P22).

The data also represent a perception of accountability processes as limiting innovation or ‘doing things differently’. One way in which this occurred is that when a programme’s feedback practice has been seen to have ‘passed muster’ with external examiners, the programme team may see no need to develop practice:

The examiners absolutely rave about our feedback and say we’re the best thing since sliced bread which tells academics that actually there’s no need to change or do anything differently because the external examiners are saying you’re doing it right … There’s some complacency around that. (P14)

Whilst QA has an important role to play in maintaining standards, accountability procedures were reported as inhibiting creativity in assessment and feedback practice. This influence was frequently discussed in terms of ‘audit’ and ‘standardisation’; less emphasis was placed on the use of external ‘critical friends’ as a means to develop practice. Even the few participants who recognised the presence of quality enhancement as well as QA questioned whether this was the case across the sector.

Dissonance 3: covering your back vs. confidence to experiment

This theme was evident in the transcripts of all 28 participants. Regardless of how beliefs and values might drive feedback practices, a fear of complaints was represented in the data as an important driver of pedagogic decision-making, leading to a feeling of being under pressure to ‘demonstrate process and cover my back and sort of look like I do the right thing all the time’ (P15). This was also discussed in the context of the potential use of different feedback practices: participants were reticent to move beyond transmission-focused written feedback methods, believing that written comments and associated ‘academic jargon’ created a strong audit trail that could be used as evidence of ‘due process’ in the case of academic appeals:

I think there’s a need for [feedback] to be [written], so you can almost cover your back with it and prove to the students that they have had it [Laughter] (P7)

The way that feedback is given … you have to use a certain jargon to tick certain boxes to make sure that appeals cannot take place or that you’ve done your job and the university is going to side with you. (P8)

There was also evidence within the data of a fear of internal moderators or external examiners ‘checking’ feedback comments, and student complaints and appeals were a further source of concern. One way in which this was expressed was as a visceral sense of anxiety when work has been returned and an email arrives from a student; a common response is to ‘catch my breath a little bit and I think “Have I messed up?”’ (P10). These fears were often described as a source of dissonance that could inhibit learning-focused approaches to feedback. For example, peer feedback was discussed in the context of concern that students might complain about receiving feedback from a peer rather than their tutor, and might lack confidence in assessments made by peers:

I also suspect that if a student is providing feedback on another student’s work, it would need to be very clear that it is the process of providing feedback in which they’re being trained … because otherwise they’re going to feel that we’re fobbing them off on other students to save time. (P23)

It’s very much peer feedback, but of course reviewed by us, so I saw it before it was given to the students, just as a failsafe. You know, you’ll always have to go overboard on these things. (P6)

It was a purely formative exercise … and the amount of angst this created! I mean the students were sort of demanding … not merely that they could get the lecturer to review the feedback they’d received if they thought it wasn’t clear and accurate, but almost a formal appeal system. (P26)

These examples illustrate that concerns about the potential for complaints can lead educators to adopt risk-averse practices that ‘cover their backs’, and that this can be a strong influence on the model of feedback they adopt, potentially discouraging an educationally worthwhile approach, such as peer feedback. Overall, there was clear evidence in the data that concerns about student complaints tended to encourage conventional transmission-focused approaches to feedback.

Discussion

There are persistent calls in the literature for a reframing of the student role in feedback processes, away from a passive receiver of comments towards an active participant in generating and using feedback (e.g. Boud & Molloy, 2013; Carless, 2015; Winstone & Boud, 2019b). The present study brings an original perspective to this debate by interrogating the ways in which dominant feedback cultures might be impeding such a shift in practice. In a higher education culture characterised by QA, accountability, and surveillance, our data demonstrate how these influences can lead to a sense of distrust and professional dissonance for those involved in this crucial area of academic work. Overemphasis on accountability risks promulgating a feedback culture focused on the transmission of written comments, rather than a more sustainable culture of student engagement and dialogue in the feedback process. We now discuss the significance of the findings in relation to the inter-related concepts of professional dissonance and feedback doing double duty.

Professional dissonance in feedback practices

Our data provide evidence of professional dissonance: conflicts between one’s personal values and beliefs, and the requirements of the working environment. The original work of Taylor and Bentley (2005) on professional dissonance pertained to mental health social workers, and our data attest to its broad relevance to the higher education sector. Three key areas of professional dissonance were revealed by participants: reconciling the need to secure student satisfaction alongside enacting practices that support student learning effectively; balancing the competing functions of QA and feedback for student learning; and enhancing practice in a climate where distrust and fear of complaints or appeals are rife.

The first of these areas of dissonance speaks to the growing power of the ‘student voice’, where in many international contexts there are high stakes attached to performance in student satisfaction measures. Our participants spoke of how pandering to student satisfaction could reinforce transmission-based models of feedback and act as an impediment to shifts towards learning-focused feedback processes. Professional dissonance was evident in the perceived need to ‘play the game’ in meeting student satisfaction targets and to ‘keep students happy’, although satisfying students and enhancing their learning are sometimes in conflict. Without the pressure to secure student satisfaction as measured by internal and external metrics, teachers may well adopt rather different approaches to feedback than those they currently reported. It is important to acknowledge that the experience of dissonance in the practice of individual educators is likely influenced by broader systemic tensions, such as the positioning of students as ‘consumers’ or ‘customers’ as a result of the growth in tuition fees in UK higher education (Bunce et al., 2017). Some participants reported resisting these forces; the characteristics of these individuals, and their potential to influence their colleagues formally or informally, are important issues for further scrutiny.

The second area of dissonance leads us to reconsider the audiences for feedback comments. Whilst common sense suggests that the primary audience should be the student whose work is the subject of feedback commentary, QA systems such as external examining and internal moderation bring to the fore other potential audiences. Our participants described how, when producing feedback comments, the QA functions of feedback were often firmly in their minds. This sometimes results in comments that are less comprehensible to students and less focused on student improvement.

This tension exemplifies the ‘standards paradox’ described by Brady and Bates (2016), where the policing of academic processes through QA regimens can end up subverting student learning rather than encouraging an ethos of learning-focused feedback. Whilst the perceived constraints created by surveillance may or may not exist in reality, the gaze of QA and the requirement to document feedback through the creation of artefacts may reduce the likelihood of feedback processes developing in research-informed directions and having a positive impact on student learning processes.

The third area of dissonance uncovers a lived experience of fear in relation to complaints or reprisals, which appears to propagate a risk-averse approach to practice. This resonates with and exemplifies the arguments of O’Neill (2002) that concerns about student complaints or critique can result in defensive teaching approaches. Risk-averse approaches are likely to limit the development of student-focused or innovative feedback practices. There are dangers that the autonomy and creativity of educators are eroded by a system that promulgates distrust. Despite the systemic challenges that are faced, there is a need for leadership to confront these challenges and reinstate the primary purpose of feedback: engendering a positive impact on student learning.

Acknowledging and confronting feedback doing double duty

The professional dissonance experienced by our participants indicates that a desire to approach feedback from a learning-focused perspective is often perceived to be at odds with QA and accountability drivers. Our data provide empirical support for the validity of the concept of feedback doing double duty. The seemingly straightforward act of providing feedback comments ends up serving competing functions: satisfying the requirements of QA agendas, and supporting students’ development. In reality, the gaze of internal moderators and external examiners need not prevent learning-focused approaches to feedback, yet these audiences are often prominent in the minds of teachers as they craft and frame comments. Critical dialogue around the impact of feedback doing double duty is essential in tackling the feedback conundrum. The primary role of feedback in supporting student learning needs to be repeatedly emphasised, and barriers to its enactment tackled. Navigating the tensions between different functions of feedback is part of the capacities of the feedback-literate teacher (Carless & Winstone, 2020). Programme leaders and teachers might give critical thought to the differing functions of feedback, distinguishing between tasks where the primary focus is on formative feedback and other occasions where grading, certification, and accountability are most salient (see Winstone & Boud, 2020).

Whilst our analysis focused on the higher education system, our findings may also resonate with other educational levels and contexts. In UK schools, the result of feedback doing double duty, serving student learning as well as providing evidence for school inspectors, was an increase in formulaic practices and unsustainable demands on teachers’ workloads (Dann, 2018). As a result, the national inspection body published a statement emphasising that it was not seeking ‘any specific frequency, type or volume of marking and feedback’, instead wanting to focus on ‘how written and oral feedback is used to promote learning’ (Office for Standards in Education, 2016, Section 5).

If QA processes in higher education were to seek evidence not of the provision of detailed teacher comments but of student response, this might drive feedback practice towards the enhancement of student learning rather than teacher transmission of information. Developing current QA practices such as moderation and external examining in ways that promote learning-focused feedback processes would support these quality enhancement goals and potentially increase buy-in from teachers. Quality enhancement is enabled when institutions develop ways of capturing the impact of learning-focused feedback practices, with greater flexibility and agency afforded to individual teachers rather than merely striving for consistency across all courses. Such adjustments imply a change in feedback cultures towards a focus on the student outputs rather than the teacher inputs of feedback processes (Winstone & Boud, 2019b). Whilst tackling entrenched cultures and practices is always a tall order, such measures carry the potential to reduce professional dissonance and might engender rather than erode trust.

Limitations and future research directions

We acknowledge that our participants were all UK-based educators, although our sample represented a wide range of universities, disciplines, and levels of teaching experience. We also acknowledge that, as a self-selected sample, our participants may be more committed to learning and teaching than the general population of educators in UK higher education, or may hold particular views about the topics under discussion. Given the international variability in some elements of QA regimens (for example, external examining), future research should seek to explore similar issues in other contexts. This is especially important given that our participants painted a largely negative picture of the influence of QA and student satisfaction on feedback processes. In common with all interview studies, our data represent the self-reported perspectives of individuals. Participants were not known to the researcher conducting the interviews; nevertheless, disclosure in interview research can be coloured by issues of identity and performativity. Beyond interview methods, reviewing artefacts such as feedback samples, moderation documentation, and external examiner reports would provide a different lens on the ways in which the tensions described by our participants are evident in practice.

Conclusion

Regardless of approaches to QA, few would contest that student learning is the primary focus of education. Yet a focus on the transmission of feedback comments, where students are assigned a passive role in the process, may subvert the learning function of feedback processes. We have argued that systems such as QA and student satisfaction metrics, in their current form, may contribute to the dominance of transmission-focused models of feedback and the long-term difficulties in enhancing feedback processes. Placing student learning at the centre of feedback processes requires the sector to acknowledge the challenges of feedback doing double duty and work towards new models of quality enhancement that recognise and promote an emphasis on the role of students, not just teachers, in feedback processes.

Disclosure of potential conflicts of interest

No potential conflict of interest was reported by the author(s).

Acknowledgments

The authors thank Jessica Bourne for assistance with data collection, and Karen Gravett for comments on an earlier version of this article.

Additional information

Funding

The research reported in this paper was supported by the Society for Research into Higher Education under grant [RA1648].

Notes on contributors

Naomi E. Winstone

Naomi E. Winstone is a Reader in Higher Education and Director of the Surrey Institute of Education at the University of Surrey. Naomi is a cognitive psychologist specialising in the processing and impact of feedback information.

David Carless

David Carless is Professor of Education in the Faculty of Education at the University of Hong Kong. His research focuses on feedback for student learning in higher education and the development of feedback literacy.

References

Appendix A.

Interview schedule

  1. Thank you for agreeing to talk to me about your perspectives on feedback practice. Before we begin, I would like to emphasise that the purpose of this study is to understand how academics might experience different forms of feedback practice. It is not in any way an evaluation of an individual’s teaching practice.

  2. In which discipline area are you based, and what units or modules do you teach?

  3. How long have you been teaching and/or assessing student work in Higher Education?

  4. During this period, how has the assessment and feedback process changed?

  5. What do you see as the biggest challenge academics face in the assessment and feedback process?

  6. In your discipline, what do you think is the dominant method for giving feedback to students?

  7. In an ideal world, how do you think feedback should be given to students? What are the potential barriers to implementing this method?

  8. I am now going to describe some different forms of feedback practice. For each one, please could you say whether or not you think this practice would work in your discipline, and why? Do you use any of these practices? Why/Why not?

    1. Audio feedback

    2. Video feedback

    3. Peer feedback

    4. One-to-one verbal feedback

  9. I am now going to describe some different influences on teaching in Higher Education. How do you think they might influence the ways in which academics give feedback to students?

    1. National Student Survey

    2. Student Evaluations

    3. Teaching Excellence Framework

    4. Quality Assurance Agency visits

    5. External Examiners

  10. Is there anything else you would like to share about your perspectives on feedback practice?