Measuring what matters: the positioning of students in feedback processes within national student satisfaction surveys

ABSTRACT

The increasing prominence of neoliberal agendas in international higher education has led to greater weight being ascribed to student satisfaction and to the national surveys through which students evaluate courses of study. In this article, we focus on the evaluation of feedback processes. Rather than framing feedback as the transmission of information from teacher to student, recent international literature increasingly recognises the fundamental role of the learner in seeking, generating, and using feedback information. Through an analysis of the framing of survey items from 10 national student satisfaction surveys, we seek to question what conceptions or models of feedback are conveyed through survey items, and how such framing might shape perceptions and practice. Primarily, the surveys promote an outdated view of feedback as information transmitted from teacher to student in a timely and specific manner, largely ignoring the role of the student in learning through feedback processes. Widespread and meaningful change in the ways in which feedback is represented in research, policy, and practice requires a critical review of the positioning of students in artefacts such as evaluation surveys. We conclude with recommendations for practice by proposing amended survey items that are more consistent with contemporary theoretical conceptions of feedback.

Introduction

Internationally, there has been an increase in the use of student satisfaction metrics as a proxy for indicators of quality in higher education (Langan and Harris Citation2019). Data generated by these measures play an important role in institutions’ decision-making and drive accountability processes; they also contribute to the creation of a neoliberal marketized system where students are increasingly positioned as consumers (Langan and Harris Citation2019). Such reforms have the power to transform the standing of students in systems and processes, and the agency afforded to them (Dusi and Huisman Citation2020).

Whilst student satisfaction and evaluation surveys are believed to play an important role in drawing attention to teaching quality in higher education (Kornell Citation2020), the validity and utility of such instruments have received heavy criticism in recent years. For example, bias has been identified in students’ ratings, with characteristics of instructors such as their gender, age, and attractiveness influencing the ratings that students assign (Carpenter, Witherby, and Tauber Citation2020). Even factors such as whether students are given chocolate prior to completing surveys can influence how positively they respond (Youmans and Lee Citation2007). Furthermore, metacognitive errors in assessing one’s own learning can lead to ‘illusions of learning’, where students rate highly instructors who are engaging and enthusiastic and who appear to make the content simple, even though these factors do not correlate with actual learning (Carpenter, Witherby, and Tauber Citation2020).

A further problem with tools such as student satisfaction surveys is that they do more than provide metrics: the language used within survey items conveys, explicitly or implicitly, what is valued in education and positions students and teachers in particular ways. In turn, this can directly drive practice (Kornell Citation2020), where metrics can lead to ‘all sorts of intellectual short cuts in order to improve statistics on student satisfaction’ (Spence Citation2019, 762). Thus, surveys can incentivise practices that focus on increasing scores rather than improving education (Carpenter, Witherby, and Tauber Citation2020; Leach Citation2019).

Taking a critical approach to the framing of student satisfaction items in the context of assessment and feedback processes, we explore conceptions of feedback represented within survey items and question whether such metrics may lead to institutions valuing what is measured, rather than measuring what is valued (Hargreaves, Boyle, and Harris Citation2014). This is reflected in the caution that ‘measuring the simple when the outcome is complex is a characteristic flaw of many metricised environments’ (Spence Citation2019, 769). We focus specifically on feedback as this is a common area of concern for higher education institutions, being described as the sector's ‘Achilles’ Heel’ in terms of quality (Knight Citation2002, 107). In this sense, then, we are investigating items used by students to provide feedback information on their experiences of feedback processes. Furthermore, shifts in the conceptualisation of feedback are evident in the literature, and we question whether such developments in thinking about feedback are reflected in survey items used to assess practice.

Conceptualising feedback processes

Over the past decade, conceptualisations of feedback have changed profoundly. Approaches to feedback that focus on the transmission of comments from teachers to students have been critiqued as outdated; instead, a new conception of feedback has emerged that places emphasis on student engagement, action, and agency (Boud and Molloy Citation2013; Carless Citation2015; Winstone and Carless Citation2019). This shift in viewing feedback away from being a product of teachers and towards a process necessarily involving students has led to changes in the positioning of students in feedback processes. In a meta-review of the role of the student in feedback, Van der Kleij, Adie, and Cumming (Citation2019) outline four different models of the student’s role. In the first two models (transmission and information processing), the student is positioned as a relatively passive recipient of comments, with no necessary expectation that they will process them. In these models, it is assumed that students will learn through these inputs if the right kinds of feedback information are provided, without recognising the student's role in realising the learning potential of the information. In contrast, in the latter two models (communication and dialogic), students are positioned as more active players in the process. Here, students have the capacity to generate rather than merely receive feedback information, decide how to act upon it, and drive feedback processes through their own self-regulation. Crucially, by positioning students as active participants in the process, these models recognise that students’ active role is a necessary prerequisite for feedback to have any impact on student learning. In more recent conceptions, the importance of internally-generated feedback, rather than the primacy of teacher comments, has gained prominence (Nicol Citation2020). Other influential work has pointed to the importance of students having the opportunity to hone their ‘evaluative judgement’ through engagement with feedback information; that is, their capacity to understand standards and criteria and make judgements about the quality of work (Tai et al. Citation2018).

In their review, Van der Kleij, Adie, and Cumming (Citation2019) highlight that whilst recent reviews on the topic of feedback position students as active participants in feedback processes (e.g. Ajjawi and Boud Citation2017; Carless and Boud Citation2018; Winstone et al. Citation2017), the transition from passive to active is not universally adopted. Some recent reviews continue to represent students as passive recipients of feedback, leading to the conclusion that ‘the information processing model of feedback in which students have a limited role is still driving thinking within the field’ (Van der Kleij, Adie, and Cumming Citation2019, 319). This highlights a key challenge pertinent to the positioning of students, not just in the literature but also in metrics such as student satisfaction surveys: if students’ participation in feedback is necessary for it to have impact on learning, are measures of its effectiveness aligned with this approach?

Evaluating feedback processes

Students’ experience of feedback in their university courses is commonly evaluated through both internal institutional evaluation surveys and national survey instruments that collect data on the quality of students’ educational experiences (Bennett and Kane Citation2014). There are a number of benefits of such national surveys: they generate information that can be used to inform developments to practice; they can be used to benchmark courses according to their quality (Kuh Citation2003); and they can inform the work of policy makers at a national level and influence funding allocations. However, there is also controversy surrounding the use (or indeed misuse) of such measures. For example, the UK National Student Survey (NSS) is well-known internationally as a measure of student satisfaction that has high stakes for universities in terms of positions in national league tables (see Note 1). In the UK, the NSS is ‘embedded into the national psyche’ for higher education (Langan and Harris Citation2019, 1077), and every year students’ reported satisfaction with the quality, promptness, and utility of feedback makes media headlines and leads universities to try to improve this area of their practice and policy (Williams and Kane Citation2009). However, the framing of items pertaining to feedback in the NSS focuses on the delivery and transmission of information rather than students’ use of feedback (Nicol Citation2010; Pokorny and Pickford Citation2010), which may focus enhancement efforts on delivery rather than on supporting students to learn through active engagement with the process of feedback. Such framing may also imply to students that their role in feedback processes should be one of passive reception rather than active engagement. A further concern with valuing what is measured in this way is that the NSS has been described as providing a ‘misleading snap shot’ (Williams and Kane Citation2009, 265) of students’ experiences. Items themselves are interpreted in different ways (Bennett and Kane Citation2014), and there is evidence of wide variety in students’ preferences and perceptions of quality in feedback (Mendes, Thomas, and Cleaver Citation2011). For example, whilst low satisfaction ratings for elements of feedback such as the timing of the return of comments might indicate that students are unhappy with this dimension of their experience, in studies using broader forms of data collection few students report prompt return of feedback as central to its effectiveness (Dawson et al. Citation2019).

Measuring what we value or valuing what we measure?

The NSS and other national surveys such as the Australian Course Experience Questionnaire (CEQ) have been criticised for the way in which the items pertaining to feedback represent outdated, transmission-focused models of feedback, and position students as passive recipients of comments rather than active participants in a dialogic process (Winstone and Boud Citation2019). The result may be that students, practitioners, and policy makers in higher education come to value that which is measured, rather than seeking to measure what is valued.

These surveys do not exist in a vacuum, nor are they neutral forms of data collection. Brooks (Citation2021, 162), referencing Foucault, notes that texts ‘construct certain possibilities for thought by ordering and combining words in particular ways and excluding or displacing other combinations’. Similarly, a sociomaterial perspective highlights that objects are agentic in the sense that they give rise to or constrain particular actions (Bearman and Ajjawi Citation2018). Practice and knowledge emerge through relations among objects and people in joint action (Fenwick Citation2009). In other words, a material object such as a sanctioned student satisfaction survey has the potential to shape what is valued in feedback practices, and what should be done to improve these practices. The behaviours thereby shaped might be unintended: feedback practices become hostage to the items represented in the survey, leading to approaches that might improve scores (e.g. coaching students to ‘notice’ feedback, or enforcing deadlines to ensure feedback is ‘timely’) but not necessarily improve student learning. There is a risk that the practices of feedback come to be cemented by their representation in evaluation instruments. Such entrenchment may crowd out beneficial changes in practice and lead to the misallocation of precious resources intended to support improvement.

The present study

Measures of student satisfaction are part of accountability and enhancement processes of higher education systems internationally, and feedback is often an area that causes ‘concern’ (Williams and Kane Citation2009, 265) when results are released. However, the way in which these measures position the actions of educators and students within the feedback process, and thereby shape how their findings inform practice, has received little critical scrutiny. The aim of the present study is to investigate how feedback is conceptualised within the framing of items used to assess students’ perceptions of feedback in higher education systems internationally.

We chose to focus on national student satisfaction surveys rather than instruments developed within individual institutions for four specific reasons. First, institutional surveys are often modelled on national surveys, and whilst national surveys usually evaluate education at the programme level, institutional surveys typically operate at the level of individual courses/units/modules. Second, national surveys often have higher stakes for institutions, and are more likely to have been validated through psychometric analysis (Spooren, Brockx, and Mortelmans Citation2013). Third, national surveys serve a common purpose in the country where they are used; in institutional surveys, there is wide variety in purposes (some of which conflict) and in the dimensions that they measure (Spooren, Brockx, and Mortelmans Citation2013). Fourth, it is harder to gain a representative sample of institutional surveys as they are not typically made available publicly, and not all of those which have been published make reference to feedback processes (Spooren, Brockx, and Mortelmans Citation2013).

Our overarching research question, ‘What conceptions or models of feedback are conveyed through national student evaluation surveys?’, was addressed through analysing survey items with respect to five areas:

(1) To whom is agency attributed in feedback processes?

(2) What actions appear to be involved in feedback processes?

(3) Is feedback represented as a product or a process?

(4) What elements of feedback processes are being evaluated?

(5) What is the implied role of the student in feedback processes?

Materials and methods

Sampling national surveys

In order to collate national student evaluation surveys that include items pertaining to feedback, we adopted a sampling approach involving two related strategies. First, we used EBSCO to search the literature for key publications on student evaluation surveys published between 2000 and 2019. Second, we consulted international researchers in assessment and feedback from 24 countries and asked them to provide information about national evaluation surveys used in their country. Responses from 12 countries indicated that there was no national-level survey used to assess students’ evaluation of their experience pertaining to feedback (Belgium, Brazil, Chile, China, Germany, Hong Kong, Italy, New Zealand, Singapore, Spain, Sweden, Trinidad & Tobago), and there were two further countries for which we received no response (Poland, Sri Lanka). In these cases, we checked against our literature search to ensure that there appeared to be no national-level surveys for these countries. We collated detailed information on national surveys from 10 countries (Australia, Canada, Denmark, Finland, Ireland, Japan, Netherlands, Norway, UK, and USA). For more details about these surveys, see Table S1 in the online supplementary materials.

Data extraction

We created an initial pool of 11 surveys from 10 countries (there were two different surveys in use in the Australian context). Whilst six of these surveys are administered in languages other than English, the items are also published in English. The first step in data extraction was to search in each survey for items that related to feedback. Items were deemed relevant if they met the following two criteria: (1) the item pertained specifically to feedback, not assessment in broad terms, and (2) the item pertained to feedback in the context of assessment, not processes such as supervision or coaching in which feedback is one aspect of a broader set of interventions.

Data extraction was undertaken independently by two of the authors. For one survey (Japan Student Survey), no items pertaining to feedback were identified. Across the remaining 10 surveys, a total of 31 items pertaining to feedback were extracted. For nine of these surveys, there was 100% agreement in the items to be extracted; for the Studiebarometeret of Norway, there was 89% agreement, with disagreements resolved through discussion. An overview of the 31 items that were extracted from the surveys is presented in Table 1. Given the variability in the total number of items in each of the surveys, we also calculated the percentage of total items in each survey that pertained to feedback.
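
The two percentages just described (inter-coder agreement on which items to extract, and the proportion of each survey’s items that pertain to feedback) are simple ratios. As a minimal illustrative sketch only, assuming a hypothetical nine-item survey and hypothetical coding decisions (the article does not report the exact calculation used), they could be computed as follows:

```python
# Illustrative sketch: hypothetical item labels and coding decisions,
# not the authors' actual data or procedure.

# Extraction decisions by two coders for a nine-item survey:
# True = "item pertains to feedback", False = "item does not".
coder_a = {"Q1": False, "Q2": True, "Q3": False, "Q4": True, "Q5": False,
           "Q6": False, "Q7": True, "Q8": False, "Q9": False}
coder_b = {"Q1": False, "Q2": True, "Q3": False, "Q4": True, "Q5": False,
           "Q6": True, "Q7": True, "Q8": False, "Q9": False}

# Simple percent agreement: share of items on which both coders made the same call.
agreements = sum(coder_a[q] == coder_b[q] for q in coder_a)
percent_agreement = 100 * agreements / len(coder_a)  # 8/9, i.e. ~89%

# Share of the survey's items that pertain to feedback, using the final item set
# agreed after discussion (here we assume Q6 was ultimately excluded).
final_feedback_items = {"Q2", "Q4", "Q7"}
percent_feedback = 100 * len(final_feedback_items) / len(coder_a)  # 3/9, i.e. ~33%

print(f"Percent agreement: {percent_agreement:.0f}%")
print(f"Feedback-related items: {percent_feedback:.0f}% of survey items")
```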

Table 1. Survey items extracted.

Data coding

We coded each survey item individually against each of our five areas of interest (To whom is agency attributed in feedback processes?; What actions appear to be involved in feedback processes?; Is feedback represented as a product or a process?; What elements of feedback processes are being evaluated?; and What is the implied role of the student in feedback processes?). Our coding framework is presented in Table 2.

Table 2. Coding scheme.

In assessing who has agency in feedback processes, the actions involved in feedback processes, and the elements of feedback processes being evaluated, we applied an inductive approach to coding. Each author individually reviewed each survey item and noted codes that represented each of these questions. Through discussion, these initial codes were refined to form a coding framework which was then applied to all items. In assessing whether feedback is represented as a product or a process, and the implied role of the student in feedback processes, we applied a deductive approach. In order to code items as representing products or processes, we drew upon Henderson et al. (Citation2019), who ‘purposely position feedback as a process, or a series of processes, and not simply an event involving the transmission of information or input’ (17). Items were coded as representing feedback-as-product if emphasis was placed on feedback as comments or input from an educator; in contrast, items were coded as representing feedback-as-process if feedback was positioned as something extending beyond the mere provision of information. Finally, in terms of the implied role of students in feedback processes, we utilised the coding framework reported by Van der Kleij, Adie, and Cumming (Citation2019) in their meta-review. We coded the role of the student as ‘passive’ if the item represented the transmission or information processing models described by Van der Kleij, Adie, and Cumming (Citation2019); our ‘active’ code was applied if the item represented the communication or dialogic models described by these authors.

We first took a random sample of 8 items from our extracted pool and coded them individually, followed by a discussion among all four authors in which we clarified our understandings of the codes. We then coded all 31 items individually and resolved disagreements through discussion until a final set of codes applied to each item was agreed by all four authors.

Findings

To whom is agency attributed in feedback processes?

Across the items that we analysed, the teacher is most commonly positioned as the key agent in feedback (see Table 3). Whilst there were a small number of items where peers were the agent, we found no examples of items where students were positioned as the primary agent. Furthermore, some items position ‘feedback’ as the primary agent, which implies that feedback itself can have effects without any role for the student in this process.

Table 3. Agents in survey items.

What actions appear to be involved in feedback processes?

Through our inductive coding, we found seven key verbs in the extracted items that represent the actions involved in feedback processes (see Table 4). The data provide strong evidence of a transactional approach to feedback, as verbs related to a one-way communication process (e.g. give, provide, receive) are most common. Verbs representing active participation by students were rare.

Table 4. Key verbs in survey items.

Is feedback represented as a product or a process?

Of the items that we analysed, over twice as many represented feedback as a product to be transmitted as represented feedback as a process (see Table 5).

Table 5. Representation of feedback in survey items.

What elements of feedback processes are being evaluated?

The most commonly-evaluated element of feedback is its frequency (see Table 6). Receiving regular feedback information may well be advantageous to learning, but only if learners have opportunities to apply it. The ‘value’ of feedback was also a common focus of evaluation. This could imply that feedback needs to be of use to drive learning, but this is not explicitly stated.

Table 6. Focus of evaluation in survey items.

What is the implied role of the student in feedback processes?

Almost all items positioned students as having a passive role in feedback processes, on the receiving end of what teachers do (see Table 7). Whilst a small number of items gave students a more active role, in some cases this was in surveys that are more strongly focused on student engagement than satisfaction (e.g. the USA NSSE).

Table 7. Implied role of student in survey items.

Discussion

The primary aim of the present study was to explore the framing of items used within international student evaluation surveys to assess students’ experiences of feedback. Despite wide variation in the purpose, focus, and age of the surveys (and items within them), they appear to represent feedback in transmission-focused ways that attribute little, if any, agency to students. We first asked to whom agency is attributed in feedback, and found that teachers (as opposed to peers or students themselves) are positioned as the primary agents of feedback processes. This implies that the weight of responsibility for ensuring effective feedback processes is placed on teachers, despite the fact that teacher comments on their own have little impact on students unless these comments are understood and applied (Henderson et al. Citation2019). Next, we sought to determine what actions are involved in feedback. Here, we saw strong evidence of a transactional approach to feedback as something that can be ‘given’, ‘provided’, and ‘received’. This approach is also reflected in our next area of interest where we asked whether feedback is represented as a product or a process, and we found the former to be twice as common in survey items as the latter. We also sought to understand what elements of feedback are most commonly evaluated, and found that frequency and value were most common. Finally, we were interested in the implied role of the student in feedback and found that survey items predominantly position students as having a passive rather than an active role.

These findings show that the language used in national student survey items emphasises and privileges certain views of feedback – the actions of the teacher – over others and thereby renders invisible students’ roles and responsibilities in the process. The limited discourse of the feedback survey items creates ‘tunnel vision’ of feedback as information transmission, where students are passive recipients of this information. The items maintain a view and practice of feedback as information that should be timely and specific, largely ignoring the role of the student in learning from/with feedback. When items continue to conceptualise feedback as a ‘gift’ to be received, we risk ‘creating a world in the image of the reductionist view afforded by such measurements’ (Gorur Citation2016, 599).

We do, however, recognise that student satisfaction does not necessarily correlate with student learning, and can even create an illusion of learning (e.g. Carpenter, Witherby, and Tauber Citation2020; Kornell Citation2020). Nevertheless, surveys often inform accountability and quality enhancement regimes, and hence can influence efforts to ‘improve’ practice. As our conceptions of what is effective in education evolve, so too should our surveys. In order to move beyond valuing what we measure, Biesta (Citation2009) argued that we need to re-engage with the question of what constitutes good education. And so we ask, what constitutes good feedback processes and good quality markers of these? The raft of research in this field clearly leads to a different conception of feedback than is represented by the items in the surveys we analysed.

Measuring what matters?

Whilst there are demonstrable shifts in the ways in which feedback is described in the research literature (Winstone et al. Citation2021), the surveys we analysed still strongly represent feedback as a product transmitted from teacher to student. Widespread and meaningful change in the way in which feedback is represented in research, policy, and practice requires alignment across all of these areas in terms of what is positioned as being valued.

From a theoretical perspective, our findings indicate that changing conceptions of feedback are in many ways divorced from practice, and that this might impede developments in the translation of research into evidence-informed educational practices. Crucially, educators may be improving their practice by adopting learning-focused approaches, yet these dimensions of feedback are not assessed by the majority of metrics in their current framing. Our analysis highlights the potential mediating role of non-human agents, such as survey items, in the chasm between theory and practice; such agents may have the power to enable or constrain evidence-informed approaches.

Our findings also have important practical implications. If evaluations of the student experience of feedback adopt a transmission-focused, transactional model of feedback, this conveys to students that their role is to receive comments and little else. Furthermore, because of the stakes attached to these surveys, this can also promulgate and reward practices that focus on transmission of information, rather than its impact on student learning.

Survey items are necessarily reductionist in character. They seek to capture that which is salient, and they cannot capture the nuances of any practice. Pokorny and Pickford (Citation2010) cautioned that the framing of items evaluating feedback, such as those in the UK NSS, leads to a risk ‘that the definition of feedback becomes increasingly narrow, and dominated by the auditing of formal transmission feedback mechanisms, thereby inhibiting and detracting from broader, more flexible (and potentially useful) approaches to the feedback process’ (28). This caution has been echoed more recently by Winstone and Boud (Citation2019, 115), who argued that ‘by failing to value learning-focused approaches to feedback by placing them at the centre of approaches to evaluation, there is little incentive for educators seeking to work in learning-focused ways’. They go on to argue that

We could send a powerful message about the importance of student agency in the feedback process by asking them not to rate the utility of feedback they have received, but rather the extent to which they have been enabled to gather or use feedback to support their learning. (Winstone and Boud Citation2019, 115, emphasis in original)

Items used in student evaluation surveys are often developed without consideration of relevant evidence on what makes for effective practice (Spooren, Brockx, and Mortelmans Citation2013). There is an important role for policy makers in seeking to align measurement tools such as evaluation surveys with the contemporary literature on educational effectiveness. We urge practitioners, institutional leaders, and policy makers to question the implicit messages sent to students through the language of survey items, and consider the ways in which such messages might shape the perceptions and actions of all those involved in feedback processes. We would also encourage those involved in feedback processes at all levels to ensure that practices are not driven solely by the ways in which they are measured. Instead, we argue that any recommendations for practice should be evidence-informed and, in turn, that measurement instruments should be aligned with scholarly understanding of feedback processes.

We propose that evaluation items could be reframed so that they continue to assess dimensions of feedback such as its timing, frequency, or value, but do so in ways that represent feedback as a process, make room for other dimensions such as dialogue, include verbs that foreground the actions of the student, position the student as the primary agent, and give students an active role. In Table 8, we illustrate how such reframing might be implemented. The wording we adopt is intended to be indicative rather than prescriptive; we are not proposing that items should necessarily be framed in these ways. Rather, we suggest these examples as indicators of ways in which students could be repositioned within survey items in order to afford them greater agency in feedback processes. It is important to acknowledge that whilst changing the positioning of students in national surveys is likely to feed down to changes in items used within institutional surveys, individual instructors have the power to reshape perceptions of feedback in their own contexts by adjusting the ways in which they seek feedback from their students on their experiences of feedback processes.

Table 8. Reframing items to focus on feedback as a process.

Limitations

While we sought to include a range of national evaluation instruments in our study, we do not claim that they are representative of all available instruments. A lack of contacts in many non-English-speaking countries meant that our sample of instruments may be partial. In addition, the absence of national instruments in many of the countries surveyed does not rule out a wide and influential array of regional or local approaches, which might display characteristics other than those we have identified here. We have focused on items assessing feedback processes only, and we do not contend that students’ experiences of assessment and feedback are separable from their overall university experience. However, the approach we have adopted might also prove illuminating in exploring the positioning of students in items assessing other dimensions of their experience.

Conclusion

Student evaluations are influential in higher education. It therefore behoves us to look critically at the instruments used and to ensure that they both do what they claim to do and have a positive educational effect when acted upon. We have shown that, in the case of feedback, most instruments are found wanting and capture an inadequate view of feedback. A common critique of student evaluations is that they can incentivise teaching practices that are aligned with what is being measured (Kornell Citation2020); instead, we argue that we should measure what we value – feedback as a process rather than the mere provision of information – thereby influencing feedback practice and resource allocation in positive ways.

Supplemental material

Supplementary material for this article (Table S1) is available online.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 In 2020, a large-scale review of the NSS was announced by the Office for Students.

References