Practice Paper

Listening to Students’ Views on NSS Data for Quality Enhancement

Pages 35-40 | Published online: 15 Dec 2015

Abstract

In the UK, the marketisation of Higher Education (HE) increasingly constructs students as ‘customers’ rather than ‘learners’. Prospective students are faced with an array of published material to enable them to compare and contrast the ‘products’ on offer from UK institutions, including the Government website, Unistats (http://unistats.direct.gov.uk), which provides at-a-glance information about each programme to help inform the choice of university.

It can be argued that such marketisation constrains pedagogical aspects of HE provision and obscures the responsibilities of each learner when considering the effectiveness of a programme of learning, raising challenges for managing students' expectations. This paper examines the challenges facing HE Institutions (HEIs) in ensuring that the provision they offer is evaluated and developed as more than simply a 'product'. The challenges to be addressed are discussed and a good practice example of using National Student Survey (NSS) data for quality enhancement is presented.

Introduction

The increased alignment of HE with subsequent employment tends to construct students as 'customers' or 'consumers' rather than 'learners' (Sonnenberg & Fitzpatrick, 2012) and encourages those consumers to consider the end point of a programme of learning (i.e. getting a degree) as the consumer 'product' (Molesworth et al., 2009). Higher Education Institutions (HEIs) are responding to consumer demand by making improvements in cost-effectiveness and facilities; consequently, there is an increasing tendency for HEIs to adopt commercial practices concerned with measuring and responding to consumer satisfaction (Lowrie & Hemsley-Brown, 2011). Moreover, 'consumer' evaluation of HEI provision tends to take place externally to the institutions concerned, and these measurements are widely publicised. For instance, available data include various university ranking schemes, such as the Times Higher Education World University Rankings; externally managed consumer satisfaction scores, such as Ipsos MORI's National Student Survey (NSS); and opinions expressed on social media sites. Hence, the measurement of students' experience within HE, and the use to which those measures are put, are now viewed with trepidation by many providers.

Measuring and comparing the quality of Higher Education

One source of evaluative data that cannot be ignored is the government comparison website, Unistats, which provides at-a-glance information, in the form of Key Information Sets (KIS) for each programme, to help inform the choice of university. The KIS include measures of student satisfaction based on quantitative data from the NSS, along with quantitative measures of learning and teaching activities; assessment methods; graduate outcomes; tuition fees and student finance; accommodation; and professional accreditation.

Figure 1 shows an anonymised overview of KIS for a selection of BSc (Hons) Diagnostic Radiography programmes. On first inspection, it would appear that direct comparisons could be drawn between programmes, and that, within this selection, results vary from 68% to 100% student satisfaction.

Figure 1 Unistats comparison of BSc (Hons) Diagnostic Radiography programmes.

However, this type of approach to measuring the value of a Higher Education experience can be questioned in terms of both construct and internal validity: that is, both the appropriateness of these indicators for evaluating the quality of the HE experience and the practical value of the measures to programme and curriculum developers. We will now explore these issues further.

The measures and standards appropriate for evaluating education will depend upon whether we construe the student as a 'customer/consumer', where the desired outcome is the 'product' or final award, or as a 'learner', where the desired outcome is the 'process' of learning evidenced in the development of study skills and autonomy of thought (Molesworth et al., 2009). An educator is likely to construe the student as the latter; however, the gradual marketisation of HE places more emphasis on the former (Molesworth et al., 2011). Drivers for this change include globalisation and the changing role of HE in the global economy; extended and distributed learning environments enabled by digital technologies (JISC, 2012); the marketing of programmes to recruit more international students; and the resultant increase in competition between universities.

Furthermore, the introduction of undergraduate student fees has redefined the student as customer (someone fully responsible for his/her own choices and future within free-market neo-liberalism) rather than learner (Leathwood & O'Connell, 2003) and, importantly, the academic is being redefined as service provider rather than educator (Nordensvard, 2011). This has signalled a fundamental shift in the relationship between academics and students and places greater emphasis on student expectations of value for money.

In the consumer model of evaluation, the customer is central to determining the quality of the organisation's efforts and therefore strongly influences not only the standards but also the product they believe they are purchasing. This is described as Customer-Focused Management (CFM) (Homburg et al., 2000), in which success depends on satisfying the customer's needs and exceeding their expectations, measured through, for instance, the customer satisfaction survey; hence the introduction of the NSS. However, whilst CFM frameworks may be relevant and easy to apply to an industry where the product is clearly defined, the problem in education is that what is being purchased is more abstract, and measures of satisfaction such as those used in the NSS do not necessarily capture evaluation of pedagogical outcomes. Furthermore, customers themselves may not easily grasp the notion of learning as a process in which, according to Meyer & Land (2006), uncomfortable feelings associated with grasping new and unfamiliar concepts are an inevitable aspect of learning. Consequently, such discomfort, whilst a vital element of learning, may be reported as 'dissatisfaction' when completing a satisfaction survey. The authors suspect this effect to be the likely cause of the mismatch between external examiners' views and students' NSS ratings of assessment and feedback processes, an area which nationally tends to achieve poorer scores in the NSS (Fielding et al., 2010). Yet it is at these times of discomfort that students are arguably being stretched furthest and the most effective learning takes place (Meyer & Land, 2006). Thus quantitative accounts of the HE experience, such as the number of classroom hours or the percentage of graduates with a first class degree, together with 'feel good' measures of satisfaction, fail to demonstrate whether effective learning has been facilitated and achieved, calling into question the construct validity of such measures.

We propose that a more appropriate way to evaluate the higher education experience is to construct the student as learner rather than consumer. This approach is likely to resonate with most academics, since it underpins the pedagogical theories on which our practices are based. In this role, the student is positioned in a more equal knowledge-transfer relationship with the academic, and the learning and development, or 'growth', of the individual becomes the thing to be measured and valued (Leathwood & O'Connell, 2003). Here, learners are constructed as actively consuming educational services, taking responsibility for their own learning, and as autonomous, independent and self-directed individuals (Leathwood & O'Connell, 2003). Whilst such characteristics are far less tangible and more difficult to measure, the 'student as learner' model acknowledges that relational factors, such as interactions with other people for learning, cognitive and metacognitive changes, and development opportunities, are as important as degree classification. Unlike the Unistats measures of teaching quality, the length of contact time matters less than the types of activities undertaken during that contact.

Despite the fact that there are multiple dimensions associated with student identity (Leathwood & O'Connell, 2003), it could be argued that the marketisation described above actively encourages students to adopt the stance of the consumer model. Moreover, depending on the standpoint a student adopts, the way they gauge the quality of their HE experience can vary, and there is research evidence that this is the case. Sonnenberg & Morris (2011) carried out a study which considered students according to two separate identities: 'consumer' or 'learner'. Their study comprised approximately 300 students who were asked to complete the NSS questionnaire. Before the survey, half were given a statement to read which addressed them as 'learners', while the other half were encouraged to consider themselves 'consumers'. There was a statistically significant difference between the levels of satisfaction on 16 of the 22 NSS categories, with 'consumers' being less satisfied than 'learners'. In a subsequent study, Sonnenberg & Fitzpatrick (2012) found statistically reliable differences between students who defined their prominent student identity as 'consumer' and those whose prominent student identity was 'learner'. 'Consumers' felt less dedicated to their studies and had a diminished sense of belonging to their university in comparison with 'learners'. Further, 'consumers' were less satisfied with their HE experience and the quality of their course, and they reported significantly lower general well-being than 'learners'.

The message here is that the KIS metrics do not appear to value those aspects of education which are valued by the academic in the 'student as learner' model. Instead, they construct students as consumers and do not appear to take account of the evidence that identity processes are implicated in students' assessment of the quality of their HE experience, as well as in their general well-being (Sonnenberg & Fitzpatrick, 2012).

Now we shall consider the validity of the methodology employed in evaluating HE using the KIS measures. In particular, we will focus on how student satisfaction is measured in the NSS – the specific methods of data collection and analysis.

First, the NSS is completed anonymously and without dialogue between the academic and the student. This uni-directional flow of information runs contrary to the more collaborative approach to improving the student experience that is widely promoted as good practice in national strategies for engaging students at all levels of curriculum design (HEA, 2013).

Second, whilst there are open questions allowing students to add their comments, they do not always do so, and the comments of those who do are often descriptive and vague; so knowing why students are dissatisfied with assessment, for instance, becomes a guessing game, and it is not clear what improvements need to be made.

Third are methodological issues associated with completing student experience surveys, such as acquiescence bias and indifference bias, the latter of which is explained later in relation to Figure 2 (Yorke, 2009). The questions are also open to interpretation and may not always be applicable. For example, the question "Good advice was available when I needed to make study choices" is not always applicable, because on some programmes all modules are compulsory. In this situation, we believe our students select 'neither agree nor disagree', and this response category is problematic (Yorke, 2009). Responses are selected from a Likert scale ('definitely agree', 'mostly agree', 'neither agree nor disagree', 'mostly disagree', 'definitely disagree'). The NSS score is a measure of satisfaction, so it is calculated as the sum of all 'definitely agree' and 'mostly agree' responses divided by the total number of answers for that question. This means that a 'neither agree nor disagree' response counts against the satisfaction score (Ipsos MORI, 2013).

Figure 2 Example of how NSS data may be misleading.

In this example (Figure 2), cohort 2 would generate the higher NSS score and so appear the more satisfied, even though 20 of its students expressed dissatisfaction, whereas no students in cohort 1 expressed dissatisfaction.
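To make the arithmetic concrete, the short sketch below computes the published '% agree' score for two hypothetical cohorts. The response counts are invented for illustration and are not the figures behind Figure 2; the point is simply that a cohort containing openly dissatisfied students can out-score a cohort with no dissatisfied students but many 'neither agree nor disagree' responses.

```python
# Hypothetical illustration of the NSS '% agree' calculation described above.
# The response counts are invented and are not the data shown in Figure 2.

LIKERT_OPTIONS = [
    "definitely agree",
    "mostly agree",
    "neither agree nor disagree",
    "mostly disagree",
    "definitely disagree",
]

def nss_score(responses: dict) -> float:
    """Percentage of all responses that are 'definitely agree' or 'mostly agree'."""
    agree = responses.get("definitely agree", 0) + responses.get("mostly agree", 0)
    total = sum(responses.get(option, 0) for option in LIKERT_OPTIONS)
    return 100 * agree / total

# Cohort 1: nobody dissatisfied, but many neutral responses.
cohort_1 = {"definitely agree": 40, "mostly agree": 30, "neither agree nor disagree": 30}

# Cohort 2: 20 students dissatisfied, no neutral responses.
cohort_2 = {"definitely agree": 55, "mostly agree": 25, "mostly disagree": 15, "definitely disagree": 5}

print(f"Cohort 1: {nss_score(cohort_1):.0f}% satisfied, 0 dissatisfied students")   # 70%
print(f"Cohort 2: {nss_score(cohort_2):.0f}% satisfied, 20 dissatisfied students")  # 80%
```

On this invented breakdown, cohort 2 publishes the higher satisfaction score despite containing the only dissatisfied students, precisely because neutral responses are counted against satisfaction.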

Finally, the value of the KIS data to prospective students is questionable. Providers implement changes in response to evaluations such as the NSS; therefore, any changes made (if it is possible to work out what these should be) benefit students in subsequent years, not the students having the experience now. Yet Unistats reports the preceding year's figures, and prospective students are not informed of changes implemented since the data were collected. This problem was identified by Cheng & Marsh (2010). Such evaluative data may clearly be of use to the organisation, but it does not adequately inform prospective students about the quality of what is actually on offer to them.

An example of good practice

Despite the issues highlighted previously, institutional behaviour continues to be driven by such data, regardless of its validity (Gibbs, 2012). The NSS and Unistats evaluations are therefore probably here to stay, and providers need to consider ways in which their outcomes can be used to best effect. On the BSc (Hons) Diagnostic Radiography programme at the University of Salford we use a range of the usual methods to engage students and evaluate the HE experience, including personal tuition, Personal Development Planning (PDP), module evaluation, and student representation on staff–student liaison committees. However, these data do not always closely reflect the categories reported in the NSS, which means we may not understand why students might report being dissatisfied when it comes to completing the NSS. Whilst it is imperative (and a legal requirement) that we do not attempt to manipulate students' NSS responses, we recognised a need to think laterally about how we could investigate our NSS scores to maximise pedagogical quality, i.e. actually make the data useful to us in practice. Since we were unable to ask those who completed the most recent NSS about the significance of the answers they provided, we decided to seek the views of current students on the survey results provided by previous cohorts, in the hope that they might act as change agents (Gibbs, 2012).

In 2011–2012, we introduced listening events for the level 4, 5 and 6 students, timetabling hour-long sessions with full cohort groups. During each session, we asked students to look at the Unistats website and the previous cohort's satisfaction scores (this information is freely available to the public) and to work in small groups to see if they could help us interpret the data from the students' emic perspective (Fielding et al., 2010). They were then asked for suggestions for improving the curriculum based on their analysis of the results. The listening events therefore still align with the CFM principle of consumer-focused involvement in quality improvement. However, because, unlike the NSS, they are undertaken face to face, they provide additional and essential qualitative information. Moreover, this approach exemplifies good practice in engaging students as collaborators in curriculum design.

The benefits of the listening events are therefore that they have enabled us to: i) take a proactive rather than reactive approach to developing quality, so current students benefit from their own suggestions; ii) identify student-generated ideas to improve the curriculum and associated systems; iii) garner more in-depth explanations of why certain student satisfaction scores might be low; and iv) explain the programme's learning philosophy to the students, which in turn has helped us to manage their expectations. Furthermore, talking to the students on a regular basis in this way means we can show them we are prepared to listen and to make changes as necessary. Students said they appreciated being listened to, and asked for the sessions to be repeated every year. Changes we have made as a result of these events include assignment briefs designed around a 'before', 'during' and 'after' format, which ensures that staff and students have a shared understanding of the process and that support and feedback are available at key stages in the planning and submission of a piece of work.

Figure 3 lists the measures which, based on our experiences, we believe help us to respond effectively to the increased marketisation of our working environment and make the best use we can of NSS data.

Figure 3 Recommended good practice.

Over the last three years our external measures of quality (e.g. the NSS) have shown increasing trends of satisfaction, and whilst it is difficult to claim that the listening events are solely responsible for this, we believe that talking to students has helped us understand what is important to them and therefore to make small but important changes. We have also been able to explain to students when changes are not possible, and this has gone some way towards managing their expectations. However, it is important to point out that the aim of the listening events was not to influence the NSS scores, not only because this would be contrary to the NSS regulations, but more importantly because we are convinced that the evaluation of HE is much more complex than quantitative data suggest. Furthermore, direct comparisons between institutions are not straightforward (Fielding et al., 2010). Leathwood & O'Connell (2003) showed that the cultural and ethnic complexities associated with increased widening participation add several further dimensions to student identity which affect how students evaluate their HE experiences, all of which need to be taken into account before one institution, or indeed one cohort, may be compared with another. The dialogic activities associated with the listening events overcome some of these issues by helping us to understand and improve local contexts, promoting continuous and systematic improvement in our HE provision in accordance with the Quality Code of the QAA (Quality Assurance Agency for Higher Education).

Conclusion

It has been recognised that aligning pedagogical cultures and student identities with consumerism may promote passive attitudes to learning, threaten academic standards and deter innovation (Naidoo & Jamieson, 2005). We have suggested that the marketisation of HE can lead to outcomes-based evaluation which promotes changes in student identities. Although these changes tend to affect the evaluations negatively, it is possible to use the data generated by mechanisms such as quantitative student satisfaction surveys in a different way to help overcome these problems. It is clear that improving the learning relationship within the current HE environment requires a re-framing of the HE task, putting the professional development and personal growth of learners at the centre of the ways in which we describe, design and manage the learning experience, and of how we manage the expectations of learners (Northumbria University, 2011).

References
