Editorials

Back to basics for student satisfaction: improving learning rather than constructing fatuous rankings

The error of standardisation

There is growing concern expressed in this journal and elsewhere about the misdirection of student feedback processes. ‘Feedback’ in this sense refers to the expressed opinions of students about the service they receive as students. This may include perceptions about the learning and teaching, course organisation, learning support and environment. The problem is that feedback seems increasingly to have become a ritualistic process that results in very little if any action and is thereby decried as of little value. Student indifference, born of the formulaic nature of the feedback and the failure to see any changes enacted, only serves to reinforce the pointlessness of the process.

The problem, though, is not indifference towards, or contempt for, the process. That is the symptom. The problem is the lack of desire to use student views to make changes, compounded by the obsession with standardisation of questions in fatuous national surveys.

Standardising student feedback is the enemy of improvement. It misses the whole point. It facilitates ludicrous and entirely pointless rankings. Student feedback is a serious matter that provides the basis for a fundamental exploration of what works and what doesn’t work for students. It is not about creating league tables or rating teachers. Student feedback is fundamentally about making changes to the student experience at a level that improves the experience for students: teaching and learning at a programme level, general facilities at a university level.

It is time to return to using student feedback as an improvement tool. Complacent and relatively meaningless one-size-fits-all surveys used to rank entire institutions are misleading, especially to prospective students, at whom the obsession with league tables is supposedly aimed.

Zineldin et al. (2011), for example, showed in their study that the ten critical components of student satisfaction, in order of importance, were as follows: (1) cleanliness of classrooms; (2) cleanliness of toilets; (3) the skill of the professors attending the class; (4) politeness of professors; (5) physical appearance of professors and assistants; (6) responsiveness of the professors to students’ needs and questions; (7) cleanliness of the food court; (8) physical appearance of classrooms; (9) politeness of assistants; (10) the sense of physical security the students felt on the university campus. Not many of these criteria are likely to have prominence in national surveys that have not engaged with student views before the questionnaire is constructed. While this list may be ‘idiosyncratic’ to the specific study, it is indicative of the variability of student perspectives and of how far they diverge from the bland and generic statements found in national surveys.

At a more general level, a Norwegian study revealed that:

the empirical analysis has, not surprisingly, indicated that student satisfaction is a rather complex concept…. As the sample contains more than 10,000 respondents, in a variety of study programmes and institutions, it is arguable that, in a Norwegian context, some important factors that influence overall satisfaction have been identified, independent of institutional characteristics. Factors associated with teaching and social climate seem to be very important. However, significant relationships occur for other factors like administrative staff services and physical facilities as well. Not least, the social climate is a factor of considerable significance to the well-being of students, and this factor may be regarded as highly manipulative as seen from the level of the institution. (Wiers-Jenssen et al., 2002, p. 193)

Improvement

What does it mean to use student feedback for improvement? This has been a long-discussed issue in the journal. An Editorial in 2003 discussed the whole issue and nature of student feedback in detail, and the principles in that commentary remain valid (Harvey, 2003). Unfortunately, the development of national ranking surveys of student satisfaction has ridden roughshod over institutional improvement processes.

What is needed is a return to a sensitive approach that (a) identifies student concerns and (b) uses that information in a systematic way to improve the situation, which is monitored year-on-year.

Rather than construct a questionnaire that tells students what their concerns are, find out from students what they consider to be the key issues. Use focus groups, minuted discussion sessions, open-ended questionnaires or online ‘Trustpilot’-type reviews to explore student concerns. Then, based on that qualitative information, construct a questionnaire that would enable action to be taken to address the concerns. This would, ideally, involve asking questions that sought both a satisfaction rating and an importance rating for each item. The analysis of results would then identify those issues that are particularly important and very unsatisfactory for sub-groups of students. Action would then follow at a local level to deal with the key student concerns. As an annual process, a time series of (hopefully) increasing satisfaction in core areas could be developed year on year, as sketched below.
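To make the mechanics concrete, the following is a minimal sketch (in Python, using pandas) of the kind of importance-satisfaction analysis described above. The subgroups, item names, five-point scales and priority thresholds are all illustrative assumptions, not drawn from any published instrument.

import pandas as pd

# Hypothetical responses: each row is one student's rating of one
# questionnaire item on assumed 1-5 scales (illustrative only).
responses = pd.DataFrame({
    "subgroup":     ["full-time"] * 4 + ["part-time"] * 4,
    "item":         ["library hours", "feedback speed"] * 4,
    "satisfaction": [4, 2, 4, 3, 4, 2, 3, 2],  # 1 = very unsatisfactory
    "importance":   [3, 5, 3, 4, 3, 5, 2, 5],  # 5 = very important
})

# Mean satisfaction and importance per item within each student subgroup.
summary = responses.groupby(
    ["subgroup", "item"], as_index=False
)[["satisfaction", "importance"]].mean()

# Items that are both highly important and poorly rated are the candidates
# for local action; the cut-off values here are arbitrary illustrations.
priorities = summary[
    (summary["importance"] >= 4.0) & (summary["satisfaction"] <= 2.5)
]
print(priorities)

Repeating the same analysis each year on the same core items would yield the year-on-year time series mentioned above.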

Many institutions have teacher performance questionnaires. Their impact, though, tends to be minimal. Bland teacher performance questionnaires that ask general questions such as ‘Starts lectures on time’, ‘How knowledgeable was your instructor?’ or ‘Did your lecturer explain the course material well?’ are of little value: they are not necessarily about the issues that concern students; they target individual lecturers and may be more alienating than helpful in improving teacher performance; they tend to ‘discriminate’ against lecturers who demand more of students; and they are limited to the teaching situation, ignoring the wider learning environment.

Student satisfaction surveys should be about student learning and the resources that support it. Satisfaction surveys should not be about scoring teacher performance. Information needs to be about how courses are organised, what knowledge students learn, what abilities they develop, how well they are prepared as lifelong learners and what the learning support infrastructure is like.

The priorities thus need to be clear. Student feedback is primarily about improvement, not information. Indeed, public information is a spin-off from the much more important process of improvement. Student feedback plays a very important role in the improvement process. However, effective improvement requires integrating student views into a regular and continuous cycle of analysis, reporting, action and feedback. It is essential to close the action and feedback loop and to make reflection explicit in the feedback system, at the level of both the student and the institution. This requires systematic data collection based on student-voiced concerns; clear reports that identify areas for action; delegated responsibility for action; ownership of plans of action; and feedback to the generators of the data, viz. the students.

Establishing this is not an easy task, which is why so much data on student views is not used to effect change. The student satisfaction survey approach, which originated at the University of Central England in the late 1980s and has been used by institutions in England, Wales and Scotland as well as in Sweden, Finland, New Zealand and Poland, established a clear process that involved institutional leaders in a top-down strategic approach paralleled by bottom-up module-level feedback, the two coalescing in the programme-level planning process. Reporting is to the level at which effective action can be implemented: for example, programme organisation is reported to the level of programmes, computing facilities to the level of faculties, and general facilities and learning resources to the level of the institution.

At Lund University, students are not viewed as counterparts but partners in the university’s activities. Lund University has carried out Student Satisfaction Surveys (barometers) since the 1990s and an overview has shown that an evaluation culture has grown during the past decade…The student barometers have been valuable instruments in gaining access to the students’ opinions about their education. The student unions, involved at all levels within the university, have been able to use the results to strengthen the voice of students. The success of student feedback is greater when information is not just acquired and analysed but also worked through, discussed and developed in a forum containing university personnel, academic staff and students. (Josefson et al., 2011, pp. 257 & 261)

However, at Lund the barometer had to be managed alongside teacher performance questionnaires, which over time tended to cover similar areas. It is important that student feedback is not developed as just another layer of questionnaires but is treated holistically, co-ordinated across the institution and processed accordingly.

Each institution has its own unique improvement needs, so universities should tailor satisfaction surveys to fit those needs, based on the qualitative feedback from students that initiates the process. There must be an infrastructure in place to ensure effective use of the data. One-size-fits-all surveys are next to useless for effecting change.

Kane et al. (2008), for example, undertook an analysis of 18 years of student feedback data and consequent action at UCE, revealing that significant changes occurred over time as a result of closing the feedback loop. The data indicated not only where improvements had been made and priorities had changed but also where student concerns had remained consistent. The analysis also revealed how the feedback questionnaire had evolved dynamically, reflecting historical changes.

Instead of tailored, improvement-oriented student feedback processes, we hear of ever-more attempts to universalise student feedback. The Global Student Satisfaction Awards, for example, proclaim that universities must pay attention to student satisfaction. Yet what they offer is pseudo-ranking information based on student feedback, for which they make awards. This is based on ‘Trustpilot’-type responses to three open, multifaceted prompts:

Tell us about your experience.

What did you really like about it and what can be improved?

How would you summarise your overall study experience?

This may provide some form of information for prospective students but, given the tiny numbers of respondents per institution, it hardly represents a credible basis for making a life-changing career choice. Studyportals claimed to have received 108,000 student reviews across 4,000 global institutions. However, almost 90% of institutions did not receive more than 10 reviews (the minimum number required) and only 444 institutions had enough reviews to qualify for the awards.

More to the point, these reviews are not systematically used to improve the situation within the institution. It is more about the self-aggrandisement of the awarding body and vanity projects for the institutions than any genuine attempt to enhance the student experience.

Relative or absolute

What these one-size-fits-all ranking surveys ignore is that student satisfaction feedback is not some kind of absolute but is relative to the respondent’s situation. Research has shown that satisfaction varies by gender, age and demographic group, among other things. Hamshire et al. (2017, pp. 50 & 53), drawing on studies involving institutions in the United Kingdom, Australia, South Africa and New Zealand, warn against the generic survey tool:

University policies are increasingly developed with reference to students’ learning experiences, with a focus on the concept of the ‘student voice’. Yet the ‘student voice’ is difficult to define and emphasis is often placed on numerical performance indicators. A diverse student population has wide-ranging educational experiences, which may not be easily captured within the broad categories provided by traditional survey tools, which can drown out the rich, varied and gradual processes of individual development. There is no single tool that can be used to measure students’ experiences.… However valuable the data may be for the quality assurance agenda, the types of questions included in student surveys can limit their value for quality enhancement. Questionnaires using closed answers cannot adequately describe the variety of student experiences and there is evidence to show that students have disparate understandings of survey items…Questions about student satisfaction may therefore receive different answers at different stages of an individual’s journey through their higher education studies: at times, learning in higher education may be challenging and uncomfortable for students as they develop as autonomous learners and team workers and their discomfort may be expressed as dissatisfaction.

Furthermore, as well as the changes wrought at different stages of the academic journey, different backgrounds raise different expectations, which are also crucial in shaping satisfaction. For example, Abizada and Mirzaliyeva (2021, p. 267) recently pointed out that to

measure student satisfaction properly, goals of the students should be clearly identified and specified, which in itself is a hard and continues [sic] process…. Student satisfaction is strongly correlated with student expectation: when delivered performance is below expectations, students are usually dissatisfied, while, on the contrary, when delivered performance exceeds expectations, students are usually satisfied…. In such comparison, quality of delivered performance is not questioned and all that is needed to satisfy the students, is to have it above their expectations.

Similarly, Momunalieva et al. (2020, p. 351) showed that the student view of the quality of education was dependent on the context of the student:

The physical infrastructure and university support as well as professionalism of teachers were among the main factors that influence students’ perception of educational quality. Students mentioned that it is important to study in a clean, modern and equipped building. Yet, this factor was important only among public universities. In universities with foreign capital, students perceive the infrastructure as something natural and therefore it has no impact on their perception of education quality. University support also played a vital role in building students’ perception of education quality: students who are less satisfied with the university support are less satisfied with the higher education quality than those who are more satisfied with the support.

Ruohoniemi et al. (2017, p. 260) show that student satisfaction is also very much affected by students’ approaches to learning:

Students’ perceptions of the teaching-learning environment are slanted by their approaches to learning. Thus, in order to enhance the quality of education, the quality of students’ learning needs to be taken seriously and development should not be based on satisfaction surveys only…. It would be beneficial to explicitly make students aware of their approaches to learning and to support the development of their study practices.…

All of this suggests that student feedback is inappropriate for constructing rankings but crucial for institutional improvement at all levels. Do it properly, closing the loop, and the enhancement of the student experience is immense; do it ritualistically and the whole potential is sacrificed on the altar of rankings.

References

  • Abizada, A. & Mirzaliyeva, F., 2021, ‘Student satisfaction with honours programme in Azerbaijan’, Quality in Higher Education, 27(2), pp. 264–78.
  • Hamshire, C., Forsyth, R., Bell, A., Benton, M., Kelly-Laubscher, R., Paxton, M. & Wolfgramm-Foliaki, E., 2017, ‘The potential of student narratives to enhance quality in higher education’, Quality in Higher Education, 23(1), pp. 50–64.
  • Harvey, L., 2003, ‘Editorial: Student feedback’, Quality in Higher Education, 9(1), pp. 3–20.
  • Josefson, K., Pobiega, J. & Stråhlman, C., 2011, ‘Student participation in developing student feedback’, Quality in Higher Education, 17(2), pp. 257–62.
  • Kane, D., Williams, J. & Cappuccini‐Ansfield, G., 2008, ‘Student satisfaction surveys: the value in taking an historical perspective’, Quality in Higher Education, 14(2), pp. 135–55.
  • Momunalieva, A., Urdaletova, A., Ismailova, R. & Abdykeev, E., 2020, ‘The quality of higher education in Kyrgyzstan through the eyes of students’, Quality in Higher Education, 26(3), pp. 337–54.
  • Ruohoniemi, M., Forni, M., Mikkonen, J. & Parpala, A., 2017, ‘Enhancing quality with a research-based student feedback instrument: a comparison of veterinary students’ learning experiences in two culturally different European universities’, Quality in Higher Education, 23(3), pp. 249–63.
  • Wiers-Jenssen, J., Stensaker, B. & Grøgaard, J.B., 2002, ‘Student satisfaction: towards an empirical deconstruction of the concept’, Quality in Higher Education, 8(2), pp. 183–95.
  • Zineldin, M., Akdag, H.C. & Vasicheva, V., 2011, ‘Assessing quality in higher education: new criteria for evaluating students’ satisfaction’, Quality in Higher Education, 17(2), pp. 231–43.
