Editorial
When quality assurance first began in earnest in the 1980s, there was a concern that it did not mesh with the then vibrant learning and teaching developments.

Researchers who bridged the divide between the teaching and learning enhancement and the quality assurance communities argued strongly for mutual development and compatibility. For example, in the second volume of Quality in Higher Education, Elton (1996, p. 101) argued that:

an audit would assess the robustness and effectiveness of all the internal quality assurance processes needed to ensure both the current quality of the student learning experience in its totality and the potential for future quality enhancement.

The issue persisted as problematic and, at the start of the millennium, Gosling and D’Andrea (2001) bemoaned the separation of quality assurance and educational development. They advocated, in vain at the time, a more holistic approach that did not separate educational development units from quality assurance offices. The separation was not only counterproductive but revealed competing improvement agendas based on often opposing values.

The End of Quality? conference in 2002 addressed the quality and learning issue. Delegates were sceptical that quality assurance had any impact on student learning; the emerging consensus was that, if quality monitoring is to be effective in aiding and embedding improvement, then any results of external monitoring processes must lead to more than temporary adjustments. There is considerable evidence that the initial impact fades away quickly, ‘especially if there is no significant connection between internal and external processes. External monitoring must interact with internal quality systems: the real benefits, it was argued, are products of the external and internal dialogue’ (Harvey, 2002, p. 9).

An INQAAHE workshop in The Hague in 2006, about the impact that external quality assurance processes have on institutions and programmes, provided an agency view (Harvey, 2006). The delegates listed various impacts of external quality monitoring on student learning:

First, institutions are required to take responsibility for students enrolled. Second, curricula have been adjusted as the result of review. Third, there has been a growing concern about attrition rates. Fourth, course evaluations have been introduced. Fifth, appeals and complaints procedures have been set up. Sixth, rather more radically, teachers have thought about different ways of doing things, reviewing pedagogy, which has possibly led to better teaching (although there is little systematic evidence to confirm such impressions). Seventh, standards of student achievement have improved in many countries; this includes competencies (such as team working and communication) as well as knowledge and academic skills … . In some cases, this has gone hand-in-hand with a reduction in over-teaching, which had characterised some systems. However, the massification in other systems has perhaps resulted in reduced face-to-face teaching time, which may have mediated against improvements in quality.

Quality became less focused on learning than on institutional processes; for a period, ‘quality’ became synonymous with quality assurance mechanisms, which alienated lecturers (Newton, 2000) while generating a whole new layer of oppressive bureaucracy.

Throughout the first two decades of Quality in Higher Education there have been repeated concerns about the artificiality of quality assurance processes in higher education and the response and resistance of academics. For example, Barrow (1999) talked of dramaturgical compliance in New Zealand; Anderson (2006) showed that Australian academics, although committed to quality in research and teaching, continued to resist quality assurance processes within their universities; and Minelli, Rebora and Turri (2008) outlined how evaluation in Italian universities risked slipping towards ritual. In a South African study, Jacobs and Du Toit (2006), five years into quality processes, concluded that quality committees still viewed quality as ‘something that exists out there’.

Quality assurance has marched on regardless: it has taken over the world, spreading out from its beginnings in Western Europe and North America. Yet the concern is that it has contributed very little to the improvement of learning.

This issue of the journal focuses on a project that explores performance indicators and the use of learning analytics to reconnect learning to quality assurance and, more importantly, it explores how, ultimately, to improve learning and teaching for the individual student.

However, this leads to bigger questions. What has quality assurance really achieved if it has not improved student learning? Horsburgh’s (1999) detailed analysis of enhanced learning suggested that there are far more important factors affecting innovation in learning than external quality monitoring. The most direct impact on student learning was, she showed, how teachers help students learn and the assessment practices they employ. Harvey and Newton (2004, p. 157), reviewing the literature on the effect of quality assurance, concluded that:

most impact studies reinforce the view that quality is about compliance and accountability and has contributed little to any effective transformation of the student learning experience … . It is [unclear] what impact external and internal quality monitoring is having on the student experience. There appears, for example, to be little articulation between quality monitoring and innovation in learning and teaching.

Why has quality assurance not engaged with what makes for better learning? Why has it eschewed the link with learning and, instead of taking the transformative approach to quality, adopted a control-oriented approach underpinned by an accountability-focused, fitness-for-purpose notion of quality? At one level, the question is naïve and answers itself. Quality assurance is not about improvement at the individual level; it is about external control of the institution. Political distrust has used quality assurance in tandem with tight fiscal control to emasculate the independence of universities: student learning and innovation have been the victims.

This leads to the final, hugely significant issue, one that the massification of higher education has trodden underfoot but that the age of online learning may help to resuscitate: the assessment of student learning. Has assessment shifted from helping students learn to evaluating the institution?

For decades, those concerned with the development of learning (and concomitant teaching practices) showed that active, participatory learning was by far the most effective way of learning; that examinations, which promote rote learning, are detrimental to long-term learning; and that recruiters focusing solely on student performance in examinations did not gain employers the most able or appropriate recruits. The more assessment prioritised examinations over other forms of assessment, the more creativity was suppressed. The rising number of undergraduates in many countries is applauded because it raises the educational level, but the downside is that assessment becomes more standardised, is almost entirely summative rather than formative and final performance in examinations counts for far more than any continuous assessment. This has also crept into postgraduate (post-bachelor) education, and examinations infest a realm that hitherto relied upon creative thinking, epitomised by the master’s dissertation or doctoral thesis.

The shift to online learning, blended or entirely remote, provides an opportunity not to assess via tedious multiple-choice tests or examinations but to ask interesting questions that require imagination and research (the better, for example, to reflect the world of work). This would mean rethinking teaching delivery and forms of engagement with students, with a focus not on whether they ‘know’ things that teachers want them to know but on what students have learned and how they have developed their thinking and critique. It is, in short, time to empower learners and to work with them rather than trying to process them.

Amongst other things, this will require due recognition and reward for innovative teaching and learning. Back at the start of the millennium, Drennan (2001) argued that one of the key aims of teaching quality assessment in Scotland was to encourage continuous quality improvement in teaching and learning. She showed, though, that senior management were reluctant to promote staff on the basis of teaching performance. Her findings were in line with previous research in the USA and Australia, which indicated that the prioritisation of research was a disincentive to the development of innovative teaching and learning processes. Little has changed on that front, although, as Sarrico (2021) argues in this issue, some countries are making changes to recognition and reward systems. It may now be time, in the wake of the pandemic, to shift emphasis and acknowledgement significantly towards innovative online or blended learning. New approaches that use the technology creatively and focus on encouraging transformative learning should be rewarded, which requires a shift in funding focus (away from research competitions).

This reiterates arguments from advocates at the turn of the century, summarised by Lomas and Nicholls (2005, pp. 138–39):

Peter Williams (2002), Director of QAA, … claimed that quality enhancement is an integral part of quality assurance by disseminating the mass of good practice collected through reviews and also by warning against the bad practice that is sometimes seen. However, Jackson (2002) suggested that quality enhancement is more transformative and is directly concerned with adding value and improving quality. Harvey and Knight (1996) argued that quality education is transformative, leading to change and enhancement in the participants themselves. These views are supported by a Teaching Quality Enhancement Committee (TQEC) report (TQEC, 2003), which concluded that quality enhancement involves enthusing the students, responding to new technologies as one of the many means of coping with the more diverse range of students and ensuring that staff are recognised and rewarded for excellent teaching.

So, to today. Although circumstances encumbered the shift to empowered learners in the past, the current situation makes it much more possible for learners to reclaim the high ground, turn their twenty-first-century manifestation as consumers back on the educational providers and demand transformative, individualised learning approaches.

References

  • Anderson, G., 2006, ‘Assuring quality/resisting quality assurance: academics’ responses to ‘quality’ in some Australian universities’, Quality in Higher Education, 12(2), pp. 161–73.
  • Barrow, M., 1999, ‘Quality-management systems and dramaturgical compliance’, Quality in Higher Education, 5(1), pp. 27–36.
  • Drennan, L.T., 2001, ‘Quality assessment and the tension between teaching and research’, Quality in Higher Education, 7(3), pp. 167–78.
  • Elton, L., 1996, ‘Partnership, quality and standards in higher education’, Quality in Higher Education, 2(2), pp. 95–104.
  • Gosling, D. & D’Andrea, V.-M., 2001, ‘Quality development: a new concept for higher education’, Quality in Higher Education, 7(1), pp. 7–17.
  • Harvey, L., 2002, ‘The end of quality?’, Quality in Higher Education, 8(1), pp. 5–22.
  • Harvey, L., 2006, ‘Impact of quality assurance: overview of a discussion between representatives of external quality assurance agencies’, Quality in Higher Education, 12(3), pp. 287–90.
  • Harvey, L. & Knight, P.T., 1996, Transforming Higher Education (Buckingham, Society for Research into Higher Education/Open University Press).
  • Harvey, L. & Newton, J., 2004, ‘Transforming quality evaluation’, Quality in Higher Education, 10(2), pp. 149–65.
  • Horsburgh, M., 1999, ‘Quality monitoring in higher education: the impact on student learning’, Quality in Higher Education, 5(1), pp. 9–25.
  • Jackson, N., 2002, ‘Principles to support the enhancement of teaching and student learning’, Educational Developments, 3(1), pp. 1–6.
  • Jacobs, G.J. & Du Toit, A., 2006, ‘Contrasting faculty quality views and practices over a five-year interval’, Quality in Higher Education, 12(3), pp. 303–14.
  • Lomas, L. & Nicholls, G., 2005, ‘Enhancing teaching quality through peer review of teaching’, Quality in Higher Education, 11(2), pp. 137–49.
  • Minelli, E., Rebora, G. & Turri, M., 2008, ‘How can evaluation fail? The case of Italian universities’, Quality in Higher Education, 14(2), pp. 157–73.
  • Newton, J., 2000, ‘Feeding the beast or improving quality? Academics’ perceptions of quality assurance and quality monitoring’, Quality in Higher Education, 6(2), pp. 153–63.
  • Sarrico, C.S., 2021, ‘Quality management, performance measurement and indicators in higher education institutions: between burden, inspiration and innovation’, Quality in Higher Education, 28(1).
  • Teaching Quality Enhancement Committee (TQEC), 2003, Final Report of the TQEC on the Future Needs and Support for Quality Enhancement of Learning and Teaching in Higher Education (Bristol, HEFCE).
  • Williams, P., 2002, ‘Anyone for enhancement?’, QAA Higher Quality, 11, pp. 1–2.
