
Teaching assessment and perceived quality of teaching: a longitudinal study among academics in three European countries

Pages 382-399 | Received 19 Dec 2017, Accepted 09 May 2018, Published online: 20 Jun 2018

ABSTRACT

European institutions of higher education have increasingly sought to improve the accountability and transparency of teaching and research with formal procedures and performance criteria. In a longitudinal analysis conducted in faculties of social sciences and economics at universities in the Netherlands, Sweden and the United Kingdom, we examined ways in which academics have experienced the expanded use of teaching assessments and its impact on the perceived quality of teaching. Results revealed that teaching assessments in the three countries have become more institutionalized, as scepticism about their principles has been replaced with resilience and pragmatism towards assessment instruments and, among individual instructors, with a sharpened focus on the operational side of teaching. Although faculty members acknowledged benefits of teaching assessments, they could not envision how the assessments would improve the quality of teaching. In response, we offer a theoretical explanation of those trends that extends the development of micro-institutional theory.

Introduction

This paper addresses how increased attention to accountability and transparency in teaching and research in European institutions of higher education, supported by performance measurement and assessment, has affected the quality of teaching as perceived by academics.

Since the 1990s, governments in Europe have introduced a wave of reforms aimed at improving the efficiency of public-sector organizations under the label of New Public Management, and consequently European universities have gained institutional autonomy (e.g. Kehm Citation2012). In turn, to increase their competitiveness relative to other universities, they have felt forced to alter the organizational strategies, structures and values of their management to include budget transparency and output measurement (Smeenk et al. Citation2006, Citation2008). To those ends, universities have adopted principles and patterns of management from the private sector, including its structures, basic values and norms (Hackett Citation1990; Deem Citation1998; Pollitt and Bouckaert Citation2004), to organize and assess their institutions and institutional processes in an output-oriented manner.

The shift in governance, often called managerialism, has manifested in the management of performance and accountability, as reflected in performance measurement (Adcroft and Willis Citation2005), here defined as how the activities of academic staff are measured and compared by using, for instance, teaching evaluations. Managerialism has also produced an ever-growing number of university rankings, which have become increasingly important to European institutions of higher education (Devinney, Dowling, and Perm–Ajchariyawong Citation2008). Such developments have prioritized self-governmentality, changed funding mechanisms, especially for research, and thus substantially influenced the work of academics (Kehm and Lanzendorf Citation2006; Scott Citation2008; Leišytė Citation2016). Initially, academics met the implementation of managerialist strategies, particularly ones to ensure accountability for quality and transparency in teaching and research (Bryson Citation2004), with resistance. Early studies by Trow (Citation1994) and Parker and Jary (Citation1995) sketched a dreary picture of the so-called ‘McUniversity’, marked by the increased power of university management and the diminished autonomy of academics. In time, a more nuanced vision of accountability and performance measurement has taken hold (Meyer Citation2002), one in which managerialism is not fully embedded in university life (Barry, Chandler, and Clark Citation2001).

The perceived consequences of the shift towards managerialism in governance deserve attention because their impact in higher education remains contested (Paradeise and Thoenig Citation2013; Wilkesmann Citation2015). Several authors (Chan Citation2001; Kolsaker Citation2008) have argued that managerialism can benefit the quality of teaching and research and that ‘some dose of managerialism in the right proportion and in the right context’ (Chan Citation2001, 109) is useful at universities. For example, managerialism has positively affected academics’ job performance by providing criteria for the quality of day-to-day academic productivity. Consequently, as Kolsaker (Citation2008) has shown, academics have become more flexible, positive and pragmatic than suggested in earlier literature.

Contrary to those claims, however, other authors have suggested that the practices of managerialism undermine its chief goal of efficient, effective quality improvement. Among them, Bryson (Citation2004) has argued that managerialism at universities has degraded the quality of teaching and research (cf. Trow Citation1994; Davies and Thomas Citation2002). Not only have the time and energy once dedicated to those endeavours been diverted to secondary tasks, but academics have tended to adapt their teaching and research to prioritize so-called ‘measurable’ activities, which fosters a more loosely coupled relationship with institutional processes. More broadly, as the expanded use of performance indicators has reduced confidence in professional peer evaluation and overruled any solidarity academics once felt (Sarrico et al. Citation2010; Aertz Citation2013), academics have increasingly feared the further diminishment of their individual autonomy.

In response to the controversy, we investigated how academics at European universities have experienced the managerial changes imposed upon them and their impact on the perceived quality of higher education in Europe. Due to space constraints, we address academic teaching only in this paper; discussions concerning the quality assessment of research have been published elsewhere. A recent bibliometric analysis of 1,610 articles published during 1996–2013 revealed not only increased attention to quality assurance of teaching and learning in European higher education but also antagonism between education-oriented and management-oriented strands of research (Steinhardt et al. Citation2017).

In what follows, we first describe the theoretical framework combining neo-institutional and professional theories that we used to investigate and interpret the experiences of participating academics. Above all, by facilitating an analysis of their experiences from the embedded perspectives of national policy, institutions of higher education and individual academics, the framework furthers the development of micro-institutional theory. Next, we explain the study design and our selection of three countries – the Netherlands, Sweden and the United Kingdom – as sites for two rounds of data collection to secure a longitudinal perspective. After presenting our findings both country by country and in an international comparison, we offer our conclusions and theoretical reflections.

Theoretical framework for data collection and analysis

To investigate how academics have experienced the managerialist changes imposed upon them and their impact on the perceived quality of teaching, a framework was necessary that would enable the analysis of the complex, dynamic interactions of formal national and institutional policies (e.g. on performance measurement) and the day-to-day teaching and research activities of academics.

Because neo-institutional theory clarifies the effects of institutional structures and policies on the individual behaviours of members of organizations, it can also elucidate the relationships between formal procedures and the informal perceptions of individuals in an organization. Although neo-institutional theory has traditionally informed the (inter-)organizational level of analysis for determining conditions of stability and paths towards change (Greenwood and Hinings Citation1996; Bleiklie, Enders, and Lepori Citation2015), recent work by Powell and Rerup (Citation2017, 2) has indicated the necessity of studying the micro-foundations of institutional theory, largely because the macro level ‘associated with institutional theory needed an accompanying argument at the micro level’. In stressing the importance of research on individuals’ roles in organizations, other authors have advocated a distinct micro-institutional perspective (Powell and Colyvas Citation2008) to investigate the aggregated perceptions of individuals (e.g. Czarniawska and Genell Citation2002), their embodiment and the reproduction of ‘social reality, organizational purpose, identity and norms’ (van Dijk et al. Citation2011, 1487).

From the national perspective, neo-institutional theory helps to explain the role of the institutional environment of organizations in determining organizational structures. In that sense, the theory can clarify behaviours demonstrated by institutions of higher education and their employees (Engebretsen, Heggen, and Eilertsen Citation2012; Paradeise and Thoenig Citation2013; Bleiklie, Enders, and Lepori Citation2015). At the individual level, professional theory can further explain the behaviours and perceptions of individuals in organizations. Because professionals at the front lines often work more closely with their clients, patients or students than with their immediate colleagues, and because the standards of quality of their work are often set by their professional associations, not the organizations that employ them, professionals and professionalism can be potential sources of resistance to the new mode of managerial control that standardization represents (Czarniawska and Genell Citation2002), including at universities (Chandler, Barry, and Clark Citation2002; Kirkpatrick and Ackroyd Citation2003). Professionals are thus liable to feel overruled by managerialism when public service organizations adopt managerial systems of control (Ackroyd, Kirkpatrick, and Walker Citation2007).

Professionals can adapt themselves as well as their professional identities to the organizations that employ them (Muzio, Brock, and Suddaby Citation2013) and cope with competing logics in a way that enables them to colonize organizational spaces and structures (Reay and Hinings Citation2009). Therefore, their organizations can be conceived both as independent actors in the institutionalization of professions and as sites for the development of professionalism (Suddaby and Greenwood Citation2005; Suddaby Citation2010; Kipping and Kirkpatrick Citation2013). Some authors have argued that academic identity and a performance-oriented ethos are mutually exclusive (Ayers Citation2012; Winter and O’Donohue Citation2012), whereas others (Trowler Citation1998; Anderson Citation2008) have claimed that professionalism and managerialism can function together and that, because tensions between them are multidimensional, the measurement of performance can also involve the inherent quality of teaching and research. Among them, Lea and Stierer (Citation2011, 615) have demonstrated that in situations focusing predominantly on performance, academics can ‘build new identities successfully within the changing university’.

Combining these theoretical perspectives can afford additional insights into the dynamic balance between professionalism and managerialism within institutional settings and explain how academics respond to those tensions, for example, with resistance or by adapting to new situations. Drawing on that combined framework, we investigated how academics in our sample have understood the teaching assessments imposed upon them and how, if at all, they have perceived their relationship with the quality of teaching. We sought to explain those perceptions within their national and organizational contexts as well as in relation to their academic identities. By elucidating the experiences and perceptions of participating academics and how their perceptions changed during a 4-year period, we contribute to further developing micro-institutional theory.

Data collection and analysis

We collected data at universities in three countries – the Netherlands, Sweden and the United Kingdom – whose systems of higher education exhibit similarly high levels of managerialism. This is particularly true of the Netherlands and the United Kingdom, both of which represent core New Public Management countries (Pollitt and Bouckaert Citation2004; De Boer, Enders, and Leišytė Citation2006), while higher education in Sweden has developed in a similar direction (Bauer and Kogan Citation1997; Bologna Follow-Up Group Citation2007). In each country, among universities with both a faculty of social sciences and one of either business studies or economics, we randomly selected three general research universities (Smeenk et al. Citation2006).

To enable international comparison, we collected similar data in the three countries. We initially contacted potential participants at faculties of social sciences and economics via an online survey addressing organizational commitment, in which we asked respondents whether they would consent to being interviewed. With their consent, we approached them on the basis of convenience sampling and did not seek representativeness of the employee population. After performing two rounds of data collection, one in 2007 and the other in 2011, we performed a third round in 2016; because it proved impossible to approach the same participants a third time, that round yielded far fewer interviews and precluded further longitudinal analyses. We therefore based the analysis and findings in this paper on data from 2007 and 2011 only.

To collect additional background data, we gathered policy documents as a secondary source of information, followed by an initial round of interviews at three universities in each country. The substantial overlap of participants from 2007 to 2011 (Table 1) facilitated our longitudinal perspective and afforded us a dynamic overview of developments in participants’ perceptions. In spring and summer 2011, we conducted 53 interviews in the Netherlands (n = 13), Sweden (n = 21) and the United Kingdom (n = 19), 29 of which (54%) were with the same participants as in 2007 (Table 1). In particular, we interviewed lecturers and senior lecturers, assistant, associate and full professors, deans, vice deans and vice chancellors in faculties of social sciences and economics. Most participants (80%) worked full-time and had permanent contracts. In 2011, the age of participants ranged from 28 to 71 years.

Table 1. Overview of interviews held in 2007 and 2011.

The author and five M.Sc. students interviewed participants in either Dutch or English. To ensure comparability, we conducted all interviews with reference to a list of topics addressing recent developments in systems of higher education, the background of participants, their institutions, the work they performed for the ostensible purposes of accountability and performance measurement and their perceptions of those developments. We audiotaped and fully transcribed all interviews.

We analysed the transcripts by using Kwalitan, software designed for investigating interview data. To gain a more thorough understanding of the perceptions of participants concerning changes in both teaching as an everyday activity and the accountability and administration of teaching (Boeije Citation2012; Yin Citation2014), we followed an interpretative approach while analysing the interviews. We performed analysis from two perspectives: from a country-focused perspective, in order to gain insights into developments concerning teaching activities in general, and from an individual perspective, particularly for the 29 participants whom we interviewed twice, to compare their perceptions in 2007 and 2011 and thereby determine how, if at all, their opinions concerning teaching assessment had changed and, if so, then why. We performed analyses by matching the patterns (Yin Citation2014, 143) of various themes that emerged in the data, with special focus on similarities and differences among interview fragments. In this paper, we present our findings with the support of quotations from interviews, some of which we translated from Dutch to English.

After briefly introducing the systems of higher education in each country and recent relevant developments therein, we present findings country by country, followed by a longitudinal comparison of all three countries and, in closing, our conclusions and theoretical reflections.

Recent developments in European higher education

The systems of higher education in the three countries comprise high-status institutions with long histories. In the Netherlands and Sweden, institutions are either scientific universities or universities of applied sciences representing higher vocational education (Eurydice Citation2010); beyond that distinction, however, no explicit status-related differences between the institutions exist. By contrast, the system of higher education in the United Kingdom maintains a clear hierarchy of universities (e.g. Complete University Guide Citation2011). The general idea of the Bologna Process has been that universities in Europe should improve their productivity and the quality of teaching and research and be publicly managed with greater transparency (Wihlborg and Teelken Citation2014). Consequently, the management of universities in the three countries has become increasingly complex due to the often contradictory demands and expectations imposed upon them by various sources of funding. In turn, in becoming increasingly guided by strategies used in the private sector, the three university systems have become more market-oriented in order to compete for clients, funding and prestige, as well as to meet mounting pressure to cut costs (Christensen and Lægreid Citation2001).

Developments in teaching assessments in the Netherlands, Sweden and the United Kingdom

During the past 25 years, quality assurance regimes have spread as ‘an explosive phenomenon’ among institutions of higher education in half of all countries worldwide (Jarvis Citation2014, 238). Earlier, Bauer and Kogan (Citation1997) described the development of quality assessments in higher education in the United Kingdom and Sweden since the mid-1980s as involving prescriptive requirements for evaluation and quality assurance. Whereas the Swedish system has shifted from relying upon state regulation to relying upon self-governance, the UK system has shifted from prioritizing autonomy to prioritizing standardized quality assessment.

In the United Kingdom, the Quality Assurance Agency for Higher Education (QAA), an independent body funded by subscriptions from universities and colleges of higher education, has played a major role in teaching assessment. The QAA exists to safeguard the public interest in sound standards of higher education qualifications and to encourage continuous improvement in the management of the quality of higher education. The QAA performs institutional audits of institutions of higher education to safeguard, monitor and report standards among them over time, which has prompted scrutiny of internal quality assurance systems at universities for their emphasis on students and learning.

In Sweden during 2007–2012, new ideas concerning quality assurance provided the basis for the development of a new quality assurance system in higher education (Swedish National Agency for Higher Education Citation2007). The system has since sought to ensure certain minimal standards in three domains: distinguishing centres of educational excellence by highlighting good practices, auditing quality assurance procedures at institutions of higher education and evaluating the entitlement to award degrees in a three-cycle system, following in-depth programme evaluation by a panel of external assessors, onsite visits and expert opinion (Swedish National Agency for Higher Education Citation2007, Citation2008).

In the Netherlands, though explicit national policy measures such as those in Sweden and the United Kingdom have not been implemented in institutions of higher education, financial cutbacks at universities have exerted heavy influence. Although financial reductions have persisted for decades in the Netherlands, in 2013 a major cutback required master's and bachelor's courses to request re-accreditation, and in anticipation of the cutback, programmes began to intensify their focus on quality control. Accreditations in higher education in the Netherlands are expected to be performed at the institutional level (Westerheijden et al. Citation2013).

Findings

In what follows, we discuss shifts in the use of performance measures and the accountability of teaching in each country, as well as their impact on teaching practices and their consequences for the quality of teaching as perceived by participants in our sample.

The United Kingdom

In 2007, UK participants acknowledged an overall increase in both internal and external monitoring and teaching assessment (2007Uk2c), although they also perceived their influence upon actual teaching quality to be weak. They generally complained about increased workloads and administrative pressures, and in some cases, they reported that the outcomes of teaching evaluations were not taken seriously but were used for ‘door stopping’ (2007Uk3c).

Similarly, in 2011, UK participants observed more systematic attention to the quality of teaching involving performance measurement and quality assurance (2011Uk1a,e,f,g,4g). Such assessment had become more centralized and formalized via external inspection by the QAA and standard assessment forms.

There have been many changes in the extent to which our teaching is now assessed for quality. In terms of quality control mechanisms and systems now in operation, we have a much more formalized or more defined set of procedures and mechanisms for assessing teaching quality. (2011Uk1f, man, full professor, late 40s)

Because part of the more formalized evaluation occurs via a primary, systematic feedback loop, the findings of evaluations are actually relayed to management and lecturers, who can apply such input to change their courses:

Students are asked to fill out these feedback questionnaires. They are then sort of plated and assessed in areas, and the results are then fed back to the actual teachers, for them to make some kind of assessment about how they are doing (2011Uk1f).

In turn, during programme evaluations, both the faculty board and the management team are involved in a secondary feedback loop to determine whether feedback from evaluations prompted changes in the courses.

The various teaching assessments in the United Kingdom have become more institutionalized and formalized, and their outcomes have become more visible to students. Some participants, however, expressed concerns about what is assessed. One participant (2007Uk3f, man, full professor, early 60s) explained in 2007 that though evaluations were required, their actual form was left for staff to decide: ‘But there isn't a required questionnaire or checklist or template. It's up to us to decide how we do it, but we have to show that we are doing it’. In 2011, by contrast, when evaluations had become more institutionalized, the same participant expressed concern about how evaluations were conducted: ‘The system gets out of control and is no longer a measure of quality in the academic sense but in terms of what indices you’re using’ (2011Uk3f). From his perspective, the relationship of assessment to the primary processes of academia – namely, teaching and learning – had become unclear.

UK participants in 2011 also expressed more nuanced opinions than they had in 2007. Several considered teaching assessments to be highly useful (2011Uk3i), particularly if students, not an external body such as the QAA, conducted them. The major development between the years related to the manner and level of standardization: ‘We’re under great pressure from the registry office to produce a more rigid, one-size-fits-all principle. I think that's something that worries me because maybe there will be more pressure on the autonomy of assessment, autonomy of teaching’ (2011Uk3g, man, full professor, mid 60s). As another participant (2011Uk1f, man, full professor, late 40s) stated even more strongly, the increased assessment of teaching had enhanced the power of students at the expense of the autonomy of academics:

We swam from the extreme, in which there was virtually no external assessment of teaching quality or students’ experience whatsoever. We’re now moving towards a system in which students are becoming too powerful and are actually significantly reducing the autonomy of university teachers and the capacity of university teachers to set their own agenda.

Both participants’ arguments underscored that performance management in teaching has become more pervasive, particularly due to the institutionalization of teaching evaluations and use of external inspection bodies such as the QAA.

Although UK participants accepted and understood the benefits of greater standardization, they also disliked how teaching assessments had limited their autonomy (2011Uk1f,3g). Several participants (2011Uk1h) explained that increased pressure to ensure teaching and administrative quality had reduced the time available for research, which, in creating added personal value, is often regarded by academics as the most important part of their work (Schäfer Citation2016).

UK participants also reported a difference between how evaluation mechanisms should be implemented according to programme policy and how they have actually been implemented in daily practice (2011Uk3f,3g,3j). Participants were often unaware of possible sanctions when teaching did not meet certain criteria during evaluations and thus could not envision how evaluations could improve their teaching (2011Uk1h,4e,4f). As one participant explained, ‘They [the evaluations] go up, upstairs. We have an undergraduate committee, a teaching committee. … As it goes up higher, it becomes less and less of something that might feed into some larger process’ (2011Uk4e, man, senior lecturer, early 60s).

Sweden

In 2007, Swedish participants indicated that evaluations had been used to assess courses in an increasingly formalized and systematic manner (2007S1f). In 2011, much like their counterparts in the United Kingdom, they explained that national audits on teaching quality had become a trend in Swedish higher education. As one professor stated,

It's very popular to have these evaluations in Swedish universities. … It's good that there are audits. Because people shape up, and I know somebody is coming. In the old days when I was a lecturer, we knew that nobody would care. It was up to the individual lecturer. Now, there's a risk you might be punished in some way (2011S1b, man, full professor, retired, late 60s).

Participants explained that teaching quality assessments had manifested at the institutional level, and in keeping with findings from 2007, nearly all of them (2011S1d,e,f,h,2b,d,g,i,3e,g,h) cited the National Agency for Higher Education (Högskoleverket) as an example of an auditing body, whose assessments began at the national level and have since been executed at institutions.

Conversely, several participants reported little systematic attention to the evaluation of teaching quality in the workplace and said they were far less intensely monitored than their counterparts in the United Kingdom. ‘I think in Sweden, here in this department … we’ve been a little sheltered from this big pressure. If you look in the United Kingdom, there’s an enormous pressure to produce and to publish’ (2011S1h, woman, associate professor, late 30s).

Many Swedish participants expressed ambivalence towards more systematic evaluations and viewed intensified teaching assessments as unhelpful. Others, however, mentioned several benefits of the evaluations, and many perceived that systematic teaching assessment and evaluation could improve the quality of teaching only if certain conditions were met. A few participants considered the auditing efforts to be useless because they serve only as a source of legitimacy (2007S3d) or necessity (2007S2b) or lack standardization (2007S1d). They argued that it mattered little what evaluations find because the process is hardly formalized and because no secondary feedback loop exists that affects salaries or influences promotions (2007S1e,f,g,2b).

In 2011, however, several Swedish participants mentioned that the benefits (2011S2d) of the developments had tremendously influenced (2011S1h) the quality of education systems and generated comparable levels of quality across Europe (2011S2d). Specific examples included the restructuring of a certain semester (2011S3f) and the more thorough evaluation of a master's thesis (2011S1i,2b). However, only a few participants (2011S1f,g) indicated that the efforts had improved the quality of teaching because they provide structure and direction in academic work:

Evaluations … force you to make it very clear what you mean by “quality” all the time … [There are] not really any drawbacks. It's very good to have a stepwise system so teachers and students can see if they can go on with it (2011S3f, man, lecturer, late 40s).

Some Swedish participants expressed great dislike for such a systematic approach to teaching assessment. Among them, 2011S3d (man, lecturer, mid 50s) explained,

We’re trying to develop a new platform [for discussing evaluations and teaching quality]. … I don't care for their bloody platform. I can work without it. It doesn't work, so we work in our own way. … It's blocking ideas in the education system. I hate it [the platform].

Others mentioned that teaching could benefit from systematic evaluation but under certain conditions only. However, when systematic feedback loops are weak, teachers sense little incentive for improving their teaching. Several participants (2011S1i,2d,f,i,3b) explained that because evaluations have often been completed arbitrarily – for example, by dissatisfied students only (2011S2d,3d) – and receive low response rates of, for instance, 20%–30% (2011S2i), they have had little effect on improving teaching: ‘If you do a course evaluation and people just write anything, you don't really want to read it because they can write anything, like “You have ugly clothes”’ (2011S1i, woman, lecturer, late 20s).

Although the tasks of university employees in Sweden have become increasingly complex, the time allocated to their work has not changed, which has caused disillusionment with the profession for some participants:

There was a side of the career I hadn't seen, really, or thought about at all. Teaching: yes. Reading, writing, producing a thesis: yes. But the sort of administrative, managerial side of it I hadn't seen. That perception has changed (2011S2i, man, lecturer, mid-40s).

Indeed, performance measurement for teaching and research in Sweden has come to involve various administrative tasks that steal time from research.

Generally, the Swedish participants were eager to describe their views on the inherent quality of teaching, particularly the immeasurable aspects of teaching quality (2011S1d,e,f,g,h,i,2b,d,i,3d,f,g). As one such participant explained, ‘Quality in teaching is when there's light in the eyes of the students, when they’re really interested, and you see that they’re listening, like sucking up what you’re saying (laughs)’ (2011S2f, woman, full professor, early 50s). Another added that if a teacher can ‘encourage their own curiosity about the subject – that's very difficult to measure – but if you can accomplish it … . That will give quality to teaching’ (2011S3f).

Netherlands

In the Netherlands, as in Sweden and the United Kingdom, systematic attention to teaching evaluations has grown (2007Nl2d,3a,3b,3d). Previously, courses were evaluated at random during the final examination or final lecture by distributing evaluation forms ‘with often very low responses and reliability’ that ‘end up somewhere’ (2007Nl3b, woman, lecturer, early 50s).

In 2011, the entire system of course evaluation in the Netherlands was institutionalized within the information and communications technology (ICT) infrastructure of the faculties, often in a bid to receive external accreditation (2011Nl2e,3b,4a). Although such institutionalization has made it easier to oversee the current state of evaluation, it has also resulted in a more distanced, formalized system of evaluation. According to one Dutch participant, ‘It leads to more stress because it's impersonal and anonymous and results in nothing positive. It doesn't improve the opportunity for getting more positive evaluations. You get the idea that they’re especially looking for something negative’ (2011Nl3b). However, other participants (2011Nl2d,3a,3e) reported more positive experiences with evaluations because they can ensure transparency and prompt interventions: ‘If it concerns an uninspired colleague who's never there, I understand the measures very well’ (2011Nl3a, man, lecturer, early 50s).

Dutch participants expressed numerous concerns about increased pressure upon their teaching performance, generally felt greater pressure while teaching and reported having less time to spend with each student (2011Nl1c,1d,2e,3b,3c,4a). When three participants (2007 and 2011Nl3a,b,d) failed to meet publication criteria, they were consequently excluded from the research school, afforded less time to conduct research and had to do more teaching instead.

Participants explained that, since 2007, a clearer primary feedback loop had developed (2011Nl1c,2e,3c) in which the findings of evaluations have been relayed to the lecturers themselves. Although such systematic evaluations have resulted in student feedback for lecturers, their response rates and reliability have been dismal and only weakly representative of the realities of courses (2011Nl3b,4a). Despite their dubious impact, the outcomes are systematically recorded in the performance files of faculty members (2011Nl3a,3b), which can cause significant stress: ‘If you have earned a B once [in a teaching evaluation], you’re approached by the director of education, and you have to explain yourself, figure out how to improve and write an enhancement plan’ (2011Nl3b). ‘If you earn a B twice, you might have to leave’ (2011Nl3a).

However, a secondary feedback loop for investigating whether lecturers make improvements in their courses in response to the outcomes of evaluations remains deficient (2011Nl2e,3c). Consequently, the system is subject to quick, easy solutions instead of more thorough improvements of teaching quality.

It [the evaluation system] is a beautiful system. But in daily practice, it works differently. … As a lecturer, you cover yourself for poor outcomes. … In your reflection on outcomes, you state that you’ll implement several improvements. But in practice, no one does. So they’re cheating. The procedures are so complex that actual improvements fail (2011Nl4a, man, full professor, early 60s).

Altogether, Dutch participants reported a fragile balance in the greater emphasis on evaluation and quality assessment. Although some perceived positive elements, they generally worried about the weak link between assessment and the actual inherent quality of teaching and about the lack of secondary feedback loops (2011Nl1c,2e,3b,3e,4a).

Longitudinal comparison of findings

The analysis of data from 2007 (Teelken Citation2012a, Citation2012b, Citation2015) revealed a clear shift towards more measurable standards of teaching and research performance in the workplaces of the participants from the three countries. Findings indicated more evaluation output and external influences at all universities, both in policy actions and according to the perceptions of participants. Participants showed a clear dislike of managerial measures imposed upon them, particularly if they resulted in heavier workloads. Compared to research, which is traditionally considered to be the autonomous domain of academics, teaching has come under close external scrutiny.

In the second round of interviews, participants emphasized that though certain developments, including the systematic assessment of teaching, had already commenced by 2007, such developments had since become more clearly implemented and institutionalized within organizations. In particular, assessments have been conducted in more organized ways, often as part of annual control loops, and performance results have had consequences for individual employees. The UK and Dutch participants reported that quality assurance in teaching has received more systematic attention, supported by ICT infrastructure, whereas their Swedish counterparts explained that though their national systems for quality assessment exerted significant influence, they considered themselves to be more sheltered from managerialism at the workplace and had experienced less standardization than UK faculty.

Participants had little reason to expect that new quality assurance regimes would generally improve teaching quality, though that goal often propels official justifications for policy initiatives from the managerialist perspective (e.g. Jarvis Citation2014). UK participants expressed more nuanced arguments in 2011 than in 2007; in particular, they accepted and understood the benefits of teaching assessments, despite perceiving a weak relationship with the assessment of teaching quality and its improvement. By contrast, Swedish participants regarded quality assessment more or less as an administrative exercise that only marginally improved teaching, if at all. Last, Dutch participants appreciated the benefits of the primary feedback loop connected to quality assurance, though they also worried that the more distanced manner of evaluation (e.g. via online surveys) would reduce the validity of the outcomes.

We distinguished three reasons why participants perceived a weak relationship between assessments of teaching quality and improved teaching. First, although such assessments seem to have become more institutionalized, participants explained that the corresponding secondary feedback loop (e.g. a check on whether suggested improvements have been implemented) is often deficient. Consequently, the scope of the assessments remains limited, and individual academics are left to decide how to respond to the outcomes.

Second, the disadvantages of performance measurement, including reduced autonomy for academics and the implementation of strategies and instruments that require time and energy and typically result in arbitrary outcomes, were openly acknowledged by participants. Nevertheless, several participants took those limitations in stride while becoming more strategic in their actions. For example, a Dutch participant stated that she just ‘plays the game’ and tries not to question the performance pressures too strongly (2011Nl2a, woman, associate professor, mid-40s).

Third, and perhaps most importantly, longitudinal analyses of the data revealed a gap between what participants considered to be the operational quality of teaching and its inherent quality. The participants acknowledged that teaching quality is measurable only to a limited extent and that its most important aspects are immeasurable. Several UK participants explained taking their own measures to learn how students had experienced their teaching:

We also have an opportunity to meet with the students informally at the end of a course to have a discussion, and also I have regular meetings with student representatives, so each class, each course will elect two or three reps. I meet with them every month, so those meetings allow me to evaluate how the course is going, and they feed into the end (2011Uk4e).

Despite criticism of teaching assessments, few participants complained about or reported resisting the idea of performance management or measurement for principled reasons. Far more often than in 2007, academics in 2011 resisted the performance systems imposed upon them for their instrumental features and perceived consequences. They acknowledged the disadvantages of performance measurement, including its weak link with the primary process of teaching and its inherent quality, but usually for pragmatic reasons.

Conclusions and theoretical reflection

A major impetus for conducting the longitudinal study was the ambiguous impact of managerialist developments upon the professional work ethos of academics. Our study, based on 100 interviews at faculties of social sciences and economics in the Netherlands, Sweden and the United Kingdom, revealed that performance management and measurement can be perceived as part of a continuous process that has been increasingly influential in the systems of higher education in all three countries. Performance measurement seems to have persevered in such systems, not so much for the supposed improvement of the quality of the primary process but because of its impact on the academic workforce (Leišytė Citation2016).

Under New Public Management and its nuanced conceptualization of managerialism (e.g. Deem, Hillyard, and Reed Citation2007), academics have found themselves under greater pressure to be accountable to their organizations. Accountability has been visible in the form of systematic teaching evaluations and primary feedback loops involving students, yet less often via secondary feedback loops involving staff, colleagues, competing institutions and university management. Arguably, the growth of managerialism and the audit culture are signs of distrust towards academic staff (Morley Citation2003).

Our analyses revealed that participants had experienced continual ambivalence towards quality assessment in teaching (Lomas Citation2007; O’Connor and O’Hagan Citation2016). They understood that quality assessment can ensure fairness, transparency and improved teaching quality, but they also perceived a weak relationship between assessment and the actual quality of teaching. In particular, they perceived attention to how assessments were established and conducted to be weak, and when asked about teaching quality, they often referred to the qualitative – and what they often called immeasurable – aspects of teaching.

The ambivalence expressed by participants can be explained by tensions between micro-institutional and professional theory. Participants were willing to use performance measurements to a certain extent, as long as their autonomy was not constrained, but consistently perceived their weak connection with teaching quality. Assessments had distanced academics from the primary process of teaching and from the organization, and academics resisted such organizational imperatives by referring to their professional status and identity, as well as by using the immeasurability of their contribution (e.g. ‘the light in the students’ eyes’) to legitimise their resistance. Previous scepticism and cynicism (Townley, Cooper, and Oakes Citation2003) at the individual level had been replaced, at the institutional level, by resilience and instrumental pragmatism and, at the individual level, by a sharpened focus on the operational aspects of teaching (particularly in the United Kingdom) and a differentiation between the measurable and immeasurable aspects of teaching quality. Such findings contribute to mounting research showing a more nuanced, hybridized picture of academics and their identities (Anderson Citation2008; Lea and Stierer Citation2011; Teelken Citation2015).

Paradoxically, with the emphasis on output control, universities have motivated academics to replace certain organizational values, including the ideal of the close-knit academic community (Powell and Owen–Smith Citation1998), with individual requirements when they pursue academic careers. Participants had successfully negotiated tensions concerning quality assessment by demonstrating flexibility and by building new types of academic identities while remaining connected to the inherent quality of teaching on their own terms (Trowler Citation1998; Anderson Citation2008).

Criticism of the limited measurability of teaching quality can be regarded as an implicit plea for a more developmental, discursive and reflexive use of performance measurement. Lomas’s (Citation2007) interviews at UK universities demonstrated that his participants perceived quality to be more related to fitness for purpose and accountability than to transformation or improvement. Our participants also preferred a more discursive, subjective perception of quality, as opposed to the current mechanistic, objective approach, via, for example, oral evaluations (2011S1d, 2011Uk4e) or peer review among lecturers (2011S1f). Instead of commenting on the general principle of performance measurement, participants often concentrated on its actual instruments – for instance, the disadvantages of ICT-based evaluation (2011Nl3a,b) and low response rates (2011S2i). By perceiving that relationship, they remained connected with their professional identity. Developing and investigating interactive modes of quality assessment that do justice to academics’ professional identity deserves further exploratory and longitudinal research.

Acknowledgements

This research received no particular funding; consequently, no conflict of interest arises. I am immensely grateful to my eight master's students for carrying out most of the interviews. Sincere appreciation also goes to Dr. Ine Gremmen, Dr. Leonore van den Ende and two anonymous reviewers for providing feedback on this paper.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Christine Teelken works as an Associate Professor at the Vrije Universiteit Amsterdam, Faculty of Social Sciences. Her research interests involve higher education organizations and academic careers. She is very active in the higher education network of the European Educational Research Area and has published widely in higher education and public administration journals.

References

  • Ackroyd, S., I. Kirkpatrick, and R. Walker. 2007. “Public Management Reform and its Consequences for Professional Organisation: A Comparative Analysis.” Public Administration 85 (1): 9–26. doi: 10.1111/j.1467-9299.2007.00631.x
  • Adcroft, A., and R. Willis. 2005. “The (un)Intended Outcome of Public Sector Performance Measurement.” International Journal of Public Sector Management 18: 386–400. doi: 10.1108/09513550510608859
  • Aertz, Koen. 2013. “Zo kan het niet langer aan de universiteit” [It Can No Longer Go On Like This at the University]. De Morgen, August 21. Accessed 21 August 2013. http://www.demorgen.be/dm/nl/2461/Opinie/article/detail/1690107/2013/08/21/Zo-kan-het-niet-langer-aan-de-universiteit.dhtml.
  • Anderson, G. 2008. “Mapping Academic Resistance in the Managerial University.” Organization 15 (2): 251–270. doi: 10.1177/1350508407086583
  • Ayers, D. F. 2012. “When Managerialism Meets Professional Autonomy: The University ‘Budget Update’ as Genre of Governance.” Culture and Organization 20 (2): 98–120.
  • Barry, J., J. Chandler, and H. Clark. 2001. “Between the Ivory Tower and the Academic Assembly Line.” Journal of Management Studies 38 (1): 88–101. doi: 10.1111/1467-6486.00229
  • Bauer, M., and M. Kogan. 1997. “Evaluation Systems in the UK and Sweden: Success and Difficulties.” European Journal of Education 32 (2): 129–143.
  • Bleiklie, I., J. Enders, and B. Lepori. 2015. “Organizations as Penetrated Hierarchies: Environmental Pressures and Control in Professional Organizations.” Organization Studies 1–24. doi:10.1177/0170840615571960.
  • Boeije, H. 2012. Analyseren in kwalitatief Onderzoek, Denken en Doen. Den Haag: Boom, Lemma.
  • Bologna Follow-up Group. 2007. Bergen to London 2007, Secretarial Report on the Bologna Work Programme 2005–2007. Accessed 14 February 2017, http://webarchive.nationalarchives.gov.uk/20100202100434/http://www.dcsf.gov.uk/londonbologna/.
  • Bryson, C. 2004. “The Consequences for Women in the Academic Profession of the Widespread Use of Fixed Term Contracts.” Gender, Work & Organization 11: 187–206. doi: 10.1111/j.1468-0432.2004.00228.x
  • Chan, K. 2001. “The Difficulties and Conflict of Constructing a Model for Teacher Evaluation in Higher Education.” Higher Education Management 13 (1): 93–111.
  • Chandler, J., J. Barry, and H. Clark. 2002. “Stressing Academe: The Wear and Tear of the New Public Management.” Human Relations 55 (9): 1051–1069. doi: 10.1177/0018726702055009019
  • Christensen, T., and P. Lægreid. 2001. New Public Management. Aldershot: Ashgate Publishers.
  • Complete University Guide. 2011. Accessed 12 December, 2011. www.thecompleteuniversityguide.co.uk/league-tables/rankings.
  • Czarniawska, B., and K. Genell. 2002. “Gone Shopping? Universities on Their Way to the Market.” Scandinavian Journal of Management 18 (4): 455–474. doi: 10.1016/S0956-5221(01)00029-X
  • Davies, A., and R. Thomas. 2002. “Managerialism and Accountability in Higher Education: The Gendered Nature of Restructuring and the Costs to Academic Service.” Critical Perspectives on Accounting 13: 179–193. doi: 10.1006/cpac.2001.0497
  • De Boer, Harry, Jürgen Enders, and Liudvika Leišytė. 2006. “Public Sector Reform in Dutch Higher Education: The Organizational Transformation of the University.” Public Administration 85 (1): 27–46. doi: 10.1111/j.1467-9299.2007.00632.x
  • Deem, R. 1998. “‘New Managerialism’ and Higher Education: The Management of Performances and Cultures in Universities in the United Kingdom.” International Studies in Sociology of Education 8 (1): 47–70. doi: 10.1080/0962021980020014
  • Deem, Rosemary, S. Hillyard, and M. Reed. 2007. Knowledge, Higher Education, and the New Managerialism, The Changing Management of UK Universities. Oxford: Scholarship Online.
  • Devinney, T., G. R. Dowling, and N. Perm–Ajchariyawong. 2008. “The Financial Times Business Schools Ranking: What Quality is This Signal of Quality?” European Management Review 5 (4): 195–208. doi: 10.1057/emr.2008.14
  • Engebretsen, E., K. Heggen, and H. A. Eilertsen. 2012. “Accreditation and Power: A Discourse Analysis of a New Regime of Governance in Higher Education.” Scandinavian Journal of Educational Research 56 (4): 401–417. doi: 10.1080/00313831.2011.599419
  • Eurydice. 2010. Accessed June 1, 2010. http://eacea.ec.europa.eu/education/eurydice/eurybase_en.php#description.
  • Greenwood, R., and B. Hinings. 1996. “Understanding Radical Organisational Change. Bringing Together the Old and the New Institutionalism.” Academy of Management Review 21 (4): 1022–1054. doi: 10.5465/amr.1996.9704071862
  • Hackett, E. J. 1990. “Science as a Vocation in the 1990s: The Changing Organizational Culture of Academic Science.” The Journal of Higher Education 61 (3): 241–279.
  • Jarvis, D. S. L. 2014. “Policy Transfer, Neo-Liberalism or Coercive Institutional Isomorphism? Explaining the Emergence of a Regulatory Regime for Quality Assurance in the Hong Kong Higher Education Sector.” Policy and Society 33: 237–252. doi: 10.1016/j.polsoc.2014.09.003
  • Kehm, B. 2012. “New Forms of Governance in Higher Education: The Case of Germany.” DAAD conference, 13–15 December, Potsdam, Germany.
  • Kehm, B., and U. Lanzendorf, eds. 2006. Reforming University Governance: Changing Conditions for Research in Four European Countries. Bonn: Lemmens/Verlag.
  • Kipping, M., and I. Kirkpatrick. 2013. “Alternative Pathways of Change in Professional Services Firms: The Case of Management Consulting.” Journal of Management Studies 50 (5): 777–807. doi: 10.1111/joms.12004
  • Kirkpatrick, I., and S. Ackroyd. 2003. “Transforming the Professional Archetype? The New Managerialism in UK Social Services.” Public Management Review 5 (4): 511–531. doi: 10.1080/1471903032000178563
  • Kolsaker, A. 2008. “Academic Professionalism in the Managerialist Era: A Study of English Universities.” Studies in Higher Education 33 (5): 513–525. doi: 10.1080/03075070802372885
  • Lea, M. R., and B. Stierer. 2011. “Changing Academic Identities in Changing Academic Workplaces: Learning From Academics’ Everyday Professional Writing Practices.” Teaching in Higher Education 16 (6): 605–616. doi: 10.1080/13562517.2011.560380
  • Leišytė, L. 2016. “New Public Management and Research Productivity – a Precarious State of Affairs of Academic Work in the Netherlands.” Studies in Higher Education 41 (5): 828–846. doi:10.1080/03075079.2016.1147721.
  • Lomas, Laurie. 2007. “Zen, Motorcycle Maintenance and Quality in Higher Education.” Quality Assurance in Education 15 (4): 402–412. doi: 10.1108/09684880710829974
  • Meyer, H. D. 2002. “The new Managerialism in Education Management: Corporatization or Organizational Learning?” Journal of Educational Administration 40 (6): 534–551. doi: 10.1108/09578230210446027
  • Morley, L. 2003. Quality and Power in Higher Education. Buckingham: Society for Research into Higher Education and Open University Press.
  • Muzio, D., D. M. Brock, and R. Suddaby. 2013. “Professions and Institutional Change: Towards an Institutional Sociology of the Professions.” Journal of Management Studies 50 (5): 699–721. doi: 10.1111/joms.12030
  • O’Connor, P., and C. O’Hagan. 2016. “Excellence in University Academic Staff Evaluation: a Problematic Reality?” Studies in Higher Education 41 (11): 1943–1957. doi: 10.1080/03075079.2014.1000292
  • Paradeise, C., and J. C. Thoenig. 2013. “Academic Institutions in Search of Quality: Local Orders and Global Standards.” Organization Studies 34 (2): 189–218. doi: 10.1177/0170840612473550
  • Parker, M., and D. Jary. 1995. “The McUniversity: Organization, Management and Academic Subjectivity.” Organization 2 (2): 319–338. doi: 10.1177/135050849522013
  • Pollitt, C., and G. Bouckaert. 2004. Public Management Reform: A Comparative Analysis. Oxford: Oxford University Press.
  • Powell, W. W., and J. A. Colyvas. 2008. “Microfoundations of Institutional Theory.” In The Sage Handbook of Organizational Institutionalism, edited by R. Greenwood, C. Oliver, R. Suddaby, and K. Sahlin, 276–298. London: Sage Publications Ltd.
  • Powell, W. W., and J. Owen–Smith. 1998. “Universities and the Market for Intellectual Property in the Life Sciences.” Journal of Policy Analysis and Management 17 (2): 253–277. doi: 10.1002/(SICI)1520-6688(199821)17:2<253::AID-PAM8>3.0.CO;2-G
  • Powell, W. W., and C. Rerup. 2017. “Opening the Black Box: The Microfoundations of Institutions.” In The Sage Handbook of Organizational Institutionalism (2nd edition), edited by R. Greenwood, C. Oliver, T. B. Lawrence, and R. E. Meyer. Thousand Oaks: Sage.
  • Reay, T., and C. R. Hinings. 2009. “Managing the Rivalry of Competing Institutional Logics.” Organization Studies 30 (6): 629–652. doi: 10.1177/0170840609104803
  • Sarrico, Cláudia S., Maria J. Rosa, Pedro N. Teixeira, and Margarida F Cardoso. 2010. “Assessing Quality and Evaluating Performance in Higher Education: Worlds Apart or Complementary Views?” Minerva 48: 35–54. doi: 10.1007/s11024-010-9142-2
  • Schäfer, L. O. 2016. Performance Assessment in Science and Academia: Effects of the RAE/REF on Academic Life. Working paper no. 7, September, Higher Education Funding Council for England.
  • Scott, W. R. 2008. “Lords of the Dance: Professionals as Institutional Agents.” Organization Studies 29 (2): 219–238. doi: 10.1177/0170840607088151
  • Smeenk, S. G. A., R. N. Eisinga, J. A. C. M. Doorewaard, and J. C. Teelken. 2006. Organizational Commitment among European University Employees. DANS Data Guide Volume 1. The Hague: Data Archiving and Networked Services (DANS).
  • Smeenk, S. G. A., J. C. Teelken, J. A. C. M. Doorewaard, and R. N. Eisinga. 2008. “An International Comparison of the Effects of HRM Practices and Organisational Commitment on Quality of Job Performances among European University Employees.” Higher Education Policy 21: 323–344. doi: 10.1057/hep.2008.12
  • Steinhardt, I., C. Schneijderberg, N. Götze, J. Baumann, and G. Krücken. 2017. “Mapping the Quality Assurance of Teaching and Learning in Higher Education: the Emergence of a Specialty?” Higher Education 74: 221–237. doi: 10.1007/s10734-016-0045-5
  • Suddaby, R. 2010. “Challenges for Institutional Theory.” Journal of Management Inquiry 19: 14–20. doi: 10.1177/1056492609347564
  • Suddaby, R., and R. Greenwood. 2005. “Rhetorical Strategies of Legitimacy.” Administrative Science Quarterly 50: 35–67. doi: 10.2189/asqu.2005.50.1.35
  • Swedish National Agency for Higher Education. 2007. How Did Things Turn Out? Final Report on the Swedish National Agency for Higher Education’s quality appraisals 2001–2006. Report 2007:51R.
  • Swedish National Agency for Higher Education. 2008. National Quality Assurance System for the period 2007–2012. Report 2008:4R.
  • Teelken, C. 2012a. “Compliance or Pragmatism, How Do Academics Deal with Managerialism in Higher Education? A Comparative Study in Three Countries.” Studies in Higher Education 37 (3): 271–290. doi: 10.1080/03075079.2010.511171
  • Teelken, C. 2012b. “Academic Leadership and Its (Perceived) Effects on Professional Autonomy.” In Leadership in the Public Services – Promise and Pitfalls, edited by C. Teelken, E. Ferlie, and Mike Dent. Abingdon: Routledge.
  • Teelken, C. 2015. “Hybridity Coping Mechanism and Academic Performance Management: Comparing Three Countries.” Public Administration 93 (2): 307–323. doi: 10.1111/padm.12138
  • Townley, B., D. J. Cooper, and L. Oakes. 2003. “Performance Measures and the Rationalization of Organizations.” Organization Studies 24: 1045–1071. doi: 10.1177/01708406030247003
  • Trow, M. 1994. “Managerialism and the Academic Profession: The Case of England.” Higher Education Policy 7: 11–18. doi: 10.1057/hep.1994.13
  • Trowler, P. R. 1998. Academics Responding to Change: New Higher Education Frameworks and Academic Cultures. Buckingham, England: Society for Research into Higher Education and Open University Press.
  • van Dijk, S., H. Berends, M. Jelinek, A. G. L. Romme, and M. Weggeman. 2011. “Micro-Institutional Affordances and Strategies of Radical Innovation.” Organization Studies 32 (11): 1485–1513. doi: 10.1177/0170840611421253
  • Westerheijden, D. F., E. Epping, M. Faber, L. Leisyte, and E. de Weert. 2013. Comparative IBAR Report WP9: Stakeholders in Higher Education. Enschede: CHEPS.
  • Wihlborg, M., and C. Teelken. 2014. “Striving for Uniformity, Hoping for Innovation and Diversification, A Critical Review Concerning the Bologna Process – Providing an Overview and Reflecting on the Criticism.” Policy Futures in Education 12: 1084–1100. doi:10.2304/pfie.2014.12.8.1084.
  • Wilkesmann, U. 2015. “Imaginary Contradictions of University Governance.” In Incentives and Performance: Governance of Research Organizations, edited by I. Welpe, J. Wollersheim, S. Ringelhan, and M. Osterloh, 189–206. Cham: Springer.
  • Winter, R. P., and W. O’Donohue. 2012. “Academic Identity Tensions in the Public University: Which Values Really Matter?” Journal of Higher Education Policy and Management 34 (6): 565–573. doi: 10.1080/1360080X.2012.716005
  • Yin, R. K. 2014. Case Study Research. 5th ed. Los Angeles, CA: Sage.