
Assessing quality assurance in higher education: quality managers’ perceptions of effectiveness

Markus Seyfried & Philipp Pohlenz
Pages 258-271 | Received 15 Jan 2018, Accepted 07 May 2018, Published online: 17 May 2018

ABSTRACT

The present article offers a mixed-methods perspective on the investigation of determinants of effectiveness in quality assurance at higher education institutions. We collected survey data from German higher education institutions to analyse the degree to which quality managers perceive their approaches to quality assurance as effective. Based on this data, we develop an ordinary least squares regression model which explains perceived effectiveness through structural variables and certain quality assurance-related activities of quality managers. The results show that support by higher education institutions’ higher management and cooperation with other education institutions are relevant preconditions for higher perceived levels of quality assurance effectiveness. Moreover, quality managers’ role as promoters of quality assurance exhibits significant correlations with perceived effectiveness. In contrast, sanctions and the perception of quality assurance as another administrative burden reveal negative correlations.

Introduction

Quality of teaching and learning has become a major strategic issue in tertiary education systems across the globe over the past decades (Harvey and Williams 2010; Enders and Westerheijden 2014). In Europe, the Bologna process, as well as other concurrent developments, has hastened the introduction and elaboration of institutionalized quality assurance (QA) and quality management (QM) mechanisms.[1] Most importantly, under the new public management paradigm, (standardized) comparison of educational outcomes, rankings, and a higher degree of university autonomy and accountability have become an integral part of university managers’ day-to-day work (Broucker and de Witt 2015; van Vught and de Boer 2015).

The Bologna process strives to make degrees and learning outcomes more comparable across European university systems as an aid to increasing student and staff mobility across European higher education institutions (HEIs) (Teichler 2012). Thus, comparability of individual universities’ provisions has become a core part of the reforms carried out as part of the Bologna process, resulting in the establishment of formalized external QA mechanisms (e.g. external programme accreditation) and internal QA mechanisms (Bollaert 2014). These mechanisms are supposed to draw on certain sets of quality standards, most importantly the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) (ENQA 2015). Other concurrent developments that have led to an increased awareness of the relevance of QA are the massification and internationalization of tertiary education (Bollaert 2014). Universities are granting wider access to new target audiences which they would most probably not have addressed just one or two generations ago. Among these are ‘non-traditional students’ who originate from non-academic family backgrounds or who enter university education with a vocational education background and professional experience instead of a secondary school leaving certificate. Another group that is rising in numbers is mature students, returning to higher education after a professional career (‘life-long learning’).

An increasingly heterogeneous student population adds a different dimension to the quality debate (Krempkow, Pohlenz, and Huber 2015). This dimension refers to the learning needs of the ‘new learners’ and to the mode of teaching in the higher education sector rather than to the control function of QA mechanisms, which aim at improving processes and workflows in the organization of HEIs. Against that backdrop, QA and QM are discussed in the literature more as a means to develop a quality culture which promotes the willingness of academic staff to use evidence (produced by QA procedures) to innovate their teaching and their attitudes towards higher education (scholarship of teaching and learning) in response to changes in the environment and changing target audiences.

For many academic staff (and other stakeholders too), however, the rapidity and impact of the above-mentioned change processes have been more of a burden than an opportunity. This is why QA as an idea and as a process has become an issue of controversial debate and something that encounters resistance, at least when it is first introduced (Anderson 2006, 2008). Over the years a huge body of literature has accumulated, with studies suggesting that evaluation results do not reliably and validly reflect teaching quality and can thus not be used as a basis for management decisions, particularly when these decisions have budgetary relevance (for an overview, see Pohlenz 2009; Shevlin et al. 2000; Zhao and Gallant 2012). In a broader perspective, different authors highlight various effects of evaluations that are not necessarily related to quality (Pollitt 1998; Dahler-Larsen 2012). Other arguments against QA refer to the nature of academic teaching, which cannot, according to this type of argumentation, be broken down into measurable units and cause–effect relations that indicate any kind of impact by teachers on student learning achievements. The question of what quality in higher education actually is has been addressed on many occasions, with the implication that, even if quality cannot easily be defined, this does not mean that it cannot be measured. However, it underlines the fact that measurement of quality is not an easy task (for a literature overview, see Harvey and Green 1993; Owlia and Aspinwall 1996; Reeves and Bednar 1994).

Even though the existence of external and internal QA is widely accepted nowadays, the debate outlined above remains unresolved, at least in many universities where scholars still accuse QA of being both a bureaucratic burden and an illegitimate interference from a central management – namely the Rector's office – which holds too much managerial power in its hands in order to ‘regulate and discipline academics’ (Lucas 2014, 218). QA officials (‘quality managers’) who are in charge of implementing the respective procedures (evaluation studies, internal quality audits and the like) are continually in the awkward position of having to justify their approaches and methods. As a consequence, QA practitioners try to make their research instruments more sophisticated in order to keep pace with the methodological debate being conducted around QA, a debate which challenges both the methodological and the managerial approaches of the quality managers’ work.

Given the ongoing debate and disputes characterized above about the legitimacy of and justification for QA in higher education, evaluation of the effects of QA becomes an inescapable necessity. However, studies on the impact evaluation of education policies and practices often focus on purely methodological issues such as identifying the econometric methods available (Schlotter, Schwerdt, and Woessmann 2009). That research stream concentrates on the quest for causal relationships, or cause–effect relations between inputs and outputs or outcomes, and on the modes of research which are applicable to tracing these relationships, e.g. randomized controlled trials (RCTs). It should be noted, however, that in the field of higher education the applicability of experimental research designs is very limited: experimental and control groups can hardly be separated in the ‘natural’ field of higher education; experiments are hardly replicable without learning effects influencing the test persons; and so on (Sullivan 2011). Thus, the question arises which alternatives to a randomized controlled trial research paradigm exist that are acceptable with regard to the objective of establishing the effectiveness of QA procedures in HEIs. Among these are methods using panel data (see e.g. Schlotter, Schwerdt, and Woessmann 2009, 19ff.), in particular the before-after comparison approach (see e.g. Leiber, Stensaker, and Harvey 2015, 297), but also other, mutually complementary, non-exclusive methods (Leiber, Stensaker, and Harvey 2015, 297–298).

We highlight one of these further approaches, namely ‘assessment of intervention effects by participants, key informants and experts’ (Leiber, Stensaker, and Harvey 2015, 298). In particular, we analyse the degree to which quality managers perceive their own approaches to QA as effective or conducive to its overall aims. We develop our argument by first digging deeper into the methodological issue of investigating the causal effects of higher education on learning outcomes. After this, we outline a concept of quality managers’ perception of the effectiveness of their work. Finally, we present the results of an empirical study of the latter question.

Causal relationships between inputs and outputs: the changing function of QA in higher education

Investigating causal relationships between inputs (such as teaching and teacher attitude) and (desired) outcomes (such as student learning achievement) is a complex endeavour which requires methodological rigour for different reasons:

  1. Learners are co-creators or co-producers of the teaching and learning process and its outcomes. Student achievement varies not only with the quality of the teaching but also with other sources of variance, such as the students’ aptitude or the time they spend on extra-curricular activities or jobs;

  2. Quality means different things to different stakeholders (such as labour market representatives, the scientific community, the students themselves, the wider public, the different political arenas, etc.). Thus, it remains difficult to decide what relevant and methodologically sound indicators (in terms of their validity and reliability) could be used to measure quality;

  3. In order to trace causal relationships between input and output (and outcomes) one would have to theorize the specific impact that the specific features of a study programme have on the participating students. Empirically testable hypotheses would then, for instance, refer to the effectiveness of particular teaching methods on learning outcomes (e.g. ‘the more e-learning experience, the better the overall learning result’). Since higher education is delivered in a dynamic and changing environment, it seems unrealistic to attempt to create laboratory conditions (replicability of tests, constant conditions, no changes in environmental variables), which would be a methodological prerequisite for testing such hypotheses empirically (in randomized controlled trials).

Although such real-world complexities and problems make it difficult to implement and maintain laboratory conditions in the daily business of higher education, Manville et al. (2015) state that the raison d’être of QA is precisely this: the investigation of causal relationships between inputs and outcomes, for example in order to serve the public interest in greater transparency of HEIs regarding public expenditure in the field of higher education. The public implicitly, and politicians explicitly, request HEIs to perform evaluations (or any other kind of QA procedures) to provide sound evidence of the presence or absence of quality in higher education.

In the QM and evaluation research community, on the other hand, the limitations of procedures that help to uncover the cause–effect relations between teaching and learning were already an issue back in the 1990s and have led to alternative practices:

Measuring programme outcomes requires a great deal of rigour; it's demanding. When programme evaluators discovered how difficult it was, many abandoned the attempt and decided to focus on process, which is generally more tractable. (…) Rather than assuming the difficult tasks of improving the designs and developing better measures (…) programme evaluators decided to focus on something else – process. (Smith 1994, 217)

In contrast to more rigorous quasi-experimental (e.g. RCT-based) approaches to evaluation, process-oriented QM practice places comparatively more emphasis on the causal logic underlying a (study) programme's planning and implementation: What are the teachers’ assumptions regarding the effectiveness of their teaching methodology? How can improvements be made in the course of subsequent implementation cycles? These are questions that would be addressed under an approach to QA which is more formative and focused on the programme's implementation and continuous improvement. Methodologically and epistemologically speaking, such an approach would follow the logic of design-based research on teaching and learning processes, which consists of an iterative cycle of programme design, implementation, reflection and improvement (Reimann 2011).

Relevant evidence can be gathered from different sources, such as student surveys, collegiate teaching inspections, university statistics, interviews, focus groups, etc., or, in short, from data that follows a naturalistic evaluation paradigm in which social behaviour is observed in its natural setting (e.g. a classroom) as it occurs (Guba and Lincoln 1981).

Quality assessment that follows this form of ‘implementation research’ contrasts with the more control-oriented approach in which the quality manager's function is to detect (and in many cases, sanction) deviations from whatever kind of quality standard is being applied. The practice of higher education QA, however, seems largely to follow such a formative approach, instead of trying to analyse teaching and learning in an RCT logic of causal analyses (Smith 1994). Generally, the aim of the formative kind of QA is to ‘describe and analyse the process of implemented programme activities – management, strategies, operations, costs, interactions among clients and practitioners and so forth – so as to improve them’ (Chelimsky 1985, 2). The desired effect of formative QA or evaluation research is to stimulate (organizational) learning about development needs and potentials for further improvement (Widmer 2000). This learning is stimulated during the implementation process itself, since evaluation results are fed back into the process. This form of QA can hardly be utilized for summative purposes (such as a retrospective assessment of the merit or worth of a programme), since its application influences the implementation process itself. However, it can be beneficial for QA and quality development purposes since it enables the researchers and practitioners involved to detect undesired programme effects, whose impact can then be minimized during the implementation process. Methodological requirements for robust evaluation research can be relaxed in favour of such formative effects of an evaluation (Chelimsky 1985, 2) in return for its function of stimulating teachers’ and students’ (and administrators’) self-reflection on the programme.

The role of the quality manager does, of course, change in such an approach to QM: he or she acts rather as a consultant to those who are involved in the teaching and learning process and to those who are in charge of taking action whenever needed. This consultation can address individual teachers (in order to support concrete teaching practice), teams of teachers (in order to implement curricular reforms), and top-level managers (e.g. in order to assess the institution's teaching practice and outcomes against its mission statement). In particular, the latter perspective is of increasing relevance since universities are developing more and more in the direction of self-regulating and managerially administered institutions under the previously mentioned new public management approach to university governance. There is a growing body of literature on the reshaping of universities to become more formalized corporate actors, which requires them to manage themselves with clear organizational structures and evidence-based internal policies (Ramirez and Christensen 2013; Etzkowitz et al. 2000). Nevertheless, it is important to keep in mind that there are well-known dilemmas which cannot be overcome easily (Larsen, Maassen, and Stensaker 2009).

In this context, there is also debate on what counts as evidence. What information is needed when ‘tough decisions’ – for instance concerning budget allocations – need to be taken? And how robust do evaluation results need to be in order to legitimize managerial action? QA practice thus needs to strike a balance between these different functions and align its procedures and instruments accordingly. However, the function of (internal) QA or QM as a consultant to the central management level seems to be of growing importance, irrespective of the function outlined above of contributing to improving particular teaching and learning processes and study programmes.

As the functions of QA within universities’ quality development frameworks change, greater importance attaches to the question of how quality managers perceive their own role and the effectiveness of their approaches to QA. To what extent do quality managers regard their own practice as conducive to the overall objective of QA, namely to contribute to actual quality improvements? These questions have been addressed in this study and are described in the following sections.

The aims of quality assurance and quality managers’ self-concept

What impact QA has on quality managers’ self-concept has not yet attracted the attention of many researchers. However, in the notion of Whitchurch (2008), who describes quality managers as members of an emerging ‘third space’ located between academia and line management, questions concerning the perceived effectiveness of their work are already implied. The present article draws on that picture and investigates quality managers’ self-concept – not as a personal or psychological trait (in the sense of self-efficacy; Bandura 1977), but rather as a broader concept that indicates the quality managers’ perception of the impact of their work on QM's ultimate goal, which is actual quality improvement.

Our earlier outline of different approaches to QA – ranging from quality control to consulting decision makers – forms the background against which we address the question of quality managers’ self-perception of the efficacy of their own QA. What impact do ‘their’ QA mechanisms actually have on teaching and learning practice, and what features and outcomes of QA are helpful in innovating teaching and learning cultures? How can quality managers best play significant roles as consultants to the university management? To answer these questions, we consult data that sheds light on how quality managers consider the role and the impact of QA mechanisms in their university (see below for information on data and methods).

One of the main objects of quality managers’ self-perception is the effectiveness of their actions within their university. The ‘locus of control’ is in this case external: the effects of the QA approach are not – or at least, not necessarily or exclusively – attributed to the person's own competence or performance as a QA professional; rather, they are attributed to the external conditions under which QA is implemented. These can be influenced by the specific features of the university (e.g. university type, size, disciplinary culture, etc.). Most importantly, the practice of QA varies according to management decisions which are usually taken not by the quality managers themselves but by top-level management representatives: choosing a more centralized approach, with a central QA unit in charge of university-wide procedures, produces different effects – for example on the teaching staff's willingness or unwillingness to engage in QA – than does a decentralized responsibility located, for example, at department level. Taking a step back from the quality managers’ day-to-day actions, we are interested in their perception of their (or their university's) management approach and its effectiveness and impact. In the present study we thus examine different predictors of quality managers’ perception of the effectiveness of their work, and we assume that aspects like support from higher management levels, a sense of belonging to a community of professional practitioners, and the like are conducive to such self-perception. In contrast, aspects like external obligations (QA merely as a means of satisfying external demands, e.g. accreditations) are expected to be detrimental to quality managers’ sense of effectiveness with regard to their practice. In distinguishing these types of drivers, we claim neither that all externally located factors are necessarily negatively correlated with quality managers’ self-perception of their own efficacy, nor that all internal factors are positively correlated. External factors (e.g. sanctions) can also be a strong driver of the feeling of being capable of changing things in the university. In the following sections, we outline the data available for the research and report on the methodology and outcomes.

Data and methods

Our research follows a mixed-methods approach combining qualitative and quantitative data. By doing so, we attempt to avoid the flaws which either of these research paradigms has on its own (Haverland and Yanow 2012; Mertens and Hesse-Biber 2013). Implementing mixed-methods research does not simply mean analysing qualitative and quantitative data separately (Tashakkori and Teddlie 2003; Kelle 2006); rather, it means that different kinds of data have to be collected and analysed in an integrated way. For instance, the design of the (standardized) survey we conducted was not only developed on the basis of theoretical considerations but also drew on qualitative information derived from narrative interviews that had been conducted beforehand. Hence, this article combines different types of data on the opinions and perceptions of quality managers.

We use data from the above-mentioned nationwide survey, which was conducted in summer 2015 and sent to all HEIs where we were able to identify people in charge of QM at the central management level, excluding faculty/department staff involved with QM. Thus, the questionnaire was sent to all QM departments and their functional equivalents in all HEIs in Germany which fulfilled this criterion. This is the first survey among quality managers in HEIs on the particular topic of the effectiveness of QM. From our point of view, interviewing quality managers about their perception of the QA mechanisms’ effectiveness is beneficial because most of the interviewees have a scientific background and are thus able to provide a reasonable self-assessment against the criteria outlined above of scientific rigour and potential impact on quality development initiatives (change management).

The questionnaire covered the following topics: (1) general characteristics of the QM department, (2) purpose of and tasks involved in QM, (3) effectiveness of QM, (4) QM procedures and activities, (5) scepticism and resistance to QM, (6) capacities and professionalization in QM, (7) quality of study programmes and teaching, (8) biographical data and institutional background. Altogether 294 of 639 identified quality managers responded to our questionnaire, which equates to a participation rate of 46%.

Nevertheless, we controlled for the representativeness of our sample (see Table 1). Almost all parameters presented in Table 1 show nearly the same frequency distribution in our sample as in the general university population. For the variables ‘type of HEI’ (University, University of Applied Science, School of Arts and Music), ‘funding body’ (State-funded, Church-funded; privately funded institutions were excluded from the sample) and ‘gender’ (male, female) we see only marginal deviations. Hence, we conclude that the sample is representative. This is important for the generalizability of our results: statistically significant results allow for the inference that the relations we uncover in this article are also present in the population to which we refer.

Table 1. Sampling characteristics.
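Table 1 is reported as a descriptive comparison of frequency distributions. The authors do not state that they ran a formal test, but a check of this kind could be formalized, for instance, as a chi-square goodness-of-fit test per variable. The following sketch uses invented counts purely for illustration; the real figures are those in Table 1.

```python
from scipy.stats import chisquare

# Hypothetical respondent counts by type of HEI: University,
# University of Applied Science, School of Arts and Music.
sample_counts = [150, 120, 24]           # sums to the 294 respondents
population_shares = [0.50, 0.42, 0.08]   # hypothetical population shares
expected = [s * sum(sample_counts) for s in population_shares]

stat, p = chisquare(f_obs=sample_counts, f_exp=expected)
# A large p-value gives no evidence that the sample distribution
# deviates from the population distribution.
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```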

Based on our theoretical considerations, we calculated an ordinary least squares (OLS) regression model on the perceived overall effectiveness of QM at HEIs. This model includes variables which measure internal reasons (e.g. support from HEI management) and external reasons (e.g. the Bologna process) for the establishment of QM. The former imply a functional perspective; the latter concern the legitimacy conferred on universities when they meet the general expectation that today's universities have a QA department. In addition, we include a variable which measures the actual level of resistance to QM. All variables except actual resistance to QM are Likert-scaled from 1 to 6: low values indicate low levels of approval, while higher values indicate higher levels of approval. The resistance-to-QM variable is a dummy coded 0 (no resistance) or 1 (resistance). Our dependent variable measures the quality managers’ perception of the general effectiveness of QM at the HEIs concerned, again ranging from 1 to 6.
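To make the specification concrete, the following is a minimal sketch, in Python with statsmodels, of the kind of OLS model just described. The file name, variable names and item selection are hypothetical illustrations, since the survey instrument is not reproduced here; only the 1–6 Likert coding and the 0/1 resistance dummy follow the description above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per responding quality manager; Likert items coded
# 1 (low approval) to 6 (high approval), 'resistance' coded 0/1.
df = pd.read_csv("qm_survey.csv")  # hypothetical file name

# Model 1-style specification: internal and external motives for
# introducing QM, plus the resistance dummy, regressed on the
# perceived overall effectiveness of QM.
model = smf.ols(
    "effectiveness ~ support_management + cooperation_heis + bologna"
    " + esg + accreditation_prep + resistance",
    data=df,
).fit()
print(model.summary())  # coefficients, significance levels, R-squared
```

Models 2 and 3 below would reuse the same pattern, swapping in or adding the individual-level items (e.g. sanctioning tendencies, promotion of QM goals).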

Results

If we consider quality managers to be change agents in a higher education system (Pohlenz 2010), we can assume that external circumstances and internal attitudes influence their perceptions of the effectiveness of QM. Hence, the effectiveness of their work is characterized not only by their individual efforts in their day-to-day business but also by existing rules, norms and the institutional environment. To analyse these two perspectives, we calculate three different regression models (OLS). The first refers to the institutional environment and the overall purpose of QM. The second model refers to particular individual functions, and the third combines both perspectives (see Table 2). As mentioned above, our central dependent variable is the general effectiveness of the institutions’ QM as perceived by quality managers.

Table 2. Regression model on determinants of overall effectiveness of QM in HEIs as perceived by quality managers.

The first regression model mainly addresses the reasons for the introduction of QM, which can be either internal or external to the institution. Hence, the first model contains items for both of these potential drivers (e.g. support from HEI management or integrating existing approaches as internal reasons, and the Bologna process and the ESG as external reasons). In addition, there are variables, like preparation of accreditation and cooperation with other HEIs, for which it is reasonable to assume that they could be inspired equally by internal or by external processes and expectations. Finally, the model contains a control variable for ‘resistance’ because we assume that it may make a difference to the perception of the effectiveness of QM whether instruments are developed independently or prescribed externally.

As can be seen from the data in Table 2, Model 1 reveals the results for the motives for introducing QM at HEIs. The model exhibits an overall mediocre explanatory power (r = 0.43) and thus explains nearly 20% of the variance (r-squared ≈ 0.18). However, the results of the regression are very interesting. They show that variables like the Bologna process, the integration of existing approaches or the preparation of accreditation produce negative coefficients. That means that quality managers who described these motives as relevant to or decisive for the introduction of QA also perceive a generally lower effectiveness. Although only the coefficient for preparation of accreditation is significant, the results reveal that adjustments to certain standards and certain processes may diminish the effectiveness of QM as perceived by quality managers. Interestingly, the coefficient for the European standards for QA (ESG) is positively (but insignificantly) correlated with the dependent variable. Even this result seems reasonable, because standards need to be given life and require particular institutional knowledge for their implementation. This is one of the main mechanisms of ‘glocalization’ (Paradeise and Thoenig 2013), which means that global trends and standards are adapted and aligned with local demands, which may lead to institutional variance.

Furthermore, the coefficients for internal support from the university's management and for cooperation with other HEIs are positive and statistically significant. Hence, if quality managers enjoy the support of the university management, this strongly influences their perception of QM effectiveness. Additionally, cooperation with other universities reveals a significant positive effect, indicating that certain concepts and ideas may diffuse from institution to institution, while their implementation may vary between those institutions.

Remarkably, the coefficient of actual resistance is positive but insignificant, which would mean that actual resistance correlates with higher levels of perceived effectiveness of QM. This result could be interpreted to mean that resistance is not per se negatively connoted. On the contrary, it may actually help quality managers to develop an effective QM, or at least to perceive QM as an effective instrument, because QM can improve in response to resistance and even overcome it. However, because this particular coefficient is statistically insignificant, the result should not be overstated. Nevertheless, it highlights that, in terms of internal QM, our understanding of universities’ organizational set-up and institutional conflict lines is still very limited and requires further research.

The second model focuses on individual statements about how quality managers perceive their own role in QM. In our model, we consider statements that refer to a particular situation, namely when quality managers are faced with resistance by teachers. Again, the model produces a mediocre to low overall explanatory power (r = 0.41) and explains nearly 20% of the variance. Very interesting is the coefficient referring to the sanctioning power of QM: stronger tendencies to sanction are negatively correlated with the overall individual perception of the effectiveness of QM. The more quality managers consider possible sanctions, the lower is their perceived effectiveness of QM. All remaining coefficients are positively correlated with perceived effectiveness, while only the use of results of external QM procedures (e.g. external programme accreditation) and the promotion of goals produce significant results. These results reveal that quality managers also function as translators or communicators who feed the HEI system with relevant information from external procedures or from internal goals. Consequently, this information flow is positively correlated with the perceived effectiveness of QM.

In our final Model 3, we combine both perspectives, including external and internal motives for QA. However, some adjustments needed to be made: two variables had to be excluded on the basis of regression diagnostics. The variance inflation factor (VIF) indicated higher levels of multicollinearity, so we excluded the resistance variable; additionally, the coefficient for ‘I focus on results of external procedures of QM’ was excluded from the model. The remaining variables have VIF values below 1.7, which can be accepted as adequate for OLS modelling. To sum up, our final model exhibits a medium explanatory power of r = 0.56 (r-squared = 0.31) and yields nearly the same results as the single models presented previously.
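As a companion to the regression sketch above, the multicollinearity screening described here could look as follows in statsmodels. The data file and predictor names remain hypothetical; only the VIF threshold of 1.7 is taken from the text.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("qm_survey.csv")  # hypothetical file, as above
X = sm.add_constant(df[[
    "support_management", "cooperation_heis", "bologna", "esg",
    "accreditation_prep", "sanctions", "promotion_goals",  # hypothetical
]])

# One VIF per column of the design matrix (intercept included).
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
high = vif.drop("const")
print(high[high >= 1.7])  # candidates for exclusion at the 1.7 threshold
```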

Hence, the results show that, with regard to external motives, the support from the university's management and cooperation with other HEIs are very important for the general perception of the effectiveness of QM. While the first is not surprising, the second possibly indicates that, from a neo-institutional perspective (DiMaggio and Powell 1983), aspects such as membership of professional groups and networks seem to be a relevant factor for the quality managers’ perception of the effectiveness of their practice. In order to reduce uncertainty, quality managers engage in networks, and this can be seen as a reassurance of the relevance and the appropriateness of existing practices.

In contrast to this, the ‘preparation of accreditation’ variable reveals a negative and significant correlation with the perceived effectiveness of QM: if quality managers state that one of the main reasons for the introduction of QM was accreditation, this lowers the perceived effectiveness of QM. This seems reasonable, because accreditation does not emphasize curricular contents in detail or the consistency and coherence of QM approaches; it is rather a formal procedure to meet certain standards and is not necessarily linked to the effectiveness of QM. Moreover, at the individual level the communicator role of quality managers also yields results that remain stable in the final regression model. Only the coefficient for sanctions has reversed its direction and now reveals a positive effect on the overall perception of the effectiveness of QM. Although this could indicate that sanctions are positively associated with perceived effectiveness, the coefficient is close to zero and statistically insignificant and should thus not be overinterpreted.

Conclusions

Research on the impact of QA and QM in higher education, and particularly on quality managers’ perceptions of QA and QM effectiveness in HEIs, is still rather rare. The present article presents results on the perceived effectiveness of QM in teaching and learning at German HEIs. The data is based on a nationwide survey among quality managers conducted in 2015. It represents the effectiveness of QM as perceived by quality managers as a combination of several factors, such as the Bologna process, the ESG (ENQA 2015) and certain QM-related activities and motivations of quality managers and other HEI members.

The results reveal that on the institutional side three factors seem to be crucial. Firstly, the support from HEIs’ higher management: without the support of higher management or HEI leadership, QM in teaching and learning is a ‘toothless tiger’, and there would most probably be only limited chances of competing and deliberating with other actors within the institution. Unsurprisingly, the relevance of support by higher management levels is positively correlated with the perceived effectiveness of the QM approach. Secondly, the relevance of the preparation of accreditation is negatively correlated with perceived effectiveness, signalling that accreditation is a rather formal procedure and associated with lower levels of perceived effectiveness of QM. Thirdly, cooperation with other HEIs exhibits a positive correlation with perceived effectiveness, indicating that cooperation and networking between different universities is supportive in this regard.

If we consider certain QM-related activities and motivations of quality managers, we see different results. Here, only two variables seem to be relevant. Firstly, the attitude towards using the results of external QM procedures is positively correlated with the perceived effectiveness of QM. This seems to contradict the preparation-of-accreditation variable, but the contradiction is easily explained: while the accreditation variable focuses on processes, the motivational variable emphasizes results. Secondly, the mobilization of support for QM among academic staff is also positively correlated with perceived effectiveness. This result is in line with research on organizational change and on academic staff's resistance to QM. It shows that stronger tendencies towards promoting the goals of QM are associated with a higher perceived effectiveness of QM.

Discussing these results against a motivation theory background (e.g. Deci and Ryan 1985), it can be seen that autonomy (in the sense of independence from external demands and the opportunity to act according to internally driven, strategic considerations in the field of QA and quality development) and support from higher management best promote a sense of effectiveness. In contrast, the feeling of merely executing mandatory procedures decreases the quality managers’ perception of an effective approach to their assignment. Such a perception of their own professional role is very much in line with the way faculty perceive QA procedures in many cases: as a bureaucratic burden and an illegitimate interference by distant management levels in academic affairs. And it is very much in line with the self-perception of academic practice: the quest for truth, performed in self-regulation and independence on the one hand, and in social integration in professional (or academic) communities on the other.

If quality managers see themselves as being in a position not merely to execute administrative requirements but as part of an active network, and as a beneficial support to their universities’ managements and academia, this would support the notion of the emerging third space (Whitchurch 2008). In this sense quality managers act in an academic environment with the help of academic means (e.g. robust application of empirical research methods in educational evaluation procedures) but without belonging to academia in the narrow sense of the word.

In turn, one could say that – at least from the quality managers’ viewpoint – QA procedures can be used most beneficially when they are (a) embedded in a comprehensive strategy, with higher management and the QA unit working closely together, and (b) accepted both as a valuable contribution to the particular HEI's evidence-based management agenda and as an indispensable part of the HEI's research outputs.

Acknowledgements

Philipp Pohlenz thanks the organizers of the Impact Analysis Training Workshop Central Europe, which was held on 28 September 2016 in Mannheim, Germany, for inviting him to give a presentation on the subject of the present article. This publication reflects the views only of the authors, and the European Commission cannot be held responsible for any use that may be made of the information contained therein.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributors

Markus Seyfried is a researcher at the Chair for Politics and Government in Germany at Potsdam University, Potsdam (Germany), where he also earned a doctoral degree in political sciences. His research interests comprise public organisations with a focus on independence, accountability, auditing and evaluation as well as any topics related to quality in higher education.

Philipp Pohlenz holds the Chair of Higher Education Research and Academic Development at Otto-von-Guericke-University, Magdeburg (Germany). His research focuses on student achievements and organisational change in higher education. His teaching areas include evaluation research and key competencies. He earned a doctoral degree in social sciences from the University of Potsdam.

Additional information

Funding

This workshop was organized in the context of a three-year project on impact evaluation of quality assurance in higher education institutions, which was co-funded by the European Commission (grant number: 539481-LLP-1-2013-1-DE-ERASMUS-EIGF). Markus Seyfried thanks the Federal Ministry of Education and Research, Germany (BMBF) for funding the WiQu research project which has investigated procedural, structural and personnel influences on the impact of quality assurance departments (funding number: 01PY13003A/01PY13003B).

Notes

[1] Throughout this article, we refer to quality management as a set of management practices and routines implemented by universities in order to assure and/or develop higher education quality (the operational level), whereas quality assurance, as the overarching concept, refers to the goals, strategy and methodology of assuring and/or developing quality in higher education.

References

  • Anderson, Gina. 2006. “Assuring Quality/Resisting Quality Assurance: Academics’ Responses to ‘Quality’ in Some Australian Universities.” Quality in Higher Education 12 (2): 161–173. doi: 10.1080/13538320600916767
  • Anderson, Gina. 2008. “Mapping Academic Resistance in the Managerial University.” Organization 15 (2): 251–270. doi: 10.1177/1350508407086583
  • Bandura, Albert. 1977. “Self-Efficacy. Toward a Unifying Theory of Behavioural Change.” Psychological Review 84 (2): 191–215. doi: 10.1037/0033-295X.84.2.191
  • Bollaert, Lucien. 2014. A Manual for Internal Quality Assurance in Higher Education – with a Special Focus on Professional Higher Education. Brussels: EURASHE.
  • Broucker, Bruno, and Kurt de Witt. 2015. “New Public Management in Higher Education.” In The Palgrave International Handbook of Higher Education Policy and Governance, edited by Jeroen Huisman, Harry de Boer, David D. Dill, and Manuel Souto-Otero, 57–75. New York: Palgrave Macmillan.
  • Chelimsky, Eleanor. 1985. “Old Patterns and New Directions in Program Evaluation.” In Program Evaluation: Patterns and Directions, Vol. 6, edited by Eleanor Chelimsky, 1–35. Washington, DC: American Society for Public Administration.
  • Dahler-Larsen, Peter. 2012. The Evaluation Society. Stanford, CA: Stanford University Press.
  • Deci, Edward, and Richard M. Ryan. 1985. Intrinsic Motivation and Self-Determination in Human Behavior. New York: Springer.
  • DiMaggio, Paul J., and Walter W. Powell. 1983. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review 48 (2): 147–160. doi: 10.2307/2095101
  • Enders, Jürgen, and Don F. Westerheijden. 2014. “Quality Assurance in the European Policy Arena.” Policy and Society 33 (3): 167–176. doi: 10.1016/j.polsoc.2014.09.004
  • ENQA (European Association for Quality Assurance in Higher Education). 2015. Standards and Guidelines for Quality Assurance in the European Higher Education Area. Brussels: ENQA. Accessed 24 April 2018. http://www.enqa.eu/wp-content/uploads/2015/11/ESG_2015.pdf.
  • Etzkowitz, Henry, Andrew Webster, Christiane Gebhardt, and Branca Regina Cantisano Terra. 2000. “The Future of the University and the University of the Future: Evolution of Ivory Tower to Entrepreneurial Paradigm.” Research Policy 29 (2): 313–330. doi: 10.1016/S0048-7333(99)00069-4
  • Guba, Egon G., and Yvonna S. Lincoln. 1981. Effective Evaluation. Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. San Francisco: Jossey-Bass.
  • Harvey, Lee, and Diana Green. 1993. “Defining Quality.” Assessment and Evaluation in Higher Education 18 (1): 9–34. doi: 10.1080/0260293930180102
  • Harvey, Lee, and James Williams. 2010. “Fifteen Years of Quality in Higher Education (Part two).” Quality in Higher Education 16 (2): 81–113. doi: 10.1080/13538322.2010.485722
  • Haverland, Markus, and Dvora Yanow. 2012. “A Hitchhiker’s Guide to the Public Administration Research Universe: Surviving Conversations on Methodologies and Methods.” Public Administration Review 72 (3): 401–408. doi: 10.1111/j.1540-6210.2011.02524.x
  • Kelle, Udo. 2006. “Combining Qualitative and Quantitative Methods in Research Practice: Purposes and Advantages.” Qualitative Research in Psychology 3 (4): 293–311.
  • Krempkow, René, Philipp Pohlenz, and Natalie Huber. 2015. Diversität und Diversity Management an Hochschulen [Diversity and Diversity Management at Higher Education Institutions]. Bielefeld: Universitätsverlag Webler.
  • Larsen, Ingvild, Peter Maassen, and Bjørn Stensaker. 2009. “Four Basic Dilemmas in University Governance Reform.” Higher Education Management 21 (3): 1–18.
  • Leiber, Theodor, Bjørn Stensaker, and Lee Harvey. 2015. “Impact Evaluation of Quality Assurance in Higher Education: Methodology and Causal Designs.” Quality in Higher Education 21 (3): 288–311. doi: 10.1080/13538322.2015.1111007
  • Lucas, Lisa. 2014. “Academic Resistance to Quality Assurance Processes in Higher Education in the UK.” Policy and Society 33 (3): 215–224. doi: 10.1016/j.polsoc.2014.09.006
  • Manville, Catriona, Molly Morgan Jones, Marie-Louise Henham, Sophie Castle-Clarke, Michael Frearson, Salil Gunashekar, and Jonathan Grant. 2015. Preparing Impact Submissions for REF 2014: An Evaluation. Approach and Evidence. Cambridge: RAND Corporation.
  • Mertens, Donna, and Sharlene Hesse-Biber. 2013. “Mixed Methods and Credibility of Evidence in Evaluation.” New Directions for Evaluation 2013: 5–13. doi: 10.1002/ev.20053
  • Owlia, Mohammad, and Elaine Aspinwall. 1996. “A Framework for the Dimensions of Quality in Higher Education.” Quality Assurance in Education 4 (2): 12–20. doi: 10.1108/09684889610116012
  • Paradeise, Catherine, and Jean-Claude Thoenig. 2013. “Academic Institutions in Search of Quality: Local Orders and Global Standards.” Organization Studies 34 (2): 189–218. doi: 10.1177/0170840612473550
  • Pohlenz, Philipp. 2009. Datenqualität als Schlüsselfrage der Qualitätssicherung von Lehre und Studium [Data Quality as a Key Question of Quality Assurance of Teaching and Learning]. Bielefeld: Universitätsverlag Webler.
  • Pohlenz, Philipp. 2010. “Agenten des Wandels – Institutionalisierung von Qualitätssicherung auf Hochschulebene” [Agents of Change – Institutionalization of Quality Assurance at University Level]. Zeitschrift für Hochschulentwicklung 5 (4): 94–103.
  • Pollitt, Christopher. 1998. Evaluation in Europe: Boom or Bubble? London: Sage.
  • Ramirez, Francisco, and Tom Christensen. 2013. “The Formalization of the University: Rules, Roots, and Routes.” Higher Education 65 (6): 695–708. doi: 10.1007/s10734-012-9571-y
  • Reeves, Carol, and David Bednar. 1994. “Defining Quality: Alternatives and Implications.” Academy of Management Review 19 (3): 419–445. doi: 10.5465/amr.1994.9412271805
  • Reimann, Peter. 2011. “Design-based Research.” In Methodological Choice and Design, edited by Lina Markauskaite, Peter Freebody, and Jude Irvin, 37–50. Dordrecht: Springer.
  • Schlotter, Martin, Guido Schwerdt, and Ludger Woessmann. 2009. Methods for Causal Evaluation of Education Policies and Practices: An Econometric Toolbox. EENEE Analytical Report to the European Commission, No. 5. Brussels: European Expert Network on Economics of Education.
  • Shevlin, Mark, Philip Banyard, Mark Davies, and Mark Griffiths. 2000. “The Validity of Student Evaluation of Teaching in Higher Education: Love me, Love my Lectures?” Assessment and Evaluation in Higher Education 25 (4): 397–405. doi: 10.1080/713611436
  • Smith, M. F. 1994. “Evaluation: Review of the Past, Preview of the Future.” Evaluation Practice 15 (3): 215–227. doi: 10.1016/0886-1633(94)90015-9
  • Sullivan, Gail M. 2011. “Getting off the ‘Gold Standard’: Randomised Controlled Trials in Education Research.” Journal of Graduate Medical Education 3 (3): 285–289. doi: 10.4300/JGME-D-11-00147.1
  • Tashakkori, Abbas, and Charles Teddlie. 2003. Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage.
  • Teichler, Ulrich. 2012. “International Student Mobility and the Bologna Process.” Research in Comparative and International Education 7 (1): 34–49. doi: 10.2304/rcie.2012.7.1.34
  • van Vught, Franz, and Harry de Boer. 2015. “Governance Models and Policy Instruments.” In The Palgrave International Handbook of Higher Education Policy and Governance, edited by Jeroen Huisman, Harry de Boer, David D. Dill, and Manuel Souto-Otero, 38–56. New York: Palgrave Macmillan.
  • Whitchurch, Celia. 2008. “Shifting Identities and Blurring Boundaries: The Emergence of Third Space Professionals in UK Higher Education.” Higher Education Quarterly 62 (4): 377–396. doi: 10.1111/j.1468-2273.2008.00387.x
  • Widmer, Thomas. 2000. “Qualität der Evaluation – Wenn Wissenschaft zur Praktischen Kunst Wird” [Quality of Evaluation – When Science Becomes Practical Art]. In Evaluationsforschung. Grundlagen und Ausgewählte Forschungsfelder, edited by Reinhard Stockmann, 77–102. Opladen: Leske und Budrich.
  • Zhao, Jing, and Dorinda J. Gallant. 2012. “Student Evaluation of Instruction in Higher Education: Exploring Issues of Validity and Reliability.” Assessment and Evaluation in Higher Education 37 (2): 227–235. doi: 10.1080/02602938.2010.523819