
Quality assurance in the political context: in the midst of different expectations and conflicting goals

Abstract

Higher education quality assurance systems develop within a complex political environment where national-level goals and priorities interact with European and global developments. Furthermore, quality assurance is influenced by broader processes in the public sector that set expectations with respect to accountability, legitimacy and regulatory quality. As a result, quality assurance systems often face different and even conflicting goals from different parts of society. The traditional goals of securing minimum standards and facilitating improvement within universities are augmented with goals such as providing information to the public, supporting inter-institutional competition and positioning institutions or higher education systems in the global competition. The relative priority of these goals changes constantly over time. This paper aims to map the main tensions that emerge from these conflicting demands and discusses the extent to which impact evaluation can address some of the difficulties.

Introduction

A formal system of quality assurance in higher education has become widespread practice in Europe and beyond. It has been a rapid process: within about two decades quality assurance has developed from isolated initiatives into a well-institutionalised regulatory régime (Westerheijden et al., 2007). Nevertheless, there are also tensions within the system. Questions about the purpose of the system, its effectiveness and its costs emerge regularly, and the search for the most satisfactory quality assurance system is ongoing. Expectations of a quality assurance system change over time and tend to vary among stakeholders (Beerkens & Udam, 2015). Quality assurance evolves in a complex political environment where different types of factors interact: for example, factors specific to the sector, such as expectations of higher education; broader trends in governance and public management that shape discussions about accountability and the role of stakeholders; and the organisational structure created for quality assurance.

The development of the quality assurance régime over the last two decades is a fascinating topic in its own right and many recent studies have described the complexity of this evolutionary process (Gornitzka & Stensaker, 2014; Westerheijden et al., 2014). These analyses illustrate a web of factors, often random and unpredictable, that explain why a quality assurance system looks the way it does. Any attempt to compress these developments into a single, clean conceptual model cannot do justice to the complexity of the situation. This paper, therefore, takes a largely descriptive approach: it highlights the main forces in the political environment around quality assurance and discusses how these forces contribute to the different expectations and conflicting goals evident in current discussions.

The rapid development of formal quality assurance in Europe emerged out of a synergy between objective changes in the higher education landscape and paradigmatic changes in the dominant governance mode of the public sector more broadly. Massification of higher education placed a severe burden on public budgets and raised questions about the efficiency and effectiveness of the funding used, fed by worries about declining standards and lower quality in the massified system (van Vught & Westerheijden, 1994). Furthermore, the rising budget of the higher education sector raised questions about the societal benefits of higher education and accountability for those benefits. At the same time, the rise of performance management and an accountability paradigm in the public sector, and deregulation and ‘agencification’ more specifically (Majone, 1997; Pollitt & Bouckaert, 2004), created fertile ground for developing a quality assurance system (Enders & Westerheijden, 2014).

Quality assurance systems thus exist and evolve within a complex set of interactions. National, European and global forces interact in setting objectives. The priority of different objectives changes over time and different stakeholders set different expectations that, in one way or another, need to be considered. Tensions between different goals and expectations often remain implicitly buried in the system but sometimes they surface in a public conflict. A quality scandal may be the incident that makes the public wonder whether the system really works. The tension may also erupt when poor accreditation results trigger an investigation within the institution into the validity of the judgment, thereby questioning the effectiveness of the current system (Beerkens, 2015). Furthermore, the political context does not only influence the quality assurance system; the system itself also affects the political context, namely by changing existing authority relationships. External quality assurance has been found to strengthen the role of central leadership within universities; it has made the voice of external stakeholders, most importantly that of students and employers, part of decision-making (Stensaker, 2003); and quality assurance agencies are becoming an interest group of their own.

This paper does not aim to map all the forces and actors that influence the quality assurance system. Rather, it reviews what is known about different expectations of quality assurance systems and identifies where the main tensions and conflicts arise. The main developments in the national, European and global arenas that link to these tensions are discussed. In line with the main theme of this issue, the role of impact assessment in addressing some of these dilemmas is also discussed. The paper focuses on European countries in general terms without going into national detail. Whether there is convergence in national quality assurance is a much-discussed topic: some common trends are apparent (Westerheijden et al., 2007), as is much national variety at a deeper level (Stensaker, 2014). As Pollitt (2001) has helpfully pointed out, convergence in national reform efforts depends on the level of analysis; convergence in policy debates, for example, is not the same as convergence in actual practices and it may be a ‘useful myth’ of its own value. This paper uses examples from different countries to illustrate and substantiate the points about different expectations, without pretending to offer a rigorous comparative analysis.

Different goals around higher education quality assurance: the classic discussions

Before turning to the complexity of the political environment around quality assurance, we need to review some well-known discussions that underline the different expectations of and goals for quality assurance systems. Because of their timeless relevance, these discussions have by now become ‘the classics’ of the quality assurance literature, although they remain criticised and challenged. They include the issues of: (a) what is the purpose of higher education and thereby what is ‘quality’; (b) what is the purpose of quality assurance; and (c) what is an appropriate quality assurance instrument?

What is ‘quality’?

Discussions about ‘quality’ in higher education tend quickly to take on a technical character and focus on quality assurance instruments. Quality, however, has a strong normative meaning and therefore also a political basis. Blackmur (2007, p. 18) links the dynamics of quality assurance primarily to electoral politics, because governments identify ‘quality’ with whatever they prioritise in higher education. These priorities vary internationally but also over time. Governments may want to influence the quality of certain characteristics of higher education for reasons such as economic development, equity, accountability, market failure, public opinion and the activities of interest groups.

Within the last few decades there has been a major shift in the narrative about higher education quality in Europe, reinforced by the role of the Lisbon agenda in quality assurance (Gornitzka & Stensaker, 2014). The goals of higher education are, more than before, defined by economic objectives: both the economic competitiveness of states and the labour market prospects of graduates. As a result, quality assurance systems are increasingly devoted to the final qualifications and competencies of graduates, to a large extent ignoring values that dominated the discussion a few decades earlier, such as civic values or social mobility. The shift towards the economic contribution of higher education is particularly well illustrated by the case of Erasmus student exchange, funded by the European Commission, which began with a strong component of common citizenship and cultural exchange and is nowadays presented largely in terms of labour market benefits (King & Ruiz-Gelices, 2003).

Harvey and Green (1993), in their seminal article, distinguish five interpretations of quality that remain highly relevant for illustrating current tensions: quality as exception, as perfection, as fitness for purpose, as value for money and as transformation. The design of a quality assurance system, in general terms as well as in its specific instruments, depends on the definition of quality, and different stakeholders are likely to hold different understandings of it. Competing definitions are also recognisable in societal reactions to different performance instruments. Accreditation procedures that attempt to ensure minimum standards, thereby offering assurance and a guarantee about a university degree, may not satisfy the societal expectation of recognising excellence. The appeal of university rankings, among other reasons, may lie in the wish to recognise excellence, a wish that accreditation, for instance, cannot fulfil. International rankings and other internationally comparable achievement scores seem to work as a benchmark allowing an ‘informed’ judgment about excellence on a global scale. As excellence becomes an important notion in higher education, and in several countries the notion is highly influential in political and policy discussions, the approach to ‘quality’ changes with it. Recent developments such as U-Multirank try to be sensitive to fitness for purpose while combining it with the goal of recognising excellence (van Vught & Ziegele, 2012), but whether such a combination fulfils a societal need remains to be seen. The current orientation towards student outputs, qualifications and standards (Westerheijden et al., 2007) makes sense under a specific formulation of quality but leaves stakeholders with an alternative understanding of quality dissatisfied.

What is seen as quality by different stakeholders is thus highly dependent on what the stakeholders see as a (potential) quality problem. A quality assurance system responds to the societal definition of the problem that higher education needs to address, which means that issue framing is very important for the definition of quality used: excellence, diversity in the system, ensuring minimum standards, student competences, and so on. This takes us to the interlinked issue: what is the purpose of quality assurance?

What is the purpose of quality assurance?

When discussing the purpose of quality assurance, we must distinguish between the historical reality of why and how quality assurance systems have emerged and the rationale for quality assurance at the conceptual and political level. Earlier literature has mapped well the variety of purposes quality assurance has served. The list of purposes includes (Brennan & Shah, 2000; Harvey & Newton, 2004; Schwarz & Westerheijden, 2004):

to ensure accountability for the use of public funds;

to steer the division of labour within the higher education sector;

to improve the quality of higher education provision;

to inform students and employers;

to stimulate competitiveness within and between institutions;

to undertake a quality check on new institutions;

to assign institutional status as a response to increased diversity within higher education;

to change the governance of universities;

to encourage internationalisation;

to stimulate mobility of students;

to make international comparisons, due to increasing mobility of students and staff;

to ensure compliance with government or external agency requirements;

to control the growth of private providers.

The classic dichotomy in the purposes of quality assurance is between quality enhancement and accountability. Particularly in the early years of quality assurance this dichotomy received much attention, and for many the discussion settled with the conclusion that the two goals can be achieved simultaneously, even if one has some priority over the other (Thune, 1996; Smeby & Stensaker, 1999). Stensaker (2003) is rather critical of the dichotomy because it seems to overstate the divide between external and internal quality assurance and to simplify the causal mechanisms of quality improvement in organisational change. Harvey (1999) argued that the two concepts are not ends of a continuum pulling against each other but two separate dimensions of quality assurance. Yet the search for balance between the two purposes remains (Danø & Stensaker, 2007). The recent trend towards accreditation and standards suggests that the focus now falls more on the accountability goal (along with transparency and comparability), even though both goals are still present (Danø & Stensaker, 2007).

Next to the issue of accountability there is the interlinked notion of political legitimacy. Westerheijden et al. (2014) eloquently illustrate competing arguments for understanding quality assurance policy and why it develops in a certain direction. One angle is to look at it from a problem-centred approach: quality assurance policy aims to solve a perceived problem in higher education and, as the main problem changes, the quality assurance system is revised accordingly. This links to the argument of the previous section: the problem formulation has a major influence on the type of quality assurance. Jeliazkova and Westerheijden (2002) suggested that quality assurance in European countries follows a common dynamic: as one aspect of quality is addressed, another problem becomes dominant and needs addressing. This overly rational approach to policy-making, however, does not seem to correspond well to political reality. In reality, problems and solutions are often decoupled and decision-making is more accurately described by theoretical concepts such as the ‘garbage can’ and ‘windows of opportunity’.

Thus, an alternative purpose of a quality assurance system is legitimacy. Not only are universities expected to be accountable to society; government officials in charge of higher education are also held accountable for protecting the public interest. Quality assurance may be driven by the need for such legitimacy, with the specific instrument serving merely as a symbolic vehicle (Westerheijden et al., 2014). Politicians, particularly when faced with a specific problem, need to respond to ‘don’t just stand there, do something’ reactions from society (Westerheijden et al., 2014). Whether the policy actually solves the quality problem is thus less important than whether it restores trust in the eyes of the relevant constituency. Needless to say, the constituency rarely has sufficient expert knowledge to judge the effectiveness of the policy instrument. This also makes quality scandals influential in policy design, as public demand for an immediate reaction may produce stronger regulation than would actually be optimal for solving the quality problem. In response to a quality scandal in some Dutch universities of applied sciences, the Minister proposed rather interventionist measures, such as final examinations, regular state inspection and other instruments (De staatssecretaris van Onderwijs, Cultuur en Wetenschap, 2011), which to a large extent remained unimplemented only because of a change of government.

Furthermore, it is not only politicians who need to offer legitimacy to their constituency; quality assurance agencies also have an interest in appearing legitimate and necessary in the eyes of their ‘principals’: politicians and society. The same rule holds here: what appears effective may be more important than what actually is effective. In another example from the Netherlands, the quality assurance agency changed its procedures so that programmes are graded on a scale that also allows a so-called ‘yellow card’ to be given prior to a ‘reject’ accreditation outcome. The agency earned high praise from the Parliament for the high number of ‘yellow cards’ given in the humanities, and a parliamentarian complimented the agency, saying it is doing ‘exactly what it should be doing’ (Digitaal UniversiteitsBlad Utrecht, 2014). Similarly, rejecting accreditation for a large number of programmes in Norway strengthened the power and visibility of the Norwegian quality assurance agency (Stensaker, 2011). The search for legitimacy also puts pressure on quality assurance agencies to look for an optimal quality assurance practice. A 2012 survey by the European Association for Quality Assurance in Higher Education (ENQA) showed that 60% of quality assurance agencies were planning a major change in the near future (ENQA, 2012), probably suggesting that the agencies too feel the pressure to respond to changing demands.

The old dichotomy of quality enhancement and accountability has by now grown into a ‘trinity’ by incorporating the additional purpose of transparency. Conceptually, the need for transparency rises with market approaches to higher education: transparency is vital to make the market work as a coordination mechanism (Dill & Soo, 2005). In a market approach, the primary force behind quality control and quality improvement is ‘consumer choice’. Students ‘vote with their feet’ and poor-quality programmes are driven out of the market because they cannot survive competition. This mechanism transforms the role of the state in quality assurance: the state gives up its paternalistic role in assuring quality and takes on the role of facilitator of the market. For a market to work, students (or parents) must be able to make an informed choice and for this they need valid and reliable information. Because such information is not likely to arise in the market itself, the state has a responsibility to facilitate its provision (Dill & Beerkens, 2010). This also transforms the role of quality assurance. It keeps some of its traditional role for reasons of ‘consumer protection’ but its primary role becomes providing information. This affects the nature of the information that a quality assurance process needs to produce: information that is relatively easy to compare and preferably quantitative. University rankings and classifications seem to fill this market niche. Some are quick to point out that rankings are not a quality assurance tool and should not be seen as such, because they contribute only to the accountability function of quality assurance and not to its enhancement function (Costes et al., 2011). The line between in-depth qualitative assessments and more quantitatively oriented comparable measures is increasingly blurry, though. Graduate and student experience surveys, for example, are used publicly for comparative purposes in several countries but they are also incorporated in self-evaluations and considered as input for programme improvement. Needless to say, commercial rankings have a major effect on universities’ behaviour (Hazelkorn, 2009), even though they are indeed not intended for that purpose, and it is highly doubtful whether the effect really ‘enhances’ quality (Dill & Soo, 2005).

As Hopbach (2014) points out, several stakeholders today, particularly political decision-makers, take a significantly different approach to the purpose of quality assurance, one that is not at all confined to the ‘traditional’ twin purposes of accountability and enhancing teaching quality. As a representative of a quality assurance agency, he observes that the purpose of quality assurance is becoming increasingly vague or even arbitrary, while quality assurance procedures have remained largely unchanged and are therefore not keeping up with changing demands.

What is an appropriate quality instrument?

The choice of a specific quality assurance instrument is of course strongly influenced by the concerns identified above: what is the perceived quality problem, and therefore the definition of quality, and what is the main purpose of quality assurance. The menu of quality instruments is wide, including external examiners, professional accreditation, institutional audit, national graduation tests, information tools and benchmarks. There are different logics for classifying the approaches. Dill and Beerkens (2010) distinguish instruments by their coordination mechanism: market-based, self-regulatory and hierarchical instruments. Brennan and Shah (2000) take a more operational view and distinguish between academic, managerial, pedagogical and employment-oriented approaches to quality assurance. This paper focuses only on the dichotomy between output-oriented and process-oriented quality assessment and leaves aside the related distinction between mission-based and standards-based evaluation (Westerheijden et al., 2014).

The dominant form of quality assurance nowadays seems to be accreditation, even though it comes in many varieties (Schwarz & Westerheijden, 2004). Furthermore, there is an increasing emphasis on outputs, such as quality standards and learning competencies (Stensaker, 2014). Outputs as the ‘unit of analysis’ have a certain appeal: the approach gives the impression that the quality assurance system captures what really matters, focusing on what comes out of the process without bothering about the mechanics of the process itself. Output measures also take very different forms. Sweden illustrates a system that gave up process measures entirely and focuses the assessment on the quality of students’ final theses. There have also been attempts to create national graduate examinations, which again aim to measure students’ knowledge and competencies directly at the end of their studies (Schwartzman, 2010). A particularly ambitious initiative is the Organisation for Economic Co-operation and Development’s (OECD) Assessment of Higher Education Learning Outcomes (AHELO) project, which attempts to measure the ‘value-added’ of universities through standardised tests globally (see OECD, 2013). Outcome-based measures are conceptually appealing, particularly among external stakeholders. Yet such attempts seem to founder on the reliability and validity of the instruments, as well as on political sensitivities. The ‘value-added’ measures in higher education indeed seem to remain a search for the ‘holy grail’ that may in the end distort or diminish academic standards rather than assure them (Dill & Beerkens, 2010, citing Douglass et al.).

Next to accreditation, there is also a shift towards institutional audits, in the Netherlands, Austria and elsewhere (Hopbach, 2014). Instead of standards, institutional audits focus on the institutional processes for monitoring and improving quality. For many, the audit is a superior instrument that manages to touch the core of institutional processes and effectively supports collaborative action within the university to really change the teaching and learning process (Dill & Beerkens, 2010). Its current appeal can be explained not so much by these intrinsic benefits as by the substantial costs that programme-level accreditation imposes on governments and higher education institutions. Whether institutional audit can address the accountability needs of external stakeholders as effectively is less clear.

Quality assurance and national, European and international governance

Developments in the political context have a substantial influence on tensions and dilemmas in the quality assurance system. The following discussion separates the national, European and global layers of governance, but the layers are so intertwined that the distinction is rather arbitrary. The EU, for example, can be seen as a mediator of global reform trends; it has a role in setting its own agenda; and national-level policy makers sometimes play the ‘Europe card’ to push through reforms they wish to see (Elken & Stensaker, 2011).

National politics

Quality assurance systems are primarily national, so national politics is clearly a major playing field for quality assurance. The importance of the perceived problem and of legitimacy was discussed above. Next to such issues unique to higher education, which receive much attention in the quality assurance literature, the national approach to quality assurance is strongly influenced by governing models and developments in the public sector more broadly. The recent trend towards risk-based regulation in British higher education is a good example. Risk-based regulation is conceptually rather different from the traditional approach: it assumes that quality risks are not equally distributed and that it is therefore more efficient for government regulation to focus on high-risk cases. In practice this means that the monitoring of institutions is selective, based particularly on established track records of regulatory compliance, financial soundness and good internal (risk management) controls. Risk-based regulation is particularly concerned with the proportionality and the cost-benefit ratio of regulation. The high costs of standard quality assurance practices are also a concern in higher education, which makes a risk-based approach attractive. Nevertheless, searching for an explanation within the higher education sector would misrepresent the true force behind this new approach. Risk-based regulation has received much attention in the United Kingdom (UK) and it is spreading rapidly across different sectors (Hutter, 2005). After years of experience in other sectors, the feasibility of such a régime for higher education was officially assessed. The roots of the policy thus lie not in higher education but in a dominant regulatory model at a conceptual level. The new policy approach has a societal appeal: ‘Regulators, rather like the bodies they regulate, have come under increasing pressure to justify their activities and resources. A strong deregulatory rhetoric has emerged internationally, centring on alleged over-regulation, exaggerated formalism and inflexibility and rising regulatory costs’ (King, 2011, pp. 1–2).

Two governance reform ideas are particularly relevant to the current forces in quality assurance: ‘agencification’ and stakeholder engagement. Two other developments that may have strong effects in the future are the ‘better regulation’ reforms encouraged among others by the OECD, represented also in the UK’s risk-based regulation, and the potential privatisation of quality assurance (Grove, 2014); but these effects are still more difficult to predict.

Agencification

Quality assurance in most European countries is organised through semi-independent quality assurance agencies, a model also encouraged by the Standards and Guidelines for Quality Assurance in the European Higher Education Area (known as the European Standards and Guidelines (ESG)) (ENQA, 2009). While the model seems particularly suitable in higher education because the sector has a built-in aversion to government intervention, the ‘agencification’ trend is a main characteristic of the deregulation reforms of the 1990s (Talbot, 2004). Semi-independent agencies, which are not officially part of a ministry but are linked to it via management or board appointments, were an attractive solution in the ‘regulatory state’ for several reasons: they help to separate policy-making from policy implementation; they contribute to the credibility of regulation through greater independence from politicians; and they were seen as a mechanism for greater specialisation and, therefore, more expertise and efficiency. In the higher education literature, agencies are praised for a constructive independence from both government and universities (Dill & Beerkens, 2013) and they may contribute to a credible commitment to quality assurance even as government policies and priorities constantly change (Ewell, 2008).

The problems of agencification, as pointed out in the public administration literature, are primarily twofold: fragmentation and a loss of the political core (Bouckaert et al., 2010). Delegating responsibilities to highly specialised (semi-)independent agencies leads to coordination problems, particularly where issues cross the borders of one specific agency. In the words of Lægreid and Verhoest (2010, p. 2): ‘The narrow task definition of agencies, their focus on organisational performance targets, their drive for autonomy, and the decoupling of implementation from policy design creates centrifugal forces, with central and parent departments perceiving a loss of coordination capacity’. Furthermore, this has created a situation where programmes and organisations are much better able to resist coordination efforts.

Agencification in higher education may suffer from similar issues. There are concerns about evaluation fatigue and the increased workload created by the different quality assurance and other performance instruments simultaneously in place (Danø & Stensaker, 2007; Westerheijden, 2008). The loss of political steering capacity is an intriguing issue. Agencies are rather autonomous from elected policy makers, yet they have a serious policy-making role: they design the procedures and the framework for quality assurance. The political independence of the agency makes quality assurance much more a technical exercise, requiring primarily professional expertise. However, as argued above, the quality assurance exercise cannot be seen independently of the definition of the problem in higher education, which is primarily a political question. This does not necessarily mean a conflict between the political and bureaucratic forces. Such technical quality assurance can indeed be helpful in offering legitimacy through ‘new public management’-type procedures, even if it does not touch the core of quality (Enders & Westerheijden, 2014).

An important aspect of agencification in Europe is the role of ENQA, supported by the European Commission. This rather successful network contributes not only to the convergence of quality assurance practices through mimetic, normative and potentially coercive instruments (in the sense of DiMaggio & Powell, 1983) but also to the growth of both the expertise and the autonomy of quality assurance agencies, thus making the agencies stronger actors next to national politics and raising the influence of Europe in national policy-making.

Stakeholder engagement

Stakeholder engagement in policy consultation is another governance trend that has entered higher education through broader governance reform. Universities’ responsiveness to stakeholders’ expectations has received considerable attention from the angle of ‘corporate governance’ in universities, suggesting that universities must analyse and manage stakeholders for their success (Benneworth & Jongbloed, 2010). The discussion about stakeholder engagement in policy formulation, however, extends beyond the unique relationship between universities and their external environment. Stakeholder consultation is a cornerstone of the new, horizontal governance model (Kooiman, 2003). On the one hand, both internal and external stakeholders are expected to be included in policy development, not only because of their interests but also because of their unique expertise. On the other hand, horizontal governance redefines the policy design process by moving from policy-making as a battle between competing interests towards policy-making as consensus building. More formally, stakeholder consultation is a pillar of the ‘better regulation’ agenda promoted by the European Commission and the OECD.

Stakeholders are engaged in the quality assurance system in several ways. The ESG directly encourages universities, but also quality assurance agencies, to incorporate stakeholders in their activities, even though the actual engagement of external stakeholders other than students may be limited (Westerheijden et al., 2013). The new ESG itself was developed in collaboration with the main stakeholder groups and stakeholder input into the document was much encouraged by the ‘sponsors’ of the initiative. All this has broadened the range of voices to which policy makers and quality assurance agencies must listen and adapt. While the voice of internal stakeholders in universities has always been present, the voices of students, employers and other external stakeholders are increasingly part of the process. This is likely to increase the variety of expectations that quality assurance has to meet.

The question is whether different stakeholders indeed have significantly different expectations of quality assurance. One may hypothesise that stakeholders’ views differ because their ‘stake’ is different but also because they have different types of expertise and levels of knowledge. The few studies examining stakeholder expectations confirm that external and internal stakeholders differ in their expectations of quality assurance (Beerkens & Udam, 2015). Internal stakeholders (academics and university leaders) seem to lean more towards the ideas of improvement and enhancement in a quality assurance system (Rosa et al., 2014). Employers expect more comparative information about universities, at both national and international levels, and information about output factors such as qualifications and labour market success. State representatives, who also function as an umbrella for all societal stakeholders, have the broadest set of expectations, including practical aspects such as facilitating funding and lending credibility to the system. Opening up policy-making and rule development to different stakeholder groups thus necessarily brings a large set of expectations to the fore.

Thus, quality assurance systems face various demands at the national level. Not only are there multiple actors with their own vested interests and various stakeholders with different expectations, but quality assurance is also heavily influenced by changes in the governance model in public administration more broadly.

European and global level forces

While higher education formally falls outside the authority of the European Union, the European Union’s effect on higher education can hardly be overestimated. Under the Open Method of Coordination, the European Union formally only facilitates coordination, learning and the exploration of innovative solutions (Borrás & Radaelli, 2015). The effects of Europe are, however, more structural than the rather light-weight expectations of the Open Method of Coordination would suggest (Beerkens, 2008). The Bologna declaration, which established the common European Higher Education Area, and the Lisbon agenda, which aimed at making Europe the most competitive economy in the world, had a major effect on the higher education system: they set the dominant narrative but also put in place specific structures and initiatives. The process made the governance of higher education much more complex by ‘blurring the boundaries between formal and informal influence and power structures in higher education’ (Maassen & Stensaker, 2011). Another important effect, particularly of the Lisbon strategy, is that it called for a focus on common concerns and priorities as opposed to celebrating the national diversity of education and research systems (Gornitzka, 2007). Quality assurance was one of the pillars of the original Bologna declaration and since then both informal and formal institutions have been put in place, such as ENQA, the ESG and the European Qualifications Framework (Westerheijden, 2008; Elken & Stensaker, 2011; Gornitzka & Stensaker, 2014). In the context of this paper, the focus is on the effect of the general narrative in Europe and, more specifically, on the focus on standards and competencies in quality assurance.

One effect that Europe has had comes from the new narrative. The Lisbon strategy offered a powerful script that made national governments, too, approach higher education from the perspective of the knowledge economy and the labour market (Gornitzka, 2007). With this, Europe declared global competition an element of its higher education agenda. Various data about the performance of the European higher education system, including international university rankings, feed the discussion. It also activates external stakeholders. As Maassen and Stensaker (2011, p. 766) put it, the narrative shift has led to a situation where ‘stakeholders are playing a kind of a “panic football”’ and claim that universities have to be drastically reformed to live up to their potential in the European knowledge economy. This has changed not only the narrative around higher education but also strengthened a certain view of what information and evidence matter for policy discussions in this area. Considering the focus on global competition in the narrative, it is not surprising that Europe is also involved in developing transparency instruments. The U-Multirank initiative, funded by the European Commission, demonstrates an interest in quantitatively oriented, easily comparable information that could facilitate the higher education market.

Next to the shift in the narrative, there also seems to be some change in the dominant approach to quality assurance at the European level. Partly as a result of the Bologna framework, accreditation has become the dominant mode of quality assurance in Europe. However, there seems to be a reorientation from a process-oriented approach towards an output-oriented one. This trend is visible in European-level developments such as the growing role of qualifications frameworks and broader attention to learning outcomes. The shift probably has some conceptual roots, as discussed above (it reflects a wish to reach the core of the educational process rather than to search for quality around it), but it also supports the narrative of labour market skills.

Besides Europe, the global level is also significant in higher education quality assurance. Global forces influence national and European policy-making and they also confront universities directly. Countries are sensitive to their position in global competition and the European higher education agenda reinforces this sense of competition. Universities feel it directly as well, for example through their ability to attract international students, which can have significant financial implications. This makes universities sensitive to their international reputation. Since formal regulation at the global level is very weak or non-existent, informational tools, such as university rankings, emerge as powerful instruments for steering the market. The influence of rankings on major excellence-related policy initiatives, such as those in Germany or Russia, is an illustration of the trend. As university rankings sometimes also determine eligibility for certain government scholarships for international students, their influence enters universities’ everyday life very directly. As a result of these trends, international reputation has become an important expectation of external stakeholders and both policy makers and universities are held accountable for maintaining and improving it. As mentioned above, transparency instruments tend to fulfil their purpose better if they offer easily comparable, quantitative information. The crudeness of the data in international rankings calls their validity into question; and the rankings impose a globally uniform ideal of a good university (Dill & Soo, 2005). Nevertheless, global rankings challenge the existing, more sophisticated quality evaluation instruments because they have a real effect on national and European policy discussions and they shape stakeholders’ perspectives on quality.

Another aspect of globalisation is a potential de-nationalisation and privatisation of quality assurance. At the European level quality assurance is crossing national borders, facilitated by the European Quality Assurance Register for Higher Education. As global competition increases, the role of international subject-specific accreditation is likely to grow in importance. One option is mutual accreditation agreements, such as the Washington Accord in engineering education. Private accreditation initiatives, such as the international accreditation of business schools by the Association to Advance Collegiate Schools of Business, will probably also gain visibility. Such internationally recognised labels may be much more valuable to universities in global competition than a national accreditation. This would not be surprising, considering that voluntary market-based instruments (such as various ‘labelling’ and certification instruments) are particularly widespread and suitable in global governance models.

Different expectations and the complexity of impact assessment

The formal quality assurance systems in Europe have been evolving into their current form over the last 25 years. While many of the key tensions and dilemmas have remained the same over the years, the context around quality assurance has changed considerably. Stakeholders have become more vocal and European and global developments have made the higher education governance structure significantly more complex. As a result, the expectations of quality assurance systems have broadened considerably. It is not surprising that we increasingly hear doubts about whether the current system is effective and relevant and whether the high costs of quality assurance are indeed justified.

Many of the dilemmas highlighted above assume that there is only one instrument that has to solve all the problems: it has to fulfil the quality enhancement, accountability and transparency purposes; it must simultaneously create trust and have ‘teeth’; be individualised and yet rank universities on one scale; orient towards processes as well as towards outputs. It is not realistic that one instrument can meet all these legitimate expectations. In reality there are multiple instruments in place, sometimes consciously planned and sometimes simply evolved, and they often overlap. Such ‘regulatory overlap’ is not necessarily a problem; it may be an effective way to ensure quality in a complex and multi-faceted sector. However, the overlap may create an additional bureaucratic burden on universities. Another issue with regulatory overlap is that different instruments may cancel each other out when they pull in different directions, so that universities do not respond to the instruments as expected. On the other hand, recognising that all evaluation instruments are imperfect, a combination of instruments, together with constant change in the system, may avoid institutionalised biases and therefore prevent dysfunctional reactions from universities. Furthermore, a system that continuously changes in response to problems may also look legitimate in the eyes of stakeholders.

The tensions presented in this paper, and the criticism of the costs and effectiveness of the current quality assurance system, naturally raise questions about the actual, empirically proven impact of the system. The idea of impact measurement has a strong rational appeal, as witnessed by the spread of the notion of evidence-based policy-making. While some are rather sceptical that the current quality assurance system says much about actual quality (Enders & Westerheijden, 2014), the existing literature does point to some positive effects of quality assurance reforms (Stensaker, 2003). There is certainly a need for more, and more rigorous, empirical studies on the effects of quality instruments and for studies that go deeper than the perceptions of those involved in the process. In principle, quality assurance policy could indeed apply the principles of evidence-based policy practice more effectively.

The challenge to evidence-based policy-making is not so much conceptual as practical. The accumulating literature on evidence-based policy-making also brings out serious challenges. Its rhetoric implies that ‘the nature and dimensions of the problem being addressed are known, measurable and unambiguous, and that appropriate monitoring will show the success of policy measures’ (Wesselink et al., 2014, p. 340). In reality, problems are rarely so straightforward. The challenge for impact evaluation in higher education quality assurance is primarily threefold: the outputs are numerous, difficult to define and even more difficult to measure; the effects are likely to change over the timespan of the quality assurance instrument; and it is not clear what change one is trying to capture. As this paper has hoped to show, higher education quality is not only a technical concept. Thinking about quality as a function of standards and qualifications makes it look technical, but quality is strongly political because it links directly to the problem as it is perceived in society.

Adequate measurement of actual student learning and development, the so-called ‘value-added’, would solve many dilemmas of quality assurance in higher education. The impressive attempt by the OECD’s AHELO initiative again shows the appeal of this approach, but also how difficult it is to make it work. Next to the effort to identify the true impact of quality assurance on students’ learning, we cannot disregard the fact that quality assurance systems are also meant to serve legitimacy and accountability goals. The effectiveness of an instrument and its legitimacy in the eyes of stakeholders may not necessarily coincide. Furthermore, an impact evaluation may oversimplify the causal relationships between a quality instrument and its long-term effects.

On the other hand, there are examples from neighbouring fields of how evidence helps to fine-tune a policy. Systematic analysis of the effects of research policies in the UK seems to have contributed to policy-making and to have triggered change, fine-tuning and enforcement of certain policy tools (Bence & Oppenheim, 2005). The field of quality assurance could also gain from more hard evidence, even while recognising the political complexity around the field.

Acknowledgements

The author thanks the organisers of a project on impact analysis of external quality assurance in higher education institutions, co-funded by the European Commission [grant number 539481-LLP-1-2013-1-DE-ERASMUS-EIGF], for inviting her to the European seminar ‘Impact Analysis of External Quality Assurance in Higher Education: Methodology and Its Relevance for Higher Education Policy’, held on 19–20 May 2014 in Mannheim (Germany). This publication reflects the views only of the author and the Commission cannot be held responsible for any use that may be made of the information contained therein. The author is grateful to Theodor Leiber and James Williams for their helpful comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Beerkens, E., 2008, ‘The emergence and institutionalisation of the European higher education and research area’, European Journal of Education, 43(4), pp. 407–425. doi: 10.1111/ejed.2008.43.issue-4
  • Beerkens, M., 2015, ‘Agencification problems in the higher education quality assurance’, in Reale, E. & Primeri, E. (Eds.) The Transformation of University Institutional and Organizational Boundaries, Higher Education Research in the 21st Century series, pp. 43–62 (Rotterdam, Sense Publishers).
  • Beerkens, M. & Udam, M., 2015, ‘Stakeholders in the higher education quality assurance: richness in diversity?’, The Hague. Available at: https://www.researchgate.net/profile/Maarja_Beerkens (accessed 6 November 2015).
  • Bence, V. & Oppenheim, C., 2005, ‘The evolution of the UK’s research assessment exercise: publications, performance and perceptions’, Journal of Educational Administration and History, 37(2), pp. 137–155. doi: 10.1080/00220620500211189
  • Benneworth, P. & Jongbloed, B., 2010, ‘Who matters to universities? A stakeholder perspective on humanities, arts and social sciences valorisation’, Higher Education, 59(5), pp. 567–588. doi: 10.1007/s10734-009-9265-2
  • Blackmur, D., 2007, ‘The public regulation of higher education qualities: rationales, processes and outcomes’, in Westerheijden, D.F., Stensaker, B. & Rosa, M.J. (Eds.) Quality Assurance in Higher Education, pp. 15–45 (Dordrecht, Springer). doi: 10.1007/978-1-4020-6012-0
  • Borrás, S. & Radaelli, C.M., 2015, ‘Open method of co-ordination for demoi-cracy? Standards and purposes’, Journal of European Public Policy, 22(1), pp. 129–144. doi: 10.1080/13501763.2014.881412
  • Bouckaert, G., Peters, B.G. & Verhoest, K., 2010, The Coordination of Public Sector Organizations: Shifting patterns of public management (Basingstoke, Palgrave Macmillan). doi: 10.1057/9780230275256
  • Brennan, J. & Shah, T., 2000, Managing Quality in Higher Education. An international perspective on institutional assessment and change (Buckingham, OECD/SRHE/Open University Press).
  • Costes, N., Hopbach, A., Kekäläinen, H., van Ijperen, R. & Walsh, P., 2011, Quality Assurance and Transparency Tools. European Association for Quality Assurance in Higher Education (ENQA), workshop report no 15. Available at: http://www.enqa.eu/indirme/papers-and-reports/workshop-and-seminar/QA%20and%20Transparency%20-%20Final.pdf (accessed 30 January 2015).
  • Danø, T. & Stensaker, B., 2007, ‘Still balancing improvement and accountability? Developments in external quality assurance in the Nordic countries 1996–2006’, Quality in Higher Education, 13(1), pp. 81–93. doi: 10.1080/13538320701272839
  • De staatssecretaris van Onderwijs, Cultuur en Wetenschap, 2011, Beleidsreactie op de Eindrapporten Alternatieve Afstudeertrajecten. aan de voorzitter van de Tweede Kamer der Staten-Generaal, 20 May. Available at: http://www.rijksoverheid.nl/documenten-en-publicaties/kamerstukken/2011/05/20/beleidsreactie-op-de-eindrapporten-alternatieve-afstudeertrajecten.html (accessed 16 February 2015).
  • Digitaal UniversiteitsBlad Utrecht, 2014, Tweede Kamer: keuringsstelsel hoger onderwijs functioneert prima, 4 September. Available at: http://www.dub.uu.nl/artikel/nieuws/tweede-kamer-keuringsstelsel-hoger-onderwijs-functioneert-prima.html (accessed 28 January 2015).
  • Dill, D. & Beerkens, M. (Eds.), 2010, Public Policies for Academic Quality: Analyses of innovative policy instruments (Dordrecht, Springer).
  • Dill, D. & Beerkens, M., 2013, ‘Designing the framework conditions for assuring academic standards: lessons learned about professional, market, and government regulation of academic quality’, Higher Education, 65(3), pp. 341–357. doi: 10.1007/s10734-012-9548-x
  • Dill, D. & Soo, M., 2005, ‘Academic quality, league tables, and public policy: a cross-national analysis of university ranking systems’, Higher Education, 49(4), pp. 495–533. doi: 10.1007/s10734-004-1746-8
  • DiMaggio, P.J. & Powell, W.W., 1983, ‘The iron cage revisited: institutional isomorphism and collective rationality in organizational fields’, American Sociological Review, 48(2), pp. 147–160. doi: 10.2307/2095101
  • Elken, M. & Stensaker, B., 2011, ‘Policies for quality in higher education – coordination and consistency in EU-policy-making 2000–2010’, European Journal of Higher Education, 1(4), pp. 297–314.
  • Enders, J. & Westerheijden, D.F., 2014, ‘The Dutch way of new public management: a critical perspective on quality assurance in higher education’, Policy and Society, 33(3), pp. 189–198. doi: 10.1016/j.polsoc.2014.07.004
  • European Association for Quality Assurance in Higher Education (ENQA), 2009, Standards and Guidelines for Quality Assurance in the European Higher Education Area (Helsinki, ENQA).
  • European Association for Quality Assurance in Higher Education (ENQA), 2012, Quality Procedures in the European Higher Education Area and Beyond: Third ENQA survey (Brussels, ENQA).
  • Ewell, P., 2008, ‘The “quality game”: external review and institutional reaction over three decades in the United States’, in Westerheijden, D.F., Stensaker, B. & Rosa, M.J. (Eds.) Quality Assurance in Higher Education: Trends in regulation, translation and transformation (Dordrecht, Springer).
  • Gornitzka, Å., 2007, ‘The Lisbon process: a supranational policy perspective’, in Maassen, P. & Olsen, J.P. (Eds.) University Dynamics and European Integration, pp. 155–178 (Dordrecht, Springer). doi: 10.1007/978-1-4020-5971-1
  • Gornitzka, Å. & Stensaker, B., 2014, ‘The dynamics of European regulatory regimes in higher education – challenged prerogatives and evolutionary change’, Policy and Society, 33(3), pp. 177–188. doi: 10.1016/j.polsoc.2014.08.002
  • Grove, J., 2014, ‘QAA “no match” for rapid change in higher education sector’, Times Higher Education, 9 October.
  • Harvey, L., 1999, ‘Evaluating the evaluators’, opening keynote at the Fifth International Network of Quality Assurance Agencies in Higher Education (INQAAHE) Conference, Santiago, Chile, 2 May.
  • Harvey, L. & Green, D., 1993, ‘Defining quality’, Assessment & Evaluation in Higher Education, 18(1), pp. 9–34.
  • Harvey, L. & Newton, J., 2004, ‘Transforming quality evaluation’, Quality in Higher Education, 10(2), pp. 149–165. doi: 10.1080/1353832042000230635
  • Hazelkorn, E., 2009, Impact of Global Rankings on Higher Education Research and the Production of Knowledge. UNESCO Forum on Higher Education, Research and Knowledge Occasional Paper No. 15. Available at: http://unesdoc.unesco.org/images/0018/001816/181653e.pdf (accessed 30 January 2015).
  • Hopbach, A., 2014, ‘Recent trends in quality assurance? Observations from the agencies’ perspectives’, in Rosa, M.J. & Amaral, A. (Eds.) Quality Assurance in Higher Education: Contemporary debates, pp. 216–230 (Basingstoke, Palgrave).
  • Hutter, B.M., 2005, The Attractions of Risk-based Regulation: Accounting for the emergence of risk ideas in regulation. CARR discussion paper no. 33, London School of Economics and Political Science. Available at: http://www.lse.ac.uk/researchAndExpertise/units/CARR/pdf/DPs/Disspaper33.pdf (accessed 17 February 2015).
  • Jeliazkova, M. & Westerheijden, D., 2002, ‘Systemic adaptation to a changing environment: towards a next generation of quality assurance models’, Higher Education, 44(3–4), pp. 433–448. doi: 10.1023/A:1019834105675
  • King, R., 2011, The Risks of Risk-based Regulation: The regulatory challenges of the higher education White Paper for England. Available at: http://www.hepi.ac.uk/wp-content/uploads/2014/02/Main-Report.pdf (accessed 28 January 2015).
  • King, R. & Ruiz-Gelices, E., 2003, ‘International student migration and the European “Year Abroad”: effects on European identity and subsequent migration behaviour’, International Journal of Population Geography, 9, pp. 229–252.
  • Kooiman, J., 2003, Governing as Governance (London, Sage).
  • Lægreid, P. & Verhoest, K., 2010, ‘Introduction: reforming public sector organizations’, in Lægreid, P. & Verhoest, K. (Eds.) Governance of Public Sector Organizations: Proliferation, autonomy and performance, pp. 1–19 (Basingstoke, Palgrave Macmillan). doi: 10.1057/9780230290600
  • Maassen, P. & Stensaker, B., 2011, ‘The knowledge triangle, European higher education policy logics and policy implications’, Higher Education, 61(6), pp. 757–769. doi: 10.1007/s10734-010-9360-4
  • Majone, G., 1997, ‘From the positive to the regulatory state: causes and consequences of changes in the mode of governance’, Journal of Public Policy, 17(2), pp. 139–167. doi: 10.1017/S0143814X00003524
  • Organisation for Economic Co-operation and Development (OECD), 2013, Assessment of Higher Education Learning Outcomes (AHELO): Feasibility study report (Paris, OECD).
  • Pollitt, C., 2001, ‘Convergence: the useful myth?’, Public Administration, 79(4), pp. 933–947. doi: 10.1111/padm.2001.79.issue-4
  • Pollitt, C. & Bouckaert, G., 2004, Public Management Reform: A comparative analysis, second edition (Oxford, Oxford University Press).
  • Rosa, M.J., Sarrico, C.S. & Amaral, A., 2014, ‘Academics’ perceptions on the purposes of quality assessment’, Quality in Higher Education, 18(3), pp. 349–366.
  • Schwartzman, S., 2010, ‘The national assessment of courses in Brazil’, in Dill, D. & Beerkens, M. (Eds.) Public Policies for Academic Quality: Analyses of innovative policy instruments (Dordrecht, Springer).
  • Schwarz, S. & Westerheijden, D.F. (Eds.), 2004, Accreditation and Evaluation in the European Higher Education Area (Dordrecht, Kluwer).
  • Smeby, J.-C. & Stensaker, B., 1999, ‘National quality assessment systems in the Nordic countries: developing a balance between external and internal needs?’, Higher Education Policy, 12, pp. 3–14. doi: 10.1016/S0952-8733(98)00027-0
  • Stensaker, B., 2003, ‘Trance, transparency and transformation: the impact of external quality monitoring on higher education’, Quality in Higher Education, 9(2), pp. 151–159. doi: 10.1080/13538320308158
  • Stensaker, B., 2011, ‘Accreditation of higher education in Europe – moving towards the US model?’, Journal of Education Policy, 26(6), pp. 757–769.
  • Stensaker, B., 2014, ‘European trends in quality assurance: new agendas beyond the search for convergence’, in Rosa, M.J. & Amaral, A. (Eds.) Quality Assurance in Higher Education: Contemporary debates, pp. 135–148 (Basingstoke, Palgrave).
  • Talbot, C., 2004, ‘The agency idea: sometimes old, sometimes new, sometimes borrowed, sometimes untrue’, in Pollitt, C. & Talbot, C. (Eds.) Unbundled Government: A critical analysis of the global trend to agencies, quangos and contractualisation, pp. 3–21 (London, Routledge).
  • Thune, C., 1996, ‘The alliance of accountability and improvement: the Danish experience’, Quality in Higher Education, 2(1), pp. 21–32. doi: 10.1080/1353832960020103
  • van Vught, F. & Westerheijden, D., 1994, ‘Towards a general model of quality assessment in higher education’, Higher Education, 28(3), pp. 355–371. doi: 10.1007/BF01383722
  • van Vught, F. & Ziegele, F., 2012, Multidimensional Ranking: The design and development of U-Multirank (Dordrecht, Springer). doi: 10.1007/978-94-007-3005-2
  • Wesselink, A., Colebatch, H. & Pearce, W., 2014, ‘Evidence and policy: discourses, meanings and practices’, Policy Sciences, 47(4), pp. 339–344. doi: 10.1007/s11077-014-9209-2
  • Westerheijden, D.F., 2008, ‘States and Europe and quality of higher education’, in Westerheijden, D.F., Stensaker, B. & Rosa, M.J. (Eds.) Quality Assurance in Higher Education: Trends in regulation, translation and transformation, pp. 73–96 (Dordrecht, Springer).
  • Westerheijden, D.F., Stensaker, B. & Rosa, M.J. (Eds.), 2007, Quality Assurance in Higher Education (Dordrecht, Springer).
  • Westerheijden, D.F., Epping, E., Faber, M., Leisyte, L. & de Weert, E., 2013, ‘Stakeholders and quality assurance’, Journal of the European Higher Education Area, 3, pp. 72–80.
  • Westerheijden, D.F., Stensaker, B., Rosa, M.J. & Corbett, A., 2014, ‘Next generations, catwalks, random walks and arms races: conceptualising the development of quality assurance schemes’, European Journal of Education, 49(3), pp. 421–434. doi: 10.1111/ejed.2014.49.issue-3