Original Article

Doing things with numbers: The Danish National Audit Office and the governing of university teaching

Abstract

Like the Supreme Audit Institutions of many other OECD countries, the Danish National Audit Office has stepped up its performance auditing of public administrations and agencies in order to ensure that they provide value for money. But how do performance audits contribute to making state institutions change their conduct? Based on the case of the Danish National Audit Office's auditing of the quality of teaching at Danish universities, this paper seeks to show, first, that the quantification and, by the same token, simplification of the practices subjected to performance auditing are key to understanding how SAIs render complex activities amenable to assessment. Secondly, the paper seeks to show how the quantification of the quality of university teaching contributed to making academic staff and universities, groups that enjoy considerable professional autonomy, view, assess and, ultimately, govern their teaching activities differently.

1 IntroductionFootnote1

Performance measurement has become an essential component of public management reforms in most OECD countries (CitationLewis, 2015; Pollitt & Bouckaert, 2011, pp. 106–110). As part of this tendency to monitor and gauge the quality of public services, Supreme Audit Institutions (SAIs)Footnote2 are paying increasing attention to performance auditing (CitationGrönlund, Svärdsten, & Öhman, 2011; Lonsdale, Wilkins, & Ling, 2011; Pollitt et al., 1999; Shand & Anand, 1996; Wanna, Ryan, & Ng, 2001). Governments and treasuries seem to believe that SAIs play an important role in ensuring that public money is spent efficiently (CitationVrangbæk & Reichborn-Kjennerud, 2013). More generally, with the spread of New Public Management to public organizations, including the professionalization of management, management by objectives, performance targeting, and contract steering, public organizations have been granted increasing responsibility and discretion for meeting policy targets and producing services more cost-effectively (CitationAzuma, 2005; Barzelay, 1997).

Notwithstanding this consensus that SAIs have stepped up their performance auditing activities, the number of studies of the political consequences remains rather limited. A review of existing research on the political consequences of performance auditing for the auditee found only 14 studies (CitationVan Loocke & Put, 2011). In these studies, impact was primarily defined as instrumental. Instrumental impact is more short term and easier to measure than conceptual and interactive impact, which is deemed harder to grasp since it entails more long-term consequences that may be hard to single out (CitationLonsdale et al., 2011). Some have problematized the independent role of auditors as an obstacle to impact, because stakeholder involvement is associated with greater utilization (CitationReichborn-Kjennerud & Johnsen, 2011; Vanlandingham, 2011). CitationJustesen and Skærbek (2010) found that accountability mechanisms are forceful in making organizations implement changes. Finally, it has been found that auditee perceptions of the audit, in particular the degree to which it is seen as an instrument of learning rather than control, are important for the accommodation of auditor recommendations (CitationReichborn-Kjennerud, 2013). While these studies are important to our understanding of the relationship between auditors and auditees, i.e. the actors, their intentions and behavioural reactions, they do not tell us much about the consequences of the performance audit act itself. That is, rather than focusing exclusively on actors, we need to focus on the audit as a practice in order to better understand how the (SAIs') quantification and simplification of the audited activities contribute to making these activities governable in new ways.

This article seeks to provide a (tentative) answer to the following question: How do performance audits contribute to making state institutions change their conduct? This question is pursued in two steps. First, I discuss how we may grasp the (potential) political effects of measuring and quantifying public service activities. The article does so by comparing and discussing the potentials of two competing analytical frameworks for studying the political consequences of SAI performance auditing. Second, it seeks to illustrate the analytical potential of the constitutive perspective by applying it to the Danish National Audit Office, Rigsrevisionen (RR), and its auditing of the quality of university teaching in Denmark. It is argued that the RR's performance audit of Danish university teaching quantifies and simplifies university teaching in a way that enables not only external monitoring and control, but also a form of governing at a distance by which the exercise of power by central authorities (in casu: the Ministry of Higher Education and Science) is facilitated by the local agencies' (in casu: the Danish universities) governing of themselves.

In the following, I first outline a conceptual and analytical framework for studying the political consequences of performance auditing. This is followed by an account of the RR and its turn towards performance auditing. I then try to illustrate the analytical potential of the constitutive framework — with a particular emphasis on the Foucauldian notion of government — by analysing the political processes and consequences of the SAI's performance auditing of the quality of Danish university teaching. Finally, the paper draws some tentative conclusions on the merits and limitations of the applied frameworks.

2 Conceptualizing and analyzing the political consequences of performance auditing and measuring

In line with the research problem of this special issue, this article asks how we can grasp and analyze the potential political consequences of performance auditing conducted by SAIs. When confronted with this question, the Danish RR answers that it does not engage in politics and that its performance audits therefore can have no political implications (CitationHenning & Rasmussen, 2013). Probably most SAIs would answer in a more or less similar manner. They do not see performance auditing as a political activity inasmuch as they are just trying to make public institutions implement existing laws and regulations in a more efficient manner. However, we may want to broaden our understanding of politics to try to grasp some of the consequences that either go unnoticed or at least are taken for granted, perhaps because they are widely regarded as non-political.

In the following, I briefly review two analytical perspectives on the political consequences of performance auditing, namely the intentionalist and the constitutive perspectives. For reasons explained below, I adopt the latter in the ensuing analysis. Over the years, a substantial literature on the consequences of performance measurement has emerged that in one way or another takes the intentions of the performance programme as its point of departure. The limited existing literature on the consequences of SAI performance auditing reviewed in the introduction above shares this feature. The key research question driving this perspective is whether performance auditing generates its intended effects or not. An interesting literature on performance measurement has taken this agenda a step further and interrogated the non-intended effects (Citationde Bruijn, 2002; Smith, 1995). Attention to such effects is an important step towards a more politically realistic analysis of the chain of performance measurement (CitationLewis, 2015). However, while paying attention to effects other than those intended is a vast improvement compared to the rationalist model of performance measurement, the analytical space of the unintended effects studies is limited by the notion of intentionality. The analytical starting point is always the intentions of the auditor, and effects are gauged in terms of whether or not they concur with these intentions. Unintended consequences emerge when those subjected to performance auditing engage in various forms of gaming or manipulation. This is often highly rational behaviour inasmuch as performance auditing creates new and often unpredictable incentive structures. Such studies of unintended effects are valuable and a sober reminder to public choice disciples who believe that economic incentives and performance measures will fix the problems of the public sector once and for all. However, in order to address unintended consequences, the intentionalist perspective assumes that intentions are easy to identify. As argued by CitationDahler-Larsen (2014) and in the introduction to this special issue (CitationLewis, 2015), this is far from always the case. Moreover, unintended consequences do not capture the entire range of possible effects of performance measurement and auditing. They overlook the possible constitutive effects of quantification.

2.1 The constitutive perspective

Another way of addressing the real political consequences of performance measurement that go beyond intended and unintended effects is by paying attention to the ways in which the very act of measuring changes the object of measurement. Such changes, usually termed constitutive effects, are what Mike Power refers to when he argues that things or activities are not auditable by nature, but have to be made so (CitationPower, 1996, 2003). When, for example, public sector activities are made auditable, the ways in which they are grasped, assessed, governed and ultimately enacted no longer remain the same. Such constitutive effects have been examined in the areas of economics (CitationMacKenzie et al., 2007) and state formation (CitationDesrosières, 1998). This constitutive effect is perhaps clearest when a certain set of activities is quantified, as this entails that the qualitative features of these activities are deliberately — or implicitly — ignored in favour of a numerical scale (CitationPorter, 1994). The act of quantification implies that the complex and often impenetrable character of a set of activities that seem to defy easy description is translated into one or more numbers, making it much easier to convey this activity to others. Accordingly, the set of activities that had hitherto been difficult to account for will — after quantification — lend themselves more readily to auditing and, ultimately, to political-administrative purposes and interventions.

By quantifying activities, performance auditing may also render these activities susceptible to governing in new ways. We may thus further expand the analysis of the real political consequences in the chain of performance measurement by paying attention to the ways in which numbers may serve to produce an image of impartiality and trust (CitationPorter, 1995). Accordingly, the RR may gain credibility, trust and thereby authority in the sense of being able to persuade auditees to change conduct. Another way of addressing this issue is to examine how quantification allows audited activities to be governed at a distance, i.e. a form of governing that works through the self-governing capacities of the universities. This entails a form of power that is akin to what the late French historian and philosopher Michel Foucault termed ‘government’ (CitationFoucault, 1985 introduction; CitationFoucault, 2007; see also CitationBarry, Osborne, & Rose, 1996; Dean, 1999; Triantafillou, 2012). Government denotes the action conducted by an individual or group, such as the RR, upon the possible field of action of another individual or group, such as the universities. Government then is not about dictating or determining the universities’ behaviour against their will, but about making them act upon (govern) themselves in line with more or — quite often — less clearly defined norms and goals. By implication, we may want to examine how the quantification of the quality assessment of university teaching may allow the RR not to determine the design of university teaching, but to make the universities themselves assess and govern their teaching differently.

Within the constitutive perspective, Foucault's notion of government is employed in order to shed light on how the RR seeks to govern the conduct of others, notably the Ministry of Higher Education and Science and the Danish universities. I examine the quantified forms of knowledge produced and employed to tell the truth about the object to be governed (the quality of university teaching), on the one hand, and the procedures, tools and techniques of power on the other (see also CitationFoucault, 1980). My assumption thus is that the ways in which the quality of university teaching is quantified in line with standardized indicators enabling nationwide benchmarking significantly impinges on, but does not determine, the ways in which universities assess and govern the quality of their teaching (CitationTriantafillou, 2007).

2.2 Method

In order to explore the potential of the analytical framework, a single case study approach has been adopted: the RR's performance audit of the quality of university education in Denmark. This case of SAI performance auditing is interesting for at least two reasons. First, the RR's emphasis on process (quality of teaching) rather than output (e.g. number of students graduating) or outcome (e.g. employment) is remarkable because the RR is moving into an area in which it obviously has little if any expertise and, more generally, because it goes against the grain of New Public Management doxa. Secondly, the case involves what is widely regarded as a relatively powerful profession (university academics) and an issue (the quality of university education) which is quite complex to define and, not least, to govern. Also, even if the universities' actions are shaped by triennial contracts with the Ministry of Higher Education and Science, they have substantial room for manoeuvre within both research and teaching activities. Accordingly, if the RR has been able to make universities assess and govern the quality of their teaching differently, then the RR may be able to govern many other Danish state organizations as well. Of course, one should be very careful not to exaggerate the potential for generalizing the findings of a single case study. As we shall see below, the auditing practices of the RR are in some ways rather uniquely organized.

A comprehensive search and collection of publicly available documents has been undertaken to support the analysis. Publications issued from mid-2012 to mid-2014 by the RR, the Ministry of Higher Education and Science, and the six universities offering studies within the human sciences, which were the ones subjected to auditing, were identified by going through their official homepages and by telephone contact with their information departments to check for material not uploaded. However, only official reports were compiled, as the purpose was to examine the consequences of public critique and quantifying analyses, not behind-the-scenes political strategizing. Moreover, in order to examine how the Ministry, the RR and the six universities publicly tried to legitimize the need for reform (or not), I went through the national Danish e-library for newspaper cuttings (Infomedia) to identify newspaper articles in the said period. Also, secondary literature on the topic was collected using standard social science databases, Google Scholar and the snowball method. Finally, two RR staff members engaged in performance auditing were interviewed in order to better understand how they conducted their performance audits in this area, including how they conveyed their results to the universities. No interviews were made with university representatives, nor were observations made of their teaching. The reason for this is that the ambition of this article is not to assess whether the universities really changed their teaching (such as the use of pedagogical methods or the choice of syllabus). Instead, the present article examines how universities assess and govern the quality of their teaching. This latter dimension can — at least in the Danish case — largely be mapped through publicly available documents from the universities. The process of identifying the relevant documents has been facilitated by the fact that the author is employed as an academic at a Danish university and therefore has a certain level of inside knowledge of how the quality management systems are designed there. Such an internal position in the field under scrutiny can easily lead to problems of bias. However, as the aim of this article is to examine if and how external audits spark internal actions at the universities, rather than assessing whether these actions are good or bad (for teaching), the motivation for bias is limited.

3 The Danish National Audit Office and its turn towards performance auditing

The RR was established by a law of 1976 that merged existing state auditing functions (CitationRigsrevisionen, 2013a). Today, the auditing of state institutions involves two bodies: the RR and the state accountants, a system that is largely unique to Denmark. The RR is an independent organization under parliament. By law, it has the right to decide which state institutions and activities it subjects to particular scrutiny. The other body, the state accountants, is made up of six parliamentary politicians, usually with a legal education, appointed by Parliament with the task of reporting on the results of state audits (CitationRigsrevisionen, 2013b). Only the state accountants are allowed to ask the RR to investigate particular issues. The results of the RR's auditing activities are conveyed to the state accountants, who in turn present these to parliament and, in case malpractice has been reported, decide whether these should result in a warning to the minister in charge of the said public organization. In other words, the state accountants work as an intermediary between the RR and the government (and parliament), and thereby serve — mostly successfully — to protect the RR from political pressures.

The RR conducts three distinct auditing tasks, namely financial, legal–critical and performance auditing of state institutions and statutory bodies receiving state funding. The RR is obliged to undertake financial auditing of all state institutions every year. Legal–critical and performance auditing are undertaken more selectively in areas that the RR finds deserve particular attention. Nominally speaking, the RR has been undertaking performance auditing since 1926 (CitationRigsrevisionen, 2013a).Footnote3 However, early forms of performance auditing seem to have focused almost exclusively on assessing the use of public funds. Moreover, until the early 1990s performance auditing actually played a relatively small role in the RR's overall portfolio. In 1988, the RR spent only 9% of its resources on 'larger studies', the label for RR studies in which performance auditing plays a prominent role (CitationKjær, 1998, p. 47). By 1995, the RR had stepped up its 'large surveys' to constitute 23% of its resource use (CitationKjær, 1998, p. 48), a figure that has increased to around 30% today (CitationHenning & Rasmussen, 2013).Footnote4

But how then does the RR define the task of performance auditing, i.e. of ensuring that state money is spent efficiently (CitationRigsrevisionen, 2013c, p. 9)? The performance audits have for a long time been guided by three principles, namely thrift, productivity and goal effectiveness (CitationRigsrevisionen, 1982, p. 13). Currently, these principles are put into operation in the following ways. First, the RR examines the concrete economic dispositions made by the responsible public agent or agency, such as how organizational structures and activities are planned and executed in the agency to solve its tasks or provide a particular service. This includes checking that the agency is not undertaking activities that do not contribute to solving its tasks. Secondly, the RR often checks whether the fiscal and personnel management systems used by the auditee adhere to acceptable standards for good public management in general, and whether procedures are established to control and allocate resource expenditures in a systematic fashion. This examination of the adequacy of internal fiscal management systems has become an increasingly important element of the RR's performance auditing since the 1990s (CitationKjær, 1998, p. 49). As we shall see below, it was a crucial element in the RR's auditing of the Danish universities. Thirdly, the RR's performance audits may subject a wider (policy) area to benchmarking, either by comparing the activities of a number of agencies within the area or by examining the development of their activities over time. Again, as we shall see, benchmarking was a crucial instrument in the case of gauging the quality of university teaching.

4 The critique and surveillance of the quality of university teaching

In the autumn of 2011, the RR decided to initiate a large audit of the teaching and education offered by the universities at the undergraduate and graduate levels across all scientific disciplines (CitationRigsrevisionen, 2012). Moreover, it decided to examine whether the recent 10% raise in the performance payment rate for students in the humanities and social sciences had actually been used according to its purpose, i.e. to increase the quantity and quality of teaching.Footnote5 Finally, the RR decided to evaluate whether the existing accreditation system actually ensured that the universities offered research-based teaching on all their studies.

In August 2012, the RR delivered a devastating critique of the universities, the Ministry, and the existing accreditation system. It found that even after the raise of the performance payment rate for the social science and humanities studies, the students there still received precious little teaching, and much of it was not conducted by researchers. In some places, only 20% of the teaching was conducted by researchers, i.e. tenured staff with paid time to undertake research. The RR concluded that the low number of teaching hours meant that many students were not really offered a full-time study — as required by the law and the European Credit Transfer System. Moreover, the quality of the study could hardly be regarded as acceptable on those studies with a very low rate of research-based teaching. The universities were criticized, firstly, for not having clear budget allocation systems that ensured that all studies could offer, and actually offered, a minimum number of teaching hours and research-based teaching ratios. Secondly, the RR found that the universities' production of data on the amount and quality of teaching offered was of a very uneven quality, hindering systematic oversight and allocation of resources for teaching. Thirdly, the RR also criticized the Ministry of Higher Education and Science for using an accreditation system that, despite being in use for several years, had proven unable to ensure a certain minimum of teaching hours at the various university studies. The RR stressed that it was the responsibility of the Ministry, not the independent accreditation institution, to ensure that the accreditation system worked properly.

The state accountants basically reiterated the RR's critique of the universities and the Ministry of Higher Education and Science for not ensuring that students received sufficient research-based teaching (CitationStatsrevisorerne, 2012). They also specifically criticized the universities for not setting goals and norms for research-based teaching and for not providing adequate knowledge on the costs of the various studies. Thus, as with the RR, the critique was as much about insufficient and poor quality teaching as it was about the lack of proper data and management systems at the universities. The notion that the universities — and their lack of internal management systems — were the main culprit was echoed by the major newspapers (CitationInformation, 2012; Politiken, 2012) and by the major opposition parties, though the latter pointed out that the Minister should have taken action much earlier.

In February 2013, the RR issued a follow-up report stating that it would monitor the development in the number of teaching hours and the rate of research-based teaching (CitationRigsrevisionen, 2013d). More precisely, this included checking the Ministry of Higher Education and Science with regard to: its work on creating more transparency in the universities' educational results; its use of contracts to ensure high educational quality; its measures following up on the ECTS evaluation; and its allocation of the money released by the increase of the student performance payment rate. Moreover, the RR stated that it would check the universities' work on securing an adequate level of research-based teaching on all studies and on increasing the transparency of educational results and resource allocation, as well as their implementation of the new accreditation model. A follow-up evaluation of the universities' efforts to enhance educational quality is to be published by the end of 2015.

5 The actions taken by the Ministry and the universities

5.1 Ministerial actions

The Minister of Science, Innovation and Education basically accepted the RR's critique. Yet, he blamed the universities, arguing that after the raise of the performance pay rate the universities no longer had any excuse for not providing enough high quality teaching (CitationRitzau, 2012, 29 August). A few months later, the Minister issued a more balanced statement. He still acknowledged the critique, but he also explained that it did not make sense for the Ministry to impose simple standards on, for example, the weekly number of teaching hours (CitationMinisteriet for Forskning, Innovation og Videregående Uddannelser, 2013). However, as we shall see, while the Ministry did not dictate minimum standards for teaching hours and tenured staff rates, it moved immediately to use these two standards to conduct regular, comparative surveys of the universities' performance. He also explained that the responsibility for the quality of education was shared jointly between the Ministry and the universities.

The Ministry reacted to the critique by accounting for the measures already in place and by launching a series of new ones (CitationMinisteriet for Forskning, Innovation og Videregående Uddannelser, 2013). Apart from the obviously not very successful accreditation system, the Ministry had for around a decade used contracts with each university to secure general political ambitions, such as the quality of education. Some of these contracts, but far from all, contained specific goals promising to increase the amount of teaching offered to students and the rate of research-based teaching prior to the RR's critical report (see below).

Apart from further exploiting the potential of existing measures, the Ministry also launched four new interventions to spur educational quality in general, and the amount of teaching offered to students and the rate of research-based teaching in particular. First of all, the Ministry published an evaluation in August 2012 of the extent to which the extra money granted to the human and social science studies had been used to increase the amount of teaching offered to students and the rate of research-based teaching. This evaluation is now conducted annually in order to monitor the universities' continued work and progress within these two particular aspects of teaching. This will be further explicated in Table 1. Secondly, from 2012, the Ministry would require — through the university contracts — that all universities produced standardized financial accounts for resources allocated to key purposes, namely research, education, expert service to the public and management/administration, in order to ensure transparency and enable (inter- and intra-) university comparisons (cf. CitationStyrelsen for Universiteter og Internationalisering, 2012). Thirdly, a working group was established to look into the actual student workload required by the various university studies in order to ensure that all studies are actually full-time studies. Accordingly, all universities were required to document that the educational activities they offered — at all studies, at all levels — actually amounted to 60 ECTS per year and an average weekly student workload of 37 h. Thus, all universities had to make a numerical translation of all courses, seminars, exercise sessions, supervisory sessions and other educational activities into the number of hours that each student was supposed to put into his or her preparation for participating in these activities. The arithmetic behind this requirement is sketched below.
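To see what this documentation requirement amounts to, consider a minimal worked example. The conversion factor of 27.5 hours per ECTS point is an assumption made here for illustration, drawn from the common European convention of 25–30 hours per credit point; it is not stipulated in the audited documents:

$$60\ \text{ECTS/year} \times 27.5\ \text{h/ECTS} = 1650\ \text{h/year}, \qquad \frac{1650\ \text{h/year}}{37\ \text{h/week}} \approx 44.6\ \text{weeks/year}.$$

Under these assumptions, every course, seminar, exercise and supervisory session across a study programme has to be assigned a share of roughly 1,650 annual student hours, and the shares have to sum to a full working year.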

Table 1 Measures to improve human and social science studies at Danish Universities after 2012.Footnote a

While the three measures mentioned so far mainly aimed at creating more and new visibilities that would illuminate the quality of education and the resources allocated by the universities to further improve the quality of their studies, a fourth measure aimed at further delegating responsibility for educational quality from the Ministry to the universities. Thus, the Ministry launched a thorough revision of the existing university accreditation system, which the RR had criticized as insufficient for ensuring adequate teaching. By transforming the accreditation system from one focusing on specific studies to one focusing on the whole university, the responsibility for the quality of education would now effectively be transferred from the Ministry, or rather an accreditation body under the Ministry, to the universities. The universities generally applauded that they would have more responsibility for auditing their own studies, but they also noted that the key standards informing the self-evaluation would be almost identical to those of the former system. Therefore, the universities' room for discretion on what and how to evaluate the quality of their studies remained rather narrow.

5.2 University actions

The report by the RR provoked quite a stir in university circles. The national Student Council and many university staff in the social sciences and humanities applauded the report and said that it pointed to what everyone knew, namely that the social sciences and the humanities had been neglected in terms of resources for years (CitationUniversitetsavisen, 2012). However, many university staff added that simply augmenting the number of teaching hours would hardly solve the problem of poor quality, which had more to do with students not caring to attend the teaching already on offer (CitationJyllandsposten, 2012a, 23 October).

The university managements responded rapidly to the critique, which they largely accepted. While the university managers pointed to low performance pay rates, even after the 10% raise, as an underlying cause of the low levels of teaching hours and research-based teaching, they acknowledged that they too had a responsibility for prioritizing differently. Accordingly, within a few weeks after the RR's report was published, most universities made quite precise promises to increase the number of weekly teaching hours offered to students and stated less precise intentions to increase the rate of research-based teaching at the social science and humanities studies (CitationRitzau, 2012, 29 August).Footnote6 Two universities (Aalborg and Roskilde) refrained from making promises in terms of concrete numbers on account of their particular, project-based pedagogy. Nevertheless, they did promise to step up the rate of research-based teaching and to ensure that all students received adequate supervision (CitationNordjyske Stiftstidende, 2012).

Table 1 provides an overview of the measures taken since August 2012 — when the RR report was published — by the universities in two areas: increasing the quantity of teaching and/or supervision offered to students and increasing the ratio of tenured to temporary staff. I have focused on these two indicators because they were the main ones explicitly used by the RR to critically assess the quality of university education. Subsequently, these two indicators were taken up by the Ministry of Higher Education and Science to conduct annual comparative surveys of the universities' efforts to enhance educational quality (CitationStyrelsen for Videregående Uddannelser, 2013).

Table 1 shows that four out of six universities went to quite some lengths to increase the amount of teaching (or supervision) hours offered at the human and social science studies. A fifth university (Southern Denmark) promised to increase its already high level of teaching hours. Only one university (Roskilde University) did not announce any increases, though it did actually undertake a minor increase in teaching. Not surprisingly, the universities offering relatively little teaching (Copenhagen University and Aarhus University) decided to make significant increases. More surprisingly, some of the universities offering relatively much teaching (the Copenhagen Business School and Aalborg University) also stepped up their teaching load.

In contrast to the general increase in the amount of teaching and supervision promised (and actually offered), only one out of six universities promised to increase its rate of tenured staff (Copenhagen Business School). One more university (Roskilde) made a modest increase in its number of tenured staff. At least two reasons can be given for the lack of response from the other universities. First, the general percentage of tenured staff is quite high (70–80%) at most studies at most universities. Exceptions are the Copenhagen Business School, which did react, and the University of Copenhagen, which reacted only within the economics study. It may also be noted that the low levels of tenured staff in certain disciplines, notably law and psychology, may partly be understood in light of the longstanding tradition of valuing the input of practitioners within these disciplines. Secondly, and more generally, the moderate responses are linked financially to the decision to offer more teaching. When universities increase the number of teaching hours offered to students, the cheapest way to do so is to hire more temporarily employed academics (the price is around half that of a tenured staff member or less). Given that the universities are responsible for maintaining their activities within more or less given budgets, they had to make a certain trade-off between offering more teaching on the one hand, and increasing the percentage of tenured staff on the other. Accordingly, the University of Copenhagen's (very) limited measures to increase the percentage of tenured staff may partly be understood against the background of the substantial expenditures triggered by its simultaneous decision to significantly increase the volume of teaching offered. In sum, if most universities did nothing and a few universities did little to increase the rate of tenured staff, this was not because they found the ideal of research-based teaching wrong, nor because they found the RR's and the Ministry's way of objectifying this ideal problematic. In fact, they accepted both. However, the universities simply found that tenured staff rates of around 70–80% were highly satisfactory, and therefore only those falling substantially below this threshold were put under pressure — by the Ministry and by local university staff — to act and hire more tenured staff.
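The budgetary logic of this trade-off can be made explicit in stylized form. The sketch below is an illustration only: it assumes, following the rough figure given above, that a teaching hour delivered by temporary staff costs about half of one delivered by tenured staff, and the symbols do not come from the universities' actual accounts. With a fixed teaching budget $B$, a cost $c$ per tenured teaching hour, and a share $\alpha$ of the budget allocated to tenured staff, the total number of teaching hours is

$$H(\alpha) = \frac{\alpha B}{c} + \frac{(1-\alpha)B}{c/2} = \frac{B}{c}\,(2 - \alpha),$$

so every increase in the tenured share $\alpha$ reduces the total teaching volume the same budget can buy, while the tenured fraction of the hours delivered, $\alpha/(2-\alpha)$, rises. Within a given $B$, a university cannot maximize both quantities at once, which is precisely the trade-off described above.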

If the universities' reaction to the push for more teaching in general, and more research-based teaching in particular, was a mixed affair, they all moved fast to accommodate the RR's critique of inadequate budget systems. By the end of 2013, all universities had either fully or almost fully adopted a new budget system defined by the Ministry of Higher Education and Science that would provide more transparent and standardized accounts of expenditures divided according to key institutional purposes, i.e. research, education, expert services to the public, and management/administration (cf. the annual reports 2013 of the Danish universities). In contrast, most universities had previously used budget systems divided around major sources of income and expenditures only. That is, there was no clear link between income and expenditures on the one hand, and the quantity and quality of the key services to be produced by the universities on the other, notably research and teaching. Accordingly, neither the universities themselves nor the outside world could assess the extent to which public money was used efficiently to do research and teaching. On the one hand, this gave the universities a wide room for manoeuvre to decide how to spend the money allocated from the Ministry. On the other hand, the very absence of a performance-oriented budget system meant that the systematic allocation of funding in order to boost the performance of teaching or research was missing from the universities' room for manoeuvre. Accordingly, the move to the new budgeting system does not simply entail that the universities' room for manoeuvre is reduced, but rather that it is changed into something more favourable to the Ministry's ambitions. While it is still too early to examine the actual consequences of this new budgeting system and the visibilities it creates, the modality of power seems fairly clear. The Ministry is intent on governing the universities at a distance, i.e. on enabling and incentivizing the universities to govern themselves in accordance with the Ministry's overall strategy of maximizing the quantity and quality of research and teaching within the given budget. Thus, the new budget system will not only enable the Ministry to gauge the extent to which each university allocates resources to the goals agreed in their contracts, which in itself does not amount to much more than a question of external control. It will also allow and encourage the universities themselves to monitor, evaluate and govern the quality of their educational activities in line with the new budget standards. It is because of the more or less systematic linking of these two elements, the governing of others and the governing of the self by the self, that we may want to regard the new budget as an element in the Ministry's governing of the universities at a distance.

This potential of the new budget to facilitate a certain governing at a distance may be linked to the final element in the RR's critical audit: the accreditation system. As mentioned above, this critique spurred the Ministry's decision to reform the existing accreditation system from one focusing on individual studies to one focusing on the whole institution (each university). The universities were not entirely happy about this. On the one hand, they acknowledged the positive potential of giving each university more discretion in how to evaluate the quality of its own studies. On the other hand, the concrete proposal for the new institutional accreditation system was criticized for containing further central controls on the universities' discretion in how to manage their study programmes (CitationJyllandsposten, 2012b, 24 October). Moreover, universities must still apply for a permit every time they intend to launch a new study. More importantly, the five key accreditation standards from the former accreditation model were retained in the new model, i.e. societal need, research-based knowledge, explicit learning goals, clear structure of the studies, and an internal quality management system. Societal need, before and after the reform, translates into (un)employment ratesFootnote7 for graduates, and research-based knowledge is gauged by the percentage of tenured staff relative to temporary staff (i.e. exactly the same figures that the RR used to criticize the universities). The three other standards are gauged by qualitative — and usually very lengthy — justifications. In brief, the reformed accreditation system is in itself not likely to make any radical changes in the ways in which universities govern their educational activities. Its main contribution seems to rest with the potential it offers to further delegate the responsibility (and blame) for educational practices that find their way into the critical light of the RR from the Ministry to the universities. Moreover, when linked to the Ministry's annual benchmarking of educational quality, measured according to the two simple indicators (cf. Table 1), the universities may use the new freedom enabled by the reformed accreditation system to govern themselves in line with the performance standards espoused by the RR.

6 Discussion and conclusion

This paper outlined and discussed two analytical perspectives with a view to addressing the political consequences of performance auditing and measuring undertaken by SAIs: the intentionalist and the constitutive. In line with CitationLewis (2015), it was argued, firstly, that the intentionalist perspective may go some way in addressing the real political consequences of performance measurement (and auditing) by paying attention to unintended effects. However, I also pointed to some limitations of focusing too narrowly on intentionality. The paper then argued that the constitutive perspective in general, and — within this — the notion of government in particular, may be fruitfully employed to further examine the real political consequences of performance auditing, or, more precisely, how performance audits contribute to making state institutions change their conduct.

The utility of this framework has been illustrated through one — arguably least likely — Danish case: the auditing of the quality of university education. The Danish Audit Office (RR) has little if any knowledge of university studies. However, through the use of a few simple quantifiable indicators based on statistics produced by the Ministry of Higher Education and Science and the universities themselves, the RR's analysis and its ensuing recommendations appeared impartial and trustworthy. By making teaching auditable in terms of two quantifiable indicators, the RR was largely able to convince both the Ministry of Higher Education and Science and the universities that the quality of many studies within the human and social sciences was unsatisfactory. While two universities did problematize the pedagogical implications of the RR's recommendations, all took steps to increase the quantity of teaching. Little change was induced by the second indicator, the percentage of tenured staff, probably because most universities did not accept that they had a problem and because a simultaneous increase of the quantity of teaching and the rate of tenured staff would be financially untenable.

Moreover, the RR managed to make the Ministry of Higher Education and Science impose a new standardized budget system that would increase transparency and enable normalizing comparisons of the universities' allocation of resources according to key institutional goals, including education. Linked with a reform of the existing accreditation system that will further delegate responsibility for the quality of education from the Ministry to the universities, the Ministry is now further able to govern the universities at a distance. The RR's performance audit thus triggered a series of events whereby the universities seem to be further encouraged — if not forced — to monitor, assess and govern the quality of their educational activities in the terms espoused by the RR. We should, of course, be careful not to exaggerate the influence of the RR. After all, the two indicators used to gauge educational quality were not actually invented by the RR; they had been produced for some time by the Ministry. Also, the decision to reform the accreditation system was taken by the Ministry before the RR's critical report, though the latter seems to have sped up the process. Nevertheless, the RR pushed hard for the making of a new budgeting system and, more importantly, pushed to make the Ministry undertake annual benchmarking analyses of the quality of university education based on the two specific standards. Taken together, these processes in general, and the knowledge generated by a few simple indicators in particular, seem to have encouraged universities to govern themselves differently.

This paper has tried to illustrate the utility of a constitutive framework and the Foucauldian concept of government for understanding how we may grasp and analyze the political consequences of performance auditing by SAIs. On the one hand, this is a rather generic framework that may be applied to study performance auditing by SAIs within a range of political and institutional contexts. Moreover, the least likely nature of the case may allow for wider generalizations, at least within the Danish political context. On the other hand, single case studies should always be treated with great caution. More precisely, even within the Danish political context, the performance audits conducted by the RR do not always rely so exclusively on simple, quantitative performance measures as in the present case. One could expect that in cases where more complex and qualitative indicators are used to gauge the performance of state institutions, it would be more difficult for the RR to make these govern themselves differently. Moreover, while the political context of Denmark may resemble that of other Nordic countries and certain other Western European countries, the Danish RR and its relationship to government is organized rather uniquely in that Denmark has a Board of State Auditors acting as an intermediary in that relationship. Finally, it should be noted that the analytical advantage of the constitutive framework (illuminating the constitutive effects of performance auditing) comes with a limitation. As it does not amount to a behavioural theory, such as rational choice or neo-institutional theory, it is not very good at explaining why auditees behave the way they do. To better grasp this, we may profit from other approaches explicitly targeting the combined influence of normative rules and the auditees' strategic calculations (e.g. CitationOliver, 1991). For these reasons there is a clear need for further conceptual development and empirical research on the political consequences of SAI performance audits both inside and outside Denmark.

Notes

1 Earlier versions of this article were presented at The Politics and Consequences of Performance Measurement workshop, Melbourne University, December 2013 and at the ECPR General Conference, Glasgow, September 2014. The article benefited importantly from the comments of the two reviewers from Policy and Society.

2 The term Supreme Audit Institution (SAI) designates those public institutions with the supreme authority within a territorial state to audit the activities of mostly, but not exclusively, central state institutions as specified in national law. Examples are the British National Audit Office, the US Government Accountability Office, and the German Bundesrechnungshof.

3 Financial auditing refers to ensuring that accounts do not contain significant financial errors. Legal–critical auditing refers to ensuring that money is spent in accordance with laws, regulations and other relevant agreements. Performance auditing refers to ensuring that public money is spent reasonably efficiently (CitationRigsrevisionen, 2013c).

4 The total number of RR staff has remained relatively stable since the early 1990s, i.e. between 250 and 300 man years (CitationHenning and Rasmussen, 2013).

5 The student performance payment system implies that the university receives a certain sum of money from the Ministry depending on the number of exams passed by the university's students. The system currently operates with three rates: the highest rate for the experimental sciences, an in-between rate for the non-experimental sciences, and the lowest rate for the social sciences and humanities.
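In stylized form (a sketch only; the actual rates are set in the annual finance acts and are not reported here), the payment to a university can be written as

$$P = \sum_{s} r_{c(s)} \cdot E_s,$$

where $E_s$ is the number of exams passed by students in study programme $s$ and $r_{c(s)}$ is the rate attached to the programme's category $c(s)$: experimental sciences, non-experimental sciences, or social sciences and humanities. The 10% raise examined by the RR was an increase in the lowest of these rates.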

6 University of Copenhagen, University of Aarhus, University of Southern Denmark and the Copenhagen Business School all promised that students at all studies would be offered a minimum of 12 h teaching per week (CitationRitzau, 2012, 29 August).

7 University departments proposing a new study have to substantiate the labour market's need for the study in two ways. They must survey potential employers' (in the private and public sectors) views of the need for the proposed study, and they must compile statistical data on the unemployment rates of students recently graduated from existing studies proximate to the proposed one. Exactly how this is to be done is currently not stipulated.

References

  • Aalborg Universitet . Årsrapport 2013. 2014; Aalborg.
  • Aarhus University . Aarhus University's policy for quality assurance in education. 2013, August http://www.au.dk/fileadmin/www.au.dk/uddannelse/kvalitetsarbejde/Kvalitetspolitik_paa_uddannelsesomraadet__engelsk_oversaettelse.pdf
  • Aarhus University . Præcisering af minimumskrav til undervisning- og vejledningsudbud i forbindelse med Arts’ “Politik for øget studieaktivitet på Arts’ uddannelser”. Bilag. 1. Juli. 2013
  • Aarhus University . Årsrapport 2013. 2014; Aarhus.
  • N. Azuma . The role of the supreme audit institutions in new public management (NPM): The trend of continental countries. Government Auditing Review. 12 2005; 69–84.
  • A. Barry , T. Osborne , N. Rose . Foucault and political reason. 1996; UCL Press: London
  • M. Barzelay . Central audit institutions and performance auditing: A comparative analysis of organizational strategies in the OECD. Governance. 10(3): 1997; 235–260.
  • Copenhagen Business School . At læse = 37 timers arbejdsuge. CBS News. 2012, 13 September http://www.cbs.dk/cbs-news-da/21/laese-37-timers-arbejdsuge
  • Copenhagen Business School . Årsrapport 2013. 2014; København.
  • P. Dahler-Larsen . Constitutive effects of performance indicators: Getting beyond unintended consequences. Public Management Review. 16(7): 2014; 969–986.
  • H. de Bruijn . Performance measurement in the public sector: Strategies to cope with the risks of performance measurement. International Journal of Public Sector Management. 15(7): 2002; 578–594.
  • M. Dean . Governmentality. Power and rule in modern society. 1999; Sage: London
  • A. Desrosières . The politics of large numbers. A history of statistical reasoning. 1998; Harvard University Press: Cambridge, MA
  • M. Foucault . Truth and power. C. Gordon . Power/knowledge. Selected interviews and other writings 1972–1977. 1980; Harvester Wheatsheaf: New York 109–133.
  • M. Foucault . The history of sexuality. The use of pleasure. Vol. 2 1985; Pantheon: New York
  • M. Foucault . Security, territory, population. Lectures at the Collège de France, 1977–1978. 2007; Palgrave Macmillan: Basingstoke
  • A. Grönlund , F. Svärdsten , P. Öhman . Value for money and the rule of law: The (new) performance audit in Sweden. International Journal of Public Sector Management. 24(2): 2011; 107–121.
  • N. Henning , H.B. Rasmussen . Interview at Rigsrevisionen. 2013, 3 September
  • Information . Studerende snydes for undervisning på fuld tid. 2012, 3 September
  • K. Henriksen . Implementering af fakultetsledelsens nye politik for øget studieaktivitet ved IÆKs uddannelser. 2013, 15 August; Institut for Æstetik og Kommunikation, AU: Aarhus
  • L. Justesen , P. Skærbek . Performance auditing and the narrating of a new auditee identity. Financial Accountability & Management. 26(3): 2010; 325–343.
  • Jyllandsposten . 12 timer hjælper ikke. 2012, 23 October
  • Jyllandsposten . Minister strammer grebet om universiteterne. 2012, 24 October
  • A. Kjær . Rigsrevisionsfeltet — En institutionel og historisk analyse af dansk rigsrevision og forvaltningsrevision. (COS-rapport, no. 6/1998) 1998; Center for Offentlig Organisation og Styring: Frederiksberg
  • H.K. Krogstrup , S. Kristiansen . Medarbejdermøde 4. 2013, February http://www.fak.samf.aau.dk/digitalAssets/62/62729_medarbejdermoede-040213.pdf
  • Københavns Universitet . Årsrapport 2013. 2014; KU: København
  • J. Lewis . The politics and consequences of performance measurement. Policy and Society. 2015 10.1016/j.polsoc.2015.03.001. (in press).
  • J. Lonsdale , P. Wilkins , T. Ling . Performance auditing: Contributing to accountability in democratic government. 2011; Edward Elgar: Cheltenham
  • D. MacKenzie , F. Muniesa , L. Siu . Do economists make markets? On the performativity of markets. 2007; Princeton University Press: Princeton
  • Ministeriet for Forskning, Innovation og Videregående Uddannelser . Uddannelsesministerens redegørelse vedr. Statsrevisorernes bemærkninger til Rigsrevisionens beretning om undervisningen på universiteterne. 2013, 11 January; København.
  • Nordjyske Stiftstidende . Ingen løfter om timetal. 2012, 2 September
  • C. Oliver . Strategic responses to institutional processes. Academy of Management Review. 16(1): 1991; 145–179.
  • Politiken . Universiteters tilbud til dagens studenter: Gå selv. 2012, 29 June
  • C. Pollitt , G. Bouckaert . Public management reform. 3rd ed., 2011; Oxford University Press: Oxford
  • C. Pollitt , X. Girre , J. Lonsdale , R. Mul , H. Summa , M. Wærness . Performance or compliance? Performance audit and public management in five countries. 1999; Oxford University Press: Oxford
  • T.M. Porter . Trust in numbers. The pursuit of objectivity in science and public life. 1995; Princeton University Press: Princeton, NJ
  • T.M. Porter . Making things quantitative. Science in Context. 7(3): 1994; 389–407.
  • M. Power . Making things auditable. Accounting, Organizations and Society. 21(2–3): 1996; 289–315.
  • M. Power . Evaluating the audit explosion. Law & Policy. 25(3): 2003; 179–321.
  • K. Reichborn-Kjennerud . Political accountability and performance audit: The case of the auditor general in Norway. 2013; Public Administration. prepublished: http://onlinelibrary.wiley.com/doi/10.1111/padm.12025/abstract .
  • K. Reichborn-Kjennerud , Å. Johnsen . Auditors' understanding of evidence: A performance audit of an urban development programme. Evaluation. 17(3): 2011; 217–231.
  • Rigsrevisionen . Betænkning om retningslinier og handlingsplaner for udvikling af RRs organisation og arbejdsmetoder i perioden 1983–86. (Projekt nr. 81) 1982; Rigsrevisionen: Kbh. http://www.rigsrevisionen.dk/media/101365/strategi_eksperter.pdf
  • Rigsrevisionen . Beretning om undervisning på universiteterne. (Beretning nr. 16/2011) 2012; Rigsrevisionen: Kbh. http://www.rigsrevisionen.dk/publikationer/2012/162011/
  • Rigsrevisionen . Historie og baggrund. 2013; Rigsrevisionen: Kbh http://www.rigsrevisionen.dk/om-os/historie-og-baggrund/
  • Rigsrevisionen . Om statsrevisorerne. 2013; Rigsrevisionen: Kbh http://www.rigsrevisionen.dk/statsrevisorerne/om-statsrevisorerne/
  • Rigsrevisionen . God offentlig revisionsskik — normen for offentlig revision. 2013; Rigsrevisionen: Kbh. http://www.rigsrevisionen.dk/media/8/gor_publikation_netversion_ny_hjemmeside.pdf
  • Rigsrevisionen . Rigsrevisors notat af 11. Februar. 2013; Rigsrevisionen: Kbh. http://www.rigsrevisionen.dk/publikationer/2012/162011/701-13/
  • Ritzau . Minister garanterer mere fokus på timetal. 2012, 29 August
  • Roskilde Universitet . Årsrapport 2013. 2014; Roskilde.
  • D. Shand , P. Anand . Performance auditing in the public sector: Approaches and issues in OECD countries. OECD . Performance auditing and the modernization of government. 1996; OECD: Paris 57–102.
  • P. Smith . On the unintended consequences of publishing performance data in the public sector. International Journal of Public Administration. 18(2 & 3): 1995; 277–310.
  • Statsrevisorerne . Statsrevisorernes bemærkning til beretningen om undervisning på universiteterne. 2012, 29 October; Rigsrevisionen: Kbh. http://www.rigsrevisionen.dk/publikationer/2012/162011/statsrevisorernes-bemaerkning-til-beretningen/
  • Styrelsen for Universiteter og Internationalisering . Vejledning om hovedområde- og formålsfordeling af universiteternes omkostninger. 2012, December; København.
  • Styrelsen for Videregående Uddannelser . Opgørelse af taxameterforhøjelse til humaniora og samfundsvidenskab 2013. 2013, 13 December; København.
  • Syddansk Universitet . Uddannelsesrådsmøde. 2012, 24 October; Det Humanistiske Fakultet. LGB,/lip/lb/lg.
  • P. Triantafillou . Benchmarking in the public sector: A critical conceptual framework. Public Administration. 85(3): 2007; 829–846.
  • P. Triantafillou . New forms of governing. A Foucauldian inspired analysis. 2012; Palgrave Macmillan: Basingstoke
  • Universitetsavisen . Diskussion om timetal får nye vinkler. 2012, 3 September
  • E. Van Loocke , V. Put . The impact of performance audits: A review of the existing evidence. J. Lonsdale , P. Wilkins , T. Ling . Performance auditing: Contributing to accountability in democratic government. 2011; Edward Elgar: Cheltenham
  • G.R. Vanlandingham . Escaping the dusty shelf: Legislative evaluation offices’ efforts to promote utilization. American Journal of Evaluation. 32(1): 2011; 85–97.
  • K. Vrangbæk , K. Reichborn-Kjennerud . The changing role of the Office of Auditor General as accountability forum in Denmark and Norway. 2013; Paper presented at EGPA 2013. Study Group IV: Governance of Public Sector Organizations.
  • J. Wanna , C.M. Ryan , C. Ng . From accounting to accountability: A centenary of the Australian National Audit Office. 2001; Allen and Unwin: Crows Nest, NSW.
