
An evolution of performance data in higher education governance: a path towards a ‘big data’ era?


ABSTRACT

Performance data in higher education has gone through a major development in the last few decades. Simple input measures have given way to increasingly nuanced and dynamic output measures, and performance indicators have become an integral part of management at the organisational and system level. The evolution of higher education performance measurement shows an iterative relationship between data availability, its purpose in a governance system and its target audience. Digitalisation of learning, management and communication systems has revolutionised data availability, creating new possibilities for ‘big data’ use. Based on insights from this past evolution, current experiments with ‘big data’ and lessons from other sectors, the article explores what the new ‘big data’ era might mean for higher education governance. The high volume of data, but also its speed of accumulation and the related analytical techniques, is likely to substantially transform the current relationship between data and performance, while also creating technical, ethical and policy challenges.

Introduction

Performance data in higher education has gone through a major development over the last few decades. Performance management has become a normal practice within higher education institutions and many countries use performance indicators actively in higher education governance (Martin, 2018). Moreover, significantly more data is being collected and more sophisticated indicators have been generated to capture the complexity of performance. The rapid development shows no signs of slowing down. Digitalisation of learning platforms, student records, management information systems and communication routes creates an unprecedented amount of data, in real time and with a high level of precision and accuracy. Transforming the unstructured data into valuable information that can be used for improving the core tasks of higher education is an important challenge for the future of the sector. In the business sector it is widely recognised that adaptation to ‘big data’ is now critical for competitive advantage and long-term growth (Manyika et al., 2011). Similarly, public sector leaders find that ‘big data’ will make a significant contribution to their work and reduce the costs of providing services to the public (Mullich, 2013). Universities are no exception in this trend.

Performance data plays an important part in higher education governance today. What data is being collected and for what purpose it is being used, however, is constantly shifting. This paper zooms in on the evolution of performance measurement in higher education over the last few decades. A look into the past helps to understand the current state and controversies around performance data use, but it can also offer valuable insights into the potential contribution of the new, ‘big data’ paradigm for the future. As performance measures have become increasingly sophisticated and performance management in universities has become professionalised, it is easy to forget that performance measurement has a strong normative foundation. The evolution in performance measurement has been driven not only by accumulating technical expertise and investments into better data collection. Performance measurement also incorporates value-based decisions about what should be measured, how it should be measured and for what purpose it is being measured (Pollitt, 2018; Beerkens, 2015). We can observe an iterative process between what indicators are available and how indicators are used in the governance process. While technical expertise and investments into measurement broaden the scope and sophistication of available measures, political processes define the priorities in measurement, the importance of measurement as a governance principle and the policy instruments that link specific incentives to the indicators.

There are considerable differences across countries in how performance data is being used, both in higher education (Martin, 2018) and in the public sector in general (Pollitt, 2018). The details in this article are informed primarily by developments in the Anglo-Saxon countries (the United Kingdom (UK), Australia, Canada) and in the Netherlands: countries that are relatively performance- and data-minded in their approach to higher education. Nevertheless, similar trends and discussions are recognisable in most other higher education systems, even if less crystallised. As often reported in the literature on policy convergence and cross-country comparisons, convergence in policy narrative is often stronger than convergence in actual policy instruments. This is likely to be the case also with performance management in higher education. This article cannot do justice to specific differences between countries in their use of performance data; it aims to capture only the general trend and the dominant narrative in performance measurement over the last few decades.

The purpose of the analysis in this article is to offer insight into the following question: is the ‘big data’ revolution taking us to a new evolutionary phase in the nature and use of performance data in higher education? The article starts by clarifying the conceptual argument about the normative nature of performance data and the iterative relationship between data availability and its use in the governance structure. Thereafter, it zooms in on the evolution of performance data over the last few decades and explores the possible effect of ‘big data’ on the future of higher education governance.

A spread of performance measurement and political value choices

Several factors have contributed to the growing interest in performance data. Governance changes in many countries have pushed for more systematic data use. Policy reforms that started to spread from the mid-1980s onwards were inspired by ‘new public management’ ideas (Bleiklie, 2018). Among its many facets, the paradigm emphasises managerial autonomy in public organisations. It promotes the idea of discretionary freedom for organisational leadership to define its strategy and run the organisation in the best possible way. To balance the managerial autonomy and guard the public interest, the leaders are held accountable by the government for the results, that is, for the quality and quantity of their services. Performance measurement is thus at the core of the doctrine (van Dooren & van de Walle, 2008).

Furthermore, the policy reforms introduced new types of steering mechanisms. In addition to ex-post control, competitive market-oriented incentives became a popular policy instrument (Bleiklie, 2018). Instruments such as performance-based funding and performance contracts (CHEPS, 2015), or encouraging competition for students (Beerkens, 2017), cannot work without reliable performance data. Prominent international organisations, such as the World Bank and the Organisation for Economic Co-operation and Development (OECD), have promoted the use of performance indicators all over the world (Chalmers, 2008).

The governance reforms encouraged active performance control and measurement also within universities: both directly via such performance incentives and indirectly by encouraging strategic and business-like practices within institutions. Key performance indicators, information dashboards and business intelligence operations, familiar from the private sector, were increasingly adopted in academic organisations as well. As a result, performance data management within universities has become more organised and professionalised. The capacity to collect, store, manage, analyse and present data has grown considerably.

Last but not least, digitalisation increased data availability and reduced the costs of data collection. Digitalisation of journal research databases gave a major boost to research performance indicators, introducing measures such as citation scores and impact factors. Digitalisation of student records allowed better access to data on student progress. Online surveys simplified data collection on student and alumni satisfaction. Digitalised data sources, therefore, stimulated further development of performance indicators and supported universities in the societal agenda of evidence-based decision making.

As a result of the growing attention on data, performance measurement and performance management have become a field of their own in higher education literature. There are many attempts to define the characteristics of good performance indicators, as well as to analyse the validity and reliability of specific indicators (Cave et al., 2006; Chalmers, 2008; Loukkola et al., 2020). Several large-scale international projects have attempted to define or explore university-level indicators on an international scale. These projects collect university-level statistics from thousands of universities in order to explore the dynamics of the higher education landscape, but also to analyse the efficiency of higher education institutions (Daraio et al., 2011; Bonaccorsi, 2014). A recent project examines performance indicators in selected universities in five European countries, in an attempt to create a comprehensive set of internationally comparable performance indicators (Leiber, 2019; SQELT-PI, 2020).

The purpose of performance management is often seen as an attempt to measure and evaluate the level of performance objectively. Yet such a straightforward view tends to ignore the fact that measurement itself has an effect on the system. Performance measures communicate what key stakeholders see as important and valuable in the activities of the organisation; measures set organisational priorities and attract resources. Furthermore, a poorly designed set of indicators can significantly distort actual performance. The quality of higher education is not easily expressed in quantifiable indicators, which makes the sector vulnerable to the dysfunctional effects of measurement. A performance paradox (van Thiel & Leeuw, 2002), whereby organisations demonstrate higher scores without an actual performance improvement, is not rare. Organisations can develop ‘tunnel vision’ and ‘goal displacement’ behaviour, focusing on those activities that are being measured at the expense of other important but unmeasured activities. Therefore, not only are the validity and reliability of single indicators important, but so is the comprehensiveness of the complete set of measures. The evolution of performance indicators over time is related not only to further improvement and fine-tuning of indicators but also to the need to address the dysfunctional effects that imperfect measures can create over time. Occasionally switching the angle of measurement may be needed in order to allow universities to focus on quality rather than grow in the direction of what is being measured.

Furthermore, the change of indicators over time is a good illustration of changing priorities in the system. The choice of performance indicators is driven by the definition of quality and performance, which is controversial in higher education (Beerkens, 2015). Student satisfaction or student engagement measures, for example, make a strong statement about what educational quality is and who should be the judge of it. Even more often, indicators tend to reflect perceived problems in the current system, which therefore need to be monitored closely. Measures such as study duration, labour market results, student-staff ratio, diversity of the student body, additional learning opportunities for excellent or for disadvantaged students, or a gender ratio all signify a perception of a specific problem in the system. An evolution in performance indicators thus also reflects changes in how the concept of quality is understood and interpreted and, first and foremost, what is seen as a societal or organisational problem that needs to be fixed.

Finally, the optimal nature of performance data is linked to its purpose and target audience. Performance indicators have been used for performance-based funding of universities (for example, in the UK and Australia) and for performance contracts (for example, in Canada and the Netherlands) (CHEPS, 2015). They are used in compiling university rankings and information guides, as well as translated into key performance indicators for internal use. In a comparative study, Siser et al. (1992) concluded that the role performance indicators play in a particular country depends upon the political culture, the administrative context, the funding system and the quality assessment procedures. This means that contextual factors affect what kind of data is being collected.

In general terms, performance indicators are used at all three levels: to reflect on system performance, on the performance of individual universities and on that of sub-units within organisations. In her cross-country analysis, Martin (2018, pp. 141–2) lists five different ways in which indicators are used and by whom.

1. Indicator sets developed by governments for public information and accountability.

2. Indicator sets submitted by higher education institutions to governments at their request for monitoring national policy objectives.

3. Indicators requested by governments from institutions for systems’ management and resource allocation.

4. Indicators developed for international comparison and benchmarking.

5. Indicators developed by institutions for their internal management purposes (such as performance monitoring, internal resource allocation and management).

What is a good indicator depends on the context and its users. Input and process variables are more likely to be relevant for internal monitoring and they may be misinterpreted, or even create dysfunctional effects, if published for an external audience. A student-staff ratio, for example, can be a helpful measure to monitor internal resource use but it has low validity as a proxy for class size and educational quality, as is often assumed (Dill & Soo, 2005). While comparable information across different providers is important for external users and external accountability, it may be primarily the time trend that is relevant for internal use to monitor organisational processes. The advantage of specificity over simplicity of indicators depends strongly on the intended audience. The irresistible appeal of university rankings, however methodologically unsound they may be, lies in their simplicity (Dill & Soo, 2005), particularly for external audiences. Nuanced and detailed data, however, allows monitoring progress and detecting problems early. For any purpose, validity, reliability and robustness to manipulation are important characteristics of good indicators. The costs and feasibility of data collection are often an inevitable trade-off. Tolerance for data imperfection depends again on its use: the stronger the consequences that are linked to the indicators, the higher the risk of dysfunctional effects.

Finally, a rational decision-making process assumes that indicators are designed with a certain purpose in mind. Often, though, the process works the other way around: available data triggers its use as indicators. The influential Jarratt Report of 1985 in the UK stated explicitly that there was no intention to link universities’ funding to quality because there was no reliable way of assessing quality, but ‘this may change when satisfactory performance indicators for teaching are developed’ (Jarratt, 1985, article 2.11). Indeed, appropriate measures were soon developed and linked to a funding scheme, as will be discussed below.

In sum, what should be measured, how it should be measured and to what end it is being measured determines the nature and scope of performance indicators. This also explains why performance indicators are continuously changing over time and why, despite increasing homogenisation between countries and institutions, there are still substantial differences across countries. The next section will take a closer look at how performance indicators have developed over the last few decades and what can be learned from this evolution for the future.

The evolution of performance indicators

This section will analyse the evolution of performance measures in higher education, with a particular focus on the interaction between data availability and its use in the governance system. The evolution is divided into three phases: the phase of the basic proxy measures, the phase of advanced indicators and consumer information and, finally, the phase of ‘big data’.

Towards basic proxy measures

Performance data has not always held such a prominent place in higher education. In an overview study from 1992, Siser et al. (1992, p. 133) concluded that ‘indicators to date appear to have played a relatively minor role in actual policy of national governments and of most institutions’. Only a few years after the statement was published, this characterisation was already difficult to recognise.

There are early attempts to gather performance information and compare higher education institutions with each other. University rankings are one form of performance measurement that was in place in a few countries, particularly in the US, and they preceded conscious efforts in performance management. Modern rankings emerged primarily in market-oriented higher education systems, either as a tool to facilitate students’ choice or as a profitable ‘infotainment’ product. Their original development, however, preceded such a market purpose (Webster, 1986). The first university rankings emerged as a rather eccentric hobby, with very limited visibility. The measures used were based on the very limited data that happened to be available. In the early 1900s, a ranking in the US compared universities based on the number of graduates in the famous Who’s Who list of important people in the country. Another early initiative emerged out of the need of graduate schools to be able to select capable students for their programmes; a ranking was produced by evaluating the success of graduates of different universities in their subsequent graduate studies. In the 1950s, a reputational approach started to develop, ranking universities not by input or output criteria but by surveying the opinion of department chairs or selected academics about their peer institutions (Webster, 1986). Reputational rankings quickly became the dominant approach, applied first to graduate programmes and later also to undergraduate education. While early rankings remained mostly one-time exercises targeting a small circle of interested parties, the U.S. News and World Report revolutionised the ranking industry by launching the America’s Best Colleges guide in 1983. With this, college rankings became a regular, publicly visible instrument for comparing universities.

A boost in collecting performance data arrived in many countries with the political change from the 1980s onwards. Economic circumstances as well as political shifts of the time put the costs of public sector services high on the agenda. Furthermore, massification of higher education created additional pressure on public finances. The United Kingdom is a prime example here. Higher education went through severe budget cuts and universities were expected to increase efficiency in their resource use. It was quickly realised, though, that focusing only on the cost-efficiency of services without any attention to quality is a dangerous road. As stated in the influential Green Paper of the time, ‘sound management is based not only on efficient use of resources (inputs) but also on the effectiveness of results achieved (outputs)’ (Ball & Halwachi, 1987). The influential Jarratt Report from 1985 made the critical comment that universities based their decision-making only to a limited extent on quantitative data and that the available data was rather limited and consisted mostly of input measures (Jarratt, 1985). The report, as well as the governmental Green Paper of the time, explicitly encouraged universities to develop performance indicators (Ball & Halwachi, 1987). As a result, a special Performance Indicators Steering Group was established with the task of conceptualising what a performance indicator in higher education means and proposing a set of possible indicators. The steering group concluded that universities already used some indicators on a regular basis and instead focussed on agreeing on definitions and developing a coordinated list (Siser et al., 1992).

The Netherlands, as another example, followed a very similar direction but with a different twist. A list of indicators was presented in the governmental Higher Education and Research Plan in 1988. Unlike in the UK, it was not a conceptual exercise to define the meaning and purpose of performance indicators but merely a pragmatic list of potential indicators. The report was heavily criticised by the higher education sector: the indicators were found to be unsubstantiated and their purpose remained unclear. As a result, in the Netherlands too a formal working group was established to start developing performance indicators.

What kind of indicators became common in this early phase? The list consists predominantly of data that was available in management systems or could be easily created. It includes many input measures, such as staff characteristics, expenditures related to information and communications technology and educational infrastructure. In the UK and some other countries with a selective entry system, the quality of the incoming students was an important indicator. Nevertheless, some process and outcome measures were also included, for example drop-out rates and study duration, and even the labour market success of graduates was proposed (Siser et al., 1992). Research performance data started to develop rapidly with the help of digitalised databases, counting publications, citations and patents. It became clear that more advanced indicators were needed to evaluate teaching and learning as well.

Towards advanced measurement and consumer information

As performance data rooted itself as a common practice in the governance system, interest in more nuanced measures of teaching performance grew rapidly. Partly inspired by the need to balance the well-established research performance measures, the focus moved to measuring learning outcomes. This required a major investment in data collection. While some outcome measures were widely used (for example, labour market success and drop-out rate), these measures showed fundamental problems and dysfunctional effects.

A problem with simple outcome measures is that the efforts of an institution explain only a part of the outcome, while another substantial part is explained by the characteristics of the entering students, such as their cognitive and social capabilities and their social capital. Linking incentives to such output measures creates a ‘cream skimming’ effect, meaning that institutions are incentivised to adjust their admission and selection policy to improve the output scores, instead of improving the quality of their teaching and support services. The problem became particularly obvious when performance indicators were linked to financial rewards. For example, the Learning and Teaching Performance Fund in Australia claimed to reward teaching excellence, but the results showed that the funds flowed predominantly to highly selective universities whose main strength was in selecting the ‘right’ students (Harvey et al., 2018). This made it very clear that the indicators failed to distinguish performance from outcomes and, furthermore, that the instrument was harmful to another important societal goal: equity.

The concepts of ‘value added’ and ‘learning gain’ thereby became important. A ‘value added’ measure attempts to control for the qualifications of the incoming student and thereby to focus only on the value that the university has added to the development of that particular student. Australia has been a forerunner in developing such measures, controlling for an impressive number of background factors concerning students’ demographic and socio-economic background (Harvey et al., 2018).
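To make the ‘value added’ logic more concrete, the sketch below shows one common way of constructing such a measure: regress the outcome on student background characteristics and read the institution-level average residual as ‘value added’. This is a minimal illustration with simulated data and invented variable names, not the model used in any of the schemes discussed above.

```python
# A minimal 'value added' sketch with simulated data (all names hypothetical).
# The outcome is regressed on entry qualifications and background factors;
# the average residual per institution is then read as its 'value added'.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
students = pd.DataFrame({
    "institution": rng.choice(["A", "B", "C"], size=n),
    "entry_score": rng.normal(60, 10, size=n),   # prior attainment
    "ses_index": rng.normal(0, 1, size=n),       # socio-economic background
})
# Simulated outcome: driven mostly by intake, plus a small institutional effect.
inst_effect = students["institution"].map({"A": 2.0, "B": 0.0, "C": -2.0})
students["final_score"] = (0.7 * students["entry_score"]
                           + 3.0 * students["ses_index"]
                           + inst_effect
                           + rng.normal(0, 5, size=n))

# Ordinary least squares of outcome on background only (intercept included).
X = np.column_stack([np.ones(n), students["entry_score"], students["ses_index"]])
beta, *_ = np.linalg.lstsq(X, students["final_score"], rcond=None)
students["residual"] = students["final_score"] - X @ beta

# Raw averages reward selectivity; average residuals approximate 'value added'.
print(students.groupby("institution")[["final_score", "residual"]].mean().round(2))
```

The contrast between the raw outcome averages and the residuals illustrates why selective institutions can score well on simple outcome measures even when their ‘value added’ is modest.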

The climax of performance measures in this phase was the large-scale OECD project Assessment of Higher Education Learning Outcomes (AHELO) (OECD, 2013). The project piloted an instrument to measure students’ knowledge and competencies at the institutional level on a global scale. The project had the ambition to develop the instrument further into a ‘value added’ measurement by additionally incorporating an early point of measurement as a ‘pre-test’. AHELO demonstrates almost the ideal of what the measures in this phase tried to achieve: a quantitative measurement of learning outcomes that offers transparent comparative information about providers and controls for background characteristics that may bias the results. For both political and technical reasons, the initiative did not grow into a regular assessment tool.

As another new development in this phase, many countries established national student surveys in which students are asked about their satisfaction with various aspects of their study programme. The National Student Survey was launched in the UK in 2005. In Australia, the Student Experience Survey (Harris & James, 2010) and, in the Netherlands, the National Student Survey (NSE) were established around the same time and incorporated into the overall higher education governance system. Next to student surveys, alumni surveys were also set up in these countries to measure satisfaction among recent graduates and to collect information about labour market success. While all these surveys were publicly initiated and funded, in some countries similar initiatives developed within the non-governmental sector. The National Survey of Student Engagement in the United States (US) developed an advanced survey instrument that measures students’ academic engagement, grounded in evidence that engagement is the best predictor of learning quality (Ewell, 2010). In Germany, a large-scale survey of student satisfaction at the programme level was funded by a donation from a privately-owned foundation and carried out by the non-profit Center for Higher Education (Beerkens & Dill, 2010).

The scope and depth of indicators thus advanced considerably in this evolutionary phase, but the dominant purpose of performance information also shifted. Changing indicators reflected changes in ideas about effective policy instruments. Two steering instruments in particular are linked to the new types of indicators: performance-based funding and student choice. Both instruments share a neo-liberal logic (Bleiklie, 2018), creating external incentives and supporting competition between institutions. Performance-based funding or performance-based contracts are now in place in several countries (CHEPS, 2015). For such instruments, accurate performance information is essential. Such a strong policy instrument sets high expectations on the quality of the data and cannot tolerate proxy measures that are only remotely adequate. The push for ‘value added’ measures, for example, was inspired by the strong consequences linked to the indicators. Some institutional leaders and policy makers felt strongly that institutions should be funded according to what they are actually doing and achieving, not according to imperfect signals (Harvey et al., 2018), let alone signals that create severe dysfunctional incentives.
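How performance data feeds into funding can be illustrated with a stylised allocation rule: a share of the budget is distributed in proportion to a weighted composite of normalised indicators. The indicators, weights and figures below are invented for illustration and do not correspond to any country’s actual formula.

```python
# Stylised performance-based funding allocation (invented indicators, weights
# and budget; not any country's actual formula). Each university's share of the
# envelope is proportional to a weighted composite of normalised indicators.
import pandas as pd

indicators = pd.DataFrame(
    {"completion_rate": [0.82, 0.74, 0.69],
     "graduate_employment": [0.91, 0.85, 0.88],
     "student_satisfaction": [4.1, 3.8, 4.3]},
    index=["Uni A", "Uni B", "Uni C"])

weights = {"completion_rate": 0.4,
           "graduate_employment": 0.4,
           "student_satisfaction": 0.2}

# Min-max normalise each indicator so the weights operate on comparable scales.
normalised = (indicators - indicators.min()) / (indicators.max() - indicators.min())
score = sum(normalised[col] * w for col, w in weights.items())

budget = 100_000_000  # hypothetical performance-based envelope
allocation = budget * score / score.sum()
print(allocation.round(0))
```

Even in this toy version it is visible how sensitive the allocation is to the choice of indicators, their normalisation and their weights, which is precisely why flawed proxy measures become so consequential once money is attached to them.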

Financial incentives are not the only ones with strong and potentially dysfunctional effects. The Teaching Excellence Framework (TEF) in the UK ranks universities according to three levels: gold, silver and bronze. Originally the instrument was meant to be linked to financial rewards but, due to changes in the political and financial context, the rewards did not materialise. However, the instrument still creates strong incentives for universities via its reputational effects, as the results are highly visible to prospective students, employers and the public in general.

The ‘student choice’ logic is another powerful instrument that changes the expectations on good indicators. According to this logic, performance data should not be used only as a governmental control instrument but should be made available to (prospective) students as ‘customers’ for their informed choice of provider. As a long-term effect, the mechanism should provide an incentive for universities to improve their performance and remain competitive. The ‘user’-centred approach to higher education was the reason behind starting a student survey in Germany, funded by a pro-business donor foundation (Beerkens & Dill, 2010). To accommodate the logic of student choice, many countries developed online platforms where performance information is presented in an accessible manner, such as the My University website in Australia (DEEWR, 2011) or Studiekeuze123 in the Netherlands. Sometimes the performance information was turned into university rankings. The TEF gold-silver-bronze classification, as well as the German green-yellow-red classification, is often seen by the public as a university ranking of teaching quality.

Furthermore, private platforms and publishers made use of the publicly produced performance information to develop their own, for-profit user guides. University rankings as information tools started to spread in many countries anyway, making use of the best information that was available to the publishers without any major additional costs. Given the accumulating expertise in measuring performance and the awareness of the dysfunctional effects of flawed measures, simple institutional rankings based on easily available proxy measures came under heavy criticism. Attempts to regulate rankings by creating a minimum professional standard culminated in the Berlin Principles on Ranking of Higher Education Institutions.

In sum, this evolutionary phase illustrates very well the linkage between the nature of indicators and their use. The advancement of indicators is partly explained by accumulating expertise and willingness to invest in better indicators. Equally importantly, different steering instruments expect different kinds of indicators. Performance funding made the flaws of old indicators very clear and triggered the development of ‘value added’ measures. When performance indicators are supposed to inform student choice, the indicators have to present information that is relevant for making a choice, which is not necessarily the same as what experts need for assessing teaching quality. Indicators are thus in continuous evolution and, at the same time, new indicators trigger new ideas for their use.

Towards ‘big data’ dominance?

With a growing digital footprint, the amount of available data is growing exponentially, which makes one wonder about the effects of such data on traditional data collection and performance management. Many believe that ‘big data’ leads to a ‘management revolution’ within organisations (McAfee & Brynjolfsson, 2012). Because of big data, managers can know radically more about their organisations and translate the information into better predictions, better interventions and quicker responses. Some claim that the ability to adapt to the revolutionary change will be the main factor in increasing productivity and outperforming competitors (Manyika et al., 2011).

What makes ‘big data’ qualitatively different from traditional data? ‘Big data’ is commonly characterised by three Vs: high Volume, high Velocity and high Variety (McAfee & Brynjolfsson, 2012). The data is large in volume and it comes in a variety of forms. Much of the data is unstructured, for example raw social media data or video material from surveillance cameras. The data is also highly dynamic in its speed of accumulation, offering a real-time picture of ongoing processes instead of the one-time snapshot that traditional data creates. As a result, ‘big data’ requires alternative techniques for data analysis but also a different data management approach.

The potential for ‘big data’ use comes from the digital footprints within and around universities. The amount of recorded data has exploded. Universities’ management systems have become digitalised and there are possibilities for linking various administrative systems and datasets with each other. Students leave a significant digital footprint with their activities. Not only are grades and study progress digital, but many other activities are recorded as well. The frequency and nature of learning management system use are recorded, swipe cards capture the use of the library or a fitness centre and cameras create data on the use of study areas. Furthermore, social media data signals what topics are relevant to students and employees, as well as their opinions about different matters. As social interaction between students moves increasingly online, student-to-student interactions can also be recorded. Such data sources create many more opportunities to monitor performance but also to develop targeted and timely responses.
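As a minimal illustration of the kind of record linkage described above, the sketch below joins (entirely hypothetical) learning-management-system activity and library swipe-card data to the student record; all table and column names are invented, and any real implementation would be subject to the privacy constraints discussed later in the article.

```python
# Hypothetical record linkage across digitalised campus systems.
# All table and column names are invented for illustration only.
import pandas as pd

student_record = pd.DataFrame({"student_id": [1, 2, 3],
                               "programme": ["Law", "Physics", "Law"],
                               "gpa": [7.2, 6.1, 8.0]})

lms_activity = pd.DataFrame({"student_id": [1, 1, 2, 3, 3, 3],
                             "minutes_online": [30, 45, 10, 60, 20, 90]})

library_swipes = pd.DataFrame({"student_id": [1, 3, 3],
                               "visits": [1, 1, 1]})

# Aggregate the behavioural traces per student, then join them to the record.
profile = (student_record
           .merge(lms_activity.groupby("student_id", as_index=False)
                              .agg(lms_minutes=("minutes_online", "sum")),
                  on="student_id", how="left")
           .merge(library_swipes.groupby("student_id", as_index=False)
                                .agg(library_visits=("visits", "sum")),
                  on="student_id", how="left")
           .fillna({"library_visits": 0}))
print(profile)
```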

‘Big data’ is expected to have a significant effect on performance management in (semi-)public organisations (Rogge et al., 2017). It has the potential to substantially improve the effectiveness and efficiency of public services. It may allow better targeting of public services and tailoring of services to individual needs. It offers more detailed knowledge about outputs and outcomes, and thereby also about the effectiveness of various policy interventions. As a US state governor described ‘big data’ use, it allows moving away from the input-driven approach to performance typical of public services towards a results-driven approach: ‘from internal performance measures to citizen-centric measures’ (O’Malley, 2014).

‘Big data’ constitutes a change in the public management paradigm but it is still strongly rooted in the same norms and goals. Margetts and Dunleavy (2013) present Digital Era Governance as the next generation within the New Public Management movement. The original ‘new public management’ focused on competition, incentives and disaggregation by creating single-purpose organisations (Beerkens, 2015). The Digital Era Governance model focusses on reintegrating services, providing holistic services to citizens and implementing thoroughgoing digital changes in administration. As in the first wave, the attention is still strongly on the efficiency and cost-cutting gains that digitalisation and ‘big data’ could realise.

The ‘big data’ paradigm also has substantial effects on organisational leadership and management. O’Malley (2014, p. 555) claimed that a switch to a ‘big data’ approach signified a transformation from bureaucratic governing towards information-age governing, ‘an administrative approach that is fundamentally entrepreneurial, collaborative, interactive and performance driven’. Similarly, McAfee and Brynjolfsson (2012) pointed out that ‘big data’ requires a different leadership style, both in the private and the public sector. They claimed that the whole organisation needs to redefine its understanding of ‘good judgment’. When data are scarce and expensive to obtain, it makes sense to let well-placed people make decisions, based on their past experience and on patterns and relationships they have observed and internalised during their career. Such intuition-based decision making is no longer needed or appropriate in the context of a dynamic and solid evidence base, which allows patterns to be identified with greater precision. Furthermore, a ‘big data’ leadership model expects a different speed of action (Margetts & Dunleavy, 2013). Instead of taking time to collect data, analyse it and formulate a response, organisations are now expected to be agile and resilient, responding to problems in real time.

‘Big data’ is thus transforming how performance is monitored and how data is used in the process of performance management. The next section will zoom in on specific areas in higher education where ‘big data’ is likely to make a substantial difference and thereafter elaborate on the interaction between what data is available and what is seen as an appropriate use of data.

The use for ‘big data’ in higher education

‘Big data’ in higher education can influence processes at different levels. This section focuses on four dimensions in particular where ‘big data’ has already changed, or is likely to change, current practices substantially. The dimensions range from the level of a single course within a programme up to the monitoring and steering of the entire higher education system.

Student learning

Learning analytics, an attempt to understand the learning process of an individual student and the performance of a class as a whole, has received much attention in recent years. Digital data collected via online courses (such as Massive Open Online Courses, MOOCs) or via digital learning platforms creates an opportunity to observe students’ achievement as well as their learning behaviour, such as time investment and engagement. Data that allows studying the link between learning outcomes and the learning process is an invaluable source of information for developing an effective learning environment.

Learning analytics has multiple direct contributions. Class-level data can offer valuable insights for adjusting educational material and processes towards a more effective and efficient learning environment, both for students and for the organisation. There are numerous case studies demonstrating a substantial contribution of ‘big data’ to improving learning results, although sometimes the hopes exceed the reality (Viberga et al., 2018). More advanced data collection tools have been used for analysing behavioural responses to learning material, such as observing facial expressions to predict students’ level of engagement, frustration and interest in the material (Dede et al., 2016). Learning analytics increasingly moves beyond a single course and aggregates information from multiple courses and from multiple sources for a more complete picture. Furthermore, learning analytics allows focussing on the needs and progress of a single student. Such data allows designing timely interventions for individual students, as well as creating individualised learning paths (Dede et al., 2016).
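The basic mechanics of such analyses can be conveyed with a small sketch: engagement traces logged by a learning platform are used to model the probability of course completion, and the resulting per-student probabilities can then inform interventions. The data is simulated and the features (logins, video minutes, forum posts) are merely plausible examples, not a validated model.

```python
# A minimal learning-analytics sketch: model course completion from simulated
# engagement traces (feature names are merely plausible examples).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
logins_per_week = rng.poisson(4, size=n)
video_minutes = rng.gamma(2.0, 30.0, size=n)
forum_posts = rng.poisson(1, size=n)

# Simulated outcome: more engagement raises the probability of completion.
logit = -2.0 + 0.3 * logins_per_week + 0.01 * video_minutes + 0.4 * forum_posts
completed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([logins_per_week, video_minutes, forum_posts])
X_train, X_test, y_train, y_test = train_test_split(X, completed, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))

# Per-student probabilities of non-completion could trigger early support.
risk = 1 - model.predict_proba(X_test)[:, 1]
print("highest predicted risks:", np.sort(risk)[-5:].round(2))
```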

Rapid technological developments contribute to the spread of learning analytics. Many learning platforms have released analytic tools for their users, and the providers of MOOCs use analytical tools regularly to inform their practices (Clouw, 2013). Further technological developments allow extending learning analytics beyond strictly online activities to include physical space as well. Mobile devices such as smartphones, tablets, clickers in classrooms and scanners in libraries can be linked with data from learners’ online activity and thereby provide a more complete understanding of the factors that contribute to learners’ success (Long & Siemens, 2011).

Rapid developments in learning analytics are opening the way to ‘artificial intelligence’ in higher education. Large providers of educational materials and technology, such as Pearson or Google, not only obtain large amounts of data about students but also make major investments in developing the technology further. Artificial intelligence technology allows individualised and automated feedback to the student. Increasingly, it will be used by human instructors for providing feedback or for helping with other ‘virtual teaching assistant’ applications. It may radically change the current notion of performance in higher education. Pearson, one of the largest producers of online educational applications in the world, argued for a revolution in education policy that shifts ‘the focus from the governance of education through the institution of the school to the student as the focus of educational policy and concerted attention to personalising learning’ (Hill & Barber, 2014).

However, learning analytics has been developing rapidly not only because of advancements in technical opportunities. The development is also supported by the spreading norm of data-based decision-making in education and particularly by the rise of learning-focused perspectives on higher education quality (Ferguson, 2012).

Student counselling and services

Digital solutions are not limited to learning systems in the context of a single course but extend to student advising and counselling more broadly. There are several examples of innovative data-based ways of advising students about their study behaviour. The Uni-How experiment at the University of Helsinki, for example, builds on a survey of study habits and well-being, which forms the basis for automated, written feedback to students (University of Helsinki, n.d.). ‘Big data’ can push these intentions further. Several universities have developed applications to detect early warning signs based on grades, attendance and academic behaviour and then refer students to the appropriate facilities depending on the nature of the signs (Picciano, 2012). Student wellbeing and mental health is a growing concern in many higher education systems and ‘big data’ applications may help identify stress, anxiety or depression (Educause, 2020). The great advantage of digital data applications is their timeliness: they can detect problems early on and intervene before a student reaches a critical point.
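An early-warning system of the kind described above can, in its simplest form, be little more than a set of rules over routinely recorded signals. The sketch below is such a minimal rule-based version; the thresholds and field names are invented, and real systems combine far more signals and normally route the flags to a human adviser rather than acting automatically.

```python
# A rule-based early-warning sketch. Thresholds and field names are invented;
# real systems combine many more signals and route flags to human advisers.
from dataclasses import dataclass

@dataclass
class StudentSignals:
    student_id: int
    attendance_rate: float      # share of sessions attended this term
    average_grade: float        # running average on a 0-10 scale
    days_since_lms_login: int   # inactivity on the learning platform

def warning_flags(s: StudentSignals) -> list:
    flags = []
    if s.attendance_rate < 0.6:
        flags.append("low attendance")
    if s.average_grade < 5.5:
        flags.append("grades below pass level")
    if s.days_since_lms_login > 14:
        flags.append("inactive on learning platform")
    return flags

for student in [StudentSignals(1, 0.9, 7.5, 2),
                StudentSignals(2, 0.5, 5.0, 21)]:
    flags = warning_flags(student)
    if flags:
        print(f"student {student.student_id}: refer to adviser ({', '.join(flags)})")
```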

Such applications can be used not only for identifying problems. New courses or extracurricular activities can be recommended to students based on their interests and success in other activities. A digital reflection and feedback system may evolve into a digital tutor and take over some tasks of the human tutor. Advancements in speech-based, interactive, machine-learning tools are likely to develop such an instrument further and offer an experience that is highly comparable to a conversation with a student counsellor. As trust in artificial intelligence is growing even in the field of clinical psychology, its use in student counselling is not a far-fetched idea.

Organisational management

Next to learning analytics, ‘big data’ feeds into academic analytics, business intelligence and data mining activities. While learning analytics focusses strictly on the learning process of a student, these terms refer to processes that focus on institutional, regional and even international levels (Clouw, 2013).

In the corporate world, business intelligence refers to data use that aims at improving organisational efficiency and effectiveness. Similarly, business intelligence tools in higher education can help in areas such as planning study places or optimising costs and personnel in educational tasks. ‘Big data’ solutions help with recruitment and marketing exercises. Data sources for such activities can be varied: linking various administrative databases is a straightforward option, and the use of alternative data sources is developing as well. Universities are conscious users of social media for marketing and other purposes (Palmer, 2013; Belanger et al., 2013). Social media is receiving increasing attention also as an input for evaluating the performance of services. Agostino and Arnaboldi (2017) showed how Twitter data can be used for evaluating service performance in Italian universities. The Twitter data collected for the study demonstrates well the potential for effectively analysing students’ satisfaction with their class work, facilities and potentially other aspects of services.
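To indicate what such text-based performance signals can look like at their simplest, the toy sketch below scores a handful of invented student posts with small positive and negative word lists. The cited study uses far richer text-analysis techniques; this is only meant to illustrate the general idea of turning unstructured posts into a service-feedback signal.

```python
# A toy illustration of mining student posts for service feedback. The posts
# and word lists are invented; the cited study uses far richer text analysis.
posts = [
    "the new library opening hours are great",
    "wifi in the lecture halls is terrible again",
    "really helpful session with the study adviser today",
]

positive = {"great", "helpful", "excellent", "love"}
negative = {"terrible", "awful", "broken", "slow"}

def sentiment(text):
    words = set(text.lower().split())
    return len(words & positive) - len(words & negative)

for post in posts:
    print(f"{sentiment(post):+d}  {post}")
```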

External governance

‘Big data’ tools for policy and oversight are often proactively promoted. On the one hand, ‘big data’ is an important input for data-based policy making and steering of the system; on the other hand, it offers another perspective on performance control and accountability. It promises more data, real-time data and an opportunity for more sophisticated, algorithmic analytical tools to predict patterns, draw conclusions and react quickly.

‘Big data’ and algorithmic thinking are the essence of the risk-based regulation ideas in the UK, as also adopted in higher education (OFS, 2020). The regulatory approach aims at efficient regulation: it wants to avoid overburdening the sector with unnecessary controls when most providers offer a good-quality service. Risk-based regulation attempts to identify high-risk providers from various data sources and signals and then use physical inspections only for high-risk institutions. Identifying high-risk providers is a difficult task, though, and requires creative data use.
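In spirit, such an approach boils down to turning the available signals into a provider-level risk score and concentrating inspection capacity on the highest scores. The sketch below is a deliberately stylised version of that idea; the indicators, weights and cut-off are invented and bear no relation to the regulator’s actual methodology.

```python
# A stylised risk-scoring sketch in the spirit of risk-based regulation:
# combine signals into a provider-level score and inspect only the highest-risk
# providers. Indicators, weights and the cut-off are invented for illustration.
providers = {
    "Provider A": {"dropout_rate": 0.08, "complaints_per_1000": 1.2, "financial_stress": 0.1},
    "Provider B": {"dropout_rate": 0.25, "complaints_per_1000": 6.5, "financial_stress": 0.7},
    "Provider C": {"dropout_rate": 0.12, "complaints_per_1000": 2.0, "financial_stress": 0.3},
}
weights = {"dropout_rate": 2.0, "complaints_per_1000": 0.1, "financial_stress": 1.0}

def risk_score(signals):
    return sum(weights[name] * value for name, value in signals.items())

for name, signals in sorted(providers.items(), key=lambda kv: -risk_score(kv[1])):
    score = risk_score(signals)
    action = "schedule inspection" if score > 1.0 else "desk monitoring only"
    print(f"{name}: risk {score:.2f} -> {action}")
```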

Implications of ‘big data’ in higher education governance

The entrance of ‘big data’ into higher education necessarily brings its own challenges. ‘Big data’ enthusiasts hope that the data can offer a better and quicker understanding of problems and thereby suggest better solutions. Unfortunately, ‘big data’ shares many conceptual limitations with ‘small data’. First, it would be naïve to think that data alone can offer full answers in such a complicated area as education. It is still true that ‘statistics are no substitute for judgement’. Also with ‘big data’, decision-making retains its political character and cannot be replaced by a technocratic approach to performance assessment. As van der Voort et al. (2019) showed in their study, an information logic and a decision logic are fundamentally different. ‘Big data’ can serve decision-makers by providing more and better information, but the amount and quality of information may not be the main limitation for reaching an effective and legitimate solution. The political character of decisions is linked to different opinions, different interests and different value positions, and these are often not reconcilable through better data and analysis alone, especially when decisions have to be made under time pressure.

Furthermore, ‘big data’ can also make some of the traditional data problems ‘bigger’. Performance measures are often criticised in the public management literature for two reasons: they can encourage ‘tunnel vision’, whereby organisations focus on measures and not on quality; and they are trusted blindly by users who have no time or expertise to critically reflect on their meaning. ‘Big data’ can make both of these issues worse. Lavertu (2016) demonstrated the negative effects in the case of sophisticated ‘learning gain’ measures developed in the US primary and secondary education system. Traditional performance measures focussed on standardised scores in reading and mathematics. It became obvious that the standard outcome indicators measured the socio-economic background of the student population more than the performance of the school. To overcome the problem, policy makers demanded ‘value added’ measures and a team of technicians and econometricians developed such measures, controlling outcomes for a large set of background factors. The negative effect was twofold. First, while the original standardised measures were interpreted with a grain of salt because their limitations were clear, the more sophisticated the measure became, the more blindly it was trusted. As a result, the measures became more prominent in the governance system and their other fundamental flaw, that they narrowly measure proficiency in two subjects, disappeared from the radar. So, the issue of ‘tunnel vision’ (focusing entirely on standardised tests) became even worse. Furthermore, external stakeholders are even more likely to make use of such information, as it seems more trustworthy, yet the data is less transparent and less approachable for critical interpretation. With complex measures, technical experts get more power. As Lavertu (2016) demonstrated, technical analysts played a big part alongside policy makers in developing the ‘value added’ measures, thereby determining what a proficient student is and what the important background characteristics are. As a result, technical experts had an important role in making decisions about how to balance goals related to equity and effectiveness, and external political actors reacted punitively without fully understanding what these measures actually captured.

Reduced transparency is an important issue with ‘big data’. Algorithms and data visualisation tools have been created to make it easier to draw conclusions from otherwise incomprehensible amounts of data. The compressed nature of the data means a reduced ability to fully understand its nature and the basis of decisions. It gives a growing role to technical experts and élite political actors in defining which public goals performance measures prioritise (Lavertu, 2016).

Another difficult set of issues related to ‘big data’ concerns various ethical aspects (Long & Siemens, 2011; Slade & Prinsloo, 2013). One important issue concerns privacy. ‘Big data’ tools are founded on linking various data sources and they have the potential to create a near-complete profile of a person. Various applications follow students around campus, observe their behaviour behind the computer and monitor their views and moods on social media, all of which is intrusive to personal privacy. Especially if the data is not used as metadata for anonymous analysis but is personalised to identify students in trouble or to refer them to individualised solutions, the intrusion into personal privacy is severe. Countries have strengthened privacy rules when it comes to data use. The General Data Protection Regulation (GDPR) of the European Union, in force since 2018, creates severe restrictions on what data can be collected, how it can be used and when explicit consent is needed.
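One common technical safeguard when behavioural data is analysed in aggregate rather than used to target named individuals is to pseudonymise identifiers before analysis. The sketch below shows a minimal keyed-hashing version of this; it is only an illustration, and keyed hashing alone does not make data anonymous in the GDPR sense, it merely reduces re-identification risk.

```python
# A small sketch of pseudonymising student identifiers with a keyed hash before
# analysis. Illustration only: keyed hashing does not make data anonymous in
# the GDPR sense, it merely reduces the risk of casual re-identification.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-separately"  # hypothetical key management

def pseudonymise(student_id):
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()[:16]

records = [("s1024", 7.5), ("s2048", 6.1)]
for sid, grade in records:
    print(pseudonymise(sid), grade)
```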

Data security is another side of the same coin. Institutions become owners of personal and sensitive data, which increases cyber-security risks. Also, private, for-profit providers of learning materials store a lot of individual-level data and the agreements about data use are often contested.

‘Big data’ thus demonstrates similar characteristics to the earlier developments around performance data. On the one hand, there is a possibility for more advanced performance measurement, focusing even more closely on learning output. At the same time, the nature of the data will determine how it will be used, by whom and for what purpose. The political aspect of performance and performance management cannot be overcome with a technocratic approach alone.

Conclusion

Performance data has gone through a major evolution over a few short decades. When special task forces were first put together to identify possible performance indicators for the higher education sector, the measures tended to focus primarily on inputs and on some easily available process and output indicators. As knowledge of and interest in indicators rose, more advanced indicators were developed, often requiring large-scale and expensive (national) student surveys. ‘Big data’ will offer new opportunities that will probably make traditional static performance indicators, even the advanced ones, look outdated. ‘Big data’ is dynamic and offers almost real-time insights into various aspects of performance. It opens new avenues for data collection that may make some labour-intensive and expensive data collection instruments obsolete, but these new avenues require their own investments in technology and capacity.

As argued in this paper, performance data is not evolving in isolation. Performance data has developed rapidly because of the political interest in evidence-based steering and data-driven decision making. At the same time, technical advancement in performance data has in turn influenced how it is being used. The nature of performance data is strongly linked with the overall purpose and use of the data. The transition from basic proxy measures to student-satisfaction measures reflected not only the idea that students are in the best position to evaluate the quality of higher education but also the conviction that such data should be available to (prospective) students for an informed choice between different providers. The ‘big data’ approach also takes a specific view. It sees the user as the central point for targeted, individualised services in which the student becomes a co-producer of the services. ‘Big data’ changes the perspective on how performance data is used. Instead of a static snapshot of the level of performance that can be analysed and then lead to further improvement, the data is a direct input into the ‘production process’ and its improvement. External accountability for performance will therefore be less about performance outcomes and more about the organisational capacity and willingness to use data in organisational processes. This corresponds to the overall trend in quality assurance, which moves towards audits of quality processes within the organisation and focuses less on actual quality outcomes.

The evolution of performance data in higher education has been triggered by increasing knowledge and capacity, as well as by technological developments. At the same time, the evolution has been influenced by the political demand for data and shifting expectations about the use of the data. Performance data serves multiple purposes: external accountability towards societal stakeholders, internal accountability within the organisational hierarchy, input for improvement, or input for developing targeted services. A shift from external control towards an organisational mindset of continuously improving the quality of services, as also promoted in recent quality assurance practices, underlines the potential for ‘big data’ use in higher education.

Acknowledgments

The author would like to thank the European Commission (EC) for supporting this work under Erasmus+ project SQELT Grant 2017-1-DE01-KA203-003527. However, the Commission’s support for producing this publication does not constitute an endorsement of the contents, which reflect the views only of the author and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Agostino, D. & Arnaboldi, M., 2017, ‘Social media data used in the measurement of public service effectiveness: empirical evidence from Twitter in higher education institutions’, Public Policy and Administration, 32(4), pp. 296–322.
  • Ball, R. & Halwachi, J., 1987, ‘Performance indicators in higher education’, Higher Education, 16(4), 393–405.
  • Beerkens, M. & Dill, D., 2010, ‘The CHE-University Ranking in Germany’, in Dill, D. & Beerkens, M. (Eds.), 2010, Public Policies for Academic Quality: Analyses of innovative policy instruments, pp. 61–82 (Dordrecht, Springer).
  • Beerkens, M., 2015, ‘Quality assurance in the political context: in the midst of different expectations and conflicting goals’, Quality in Higher Education, 21(3), pp. 231–50.
  • Beerkens, M., 2017, ‘Information issues. Higher education markets’, in Teixeira, P. & Shin, J. (Eds.), 2017, Encyclopedia of Higher Education Systems and Institutions (Dordrecht, Springer).
  • Belanger, C.H., Bali, S. & Longden, B., 2013, ‘How Canadian universities use social media to brand themselves’, Tertiary Education and Management, 20(1), pp. 14–29.
  • Bleiklie, I., 2018, ‘New public management or neoliberalism, higher education’, in Teixeira, P. & Shin, J. (Eds.), 2018, Encyclopedia of International Higher Education Systems and Institutions (Dordrecht, Springer).
  • Bonaccorsi, A., 2014, Knowledge, Diversity and Performance in European Higher Education: A changing landscape (Cheltenham, Elgar).
  • Cave, M., Hanney, S., Henkel, M. & Kogan, M., 2006, The Use of Performance Indicators in Higher Education: The challenge of the quality movement (London, Kingsley).
  • Center for Higher Education Policy Studies (CHEPS), 2015, Performance-Based Funding and Performance Agreements in Fourteen Higher Education Systems, Report for the Ministry of Education, Culture and Science. Available at https://core.ac.uk/download/pdf/31149022.pdf (accessed 27 January 2021).
  • Chalmers, D., 2008, Indicators of University Teaching and Learning Quality (Australian Learning and Teaching Council). Available at https://ltr.edu.au/resources/Indicators_of_University_Teaching_and_Learning_Quality.pdf (accessed 27 January 2021).
  • Clow, D., 2013, ‘An overview of learning analytics’, Teaching in Higher Education, 18(6), pp. 683–95.
  • Daraio, C., Bonaccorsi, A., Geuna, A., Lepori, B., Bach, L., Bogetoft, P., 2011, ‘The European university landscape: a micro characterization based on evidence from the Aquameth project’, Research Policy, 40(1), pp. 148–64.
  • Dede, C., Ho, A. & Mitros, P., 2016, ‘Big data analysis in higher education: promises and pitfalls’, EDUCAUSE Review, 51(5), pp. 23–34.
  • Department of Education, Employment and Workplace Relations (DEEWR), 2011, ‘Development of performance measurement instruments in higher education: discussion paper’. Available at http://www.voced.edu.au/content/ngv%3A52066 (accessed 15 January 2021).
  • Dill, D. & Soo, M., 2005, ‘Academic quality, league tables and public policy: a cross-national analysis of university ranking systems’, Higher Education, 49(4), pp. 495–537.
  • Ewell, P.T., 2010, ‘The US National Survey of Student Engagement (NSSE)’, in Dill, D. & Beerkens, M. (Eds.), 2010, Public Policies for Academic Quality: Analyses of innovative policy instruments, pp. 83–98 (Dordrecht, Springer).
  • Ferguson, R., 2012, The State of Learning Analytics in 2012: A review and future challenges, technical report KMI-12-01 (Milton Keynes, Knowledge Media Institute, Open University).
  • Harris, K.-L. & James, R., 2010, ‘The course experience questionnaire, graduate destination survey, and learning and teaching performance fund in Australia’, in Dill, D. & Beerkens, M. (Eds.), 2010, Public Policies for Academic Quality: Analyses of innovative policy instruments, pp. 99–120 (Dordrecht, Springer).
  • Harvey, A., Cakitaki, B. & Brett, M., 2018, Principles for Equity in Higher Education Performance Funding, Report for the National Centre for Student Equity in Higher Education Research (Melbourne, Centre for Higher Education Equity and Diversity Research at La Trobe University).
  • Higher Education Funding Council for England (HEFCE), 2014, Review of the National Student Survey: Report to the UK Higher Education Funding Bodies (London, HEFCE).
  • Hill, P. & Barber, M., 2014, Preparing for a Renaissance in Assessment (London, Pearson).
  • Jarratt, A., 1985, Report of the Steering Committee for Efficiency Studies in Universities (London, Committee of Vice-Chancellors and Principals).
  • Lavertu, S., 2016, ‘We all need help: “Big data” and the mismeasure of public administration’, Public Administration Review, 76, pp. 864–72.
  • Leiber, T., 2019, ‘A general theory of learning and teaching and a related comprehensive set of performance indicators for higher education institutions’, Quality in Higher Education, 25(1), pp. 76–97.
  • Long, P. & Siemens, G., 2011, ‘Penetrating the fog: analytics in learning and education’, Educause Review, 46(5), pp. 31–40.
  • Loukkola, T., Peterbauer, H. & Gover, A., 2020, Exploring Higher Education Indicators (Brussels, European University Association).
  • Manyika, J., Chui, M. & Brown, B., 2011, Big Data: The next frontier for innovation, competition, and productivity (New York, McKinsey Global Institute Report).
  • Margetts, H. & Dunleavy, P., 2013, ‘The second wave of digital-era governance: a quasi-paradigm for government on the Web’, Philosophical Transactions of the Royal Society A, 371, pp. 1–17.
  • Martin, M., 2018, ‘Using indicators in higher education policy: between accountability, monitoring and management’, in Hazelkorn, E., Coates, H. & McCormick, A.C. (Eds), 2018, Research Handbook on Quality, Performance and Accountability in Higher Education, pp. 139–48 (Cheltenham, Elgar).
  • McAfee, A. & Brynjolfsson, E., 2012, ‘Big data: the management revolution’, Harvard Business Review, 90(10), pp. 60–8.
  • Mullich, J., 2013, Closing the Big Data Gap in Public Sector (Bloomberg Businessweek Research Services, Real-Time Enterprise).
  • O’Malley, M., 2014, ‘Doing what works: governing in the age of big data’, Public Administration Review, 74(5), pp. 555–56.
  • Office for Students (OFS), 2020, A Matter of Principles: Regulating in the student interest, Insight 7 October 2020. Available at https://www.officeforstudents.org.uk/media/2d22f2c7-ccc5-4489-b7d1-59605f166fa9/insight-brief-a-matter-of-principles-regulating-in-the-student-interest-oct-2020.pdf (accessed 25 March 2021).
  • Organisation for Economic Cooperation and Development (OECD), 2013, Assessment of Higher Education Learning Outcomes: Feasibility Study Report (Paris, OECD). Available at http://www.oecd.org/education/skills-beyond-school/AHELO%20FS%20Report%20Volume%201%20Executive%20Summary.pdf (accessed 27 March 2021).
  • Palmer, S., 2013, ‘Characterisation of the use of Twitter by Australian universities’, Journal of Higher Education Policy and Management, 35(4), pp. 333–44.
  • Picciano, A.G., 2012, ‘The evolution of big data and learning analytics in American higher education’, Journal of Asynchronous Learning Networks, 16(3), pp. 9–20.
  • Pollitt, C., 2018, ‘Performance management 40 years on: a review’, Public Money & Management, 38(3), pp. 167–74.
  • Rogge, N., Agasisti, T. & De Witte, K., 2017, ‘Big data and the measurement of public organizations’ performance and efficiency: the state-of-the-art’, Public Policy and Administration, 32(4), pp. 263–81.
  • Sizer, J., Spee, A. & Bormans, R., 1992, ‘The rôle of performance indicators in higher education’, Higher Education, 24(2), pp. 133–55.
  • Slade, S. & Prinsloo, P., 2013, ‘Learning analytics: ethical issues and dilemmas’, American Behavioral Scientist, 57(10), pp. 1510–29.
  • Sustainable Quality Enhancement in Higher Education Learning and Teaching (SQELT-PI), 2020, SQELT Comprehensive Performance Indicator Set, Erasmus+ Strategic Partnership SQELT. Available at https://www.evalag.de/fileadmin/dateien/pdf/forschung_international/sqelt/Intellectual_outputs/sqelt_perfindicset4_o9_201127_final_sec.pdf (accessed 27 March 2021).
  • Turnbull, S., 2018, A Guide to UK League Tables in Higher Education, HEPI Report 101. Available at https://www.hepi.ac.uk/wp-content/uploads/2018/01/HEPI-A-Guide-to-UK-League-Tables-in-Higher-Education-Report-101-EMBARGOED-4-JAN-2018.pdf (accessed 27 March 2021).
  • University of Helsinki, nd, UniHow website. Available at https://studies.helsinki.fi/instructions/article/unihow-system-and-howulearn-questionnaire (accessed 27 March 2021).
  • Van der Voort, H.G., Klievink, A.J., Arnaboldi, M. & Meijer, A.J., 2019, ‘Rationality and politics of algorithms: will the promise of big data survive the dynamics of public decision making?’, Government Information Quarterly, 36, pp. 27–38.
  • Van Dooren, W. & van de Walle, S., 2008, Performance Information in the Public Sector: How it is used (London, Palgrave Macmillan).
  • Van Thiel, S. & Leeuw, F.L., 2002, ‘The performance paradox in the public sector’, Public Performance & Management Review, 25(3), pp. 267–81.
  • Viberg, O., Hatakka, M., Bälter, O. & Mavroudi, A., 2018, ‘The current landscape of learning analytics in higher education’, Computers in Human Behavior, 89, pp. 98–110.
  • Webster, D.S., 1986, Academic Quality Rankings of American Colleges and Universities (Springfield, IL, Schenkman).