Opportunities and challenges for international institutional data comparisons

Pages 373-390 | Received 23 Jan 2022, Accepted 23 Jun 2022, Published online: 05 Jul 2022

ABSTRACT

This paper discusses empirical comparisons of higher education institutions across world regions. It argues that institutional data systems have the potential to complement the global comparisons promoted by rankings by providing meaningful information on institutional size, budgets, staffing, enrolments and activity profiles. With this perspective in hand, the paper tackles three questions. First, how is it feasible to identify Higher Education Institutions (HEIs) given their complex structures? Second, how is it feasible to define the perimeter of HEI sectors? Third, what kinds of data could be used for comparison, and where are the main data gaps? By analysing institutional data systems across the United States, Europe and Asia, the paper concludes that institutional data systems display some remarkable similarities that make them an important resource for global comparisons; however, variation in the context of data production and usage implies differences in the higher education perimeter and in institutional delimitation. Sensible comparisons, therefore, require explicit knowledge of the institutional context in which data have been generated.

1. Introduction

The higher education literature has highlighted the impact of globalization on the knowledge economy and, more generally, on the emergence of a ‘world society’ (Drori, Meyer, and Hwang Citation2006) of higher education systems (Marginson and Van der Wende Citation2007). While the internationalization of higher education and research is not new, these processes have assumed a new quantitative and qualitative dimension in recent decades with the increase in cross-border education and the growth of global research collaborations (Altbach Citation1999; Rumbley, Altbach, and Reisberg Citation2012). More importantly, a sense has emerged that Higher Education Institutions (HEIs) are competing globally for reputation and excellence, as observed through the lens of international rankings (Hazelkorn Citation2015). While most HEIs remain embedded in national higher education systems, where they acquire the lion’s share of their resources and deliver services to local communities (Jongbloed, Enders, and Salerno Citation2008), national and even local stakeholders such as students and companies increasingly look to global comparisons to orient their decisions. Indeed, the term ‘glonacal’ has been coined to designate this dynamic, in which higher education phenomena exist and interact across global, national and regional levels (Marginson and Rhoades Citation2002).

International rankings have spurred the idea that it is possible to compare HEIs globally and have provided an easy-to-use set of comparisons based on purportedly well-defined metrics (Hazelkorn Citation2017). By doing so, they disregard the diversity of missions, profiles and societal contexts within which HEIs are funded and operate (Jongbloed, Enders, and Salerno Citation2008; van Vught and Ziegele Citation2012) and the fact that higher education systems are diverse in terms of how they are organized and governed and of their role within society.

While it might be argued that such global comparisons are simply meaningless, in this paper we take a more pragmatic stance. We consider that comparisons of institutions across world regions are increasingly requested by different groups of users and are also affecting competition at national and regional levels. Moreover, such comparisons already exist in international rankings, with their well-known conceptual and methodological problems (Van Raan Citation2005; Saisana, d’Hombres, and Saltelli Citation2011). Instead of denying this situation, we consider it more important to contribute to its improvement.

More specifically, our position paper deals with the potential of institutional data systems, i.e. those systems managed by public authorities that provide data and indicators at the level of individual HEIs. Unlike system-level comparisons, institutional data systems have not been designed to compare HEIs across countries or world regions, as they are bound to specific contexts and have been created for specific purposes (Borden et al. Citation2013; Lepori and Bonaccorsi Citation2013).

While some scholars within the scientometric community have suggested that institutional data systems are not comparable and cannot be used for global comparisons (Glänzel, Thijs, and Debackere Citation2016), we suggest that these data have an important, but unexploited, potential for complementing global rankings on two grounds. First, they provide information on complementary dimensions of HEIs, notably their level of input (Abramo and D’Angelo Citation2016) and educational activities (Daraio, Bonaccorsi, and Geuna Citation2011). Second, they include contextual information on the structure of higher education systems that could make global comparisons more sensitive to national and regional contexts.

With this perspective in hand, this paper establishes a practical position for exploiting data derived from institutional data systems for global comparisons. It does so by, first, looking at the purposes and audiences for which these systems have been developed, and second, by analysing how these choices reverberate into seemingly technical questions, such as deciding which institutions and organizations should be included (the perimeter), identifying individual HEIs given their complex and multilevel structure and defining which data and indicators should be provided. Third, we suggest some directions for meaningful comparisons at the global level that could limit the inherent biases due to the context-related nature of data.

While this is not an empirical paper, we nevertheless illustrate our arguments with examples from institutional data systems in three world regions – the United States, Europe and Eastern and South-eastern Asia – derived from the literature and from the authors’ own experience in managing such systems. In terms of global reputation, as measured by international rankings, these three regions constitute the core of global higher education, including about 90 of the top 100 institutions in the Academic Ranking of World Universities (https://www.shanghairanking.com/). In 2019, they enrolled, respectively, 9% (US), 12% (Europe) and 32% (Eastern and South-eastern Asia) of tertiary education students, with the remainder mostly accounted for by Southern Asia (20%), Latin America and the Caribbean (12%) and Africa (7%) (http://data.uis.unesco.org).

2. From systems’ to institutional comparisons

Against the claim, advanced by international rankings, that indicators are ‘universal’, the sociology of measurement has demonstrated their contextual and user-specific nature (Barré Citation2004). Indicators have been developed to answer the needs of specific audiences and reflect their interests and power – such as the wish of the state to control society, which has been a driving force behind the development of official statistics (Desrosières Citation2001). This also applies to international statistics on science and higher education, which were developed first and foremost to compare national investment in these areas (Godin Citation2005).

Therefore, the design of indicators is frequently driven by controversies and power struggles that flare up around seemingly technical issues, such as the definition of an HEI and inclusion criteria; however, once a settlement has been achieved, indicators tend to assume ‘a taken-for-granted status where existing controversies and methodological issues are removed’ (Lepori and Bonaccorsi Citation2013). This represents both their strength and their weakness. Indicators are easy to use because users do not have to bother about definitions, methodological issues and comparability problems. Through such use, however, indicators become powerful instruments for reproducing existing social orders, as in the case of rankings (Sauder and Espeland Citation2009). The development of higher education statistics and indicators, therefore, reflects structural changes in the governance of higher education and cannot be understood without taking into account their context of production and usage.

Higher education statistics were first developed in the 1960s, thanks to the work of international organizations such as the OECD and UNESCO (Godin Citation2005). This work was spurred by the goal of comparing national systems along dimensions such as education participation and attainment, R&D investment and economic innovation. Comparability was achieved through methodological manuals that specified the units of analysis, definitions and procedures for data collection, as codified in the UNESCO-OECD-EUROSTAT manual on educational statistics (UOE Citation2013) and in the Frascati manual of R&D statistics (OECD Citation2015). While, in principle, the data collected would have allowed for more fine-grained analyses, the scope was limited to comparing countries. To cut across differences in educational and research structures, national systems were conventionally divided in terms of educational levels (using the International Standard Classification of Education, ISCED) and sectors of R&D performance (using the sectoral classification of the Frascati Manual).

The outcome has been a range of comparative international statistics, published by the OECD and by the UNESCO Institute of Statistics (OECD Citation2016). The public status of data and their ready-made nature have promoted wide usage not just in policy reports, but also in the scholarly literature.

Despite the effort at standardization, underlying comparability issues were always well known to statistical producers (Wellman Citation2007). Most of them are generated by national structures that do not fit international classifications. For example, United States bachelor’s degrees (four years long and in many cases granting direct access to PhD programmes) do not sit easily in the latest ISCED classification, which is modelled on the European bachelor/master structure. Also, when data are derived from administrative sources, national classifications are frequently used and the mapping to international classifications often involves some approximation, as in the case of educational subjects. Ways of counting personnel and students also vary across countries, for example because of different maximum durations of enrolment. While such issues are usually not visible to data users, they can substantially affect published figures.

Interest in developing public data systems comparing HEIs individually emerged only at a later stage and at the national level. It was promoted by changes in higher education, including massification (Trow Citation2010), marketization (Teixeira et al. Citation2004) and the focus on third mission and societal relevance (Etzkowitz Citation2004; Laredo Citation2007). In this process, higher education has become connected with a broader set of audiences such as scholarly communities (Becher and Trowler Citation2001), students, businesses and societal actors (Jongbloed, Enders, and Salerno Citation2008). While the state maintains a key supervisory role in higher education by regulating and organizing competition (Capano Citation2011), audiences are now empowered with a strategic role, for example by selecting the HEI which best fits their needs for education. To allow for informed choices, the provision of reliable data has become a core task of the state and other independent agencies. Differences in the governance of higher education have however led to different systems.

In the US, the development of an institutional data system was driven by a widely differentiated higher education system and by a competitive setting for resources and students (Borden et al. Citation2013). Because education is the province of local and state governments, as well as the private sector, the US Department of Education was created as early as 1867 for the purpose of collecting statistics about the nation’s schools. As access broadened and federal support to students expanded through the 1960s and 1970s, the US Department of Education started to collect data from all degree-granting institutions in 1966 through the Higher Education General Information Survey (HEGIS) in order to monitor the distribution of federal aid. HEGIS was replaced with the Integrated Postsecondary Education Data System (IPEDS) in the mid-1980s, and participation was made mandatory in 1993 for any institution that distributed federal grants and loans (Aliyeva, Cody, and Low Citation2018).

A variety of constituencies, such as the federal government, state governments, academic researchers and entrepreneurs, started exploring ways to use the data for a variety of purposes and ends. The need to ‘create order’ in a system composed of 50 state systems and including a large private sector led to the establishment of the first systematic classification of US universities, the Carnegie Classification, in the early 1970s (McCormick and Zhao Citation2005). More recently, the federal government’s College Scorecard (https://collegescorecard.ed.gov/) provides consumer-oriented information about student success rates and post-college outcomes for individual HEIs. Many states maintain dashboards for monitoring institutional performance with regard to serving under-served populations and the employment outcomes of college graduates, used for both public policy and consumer transparency purposes; NGOs like the Education Trust and philanthropic organizations like the Bill & Melinda Gates Foundation fund the development of consumer information and accountability data systems and efforts like the Institute for College Access and Success; and researchers, like those at the University of Southern California’s Center for Urban Education, assist institutions in examining and improving equitable outcomes (https://cue.usc.edu/tools/the-equity-scorecard/). While these examples give a taste of the variety of constituents using such data and the purposes for which they are used, they represent only part of a vast and rapidly expanding domain of groups exploiting publicly available data about HEIs for diverse purposes. Such demands also led to a progressive expansion of IPEDS in terms of HEIs covered and data included.

In Europe, the process was slower due to the fragmentation of higher education (and of the related statistical systems) across national jurisdictions. The strong ties between national states and HEIs meant that many data were collected by the state administration to directly manage HEIs and, accordingly, were not available to the general public (Lepori and Bonaccorsi Citation2013). With the ‘steering at a distance’ paradigm spurred by New Public Management (Ferlie, Musselin, and Andresani Citation2008), public national data systems progressively emerged in recent decades. The establishment of these systems was driven by national states’ needs for policy formulation, implementation and monitoring (Lehtonen Citation2015), for example distributing public funding based on the number of students (Lepori and Jongbloed Citation2018). Since the beginning of the twenty-first century, the European Commission has also promoted the establishment of a European higher education area where students can move freely to select the best educational provider (Kehm, Huisman, and Stensaker Citation2009). To this aim, the Commission also promoted the establishment of a European-level Tertiary Education Register that builds on existing national data systems (Lepori and Bonaccorsi Citation2013), as well as other data tools such as a multidimensional ranking offering customized data to students and other stakeholders (van Vught and Ziegele Citation2012).

While several countries in Southeast Asia have built mature higher education systems in the last three decades, national data systems have been slower to develop and remain immature in all but the most advanced countries (Coates Citation2017a). This can be associated with the fact that, in most of these countries, central planning by the state plays a major role in the governance of higher education and research; accordingly, higher education systems and institutions are still in a phase of rapid infrastructure development, and the demand for public data remains limited.

While most countries have developed institutional-level data systems, these are tailored to different contexts of usage – the US system serves a broad set of needs from audiences such as students, companies and philanthropic organizations, alongside the federal government and the states, while European and Asian systems are more tailored to the (varying) needs of public policies. As we discuss below, this also translates into different choices concerning three core questions, i.e. how to identify HEIs, which HEIs to include and which data to provide.

3. Global comparisons: international rankings

The lack of national, sectoral or institutional development of generalizable data and reports spurred the innovation now known as international rankings. These efforts took a different path to international comparisons by relying either on international publication databases such as the Web of Science (Waltman, Calero-Medina, and Kosten Citation2012) or on indicators that can be collected globally, such as the list of Nobel laureates. Some rankings, such as Times Higher Education (2021), also rely on student and scholar surveys and, in some cases, on data collected directly from the institutions through a questionnaire (van Vught and Ziegele Citation2012).

Such initiatives have been criticized on different grounds. On the one hand, scholars have noted methodological flaws (Van Raan Citation2005), the lack of transparency in how rankings are generated (Saisana, d’Hombres, and Saltelli Citation2011) and their instability and sensitivity to contextual factors (Piro and Sivertsen Citation2016). More importantly, most international rankings focus on international research reputation as defined by academic élites (Pusser and Marginson Citation2013; Hazelkorn Citation2015), irrespective of underlying differences in national or regional context (van Vught and Ziegele Citation2012), institutional mission (Bogetoft, Fried, and Eeckaut Citation2007) and organizational size (Abramo and D’Angelo Citation2016). By doing so, international rankings organize global competition based on a specific set of values modelled on US research universities (Brankovic Citation2018; Lepori, Geuna, and Mira Citation2019) and, by focusing users on that set of characteristics, exert powerful pressures for conformity among HEIs (Sauder and Espeland Citation2009).

From a sociological perspective, rankings have taken the opposite path to institutional data systems. While the latter are tailored to specific politico-institutional contexts, rankings made heroic assumptions to allow for global comparisons: that a single set of indicators might describe HEIs and that ‘standardized’ measures from international databases, such as bibliometric indicators, are comparable irrespective of the underlying organizational contexts. The higher education literature has widely criticized these choices on both ideological (Brankovic Citation2018) and methodological grounds (Piro and Sivertsen Citation2016).

In this paper, we suggest that institutional data systems, because of their greater proximity to specific (national) contexts, might help address some of these concerns and provide more specific and contextualized information generated by national systems.

And, indeed, some promising trends are emerging. For example, following the insight that the position in international rankings is strongly associated with institutional size (Abramo and D’Angelo Citation2016) and level of funding (Benito, Gil, and Romera Citation2019), some rankings such as ARWU and Times Higher Education have started to normalize bibliometric output against the number of academic staff, relying on national or continental institutional data systems, and to add contextual information such as legal status and foundation year. Relatedly, a move towards customized rankings focusing on types of institutions (‘young’, ‘technical’ or ‘small’ institutions) has been observed, such as the pioneering U-MULTIRANK project (van Vught and Ziegele Citation2012).
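To make the size-normalization point concrete, the sketch below divides a bibliometric output count by the volume of academic staff reported in an institutional data system. It is a minimal illustration only: the field names (hei_id, publications, fte_academic_staff) and the figures are hypothetical, and a real implementation would also need to align reference years and staff definitions across sources.

```python
# Minimal sketch of size normalization: publications per FTE academic staff.
# All field names and figures below are hypothetical, for illustration only.

from typing import Optional

def publications_per_fte(publications: int, fte_academic_staff: float) -> Optional[float]:
    """Return publications per full-time-equivalent academic staff member,
    or None when the staff figure is missing or zero."""
    if not fte_academic_staff:
        return None
    return publications / fte_academic_staff

# Toy records combining a bibliometric count with staff data from a
# (hypothetical) national register.
heis = [
    {"hei_id": "A", "publications": 4200, "fte_academic_staff": 2100.0},
    {"hei_id": "B", "publications": 4200, "fte_academic_staff": 700.0},
]

for hei in heis:
    ratio = publications_per_fte(hei["publications"], hei["fte_academic_staff"])
    print(hei["hei_id"], ratio)
# Identical absolute output (4200 papers) yields very different values
# (2.0 vs 6.0 per FTE) once institutional size is taken into account.
```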

4. Three core questions for international institutional comparisons

In the following, we focus on three core issues for global comparisons: how HEIs are identified and delimited, which HEIs should be included, and which data should be provided. For each of these issues, we first review the main comparability problems at stake, then discuss the solution proposed by international rankings and, finally, advance suggestions on how to compare institutions globally.

4.1. Defining higher education institutions and their boundaries

Institutional comparisons assume that entities labelled as ‘higher education institutions’ are relevant units of analysis and are recognizable and distinct. This assumption is associated with the (political and scholarly) recognition of HEIs as strategic actors (Bonaccorsi and Daraio Citation2007; Whitley Citation2008), characterized by a clear identity and well-defined boundaries (Brunsson and Sahlin-Andersson Citation2000). Most institutional comparisons also implicitly assume that institutions are stable over time.

However, closer consideration reveals that recognizing HEIs as relevant units of analysis largely depends on the institutional context. This process started earlier in the US, with its tradition of (private) HEIs largely autonomous from the state (Cohen Citation2007), while, until the 1970s, HEIs in many European countries were so closely linked to the state that some scholars considered them non-existent as organizations (Musselin Citation2007). Since the 1980s, reforms inspired by New Public Management (Ferlie, Musselin, and Andresani Citation2008) have promoted stronger autonomy of HEIs in most European countries; accordingly, in most cases it is now considered meaningful to analyse inputs and outputs at the institutional level (Bonaccorsi and Daraio Citation2007). The emergence of institutional data systems reflects this recognition.

Yet, many (context-related) complexities remain. In a few countries, permanent personnel are still employed and paid directly by the state and, accordingly, the HEI budget is largely a fictional construction; moreover, the infrastructure of many European HEIs is still owned and funded from the state budget. Conversely, the budget of many US HEIs includes subsidiaries that are not related to the HEI’s core missions, such as sports, services or publishing houses. These issues mostly affect the input side of HEI data systems, such as finances and personnel.

Multi-level structures are another issue, one that is pervasive in the United States. In the public sector, many state universities are organized as multiple, often separately accredited campuses, and there are frequent changes as to whether these are considered a single entity in the higher education data system or separate institutions. Such changes account for breaks in data series and are not necessarily consistent with how these institutions are treated in publication databases and international rankings. In the private for-profit sector, mergers and acquisitions are frequent and the multi-campus structure is very common, with continuous openings and closures of campuses. In Europe, multi-level structures are less frequent, with the notable exception of France and its complicated setting of Communities of Universities, which regroup different institutions with different statutes.

Many issues are generated by situations in which universities are associated with entities outside the educational sector. The most relevant concerns the delimitation between universities and hospitals, where very different situations are found in terms of legal relationships, financial flows and the appointment of staff. Since health-related research accounts for about one-third of all scientific publications, bibliometric centres have developed approaches to harmonize these data (Calero-Medina et al. Citation2020), yet these are not necessarily consistent with how such issues are treated by statistical authorities for financial and personnel data.

Analysis of HEIs in Asia reveals similar issues and several regional characteristics. Even HEIs with a global vision and ambition are shaped by the higher education system in which they are located (Yang et al. Citation2020). Universities in China, for instance, are established and regulated with prescriptive provisions, though many have affiliated institutions, such as hospitals and science parks, which contribute to the host universities in academic and commercial ways. Physical walls often separate academic from more public or commercial spaces; however, such distinctions become blurred in statistical aggregations. Unlike China, Southeast Asia has relied on private higher education to service growth, particularly in countries such as Indonesia and Malaysia. This has spawned a wide array of market-focused universities which, beyond basic regulatory foundations, are markedly different from established public universities. Rather than controlling subsidiary entities, such universities may themselves be absorbed into broader private, family-owned or listed conglomerates via myriad structures, which complicates the delineation of individual institutions.

Following bibliometric databases, rankings have traditionally cut across these issues by relying on authors’ affiliations, thereby remaining blind to differences in structures that might affect these choices, as in the case of French joint laboratories between universities and public research organizations (PROs), where no clear divide is possible (Mustar and Larédo Citation2002). To address these issues, bibliometric databases have started to cooperate with individual universities to delineate their perimeter; while this might improve data quality, it also creates potential for strategic gaming.

Another issue for organizational delineation is demography. While the core of the university system is stable over time, a number of university mergers have taken place in Europe with the aim of rationalizing the higher education system (Heller-Schuh, Lepori, and Neuländtner Citation2020); indeed, one of the rationales for grouping institutions was to increase their visibility in international rankings (Docampo, Egret, and Cram Citation2015). Some countries, such as Denmark, engaged in extensive system-level restructuring, while others witnessed only isolated demographic events (Pinheiro, Geschwind, and Aarrevaara Citation2016). To deal with demographic changes, international rankings have widely adopted retrospective reconstruction, i.e. projecting the current organizational structure into the past (Waltman, Calero-Medina, and Kosten Citation2012). While this might be reasonable when focusing on current excellence, many uses of data systems, such as policy evaluation and institutional management, require a proper treatment of evolution over time. Standard ways to document organizational demography exist in business registers (EUROSTAT Citation2010) and have been adopted by the European Tertiary Education Register (Lepori Citation2020a), while IPEDS in the US also documents demographic changes in its HEI population.
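The sketch below illustrates the difference between documenting a demographic event and retrospective reconstruction. It is a minimal, assumption-laden example: the event record layout, the entity codes and the enrolment figures are all hypothetical, and actual registers such as ETER or IPEDS use their own, richer schemas.

```python
# Minimal sketch: a register records a merger as a demographic event, and
# 'retrospective reconstruction' projects the current entity into the past by
# summing its predecessors. All codes and figures are hypothetical.

events = [
    # hypothetical: universities U1 and U2 merged into U3 in 2016
    {"type": "merger", "year": 2016, "predecessors": ["U1", "U2"], "successor": "U3"},
]

students = {  # illustrative enrolment per (entity, year)
    ("U1", 2014): 9000, ("U2", 2014): 6000,
    ("U1", 2015): 9100, ("U2", 2015): 6100,
    ("U3", 2016): 15500,
}

def reconstructed_series(entity: str, years: range) -> dict:
    """Sum the entity's own figures with those of its recorded predecessors."""
    perimeter = {entity}
    for event in events:
        if event["successor"] == entity:
            perimeter.update(event["predecessors"])
    return {
        year: sum(students.get((e, year), 0) for e in perimeter)
        for year in years
    }

print(reconstructed_series("U3", range(2014, 2017)))
# {2014: 15000, 2015: 15200, 2016: 15500}
```

Keeping the event record alongside the raw series preserves both views: the reconstructed series for current-structure comparisons and the original entities for analyses of the evolution over time.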

To sum up, while in general, it is reasonable to consider HEIs as a relevant unit of analysis, how these entities are delimited depends on the national system considered and varies across data sources. While global comparisons by rankings have cut across these issues in a highly simplified way, national and regional institutional data systems have gathered a large amount of information that allows identifying those issues that strongly affect global comparisons.

4.2. Defining the list of HEIs to be included

A second core issue for institutional data systems is identifying the list of entities to be included (and the respective inclusion criteria), what we label the institutional perimeter. This has become more complex due to the expansion of higher education worldwide (Trow Citation1979). Consequently, higher education is populated by an increasingly diverse set of institutions beyond traditional universities (Huisman and Kaiser Citation2000), while the boundaries with professional tertiary education are increasingly blurred (Lepori Citation2020b). Choices in this respect reflect the different contexts of usage of data systems.

International rankings adopted inclusion criteria based on measures of research output, such as thresholds on the number of publications (Waltman, Calero-Medina, and Kosten Citation2012), or display only a limited number of institutions. Frequently, this has been justified by the fact that research indicators are not statistically robust when the volume of output is low. While this might be reasonable if the goal is to measure research performance, it screens out HEIs that provide important services for audiences such as students in an international context.

Inclusion criteria in institutional data systems have been driven by nation-specific factors. There is currently no internationally comparable definition of HEIs, as international educational statistics define ‘tertiary education’ in terms of programme and degree characteristics (UOE Citation2013) rather than at the institutional level.

In the US, the perimeter of higher education has been closely tied to the federal government’s role in providing financial support to students attending postsecondary institutions. From 1998, participation in IPEDS was required for any postsecondary institution that wished to make federal financial aid (grants or loans) available to its students, including those offering only vocational programmes leading to certifications. The number of institutions within IPEDS hit its peak in 2014, when a total of 7,236 institutions (4,724 degree-granting and 2,512 non-degree) completed the IPEDS surveys.

Most famously, the Carnegie Classifications provided a taxonomy that both defined the perimeter (accredited degree-granting institutions) and, more notably, distinguished among six broad types of institutions (https://carnegieclassifications.iu.edu).

The expansion of the perimeter of postsecondary education within the United States to include an increasingly diverse array of providers has been supported primarily by the market-responsive nature of all US systems, including postsecondary education. The characterization of institutions through the Carnegie Classification was adopted and simplified by the U.S. News & World Report rankings. Recent efforts to expand the perimeter have been similarly driven by market-related interests, spurred by philanthropic organizations that focus on under-served populations. Efforts like the Credential Engine (https://credentialengine.org) have been established to incorporate apprenticeship providers, certification agencies, and licensing systems into the sphere of postsecondary training. The blurring of lines between degree-granting and non-degree institutions reflects one end of this spectrum that seeks to serve career and technical education needs. However, the same blurring is occurring within the realm of research and doctoral education. National laboratories have long competed with universities for top research talent and for federal research grants. More recently, the corporate sector has developed partnerships with HEIs, especially in pharmaceutical research and defence-related research. Large health complexes, like the Mayo Clinic and the Cleveland Clinic, offer doctoral training with the former accredited to confer degrees.

In Europe, decisions on what is considered a higher education institution are essentially the remit of national ministries (Lepori Citation2020a). Since the 1960s, important parts of professional tertiary education have been integrated into the higher education sector by upgrading professional schools to colleges or Universities of Applied Sciences, although the extent of the process has varied greatly by country (Kyvik Citation2004). These decisions also had practical consequences for data systems: institutions recognized as part of the national higher education system were subject to greater scrutiny and, increasingly, to institutional evaluation (Whitley Citation2008). Accordingly, extensive data systems have been developed at the national level, beyond the minimum requirements of international statistics. By contrast, professional tertiary education providers remained in the shadow of the state and professional associations, and very limited institution-level data are collected in most countries. Reflecting this situation, when constructing a European-level data system, ETER focused on HEIs recognized at the national level, particularly those delivering at least a bachelor’s degree (Lepori and Bonaccorsi Citation2013). While national perimeters remain under state control in all European countries, there has been a significant opening up in the last few decades. In most countries, educational institutions outside the system, such as private ones, can now seek state accreditation, either at the institutional or the programme level, and accordingly be integrated within higher education data systems.

Higher education in Asia is very large and diverse, and the current discussion draws illustrations from China and Thailand. To a certain extent, the rapid growth of higher education in Asia has led to more consistency in the articulation of institutional perimeters.

Higher education institutions in China are defined by the national government and have passed through several periods over the last century. The most recent set of reforms was announced in the mid-1990s and led to the construction of 39 comprehensive universities that would qualify as world-class university candidates. After two decades of rapid expansion, China’s contemporary regular higher education system is composed of 2,631 universities and vocational colleges. This includes 115 centrally administered universities, 75 of which are administered by the Ministry of Education, 38 by other central ministries and two by the Chinese Academy of Sciences. China also has around 300 private HEIs, which take various forms.

Notwithstanding legal and policy similarities, universities differ greatly in characteristics such as selection rates, research productivity, budget and global role. One of the most intriguing ‘perimeter’ considerations concerns foreign or transnational branch campuses, which have an array of partnerships with local or home-country institutions and are depicted and reported in varying ways.

Thailand has around 160 HEIs, of which 57 are public. Autonomous HEIs are public institutions that have reached sufficient maturity to devolve from the government; these institutions, about 27 in number, still receive government funding. The 72 private HEIs resemble autonomous institutions but do not receive government funding and play a more market-facing role in the higher education system. Public institutions enrol over eighty per cent of students and account for most research. These institutions play different roles in Thai society and are historically and commercially nuanced in ways that render comparison difficult domestically, let alone with nearby countries in Southeast Asia.

In summary, the perimeter of HEIs included in institutional systems does not generally obey a unique conceptual logic about what higher education is. Rather, it is the outcome of interactions between different factions: governments, for policy and resource allocation; university researchers, who study the sector; and the media, which provide consumer information to prospective students and their families. European and Asian systems are dominated by a state logic, but the media increasingly play an important game-changing role and push towards a stronger integration of non-public higher education. The US system is characterized by a more distributed setting, with overlapping data systems adopting different choices, and by the unique role of a fourth faction, i.e. philanthropic organizations and related non-governmental organizations that seek to promote the public purposes of higher education.

While international rankings impose an artificial uniformity through a single inclusion criterion, our analysis shows that differences in institutional perimeters convey meaningful information on how higher education is organized and that the position and functions of the HEIs included in rankings might differ across countries.

4.3. Defining which data should be provided

A third core issue is which data should be provided to users. In this respect, our framework suggests that the answer is the outcome of the interaction between normative considerations, user requests and data availability. While numbers cast an allure of certainty and precision, this facility masks an enormous amount of complexity and difference. Beyond technical issues, a core aspect to consider when comparing data is the intended use for which they have been generated, which also drives specific choices in terms of definitions, data production systems and data availability to the public.

International rankings provide a clear-cut answer: the only data that count are those referring to international research reputation, such as bibliometric data, reputational surveys and prizes. As explained, this is modelled on US research universities (Geiger Citation1993), where competition for human resources, students and funding is primarily based on research reputation. Rankings therefore come with a strong normative position on which indicators should be provided and which are globally comparable, as they are based on context-independent data sources (Glänzel, Thijs, and Debackere Citation2016).

By contrast, in most countries institutional data systems are shaped to allow the state to manage higher education systems, for example by distributing public money to HEIs, evaluating institutions and assessing the achievement of policy goals such as producing human resources and fostering inclusion.

Data relevant to these purposes generally include information on student characteristics, participation patterns and graduation, as well as on finances and personnel. Many of these attributes are highly national or cultural, hampering generalizability. What characterizes a ‘disadvantaged student’ in one context is unlikely to resonate in another (Salmi and D’Addio Citation2021); what is considered a ‘credit point’, and an orderly accumulation of ‘credit points’, is unlikely to carry across contexts; the field of learning outcomes assessment has failed to make progress on defining standards; and what is reported as a timely completion in one context may be considered delayed in another (Zlatkin-Troitschanskaia, Pant, and Coates Citation2016). This applies to an even greater extent to financial data, which take a different form and meaning depending on whether HEIs are considered part of the public administration or self-standing corporate units. Recent investigations of higher education productivity have exposed the limitations of such data and shown that sensible comparisons need to take into account their original context of production (Moore, Croucher, and Coates Citation2019; Yang et al. Citation2020).

Historically, European universities were closely entrenched within the public sector and, in administrative matters, tightly controlled by the state (Braun and Merrien Citation1999). This translated into a practice of administrative data collection for management by the responsible ministry, which largely explains why HEI-level data were unavailable to scholars and the general public until the beginning of the twenty-first century (Bonaccorsi et al. Citation2007). However, reforms driven by New Public Management have generated new data needs: since universities have been granted more autonomy in their operations (De Boer, Enders, and Schimank Citation2007), data are needed for purposes of evaluation and for both the external and internal distribution of funding based on outputs such as degrees (Hazelkorn, Coates, and McCormick Citation2018). The idea of transparency and of allowing students to select their institution based on quantitative evidence of performance and quality has emerged as a major driver of European-level data systems such as ETER, U-Multirank, EUROSTUDENT and EUROGRADUATES. However, data generated at the national level still largely serve the purposes of public management by the state, funding agencies and evaluation agencies. Accordingly, data on issues such as student satisfaction, social issues and graduates’ careers are still largely country-based and hardly comparable internationally.

Data collection in most Asian countries tends to mirror that of Europe. Most countries have developed higher education data collections with respect to input-side matters such as funding, students, faculty, teaching hours and buildings, infrastructure and strategic reach. While a raft of country-specific programme-level platforms has been developed to provide more targeted advice (e.g. https://www.applysquare.com, https://www.studyinjapan.go.jp/en, https://studyinindonesia.kemdikbud.go.id), these are not intended for audiences broader than prospective students in search of programmes, and the availability of data generalizable beyond national frames remains limited (Zhong, Coates, and Jinghuan Citation2019).

Finally, in the US, the demand for data from a variety of constituencies and usages pushed the expansion of the national institutional data system well beyond what is observed in the other world regions – besides basic information on HEI activities and resources, IPEDS currently provides a wealth of data on students’ characteristics and origins, enrolment conditions and tuition fees, graduation rates and time to completion, to give just a few examples (Jaquette and Parra Citation2014). Institutional data systems therefore tend to reflect their societal context, in that data are systematically collected and harmonized when a powerful audience requests them.

Despite these differences, institutional data systems in the three world regions also display important commonalities. First, all of them provide some information on educational activities and outputs, as well as on the resources available to HEIs, as these are required by the state. These data are largely complementary to research data collected from international databases. Second, standardization efforts by international organizations such as UNESCO and the OECD have had an important impact, since institutional data systems are also used to produce (aggregated) international statistics (UOE Citation2013). Accordingly, national classifications have largely been mapped to the international classifications, particularly as concerns students and graduates. While underlying comparability issues remain, recent work shows that some basic aggregates, such as total enrolled students and total revenues, are now widely available and can be meaningfully compared across world regions (Lepori, Geuna, and Mira Citation2019).
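As a concrete illustration of such mapping, the sketch below aggregates enrolment recorded under national degree categories into ISCED 2011 levels, the kind of step institutional data systems perform before international aggregates can be compared. The national category names and student counts are invented for illustration; only the ISCED level codes (6, 7, 8) correspond to the actual classification.

```python
# Minimal sketch: map national degree categories to ISCED 2011 levels and
# aggregate enrolment on a comparable basis. Category names and counts are
# hypothetical; the ISCED level codes are real (6 = bachelor's, 7 = master's,
# 8 = doctoral or equivalent).

national_to_isced = {
    "laurea triennale": 6,
    "laurea magistrale": 7,
    "dottorato di ricerca": 8,
}

enrolments = [
    {"programme": "laurea triennale", "students": 12000},
    {"programme": "laurea magistrale", "students": 5000},
    {"programme": "dottorato di ricerca", "students": 800},
]

by_isced: dict[int, int] = {}
for record in enrolments:
    level = national_to_isced[record["programme"]]
    by_isced[level] = by_isced.get(level, 0) + record["students"]

print(by_isced)  # {6: 12000, 7: 5000, 8: 800}
```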

5. What is in common? What we can compare

Our analysis shows that the development of institutional data systems for higher education has been a major trend in recent decades, starting in the United States, then moving to Europe and, more recently, to Asian countries. It has been driven by a number of factors, such as efforts to empower ‘consumers’ to choose higher education providers (Coates Citation2017b), to promote transparency and evaluation in the public sector as demanded by New Public Management (Ferlie, Musselin, and Andresani Citation2008), as well as the increasing role of private providers in higher education and the public-purpose agenda of philanthropic foundations.

Shaped by their origins, these systems display both commonalities and differences. In most cases, they focus on basic information about the volume of inputs (students, personnel, finances) and about activities and outputs in education and research (degrees, publications, patents). In their overall design, they display a convergence towards a model of HEIs as autonomous entities with some strategic capabilities (Bonaccorsi and Daraio Citation2007), which use a set of resources to jointly produce research, education and third-mission activities in different compositions and with different profiles (Van Vught et al. Citation2008). Despite national specificities, efforts by international organizations have achieved some level of comparability, at least for the most aggregated indicators. We also observe a trend towards enhanced availability of data thanks to the development of regional systems, first in the US (IPEDS) and, more recently, in Europe (ETER). Building on these resources, the OECD has recently launched an ambitious project for a harmonized institutional data system covering its member countries, whose establishment would represent a major step towards international comparisons of HEIs.

Our assessment concerning the possibility of using institutional data systems is therefore more optimistic than that of most scholars. Despite their limitations, these systems have the potential to complement international rankings by taking into account institutional size and the diversity of institutional profiles, as they provide reasonable measures of resources (staff or finances) and educational activities (students or degrees). Moreover, because of their connection with specific politico-institutional contexts, institutional data systems include important information on core issues for global comparisons.

At the same time, our analysis shows that international rankings and other private data systems are important game-changers that push institutional data systems to move towards the public domain and broader international comparisons.

Our analysis has also identified several issues that need to be addressed. First, definitions of higher education are too diverse worldwide to allow using national or regional higher education perimeters as a basis for institutional comparisons. The work of classifying HEIs within systems and, accordingly, identifying reasonably comparable groups of institutions, such as ‘research universities’, is key for institutional comparisons (Borden and McCormick Citation2020). Second, it cannot be taken for granted that institutions are identified in the same way in different data systems; accordingly, careful analysis and documentation of their boundaries (and of their variation over time) are needed (Lepori, Geuna, and Mira Citation2019). Third, some knowledge of the institutional context in which data have been generated is required; this also suggests that comparisons should be made by teams with sufficient knowledge of the individual systems.

As definitions and data mature in the coming years, it will also be necessary to recontextualize these efforts. For instance, few reporting mechanisms take account of institutional financial arrangements, even though these are audited and usually public, and even though there are obvious limits to comparing processes and outcomes without reference to budget. Current platforms compare institutions with budgets of several billion US dollars to those with budgets a tenth the size. Of course, issues arise around demarcating and apportioning budgets, but, if the goal is to identify institutions in the same budgetary class, this seems feasible.
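One simple way to operationalize the idea of a ‘budgetary class’ is to bin institutions by the order of magnitude of their total revenues and compare only within a bin. The sketch below is a minimal illustration under that assumption; the institution names and budget figures are hypothetical, and real comparisons would first need to resolve the demarcation and apportionment issues mentioned above.

```python
# Minimal sketch: group institutions into budgetary classes by order of
# magnitude of total revenues (figures are illustrative, in millions of USD).

import math

def budget_class(total_revenues_musd: float) -> int:
    """Order of magnitude of the budget (1 = tens of millions,
    2 = hundreds of millions, 3 = billions)."""
    return int(math.floor(math.log10(total_revenues_musd)))

heis = {"A": 3500.0, "B": 420.0, "C": 380.0, "D": 35.0}  # hypothetical budgets

classes: dict[int, list[str]] = {}
for name, budget in heis.items():
    classes.setdefault(budget_class(budget), []).append(name)

print(classes)  # {3: ['A'], 2: ['B', 'C'], 1: ['D']}
# Only institutions within the same class (e.g. B and C) would be compared.
```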

As all this sectoral work takes shape, and as higher education itself grows in scale and significance, it is important to make parallel advances in broader contexts. Prevailing global volatility is in many ways ushering in a new era of higher education transformation (Coates Citation2020). Research on the social or public engagement of universities is in its infancy. Major thinking and development is taking place around sustainability and social impact indicators. Indeed, there are ample signs that, as these new conceptualizations unfold, the social dimension is playing not just a compartmentalized functional role as one university ‘vertical’ among others, but also has broader relevance and influence on all academic and institutional functions. Almost by definition, data built in this context will need to be generalizable and to go ‘beyond institutions’ and even ‘beyond the sector’. Given that they are being constructed in a global era, information specifications may well ‘bake in’ the need for international generalizability.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Benedetto Lepori

Benedetto Lepori, PhD, is a professor at the Università della Svizzera italiana in Lugano and is coordinator of the European Tertiary Education Register project, i.e. the reference database on European higher education. In the past two decades, he extensively contributed to the methodology and data development on higher education, as well as to studies of institutional differentiation and the organization of higher education systems.

Victor M. H. Borden

Victor (Vic) M. H. Borden, PhD is a Professor of Higher Education and Student Affairs at Indiana University Bloomington. He is the Project Director for the Carnegie Classification of Institutions of Higher Education and for IU's Charting the Future initiative and senior advisor to the Executive Vice President and Interim Provost. Dr. Borden's general area of scholarship is on the assessment and evidence-informed improvement of higher education programmes and institutions.

Hamish Coates

Hamish Coates, PhD, is a Tenured Professor at Tsinghua University’s Institute of Education and Director of the Higher Education Division. The focus of his research is on improving the quality and productivity of higher education. He is considered an authority in large-scale evaluation, tertiary education policy, institutional strategy, assessment methodology, learner engagement, and academic work and leadership.

References

  • Abramo, Giovanni, and Ciriaco Andrea D’Angelo. 2016. “A Farewell to the MNCS and Like Size-Independent Indicators.” Journal of Informetrics 10: 646–651.
  • Aliyeva, Aida, Christopher A. Cody, and Kathryn Low. 2018. The History and Origins of Survey Items for the Integrated Postsecondary Education Data System (2016-17 Update). NPEC 2018-023. National Postsecondary Education Cooperative.
  • Altbach, Philip G. 1999. “The Logic of Mass Higher Education.” Tertiary Education and Management 5: 107–124.
  • Barré, Rémi. 2004. “S&T Indicators for Policy Making in a Changing Science-Society Relationship.” In Handbook of Quantitative Science and Technology Research, edited by H. F. Moed, W. Glänzel, and U. Schmoch, 115–132. Dordrecht: Kluwer Academic Publishers.
  • Becher, Tony, and Paul R. Trowler. 2001. Academic Tribes and Territories: Intellectual Enquiry and the Culture of Disciplines. Buckingham: The Society for Research into Higher Education and Open University Press.
  • Benito, Mónica, Pilar Gil, and Rosario Romera. 2019. “Funding, Is It Key for Standing Out in the University Rankings?” Scientometrics 121: 771–792.
  • Bogetoft, Peter, Harold O. Fried, and Philippe Vanden Eeckaut. 2007. “The University Benchmarker: An Interactive Computer Approach.” In Universities and Strategic Knowledge Creation. Specialization and Performance in Europe, edited by A. Bonaccorsi, and C. Daraio, 443–462. Cheltenham: Edward Elgar.
  • Bonaccorsi, Andrea, and Cinzia Daraio. 2007. “Universities as Strategic Knowledge Creators: Some Preliminary Evidence.” In Universities and Strategic Knowledge Creation. Specialization and Performance in Europe, edited by A. Bonaccorsi, and C. Daraio, 31–81. Cheltenham: Edward Elgar.
  • Bonaccorsi, Andrea, Cinzia Daraio, Benedetto Lepori, and Stig Slipersaeter. 2007. “Indicators on Individual Higher Education Institutions: Addressing Data Problems and Comparability Issues.” Research Evaluation 16: 66–78.
  • Borden, Victor M. H., Angel Calderon, Neels Fourie, Benedetto Lepori, and Andrea Bonaccorsi. 2013. “Challenges in Developing Data Collection Systems in a Rapidly Evolving Higher Education Environment.” New Directions for Institutional Research 2013: 39–58.
  • Borden, Victor M. H., and Alexander C. McCormick. 2020. “Accounting for Diverse Missions: Can Classification Systems Contribute to Meaningful Assessments of Institutional Performance?” Tertiary Education and Management 26: 255–264.
  • Brankovic, Jelena. 2018. “The Status Games They Play: Unpacking the Dynamics of Organisational Status Competition in Higher Education.” Higher Education 75: 695–709.
  • Braun, Dietmar, and F. X. Merrien. 1999. Towards a New Model of Governance for Universities? A Comparative View. London: Jessica Kingsley.
  • Brunsson, N., and K. Sahlin-Andersson. 2000. “Constructing Organizations: The Example of the Public Sector Reform.” Organization Studies 21: 721–746.
  • Calero-Medina, C., E. Noyons, M. Visser, and R. De Bruin. 2020. “Delineating Organizations at CWTS—A Story of Many Pathways.” In Evaluative Informetrics: The Art of Metrics-Based Research Assessment, edited by C. Daraio and W. Glänzel, 163–177. Cham: Springer. https://doi.org/10.1007/978-3-030-47665-6_7.
  • Capano, Giliberto. 2011. “Government Continues to Do Its Job. A Comparative Study of Governance Shifts in the Higher Education Sector.” Public Administration 89: 1622–1642.
  • Coates, Hamish. 2017a. Productivity in Higher Education: Research Insights for Universities and Governments in Asia. Tokyo: Asian Productivity Organisation.
  • Coates, Hamish. 2017b. The Market for Learning. Singapore: Springer.
  • Coates, Hamish. 2020. Higher Education Design: Big Deal Partnerships, Technologies and Capabilities. Singapore: Springer Nature.
  • Cohen, Arthur M. 2007. The Shaping of American Higher Education: Emergence and Growth of the Contemporary System. San Francisco: John Wiley & Sons.
  • Daraio, Cinzia, Andrea Bonaccorsi, Aldo Geuna, et al. 2011. “The European University Landscape: A Micro Characterization Based on Evidence from the Aquameth Project.” Research Policy 40: 148–164.
  • De Boer, Harry, Jürgen Enders, and Uwe Schimank. 2007. “On the Way Towards New Public Management? The Governance of University Systems in England, the Netherlands, Austria, and Germany.” In New Forms of Governance in Research Organizations, edited by D. Jansen, 137–152. Dordrecht: Springer.
  • Desrosières, Alain. 2001. “How Real Are Statistics? Four Possible Attitudes.” Social Research 68 (2): 339–355.
  • Docampo, Domingo, Daniel Egret, and Lawrence Cram. 2015. “The Effect of University Mergers on the Shanghai Ranking.” Scientometrics 104: 175–191.
  • Drori, G., John W. Meyer, and H. Hwang. 2006. Globalization and Organization. World Society and Organizational Change. Oxford: Oxford University Press.
  • Etzkowitz, Henry. 2004. “The Evolution of the Entrepreneurial University.” International Journal of Technology and Globalisation 1: 64–77.
  • EUROSTAT. 2010. Business Registers. Recommendation Manual. Luxembourg: EUROSTAT.
  • Ferlie, Ewan, Christine Musselin, and Gianluca Andresani. 2008. “The Steering of Higher Education Systems: A Public Management Perspective.” Higher Education 56: 325–348.
  • Geiger, Roger L. 1993. Research and Relevant Knowledge: American Research Universities Since World War II. Oxford: Oxford University Press.
  • Glänzel, Wolfgang, Bart Thijs, and Koenraad Debackere. 2016. “Productivity, Performance, Efficiency, Impact – What Do We Measure Anyway? Some Comments on the Paper ‘A Farewell to the MNCS and Like Size-Independent Indicators’ by Abramo and D’Angelo.” Journal of Informetrics 10: 658–660.
  • Godin, Benoit. 2005. Measurement and Statistics on Science and Technology. London: Routledge.
  • Hazelkorn, Ellen. 2015. Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence. Basingstoke: Palgrave Macmillan.
  • Hazelkorn, Ellen (ed.). 2017. Global Rankings and the Geopolitics of Higher Education. London: Routledge.
  • Hazelkorn, Ellen, Hamish Coates, and Alexander C. McCormick. 2018. Research Handbook on Quality, Performance and Accountability in Higher Education. Cheltenham: Edward Elgar Publishing.
  • Heller-Schuh, Barbara, Benedetto Lepori, and Martina Neuländtner. 2020. “Mergers and Acquisitions in the Public Research Sector. Toward a Comprehensive Typology.” Research Evaluation 29 (4): 366–376.
  • Huisman, Jeroen, and Frans Kaiser. 2000. Fixed and Fuzzy Boundaries in Higher Education: A Comparative Study of (Binary) Structures in Nine Countries. Den Haag: AWT.
  • Jaquette, Ozan, and Edna E. Parra. 2014. “Using IPEDS for Panel Analyses: Core Concepts, Data Challenges, and Empirical Applications.” In Higher Education: Handbook of Theory and Research, edited by M. B. Paulsen, 467–533. Dordrecht: Springer.
  • Jongbloed, Ben, Jürgen Enders, and Carlo Salerno. 2008. “Higher Education and its Communities: Interconnections, Interdependencies and a Research Agenda.” Higher Education 56: 303–324.
  • Kehm, Barbara M., Jeroen Huisman, and Bjørn Stensaker. 2009. The European Higher Education Area: Perspectives on a Moving Target. Rotterdam: SensePublishers.
  • Kyvik, Svein. 2004. “Structural Changes in Higher Education Systems in Western Europe.” European Journal of Education 39: 393–409.
  • Laredo, Philippe. 2007. “Revisiting the Third Mission of Universities: Toward a Renewed Categorization of University Activities?” Higher Education Policy 20: 441–456.
  • Lehtonen, Markku. 2015. “Indicators: Tools for Informing, Monitoring or Controlling?” In The Tools of Policy Formulation, ed. Anonymous, edited by A. J. Jordan and John R. Turnpenny, 76–99. Cheltenham: Edward Elgar Publishing.
  • Lepori, Benedetto. 2020a. “A Register of Public-Sector Research Organizations as a Tool for Research Policy Studies and Evaluation.” Research Evaluation 29 (4): 355–365.
  • Lepori, Benedetto. 2020b. “Non-university Education and Professional Institutions.” In Encyclopedia of International Higher Education Systems and Institutions, edited by J. C. Shin, and P. Teixeira, 2118–2124. Dordrecht: Springer.
  • Lepori, Benedetto, and Andrea Bonaccorsi. 2013. “The Socio-Political Construction of a European Census of Higher Education Institutions.” Minerva 51: 271–293.
  • Lepori, Benedetto, Aldo Geuna, and Antonietta Mira. 2019. “Scientific Output Scales with Resources. A Comparison of US and European Universities.” PloS one 14: 1–18.
  • Lepori, Benedetto, and Ben Jongbloed. 2018. “National Resource Allocation Decisions in Higher Education: Objectives and Dilemmas.” In Handbook on the Politics of Higher Education, edited by B. Cantwell, H. Coates, and R. King, 211–228. Cheltenham: Edward Elgar.
  • Marginson, Simon, and Gary Rhoades. 2002. “Beyond National States, Markets, and Systems of Higher Education: A Glonacal Agency Heuristic.” Higher Education 43: 281–309.
  • Marginson, Simon, and Marijk Van der Wende. 2007. Globalisation and Higher Education. OECD Education Working Paper. Paris: OECD.
  • McCormick, A. C., and C. M. Zhao. 2005. “Rethinking and Reframing the Carnegie Classification.” Change 37 (5): 50–57.
  • Moore, Kenneth, Gwilym Croucher, and Hamish Coates. 2019. “Productivity and Policy in Higher Education.” Australian Economic Review 52: 236–247.
  • Musselin, Christine. 2007. “Are Universities Specific Organisations?” In Towards a Multiversity? Universities Between Global Trends and National Traditions, edited by G. Krücken, A. Kosmützky, and M. Torka, 63–84. Bielefeld: transcript.
  • Mustar, Philippe, and Philippe Larédo. 2002. “Innovation and Research Policy in France (1980–2000) or the Disappearance of the Colbertist State.” Research Policy 31: 55–72.
  • OECD. 2015. Frascati Manual. Guidelines for Collecting and Reporting Data on Research and Experimental Development. Paris: OECD.
  • OECD. 2016. Education at a Glance. Paris: OECD.
  • Pinheiro, Rómulo, Lars Geschwind, and Timo Aarrevaara. 2016. “Mergers in Higher Education.” European Journal of Higher Education 6: 2–6.
  • Piro, Fredrik Niclas, and Gunnar Sivertsen. 2016. “How Can Differences in International University Rankings be Explained?” Scientometrics 109: 2263–2278.
  • Pusser, Brian, and Simon Marginson. 2013. “University Rankings in Critical Perspective.” The Journal of Higher Education 84: 544–568.
  • Rumbley, Laura E., Philip G. Altbach, and Liz Reisberg. 2012. “Internationalization Within the Higher Education Context.” In The SAGE Handbook of International Higher Education, 3–26. Thousand Oaks: SAGE.
  • Saisana, Michaela, Béatrice d’Hombres, and Andrea Saltelli. 2011. “Rickety Numbers: Volatility of University Rankings and Policy Implications.” Research Policy 40: 165–177.
  • Salmi, Jamil, and Anna D’Addio. 2021. “Policies for Achieving Inclusion in Higher Education.” Policy Reviews in Higher Education 5: 47–72.
  • Sauder, M., and W. N. Espeland. 2009. “The Discipline of Rankings: Tight Coupling and Organizational Change.” American Sociological Review 74: 63–82.
  • Teixeira, Pedro, Ben Jongbloed, David Dill, and Alberto Amaral. 2004. Markets in Higher Education. Rhetoric or Reality? Dordrecht: Kluwer Academic Publishers.
  • Trow, Martin. 1979. Elite and Mass Higher Education: American Models and European Realities. Stockholm: National Board of Universities.
  • Trow, Martin. 2010. Twentieth-century Higher Education: Elite to Mass to Universal. Baltimore: JHU Press.
  • UOE. 2013. UOE Data Collection on Education Systems. Volume 1. Manual. Concepts, Definitions, Classifications. Montreal, Paris, Luxembourg: UNESCO, OECD, Eurostat.
  • Van Raan, Anthony F. J. 2005. “Fatal Attraction: Conceptual and Methodological Problems in the Ranking of Universities by Bibliometric Methods.” Scientometrics 62: 133–143.
  • Van Vught, Frans, Jeroen Bartelse, David Bohmert, Jon File, Christiane Gaethgens, Saskia Hansen, Frans Kaiser, et al. 2008. Mapping Diversity. Developing a European Classification of Higher Education Institutions. Enschede: CHEPS Center for Higher Education Policy Studies.
  • van Vught, F. A., and F. Ziegele. 2012. Multidimensional Ranking. The Design and Development of U-Multirank. Dordrecht: Springer.
  • Waltman, Ludo, Clara Calero-Medina, Joost Kosten, et al. 2012. “The Leiden Ranking 2011/2012: Data Collection, Indicators, and Interpretation.” Journal of the American Society for Information Science and Technology 63: 2419–2432.
  • Wellman, Jane V. 2007. Apples and Oranges in the Flat World: A Layperson's Guide to International Comparisons of Postsecondary Education. Washington, DC: American Council on Education, Center for Policy Analysis.
  • Whitley, Richard. 2008. Constructing Universities as Strategic Actors: Limitations and Variations. Manchester Business School.
  • Yang, Jiale, Chuanyi Wang, Lu Liu, Gwilym Croucher, Kenneth Moore, and Hamish Coates. 2020. “The Productivity of Leading Global Universities: Empirical Insights and Implications for Higher Education.” In Responsibility of Higher Education Systems, edited by B. Broucker, V. Borden, T. Kallenberg, and C. Milsom, 224–249. Leiden: Brill.
  • Zhong, Zhou, Hamish Coates, and Shi Jinghuan. 2019. Innovations in Asian Higher Education. Singapore: Routledge.
  • Zlatkin-Troitschanskaia, Olga, Hans Anand Pant, and Hamish Coates. 2016. Assessing Student Learning Outcomes in Higher Education: Challenges and International Perspectives. London: Taylor & Francis.