
Are there distinctive clusters of higher and lower status universities in the UK?

Abstract

In 1992 the binary divide between universities and polytechnics was dismantled to create a nominally unitary system of higher education for the UK. Just a year later, the first UK university league table was published, and the year after that saw the formation of the Russell Group of self-proclaimed ‘leading’ universities. This paper asks whether there are distinctive clusters of higher and lower status universities in the UK, and, in particular, whether the Russell Group institutions can be said to constitute a distinctive elite tier. Cluster analysis of publicly available data on the research activity, teaching quality, economic resources, academic selectivity, and socioeconomic student mix of UK universities demonstrates that the former binary divide persists, with Old (pre-1992) universities characterised by higher levels of research activity, greater wealth, and more academically successful and socioeconomically advantaged student intakes, but similar levels of teaching quality, compared to New (post-1992) institutions. Among the Old universities, Oxford and Cambridge emerge as an elite tier, whereas the remaining 22 Russell Group universities appear to be undifferentiated from the majority of other Old universities. A division among the New universities is also evident, with around a quarter of New universities forming a distinctive lower tier.

This article is part of the following collections:
Oxford Review of Education - 50th Anniversary

Introduction

In 1992 the UK government formally abolished the binary divide between universities and polytechnics to establish a unitary system of higher education for the UK. The 1992 Further and Higher Education Act granted 35 polytechnics full university status, thereby increasing the number of universities by 50% and doubling the number of university students virtually overnight (Halsey, Citation2000). That the new system was unitary only in name, however, was soon highlighted by the publication of the first university rankings by The Times in 1993, and by the emergence of university mission groups spearheaded by the formation of the Russell Group in 1994. These developments reflected the fact that Old, pre-1992, universities continued to be held in higher regard than New, post-1992, universities and that differences in esteem were prevalent even amongst the Old universities (Scott, Citation1995; Tight, Citation1988).

Indeed, few observers in the mid-1990s and still fewer today would dispute that the UK university system is a differentiated rather than a unitary one. It was and is common knowledge that UK universities differ significantly in a number of respects, chief among them the intensity and measured quality of their research activity, the perceived quality of the teaching and learning experience, the level of economic resources at their disposal, their degree of academic selectivity in admissions, and the socioeconomic mix of their student bodies. Importantly, these differences inform judgements about the status of different universities, or as sociologist Max Weber puts it, ‘specific, positive or negative, social estimation[s] of honour’ (Weber [Citation1946] 2001, p. 136). Given this, it is not surprising that many universities spend a great deal of time and effort trying to improve their position in university league tables (Brown, Citation2006), and that the four universities invited to become members of the Russell Group of ‘leading’ universities in 2012 were reportedly only too happy to pay a £500,000 joining fee (Jump, Citation2013).Footnote1

But while it is well known that the contemporary UK university system is highly differentiated, relatively little is known about how this differentiation is structured today. Are there distinctive clusters of UK universities? If so, how do these relate to university mission groups, and in particular can the Russell Group universities be said to form a distinctive cluster of ‘leading’ universities? The Russell Group has been tremendously successful in promoting its 24 member institutions as the ‘jewels in the crown’ of the UK higher education system (Russell Group, Citation2012). However, this self-proclaimed ascendancy is increasingly being challenged (Coughlan, Citation2014; Fazackerley, Citation2013; Rogerson, Citation2013), including by Sir David Watson, professor of higher education at the University of Oxford, who argued that the Russell Group ‘… represents neither the sector as a whole [nor], in many cases, the best of the sector’ (Morgan, Citation2014).

Several now historic studies of the UK higher education sector found that there were clusters of similar universities in the 1960s, 1970s, 1980s and 1990s. A study of the UK’s 41 universities in 1967 identified four main university types (King, Citation1970);Footnote2 an analysis of data for 47 UK universities in 1977 identified five clusters of universities (Dolton & Makepeace, Citation1982);Footnote3 a study of 51 UK universities in 1984 found six clusters (Tight, Citation1988);Footnote4 and research on 73 universities and 48 other higher education institutions in England in 1992 reported 16 clusters (Tight, Citation1996).Footnote5 The first three studies identified the universities of Oxford and Cambridge as belonging to a distinctive cluster, whereas the fourth study found Oxbridge to be clustered together with 11 of the 22 other institutions that now make up the Russell Group. Notably, all four studies suggest that the future Russell Group universities were not a particularly homogeneous group at that time but were in fact spread across three or four clusters also containing various other Old universities.

The most recent of these previous studies is now some 20 years old and since then the UK higher education system has clearly become much more marketised (Brown, Citation2013). Against this backdrop, the present paper sets out to explore how the differentiation of UK universities is structured today. Hierarchical cluster analysis is used to analyse data on the character of 127 UK universities, drawn primarily from the statistical data published annually by the Higher Education Statistics Agency (HESA). The next section discusses the historical development of distinctions between UK universities, before setting out five key dimensions of status differentiation today, namely: research activity, teaching quality, economic resources, academic selectivity, and socioeconomic student mix. Next, the data and methods used to explore these dimensions of status differentiation among UK universities are described, and some empirical results are presented. The final section sets out the main conclusions of the paper.

The historical development of distinctions between UK universities

Long before the first appearance of university rankings and university missions groups, status distinctions were being drawn between UK universities on the basis of their historical origins. The UK’s two oldest universities, Oxford (est. 1096) and Cambridge (est. 1209) have always been by far the most revered institutions of higher education in the UK, and older universities are invariably more esteemed than more recent foundations (Scott, Citation1995; Tight, Citation1996). Alongside Oxford and Cambridge, the universities of St Andrews, Glasgow and Aberdeen founded in the 1400s, and the universities of Edinburgh and Dublin founded in the 1500s, became designated as ‘ancient’ universities to distinguish them from the universities of London, Durham, Wales and Queen’s Belfast founded in the 1800s. Further distinctions were drawn between the ‘red brick’ universities of Birmingham, Bristol, Leeds, Liverpool, Manchester and Sheffield, founded in the late 1800s and early 1900s, and the ‘civic’ universities including Exeter, Newcastle, Southampton and Leicester, formed in the mid-1900s from the former associate colleges of longer-established universities such as Oxford, Durham and the University of London (Archer, Hutchings & Ross, Citation2003; Scott, Citation1995).

Subsequently all of the above universities became known collectively as ‘Old’ or ‘established’ universities in contrast to the thirteen ‘New’ universities founded during the Robbins expansion of higher education in the late 1960s and early 1970s, comprising the seven newly-created universities of East Anglia, Essex, Kent, Lancaster, Sussex, Warwick and York and six upgraded Colleges of Advanced Technology (Halsey, Citation2000). These New universities were in turn distinguished from the 30 ‘polytechnic’ institutions also created as part of the Robbins expansion to provide more vocationally-oriented, sub-degree courses as part of the new binary system of higher education (Halsey, Citation2000). Tellingly, the New universities founded during the Robbins era later became known as Old universities to distinguish them from the new New universities created as a result of the dismantling of the binary divide between universities and polytechnics in 1992. Since 2004 the number of New (post-1992) universities has grown following a legislative change which dispensed with the condition that a higher education provider must have research degree awarding powers in order to qualify for university status (Brown, Citation2013). Although the historical designations of Old and New have been applied to different sets of institutions at different times, these terms have always been tacitly synonymous with high and low status institutions, with good and not-so-good universities.

Within a year of the binary divide between universities and polytechnics being dissolved the first university rankings appeared, published by The Times newspaper in a book entitled The Times Good Universities Guide (The Times, Citation1993). The word ‘Good’ in the title alluded to the book’s premise that ‘it had become obvious that all universities were by no means equal in the new higher education world’ created by the 1992 Further and Higher Education Act (The Times, Citation1993, p. 7). The authors of the book emphasised their expectation that ‘the existence of more than 90 diverse universities and an ever-growing pool of graduates will make a pecking order inevitable’ and the stated motivation for producing the guide was to ‘provide an indication of each university’s standing on a range of significant indicators’ (ibid). Following the model set by the US News and World Report rankings of American higher education institutions, the first guide produced by The Times ranked universities according to their composite score on 14 variously weighted indicators of university ‘performance’ (The Times, Citation1993, pp. 61–65).Footnote6

The Times and The Sunday Times jointly continue to publish a set of university rankings annually, based currently on nine indicators: the average UCAS tariff points of entrants, student–staff ratios, research income, spending on academic services, spending on facilities, degree completion rates, the proportion of first and upper second class degrees awarded, research assessment exercise ratings, and the proportion of graduates in graduate jobs or further study six months after graduation. Two other university rankings also appear annually: The Complete University Guide rankings, first published in 2007, are based on the same indicators as The Times rankings, while The Guardian University Guide rankings, published since 2001, use indicators that are concerned more with teaching quality and student satisfaction.
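The composite-score recipe common to such rankings can be illustrated with a short sketch: standardise each indicator across institutions, then sum weighted z-scores. Everything below is hypothetical; the three institutions, the three indicators, and the weights are invented for illustration and do not correspond to any published league table.

```python
# Illustrative sketch of a composite league-table score: standardise
# each indicator, then combine with fixed weights. All names, values
# and weights here are hypothetical.
from statistics import mean, stdev

universities = {
    "Uni A": {"entry_tariff": 480, "staff_ratio": 12.0, "completion": 95.0},
    "Uni B": {"entry_tariff": 340, "staff_ratio": 18.0, "completion": 88.0},
    "Uni C": {"entry_tariff": 300, "staff_ratio": 21.0, "completion": 80.0},
}
# Hypothetical weights; a lower student-staff ratio is better, so that
# indicator carries a negative weight.
weights = {"entry_tariff": 0.5, "staff_ratio": -0.2, "completion": 0.3}

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(universities)
scores = {n: 0.0 for n in names}
for indicator, w in weights.items():
    zs = z_scores([universities[n][indicator] for n in names])
    for n, z in zip(names, zs):
        scores[n] += w * z

# rank institutions by composite score, highest first
ranking = sorted(names, key=lambda n: scores[n], reverse=True)
```

Real league tables differ in their indicator sets, weightings and normalisation choices, which is one reason the three published rankings, though highly correlated, do not agree exactly.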

It is important to note that university rankings have been heavily criticised from the outset, including by the Higher Education Statistics Agency (HESA) which is the primary source of the underlying data. In the mid-1990s, readers of HESA’s first statistical publications were warned that there are ‘dangers of drawing conclusions or making comparisons based on one year’s figures in isolation. Construction of league tables of any sort from such misguided comparisons is absurd’ (HESA, Citation1996). Still today, its publications carry the disclaimer that ‘HESA cannot accept responsibility for any inferences or conclusions derived from the data by third parties’ (HESA, Citation2014). Nevertheless, it is noteworthy that all three extant university rankings are highly correlated with one another.Footnote7 Older universities are generally ranked higher than newer universities in all three university rankings, and the universities of Oxford and Cambridge invariably occupy the top two positions.Footnote8 Of the top 20 ranked universities in the latest Complete University Guide (Citation2014) league table, 13 are currently members of the Russell Group and seven were members of the now defunct 1994 Group. Of the bottom 20 ranked universities, all are post-1992 institutions.

Hot on the heels of the 1992 Further and Higher Education Act and the 1993 publication of the first university rankings came the formation of the Russell Group in 1994 by the Vice Chancellors of 17 pre-1992 universities including the universities of Oxford and Cambridge, two out of five of the other ancient universities (Edinburgh and Glasgow), all six of the red brick universities (Birmingham, Bristol, Leeds, Liverpool, Manchester and Sheffield), six civic universities (Newcastle, Nottingham, Southampton, the London School of Economics, University College London and Imperial College London), one Robbins university (Warwick), and no post-1992 universities. The Russell Group was formed expressly for the purpose of advancing its members’ interests as large ‘research-intensive’ universities in contrast to other more ‘teaching only’ institutions. Two more civic universities joined the Russell Group in 1998 (Cardiff and King’s College London), another old university joined in 2006 (Queen’s University Belfast), and four more old universities accepted invitations to join in 2012 (Durham, York, Exeter and Queen Mary London). The last four universities to become Russell Group members were until then members of the 1994 Group, an alliance formed in the immediate wake of the establishment of the Russell Group by 17 universities keen to defend their interests as smaller research-intensive universities. After losing two more members, the 1994 Group disbanded in 2013.

The three other university mission groups still in existence each serve particular constituencies of primarily post-1992 universities. Million+, originally formed in 1997 as the Coalition of Modern Universities, is an association of 17 post-1992 universities with a particular concern with widening access to higher education (Million+, Citation2014); GuildHE, formed in 2006, is an association of 28 ‘smaller and specialist’ post-1992 universities and university colleges (GuildHE, Citation2014); and University Alliance, originally formed in 2006 as the satirically named Alliance of Non-Aligned Universities, represents 20 post-1992 and two Robbins-era universities with a science and technology focus (University Alliance, Citation2014).

Dimensions of status differentiation

The status differentiation of UK universities is undoubtedly complex and multifaceted. Universities which occupy similar status positions can be thought of as having in common ‘a specific style of life’ (Weber [Citation1946] 2001, p. 137) with regard to five key dimensions, namely: research activity, teaching quality, economic resources, academic selectivity, and socioeconomic student mix.

Research activity is undoubtedly one of the primary aspects of the status differentiation of UK universities. Universities differ significantly in terms of how much research they do, as indicated by the amount of research income they receive and by the relative size of their postgraduate student populations. Universities also diverge considerably in their scores in the periodic Research Assessment Exercise (RAE) which is intended to assess research quality. Notably, older universities are generally more research intensive and fare better in the RAE, and RAE scores are used to calculate two of the three extant university rankings. Equally telling about the importance of research activity as a component of institutional differentiation, the Russell Group gives top billing to its role in representing ‘24 leading universities which are committed to maintaining the very best research’ (Russell Group, Citation2014).

UK universities also differ in terms of judgements made about the quality of the teaching and learning experience they offer. One important set of indicators of teaching quality, though of questionable validity, are the responses to the National Student Survey (NSS) through which final year students are invited to voice their opinions on the teaching, assessment and feedback, and overall quality of their courses. NSS scores are a major component of the university rankings published in the Guardian University Guide, as is an arguably more objective ‘value-added’ measure of ‘teaching effectiveness’ which is calculated by comparing students’ degree performances with their qualifications prior to entry. Historically, many Old universities have prioritised research over teaching, aligning themselves with the label ‘research intensive’ (as opposed to ‘teaching only’) institutions. Many New universities, in contrast, have embraced their teaching mission, styling themselves as ‘teaching led’ (rather than ‘teaching only’) institutions. That said, teaching quality has moved up the agenda of all universities since the first National Student Survey in 2005 and the first increase in tuition fees in 2006. Unsurprisingly, then, the Russell Group nowadays gives second billing to its commitment to providing ‘an outstanding teaching and learning experience’ (Russell Group, Citation2014).

UK universities differ significantly, too, in terms of the economic resources at their command. As a general rule, the wealth of universities is positively correlated with their age. The universities of Oxford (est. 1096) and Cambridge (est. 1209) are by far the richest, each with estimated assets in excess of £3 billion, a figure which dwarfs the £2 billion in assets of all other UK universities combined (Salek, Citation2013). Relatedly, there are substantial differences in how much universities spend on their undergraduate provision, including expenditure on academic services such as libraries, and on the number of teaching staff per student. For example, the universities of Oxford and Cambridge at one extreme have student–staff ratios of around 11 to 1, compared to ratios twice as large in many post-1992 universities (Complete University Guide, Citation2014).

Academic selectivity in admissions is a further key dimension of status differentiation among UK universities (Croxford & Raffe, Citation2014; Raffe & Croxford, Citation2013). For example, whereas the average entrant to the universities of Oxford and Cambridge has a tariff point equivalent to four A* grades at A-level, the average for entrants to universities at the bottom of the table is closer to CCD at A-level (Complete University Guide, Citation2014). More indirectly indicative of differences in academic selectivity are differences in degree completion rates and the percentages of students receiving a ‘good degree’, that is, a first or upper second degree classification, both of which tend to be higher at older universities.

One final major form of status differentiation between UK universities, which is not explicitly referenced in historical designations, university rankings or university mission groups but which undoubtedly contributes to different estimations of university prestige, is the socioeconomic mix of the students attending different universities (Croxford & Raffe, Citation2014; Raffe & Croxford, Citation2013). It is well known that students from more advantaged social class backgrounds and private schools are especially over-represented at the universities of Oxford, Cambridge and the Russell Group, whereas students from ‘non-traditional backgrounds’ are concentrated in New, post-1992 universities with widening access remits (Boliver, Citation2011, Citation2013; Hemsley-Brown, Citation2014).

Data and methods

The data to be analysed comes primarily from statistics published by the Higher Education Statistics Agency (HESA) and relates to publicly funded UK universities in the early 2010s (N=127).Footnote9 Three measures of research activity are used to capture the quantity and measured quality of research carried out at each institution: research income adjusted for science/arts mix and institution size, the percentage of students who are postgraduates, and RAE scores.Footnote10 Three measures of teaching quality are used to capture both subjective and objective assessments of standards of learning and teaching: two National Student Survey satisfaction measures and the value-added score calculated by the compilers of the Guardian University Guide.Footnote11 Three measures of economic resources are intended to convey something about the wealth, income and spending of universities: endowment and investment income, spending on academic services per student, and student–staff ratio.Footnote12 Three measures of academic selectivity convey something about the academic selectivity of the institution: the UCAS point score of the average entrant, the degree completion rate, and the percentage of students receiving a ‘good degree’.Footnote13 Finally, three indicators of the socioeconomic mix of universities index the extent to which the institution serves more privileged socioeconomic groups: the percentage of students who are not from a low participation neighbourhood (LPN), who are from more advantaged social class backgrounds, and who attended a private school.Footnote14 Most variables have very little missing data, but where there are missing data the sample mean value has been substituted.Footnote15
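The pre-processing just described, mean substitution for missing values followed by standardisation, can be sketched as follows. The small data matrix is an invented stand-in for the real 127-by-15 table of indicators.

```python
# Sketch of the pre-processing described above: missing values are
# replaced with the sample mean, then each variable is transformed
# into z-scores (mean 0, standard deviation 1). The values below are
# illustrative, not real university indicators.
import numpy as np

# rows = universities, columns = indicators; np.nan marks missing data
data = np.array([
    [120.0,  45.0, np.nan],
    [ 60.0,  30.0, 410.0],
    [ 15.0, np.nan, 310.0],
    [  5.0,  10.0, 260.0],
])

# mean-substitute missing values, column by column
col_means = np.nanmean(data, axis=0)
filled = np.where(np.isnan(data), col_means, data)

# standardise so that every variable contributes equally to the
# distance calculations in the cluster analysis
z = (filled - filled.mean(axis=0)) / filled.std(axis=0)
```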

It is worth pointing out that the data refer to institutional averages, and so no account is taken in the analysis of differences between departments within the same institution (although this may be a fruitful avenue for further study). It is also important to be clear that none of the variables used in the analysis are entirely ‘objective’ indicators of the character of the universities. For instance, universities may be able to influence their scores on certain variables, such as by awarding more ‘good degrees’ over time, and students who switched from private schools to state schools during the last years of their schooling will be counted as not being from private schools.

Cluster analysis is used to analyse the data. One particularly appealing feature of cluster analysis is that it is a case-based rather than a variable-based approach; that is to say, it treats each case as an integral whole instead of dealing in disembodied variables (Byrne, Citation2005; Uprichard, Citation2005). For this reason, cluster analysis has been chosen over methods such as latent class analysis, in which cases are assigned to discrete categories or ‘latent classes’ on the basis of predicted membership probabilities, the classes themselves having been constructed so as to achieve conditional independence of the variables within classes. Cluster analysis, in contrast, assigns cases to discrete categories or ‘clusters’ of similar cases on the basis of all of their attributes. As such, it can identify clusters of cases that display a high degree of within-cluster similarity and a high degree of between-cluster dissimilarity, taking into account all of the variables included in the analysis.

Agglomerative hierarchical cluster analysis is used to discover how many clusters of universities there are, and how they are composed. It involves progressively fusing together similar cases, with similarity measured here by the squared Euclidean distance between them. Because the variables are measured using different metrics, they have been transformed into z-scores so that each variable has a mean of zero and a standard deviation of one and therefore contributes equally to the analysis. Cluster solutions tend to be sensitive to the specific clustering algorithm used (Kantardzic, Citation2011; Xu & Wunsch, 2009), and so four different hierarchical clustering algorithms are employed. First, the between-groups average method is used, which merges clusters that have the smallest mean distance between their respective cases. Second, the within-groups average method is used, which prioritises the minimisation of the dissimilarity of cases within the resulting cluster. Third, the nearest neighbour (aka single linkage) method is used, which joins clusters with the smallest distance between their most proximate members. Finally, the furthest neighbour (aka complete linkage) method is used, which fuses clusters with the smallest distance between their least proximate members.
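A minimal sketch of this procedure using SciPy is given below. Three of the four linkage criteria (between-groups average, nearest neighbour, furthest neighbour) correspond directly to SciPy's 'average', 'single' and 'complete' methods; the within-groups average method has no direct SciPy equivalent, so only the other three are shown. The data matrix is a random stand-in for the standardised university indicators.

```python
# Agglomerative hierarchical clustering under three of the four linkage
# criteria described above, using squared Euclidean distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
z = rng.standard_normal((30, 15))  # 30 cases, 15 z-scored variables

solutions = {}
for method in ("average", "single", "complete"):
    # squared Euclidean distance, as in the analysis described above
    Z = linkage(z, method=method, metric="sqeuclidean")
    # cut the tree to obtain, say, a four-cluster solution
    solutions[method] = fcluster(Z, t=4, criterion="maxclust")

for method, labels in solutions.items():
    print(method, "->", len(set(labels)), "clusters")
```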

It is important to emphasise that cluster analysis is an exploratory and iterative process in which no single cluster solution is necessarily ‘correct’ (Kantardzic, Citation2011; Uprichard, Citation2005). Instead, cluster solutions can be judged to be better or worse than others on criteria such as the amount of between-case dissimilarity that is accounted for by the clusters, theoretical interpretability, and consistency with other cluster solutions (Xu & Wunsch, 2009).

Results

Figure 1 presents four elbow plots, one for each of the hierarchical clustering algorithms described above, which show how much of the dissimilarity between cases is accounted for, as cases are progressively fused together until all of the cases have been combined to form one big cluster. At the starting point of the analysis, each case represents its own cluster, and so the percentage of dissimilarity between cases accounted for by the 127 clusters is 100%. By the end point, each case is contained within one large cluster, and so the percentage of dissimilarity between cases accounted for by that one cluster is 0%. Because the most similar cases are fused together first, the percentage of dissimilarity between cases accounted for tends to decline very slowly to begin with; but as the number of clusters gets closer to one and the cases being fused together are increasingly dissimilar, the percentage of dissimilarity between cases explained by the number of clusters starts to decline more rapidly towards 0%. The optimal number of clusters is determined by locating the ‘elbow point’ in the plot; that is, the point at which a further decrease in the number of clusters brings about a sharp and sustained fall in the percentage of dissimilarity between cases accounted for by the clusters. Any further reduction in the number of clusters after the ‘elbow point’ would result in a marked increase in within-cluster dissimilarities between cases.

Figure 1. Elbow plots illustrating cluster solutions

Elbow plot 1a, for the between-groups average method of clustering cases, shows that there is a sharp decline in the percentage of dissimilarity between cases accounted for as the number of clusters decreases from three to two, suggesting that there are three distinctive clusters of universities, which collectively account for 74% of the dissimilarity between cases. Plot 1b, for the within-groups average method of clustering cases, is less clear cut but appears to have an elbow point at four clusters, accounting for 49% of the dissimilarity between cases. Plot 1c, for the nearest neighbour clustering algorithm, has its elbow point at three clusters, although these account for a mere 28% of the dissimilarity between cases. In contrast, plot 1d, for the furthest neighbour algorithm, has its elbow point at four clusters, accounting for 75% of the dissimilarity between cases.

Turning to the dendrograms for each algorithm, presented in Figure 2, it is possible to see how the cases have been assigned to clusters. Each case is represented in the dendrogram by a short vertical line extending upwards from the x-axis. Horizontal lines connecting cases indicate that they have been assigned to the same cluster. The tree-like structure of the dendrogram shows how smaller clusters have been fused together to form larger clusters, to the point of forming one large cluster at the top of the dendrogram. Longer vertical lines indicate more distinctive clusters. Labels have been added to the dendrograms to aid discussion of the clusters and their membership.

Figure 2. Dendrograms illustrating cluster solutions

Dendrogram 2a shows that the clusters distinguished by the between-groups average method are composed as follows: one cluster contains two Russell Group universities (namely Oxford and Cambridge); a second cluster contains the remaining 22 Russell Group universities together with 14 other Old universities; and a third cluster contains 16 Old and all 73 New universities. Dendrogram 2b reports the membership of the four clusters identified by the within-groups average method: one cluster contains two Russell Group universities (as before, Oxford and Cambridge); a second cluster contains 22 Russell Group universities and 16 other Old universities; a third cluster contains 13 Old universities and 61 New universities; and a fourth cluster contains one Old and 12 New universities. Dendrogram 2c is rather different from the others and demonstrates the ‘chaining effect’ to which the nearest neighbour method of clustering cases is prone. Nevertheless, it shows that one cluster contains two Russell Group universities (as previously, Oxford and Cambridge); a second cluster contains one Russell Group university on its own (Edinburgh University); and a third cluster contains all of the other universities. Finally, dendrogram 2d shows that the four clusters distinguished by the furthest neighbour algorithm are composed as follows: one cluster contains two Russell Group universities (Oxford and Cambridge again); a second cluster contains the remaining 22 Russell Group universities together with 17 other Old universities; a third cluster contains 13 Old universities and 54 New universities; and a fourth cluster contains 19 New universities.
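Dendrograms of this kind can be generated directly from a linkage matrix with SciPy. In the sketch below the institutions are placeholders, and no_plot=True is used so that the tree structure can be inspected programmatically rather than drawn.

```python
# Building a dendrogram from a linkage matrix. With no_plot=True,
# SciPy returns the plot coordinates and leaf ordering without
# requiring matplotlib.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(2)
z = rng.standard_normal((10, 15))          # 10 placeholder institutions
labels = [f"Uni {i}" for i in range(10)]   # hypothetical names

Z = linkage(z, method="complete", metric="sqeuclidean")
tree = dendrogram(Z, labels=labels, no_plot=True)

# 'ivl' lists the leaf labels in left-to-right display order;
# 'dcoord' gives the heights at which clusters are fused
print(tree["ivl"])
```

Longer vertical spans in the drawn tree correspond to larger fusion heights in 'dcoord', which is what makes a cluster visually ‘distinctive’.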

Surveying the results for all four algorithms at once, a number of consistent findings are apparent. First, all four algorithms identify the universities of Oxford and Cambridge as belonging to a distinctive cluster. Secondly, three out of four algorithms (between-groups average, within-groups average, and furthest neighbour) separate all of the New universities from the majority of Old universities. Finally, two out of four algorithms (within-groups average and furthest neighbour) identify a further split among the New universities.

All clustering algorithms have their strengths and weaknesses, depending partly on the data they are applied to. In this case some of the solutions are clearly better than others. The nearest neighbour algorithm undoubtedly performs worst in terms of the amount of dissimilarity between cases accounted for, theoretical interpretability, and degree of consistency with other solutions. The percentage of the dissimilarity between cases accounted for by the within-groups average algorithm is also comparatively low. In contrast, both the between-groups average algorithm and the furthest neighbour algorithm explain a large percentage of the dissimilarity between cases. The furthest neighbour algorithm is also highly consistent with the other solutions, making it the best single solution of the four.

Table 1 shows which universities are members of the four clusters identified by the furthest neighbour algorithm. As is already clear, Cambridge University and Oxford University are in a cluster of their own. Cluster two contains not only the remaining 22 Russell Group universities but also 17 other Old universities—a little over half of all the other Old universities—including all but one of the former members of the 1994 Group. Cluster three contains the remaining 13 other Old universities and 54 New universities including all but one of the University Alliance member institutions. Finally, cluster four contains 19 New universities made up of a mixture of Million+, GuildHE, and unaffiliated post-1992 institutions.

Table 1. Universities in each of the four clusters identified by the furthest neighbour algorithm

One further question remains: to what extent do the clusters differ from one another on the attributes used to derive them? To answer this question, Table 2 reports, for each of the four clusters generated by the furthest neighbour method, the mean and standard deviation of each of the variables included in the analysis. Borders have been placed around cells where the value is statistically significantly different from the value for the adjacent cluster.

Table 2. Cluster mean values (standard deviations in parentheses)
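The paper does not state which significance test underlies the bordered cells in Table 2; a standard choice for comparing two cluster means with unequal group sizes and variances would be Welch's t-test. The sketch below uses fabricated values (illustrative only, not taken from Table 2) to show the shape of such a comparison.

```python
# Hedged sketch: testing whether two adjacent clusters differ on one
# indicator using Welch's t-test. The paper does not specify its test;
# all values below are fabricated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cluster2 = rng.normal(loc=480, scale=30, size=39)  # e.g. mean UCAS points
cluster3 = rng.normal(loc=330, scale=30, size=67)

# equal_var=False selects Welch's variant, which does not assume the
# two clusters have equal variances.
t_stat, p_value = stats.ttest_ind(cluster2, cluster3, equal_var=False)
print(f"t = {t_stat:.1f}, p = {p_value:.2g}")
```

A very small p-value here would correspond to a bordered cell in the table's convention, i.e. a statistically significant difference between adjacent clusters on that variable.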

Because the most salient distinction identified by the furthest neighbour algorithm is between Old and New universities, it makes sense to begin with the contrast between cluster 2, which comprises more than 70% of all Old universities, and cluster 3, which comprises more than 70% of all New universities (together with one quarter of all Old universities). Clusters 2 and 3 differ significantly from one another on all but one of the 15 variables included in the analysis. In terms of research activity, cluster 2 universities receive more than three and a half times as much research income, have 50% more postgraduate students, and have RAE scores that judge research quality to be ‘internationally excellent (3*)’ rather than ‘internationally recognised (2*)’. In terms of teaching quality, however, cluster 2 universities score only slightly higher than cluster 3 universities on student satisfaction with teaching, they are no better with respect to student satisfaction with feedback, and the Guardian value-added score is only half a point higher. In terms of economic resources, cluster 2 universities receive more than six times as much income from endowment and investment sources; correspondingly, they spend 50% more on academic services per student head and have significantly more favourable student–staff ratios. In terms of academic selectivity, cluster 2 universities admit students with considerably higher UCAS point scores equivalent to around A*A*A* (compared to BBB) at A-level, and their rates of degree completion and achievement of a ‘good degree’ are around 10 and 20 percentage points higher respectively.
In terms of socioeconomic student mix, cluster 2 universities serve an appreciably more socioeconomically advantaged student body: the percentages of students not from low participation neighbourhoods and from higher social class backgrounds are around five and 15 percentage points higher respectively, while the percentage of students from private schools is more than four times greater for cluster 2 universities than for cluster 3 universities, at 16.1% compared to 3.6%. In summary, Old universities are clearly more highly placed than New universities on four out of five dimensions of status differentiation, the important exception being teaching quality.

Among the Old universities, the universities of Oxford and Cambridge occupying cluster 1 are highly distinct from the other 22 Russell Group universities and 17 other Old universities that make up cluster 2. In terms of research activity, Oxford and Cambridge receive around 70% more research income than cluster 2 universities, although the sizes of their postgraduate student populations are similar, as are their RAE scores. In terms of teaching quality, Oxford and Cambridge are no different from cluster 2 universities with regard to student satisfaction with feedback and teaching value-added scores, although their student satisfaction with teaching scores are slightly higher. Oxford and Cambridge clearly have much higher levels of economic resources, including more than five times as much endowment and investment income; correspondingly, Oxford and Cambridge spend nearly twice as much on academic services for students and have much more favourable student–staff ratios. Oxford and Cambridge are also significantly more academically selective than cluster 2 universities: their students enter with UCAS point scores equivalent to in excess of A*A*A*A* at A-level (compared to A*A*A*), and their rates of achieving a ‘good degree’ are around ten percentage points higher than for cluster 2 universities, although their degree completion rates are not significantly higher. Oxford and Cambridge also serve much more socioeconomically advantaged student bodies: the percentage of students not from low participation neighbourhoods is similar to that for cluster 2 universities, but the percentage coming from higher social class backgrounds is around ten percentage points higher and the percentage coming from private schools is around twice as high, at 34.9% compared to 16.1%. All in all, Oxford and Cambridge universities are highly distinct and clearly represent an elite tier of universities in the UK.

Lastly, a significant divide is apparent between the majority of New universities which, together with 13 Old universities, make up cluster 3, and the one-quarter of all New universities that make up cluster 4. With respect to research activity, cluster 4 universities receive less than a third as much research income, and have significantly lower RAE scores, compared to cluster 3 universities. In terms of teaching quality, student satisfaction levels are quite similar but the value-added score for cluster 4 universities is appreciably lower. In relation to economic resources, cluster 4 universities receive around two-thirds as much income from endowment and investment sources as cluster 3 universities, they spend only around two-thirds as much on academic services per student, and they have markedly less favourable student–staff ratios. In terms of academic selectivity, cluster 4 universities admit students with UCAS point scores that are much lower on average, equivalent to CCC at A-level (compared to BBB), and their rates of degree completion and achievement of a ‘good degree’ are around five and ten percentage points lower respectively. Cluster 4 universities also admit a less socioeconomically advantaged student body with respect to low participation neighbourhood, social class and school type. In summary, on all five status dimensions, cluster 4 universities can be said to constitute a distinctive lower tier of universities.

Conclusions

The results presented above suggest that there are currently four distinctive clusters of universities in the UK. A stark division is evident between the Old pre-1992 universities on the one hand and the New post-1992 universities on the other, with large differences evident in terms of research activity, economic resources, academic selectivity and social mix. The difference between Old and New universities with respect to teaching quality, however, is much smaller. This chimes with evidence that most universities have National Student Survey student satisfaction scores that are not significantly different from the sector average (Cheng & Marsh, Citation2010). All the same, it is perhaps surprising that student satisfaction scores are not lower in New universities given that they serve much less academically and socially select student bodies on far more limited resources, suggesting that many New universities are living up to their mission as ‘teaching-led’ universities.

Rather less surprisingly, it is Oxford and Cambridge that stand out among the Old universities as forming an ‘elite’ tier of universities. Again the differences between this ‘elite’ tier and the rest are substantial in relation to research activity, economic resources, academic selectivity and social mix, but are much more modest in relation to teaching quality. Notably, the remaining 22 Russell Group universities are found to cluster together with over half (17 out of 30) of all the other Old universities, and thus cannot be said to constitute a distinctive elite group. These findings are consistent with research predating the formation of the Russell Group, discussed earlier, which found that Oxford and Cambridge stood out from the rest but that the majority of the Russell Group universities clustered together with the majority of other Old universities as middle tier institutions (Dolton & Makepeace, Citation1982; King, Citation1970; Tight, Citation1988). The findings also chime with recent challenges to the Russell Group’s claim to represent 24 ‘leading’ UK universities, which point out that a substantial number of non-Russell Group universities rank above the average Russell Group institution on a range of measures including research intensiveness (Coughlan, Citation2014; Fazackerley, Citation2013; Morgan, Citation2014; Rogerson, Citation2013) and NSS student satisfaction scores (Grove, Citation2014).

Finally, it would appear that around a quarter of New universities form a distinctive bottom tier. These universities are far less well-resourced than all other universities, and the student populations they serve are much less academically successful and much less socioeconomically advantaged. As such, it is these universities whose continued existence is most imperilled by the growing privatisation and marketisation of the UK higher education system.

Recently, the Vice Chancellor of Oxford University called for the £9,000 cap on tuition fees to be lifted to enable different universities to charge ‘significantly different amounts’ so that ‘an institution’s charges are clearly aligned with what it offers’ (Garner, Citation2013). If this should come to pass, it seems likely that the universities of Oxford and Cambridge will forge even further ahead of the rest in terms of economic resources. Less certain is which of the 22 Russell Group and 17 other Old universities in cluster 2 will manage to follow in the slipstream of Oxford and Cambridge under a fully marketised tuition fees system.

Notes on contributor

Vikki Boliver is Senior Lecturer in Sociology / Social Policy in the School of Applied Social Sciences at Durham University. Her research focuses on the stratification of higher education and on multigenerational social mobility.

Acknowledgements

I am grateful to Roger Brown, Emma Uprichard, David Byrne, Matthew David and Malcolm Parkes for their very helpful feedback on this article. Data from the HEIDI database maintained by HESA has been used on the proviso that ‘HESA cannot accept responsibility for any inferences or conclusions derived from the data by third parties’.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1. The 24 members of the Russell Group (with dates of joining in parentheses) are: Birmingham (1994), Bristol (1994), Cambridge (1994), Cardiff (1998), Durham (2012), Edinburgh (1994), Exeter (2012), Glasgow (1994), Imperial College London (1994), King’s College London (1998), Leeds (1994), Liverpool (1994), London School of Economics and Political Science (1994), Manchester (1994), Newcastle (1994), Nottingham (1994), Oxford (1994), Queen Mary University of London (2012), Queen’s University Belfast (2006), Sheffield (1994), Southampton (1994), University College London (1994), Warwick (1994) and York (2012).

2. Based on a principal components analysis of 10 variables: number of undergraduates, undergraduate ‘wastage’ rate, percentage of male undergraduates, percentages of undergraduates studying arts/social sciences and applied sciences, student–staff ratio, number of undergraduates per reading space, number of books per undergraduate, percentage of students living in halls, and percentage of university income coming from research contracts.

3. Based on a principal components analysis and cluster analysis of 21 variables: number of undergraduates, student–staff ratio, percentage of male undergraduates, percentage of undergraduates studying arts/social sciences and engineering/technology, percentage of overseas students, library expenditure per student, percentage of undergraduates in halls, average A-level conditional offer and its deviation, percentage of university income coming from research contracts, total income, percentage of full-time students who were undergraduates, University Grants Committee grant per student, age of university, percentage of graduates going on to further study, to employment and to unknown destinations, percentage of students in the local population, percentage change in numbers after the 1981 ‘cuts’ and the size of that cut.

4. Based on a principal components analysis and cluster analysis of 19 variables: number of full-time and part-time postgraduates and undergraduates, percentage of full-time students in each of nine subject groups, percentage of full-time students in halls, percentage of full-time home undergraduates who were male, percentage of full-time postgraduates undertaking research, student–staff ratio, total recurrent income, percentage of income coming from research grants.

5. Based on a cluster analysis and factor analysis of 42 variables: numbers of full-time and part-time students broken down into 11 subject areas, total numbers of full-time and part-time students, total number of students, percentage of students studying on a full-time, part-time and sandwich basis, percentage of students studying for undergraduate, postgraduate taught and research degrees, percentage of home students recorded as mature and as female, percentage of full-time students domiciled overseas, income in £millions from council grant, fees, research grants and contracts and other sources, percentage expenditure on departmental and central staff and non-staff costs.

6. The average A-level points of entrants, student–staff ratios, research income, staff possession of PhDs and professional qualifications, library spending, supply of university controlled student accommodation, course completion rates, proportion of first class degrees, research assessment exercise ratings, value added by the university, proportion of graduate students, graduate employment outcomes, and the proportion of international students.

7. Spearman’s rho 0.93–0.94.

8. All three university rankings are highly positively correlated with the order in which the universities were founded (Spearman’s rho 0.70–0.80).

9. The dataset does not include higher education institutions which offer postgraduate courses only, or highly specialist institutions such as the musical conservatoires. Hence the focus is on universities that offer undergraduate courses in a range of disciplines. The dataset also omits a small number of recently established for-profit institutions and the Open University, but it is unlikely that the absence of these few cases would materially affect the results.

10. Research income data and the percentage of students who are postgraduates are both taken from HESA records for 2012. RAE scores are for 2008, are taken from the Complete University Guide Citation2014, and denote the following judgements about research quality: 4 = ‘world leading’, 3 = ‘internationally excellent’, 2 = ‘internationally recognised’ and 1 = ‘nationally recognised’.

11. All three measures are taken from the Guardian University Guide Citation2014.

12. All three measures are taken from HESA records for 2012.

13. UCAS point scores and degree completion rates are taken from the Complete University Guide Citation2014. The percentage of ‘good degrees’ (first and upper second class degrees) is taken from HESA records for 2012. UCAS points can be interpreted as equivalent to the following grades at A-level: 140=A*, 120=A, 100=B, 80=C, 60=D, 40=E.
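The grade equivalences in this note amount to a simple lookup. As an illustration, a small helper (the function name is an assumption, not from the paper) maps an average per-subject tariff to the nearest grade under the scale given above:

```python
# Illustrative helper: map an average per-subject UCAS tariff to the
# nearest A-level grade, using the scale stated in note 13.
TARIFF = {140: "A*", 120: "A", 100: "B", 80: "C", 60: "D", 40: "E"}

def grade_equivalent(points_per_subject: float) -> str:
    # choose the grade whose tariff value is closest to the given points
    nearest = min(TARIFF, key=lambda t: abs(t - points_per_subject))
    return TARIFF[nearest]

print(grade_equivalent(140))  # A*
print(grade_equivalent(95))   # B (closest tariff is 100)
```

On this scale, for example, the cluster 2 average of around A*A*A* corresponds to roughly 420 points across three A-levels, i.e. about 140 per subject.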

14. All three measures are taken from HESA records for 2012. Not from a low participation neighbourhood means not from a neighbourhood in the lowest quintile of HE participation rates nationally. More advantaged social class backgrounds refers to those with parents in NS-SEC classes 1–3.

15. The variable with the most missing values is the percentage of students who are not from a low participation neighbourhood. This variable has missing values for 14 out of 127 cases because low participation neighbourhood data are not collected for Scottish institutions.

References

  • Archer, L., Hutchings, M., & Ross, A. (2003). Higher Education and social class: Issues of exclusion and inclusion. London: RoutledgeFalmer.
  • Boliver, V. (2011). Expansion, differentiation and the persistence of social class inequalities in British higher education. Higher Education: The International Journal of Higher Education Research, 61, 229–242.
  • Boliver, V. (2013). How fair is access to more prestigious UK universities? British Journal of Sociology, 64, 344–364.
  • Brown, R. (2006). League tables—do we have to live with them? Perspectives: Policy and Practices in Higher Education, 10, 33–38.
  • Brown, R. (2013). Everything for sale? The marketisation of UK higher education. London & Oxford: Routledge and the Society for Research into Higher Education.
  • Byrne, D. (2005). Using Cluster Analysis, QCA and NVIVO in relation to the establishment of causal configurations with pre-existing large N data sets: Matching hermeneutics. In D. Byrne & C. Ragin (Eds.), Sage handbook of case based methods (pp. 260–268). London: Sage.
  • Cheng, J. H. S. & Marsh, H. W. (2010). National Student Survey: Are differences between universities and courses reliable and meaningful? Oxford Review of Education, 36, 693–712.
  • Complete University Guide (2014). Retrieved from http://www.thecompleteuniversityguide.co.uk/
  • Coughlan, S. (2014, 15 May). Is the Russell Group really an ‘oligarchy’? BBC News Education and Family.
  • Croxford, L. & Raffe, D. (2014). The iron law of hierarchy? Institutional differentiation in UK higher education. Studies in Higher Education. doi:10.1080/03075079.2014.899342.
  • Dolton, P. & Makepeace, G. (1982). University typology: A contemporary analysis. Higher Education Review, 14, 33–47.
  • Fazackerley, A. (2013, 18 February). Should students be encouraged to set their sights on Russell Group universities? The Guardian.
  • Garner, R. (2013, 9 October). We need tuition fees of up to £16,000, says Oxford vice-chancellor Professor Andrew Hamilton. The Independent.
  • Grove, J. (2014, 14 August). National Student Survey: Non-Russell Group universities lead in satisfaction stakes. Times Higher Education.
  • Guardian University Guide (2014). Retrieved from http://www.theguardian.com/education/universityguide
  • GuildHE (2014). Retrieved from http://guildhe.ac.uk/
  • Halsey, A. H. (2000). Further and higher education. In A. H. Halsey & J. Webb (Eds.), Twentieth century British social trends. London: Macmillan.
  • HESA (1996). Resources of higher education institutions 1994/95. Preface to Appendix.
  • HESA (2014). Terms of use. Retrieved from http://www.hesa.ac.uk/content/view/28
  • Hemsley-Brown, J. (2014). Getting into a Russell Group university: High scores and private schooling. British Educational Research Journal. doi:10.1002/berj.3152.
  • Jump, P. (2013, 30 May). Quartet pay hefty admission fee to enter elite club. Times Higher Education.
  • Kantardzic, M. (2011). Data mining: Concepts, models, methods, and algorithms. Hoboken, NJ: Wiley-IEEE Press.
  • King, J. (1970). The typology of universities. Higher Education Review, 2, 52–61.
  • Million+ (2014). Retrieved from http://www.millionplus.ac.uk/
  • Morgan, J. (2014, 3 April). Sir David Watson: Russell Group is not all it’s cracked up to be. Times Higher Education.
  • Raffe, D. & Croxford, L. (2013). How stable is the stratification of higher education in England and Scotland? British Journal of Sociology of Education. doi:10.1080/01425692.2013.820127.
  • Rogerson, P. (2013). Is the Russell Group a distinguishable elite among UK universities? Unpublished research paper available on request from [email protected].
  • Russell Group (2012). Jewels in the crown: The importance and characteristics of the UK’s world-class universities. London: The Russell Group.
  • Russell Group (2014). Retrieved from: http://russellgroup.ac.uk/
  • Salek, S. (2013, 16 August). Clear water between Oxford and Cambridge in money stakes. Reuters, UK edition. Retrieved from http://uk.reuters.com/article/2013/08/16/uk-britain-universities-idUKBRE97F0CA20130816
  • Scott, P. (1995). The meanings of mass higher education. Buckingham: Open University Press.
  • Tight, M. (1988). Institutional typologies. Higher Education Review, 20, 27–51.
  • Tight, M. (1996). Institutional typologies re-examined. Higher Education Review, 29, 57–77.
  • The Times (1993). The Times Good Universities Guide. London: Times Books.
  • The Times and Sunday Times Good University Guide (2014). Retrieved from http://www.thesundaytimes.co.uk/sto/University_Guide/
  • University Alliance (2014). Retrieved from http://www.unialliance.ac.uk/
  • Uprichard, E. (2005). Introducing cluster analysis: What can it teach us about the case. In D. Byrne & C. Ragin (Eds.), Sage handbook of case based methods (pp. 132–114). London: Sage.
  • Weber, M. [1946] (2001). Class, status and party. In D. B. Grusky (Ed.), Social stratification: Class, race and gender in sociological perspective (pp. 132–141). Boulder, CO: Westview Press.
