
Academic capitalism, competition and competence: the impact on student recruitment and research assessment

Pages 780-792 | Received 02 Mar 2021, Accepted 02 Nov 2021, Published online: 15 Dec 2021

ABSTRACT

Recent events show a government in constant changes of mind about how to control implementation of frequently changing policies, and the inefficiencies and inequalities that result from capitalistic approaches to public service functions. These same factors feature within higher education (HE). The values, culture, decision processes and power structures of twenty-first-century capitalism have filtered into the UK higher education system and permeate the operation of quality assessment for both teaching and research, where, sponsored by government, the structures and processes of policy development and delivery – the ‘commanding heights’ – have been captured by a small elitist group of institutions. The result has been system inequity, process inefficiency, stress and risk to the mental health and well-being of research staff, and failure to meet desirable objectives.

… the simple plan
That they should take who have the power
And they should keep who can.
William Wordsworth, Rob Roy’s Grave

In a minute there is time
For decisions and revisions that a minute will reverse
T.S. Eliot, The Love Song of J. Alfred Prufrock

Introduction and approach

In late 2020/early 2021, ministers for higher education and research in England anticipated further changes to the Teaching (and student outcomes) Excellence Framework (TEF) and the Research Excellence Framework (REF). This article deals with both: the first briefly, focused on student recruitment; the second more substantially, linked to resource allocation. It critically examines policies and their impact over time, drawing on:

  • Work from a major project, funded by the HE Funding Council for England (McNay 1997);

  • Smaller projects after all successive exercises (McNay 2015a), built on email surveys, and discussions within professional development programmes with research staff worldwide;

  • Constant analysis of emerging policy documents and directives;

  • Published work of others in the field.

It is not heavily theoretical, but has examples that fit with concepts of capital, power and symbolic violence (Bourdieu 1991), meritocracy (Young 1958; Donovan 2006) and Lucas’s (2006) ‘research game’. The overarching conclusion is that values and practices of twenty-first-century capitalism have penetrated the HE system, reversing the Robbins Report’s aim for the sector: ‘the transmission of a common culture and common standards of citizenship’ to society (Robbins 1963).

Capitalism and HE are both global, risking neglect of the local/national and a narrowing of notions of quality. Both are competitive – HE over international students and, increasingly, research funding – risking the co-operation across borders that is a strength, as vaccine research has shown. Capitalism emphasises private profit and commercial secrecy; HE, the sharing of knowledge and general benefit for human good. The disputes over vaccine patents and low-cost distribution show the difference, though lines are blurred when public money funds research through a spin-off company which could make researcher shareholders millionaires (Neate 2021). Capitalism concentrates wealth and influence on those at the top of a pyramid who own what their worker employees produce. Unequal financial rewards within companies and HE have been constant themes in the popular press (McNay 2019). Outputs are owned by the university, submittable to REF 2021 long after the research worker has left its employ.

Competition for market share underpins student recruitment in a marketised system managed by government; for research the effect is more structural, leading to inequity in the distribution of power and influence over policy and processes of decision-making, and consequently in grading and funding. The issue of competence overlies all those features, echoing King and Crewe’s (2014) The Blunders of our Governments (note the plural), and, in a much more minor way, imitating leadership of the response to the Covid-19 pandemic.

Student numbers and teaching quality

Competition as a driver towards excellence has long underpinned UK HE policy, particularly in England (Willetts 2017). The biggest competition has been for student numbers, reflecting two basic tenets of capitalism – constant growth, despite the reduction in the size of the 18+ age group, and inequity. Fees of £9,000+ – more than the marginal cost of many non-STEM subjects – and the removal of the student number cap promoted a fiercely competitive approach by prestige universities, using their reputational capital to attract student customers who might previously have gone to less advantaged, lower-ranked institutions of equal quality (NAO 2017).

The upswing of the 18+ cohort may lead to new controls on student numbers, favouring such elite institutions, which Simmons (2020) sees as evident government policy. Post-Covid austerity, claims of an oversupply of traditional graduates and government preference for apprenticeships will also affect the nature of the competition. Government will exert more directive management over the market, as a near monopoly customer, controlling supply and demand, e.g. by changes in the currency of qualifying exams under Baker, Gove and the recent chaos of algorithms and alternatives, and by funding of student loans.

TEF, a quality balance to the quantitative contest for student numbers (DBIS 2015), had flaws from the beginning: government recognised that its chosen metrics were poor indicators of teaching quality, but claimed they were adequate proxies in the absence of better measures and of any convincing method of assessing value added (McNay 2017; Kandiko Howson 2019).

Competition within the state sector was not based on price as expected by government – £9k became the norm. Nor was there competition on quality, as government had also expected: there was little evidence of extra spend on teaching and learning. Marketing budgets more than doubled and there was much capital spend on the brand image of campus facilities. The National Audit Office records that ‘stakeholders … were concerned that this investment would not lead to a proportionate increase in teaching quality’ (2017, 9).

Extension of participation to less advantaged students was left to modern universities committed to widening access and social justice – both government objectives as part of TEF. They struggled in the ultra-competitive climate, though the NAO noted that there was no ‘evidence that providers that struggle financially will be of any less quality than those doing well’ (2017, 10). Yet the extra 5,000 students allocated by the English government in 2020 went mainly to research-intensive universities, with criteria, e.g. well-paid graduate employment (DfE 2020), stacked against those committed to access, even when they had won a TEF gold award. Commitment to the elite (Simmons 2020) dominated other promises. The first-phase metrics of TEF evaluation placed many elitist universities in a lower grade than that finally published, after pressure from them to add context as a criterion, given the accumulated capital such universities have in providing a well-resourced environment beyond the classroom, adding to the reputational capital promoted by their graduates in the press and politics. Yet reports suggest concerns about the quality of student experience in such universities. Press headlines featured a lack of a duty of care, sexual and racial harassment, even abuse, snobbish classism and bullying, and flouting of the regulatory ethics of the market in misleading students (McNay 2019). Cuthbert (2020, 4) identifies the victims of government policies: the ‘less well-placed’ institutions, ‘in case students dared to choose them rather than those higher up the league tables. Government policy is that student choice is paramount, but only if students choose the institutions which the government thinks they should choose’.

Deem and Baird (2019) suggest that TEF is ‘not about improving teaching, but rather an endeavour to pit universities against one another in a highly marketised competitive system with an oversupply of places’. The prominence given to salaries after graduation as a measure of learning outcomes suggests ‘a move away from teaching excellence, as the major determinants of getting such [highly paid] jobs are social class, gender and degree subject, not teaching quality’. Universities have no control over biases in the employment market; the use of such metrics undermines their trust in the judgement process, compounding the view that the competition is unfair, favouring those at the top, despite a manifesto commitment to ‘levelling up’.

Things may be improving. Boliver and Powell (2021) recorded a shift, both conceptual and operational, in high-tariff universities: from an equality of opportunity model, based on procedural fairness interpreted as equal treatment against a common standard (meritocracy), to an equity of opportunity model that takes account of socio-economic circumstances affecting performance, to achieve a greater degree of distributive fairness in allocating places. Pearce (2021), chair of the TEF review, has a welcome view: ‘there is no absolute measure of educational quality … the difference of mission across different providers, with multiple different kinds of learning outcomes, makes it difficult to judge all provision against a single set of metrics’ – as used in league tables.

Research quality

This article urges a similar recognition of diverse excellences in the REF. Evidence shows that competition has embedded a hierarchy and widened the gap in resourcing academic capital by not rewarding contextually conditioned performance.

This review is offered so that others may learn from long experience of the first systematic approach to the issue. It highlights the impact of competition and other capitalistic approaches. Lucas (2006) identified the ‘research game’; comparisons are drawn with some sports exhibiting similar traits. Deficiencies in the process have been identified by successive generations of researchers. Frequent changes of approach have not meant significant changes in the underlying principles of policy and operation, with lessons learned only slowly, if at all, for reasons recorded by Gorard (2020).

The evidence shows, among other things, that

  • Criteria for winning the competition are obscure

  • Prizes are quantified only after results are announced

  • The processes of playing the game – the ‘rules’ – change constantly

  • Judges are not disinterested, being mostly themselves players in the game, drawn from a minority of competitor institutions at the top of the capitalistic pyramid, whose employers have lobbied for approaches to funding that favour their cluster of academic corporations

  • There is no convincing evidence that the exercise adds value over other possible approaches

  • Claims of value for money have doubtful validity

  • The accumulated exercises have had significant negative consequences

The RAE/REF provided a template for other nations, which have been encouraged to imitate it. Evaluation of the impact of the 1992 exercise, commissioned by HEFCE (McNay 1997), led to the Australian government deciding not to imitate the UK approach (Bourke 1997); they never have. In Italy, sociologists, concerned about the outcome of discussions between UK and Italian government officials, commissioned an evaluation to balance the messages received (McNay 2016; Borrelli, McNay, and Vidaillet 2020). In the Netherlands, research and teaching are assessed together, with developmental feedback and no connection to funding. In Australia and New Zealand, the aims of the exercises include improving quality, increasing productivity and supporting emergent researchers (ARC 2014; Ministry of Education 2014). The UK might learn from them.

The REF ‘prizes’ amount to billions of pounds over the period between assessment exercises. Its unhypothecated funding balances the themed and targeted approach of the Research Councils. Auranen (2014), in a study of seven European HE systems, concluded that the UK had the most competitive funding environment for research. The shifts towards a more selective base for funding had contributed to a stronger competitive environment. In the UK and Australia ‘the basic funding systems emphasise funding through performance, giving a lot of weight to steering incentives and competition’ (Auranen 2014, 89), with a ‘move away from trust as a solution to the delegation problem of the principal-agent setting between the state and the university system, with a transfer to monitoring’ (89). Funding cuts ‘sensitised the universities to money as a policy instrument of government’, so even while postulating institutional autonomy ‘the government has been able to use public funding in an effort to steer the universities to act according to national science policies’ (90). The rules of the competition have been set to ‘encourage’ conformity to policy preferences by ‘autonomous’ universities through state-managed academic capitalism.

Auranen’s conclusion is that ‘examination of the university systems’ research performance … gives reason to doubt the benefits of the competitive funding environment in improving research performance’ (93). Any improvement in publications – both quantity and quality – proves to be a short-term ‘peak’ related to assessment cycles, with reversion to a lower norm in the longer term.

No work has been found that demonstrates that the REF system has had a more beneficial effect than other possible reward systems for adding value to research activity. Enhancing quality has never been formally stated as part of the purpose of RAE/REF. If quality is defined as ‘fitness for purpose’, excellence must imply a fitness closely aligned to a clear and rigorous purpose. Different purposes should therefore lead to different excellences and different rewards.

Playing to the rules of the game?

Williams (1993, 4) identified the issue in early exercises that triggered the research ‘game’ (Lucas 2006):

Average scores have risen over the period, which is usually interpreted as showing how research performance is improving. However, it also means that the universities with the highest ratings progressively receive a smaller share of the total research funds available. (Emphasis added)

In 1992, modern universities entering the exercise for the first time, without previous funding, performed surprisingly well. Earmarked development funds were more than consumed, further reducing the money available to elite universities in a zero-sum game. The consequent pressure, through lobbying over ends and means (McNay 2015a), was for funding to be increasingly concentrated. That became operational policy, though rarely, if ever, overtly stated as a principle.

Over succeeding exercises, adjustments were made to the activities and organisational elements to be assessed and to grading scales and criteria; funding was withdrawn from lower grades and the ratio of funds between points on the scale was widened – in 2001, in England, a 5* grade was worth nearly nine times as much per head of staff as a 3a grade (Brown and Carasso 2013). Given that grades were based on proportions of work at different levels, four top staff could generate more funding without 16 other active, but lower-graded, researchers dragging the single grade down. That posed challenges for staff management, motivation and development. Changes were announced late, in some cases well after the start of the period to be assessed. One survey respondent, a head of department, expressed considerable anger at having to change strategy after starting planning as soon as the previous results came out (McNay 2015a).
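To make that incentive concrete – a minimal worked sketch, assuming (per the 2001 English ratio just cited) a 3a rate of one funding unit per head and a 5* rate of nine:

\[
\underbrace{4 \times 9 = 36}_{\text{four staff submitted at } 5^{*}} \;>\; \underbrace{20 \times 1 = 20}_{\text{all twenty staff, single grade diluted to } 3a}
\]

On those assumptions, a department submitting only its four strongest researchers would earn nearly twice the funding generated by submitting all twenty.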

The approach was similar to managing the rules of the pandemic, and just as contradictory as ‘eat out/stay in’ and ‘schools are safe/close them’. The number of panels changed five times, as did the number of publications to be submitted. Government, through the funding agency, used finance as an instrument to condition universities’ behaviour, as Auranen (2014) had said, by tinkering with processes and means towards changing ends.

Martin’s summary (2011, 247) was that ‘various forms of research assessment have been developed, each more complicated, burdensome and intrusive than its predecessor’. Marques et al. (2017) tabulate changes across seven exercises, showing shifts to greater formalisation and standardisation, with major shifts for 2008: assessing research cultures instead of individuals, and making international quality paramount as part of global competition, both of which favoured the elite. Three new elements – outputs, environment and esteem – were given different weightings by panels. The 2021 exercise has 12 further major changes, and some minor ones, announced five years after the latest round of the competition started (REF 2019). These include the staff to be submitted; the number of outputs; the criteria for judging submissions; and the weighting of the three elements of grading, with impact replacing esteem factors. The rules of the game changed every time it was played, to the extent that it became a different game with different rules on how to win. Bureaucracy reduces the collegiality that peer review implies and is no way to enhance the imagination and creativity essential to good research. Marques et al. (2017) suggest responses to REF results are either reactive or proactive: post-92 universities try to understand the evaluation system and process better, so they can match its perceived expectations and requirements next time; pre-92 universities try to change the rules of the game for that next time, to better fit their preferences.

This trial-and-error approach was unsatisfactory. The ends were deficient and occluded. There was no openly stated aim to improve quality, nor to link to national research strategy, nor to increase productivity: from 1996 to 2014 only four outputs were required to be submitted. For 2021, this was reduced to an average of 2.5, with a minimum of one and a maximum of five for any individual (REF 2019). The core aim – ‘to inform funding’ – was flawed because the resource allocation model was consistently changed after submissions had been made. What performance would win what prize was unknown: those competing were under-informed, or misinformed by silence (another feature of pandemic management). Their decisions would have been different had there been transparency in the ‘rules of the game’. A later aim – ‘to fund excellence wherever it is found’ (RAE 2006) – seemed to be based on a single concept of excellence, not the diversity of excellences needed to recognise a variety of purposes, but the shift to a four-point grade profile was positive, demonstrating, again, that modern universities achieved 4* outputs with little, if any, funding. A third aim was introduced after 2008: ‘to change behaviour’, with impact replacing esteem factors. There were teething troubles: the assessment process was flawed and lacked credibility (Oancea 2016; Greenhalgh 2015); the Education impact assessors raised concerns about the value for money of such a time-consuming exercise (REF 2014); and the impact of education research on teaching was excluded from consideration. In establishing TEF, government had stated that there should be greater linkage between teaching and research because ‘research and teaching should be recognised as mutually reinforcing activities’ (DBIS 2015, 18). The 2021 exercise will allow impact case studies to include evidence from teaching.

For 2001 onwards, the funding councils introduced eight principles to underpin the process – how they intended to ‘play the game’ (HEFCE 1999):

  • Clarity of documentation;

  • Consistency across academic areas;

  • Continuity between exercises;

  • Credibility to those being assessed;

  • Efficient use of public funds;

  • Neutrality over research approaches;

  • Parity across different forms of output;

  • Transparency.

McNay (2015a) surveyed researchers’ ratings of these principles in operation at system and institutional level. The average score across the eight principles was 46.4% for the system and 53.1% for institutions. Not a ‘world leading’ performance.

The most serious failings combined inconsistency, lack of clarity and lack of certainty – key defects in leadership. This was seen in grade criteria. Wieczorek and Schubert (2020) give the four different sets of criteria used up to 2008. In 1992 and 1996 a grade 4 went to

research quality that equates to attainable levels of national excellence in virtually all sub-areas of activity, possibly showing some evidence of international excellence, or to international level in some and at least national level in a majority.

Panels, in the author’s direct experience as a member, had difficulty quantifying phrases in those criteria such as ‘virtually all’ and ‘possibly showing some evidence’, and others such as ‘a clear majority’ and ‘up to half’ of outputs. So did Units of Assessment (UoAs). One lost a 5* grade because ‘all the rest’ was very clear and did not allow discounting of ‘one or two’ out of many.

The grade criteria hid behind undefined labels. Johnston (2008) thought that any department in his discipline where 55% of output was ‘world-leading’ (4*) should not have many staff with work of which he was unaware. There is a difficulty, too, in what is ‘recognised internationally’ (2*) but is not ‘internationally excellent’ (3*), a difference the then director of REF could not explain. When asked by the author, he said ‘it is up to the panels’.

They had been inconsistent. For 2001, for History of Art, ‘national excellence’

… makes an intellectually substantial contribution to new knowledge and understanding and/or original thought. (Emphasis added)

For History, it

… includes highly competent work within existing paradigms, which does not alter existing interpretations.

The Ancient History panel did not define it, but used its ‘professional judgment’, drawn from members ‘chosen for their standing in, and knowledge of, the subject’ (HEFCE 1999).

Achievement of the value for money aim was increasingly challenged (McNay 2015b). Fewer staff were submitted in successive exercises. In Education, numbers submitted fell from 2,806 in 1996 to 1,440 in 2014; in Sociology, the fall was from 927 to 704 between 2008 and 2014. The number of UoAs in 2014 was down by 15% from 2008; the total number of outputs was down by 12.5%; and the average number of outputs per head fell from 3.74 to 3.41 (REF 2014). After the 2008 exercise, which, in England, funded only 3* and 4* grades, with most going to 4* work, Adams and Gurney demonstrated that concentration of funding had gone too far. They argued that

(Funding) the peak (of research excellence) only works if there is a platform of very good research funded right across the country … You can’t just restrict your focus to the elite institutions (without investing in others); you don’t have a feedthrough of younger researchers (because progression is usually within the university of first degree study) … A loss of structural diversity is a loss of capacity to respond flexibly when priorities change or when opportunities appear. Diversity builds sustainable performance. (2010, 18)

Despite those findings, concentration of funding on the elite was further intensified in England, perhaps to ‘fund only the best’, even at the risk of abandoning ‘value for money’. The dangers of that can be learned from China. Shu, Sugimoto, and Larivière (2021), in a survey of 198 universities, showed that, for every $1m spent between 2007 and 2016, elite universities with supplementary research funds – which received nearly twice as much per capita as non-elite institutions – produced about 30% fewer articles in international journals (9.7 against 13.3), many fewer in Chinese journals (38 against 82) and fewer monographs (0.2 against 0.4) (all figures rounded). Selectivity and stratification failed. In Italy, Abramo and D’Angelo (2021) showed that universities with higher funding following a performance-based research funding exercise increased their productivity less than those with lower funding, so the performance gap closed. Grichting’s (1996) conclusions suggest development funding for second-tier ‘improvers’ who give more ‘bang per buck’, as Scotland had done. Research and development is one key element in government ‘levelling up’, so a more distributive resource allocation model (RAM) will be needed. Yet researchers from a self-interested elite institution have recently put forward an argument defending the present arrangement (Chaytor, Gottlieb, and Reid 2021), rigorously rebutted by Westwood (2021).

Panels’ data, deliberations and decisions

This inequity in the funding model builds on possible bias in panel judgements because of their composition. Sharp and Coleman (2005) showed that higher grades were given to units whose host university had a panel member, or which came from a university like those represented. Elite universities dominated membership even where the majority of research was done in other HEIs. They cite a HEFCE document stressing ‘the desirability of … membership [from] institutions of differing histories’. HEFCE (2004, para 43) repeated that principle in relation to 2008: ‘the desirability of ensuring that the overall body of members reflects the diversity of the university community, including … focus of their home institution’. Yet modern universities had no representation on the Stern Committee (2016) reviewing the exercise, and were outnumbered on its advisory panel by employees of government agencies.

Things have improved, but only marginally: in 2001, of 68 panels, 31 (>45%) had no members from modern universities (Sharp and Coleman 2005, 167). The latest data on membership of REF panels, issued in December 2020, show that the elite universities still control the commanding heights of the research economy. On the four main panels, pre-92 universities have 46 full members; post-92 institutions have one. International universities have 15, which shows where competition in the research game is focussed. On the sub-panels the figures are 636 to 87, with 32.5% of sub-panels having no post-92 members (REF 2020). This affects grading. It may not be crony capitalism, but possibly an outcome of common cultural identity – network power, as in the Eurovision Song Contest, where votes go to ‘people like us’: cultural and political affiliations affect perceptions of excellence, so that what is done is set alongside where performers come from, leading to a, perhaps unconscious, bias against the ‘other’, as in discriminations in the wider society. So the same old same old may be rewarded ahead of different approaches and new, challenging findings.

That in turn affects funding. A parliamentary reply on 17 November listed overall government research funding to the 13 universities in the West Midlands. Between 2015 and 2019, Birmingham and Warwick (33 members) got an increase of 21% to £256m; Aston and Keele (8 members) had no increase on £30m; Coventry (4 members) gained £3m to reach £9m after a good REF. The other eight institutions (no members) had £12m among them. So two Russell Group universities, dominating regional representation on panels, got 83% of the funds distributed in 2018/9 (£256m of a £307m total).

This links to a further inequitable decision: to reward elements that are conditioned by the equivalent of inherited wealth in a capitalist society. For 2008, ‘environment’ became a formal contributor to overall UoA grading, favouring larger, long-established, ‘well-found’ units. For 2014, esteem indicators disappeared, replaced by impact. Whereas staff in modern universities could gain esteem through voluntarism in the wider academic community – office in learned societies, for example – such commitments had diminished in research-intensive universities as distracting from research effort. The networks contributing to impact, and investment in research marketing, were longer established in one group than the other, though modern universities built up research in partnership with practising professionals. That has small, local, though effective, impact, but struggles to be seen as having ‘world leading’ excellence: the Education panel exemplified excellence as large-scale projects drawing on high volumes of quantitative, statistical data in historic time-series analyses (REF 2014). Such an expectation excludes many departments in modern universities, where research has a lower time allocation than teaching and smaller numbers of staff involved. Large quantitative studies may show what happened; qualitative work is needed to explore why, and what happened as a result – impact. This may need smaller, local projects because of variable context factors not uncovered by large-scale, less granular work. The feedback dealt only with the structure of arrangements, without any examples or evidence.

Hamann (2016, 761), studying history across three exercises, identifies two main effects of these processes: stratification and standardisation.

Symptoms of stratification are documented by the distribution of membership in assessment panels, of research active staff, and of external research grants. Symptoms of a standardization are documented by the publications submitted to the assessments. The main finding is that the RAEs/REF and the selective allocation of funds they inform consecrate and reproduce a disciplinary centre that, in contrast to the periphery, is well-endowed with grants and research staff, decides in panels over the quality standards of the field, and publishes a high number of articles in high-impact journals. This selectivity is oriented toward previous distributions of resources and a standardized notion of ‘excellence’ rather than research performance.

This marginalising through ‘symbolic violence’ is treated by Bourdieu (1991) and reflects Young’s (1958) satirical treatment of meritocracy, as opposed to new ‘merit’.

Naidoo (2018, 612, 610) is clear:

… the most important consequence of competition is the legitimation of inequality … [it] acts as a mechanism through which the wealthy and powerful draw on deeply inscribed beliefs to reproduce inequality while at the same time concealing intergroup stratification … elite academics in general work to co-produce the drivers, structures and templates of status competition since these are based on the criteria dictated by the internal reputational hierarchies that already prevail.

Those factors may explain the comparison in Table 1 of two Education departments: Staffordshire and Southampton, selected because of their alphabetic proximity in the published table of results, and because of personal experience of an earlier panel keeping Staffordshire’s borderline grade low because the work was new and done outside a traditional academic unit, whilst one such unit, also borderline, was given the higher grade because it had a long previous history. Southampton will have 15 panel assessors for 2021, including one for Education, as in 2014; Staffordshire, none.

Table 1. Equity in grading education UoA 2014.

If the competition were like that for student recruitment, there might be allowance for contextual factors – rewarding a good output performance despite a less advantageous environment; not in REF. If the REF approach were applied to football, Leicester City might never have won the 2016 English Premier League: ‘yes, you did win more matches and score more points, but other, more prestigious teams have better training facilities, bigger stadia, and impact through well-established international fan bases, so, under the rules of the competition …’. In 2020–21, Leicester were still demonstrating that excellent output can be achieved without excessive resources.

Those contributory grades removed continuity and considerably reduced credibility. Grade inflation in the two elements was excessive and again favoured the more established universities. In Education, under the environment factor, in 2008 there were only 5 institutions with 50% or more of work graded 4*, with a top score of 75%. In 2014, 18 institutions scored 50% or more at grade 4*; 8 scored 100%. Only two modern universities scored more than zero. For esteem factors in 2008, the top 4* score was 40% and 13 scored 30% or more. In 2014, when impact replaced esteem, 31 units scored 30% or more; 3 scored 100% (McNay 2015a).

The principles of neutrality over research approaches and parity across different forms of output were reiterated for 2021. Equity should mean that ‘all types of research and all forms of output across all disciplines shall be assessed on a fair and equal basis … without distorting the activity nor encouraging or discouraging any particular type of research activity’ (REF 2019, 5, emphasis added). That has not been so hitherto: the principles were not put into practice, according to staff perceptions (McNay 2015a).

Adams et al. (2020) analysed shifts over time in the outputs submitted. Between 1992 and 2014, in Engineering, conference papers declined from 26.9% of submissions to 7.9%; books and chapters in Science and Engineering went from over 13% to under 1%; in Social Science, from 46% to 15.9%. Patents also fell. Even in Art and Design, journal articles displaced media artefacts. Journal articles became the dominant form of output submitted: between 1996 and 2014, journal articles as a share of outputs submitted went from 40% to 78% in Education, and from 59% to over 95% for Business and Management Studies.

Wieczorek and Schubert (2020) offer evidence of the changes made by UK sociology departments and staff to be more competitive within the rules of the game, introducing an instrumental motivation to optimise exploitation of the system, displacing the intrinsic motivation to pursue quality work. Researchers lose influence over the means of production and distribution: they ‘manage publications’, targeting journals favoured by panel members, and keep a ‘mainstream focus’, avoiding innovative and experimental work. Departments look to promote ‘research profile development’ by concentrating on a narrow range of work, with considerable congruence across departments in different universities; and by ‘REFable recruitment’, looking for new scholars to enhance that narrow profile, not expand it, and not needing support to reach ‘REFable’ standards.

These are ‘gaming’ tactics. There are other parallels between sport and the REF, which may need a protest by frustrated researchers as stakeholders, in line with football fans’ revolt against capitalist owners in 2020. In football, international talent has been imported. In the Guardian’s December listings of the world’s top footballers in 2020, 32 played in the English Premier League, but only six were qualified to play for England, one for Scotland. In 2018, half of full-time research students and 48% of ‘research-only staff’ were not born in the UK, where graduates are loaded with debt. Non-UK academic staff overall had increased by 26.3% since 2014 (UniversitiesUK 2020). In-house development is then underfunded, a tactic also used by employers in the wider capitalist economy, with a long history of failure to heed the warning from Prince Albert at the 1851 Great Exhibition to invest in people and plant, or risk being overtaken by Germany and the USA (Barnett 1986). A competition with prizes is not an investment strategy.

The reward structures also parallel sport: prize money in tennis and wages in soccer are excessively concentrated at the top, with much less invested in grass-roots development in those two sports, as well as in cricket, which relies on private schools for new talent. The short forms of cricket, which are where the money is, risk affecting skill levels for 5-day tests. REF assessment is seen to encourage short-term targets, linked to the cyclic submission deadlines of the exercises, not blue-sky speculative work, which needs more time.

There is another lesson – from rugby union. A Guardian headline of 28 May 2018 quoted the English coach: ‘“Farrell can captain England with rule of fear”, says Jones’. A subsequent headline, after the men’s match against France, said ‘Fearful England need to play with autonomy if they don’t want to see Paris repeated’. The analysis was that ‘I don’t think this England team have underperformed for any other reason than they have lacked autonomy and freedom from external control … It all just feels overly managed and, while not suffocating, short of breath’ (Ryan 2020, 44).

The 2021 principles state that equality and diversity should be embedded in the treatment of staff. It remains to be seen what the effects will be on staff entered for REF of BLM campaigns and discrimination against BAME staff in higher education more generally (Bhopal 2016), and of Covid-19-triggered home schooling and work, which increased the disadvantage for women, whose journal submission rate went down while men’s increased (Fazackerley 2020).

‘The credibility of the REF is reinforced by transparency in the process … the criteria and procedures that will be applied in the assessment will be published in full, well in advance of institutions making their submissions. The outcomes will be published in full and decision-making processes at main and sub-panel levels will be explained openly’ (REF 2019, 5). This is true of much data on submissions, a positive development, but Neyland, Ehrenstein, and Milyaeva (2019) expose the lack of this principle in panel decision-making. There are pressures to converge towards a single collective ‘normative’ model in grading: members’ grades are recorded internally as decisions are made, with deviations from norms displayed graphically, so that discussions between the pairs who look at each item may lead to compromise; group pressures towards conformity operate informally; and chairs’ management may come into play in monitoring members. None of this is reported transparently. All written records of such ‘negotiations’ are destroyed at the end of the process. Neyland, Ehrenstein, and Milyaeva (2019) suggest that the resolution of conflict between competing views is seen as promoting a ‘fair’ process which can be used to justify unequal outcomes in the competition between units and institutions. This is Young’s meritocracy (1958): standardised procedures are seen as fair in all circumstances, but made unfair because all circumstances are not identical. Donovan’s analysis (2006, 61) emphasises Young’s warning of the risk that such an approach could ‘crystallise into a smug and self-replicating elite class or oligarchy that would pull the ladder of opportunity up behind it, and so no longer be a true meritocracy of talent’.

Neyland et al. (2019) show the internal competition among members about the real meanings of normative criteria (their own, or those of the dominant, conservative majority of more prestigious members – Bourdieu’s (1991) symbolic violence), so that the operational standards are negotiated and never revealed to those making submissions in the exercise. This is a failure of transparency, contrary to the HEFCE principles.

Conclusion

Nearly all the articles based on research into research quality assessment are, like this one, critical of its claimed success – not only those by authors in the UK but also by, perhaps more disinterested, international commentators – challenging the title of the 2016 Stern Report: Building on Success. The second part of that title, and Learning from Experience, echoes politicians’ pledges, usually not delivered, after any disaster.

Despite 35 years of operation, there is still no continuity of criteria and processes. The elite universities lobby for favourable changes and, through dominating panel membership, control the interpretation of quality. So, the means are flawed; constant change suggests that they have been so from the start, with little learned.

The policy aim of achieving value for money has not been met. Fewer staff produce fewer outputs per head. ‘Agency’ approaches to management, encouraged by competition, with targets and constant monitoring, lead to lower outputs in teaching and research and to lower quality, at higher cost (Franco-Santos, Rivera, and Bourne 2014). There has been a drift to isomorphic conservative research strategies, reducing diversity and retaining a strengthened hierarchy. The concentration of funding has become inefficient and lacks fit with the strategic agenda of ‘levelling up’. As with capitalist economies in the wider world, there is no ‘trickle down’ effect.

If the policy aim has shifted to being competitive at the international level, that too has been a failure. The 2020 QS league tables show elite UK universities drifting downwards despite favourable funding and preferential policy treatment. The compilers of the tables attribute this to poor teaching and declining research impact: ‘of the UK’s 84 ranked universities, 66 saw their staff student ratio decline, while 59 had a drop in research citations’ (Hall 2020, 19).

This derives from leadership. Good leadership is characterised by clarity of policy, so there is certainty about expectations; consistency and continuity rather than constant change, which makes us feel like experimental guinea pigs; and confidence in leaders, which the first two will help promote, but which also needs a sense of common identity, where we are visibly and evidentially ‘all in it together’.

In addition to those flaws, there have been strongly negative effects. Schafer (2016) covers them comprehensively under eight headings: research performance; equity; diversity; academic freedom; the teaching/research nexus; academic staff recruitment; research motivation; and power. The Wellcome Foundation, exploring the reasons for an unkind and aggressive research culture with work conditions based on bullying and overwork, found that ‘78% of respondents blamed high levels of competition, with 42% describing levels of competition in their workplace as unhealthy’ (Grove 2020, 8). This was reinforced by a Vitae project funded by UKRI that found a culture of high pressure and bullying (Metcalfe et al. 2020). The effect of pressures from competitive sport has been exemplified by Simone Biles, Ben Stokes and many less prominent others. Something fundamental needs to change in research (and sport) management to forestall long-term consequences.

In 2020 Amanda Solloway, Minister for Science, Research and Innovation, accepted that Building on Success had been somewhat Panglossian. She recognised some of the long-standing issues covered above: the pressure that comes from the perception that where you publish is more important than what; risk-averse strategies leading to a compliance culture; and the fact that world-leading work can be done with a local/regional focus and impact. She committed to wide involvement in a consultation, unlike previous exercises where (unlike in Australia) only institutions were invited, with only one response expected – from managers, not the researchers they managed. Overall, the consideration would be of what excellence looks like, implying a diversity of concepts reflecting a diversity of aims (Solloway 2020). That sounded positive. However, the advisory group membership is a global super-elite (Research England 2021), so the eventual reality may not reflect the political rhetoric.

There may be a final lesson from the pandemic, where approaches by elite western countries failed and under-regarded countries did better. In 2019 the Johns Hopkins Global Health Security Index, ranking capacity to deal with outbreaks of infectious disease, put the USA first and the UK second; New Zealand came 35th. The Guardian article from which those figures are taken (Laura Spinney, 30 December 2020) quotes Sarah Dalglish in the Lancet: ‘The pandemic has given the lie to the notion that expertise is concentrated in, or at least best channelled by, legacy powers and historically rich states’. That may apply to research, too.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Notes on contributors

Ian McNay

I am professor emeritus of HE and Management in the School of Education at Greenwich, having previously been Head of the School of Post-Compulsory Education and Training. Before that: Anglia Ruskin, as now is, as head of an R+D unit; the Open University, as lecturer, senior lecturer and deputy regional director; Bristol Poly, as was, as academic registrar; ESADE in Barcelona, as advisor/consultant; EFMD in Brussels, as assistant director; and Strathclyde, as a junior administrator.

I served on the JFHE editorial board as one of the highlights of my career, and have pursued research, professional development and consultancy around the world – over 20 countries, with others represented in UK-based international courses.

References

  • Abramo, G., and C.A. D’Angelo. 2021. “The Different Responses of Universities to Introduction of Performance-based Research Funding.” Preprint. Accessed 20 January 2021. https://arxiv.org/abs/2102.0537
  • Adams, J., and K. Gurney. 2010. Funding Selectivity, Concentration and Excellence – How Good is the UK’s Research? Oxford: HE Policy Institute.
  • Adams, J., K. Gurney, T. Loach, and M. Szonszor. 2020. “Evolving Document Patterns in UK Research Assessment Cycles.” Frontiers in Research Metrics and Analytics 5: 2. doi:https://doi.org/10.3389/frma.2020.00002.
  • ARC (Australian Research Council). 2014. “The Excellence in Research in Australia (ERA) Initiative.” Accessed January 2015. www.arc.gov.au/era/
  • Auranen, O. 2014. “University Research Performance: Influence of Funding Competition, Policy Steering and Micro-level Factors.” Doctoral thesis, University of Tampere.
  • Barnett, C. 1986. The Audit of War: The Illusion and Reality of Britain as a Great Nation. London: Macmillan.
  • Bhopal, K. 2016. The Experiences of Black and Minority Ethnic Academics: A Comparative Study of the Unequal Academy. London: Routledge.
  • Boliver, V., and M. Powell. 2021. Fair Admissions to Universities in England: Improving Policy and Practice. London: Nuffield Foundation.
  • Borrelli, D., I. McNay, and B. Vidaillet. 2020. “Lo Scenario Internazionale: I Casi Britannico E Francese.” In Consequenze della valutazione: Idee e pratiche dei docenti universitari nelle scienze sociali, edited by R. Fontani and E. Valentini. Milan: Franco Angeli.
  • Bourdieu, P. 1991. Language and Symbolic Power. Cambridge: Polity Press.
  • Bourke, P. 1997. Evaluating University Research: The British Research Assessment Exercise and Australian Practice. Canberra: NBEET.
  • Brown, R., with H. Carasso. 2013. Everything for Sale? The Marketisation of UK Higher Education. London: SRHE/Routledge.
  • Chaytor, S., G. Gottlieb, and G. Reid. 2021. Regional Policy and R+D: Evidence Experiments and Expectations. London: HEPI.
  • Cuthbert, R. 2020. “Editorial: Policymaking in a Panic.” SRHE News 42, October. Society for Research into Higher Education.
  • DBIS (Department of Business Innovation and Skills). 2015. Fulfilling Our Potential: Teaching Excellence, Social Mobility and Student Choice. Cm 9141. London: Department of Business Innovation and Skills.
  • Deem, R., and J.-A. Baird. 2019. “The English Teaching Excellence (And Student Outcomes) Framework: Intelligent Accountability in Higher Education?” Journal of Educational Change. Online. doi:https://doi.org/10.1007/s10833-019-09356-0.
  • DfE (Department for Education). 2020. Introduction of Temporary Student Number Controls in Response to COVID-19 Non-healthcare Places. London: DfE.
  • Donovan, C. 2006. “The Chequered Career of a Cryptic Concept.” The Political Quarterly 77 (s1): 61–72. doi:https://doi.org/10.1111/j.1467-923X.2006.00781.x.
  • Fazackerley, A. 2020. “Women’s Research Plummets during Lockdown – But Articles from Men Increase.” Guardian Online, May 12. Accessed 13 May 2020.
  • Franco-Santos, M., P. Rivera, and M. Bourne. 2014. Performance Management in UK Higher Education Institutions: The Need for a Hybrid Approach. London: Leadership Foundation for Higher Education.
  • Gorard, S. 2020. “Where Next for Improving the Use of Good Evidence: The Story of the Book so Far.” In Getting Evidence into Education: Evaluating the Routes to Policy and Practice, edited by S. Gorard, 234–241. London: Routledge.
  • Greenhalgh, T. 2015. “Research Impact in the Community-based Health Sciences: What Would Good Look Like?” Dissertation, MBA in HE Management, UCL Institute of Education.
  • Grichting, W.L. 1996. “Do Our Research Unis Give Value for Money?” Campus Review (6), February, 29.
  • Grove, J. 2020. “Half of Researchers Seek or Desire Counselling, Says Wellcome Poll.” Times Higher Education, January 16, 8.
  • Hall, R. 2020. “UK Universities Drop down International Ranking Again.” The Guardian, June, 10, 19.
  • Hamann, J. 2016. “The Visible Hand of Research Performance Assessment.” Higher Education 72 (6): 761–779. doi:https://doi.org/10.1007/s10734-015-9974-7.
  • HEFCE (Higher Education Funding Council for England). 1999. Research Assessment Exercise, 2001: Assessment Panels’ Criteria and Working Methods. Bristol: Higher Education Funding Council for England.
  • HEFCE (Higher Education Funding Council for England). 2004. RAE 2008: Units of Assessment and Recruitment of Panel Members. Bristol: Higher Education Funding Council for England.
  • Johnston, R. 2008. “On Structuring Subjective Judgements: Originality, Significance and Rigour in RAE 2008.” Higher Education Quarterly 62 (1–2): 120–147. doi:https://doi.org/10.1111/j.1468-2273.2008.00378.x.
  • Kandiko Howson, C.B. 2019. “Final Evaluation of the Office for Students’ Learning Gain Pilot Projects.”
  • King, A., and I. Crewe. 2014. The Blunders of Our Governments. London: Oneworld.
  • Lucas, L. 2006. The Research Game in Academic Life. Buckingham: SRHE/OpenUP.
  • Marques, M., J.J.W. Powell, M. Zapp, and G. Biesta. 2017. “How Does Research Evaluation Impact Educational Research? Exploring Intended and Unintended Consequences of Research Assessment in the United Kingdom, 1986–2014.” European Educational Research Journal 16 (6): 820–842.
  • Martin, B.R. 2011. “The Research Excellence Framework and the ‘Impact Agenda’: Are We Creating a Frankenstein Monster?” Research Evaluation 20 (3): 247–254. doi:https://doi.org/10.3152/095820211X13118583635693.
  • McNay, I. 1997. The Impact of the 1992 RAE on Institutional and Individual Behaviour in English HE: The Evidence from a Research Project. Bristol: HEFCE.
  • McNay, I. 2015a. “Learning from the UK Research Excellence Framework: Ends and Means in Research Quality Assessment and the Reliability of Results in Education.” Higher Education Review 47 (3): 24–47.
  • McNay, I. 2015b. “Does Research Quality Assessment Increase Output and Give Value for Money?” Public Money and Management 35 (1): 67–68. doi:https://doi.org/10.1080/09540962.2015.986888.
  • McNay, I. 2016. “Imbalancing the Academy: The Impact of Research Quality Assessment.” Sociologia Italiana – AIS Journal of Sociology 8 (7): 119–125.
  • McNay, I. 2017. “TEF: Why and How? Ideological and Operational Imperatives Driving Policy.” Compass 10 (2): 54–59. https://journals.gre.ac.uk/index.php/compass/issue/view/51
  • McNay, I. 2019. “Governance, Leadership and University Cultures: Do Universities Critique Social Norms, or Copy Them?” In Values of the University in a Time of Uncertainty, edited by P. Gibbs, J. Jameson, and A. Elwick, 89–105. Cham: Springer Nature.
  • Metcalfe, J., K. Wheat, M. Munafo, and J. Parry. 2020. Research Integrity: A Landscape Study. Swindon: UKRI.
  • Ministry of Education. 2014. Review of the Performance-based Research Fund. Auckland: Ministry of Education.
  • Naidoo, R. 2018. “The Competition Fetish in Higher Education: Shamans, Mind Snares and Consequences.” European Educational Research Journal 17 (5): 605–620. doi:https://doi.org/10.1177/1474904118784839.
  • NAO (National Audit Office). 2017. “The Higher Education Market.” Report by the Comptroller and Auditor General. National Audit Office.
  • Neate, R. 2021. “Oxford Vaccine Scientists in Line for £20m Payday with US Flotation.” Guardian, April 8, 6.
  • Neyland, D., V. Ehrenstein, and S. Milyaeva. 2019. Can Markets Solve Problems? London: Goldsmiths Press.
  • Oancea, A. 2016. “Research Impacts: Networks and Narratives.” Presentation to Launch Seminar, Centre for Global Higher Education, UCL Institute of Education, February 3. www.researchcghe.org
  • Pearce, S. 2021. “The Value of a National TEF Is in Enhancing University Learning and Teaching.” WonkHE, February 8.
  • RAE. 2006. RAE 2008. Panel Criteria and Working Methods. Bristol: HEFCE. www.rae.ac.uk/pubs
  • REF. 2014. The Research Excellence Framework, 2014: The Results. Bristol: HE Funding Council for England.
  • REF. 2019. “Guidance on Submissions.” REF. January.
  • REF. 2020. “REF Assessment Phase Panels Confirmed.” [Press release]. December 4.
  • Research England. 2021. Future Research Assessment Programme. Bristol: UKRI.
  • Robbins, L. (chair). 1963. Higher Education. A Report. London: Her Majesty’s Stationery Office.
  • Ryan, B. 2020. “Fearful England Need to Play with Autonomy if They Don’t Want to See Paris Repeated.” The Guardian, February 6, 37.
  • Schafer, L.O. 2016. “Performance Assessment in Science and Academia: Effects of RAE/REF on Academic Life.” Working Paper No. 7. Centre for Global Higher Education. www.researchcghe.org
  • Sharp, S., and S. Coleman. 2005. “Ratings in the Research Assessment Exercise 2001 – The Patterns of University Status and Panel Membership.” Higher Education Quarterly 59 (2): 153–171. doi:https://doi.org/10.1111/j.1468-2273.2008.00288.x.
  • Shu, F., C.R. Sugimoto, and V. Larivière. 2021. “The Institutional Stratification of the Chinese Higher Education System.” Quantitative Science Studies 2 (1): 327–344. doi:https://doi.org/10.1162/qss_a_00104.
  • Simmons, J. 2020. “The Political Environment around Research.” Presentation to HEPI webinar, The Research Landscape, October 20. www.hepi.ac.uk
  • Solloway, A. 2020. “Science Minister on ‘The Research Landscape’.” Transcript of Speech to HEPI Webinar, October 20. www.gov.uk/government/speeches/science-minister-on-the-research-landscape
  • Stern, N. 2016. Building on Success and Learning from Experience. An Independent Review of the Research Excellence Framework. London: Department of Business, Energy and Industrial Strategy.
  • UniversitiesUK. 2020. HE in Facts and Figures, 2019. London: Universities UK. https://universitiesuk.ac.uk/factsandstats
  • Westwood, A. 2021. What Does ‘Levelling Up’ R+D Look Like? London: HEPI.
  • Wieczorek, O., and D. Schubert. 2020. “The Symbolic Power of the Research Excellence Framework.” Preprint, ResearchGate. Accessed 2 January 2020.
  • Willetts, D. 2017. A University Education. Oxford: Oxford University Press.
  • Williams, B. 1993. “Research Management for Selectivity and Concentration.” Higher Education Quarterly 47 (1): 4–16. doi:https://doi.org/10.1111/j.1468-2273.1993.tb01610.x.
  • Young, M. 1958. The Rise of the Meritocracy. Harmondsworth: Penguin.