Original Articles

Research quality assessment in education: impossible science, possible art?

Pages 497-517 | Published online: 28 Jul 2009

Abstract

For better or for worse, the assessment of research quality is one of the primary drivers of the behaviour of the academic community, with all sorts of potential for distorting that behaviour. So, if you are going to assess research quality, how do you do it? This article explores some of the problems and possibilities, with particular reference to the UK Research Assessment Exercise, the discussion around the proposed new Research Excellence Framework and the ongoing work of the Framework 7 European Education Research Quality Indicators project (EERQI). It begins by asking whether there are any meaningful generic criteria of quality which can be applied to research, and by examining the tension between such criteria and the diverse and sometimes contradictory requirements of educational research. It then looks at attempts to identify measurable indicators of quality, including consideration of the location of the publication, citation and download counts, and approaches based on semantic analysis of machine-readable text, but finds all these quasi-‘scientific’ attempts at quality assessment wanting (hence the ‘impossible science’). This is all the more the case because of their attachment to extrinsic correlates of quality rather than its intrinsic characteristics, and hence the probability that the measures will induce behaviours not conducive to quality enhancement. Instead, the article turns to a different approach, better expressed perhaps as quality ‘appreciation’, ‘discernment’ or even ‘connoisseurship’, and rooted in the arts and humanities rather than in (quasi) science. It considers whether this might offer a better approximation to the kind of judgement involved in the quality assessment of a piece of research writing than the sort of metrics-based approaches favoured in current discussion.

Acknowledgements

I am grateful to Hilary Perraton, John Elliott, Cristina Devecchi and colleagues in the Centre for Applied Research in Education at St Edmund’s College—as well as the anonymous BERJ reviewers—for their contributions to the development of this paper. Ágnes Sándor has, I hope, enabled me to correct some of my misunderstandings of the contribution of semantic analysis but will probably not agree with my assessment of its contribution.

Notes

1. Full details of the current RAE (and documentation from previous assessments) are available on the Higher Education Funding Council for England (HEFCE) RAE website at www.rae.ac.uk. (One of the features of the assessment is the Council’s commitment to transparency in the procedures employed.)

2. In the European Education Research Quality Indicators project (funded under EU Framework 7) we have started operating with these plus two additional criteria of ‘integrity’ and ‘style’, but I shall leave these aside for the purposes of this article. These criteria of rigour, originality and significance are themselves riddled with ambiguity, notwithstanding the attempts of the RAE panels to clarify them. I have, however, discussed these problems elsewhere (Bridges, Citation2003, Citation2009) and will not pursue these issues here.

3. A further 20% was based on evidence of the quality of ‘the research environment’ and 10% on ‘evidence of esteem’.

4. There are, of course, many other issues, especially to do with the unintended consequences of the RAE, some of which I discuss in Bridges (Citation2009).

5. The Roberts Review estimated the ‘real terms’ cost of the 2001 RAE at £5–6 million and anticipated that the cost of the 2008 exercise would be substantially higher (Roberts, Citation2003). Figures for 2008 will be published in due course. The ambition to reduce the burden of work and the cost of research assessment through the use of bibliometrics looks, however, unlikely to be fulfilled. Expert groups working on the revised Research Excellence Framework had concluded by May 2009 that ‘whichever approach to bibliometrics was used … HEIs will want to verify the data, and therefore the burden of using bibliometrics within the REF is unlikely to be reduced compared with the RAE’ (HEFCE, Citation2009b, para. 18). The fact that the HEFCE Expert Advisory Group anticipates that ‘any additional cost of using bibliometrics would be largely absorbed by internal management within institutions’ (HEFCE, Citation2009a, p. 7) will not immediately endear the proposal to the nation’s universities.

6. See also the report by Evidence Ltd to Universities UK on ‘The use of bibliometrics to measure research quality in UK higher education institutions’ (Universities UK, Citation2007, p. 35).

7. I first suggested this model of research assessment to Derek Hicks, then HEFCE regional consultant to the East of England, following the 2001 RAE. The suggestion apparently created considerable hilarity in the usually rather subdued corridors of HEFCE, but this response only served to implant the seed of the idea more firmly in my mind.
