Perspectives

For ethical ‘impactology’

Claire Donovan
Pages 78–83 | Received 27 Jun 2016, Accepted 25 Feb 2017, Published online: 15 Mar 2017

ABSTRACT

The routine evaluation of broader impacts of research has made the UK an impact-aware culture, although the practice of assessment has run ahead of its theory. The paper describes UK practice in assessing broader impacts, notes the rise of the profession of ‘impactology’ alongside the rise of academics’ impact-fatigue, and suggests that the two combined may lead us to commit ‘metricide’ by abandoning time-consuming impact narratives in favour of simple metrics. The paper concludes by considering what an ethical impactology might look like, and finds at its heart the responsible use and non-use of metrics.

1. Introduction

Discussion at the ‘Evaluating broader impacts: The state of the art’ workshop highlighted that the routine evaluation of broader impacts is more advanced at the national level in the UK than in other countries. This is in large part due to attempts by the UK government to hold public research funding agencies to account so that scientific research is seen to benefit wider society. Yet, in this context, the practice of assessing broader impacts has raced far ahead of its theory. The workshop allowed time to take stock, to reflect on positive and negative aspects of assessing broader impacts, and to consider possible future directions. Below I outline UK practice; describe the rise of ‘impactology’ and how impact-fatigue might lead to committing ‘metricide’; and present the case for an ethical impactology.

2. Broader impacts: the UK context

UK Higher Education Institutions (HEIs) rely heavily on government funds, which are received through a dual support system. First, competitive grant funds are distributed by various national Research Councils, amounting to around £3 billion ($US 4.3 billion) for 2015/2016. The UK Research Councils require all grant applications to include a ‘pathways to impact’ section describing strategies for achieving broader economic or societal impacts, currently defined as ‘fostering global economic performance, and specifically the economic competitiveness of the United Kingdom, increasing the effectiveness of public services and policy, and enhancing quality of life, health and creative output’ (Research Councils UK 2016).

Second, an annual block grant is distributed to HEIs based on their relative performance in the Research Excellence Framework (REF) exercise. The last REF was conducted in 2014, and approximately £1.5 billion ($US 2.16 billion) was distributed to HEIs for 2015/2016. The REF is a national research evaluation exercise that rates the relative performance of groups of university researchers (Units of Assessment, or UoAs) by discipline. Scientific quality accounts for 65% of UoA ratings, research environment for 15%, and broader impacts for 20%. The impact criterion was first introduced in 2014, and comprised an impact template setting out each UoA’s approach to creating research impact, and impact case studies demonstrating the ‘reach’ (breadth) and ‘significance’ (depth) of broader impacts, presented as a narrative ideally supported by evidence, including appropriate metrics. The 2014 REF defined broader impact as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’ (HEFCE 2015), with additional panel-specific (or discipline-specific) guidance. The next REF exercise is due to be conducted no later than 2021 (Department for Business, Innovation and Skills 2015a), and the impact criterion may increase to 25% of UoA ratings.
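
To make these weightings concrete, a UoA’s overall rating can be sketched as a weighted combination of the three assessment elements. This is an illustrative simplification only, assuming the published weightings combine linearly (in practice the REF combines star-rating sub-profiles element by element rather than producing a single score):

Overall (UoA) = 0.65 × Quality + 0.15 × Environment + 0.20 × Impact

On this reading, raising the impact weighting to 25% would reduce the combined weight of the other two elements accordingly.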

It is fair to say that UK HEIs now have an impact-aware culture. However, there are concerns about the future of impact assessment in the REF in particular. Martin (2016, 18–19) has argued that the quality assessment component of the REF (and its predecessor, the Research Assessment Exercise) has run its course: overall research quality standards have been raised, and the costs (financial, behavioural and institutional) of any future REF will outweigh its possible benefits. He believes this will also be the case for the broader impacts element of the REF:

… there will no doubt be widespread criticism of the ‘simple-minded’ approach adopted, and work will start to ‘improve’ the assessment of impact in the following REF. REF 2 will consequently be more elaborate, more burdensome and more time-consuming. It will also encourage more sophisticated game-playing. (Martin 2016, 17)

And so on, he argues, for REF 3 and beyond.

The shape of the next REF is currently under review by the UK government, which is seeking to reduce the burden on institutions and researchers, including by considering whether metrics can reasonably replace the narrative element in the assessment of broader impacts (Department for Business, Innovation and Skills 2015b, 2016).

3. The rise of ‘impactology’

The workshop coined a new term: ‘impactology’ (the practice of assessing broader impacts, and the study of that practice). A new class of ‘impactologists’, or impact professionals, is also emerging, comprising HEI-based impact managers supported by consultants (including professional case study writers) and for-profit companies supplying impact documentation software and training courses. As impactology becomes more professionalised and bureaucratised, it is important to guard against expedient pressures to save time and money by pursuing impact metrics at the expense of more appropriate and nuanced assessment approaches.

4. Impact-fatigue

Assessing the broader impacts of research at the national level in the 2014 REF was a world first. While initially controversial, there is now ‘grudging support’ amongst the UK’s academic community for the narrative approach to assessment (Oancea 2016). Yet this must be balanced against the fact that writing impact statements and impact case studies involves a great deal of time and effort, and that many academics want to be less regulated and left alone to get on with research and teaching. Anecdotally, after the REF 2014 deadline, several of my colleagues around the UK confessed that while the impact case study approach was more equitable than simple metrics (especially for the humanities, arts and social sciences), they were so tired of the REF burden that they would happily switch to a metrics-only system. We can call this REF-fatigue or, more generally, impact-fatigue. But a weary backlash against bureaucracy presents the danger of committing ‘metricide’.

5. (I + IF = M): impactology + impact-fatigue = metricide

How broader impacts are defined and assessed is a vital issue. For example, in 2011 an expert workshop on the state of the art in assessing research impact concluded that best practice was the use of narratives supported by available robust metrics (Donovan 2011). Little has changed in recent years by way of innovation or new best practice, the exception being the nascent (and yet to be realised) promise of altmetrics and social media data linking publications, dissemination, and engagement with research users to broader impacts. In this light, the approach to assessing broader impacts in the 2014 REF matched the state of the art. As already noted, the UK REF system is currently under review. Wilsdon produced a summary of submissions to the consultation phase of the review, and observed ‘widespread recognition that robust metrics for impact don’t yet exist, such that narrative case studies, assessed by peer review, remain the best option’ (2016, 3–4). Yet while the future of the REF hangs in limbo, we should not take this reasoning for granted.

Martin (2011) has drawn our attention to the fact that best practice in assessing the broader impacts of research derives from a long tradition of evaluating research outcomes in the health and medical sciences. For example, ‘payback studies’ use mixed methods to produce very detailed accounts of the impact of research on the research system, product and drug development, changes in clinical practice and public behaviour, improvements in service delivery, health gain and economic returns (Donovan and Hanney 2011). Yet, for Martin, this has been more akin to a craft industry, with bespoke items ordered by specific clients, while impact assessment is moving into mass production for the whole Higher Education sector. In this light, metrics seem to promise efficiency in terms of time and money saved, but at the expense of nuance and detail.

A parallel may be drawn with a similar exercise: the development of Australia’s Research Quality Framework (abandoned before implementation and replaced by Excellence in Research for Australia). An almost identical narrative-based approach to assessing research was recommended because robust impact metrics did not yet exist, but a new minister baulked at its apparent complexity, and instead four simple metrics were adopted for the whole sector: patents, plant breeders’ rights, registered designs and commercialisation income (Donovan 2008). This is perhaps the starkest example of committing metricide in broader impact assessment, and it ultimately hid the wide and varied benefits of research for society.¹

6. For ethical impactology

The Higher Education Funding Council for England (HEFCE), the body that administers the REF, commissioned an independent review of the role of metrics in research assessment and management, which reported in 2015. The review studied the evidence, and its report, The Metric Tide (Wilsdon et al. 2015), recommended a balanced approach to impact assessment using narratives and available robust metrics, and echoed Hicks et al. (2015) in calling for the responsible use of metrics in research assessment and management. Specifically, responsible metrics are robust, transparent, diverse (vary by field and career path), reflexive (understand the effects measures may have on what is being measured, and respond appropriately) and humble (accept that quantitative data supports, but is not superior to, qualitative expert assessment) (Wilsdon et al. 2015, x). This would seem a good starting point for devising an ethical impactology.

Might REF-fatigue or impact-fatigue be symptoms of innovation gone wrong? Perhaps the largest challenge ahead is to retain an impact-aware culture, but with less regulation, and to use the lens of responsible metrics without completely falling into a mass production mode of assessment. It is important that definitions of research impact remain open and defined by the academic community, rather than be closed down and restricted by impactologists and the limits of whatever simplistic data is to hand. Overly data-driven systems are likely to neglect social and cultural impacts, and overlook the distinctive contributions of not only the humanities, arts and social sciences, but of all research endeavours, to society at large.

Importantly, more ‘holistic’ state-of-the-art approaches to impact assessment encourage the nurturing of qualitative methods so that case studies may triangulate data (Donovan and Hanney 2011), include the perspectives of various stakeholders on what ought to be valued, and, crucially, acknowledge the importance of co-production as an essential mechanism in achieving impact (Spaapen and van Drooge 2011; Joly et al. 2015). It follows that externally imposed measures are less likely to produce valuable or enduring wider impacts than those agreed on and actively pursued by a range of stakeholders, including researchers, research users, research funders, policymakers, citizens, practitioners, research evaluators, or indeed impactologists of the ethical variety.

Acknowledgements

Thanks are due to the organisers of ‘Evaluating Broader Impacts: The State of the Art’ for their invitation to participate in the workshop and contribute to this special issue, and to the anonymous reviewers and Erik Fisher for their helpful comments.

Disclosure Statement

No potential conflict of interest was reported by the author.

Notes on Contributor

Claire Donovan is a Reader at Brunel University London, and previously held posts in the Research School of Social Sciences at The Australian National University; Nuffield College, Oxford University; and The Open University. Her research focuses on the governance of the humanities, arts and social sciences within science systems.

Additional information

Funding

This work was supported by the National Science Foundation [grants 1353796 and 1445121].

Notes

1. An anonymous referee observed that the use of the neologism ‘metricide’ is incorrect, as it should mean ‘to kill metrics’ rather than ‘to kill with metrics’, which was the original intention. However, on further reflection this observation can be combined with an undercurrent of this article: the use of unethical impactology could indeed lead to the demise of metrics themselves, which, one could argue, might helpfully be encouraged.

References

  • Department for Business, Innovation and Skills. 2015a. Fulfilling Our Potential: Teaching Excellence, Social Mobility and Student Choice, Cm. 9141. London: HMSO.
  • Department for Business, Innovation and Skills. 2015b. Review of the Research Excellence Framework (REF): Terms of Reference. London: Department for Business, Innovation and Skills.
  • Department for Business, Innovation and Skills. 2016. Lord Stern’s Review of the Research Excellence Framework: Call for Evidence. London: Department for Business, Innovation and Skills.
  • Donovan, Claire. 2008. “The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research.” New Directions for Evaluation 118: 47–60. doi: 10.1002/ev.260
  • Donovan, Claire. 2011. “State of the Art in Assessing Research Impact: Introduction to a Special Issue.” Research Evaluation 20 (3): 175–179. doi: 10.3152/095820211X13118583635918
  • Donovan, Claire, and Stephen Hanney. 2011. “The ‘Payback Framework’ Explained.” Research Evaluation 20 (3): 181–183. doi: 10.3152/095820211X13118583635756
  • HEFCE [Higher Education Funding Council for England]. 2015. “REF Impact.” Accessed May 9, 2016. http://www.hefce.ac.uk/rsrch/REFimpact/.
  • Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah de Rijcke, and Ismael Rafols. 2015. “Bibliometrics: The Leiden Manifesto for Research Metrics.” Nature 520: 429–431. doi: 10.1038/520429a
  • Joly, Pierre-Benoît, Ariane Gaunand, Laurence Colinet, Philippe Larédo, Stéphane Lemarié, and Mireille Matt. 2015. “ASIRPA: A Comprehensive Theory-Based Approach to Assessing the Societal Impacts of a Research Organization.” Research Evaluation 24 (4): 440–453. doi: 10.1093/reseval/rvv015
  • Martin, Ben. 2011. “The Research Excellence Framework and the ‘Impact Agenda’: Are we Creating a Frankenstein Monster?” Research Evaluation 20 (3): 247–254. doi: 10.3152/095820211X13118583635693
  • Martin, Ben. 2016. “What’s Happening to Our Universities?” Science Policy Research Unit Working Paper Series, SWPS 2016-03. Brighton: University of Sussex.
  • Oancea, Alis. 2016. “Challenging the Grudging Consensus Behind the REF.” Times Higher Education, March 25. Accessed May 9, 2016. https://www.timeshighereducation.com/blog/challenging-grudging-consensus-behind-ref.
  • Research Councils UK. 2016. “Pathways to Impact.” Accessed May 9, 2016. http://www.rcuk.ac.uk/innovation/impacts/.
  • Spaapen, Jack, and Leonie van Drooge. 2011. “Introducing ‘Productive Interactions’ in Social Impact Assessment.” Research Evaluation 20 (3): 211–218. doi: 10.3152/095820211X12941371876742
  • Wilsdon, James. 2016. “Consensus and Conflict: What Do Responses to Stern Tell Us About the Future of the REF?” WONKHE, April 18. Accessed May 9, 2016. http://wonkhe.com/blogs/consensus-and-conflict-what-do-responses-to-stern-tell-us-about-the-future-of-the-ref/.
  • Wilsdon, James, Liz Allen, Eleonora Belfiore, Philip Campbell, Stephen Curry, Steven Hill, Richard Jones, et al. 2015. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Bristol: HEFCE. Accessed May 9, 2016. doi: 10.13140/RG.2.1.4929.1363
