The following special section of perspectives focuses on the use of metrics to assess the broader societal impacts of research. As the original editorial published in the first issue of the Journal of Responsible Innovation describes them, perspectives are shorter and ‘intentionally ecumenical’ contributions that,
while potentially derived from research, are somewhat more descriptively or polemically oriented. (Guston et al. 2014, 8)
Using metrics to assess research impact is nothing new. The field of Scientometrics – which employs various empirical data and methods to analyze, visualize, and assess the scientific impacts of research – can trace its origins to the first half of the twentieth century (Garfield 2009). Near the end of the twentieth century, however, research managers and policy makers arguably became more interested in assessing the impacts of research on society, rather than (or at least in addition to) the impacts of research on the research enterprise itself (Holbrook 2012). Often couched in terms of accountability to the public, many early attempts at assessing ‘broader societal impacts’ or simply ‘broader impacts’ utilized peer review (Holbrook and Frodeman 2011; Frodeman and Briggle 2012). By the early twenty-first century, attempts to capture broader impacts using quantitative metrics had also been employed, and had come under attack as stand-alone tools. The UK’s 2014 Research Excellence Framework, which used peer review panels to assess both internal and broader impacts, employed an approach that Donovan (2011) argued was the state of the art: ‘Best practice combines narratives with relevant qualitative and quantitative indicators to gauge broader social, environmental, cultural and economic public value’ (175, emphasis added). Yet metrics – including non-traditional bibliometrics and indicators often referred to as ‘altmetrics’ – are often used without any such qualitative or narrative information, and they seem to exert a special pull on experts and non-experts alike when it comes to evaluating research. The intuition that using metrics alone to assess broader societal impacts is a dangerous temptation underlies current attention to the responsible use of metrics (Wilsdon 2016, 2018).
The perspectives in this special section stem from a National Science Foundation-sponsored research workshop, ‘Evaluating broader impacts: The state of the art,’ which was held in Washington, DC, on February 10 and 11, 2016. Workshop participants were then invited to expand upon their initial arguments and to relate the issue of using metrics to assess broader impacts to the notion of responsible research and innovation (RRI). Donovan (2018) dubs the temptation to use metrics as stand-alone tools for broader impacts evaluation ‘metricide,’ suggesting that it is due to the rise of professional ‘impactologists’ and an increasing level of ‘impact fatigue’ among researchers subject to impact assessments. In an effort to combat metricide, Donovan urges us to move toward ‘ethical impactology,’ which would require the cultivation of an impact-aware culture among academics to offset impact fatigue and the temptation to give in to the ease of metrics-only assessments. Holbrook (2018) takes the fight straight to the impactologists by treating both RRI and altmetrics as tools that could be designed (for managers) to measure the impacts of research on society or to encourage (researchers to engage in) activities that could enhance the broader impacts of their research. Holbrook argues that those designing tools for broader impacts assessment ought to design them to empower researchers rather than policy makers and managers of research. Where Donovan and Holbrook treat researchers and those evaluating them as separate communities with different interests, Briggle (2018) suggests that researchers and policy makers are actually working together, engaged in a sort of magic act designed to create the illusion of broader impacts. This collaborative act distracts us from the fact that not all impacts on society are beneficial. Responsible research evaluation, however, must account for negative as well as positive impacts.
Frodeman (2018) goes a step further than Briggle, suggesting that talk of societal impact and responsible innovation ‘has become a cheat’ that ‘gives science a green light disguised as a flashing yellow.’ In order to be fully responsible, according to Frodeman, we must question the idea of science as a journey toward infinite knowledge.
Acknowledgement
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
J. Britt Holbrook is an assistant professor of Philosophy in the Department of Humanities at New Jersey Institute of Technology.
ORCID
J. Britt Holbrook http://orcid.org/0000-0002-5804-0692
References
- Briggle, Adam. 2018. “The Great Impacts Houdini.” Journal of Responsible Innovation. doi:10.1080/23299460.2017.1422925.
- Donovan, Claire. 2011. “State of the Art in Assessing Research Impact: Introduction to a Special Issue.” Research Evaluation 20 (3): 175–179. doi:10.3152/095820211X13118583635918.
- Donovan, Claire. 2018. “For Ethical Impactology.” Journal of Responsible Innovation. doi:10.1080/23299460.2017.1300756.
- Evaluating Broader Impacts: The State of the Art. 2016. Workshop Agenda and Presentations. https://philosophyimpact.org/publications-and-workshops/.
- Frodeman, Robert. 2018. “The Ethics of Infinite Impact.” Journal of Responsible Innovation. doi:10.1080/23299460.2018.1489172.
- Frodeman, Robert, and Adam Briggle. 2012. “The Dedisciplining of Peer Review.” Minerva 50 (1): 3–19. doi:10.1007/s11024-012-9192-8.
- Garfield, Eugene. 2009. “From the Science of Science to Scientometrics Visualizing the History of Science with HistCite Software.” Journal of Informetrics 3 (3): 173–179. doi:10.1016/j.joi.2009.03.009.
- Guston, David H., Erik Fisher, Armin Grunwald, Richard Owen, Tsjalling Swierstra, and Simone Van der Burg. 2014. “Responsible Innovation: Motivations for a New Journal.” Journal of Responsible Innovation 1 (1): 1–8. doi:10.1080/23299460.2014.885175.
- Holbrook, J. Britt. 2012. “Re-assessing the Science–Society Relation: The Case of the US National Science Foundation’s Broader Impacts Merit Review Criterion (1997–2011).” In Peer Review, Research Integrity, and the Governance of Science–Practice, Theory, and Current Discussions, edited by Robert Frodeman, J. Britt Holbrook, Carl Mitcham, and Hong Xiaonan, 328–362. Beijing: People’s Publishing House.
- Holbrook, J. Britt. 2018. “Designing Responsible Research and Innovation to Encourage Serendipity Could Enhance the Broader Societal Impacts of Research.” Journal of Responsible Innovation. doi:10.1080/23299460.2017.1410326.
- Holbrook, J. Britt, and Robert Frodeman. 2011. “Peer Review and the Ex Ante Assessment of Societal Impacts.” Research Evaluation 20 (3): 239–246. doi:10.3152/095820211X12941371876788.
- Wilsdon, James. 2016. The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. https://responsiblemetrics.org/the-metric-tide/.
- Wilsdon, James. 2018. “Has the Tide Turned Towards Responsible Metrics in Research?” The Guardian, July 10.