Debate: We need to change the culture of reliance on inappropriate uses of journal metrics—a publisher’s viewpoint


The recent new development article by van Helden and Argento (2020) published in Public Money & Management astutely highlights some of the pitfalls created by an over-reliance on certain metrics in the research environment. How metrics are used, for managerialist convenience, in the assessment of journal quality, researchers’ performance and academic career progression has long been a topic of debate. From an academic journal publisher’s viewpoint, responding to the demands of the author market, there are also substantial effects on how we develop our journal portfolios.

The impact of metrics on journal performance

In business & management and related fields, strong metrics, such as high journal Impact Factors (IFs) or high ratings in journal quality lists (for example CABS, ABDC), help journals in a number of ways. Better-ranked journals tend to receive more submissions, and more of higher quality, allowing their editors to uphold and improve the quality threshold. Higher-ranked journals also tend to see larger numbers of article downloads and citations, creating a virtuous circle effect on those journals’ performance. Journals lacking strong metrics are more likely to struggle for both quantity and quality of submissions.

Given the pressure to publish in highly-ranked journals, publishers aim to provide authors with a wide choice of highly-ranked journals to publish in. How to improve journal metrics is a frequent conversation with editors and publishing partners. But is it right that metrics govern the research community’s publishing activity to this extent?

Most well-known metrics measure the story of the research, i.e. just the final research article, rather than the whole research process itself. The venue of publication tells us nothing useful about the quality of a paper. IFs really measure how successful the editorial judgement of journal editors and reviewers is at identifying research that attracts citations. And we often assume that work is cited only for positive reasons. The IF was originally created to help librarians identify journals to purchase, not as a measure of the scientific quality of research articles and journals. Its misuse over decades for other purposes is troubling.
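
For context, the IF is simply a citation ratio at the level of the whole journal. The standard two-year IF of a journal for year Y is, in essence (the precise rules for which items count as ‘citable’ are set by the index provider):

\[
\mathrm{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

Nothing in this ratio speaks to the quality of any individual article published in that journal.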

Changing the metrics culture

My challenge to the business & management research community is that this culture needs to change. Our use of metrics is overly simplistic and unhealthy, placing unnecessary pressure on researchers at all stages of their careers, but particularly on early-career researchers.

If the current suite of metrics and rankings is not appropriate for use alone, what should we add to it or replace it with? In 2012, a meeting in San Francisco produced the Declaration on Research Assessment (DORA), with the aim of improving evaluation and research assessment in publications, university hiring and promotion decisions, and the awarding of funding grants. Among its many recommendations, DORA advocates an end to the use of journal-level citation metrics and scores, such as the IF and the H-index, to assess quality in all these situations. By tying rewards to metrics, universities and research organizations incentivize gaming and encourage behaviours that may be at odds with their larger purpose. The culture of short-termism engendered by metrics also impedes innovation and ingenuity.

The UK’s 2015 Metric Tide report concluded that metrics should support, not supplant, expert judgement, informing peer review so that the two act as complementary tools of evaluation. The Forum for Responsible Research Metrics, launched in 2016, aims to change the underlying culture of the use of metrics in the UK.

But this needs to be a global project. And there are encouraging signs. In February 2020, China—the world’s largest producer of academic articles—issued two government documents on reforms to its research and higher education evaluation systems, reducing the focus on the quantity of researchers’ publications and on targeting journals with an IF (Tollefson, 2018).

Overall, increasing numbers of institutions, funders, publishers and individuals have signed DORA. However, support for and implementation of DORA principles have been piecemeal. While numerous Dutch institutions are setting the best example, many universities say they are attracted by the principles of DORA but are not sure how to set and implement a metrics policy. This needs championing at the institutional level—researchers working in the university sector can advocate for DORA’s proposed practices to be adopted. If they are, it will benefit researchers and benefit society more broadly, as research output will no longer be governed solely by crude metrics like journal ranking lists and IFs.

Advocating for the use of responsible metrics

Metrics are often regarded as neutral, objective tools, but they contain cultural biases and assumptions. A better way to understand them is as indicators. They indicate what the value of a piece of research might be, but they require interpretation and contextualization, rather than being quoted as a ‘one-size-fits-all’ score. We should be looking at the value of all outcomes and all outputs of research. And there are numerous metric tools out there trying to do this, including:

  • Article-level metrics count the number of views, downloads and citations at the article level. Does that not show how useful each article is, rather than what journal it’s published in?

  • Real world impact and policy—real-life outcomes that an article contributed to—for example use in a committee, a policy change, take-up by practitioners and companies.

  • Press coverage—did a piece of research have sufficient applicability to be of interest to the general public?

  • New data and intellectual property (IP)—patents, new data sets used in practice or commercially.

  • HuMetricsHSS—an initiative arguing for a rethinking of humane indicators of excellence in the humanities and social sciences. It aims to establish a values-based framework for evaluating research and all aspects of scholarly life in institutions and organizations—espousing values such as equity, openness, collegiality, community and quality.

  • Altmetrics—these provide a weighted score based on real-world impacts like some of the above: social media interactions, news coverage, use in policy documents, blogs etc.

There needs to be a push at the institutional and international level to adopt these types of more responsible metrics, using a whole array of them in research assessment to support expert peer review, rather than relying on inappropriate uses of IFs, H-indexes etc. Should we only be using metrics and peer review, though? How else should we judge research quality? Where appropriate for quantitative studies, researchers should surely spend time replicating and reproducing studies—particularly highly regarded studies—to see whether they generate the same or similar results. Journals, in a world less obsessed with IFs and journal rankings, could publish more of these replications, which for the most part are absent from business & management publications to date. At the moment, many journals are only interested in novelty and contribution to theory, rather than validating, questioning or extending previously published work.

Responsible metrics are just one of the areas opened up by the push for more open scholarship and transparent procedures in research and higher education. We must recognize that we live in an era when public trust in experts is being eroded. Researchers and research institutions need to redouble their efforts to uphold the highest standards in how they conduct research and how they assess it. Using a basket of metrics, including responsible metrics, would be the best way to address the current problems with research assessment.

Acknowledgements

The author would like to thank Vicki Whittaker for her comments on an earlier draft of this article.

Disclosure Statement

James Cleaver is an employee of the academic publishing company, Taylor & Francis. Any opinions and views expressed in this publication are the opinions and views of the author and are not the views of, or endorsed by, Taylor & Francis.
