Editorial

Measuring Journal Success

The Journal of the American Planning Association (JAPA), though published by Taylor & Francis, is still owned and edited by the American Planning Association (APA). A question APA members periodically ask is how they can know whether JAPA is a success. An expanding number of quantitative indicators, such as the Journal Impact Factor and CiteScore, appear to answer this question. Do they?

Unsurprisingly, there has been quite a bit of debate on journal success measures as editors, authors, publishers, and readers grapple with how to assess journal quality and impact. Quantitative methods, or bibliometrics, have proponents, and indeed they do measure some items of value in planning (Stiftel, 2011). However, they are heavily influenced by factors that vary across fields and across specialty areas within a field or discipline. These areas of variation include patterns of sourcing within specialties (for citations), levels of subscription and pirating (for downloads), and even the social media savvy of individual authors and editors (for Altmetrics). In all of this quantification, the central issue of article and journal research quality can be lost. For a journal such as JAPA, the long-term influence on planning and policy is also important and not well assessed by these measures. The bottom line is that there are many imperfect measures, and some of the most promoted are the least useful.

ABOUT THE EDITOR: Ann Forsyth is the Ruth and Frank Stanton Professor of Urban Planning at Harvard University.

The Journal Impact Factor and Its Alternatives

At their base, journal measures aggregate scores for specific articles into an overall measure for a journal. One of the best known of these measures is the Clarivate Web of Science 2-year Journal Impact Factor (JIF), released in the Journal Citation Reports. The formula for this measure is the number of citations received in 1 year to articles published in the prior 2 years (numerator) divided by the number of articles published in those 2 years (denominator). For example, the 2019 impact factor is the number of citations in 2019 to articles published in 2017 and 2018 divided by the number of articles published in 2017 and 2018. Not everything counts as an article: In JAPA, only the fully refereed research articles, review essays, and viewpoints do. Editorials, short commentaries, and book reviews do not count in the denominator (Clarivate Analytics, n.d.).
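Written out for a general year Y, the definition above amounts to the following ratio (a restatement of the description in this paragraph, not Clarivate’s official notation):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of countable articles published in years } Y-1 \text{ and } Y-2}
\]

So the 2019 figure divides 2019 citations to 2017 and 2018 content by the count of qualifying articles published in those same 2 years.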

JAPA’s 2-year impact factor has been rising. The recently released 2019 JIF was 4.7 (technically 4.711, but that is overly precise). JAPA thus regained its place among the top few journals in the areas of urban studies and of regional and urban planning. I thank the prior editor, Sandi Rosenbloom, for this improvement: Because I did not take the helm until 2019, the articles counted in the 2019 JIF are ones she edited. This 2-year citation window is, however, more appropriate for journals in the hard sciences, where fields move fast, than for planning, where articles have a longer shelf life. In journals such as JAPA, with relatively few articles, the JIF is also sensitive to citations to specific articles in any one year. A 5-year JIF is calculated in a similar way, across 5 years rather than 2; it is an improvement but still sensitive to citations in one specific year.

Further, high impact factors do not always reflect high quality. As Bruce Stiftel reminded me when commenting on this editorial, an article hitting the street just as its subject is gaining wide traction in the field can buoy the impact factor of even a mediocre piece. This year, articles on pandemics and planning will do better than they might have just a few years ago. Occasionally, an article is highly cited because it makes a mistake that other authors identify and criticize. Goldstein and Maier (2010) show that the reputation of planning journals among faculty diverges widely from JIF scores.

With an increased ability to count and track aspects of publication, other measures of journal impact have emerged (Taylor & Francis, 2020). Here I mention just a few of them. As with predatory journals, there are also fake impact factors; the metrics discussed here, however, are well established (Forsyth, 2019).

  • The Scopus CiteScore has recently changed its methods to calculate the ratio of citations in the past 4 years divided by peer-reviewed documents in the journal in that same period. Like the JIF, it now excludes some items like editorials and, it seems, book reviews. This provides a longer time frame for citations than the JIF, making it less subject to sourcing patterns in a single year, and means that journals that publish rarely cited items such as book reviews are not as disadvantaged. It is based on the Scopus database, which is quite different from the Clarivate database used for the JIF (Scopus, 2020).

  • The Eigenfactor score measures the number of citations in 1 year for articles published within 5 years, adjusted for journal type, with citations from highly cited journals carrying more weight (Elsevier, 2020). Journal self-citations are removed, and the project claims to adjust for the differing citation rates of different fields by calibrating for reference list length (Eigenfactor, n.d.). It is thus a kind of prestige ranking (Perera & Wijewickrema, 2018; Taylor & Francis, 2020).

  • The H-Index can be used to rank journals and individuals. For a journal, the H-index is the largest number P such that the journal has at least P articles cited at least P times each (a small worked sketch appears after this list). It can be based on Web of Science or Google Scholar. Because Google Scholar includes policy reports and the like, the scores from the two sources are often different (Google Scholar, n.d.).

  • Article downloads for a journal are a measure of reaching readers. This is simply a count of downloads, typically from the website of the publisher. This is again an imperfect measure because many faculty members place the accepted manuscript online in their university library for open access. PubMed Central does something similar in the health field. If most people access articles through these alternative sources, they are not measured as downloads on the publisher’s website. Further, pirating websites provide illegal copies entirely outside these systems (Forsyth, 2019). Conversely, downloads may be inflated when a large group, such as a large freshman class, is expected to download an article.

  • Although not provided at the journal level, the Altmetric Attention Score is the “weighted count of all of the online attention Altmetric has found for an individual research output. This includes mentions in public policy documents and references in Wikipedia, the mainstream news, social networks, blogs and more” (Altmetric, 2020). From the perspective of JAPA’s mission to influence planning practice, these results capture important activity that the journal citation metrics do not. Many JAPA articles do quite well in terms of Altmetrics, though it can be a matter of being the right topic at the right time.
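As promised above, here is a minimal sketch of the H-index calculation in Python. The citation counts are entirely hypothetical; the function simply applies the definition given in the list: find the largest P such that at least P items have at least P citations each.

    def h_index(citation_counts):
        # Sort citation counts from highest to lowest, then walk down the list
        # until an item's citations fall below its rank.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical journal with nine items and these citation counts:
    # four items have at least four citations each, so the H-index is 4.
    print(h_index([10, 8, 5, 4, 3, 2, 2, 1, 0]))  # prints 4

The same logic applies whether the counts come from Web of Science or Google Scholar; only the underlying citation data differ, which is why the two sources can give a journal different scores.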

Such quantification is not without critics, including me. One of the better-known critiques is the Leiden Manifesto (Hicks et al., 2015). Hicks and colleagues criticize the obsession with indicators used as an end in themselves rather than as a support for real peer review. They start their manifesto with a strong critique of current practice:

Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation. (Hicks et al., 2015, p. 429)

Manipulating the Metrics

One of the most important aspects of this critique is that such scores represent factors other than quality and thus are at best unfair and at worst open to manipulation. To provide a provocative example, below I explain how I could design a journal to do well in bibliometrics. This is not the approach I am taking at JAPA; the example is meant to highlight the problems of excessive quantification.

First, the journal would be careful about what it publishes. It would publish many review essays: They are cited more often because they can set the stage for a variety of articles. A planning journal would publish in planning theory, transportation, and housing because they are the specialties with the most citations; education, diversity, or land use papers are cited less (Stevens et al., 2019). That kind of difference is found in many fields, with journals in mathematics having JIFs substantially lower than those in the neurosciences (Pendlebury, 2009). The journal would not publish articles likely to be cited mainly in books because those citations are often not counted. It would not publish articles likely to be read and used more than they are cited: Such practically important research might help Altmetrics, but not much more.

Second, it would have a structure and workflow to increase citations in the “window” in which they count. There are many ways to do this, so I highlight only a few of the more common ones. For measures like the 2-year JIF, it works well to have a 1- to 2-year queue of articles published online before their actual publication date. This is because items tend to be cited most after 3 to 4 years, once they are known but while they are still fresh. An online queue means that the paper is available early but only “published” in its second year, meaning the peak citation period will more likely fall in the JIF window. Another strategy uses the mix of article types to boost citations. This involves changing the number of items that can be cited without changing the number of “research” articles. Some high-impact-factor journals have a very large amount of content in document types such as letters to the editor or items like guidelines for practice. These other article types may be cited, but those citations are essentially a bonus because most formulas count only research articles in the denominator (David, 2016). This approach plays with the formula. To increase downloads, a journal would have more open access articles and channel all downloads to the publisher’s site.
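A small arithmetic illustration, using entirely hypothetical numbers, shows how the denominator trick works. Suppose a journal’s 100 countable research articles in the 2-year window attract 300 citations, and it also publishes letters and guidelines that attract another 100 citations. Those extra citations enter the numerator, but the items themselves stay out of the denominator:

\[
\frac{300}{100} = 3.0 \qquad\text{versus}\qquad \frac{300 + 100}{100} = 4.0
\]

The journal’s impact factor rises by a full point without a single additional research article.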

Finally, such a journal would increase citations and social media presence via author self-promotion. A simple approach would be to encourage articles by authors known to undertake a lot of self-citation or who are involved in citation cartels or citation stacking. Citation cartels and stacking involve groups of people citing the work of others in the group excessively, though what counts as excessive is of course a matter of opinion. For Altmetrics, one could favor authors likely to publicize their materials. JAPA blogs about every article on the APA website, and we post about each article at least twice on Twitter and Facebook. This is a way to reach the practitioner audience. However, some articles are promoted much more, and a journal wanting to do well with Altmetrics would focus mainly on those authors. This would be aided by publishing articles likely to be controversial among the general public. It would also be possible to publish articles that are cited in a lot of policy reports, which also count in Altmetrics, but my impression is that even when articles are used in reports, they are often not fully cited.

A Path Forward for Measures

The recent news of a substantial jump in JAPA’s JIF has been quite welcome. It broadly indicates that JAPA articles are reaching a wide audience of readers and authors. The generally high Altmetric Attention Scores of many JAPA articles show that they are of interest to a wider group of professionals and others.

Overall, however, although the various metrics and measures have some use, they present only part of the picture. At worst, they add a veneer of objectivity to what should be a more qualitative assessment. People can now measure things that a few years ago were not quantifiable, at least not at scale. However, being able to measure something does not mean it is important to measure. This is a key insight not only for assessing journal quality but for research in planning more generally. Can the work in the journal help improve the world?

ACKNOWLEDGMENT

I thank Editorial Advisory Board member Bruce Stiftel for excellent comments.
