Editorial

If a (scientific) paper has to be judged by a metric, then it should be by the citations to it and not to the journal. (Fersht, 2009, p. 6883)

On Thomson Reuters' Web of Science it is stated that the Journal Impact Factor (IF) is

a measure of the frequency with which the ‘average article’ in a journal has been cited in a particular year or period … . The impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the previous two years. (http://wokinfo.com/essays/impact-factor/)
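
Written out as a formula (a standard rendering of the two-year calculation, with the denominator, the count of citable items, made explicit; the year labels are illustrative):

\[
\text{IF}_{2014} = \frac{\text{citations in 2014 to items published in 2012--2013}}{\text{number of citable items published in 2012--2013}}
\]

So, for example, a journal whose 100 citable items from 2012–2013 attracted 290 citations in 2014 would have a 2014 IF of 2.90.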

The IF provides quantitative evidence for editors and publishers for positioning their journals in relation to the competition — especially others in the same subject category.

Many researchers have concerns over the notion of using metrics as a means to reflect the quality of a paper published in an academic journal, yet we are also fascinated by the Journal IF and are being ‘sucked’ into believing that it is the key means by which the quality of a journal can be assessed when considering the submission of a paper. This editorial raises some of these concerns, addressing two particular points with respect to the IF, and a series of supplementary points that illustrate the means by which the values can be manipulated to raise the IF. These comments are not made from a negative perspective, as Transport Reviews is ‘basking’ in the glory of an IF that has risen to 2.903 (2014: 4th in the Transport listing of 29 journals), the highest in its 35-year history and an increase of over 70% on the 2013 figure. The issues raised are more fundamental.

The first point is that there seems to be a strange logic at work, namely that authors are encouraged to submit papers to high IF journals in the expectation that, if their paper is of a sufficiently high quality, it will be accepted. This is based on the notion of self-perpetuating excellence, rather than the notion that a high-quality paper will be published and cited in its own right. The rationale originally seems to have been that library budgets were being squeezed, and that those journals with a lower IF were the ones that could be ‘culled’. But this thinking is rather dated. The ‘false’ logic here seems to be that more people are likely to read papers published in high IF journals and hence are more likely to cite those papers (and journals).

When advising which journals researchers (e.g. Ph.D. students) should submit their papers to, it is the job of the supervisor to help the researcher identify the most appropriate journal after a discussion and assessment of the quality of the paper being submitted and the suitability of the journal. One of the skills needed by young researchers is to have realistic expectations about the successful submission of a paper, and in many cases this might mean advising against those journals with a high IF. The assumption here is that it is harder to get papers accepted in high IF journals, but the evidence as to whether such journals actually have higher rejection rates is not clear. If this were the case, why should you advise someone to submit a paper that might have a higher chance of being rejected? Should not the advice be to submit the paper to a lower IF journal and get it accepted, and then to see whether that paper gets heavily cited in its own right? The key throughout must be the quality of the paper being submitted, not the assumed link between high impact journals and high-quality papers.

This strategy might be seen as a risk-averse one, namely that the researcher can make a decision between the prestige of the journal and the possibility of rejection. A counter-argument might be that the researcher needs to go through the peer review process, together with the requirement to revise the paper, respond to comments from referees, resubmit the paper and even face rejection. This is all part of learning about the academic publication process. Here, the researcher consciously becomes a risk taker: the decision about which journal to submit a paper to is seen to be implicitly uncertain, but as the potential returns are high, the risks are worth taking provided that one can accept the possibility of eventual rejection.

As there are now so many opportunities for papers from a range of different sources to be accessed as part of academic research, the importance of the established high IF journals is likely to reduce. The logic here is that there must be a weakening relationship between Journal IF and paper citations, and that the link between the two is primarily historical (Lozano, Larivière, & Gingras, 2012). Papers can now be accessed, read and cited solely on their own merits, and consequently the links with journals will become much looser. If this is the case, then the IF becomes an ineffective means to evaluate the quality of journals. Individual papers can be accessed through a variety of online databases and other sources, and an individual paper can then be assessed both in terms of its number of citations and its position relative to other papers in the same field. This assessment can be made irrespective of the IF of the journal in which it is published.

Key papers will continue to appear (and increasingly so) in a more diverse range of journals and in other forms of output (e.g. online). Over time this means that more references will be cited in each published paper, thus increasing the overall scores for the IF. This may be a consequence of more papers being published and read by researchers, but again there is a dilution effect taking place (Button, 2015). The value of any given Journal IF is being eroded over time, as impact inflation takes place. The question here is whether the Journal IF is a useful device to help researchers locate the best papers for review as part of the paper writing process, or whether search engines provide a better set of filters based on individual paper citations. Algorithms should improve, but researchers may still prefer to use the Journal IF as one means to identify quality.
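
The arithmetic behind this inflation can be sketched very simply (all figures below are hypothetical, chosen only to illustrate the mechanism):

    # Sketch of 'impact inflation': if reference lists lengthen while the
    # number of papers stays fixed, the pool of citations (and hence average
    # IFs across journals) rises with no change in underlying paper quality.
    # All figures are hypothetical.

    papers_per_year = 1000          # hypothetical number of papers in a field
    refs_then, refs_now = 30, 45    # hypothetical references per paper, then vs. now

    citations_then = papers_per_year * refs_then
    citations_now = papers_per_year * refs_now

    inflation = 100 * (citations_now - citations_then) / citations_then
    print(f"Citation pool grows by {inflation:.0f}% from longer reference lists alone")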

The second issue relates to the period over which the Journal IF is assessed. The current time horizon is a two-year period, yet there is little justification that this is the most appropriate time span. In the fast-moving medical, physical and biological sciences, where immediacy is crucial, this might be the most suitable length of time. There are few important historical citations in the sciences, but in the social sciences it often takes a considerable period of time for a paper to have an impact. The two-year review period is too short to measure the true impact of journal papers, monographs and other types of output. A five-year Journal IF has been available since 2007, but here again there is little justification as to why this is a more appropriate time horizon. It would be very easy to carry out a citation profile analysis of key papers published in transport, and to estimate the most suitable time period over which to assess impact. Over the last eight years, the five-year IF for Transport Reviews has been consistently higher than the two-year IF (the unweighted average figure is 23.8% higher), indicating that a longer time span gives a fuller picture of the citation profile. The data are now available to carry this out over a time horizon of at least ten years, perhaps even longer. This possibility should be assessed, as not all subject areas have the same impact period, even if there may be a correlation between the two- and five-year IFs.
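
As a minimal sketch of how such a comparison can be made (the paired values below are hypothetical placeholders, not the journal's actual IF history):

    # Sketch: compare a journal's two-year and five-year Impact Factors.
    # The paired values are hypothetical placeholders, not actual
    # Transport Reviews figures.

    two_year = [1.2, 1.4, 1.3, 1.5, 1.6, 1.7, 1.7, 2.9]   # hypothetical two-year IFs
    five_year = [1.5, 1.7, 1.6, 1.9, 2.0, 2.1, 2.2, 3.4]  # hypothetical five-year IFs

    # Percentage by which the five-year IF exceeds the two-year IF, year by year.
    pct_diffs = [100 * (f - t) / t for t, f in zip(two_year, five_year)]

    # Unweighted average of the yearly percentage differences.
    avg_diff = sum(pct_diffs) / len(pct_diffs)
    print(f"Five-year IF exceeds the two-year IF by {avg_diff:.1f}% on average")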

In addition to these two core concerns, there are many other factors that have combined to reduce the value of the Journal IF as it now stands. As the perceived importance of the IF grows, so do the concerns over its use and objectivity. These factors are well documented (e.g. Balaban, 2012):

  1. The selection of papers to be included in the numerator of the Journal IF (citations) but not in the denominator (papers), editorials being an example here;

  2. The selection of journals to be included in the process, and the criteria for their inclusion;

  3. The highly skewed distribution of citations and the use of the average as the key value. In Transport Reviews, for example (over a 34-year period), the top 50 papers (out of over 1000) account for about 40% of the citations, and the top 20 papers for about 25%. These figures are probably replicated in many other journals. Using averages gives only one perspective on the nature of citations, and measures that account for the highly skewed distribution may provide a very different picture (see the sketch after this list);

  4. The issue of coercive citation is becoming more prevalent. This was mentioned in Button's (2015) recent editorial, and more generally in terms of strategic journal self-citation by Chorus (2015) and by others (e.g. Wilhite & Fong, 2012);

  5. Increasingly, papers are published online before they appear in the conventional journal format, meaning that the two-year time scale is effectively extended and there is a longer period for them to be cited. How important is this online ‘pre-publication’ to a Journal's IF?

  6. Publication in January gives more time for citation than publication in December, and this disadvantages those papers published later in the year in terms of the Journal's IF;

  7. Free access and other means of raising the profile of individual papers give publishers and editors opportunities to increase the numbers of citations.
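
To illustrate point 3, the following sketch draws a synthetic, heavy-tailed citation distribution (none of these counts are real data) and shows how the mean diverges from the median and how a few papers dominate the total:

    import random

    # Synthetic citation counts for ~1000 papers, drawn from a heavy-tailed
    # (Pareto) distribution to mimic the skew described in point 3. Not real data.
    random.seed(1)
    citations = sorted((int(random.paretovariate(1.3)) for _ in range(1000)),
                       reverse=True)

    total = sum(citations)
    mean = total / len(citations)
    median = citations[len(citations) // 2]  # list is sorted, so this is the median

    top20_share = sum(citations[:20]) / total
    top50_share = sum(citations[:50]) / total

    print(f"mean = {mean:.1f}, median = {median}")
    print(f"top 20 papers: {top20_share:.0%} of all citations")
    print(f"top 50 papers: {top50_share:.0%} of all citations")

Because the distribution is so skewed, the mean sits well above the median, so an average-based measure such as the IF is driven by a handful of highly cited papers.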

In conclusion, there needs to be a debate about the future of the Journal IF as it stands. It is not a proxy for paper quality, and it seems that the relationship between paper quality and Journal IF (if it ever really existed) is weakening. The IF itself is not a good measure of journal quality. A system needs to be designed that is transparent and clear, that commands the respect (and hopefully the support) of the academic community, and that comes with an undertaking from journals and editors that it will be used and not abused. Otherwise, why have it at all?

References

  • Balaban, A. T. (2012). Positive and negative aspects of citation indices and journal impact factors. Scientometrics, 92, 241–247. doi:10.1007/s11192-012-0637-5
  • Button, K. (2015). Publishing transport research: Are we learning much of use? Transport Reviews, 35(5), 555–558. doi:10.1080/01441647.2015.1070514
  • Chorus, C. G. (2015). The practice of strategic journal self-citation: It exists, and should stop. A note from the editor-in-chief. European Journal of Transport and Infrastructure Research, 15(3), 274–281.
  • Fersht, A. (2009). The most influential factors: Impact factor and eigenfactor. Proceedings of the National Academy of Sciences of the United States of America, 106(17), 6883–6884. doi:10.1073/pnas.0903307106
  • Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology, 63(11), 2140–2145. doi:10.1002/asi.22731
  • Wilhite, A. W., & Fong, E. A. (2012). Coercive citation in academic publishing. Science, 335, 542–543. doi:10.1126/science.1212540
