Comment

Critical reflections on the journal peer review process

ABSTRACT

The journal article review process is one of the more troubling areas of academic life. Research indicates that journals’ practices are highly varied. This article discusses a number of these issues: when submitted articles disappear into ‘black holes’; when the review process appears endless; when different standards are being applied; and when it seems you are never going to get a worthwhile article published. It is suggested that journals need to provide more guidance for both reviewers and authors, that reviewing needs to be recognised in academic workload models, and that perhaps we need an academic Hippocratic oath.

Introduction

The journal peer review process is one of the most angst-inducing aspects of academic life, though, of course, peer review is endemic to the whole of academe: e.g., in employment, grant and promotion decisions (Tight, 2022). Yet it seems to have a particular resonance when it comes to the production of journal articles. As a researcher and author, having carefully crafted your article over several weeks or more, on your own or with one or more colleagues, you send it off to what you believe to be an appropriate journal, and wait. It may be rejected almost immediately, or you may be asked to revise or re-write it in the light of the reviewers’ comments; and then the process starts again.

The purpose of this short piece is to discuss some of the key issues that arise during this process, and, in doing so, possibly also suggest ways in which the process might be improved. I would not claim that I have all of the answers, but I do think it’s important to openly discuss these matters.

Positionality

I have to admit right from the start that I have ‘form’ in this area. I have been editing academic journals – Teaching at a Distance, Open Learning, Higher Education Quarterly, Studies in Higher Education, Assessment and Evaluation in Higher Education, Tertiary Education and Management – throughout my career. I have had articles published in dozens of journals, and have acted as a reviewer for all of those and dozens more. I have also edited dozens of books, and currently edit one book series, International Perspectives on Higher Education Research, and co-edit another, Theory and Method in Higher Education Research. So you may well have ‘suffered’ at my hands as reviewer and/or editor – if so, my apologies – just as I have suffered at the hands of others.

Existing studies

Twenty years ago, with many years’ experience of the journal publication process already under my belt, I decided to carry out a personal test of the veracity of the journal article reviewing process (Tight, 2003). I had kept copies of all of the reviews of the articles I had submitted to journals over the previous ten years, together with the decisions taken on them by the editors concerned. I went through them, assessing whether the reviews were positive or negative in tone, or a mixture of the two. While this was a subjective assessment, it was surprisingly easy to do. I then cross-tabulated the results against the editors’ verdicts, which were typically one of four decisions: accept, minor revisions, major revisions or reject.
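Purely as an illustration of the cross-tabulation step (the records below are invented, not drawn from the 2003 exercise), the tallying can be sketched in a few lines of Python:

```python
from collections import Counter

# Invented (review tone, editorial decision) pairs, using the categories
# described in the text: tones are positive/negative/mixed; decisions are
# accept / minor revisions / major revisions / reject.
records = [
    ("positive", "accept"),
    ("positive", "reject"),
    ("negative", "accept"),
    ("negative", "major revisions"),
    ("mixed", "minor revisions"),
    ("mixed", "reject"),
]

# Cross-tabulate tone against decision: each cell counts how often a given
# tone of review coincided with a given editorial verdict.
table = Counter(records)

for (tone, decision), n in sorted(table.items()):
    print(f"{tone:>8} | {decision:<15} | {n}")
```

Off-diagonal cells such as ('negative', 'accept') are exactly the kind of mismatch between reviews and verdicts that the exercise turned up.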

The pattern this exercise revealed was quite striking: the relationship between referees’ ratings and editorial decisions was far from obvious. Highly criticized articles had sometimes been accepted with little or no amendment required, while positively reviewed articles were sometimes rejected. Clearly, the opinion of one’s peers was not the only factor that mattered – other considerations were also in play. Two obvious additional factors are the editor’s own opinions and the limitations imposed by the amount of publication space available in the journal.

This topic has also been the subject of more extensive research (Pontille & Torny, 2015). Peters and Ceci (1982) went as far as resubmitting articles they had already published, with minor changes to hide their identities, to other journals: while most were rejected, there were no accusations of plagiarism (which was, of course, harder to establish 40 years ago). Many studies have focused on the experience of a specific journal or nation (e.g., Atjonen, 2018; Falkenberg & Soranno, 2018; Hewings, 2004).

Journal article peer review has also been the subject of large-scale research synthesis. Bornmann et al. (2010) undertook a meta-analysis of studies of the journal peer review process, identifying previous quantitative studies of the topic and combining their data. They identified 70 reliability coefficients from 48 studies, which together had examined the assessment of 19,443 manuscripts. They found that the inter-rater reliability was low: in other words, journal reviewers seldom agreed with each other.
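For readers unfamiliar with reliability coefficients: one common measure for two reviewers rating the same manuscripts on a categorical scale is Cohen’s kappa, which compares observed agreement with the agreement expected by chance. The sketch below uses invented ratings (the meta-analysis pooled published coefficients rather than computing them from raw ratings like this):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same items.

    kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two raters actually agreed.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal category counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    if expected == 1:  # both raters always used the same single category
        return 1.0
    return (observed - expected) / (1 - expected)

# Invented verdicts from two reviewers on six manuscripts.
a = ["accept", "minor", "major", "reject", "minor", "accept"]
b = ["minor", "minor", "reject", "reject", "major", "accept"]
print(round(cohens_kappa(a, b), 3))  # 0.333: weak agreement
```

A kappa near zero means the reviewers agree little more often than chance would predict, which is the situation the low pooled reliabilities describe.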

This might, of course, be at least partly explained by the relatively simple rating scale used by many journals; the accept/minor/major/reject scale already referred to. What constitutes major revisions to one reviewer might, for example, easily be called minor revisions by another. There are also, however, cases where one reviewer recommends that an article should be accepted without any further work and another recommends rejection. While the obvious response of seeking a third opinion (perhaps that of the editor themselves) is pragmatic, it does ignore the underlying disparity of judgement.

Peer review practices also vary significantly. Hamilton et al. (2020) report on:

a survey of 322 editors of journals in ecology, economics, medicine, physics and psychology. We found that 49% of the journals surveyed checked all manuscripts for plagiarism, that 61% allowed authors to recommend both for and against specific reviewers, and that less than 6% used a form of open peer review. Most journals did not have an official policy on altering reports from reviewers, but 91% of editors identified at least one situation in which it was appropriate for an editor to alter a report. (p. 1)

Thelwall (2023) reports on an analysis of ‘45,385 first round open reviews from published standard journal articles in 288 MDPI [an open access publisher] journals’, finding ‘substantial differences between journals and disciplines in review lengths, reviewer anonymity, review outcomes, and the use of attachments’ (p. 299). He found that:

Physical Sciences journal reviews tended to be stricter and were more likely to be anonymous. Life Sciences and Social Sciences reviews were the longest overall. Signed reviews tend to be 15% longer (perhaps to be more careful or polite) but gave similar decisions to anonymous reviews. Finally, reviews with major revision outcomes tended to be 68% longer than reviews with minor revision outcomes. (2023)

These analyses suggest that both the practices of peer review of academic journal articles and the accuracy of the results may be challenged. Some journals do publish explicit criteria, against which they ask their reviewers to make their judgements, often on a Likert-type scale of four or five options. There are questions, though, about how such criteria and instructions are interpreted, and whether they impact upon reviewers’ overall assessments.

Another response, however, is to question just how much this matters. After all, authors receiving reviews of their work – even when it is rejected by the journal in question – are hopefully receiving at least some useful advice, which they may use in revising their articles for possible publication elsewhere. There are usually many alternative journals available, with higher or lower acceptance thresholds, in which publication may be sought. We might argue that academic authors simply have to get used to the rough and tumble of the article publication process.

It is also possible, in certain circumstances, for authors to negotiate with journal editors, and even through the editor with their reviewers, over the treatment of their submission after a decision has been taken (Kumar et al., 2011). This can work to mutual benefit. Thus, in their study of selected science and engineering articles, Kumar et al. report that:

Most types of negotiations helped authors to improve presentation of their underlying concepts, quality, clarity, readability, grammar and technical contents of the article, besides offering an opportunity to rethink about several other aspects of the article that they overlooked during the preparation of manuscript. (p. 331)

It is, of course, unrealistic in any case to expect unanimity of judgement amongst academics. Some may warm to a particular line of argument, theoretical framework and/or methodology, while others will be put off by or opposed to it. The academic world, at least in research terms, is built to a large extent on competition and disagreement. To some extent, reviewers might also be said to be acting in a ‘zero-sum’ game; that is, if they recommend the rejection of an article they are reviewing, there is potentially that much more space available for their own publications.

It would be hard, however, to argue that the academic journal article review process works well, for, in addition to taking up an inordinate amount of (typically unpaid) time and effort, it causes a great deal of emotional upset among those whose efforts are being judged. It may be, of course, that the growing moves towards online, open and freely available publication, and towards researchers self-publishing their articles on their own websites, will go some way towards resolving these issues. Post-publication review, whether carried out formally or informally, also opens up the possibilities for a greater variety of opinions to be expressed, as well as for more engagement between academic authors and their readers.

Some key issues

I will now identify and discuss some of the issues with the journal peer review process that have troubled me over the years. This is a personal selection, presented in no particular order. You may have other concerns, or disagree with my assessment – you may even want to respond to this article.

Black hole journals and editors

The issue which troubles me most is what I call ‘black hole’ journals or editors; it’s black hole editors really, as they change with time and the experience with a particular journal changes with them. This is where you submit an article to a journal and you hear nothing – apart from the automatic acknowledgement produced by the system – for ages.

You can, of course, contact the editor or administrator to ask what is happening, but you will probably just get the standard reply that your article is still under review. There is also a risk here that you will irritate the editor or administrator, which may impact on their eventual decision.

There is little that you can really do to expedite matters, apart from withdrawing your article, which is not always easy to do, and sending it somewhere else. I’d certainly advise your colleagues against submitting to the journal in question, though, at least for the time being.

Some journals now detail average article turnaround times on their websites, which can give authors a little more leverage.

Endless rounds of revision

A contrasting experience is where the journal you have sent your article to is operating at least reasonably efficiently, and you receive the reviewers’ comments back and are asked to revise or re-write your article. If you’re wise, you take the reviewers’ comments seriously, make appropriate changes and/or additions to your article, and re-submit it with details of the changes you have made.

The revised article is sent out for review again – perhaps to the same reviewers, to new reviewers, or a mixture – and in due course you receive their comments, in the light of which you are asked to make further revisions to your article. The review/revise/review/revise process may stretch over several rounds and last for well over a year. As the author it can feel like you are trapped in an endless cycle.

As an editor myself, I think such a process is misguided and unproductive. One or two rounds of review and revision should be more than enough to reach a decision; academic publications are not meant to be perfect.

Always wanting a different article from the one you have written

This issue relates to how reviewers respond to the article you have submitted. At times it can seem that, while they may be interested in and engaged with what you have done and have to say, they would rather that you’d done something a bit different. They make suggestions for additional reading, data collection and analysis which would take your research in a different direction, as well as involving significant further work. They may also expect their own publications to be referenced.

This kind of response can be very frustrating and difficult to counter. Your best option may well be to be open about it with the editor, who may be understanding, or (again) to withdraw your article and send it elsewhere if they are not.

When will I ever get this published?

Sometimes when you’ve written something, you know it’s not the best thing you have ever written, but still feel that it’s worthy of publication. After all, you’ve read plenty of published articles which you did not think were of particularly high quality.

But your article gets rejected by the first journal you send it to. You make some revisions in the light of the comments received and send it to another journal, but it gets rejected again. The rounds of revisions and rejections continue, and you begin to despair about whether you will ever get the article published.

This issue is a bit like the ‘endless rounds of revision’ discussed earlier, except that it involves multiple journals rather than just one. The only response I can suggest here is a mixture of faith and persistence. See if you can beat your all-time record for the number of journals you’ve sent an article to. And try to keep your article up-to-date while the process is ongoing.

Ways forward

In addition to being more selective about which journals (or editors) we target with our articles, and being more pro-active in withdrawing articles that aren’t getting anywhere, what other ways can we think of to improve the situation outlined?

One of the key reservations expressed about those who review journal articles is their lack of training in the process. Many journals do, of course, provide guidance to their reviewers, and editors may engage with individual reviewers on particular issues. But editors also need to be careful not to expect too much: reviewing journal articles is usually done freely in the reviewer’s own time.

General guidance on reviewing is also available in the research literature. For example, Chong and Lin (2023) argue that ‘Authors appreciate peer-review feedback that is precise and detailed, providing specific and well-justified suggestions that authors can act on … Good peer-review feedback does not focus on every single problem in a manuscript, but points out major concerns’ (p. 9). Yet, while this is good advice, it is not easy to implement. Garcia-Costa et al. (2022) examined ‘a sample of 1.3 million reports submitted to 740 Elsevier journals in 2018–2020’ (p. 1). They found that the developmental standards of peer review varied greatly by discipline and by the age and sex of the reviewers. They concluded that ‘increasing the standards of peer review at journals requires effort to assess interventions and measure practices with context-specific and multi-dimensional frameworks’ (2022).

Clearly, there is a great deal more to be done, and the journal article review process needs to be taken much more seriously. So what do we do, as publishers, editors, reviewers and authors? I will make three suggestions in conclusion, without pretending that these provide a complete answer:

  • journals and publishers do need to provide basic guidance for their reviewers, and monitor how well they follow this

  • while accepting that most journals and their owners cannot afford to pay reviewers, the journal article review process (along with many other unpaid or poorly paid tasks such as external examining and helping to run learned societies) needs to be recognised as part of academics’ workload

  • perhaps we need some kind of general academic Hippocratic Oath, whereby we undertake to be supportive, as well as critical, of our fellow academics and students.

You may well have other thoughts and suggestions, which I would be interested to hear.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Atjonen, P. (2018). Ethics in peer review of academic journal articles as perceived by authors in the educational sciences. Journal of Academic Ethics, 16(4), 359–376. https://doi.org/10.1007/s10805-018-9308-3
  • Bornmann, L., Mutz, R., & Daniel, H.-D. (2010). A reliability-generalization study of journal peer reviews: A multilevel meta-analysis of inter-rater reliability and its determinants. PLoS One, 5(12), e14331. https://doi.org/10.1371/journal.pone.0014331
  • Chong, S., & Lin, T. (2023). Feedback practices in journal peer-review: A systematic literature review. Assessment and Evaluation in Higher Education, 49(1), 1–12. https://doi.org/10.1080/02602938.2022.2164757
  • Falkenberg, L., & Soranno, P. (2018). Reviewing reviews: An evaluation of peer reviews of journal article submissions. Limnology and Oceanography Bulletin, 27(1), 1–5. https://doi.org/10.1002/lob.10217
  • Garcia-Costa, D., Squazzoni, F., Mehmani, B., & Grimaldo, F. (2022). Measuring the developmental function of peer review: A multi-dimensional, cross-disciplinary analysis of peer review reports from 740 academic journals. PeerJ, 10, e13539. https://doi.org/10.7717/peerj.13539
  • Hamilton, D., Fraser, H., Hoekstra, R., & Fidler, F. (2020). Journal policies and editors’ opinions on peer review. eLife, 9, e62529. https://doi.org/10.7554/eLife.62529
  • Hewings, M. (2004). An ‘Important Contribution’ or ‘Tiresome Reading’? A study of evaluation in peer reviews of journal article submissions. Journal of Applied Linguistics, 1(3), 247–274. https://doi.org/10.1558/japl.2004.1.3.247
  • Kumar, P., Rafiq, I., & Imam, B. (2011). Negotiation on the assessment of research articles with academic reviewers: Application of peer-review approach of teaching. Higher Education, 62(3), 315–332. https://doi.org/10.1007/s10734-010-9390-y
  • Peters, D., & Ceci, S. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187–195. https://doi.org/10.1017/S0140525X00011183
  • Pontille, D., & Torny, D. (2015). From manuscript evaluation to article valuation: The changing technologies of journal peer review. Human Studies, 38(1), 57–79. https://doi.org/10.1007/s10746-014-9335-z
  • Thelwall, M. (2023). Journal and disciplinary variations in academic open peer review anonymity, outcomes and length. Journal of Librarianship and Information Science, 55(2), 299–312. https://doi.org/10.1177/09610006221079345
  • Tight, M. (2003). Reviewing the reviewers. Quality in Higher Education, 9(3), 295–303. https://doi.org/10.1080/1353832032000151157
  • Tight, M. (2022). Is peer review fit for purpose? In E. Forsberg, L. Geschwind, S. Levander, & W. Wermke (Eds.), Peer review in an era of academic evaluative culture (pp. 223–241). Palgrave Macmillan.
