Running a journal is an exciting but tedious task. Exciting, because it gives you the opportunity to interact and discuss science with many people (authors, reviewers, editors), and to read many manuscripts addressing biological issues you would probably never have heard of had you stuck to your own (very focused) research field. Tedious, because you inevitably have to deal with many instances of misconduct that drain your energy away from purely scientific issues (or is it just about “human” science issues?) …

There are indeed many potential deviations in scientific behavior, whether intentional or unintentional (Smith, J R Soc Med 2006; Babalola et al., Clin Dermatol 2012). While fabrication or falsification of data clearly belongs to the first (and most severe) category, more or less (un)intentional misconduct regularly spoils the system and forces editors to spend much time facing ethical rather than scientific challenges: checking authors' backgrounds, identifying reviewers with appropriate expertise and no conflict of interest, dealing with compliant reviews, etc.

Let me just quickly list a few cases that should be rare but are unfortunately all too common. They concern either authors' or referees' behavior and mostly result from flippancy rather than deliberate misconduct:

  1. the unreadable paper. It usually has two origins: extremely poor English (this, you can quickly notice and reject) or very intricate writing, which is a more subtle issue. This is the kind of paper you just don't understand at all at first sight; then you wonder if it might be because of your limited scientific knowledge; then you ask a colleague in the field; then after much time and effort you realize that the paper is indeed unreadable … 

  2. the lazy (“no change”) revision. You ask the author(s) for a major revision with a detailed list of changes that should improve their manuscript … and the revised version comes back three days later with two words in red and one additional reference (topped by a very self-satisfied cover letter); this you can easily handle (reject), but you clearly lose time again on your end.

  3. the cheating (“all change”) revision. You ask for some revisions, the authors decide the referees were not severe enough and change 50% of the paper … ending with what you must indeed consider a completely new paper; you start from zero.

  4. the compliant reviewer suggestion. This is unfortunately a very common case: you ask the authors to suggest some relevant referees, and they provide a list of people from the same lab, the same unit, former co-authors, friends, family, students or, in the worst case, fake reviewers (Ferguson et al., Nature 2014). The recurrence of this misconduct means that editors spend a great deal of time verifying referees' origins, or even abandon the option for authors to suggest reviewers (which is another example of collateral damage caused by some authors' misconduct, since in principle, who should know better than the authors themselves who the most relevant reviewers are?).

  5. the “yes but no at the end” reviewer. When you think about “bad” reviewers, you usually think about reviewers stealing data and/or holding up a paper (with a succession of very bad reports or requests for fancy control experiments), which is clearly the worst case (Nature 2001). Much less dramatic, but unfortunately quite common, are reviewers who accept the request to review a paper but never send their report (with no bad intention, just a kind of flippancy), leading the editor to start the reviewing process over and resulting in a substantial delay in publication.

  6. the “all good” report. This can be connected to cases 4 and 5. You select some (apparently) good reviewers, you wait for their report, and you get a nice “this is a very good paper that should be accepted” or some similar formulation. Whether it comes from a friendly (case 4) or a glib (case 5) referee, the result is the same: you start from zero again. Let's be clear: some papers can indeed be good from the start, but almost all of them could probably still be improved in some way.

This list could of course be extended with dozens of other “standard” cases, potentially much worse than these; some of them will be more thoroughly discussed in a forthcoming editorial. But let's say it is sufficient for now to illustrate the bad side of the editors' job, which indeed only reflects the dark side of scientists … and humans.

After all, running a journal is a human task in a human world. That is probably what makes it both exciting and tedious … 
