Perspective

Two dogmas of peer-reviewism

Pages S129-S133 | Received 24 Aug 2020, Accepted 20 Nov 2020, Published online: 22 Dec 2020

How should we organize our research activities, and to what extent do we want academic culture to be shaped by funding issues? These are not trivial questions. We all want to support excellent research and good working conditions; that is beyond dispute. But are competitive grants and peer review always the best solutions for curiosity-driven projects and innovation in science? For many researchers, writing proposals is a time-wasting and thankless practice. Today, we need new, thought-provoking ideas regarding the sensitive question of how to allocate research money, because the current system of peer review is haunted by serious problems. I have highlighted the possibilities of a funding lottery in a previous article (Roumbanis Citation2019) and will, therefore, not repeat all of the arguments in favor of a lottery here. In the following commentary, I will instead briefly confront two common dogmas among those who defend grant peer review. My use of the word dogma refers to the fact that many researchers are critical of the current funding system but nevertheless believe peer review to be the most plausible method (e.g. Hayes and Hardcastle Citation2019). The first dogma that I will discuss is that grant peer review is more legitimate than all other conceivable methods, because it relies on meritocracy through expert judgments. The second dogma is that grant writing and the peer review process are valuable in themselves, because they foster excellent research.

Dogma 1: peer review is the most legitimate method

Grant peer review is generally considered to be a meritocratic and value-creating system. But what exactly do we mean by meritocratic in the complex context of science today? Do we simply mean that the best and most talented researchers should be given the opportunity to continue their important work? If this is the primary goal – which indeed seems plausible – then there might be other, equally or possibly more legitimate ways to facilitate this in practice. The crux of the matter is how we can identify the best researchers and the best projects in advance. Recognition always depends on many different criteria, and the long-lasting value of scientific contributions is much easier to evaluate in retrospect. Most researchers are making contributions to what Kuhn famously called ‘normal science,’ but only a few are making groundbreaking discoveries. So, at a fundamental epistemological level, deciding between project x, y, or z may actually be rather unimportant. Also, as Merton (Citation1973) has remarked, timing is of utmost importance for the academic reward system; if, for example, many promising young scholars are not recognized and rewarded when they really need it, that might be a sign of dysfunction.

It is well known that peer review is far from perfect. In fact, many of the problems related to the peer review process have been critically examined and discussed innumerable times. Although both public and private funding organizations are trying to improve the quality of their assessment procedures, some of the dilemmas inherent in peer review are almost impossible to neutralize, such as expert bias, chance, and disagreement. But regardless of whether we choose to view these aspects as a problem of reliability or as a consequence of the cognitive diversity found in most fields of research, we should acknowledge the practical effects that resource distribution has on individual researchers and scientific communities over time. When Cole, Cole, and Simon (Citation1981) conducted their experimental study of the NSF peer review process, they found remarkably strong elements of chance determining the fate of particular grant applications. This result, namely that ‘the luck of the reviewer draw’ has such a great impact on funding decisions, made them question the very legitimacy of the peer review system. Since then, several other studies have confirmed this problem and raised the same kind of concerns (Boudreau et al. Citation2016; Gross and Bergstrom Citation2019; Pier et al. Citation2018).

Another way to allocate resources could, for example, be based on researchers’ reputations and the impact of their previous work. Einstein, Curie, Keynes, and many other great scholars of the past never had to write a single proposal of the lengthy type most funding organizations require today. Block funding enabled them to conduct their research. Their job was to be creative and to come up with exciting new ideas at their own pace. In other words: the people most worthy of recognition did not have to worry about funding or spend valuable time trying to pitch projects (just try to imagine Wittgenstein writing a proposal!). But things have changed dramatically since then, and grant peer review is now taken for granted. Today, the evaluation of research proposals and impact metrics functions as a new gate-keeping mechanism that generates new Matthew effects. It was not necessarily better a hundred years ago, when scientific careers were often affected by nepotism and favoritism. But there are other problems with the current system that must be handled, and it is not evident that increased block funding, combined with enhanced transparency concerning hiring processes, is less legitimate than grant peer review.

As I have argued previously (Roumbanis Citation2019), to use a well-designed lottery (together with increased block funding) and to fully embrace chance could also be a legitimate method; it would produce a cleaner randomness than the one generated by peer review, and complete impartiality in the selection process. The idea of using a lottery could partly be seen as a critical response to the excessive audit culture and the destructive effects of competitive funding prevalent in academia. In fact, recognizing the role that luck plays in individual outcomes is an important potential counter to dogmatic meritocratic beliefs (Sauder Citation2020). Also, the increasing dominance of project funding is, according to Franssen et al. (Citation2018), problematic in light of epistemic innovation. Should we just go on with our habits of writing and reviewing proposals? Peer review today is much less review by actual peers, because of hyper-specialization and the unsustainable growth of academic science; it is hard to find appropriate expert reviewers for all applications. The members of a review panel cannot always judge and compare the content of different applications in a consistent manner. We can therefore never be sure that we are judged by our true peers. Are our proposals really assessed by the foremost experts in our respective fields, who can appreciate our ideas with the right kind of intellectual distance? And how are they going to reach consensus – what happens if they disagree? Travis and Collins brought attention to this when they described peer review as ‘a blackball system whereby one poor grade can damn a proposal’ (Citation1991, 335).

Still, the majority of researchers believe that peer review is the most legitimate method for allocating opportunities within academic communities. This belief seems to be so strongly internalized that it bears resemblance to a dogma. Dogmas make people less disposed to see other possibilities or think in new directions. The crucial question is: In what way is peer review legitimate for the 80–90 percent of rejected applicants who might have produced excellent research had they been given the opportunity? This is a tricky question. If we have, say, ten really good proposals, which one or two should we choose? Many reviewers are highly skilled and might identify promising new research ideas, no doubt, but the problem is that they often have to compromise, since not all highly qualified proposals can be funded. The destructive competition for funding has contributed to a situation in which reviewers are under pressure to weed out many high-quality applications. Does anyone really benefit from that kind of practice? Peer review functions as a quality control, but it also represents a clear obstacle for many younger researchers, and for those conducting science outside of the mainstream with unorthodox and risky projects. How can we be so sure that an excellent proposal will lead to much greater impact than a mediocre one, given that creativity and hard work can have rather unpredictable consequences? What we are really witnessing today is, as I see it, a crisis of legitimacy when it comes to competitive funding and peer review.

Dogma 2: writing research applications improves research

Doing good and responsible research must be rewarded, but writing high-quality applications is not exactly the same thing. Nevertheless, ‘grantsmanship’ has gained its own status among researchers; successfully acquiring external funding is part of today’s recognition and reward structure (Fang and Casadevall Citation2016). But do we really benefit intellectually from writing grants – is it a necessary requirement to motivate scientific imagination and the exploration of new ideas? Engaging critically and constructively (‘organized skepticism’) takes place everywhere in academia: during research seminars, in department corridors, at conferences, and in journal peer review. Allocating money using a lottery will not necessarily reduce this organized skepticism. Instead, we will get more time to do real science. Today, many senior professors complain that they do not have much time for their own research, because they feel continuous pressure to bring in new grants. And some junior researchers receive short-term contracts to write proposals that often do not result in new funding. It would have been far preferable for them to do real research in the meantime (Sloman Citation2014; Times Higher Education Citation2019). An argument in favor of the current system suggests that grant writing gives researchers the opportunity to pinpoint their ideas and clarify what they want to do. And even if their proposals are rejected, they might still get fruitful feedback from the reviewers, which might help them sharpen their arguments.

However, many applicants do not get any substantial feedback on their rejected proposals. There is no time for that, because there are too many applications. In cases where feedback is provided, its quality varies. In fact, feedback from reviewers is often disappointing, consisting of standardized comments or revealing a lack of understanding, which adds to the frustration of those whose proposals are rejected. Thus, the argument that writing an application will itself contribute to scientific progress is limited to the comparatively few cases where high-quality feedback is provided. This kind of reasoning is more a way to legitimize the whole ‘funding game’ for oneself and within the academic community. The way academics talk about grant writing and funding issues actually contains many subtle power dimensions that reproduce the system. One cannot really compare the time used for grant writing with the time used for doing new experiments or writing a beautiful theoretical essay. And this is far from a forced dichotomy, because it is based on the lived experiences of thousands of researchers all over the world. Writing proposals is all about trying to pitch a project, which hardly fits well with scientific boldness and innovation; instead it promotes exaggerated estimations of future impact, as well as opportunistic and shortsighted thinking. In fact, the practice of writing proposals trains researchers to think in well-defined grooves, because that is what gets them grants, tenure, and prestige in their institutions – and yet the exploration of unknown territory always comes with great uncertainty (Sloman Citation2014; see also Gross and Bergstrom Citation2019).

But the question remains: Why are so many researchers instinctively against the lottery idea, while admitting at the same time that there is luck involved in the peer review process? Lotteries have not yet been extensively tested and compared with peer review. How can they be so sure that peer review is always better? For example, it seems self-defeating when Reinhart and Schendzielorz (Citation2020), in their critique of the lottery idea, write polemically that ‘it is much easier to organize a lottery than finding highly qualified reviewers, get them to read the proposals, get them to write substantial reports’. But is that not a serious problem? Reviewers lack time and stamina. Some reviewers are sloppy, and some lack the relevant expertise to properly evaluate the proposals sent to them. Other reviewers are meticulous and read applications with an open mind. This makes peer review itself no better than a real lottery, but with much bigger transaction costs. If we aim to foster high-quality research, we should reduce unnecessary burdens and reform the current system, because it is too arbitrary and biased, time-consuming, expensive, and highly de-motivating (for both junior and senior scholars). A well-designed lottery could make a positive difference – either as a large-scale alternative to peer review or for certain special calls. Finally, if the lottery proved to be a weaker alternative than peer review in longitudinal comparisons, then at least the ancient practice of drawing lots could be used as a problem-solving technique, especially in negotiation situations where experts find it difficult to choose among several really strong applications that deserve to be funded. In such situations, chance may be the fairest and most rational way to make funding decisions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Lambros Roumbanis is an associate professor in sociology and works at the Stockholm Centre for Organizational Research (SCORE) at Stockholm University. He received his PhD in 2010 with a dissertation on sociological theory (‘Kierkegaard and the blind spot of sociology’). Since 2012 he has developed his interest in the sociology of science, with a special focus on peer review, academic judgment, and research funding. In 2020 he started a new research project on organizations that use AI technology and algorithmic decision-making in recruitment processes.

References

  • Boudreau, Kevin J., Eva C. Guinan, Karim R. Lakhani, and Christoph Riedl. 2016. “Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science.” Management Science 62 (10): 2765–2783.
  • Cole, Stephen, Jonathan Cole, and Gary A. Simon. 1981. “Chance and Consensus in Peer Review.” Science 214: 881–886.
  • Fang, Ferric C., and Arturo Casadevall. 2016. “Research Funding: The Case for a Modified Lottery.” mBio 7 (2): 1–7.
  • Franssen, Thomas, Wout Scholten, Laurens K. Hessels, and Sarah de Rijcke. 2018. “The Drawbacks of Project Funding for Epistemic Innovation: Comparing Institutional Affordances and Constraints of Different Types of Research Funding.” Minerva 56 (1): 11–33.
  • Gross, Kevin, and Carl T. Bergstrom. 2019. “Contest Models Highlight Inherent Inefficiencies of Scientific Funding Competitions.” PLoS Biology 17 (1): e3000065.
  • Hayes, Matthew, and James Hardcastle. 2019. Grant Review in Focus. London: Publons.
  • Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.
  • Pier, Elizabeth L., Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford, and M. Molly Carnes. 2018. “Low Agreement among Reviewers Evaluating the Same NIH Grant Applications.” Proceedings of the National Academy of Sciences of the United States of America 115 (12): 2952–2957.
  • Reinhart, Martin, and Cornelia Schendzielorz. 2020. “The Lottery in Babylon – On the Role of Chance in Scientific Success.” Journal of Responsible Innovation. doi:10.1080/23299460.2020.1806429.
  • Roumbanis, Lambros. 2019. “Peer Review or Lottery? A Critical Analysis of Two Different Forms of Decision-Making Mechanisms for Allocation of Research Grants.” Science, Technology, & Human Values 44 (6): 994–1019.
  • Sauder, Michael. 2020. “A Sociology of Luck.” Sociological Theory 38 (3): 193–216.
  • Sloman, Aaron. 2014. “How to Select Research Proposals Less Wastefully: Use a Sensibly Designed, Relatively Inexpensive, Dynamic, Weighted Lottery.” http://www.cs.bham.ac.uk/research/projects/cogaff/misc/lottery.html.
  • Times Higher Education. 2019. “Is Paid Research Time a Vanishing Privilege for Modern Academics?” https://www.timeshighereducation.com/features/paid-research-time-vanishing-privilege-modern-academics.
  • Travis, G. D. L., and H. M. Collins. 1991. “New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System.” Science, Technology, & Human Values 16 (3): 322–341.