Research Article

‘Special issue-ization’ as a growth and revenue strategy: Reproduction by the “big five” and the risks for research integrity

Received 03 Apr 2024, Accepted 26 Jun 2024, Published online: 07 Jul 2024

ABSTRACT

The exponential growth of MDPI and Frontiers over the last decade has been powered by their extensive use of special issues. The “special issue-ization” of journal publishing has been particularly associated with new publishers and seen as potentially “questionable.” Through an extended case-study analysis of three journals owned by one of the “big five” commercial publishers, this paper explores the risks that this growing use of special issues presents to research integrity. All three case-study journals show sudden and marked changes in their publication patterns. An analysis of special issue editorials and retraction notes was used to determine the specifics of special issues and the reasons for retractions. Descriptive statistics were used to analyze the data. Findings suggest that these commercial publishers are also promoting special issues and that article retractions are often connected to guest editor manipulation. This underlines the threat that “special issue-ization” presents to research integrity. It highlights the risks posed by the guest editor model, and the importance of extending this analysis to long-existing commercial publishers. The paper emphasizes the need for an in-depth examination of the underlying structures and political economy of science, and a discussion of the rise of gaming and manipulation within higher education systems.

Introduction

Scholars are increasingly concerned about the rising number of journal “special issues” (Alrawadieh 2020; Oviedo-García 2021), and whether these can be linked to the work of paper mills, fabricated research papers, and editorial bribing (Abalkina 2023; Cabanac, Labbé, and Magazinov 2021; Joelving 2024). Despite growing concerns, research on “special issue-ization” is exceptionally scarce and tends to focus on new commercial publishers, in particular MDPI. The assumption has been that the use of special issues as a growth and revenue strategy is associated solely with these publishers. This research offers a close-up analysis of three case-study journals published by traditional, long-existing publishers, each with a sizable number of special issues, and in doing so extends and deepens the discussion of “special issue-ization.” To the best of our knowledge, it is the first study of “special issue-ization” in traditional commercial publishers (i.e., the big five). Our concern is less with the concept of the special issue itself than with its use as a growth and revenue strategy, which is “an increasingly widespread phenomenon with slight variations” (Repiso et al. 2021, 593).

Our contribution to the scholarly discussion on research integrity is twofold. First, we demonstrate through our analysis that this strategy is no longer confined to new commercial publishers. Second, we argue that this dynamic poses significant risks for research integrity, and that associating it solely with new commercial publishers understates the risks this spread poses for the scientific community.

Special issue-ization as a growth and revenue strategy

Neoliberal manifestations of performativity have molded higher education institutions into commercial enterprises (Croucher and Lacy 2022) where performative values are emphasized (Aguinis et al. 2020; Mula-Falcón, Caballero, and Segovia 2022). Within this landscape, we as scholars are increasingly pressured to prove our worth in highly metricized research systems, often characterized as a game where scoring and value judgments are based on publication quantity, impact and reach (Kulikowski, Przytuła, and Sułkowski 2023; Sandy and Shen 2019). Playing this “evaluation game” (Kulczycki 2023) is a precondition for academic success and is integral to most higher education systems. Scholars are required, and in many cases pressured, to comply with research evaluation systems and reward structures and to publish in order not to perish (van Dalen and Henkens 2012).

All too often, these systems foster goal displacement: the measurement loses its instrumental value and becomes the main goal (Csiszar 2019; de Rijcke et al. 2016; Müller and de Rijcke 2017), even as scholars are urged to avoid gaming and manipulation to advance their careers (Butler and Spoelstra 2020). Some seek to distinguish between playing without compromising scholarly values and ethical principles, and using cynical strategies that violate research integrity to game the system for personal benefit (Butler and Spoelstra 2024; McIntosh and Vitale 2023). Kulczycki (2023) sees this as the difference between “playing” the game and gaming the system. As the quantification of scholarship and the dominance of performance indicators spread across the globe, some argue that gaming and manipulation have proliferated (Gopalakrishna et al. 2022; Oravec 2020; Xie, Wang, and Kong 2021), posing significant risks for research integrity and damaging trust in the academic system.

The finger is pointed at individual researchers (Biagioli et al. 2019), publishers of so-called “predatory” journals (Mertkan, Onurkan Aliusta, and Suphi 2021a; Mertkan, Onurkan Aliusta, and Suphi 2021b; Frandsen 2022), conferences that guarantee publications (Pecorari 2021), citation cartels (Baccini, De Nicolao, and Petrovich 2019), and paper mills fabricating papers using AI, selling authorship or bribing editors (Abalkina 2023; Cabanac, Labbé, and Magazinov 2021; Joelving 2024). Most recently, “special issues” have been linked to paper mills, fabricated research papers, and editorial bribing, following their increasing use as a growth and revenue strategy by new commercial publishers (Abalkina 2023; Cabanac, Labbé, and Magazinov 2021; Joelving 2024).

Special issues are defined as journal issues that are normally “proposed and overseen by a guest editor, and focus on a specific area of research” (Else 2021). They have long been used as a way for academic collaborators to publish work on a topic of shared interest. Soon after the Royal Society launched its first journal, the Philosophical Transactions, in 1665, the society began publishing a series of “collections” based on the work of its members to complement the journal (Fyfe et al. 2022). Ever since, journals have occasionally published themed or “special” issues, which have been observed to generate greater citations and impact, and in doing so can act to catalyze new debates and approaches within a specific field (Olk and Griffith 2004; Sala et al. 2017).

Since 2000, this practice has been turned into a commercial strategy by new publishers such as Hindawi, MDPI and Frontiers to accelerate growth and generate revenue. In some cases, the number of special issues they publish has increased exorbitantly (Huang et al. 2022), far outnumbering regular issues (Oviedo-García 2021). This exponential increase has led to growing concerns about the editorial standards set by these new publishers (Petrou 2023). There has been a particular focus on MDPI (Petrou 2020; Brockington 2022; Crosetto 2021), and to a lesser extent Frontiers and Hindawi (Petrou 2023).

In March 2023, senior executives at John Wiley and Sons reported on their latest financial results in a Zoom call with investors. Brian Napack, Wiley’s CEO, described what he called an “unexpected event” at Hindawi, their Open Research publication arm. This had forced them to temporarily suspend a “fast growing publishing program” known as “SIs.” Investors wanted to know more about this SI program. The CEO recounted how “compromised papers” were being submitted by “non-Wiley” guest editors and reviewers, and that their response was to “purge the external bad actors” as well as “scrubbing the archive,” retracting the compromised articles. During the Q&A session, the CEO admitted that they had been on track to bring in $85 million in revenue from Hindawi, which Wiley had bought two years earlier for $300 million. About half of this anticipated revenue was from SIs. He admitted that this publishing “pause” would lead to a $30 million loss in Wiley revenue. The news immediately hit their share price, which fell 16%. Wiley subsequently closed down the four Hindawi journals that accounted for half of the 1,700 articles they were forced to retract, and a further 19 Hindawi journals were delisted from Clarivate’s Web of Science index for failing to meet its quality benchmarks.[1] The story highlights the profits that SIs generate within an “author-pays” publishing model, along with the risks that come with rapid expansion and the prioritization of volume over quality.

Beyond the Hindawi case, there has been an increasing number of other high-profile mass “batch” retractions by long-standing major publishers from special issues of their journals. Despite this, data on the growth of “special issue-ization” has focused primarily on new commercial publishers, in particular MDPI (e.g., Hanson et al. 2023). This study provides evidence on the adoption of this strategy by traditional, long-existing publishers and the risks this poses for research integrity.

Research design

This paper is based on article-level document analysis of three case-study journals published by a long-standing major publisher. After one author became aware of the publication practices of one case-study journal, the research team intentionally looked for similar journals. Our selection was guided by three criteria: journals being published by the same publisher, having published a considerable number of special issues over the years, and having previously been indexed. More information about the case-study journals is provided in the next section.

All the issues listed on the journal websites were examined to identify how the publication patterns of the case-study journals have changed over time. Each volume was individually analyzed for the number of issues, and each issue for the number of articles (excluding editorials, notes, comments and the like), the issue type (i.e., special issue or regular issue) and the number of subsequent retractions, if any. There were cases of issues including both special issue articles (in some cases articles from topical collections) and regular issue articles. In most cases, this was not easily discernible from the journal site, despite appearances. Some issues also contained two special issues. For example, issue 3 of one of the case-study journals, published in 2020, included two special issues, one of which was conference related while the other was not. We counted this issue and similar cases as two issues.

The number of articles published in a special issue or topical collection was calculated by counting the articles in each issue. This information was extracted and saved in an Excel file. The publication date of each issue was also noted, along with an approximation of the journal’s CiteScore for each year this information was available on Scopus. We intentionally report only approximate CiteScores to protect the anonymity of the case-study journals. CiteScore has been used by Elsevier since December 2016 to measure the citation impact of journals indexed in Scopus (Meho 2019). A high CiteScore suggests a journal is cited more often and has a high reputation; as such, it influences the image of a journal and the submission decisions of authors.

The research team also read the editorials of special issues to determine the conference an issue was linked to (if any), and read retraction notes to identify the main reasons for retraction and to identify retracted articles originally published in special issues. This generated trend data on the journals’ publishing history and volume expansion over time. We did not examine author affiliations for two of the case-study journals, as our focus was on the adoption of the guest editor model as a growth and revenue strategy, and author affiliations would not necessarily provide further detail on publisher practices. A different approach was used for the research methods journal, because we came across two exceptionally large journal “supplements” in which all the authors were from the same institution. At first glance this gave the impression that the issue was university-sponsored, but there was no mention of this in the supplement. We therefore decided to analyze author affiliations and to seek to understand the relationship between the authors and one higher education institution. Descriptive statistics were used to analyze the data.
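To illustrate how the counting rules above feed into the descriptive statistics reported in the Results section, the sketch below tabulates issue-level records of the kind we describe. It is a minimal illustration rather than our actual analysis script; the column names and sample rows are hypothetical.

```python
# Minimal sketch of the issue-level tabulation described above.
# Column names and sample rows are hypothetical, not data from
# the case-study journals.
import pandas as pd

issues = pd.DataFrame(
    [
        # year, issue_type, n_articles (excluding editorials, notes,
        # comments and the like), n_retracted
        (2018, "special", 28, 0),
        (2018, "regular", 9, 0),
        (2021, "special", 100, 97),
        (2021, "regular", 12, 0),
    ],
    columns=["year", "issue_type", "n_articles", "n_retracted"],
)

# Descriptive statistics of the kind reported below: issues, articles
# and retractions per year, and the overall share of special issues.
by_year = issues.groupby("year").agg(
    n_issues=("issue_type", "size"),
    total_articles=("n_articles", "sum"),
    total_retracted=("n_retracted", "sum"),
)
special_share = (issues["issue_type"] == "special").mean()

print(by_year)
print(f"Share of special issues: {special_share:.2%}")
```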

Results

This section provides case studies of three anonymized academic journals. All three are published by one of the dominant “big five” commercial publishers: Elsevier, Taylor and Francis, Springer-Nature, Sage and Wiley. We have chosen journals published by these publishers because critiques of the integrity of special issues, albeit limited in extent, have tended to focus on journals run by “challenger” publishers, in particular MDPI. All three journals are “hybrid,” containing a mixture of open access and paywalled articles. All three have been, and two continue to be, indexed in Scopus. All three were also previously indexed in Web of Science, with one being delisted in late 2018, one in 2021 and one in 2022. All have grown in volume size and frequency. In some cases, sudden jumps in the size of an issue or volume are the direct result of guest-edited special issues or supplements. We chose these journals primarily because of their previously indexed status. Being indexed, and the accompanying impact factor this provides, is a key marker of journal reputation. As many universities expect their researchers to submit only to journals in Web of Science or Scopus, being indexed generates additional submissions, as well as pressure on editors. The publisher brand is another reputational marker that influences submission decisions.

Two of the three case-study journals have been the subject of high-profile “exposés” by Retraction Watch and of subsequent retractions, whereas one has avoided any journalistic suspicion or questions. All have been anonymized to minimize the risk of identification. Their titles, fields and scope matter less than the patterns that they make visible.

A computing journal

This computing journal was launched in 2010, capitalizing on the growth of research into the internet. Initially it was published four times a year, with around 8 articles in each issue. The journal gradually grew in size and frequency of publication, publishing 6 issues a year by 2013 and 12 a year since 2019. It has been indexed by Scopus since 2010, and in 2021 was ranked around 30th out of 230 in its field, with a CiteScore of more than 6.

In 2018, it published 146 articles; this number increased to 432 articles in 2020 and 775 in 2021. By the end of 2022, the journal had published a total of 2,564 papers including retracted articles, 431 of which were retracted, as we discuss in more detail below. Of these, 2,154 were published between 2018 and 2022, during which the number of articles published each year consistently increased. The years 2021 and 2022 are particularly noteworthy: a dramatic increase and then a decrease in special issues were observed in these years. As we explain in more detail below, four of the “regular” issues published in 2021 were later retracted, either in their entirety or to a large extent. Of the 775 articles published in 2021, 629 were published in “regular” issues; as the retraction notes indicate, 334 of these were retracted articles that had previously been published in guest-edited issues. In total, 337 retracted articles (all previously published in guest-edited issues) appeared in the 2021 output, meaning only 438 of the 775 articles were new. Figure 1 demonstrates growth in the number of articles published, including retractions.
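These 2021 figures can be cross-checked with simple arithmetic; the short sketch below reproduces the decomposition using only the numbers given in the text.

```python
# Cross-check of the 2021 figures for this journal, using only the
# numbers reported in the text above.
published_2021 = 775          # all items published in 2021
retracted_republished = 337   # retracted articles, all first published
                              # in guest-edited issues
new_articles = published_2021 - retracted_republished
assert new_articles == 438    # matches the figure given above
```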

Figure 1. Number of articles published a year by this Computing journal.

As Figure 2 shows, much of this growth was achieved through special issues, including three years in which all issues were special issues. Of the 114 issues published by the end of 2022 (only 97 are listed on the journal website, but 16 included two special issues and so were counted as two issues each; one regular issue also included a special issue and again was counted as two issues), 84 (73.68%) were special issues. Sixteen of these incorporated two special issues each, and 32 were conference related. Several years saw issues containing two or more special issues: 2013 (n = 3), 2014 (n = 1), 2018 (n = 4), 2019 (n = 2), 2020 (n = 2), 2021 (n = 1) and 2022 (n = 2). This, along with the publication of articles from guest-edited issues in “regular” issues, seems to indicate attempts to obscure the full extent of the use of special issues.

Figure 2. Number of special and regular issues by year in this Computing journal.

There were also five issues in which the vast majority of articles were subsequently retracted. Four of these were published in 2021: two contained 100 articles each, with one including 97 retracted articles and the other 99; one contained 99 articles, 72 of which were retracted; and one contained 100 articles, 63 of which were retracted. None of these issues were labeled differently, and all looked like regular issues. The remaining one was published in 2022, was labeled a supplement, and included only retracted articles (n = 79). The two issues with 97 and 99 retracted papers and the supplement were not included in our analysis of issue type, as these consisted of previously published articles and would not count as regular issues with new articles.

Of this total of 2,564 articles, 1,887 (73.59%) were published in guest-edited issues. Only 677 (26.40%) were published in “regular” issues. In total, by the end of 2022, the journal had retracted 431 papers, the vast majority of which were retracted in 2021 (n = 337; 78.14%), followed by 2022 (n = 86; 22.19%).

As the retraction notes indicate, 429 of the 431 retracted papers were from guest-edited issues, though 421 of these 429 retractions were actually published in regular issues. We count these articles (n = 421) as having been published in special issues even though the retractions appeared in regular issues. According to the retraction notes subsequently published, all but two (n = 430; 99.53%) were from guest-edited special issues. This indicates that 22.79% of all articles published in special issues resulted in retractions. The same retraction note is appended to all, blaming “manipulation” of the peer-review process by a guest editor and explaining that the editorial process had been compromised. The publisher’s confusing retraction statements attracted the attention of integrity “sleuths” and Retraction Watch, which ran several exposés on the journal. A response from the publisher placed the blame on so-called “paper mills.”
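The headline shares for this journal can be reproduced directly from the totals given above, as the brief sketch below verifies.

```python
# Verification of the headline shares reported above for this journal.
total_articles = 2564
guest_edited = 1887    # articles published in guest-edited issues
regular = 677          # articles published in "regular" issues
retracted_guest = 430  # retractions traced to guest-edited issues,
                       # per the retraction notes

print(f"Guest-edited share: {guest_edited / total_articles:.2%}")   # ~73.6%
print(f"Regular share: {regular / total_articles:.2%}")             # ~26.4%
print(f"Special-issue retraction rate: "
      f"{retracted_guest / guest_edited:.2%}")                      # ~22.79%
```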

Web of Science delisted the journal in early 2022. The comments section of SCImago SJR, a portal for Scopus data, includes complaints about the impact of this delisting on PhD students who needed to publish in an SCI-indexed journal in order to graduate. One described their university as retrospectively applying the Web of Science judgment to articles published in 2021 and earlier. The journal was delisted from Web of Science for two years and can reapply for registration in 2024. It continues to be ranked in the top quartile of computing science journals by Scopus. Its CiteScore increased from 5.4 in 2020 to 6.5 in 2021 and to 9.6 in 2022.

A research methods journal

This research methods journal was founded in the late 1960s. It has been indexed by Scopus since the 1960s, and in 2021 was ranked around 10th out of 264 in its field, with a CiteScore of more than 7. Explicitly interdisciplinary in ambition, it has sought to promote the use of statistical tools and methods in the social sciences. It began by publishing two issues a year, a number that gradually increased to six in the early 2000s. It continues to publish around six to eight issues a year, totaling around 200 articles a year, a much smaller increase than in the other two journals examined. In total, it has published just under 2,000 articles over more than 50 years, fewer than the two other journals reviewed here, but a third of these have been published since 2011, as Figure 3 indicates.

Figure 3. Number of articles published a year by this Research Methods journal.

As Figure 4 demonstrates, the journal first published a special issue in the early 2000s. Since 2015, a growing proportion of the journal has been made up of special issues, and a total of 18 special issues had been published by the end of 2022. Around half of these are conference related. In 2018, 70% of all articles published (amounting to more than 180 articles) appeared in a special issue, with 106 of these being in two guest-edited “supplements,” a new category for the journal.

Figure 4. Number of special and regular issues by year in this Research Methods journal.

This journal’s publisher describes “supplements” as collections and types of articles that might not normally be considered, such as commercially sponsored work. The nature of the sponsorship in the case of this particular journal’s supplements, if indeed any did exist, was not made clear. There was also no accompanying editorial explaining why this was a supplement, or what that meant for the type of articles it included.

The two guest-edited supplements, amounting to 106 articles in total, are noteworthy for their authorship profiles. The guest editors authored or coauthored 19 of the articles. Ninety-four (89%) of the articles in these supplements were authored or coauthored by researchers all with the same university affiliation as the editors. Though not acknowledged openly on the journal’s website, the author profile suggests these supplements match the description of university-sponsored supplements, seen by many as a “questionable” practice (Chen 2015). One supplement consists of three special issues, all edited by the same editor, in one case with another co-editor. A number of individuals coauthored more than 10 articles in each special issue. Creating a further “zone of exception,” around 80 of the articles were labeled as “Continuing Education” rather than as “Original Research” articles. This term was first used by the journal earlier in the same year, again to denote articles from the same institution featured in these supplements. No information is provided on the journal website as to what this term means or how the articles it denotes differ from “Original Research” articles.

We individually read the abstracts for content relevance and classified nearly all of the articles published in these supplements as “out of scope” in relation to the journal’s remit. Our approach to assessing scope acknowledges that one long-standing aim of publishing special issues has been to introduce new lines of research. Because the journal deliberately sets out to promote debates on methods, we assessed articles based on their link to methodological debates, classifying them “out of scope” only in the absence of such a link or of any mention of methodological debates. All were education related, and most reported on empirical research, but none focused specifically on the methodological issues raised by these research projects, despite the journal clearly stating that its remit is to promote methodological debates. Special issues outside the scope of a journal have previously been identified as a feature of “deceptive” journals (Memon 2019).

The publisher’s guidelines require that, even if supplements are financially sponsored by institutions or conferences, articles submitted to a supplement go through the same rigorous peer-review process. Many of the articles in these supplements nevertheless have significant methodological, linguistic or content-related problems. For example, two included obscure references to qualitative interviews, using terminology that made the discussion of data collection somewhat incomprehensible, referring to interviews as a “meeting technique.”

No post-publication concerns have been raised about the quality or scope of these articles, and the journal has not featured in discussions on platforms such as PubPeer or Retraction Watch. Consequently, there have been no retractions from this journal. The journal continues to be indexed in Scopus and had a CiteScore of about 5 in 2022, up from around 2.2 in 2017 (the year it started to publish a larger proportion of articles in special issues), though down from around 7.3 in 2021. These unusual supplements may, however, have led Web of Science to delist the journal: WoS stopped indexing it after the articles featured in the supplements were published online, but before they appeared as a supplement.

Three more conference special issues were published in the journal following these supplements, one in 2019 and two in 2020. From early 2022, the journal began using the terms special issue, supplement and “collection” interchangeably.

A physical sciences journal

This physical sciences journal was launched in 2008, publishing 4 issues a year, with a remit to cover work across a wide range of specialisms in its field and a regional remit to promote work in one part of the world. It adopts a hybrid publishing model, and authors can choose to publish open access. By 2018 it was publishing 24 issues a year, with an average of 30 articles in each issue. The journal referred to a group of papers on a particular theme or from a conference as a “topical collection” rather than as a special issue; the main difference seems to have been the staggered timing of publication of articles within a collection. A set of 15 topical “Chief Editors” was created to oversee editorial boards in each of its specialist areas. A total of more than 30 “topical collections” were published, the majority linked to conferences. Apart from one issue published in 2010, none of these were labeled as special issues; instead, individual articles are labeled as being from a topical collection within each issue.

Over time the journal gradually expanded, publishing 419 articles in 2013, 778 in 2018, 1,270 in 2020 and 2,480 in 2021. By the second half of 2021, some issues had more than 140 articles. A total of 10,285 articles had been published to date; 7,419 (72.13%) of these were published since 2015, as Figure 5 reveals.

Figure 5. Number of articles published a year by this Physical Sciences journal.

Starting in 2016, articles were increasingly labeled as being part of “topical collections,” as Figure 6 shows.

Figure 6. Published articles in this physical sciences journal, including ‘topical collections’.

Articles from these topical collections were subsequently subject to “mass” retractions, after concerns about the articles were raised on PubPeer and then amplified by Retraction Watch. PubPeer is a California-based nonprofit online forum for post-publication discussion of papers. The PubPeer commentaries about these issues included links to discussion forums hosted by LetPub, a company based in the US and China that offers a range of journal selection and manuscript editing services and claims a high profile within the Chinese academy. The accusation was that the Mandarin-language discussion forums hosted by LetPub included guidance on how best to get issues accepted by journals, and which journals to target.

The publisher’s use of “topical collections” makes it difficult to match retracted articles to specific special issues. Examination of the “retraction notes,” however, reveals that these articles were originally submitted to guest-edited issues, suggesting the terminological change was largely cosmetic. A total of 370 articles were retracted by the end of 2022; of these, 365 (98.65%) were from topical collections. First an editorial expression of concern was published, then retraction notices, tersely stating that the peer review process had not been carried out “in accordance” with the publisher’s peer review policy. One such topical collection subsequently had 301 retractions, another around 43, a third 20, and a fourth a single retraction.

The journal had a CiteScore of around 2.5 in 2021, the year it was discontinued in Scopus, up from 1.8 in 2015, the year it published its first special issue. Retraction Watch published several quizzical pieces on the case, along with a guest editor’s claim that his e-mail had been hacked and used to approve these submissions without his knowledge. The publisher responded that it would make greater use of machine learning tools to detect manipulation and strengthen its oversight of special issue workflows.

Discussion

Special issues provide a lens through which to understand the changing economics of the global science communication system and its ethical shadows. They exemplify the concatenation of commercial, editorial and academic interests, and the incentives driving an accelerated research economy (Carrigan 2015; Vostal 2015). The journals analyzed in this paper were, until these events, all indexed in Web of Science, making them highly appealing to authors whose universities require publication “outputs” in “reputable” (read: indexed) journals. All three had expanded, two rapidly, partly at the behest of their publisher, using special issues, themed issues, supplements and topical collections to attract (and accept) more submissions. All three also tended to attract multiply-authored articles, making problematic “guest” or “ghost” authorship difficult to assess (Matheson 2023; Sismondo 2009). All are quite broad in scope, in terms of either their methodological or substantive coverage, again making it difficult to recognize “out of scope” submissions.

Many of the business practices pioneered by MDPI and Frontiers are now being copied by the big five commercial publishers. Springer-Nature and Elsevier, for example, market a broader set of services to authors, including the option of having rejected articles transferred to other journals in their portfolios. Where there was once one Nature journal there is now a suite of more than 40 journals with Nature in the title (Khelfaoui and Gingras 2022), a process of brand “extension” that capitalizes on the visibility of the Nature trademark. Meanwhile, the number of articles published in Nature’s OA journals has expanded six-fold in eight years.

The three case-study journals examined here exhibit a similar trend: the adoption of guest-edited special issues as a growth and revenue strategy by long-existing commercial publishers. Special issues pose new risks for research integrity, as they can be a relatively easy target for large-scale fraud. We acknowledge that any peer review system is imperfect, with editors and authors being colleagues who work in narrow disciplines, but as recent research has highlighted, relying on the oversight of guest editors can be particularly vulnerable to suspicious collaboration patterns (Abalkina 2023; Joelving 2024). This is partly because there is a greater onus on journal editors to sustain their journal’s reputation for rigorous peer review, whereas guest editors do not have the same concerns and do not face similar pressures.

With this perspective, we argue for the importance of understanding the gaming and manipulation tactics associated with the guest editor model in more depth. More empirical attention needs to be given to the conditions under which “special issue-ization” promotes a lack of research integrity. More research on the true extent of the research integrity risks caused by “special issue-ization” is also timely. This would, at the very least, require extending analysis beyond new publishers to include long-existing commercial ones. Otherwise, the picture will be no better than the tip of the iceberg.

Perhaps the most important issue for further research to concentrate on, however, is the underlying structures and political economy of science, and why “gaming” is integral to any system of audit and measurement. It is only by making careful sense of these relationships and moral judgments that we can develop a fuller understanding of the best way to address the gaming and manipulation tactics increasingly being observed in higher education systems across the globe. The growth in the discourse around fraud, and the extent to which it is detected by investigative sleuths, is certainly important, but so too is the sheer expansion of the global science system, and the adoption of research evaluation policies that discourage, rather than encourage, gaming and manipulation tactics among scholars and publishers (Teixeira da Silva et al. 2024).

Not everyone is convinced that this is an epidemic of fraud. Some even question whether framing these developments as a “new ecology of misconduct,” to use Biagioli and Lippman’s term (2020, 1), is helpful. Gross and Harmon suggest that “science as a social institution … has silently incorporated into itself incentives at odds with its norms” (2016, 51), such that these practices are now embedded within the publishing system.

Others take the view that “misconduct is embodied in the infrastructure itself” (Power 2020). As auditability and quantified self-assessment become a “common sense” cultural value, instrumentalism is the only way to respond. From Power’s perspective, terms like “gaming” and “cheating” are misnomers. The conversion of metrics into standards requires participants to conform to the system’s expectations (Griesemer 2020). Misconduct is, in this view, the misalignment between the system and the conduct it requires: “ethical evaluation lags metrical assessment” (Griesemer 2020, 109).

Griesemer goes on to suggest that the research system can no longer be understood epistemically. Extending Goodhart’s law, he argues that both individual and organizational behaviors change in response to policies based on the models, and that “policy causes the model to cease to represent the very thing the measure was designed to measure as it changes the system’s causal structure” (Griesemer 2020, 115). The resulting feedback loops create an endless arms race between those seeking to shore up the “integrity” of the research publishing system through metrics-based surveillance and detection technologies, and those who “game” in order to make more profit.

Conclusion

Once upon a time, Robert Merton imagined science as a self-governing community (Merton 1942). Few would be so idealistic today. Investigative “sleuths,” community forums such as PubPeer, “whistle-blowers” and journalistic organizations such as Retraction Watch have become the high-profile guardians of scientific integrity. They are the first to raise concerns about breaches in academic publishing integrity (Didier and Guaspare-Cartron 2018), and regularly highlight the governance and quality risks that journal special issues present. Whilst they publicize a relatively small number of extreme cases, their work sustains the broader perception that these breaches are just the “tip of the iceberg” (Gross and Harmon 2016). In our opinion, it is in their interest to sustain this sense of anxiety. Science watchdogs and “guardians” have less to say about the underlying structures and political economy of science, and about why manipulation is increasingly evident across global higher education systems.

The case studies in this article show how special issues are one driver of journal growth, helping to maximize profile and citation impact. Whilst all three case-study journals are “hybrid” journals, some of their authors will be paying to publish open access. After the headlines about high-profile retractions are forgotten, the reputational drivers and incentives remain the same. There will be more special issues, more brokers, and more opportunities. Given the incentives, pressures and expectations built into this metricized system, there is always a new tactic to try.

Journal and publisher reliance on guest-edited special issues poses important questions about the relationship between academic practice and research integrity, and offers a valuable analytical lens through which to understand the changing economics and reputational logics of the science communication system. In this paper, we have shown how the “special issue-ization” of journal publishing is reshaping business models, reinforcing the influence of the citation economy (Cranford 2020), and putting the whole publishing system under pressure. Yes, it is leading to revenue and citation growth, but it is also promoting gaming, acceleration and productivism. In an unequal science system, these trends have the most impact on junior scholars and those working at the academic peripheries, as they find themselves forced to publish ever more in order to be seen as credible and competitive.

Authors, editors, reviewers, publishers, universities and funders are all caught up in this complex knowledge production ecosystem, along with the non-human calculus of the citation indexes. In the short term, most benefit from the continued expansion of the commercial publishing system, despite the questionable value and ultimate sustainability of this exponential growth. Portraying mass retractions of journal special issues as an existential integrity crisis for science communication is one response. Yet this deflects media and policy attention from the inequalities built into this global science system, and the everyday acts of academic productivism required to survive in a metrics-driven research economy.

Are there any alternatives to this commercially driven publishing ecosystem? In 2021, UNESCO published its Recommendation on Open Science, which sets out an idealistic vision for research infrastructures that are “organized and financed upon an essentially not-for-profit and long-term vision, which enhance open science practices and guarantee permanent and unrestricted access to all, to the largest extent possible” (UNESCO 2021). Building on this aspiration, in May 2023 the European Council recommended that European member states “step up support” for the development of a not-for-profit publishing platform free to both authors and readers (so-called Diamond OA).[2] It is hard to imagine a complete shift to a noncommercial model, but beyond the logics of expansion, another future may be possible.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

Notes

1. The full webcasts and transcripts of Wiley’s earnings calls are available at https://investors.wiley.com/events-and-presentations/events/event-details/2023/Q3–2023-Earnings-Call/default.aspx and https://investors.wiley.com/events-and-presentations/events/event-details/2023/Q4–2023-Earnings-Call/default.aspx (accessed June 21st, 2023). See also Clarke and Esposito’s March 2023 brief “Not so Special” on these events: https://www.ce-strategy.com/the-brief/not-so-special/.

References

  • Abalkina, A. 2023. “Publication and Collaboration Anomalies in Academic Papers Originating from a Paper Mill: Evidence from a Russia-Based Paper Mill.” Learned Publishing 36 (4): 689–702. https://doi.org/10.1002/leap.1574.
  • Aguinis, H., C. Cummings, R. S. Ramani, and T. G. Cummings. 2020. “‘An a Is an A’: The New Bottom Line for Valuing Academic Research.” Academy of Management Perspectives 34 (1): 135–154. https://doi.org/10.5465/amp.2017.0193.
  • Alrawadieh, Z. 2020. “Publishing in Predatory Tourism and Hospitality Journals: Mapping the Academic Market and Identifying Response Strategies.” Tourism and Hospitality Research 20 (1): 72–81.
  • Baccini, A., G. De Nicolao, and E. Petrovich. 2019. “Citation Gaming Induced by Bibliometric Evaluation: A Country-Level Comparative Analysis.” PLOS ONE 14 (9): e0221212. https://doi.org/10.1371/journal.pone.0221212.
  • Biagioli, M., and A. Lippman. 2020. “Introduction: Metrics and the New Ecologies of Academic Misconduct.” In Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by M. Biagioli and A. Lippman. Cambridge, MA: MIT Press.
  • Biagioli, M., M. Kenney, B. R. Martin, and J. P. Walsh. 2019. “Academic Misconduct, Misrepresentation and Gaming: A Reassessment.” Research Policy 48 (2): 401–413. https://doi.org/10.1016/j.respol.2018.10.025.
  • Brockington, D. 2022. “MDPI Journals: 2015–2021.” https://danbrockington.com/2022/11/10/mdpi-journals-2015-2021/.
  • Butler, N., and S. Spoelstra. 2020. “Academics at Play: Why the ‘Publication Game’ is More Than a Metaphor.” Management Learning 51 (4): 414–430. https://doi.org/10.1177/1350507620917257.
  • Butler, N., and S. Spoelstra. 2024. “‘You Just Earned 10 Points!’: Gaming and Grinding in Academia.” Organization 31 (4): 720–730. https://doi.org/10.1177/13505084221145589.
  • Cabanac, G., C. Labbé, and A. Magazinov. 2021. “Tortured Phrases: A Dubious Writing Style Emerging in Science: Evidence of Critical Issues Affecting Established Journals.” arXiv:2107.06751.
  • Carrigan, M. 2015. “Life in the Accelerated Academy: Anxiety Thrives, Demands Intensify and Metrics Hold the Tangled Web Together.” http://blogs.lse.ac.uk/impactofsocialsciences/2015/04/07/life-in-the-accelerated-academy-carrigan/.
  • Chen, X. 2015. “Questionable University-Sponsored Supplements in High-Impact Journals.” Scientometrics 105:1985–1995. https://doi.org/10.1007/s11192-015-1644-0.
  • Cranford, S. 2020. “CREAM: Citations Rule Everything Around Me.” Matter 2 (6): 1343–1347. https://doi.org/10.1016/j.matt.2020.04.025.
  • Crosetto, P. 2021. “Is MDPI a Predatory Publisher?” https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/.
  • Croucher, G., and W. B. Lacy. 2022. “The Emergence of Academic Capitalism and University Liberalism: Perspectives of Australian Higher Education Leadership.” Higher Education 83 (2): 279–295. https://doi.org/10.1007/s10734-020-00655-7.
  • Csiszar, A. 2019. “Gaming Metrics Before the Game: Citation and the Bureaucratic Virtuoso.” In Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by M. Biagioli and A. Lippman, 31–42. Cambridge, MA: MIT Press.
  • de Rijcke, S., P. F. Wouters, A. D. Rushforth, T. P. Franssen, and B. Hammarfelt. 2016. “Evaluation Practices and Effects of Indicator Use: A Literature Review.” Research Evaluation 25 (2): 161–169. https://doi.org/10.1093/reseval/rvv038.
  • Didier, E., and C. Guaspare-Cartron. 2018. “The New Watchdogs’ Vision of Science: A Roundtable with Ivan Oransky (Retraction Watch) and Brandon Stell (PubPeer).” Social Studies of Science 48 (1): 165–167. https://doi.org/10.1177/0306312718756202.
  • Else, H. 2021. “‘Tortured Phrases’ Give Away Fabricated Research Papers.” Nature 596 (7872): 328–329. https://doi.org/10.1038/d41586-021-02134-0.
  • Frandsen, T. F. 2022. “Authors Publishing Repeatedly in Predatory Journals: An Analysis of Scopus Articles.” Learned Publishing 35 (4): 598–604. https://doi.org/10.1002/leap.1489.
  • Fyfe, A., N. Moxham, J. McDougall-Waters, and C. Mørk Røstvik. 2022. A History of Scientific Journals. London: UCL Press.
  • Gopalakrishna, G., G. Ter Riet, G. Vink, I. Stoop, J. M. Wicherts, and L. M. Bouter. 2022. “Prevalence of Questionable Research Practices, Research Misconduct and Their Potential Explanatory Factors: A Survey Among Academic Researchers in the Netherlands.” PLOS ONE 17 (2): e0263023. https://doi.org/10.1371/journal.pone.0263023.
  • Griesemer, J. 2020. “Taking Goodhart’s Law Meta: Gaming, Meta-Gaming, and Hacking Academic Performance Metrics.” In Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by M. Biagioli and A. Lippman, 77–87. Cambridge, MA: MIT Press.
  • Gross, A. G., and J. E. Harmon. 2016. The Internet Revolution in the Sciences and Humanities. Oxford: Oxford University Press.
  • Hanson, M. A., P. G. Barreiro, P. Crosetto, and D. Brockington. 2023. “The Strain on Scientific Publishing.” Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2023/10/23/the-strain-on-academic-publishing/.
  • Huang, R., Y. Huang, F. Qi, L. Shi, B. Li, and W. Yu. 2022. “Exploring the Characteristics of Special Issues: Distribution, Topicality, and Citation Impact.” Scientometrics 127 (9): 5233–5256. https://doi.org/10.1007/s11192-022-04384-9.
  • Joelving, F. 2024. “Paper Trail.” Science 383 (6680): 253–255. https://doi.org/10.1126/science.ado0309.
  • Khelfaoui, M., and Y. Gingras. 2022. “Expanding Nature: Product Line and Brand Extensions of a Scientific Journal.” Learned Publishing 35 (2): 187–197. https://doi.org/10.1002/leap.1422.
  • Kulczycki, E. 2023. The Evaluation Game: How Publication Metrics Shape Scholarly Communication. Cambridge: Cambridge University Press.
  • Kulikowski, K., S. Przytuła, and L. Sułkowski. 2023. “When Publication Metrics Become the Fetish: The Research Evaluation Systems’ Relationship with Academic Work Engagement and Burnout.” Research Evaluation 32 (1): 4–18. https://doi.org/10.1093/reseval/rvac032.
  • Matheson, A. 2023. “The “Monsanto Papers” and the Nature of Ghost-Writing and Related Practices in Contemporary Peer Review Scientific Literature.” Accountability in Research: 1–30. https://doi.org/10.1080/08989621.2023.2234819.
  • McIntosh, L. D., and C. H. Vitale. 2023. “Safeguarding Scientific Integrity: A Case Study in Examining Manipulation in the Peer Review Process.” Accountability in Research. https://doi.org/10.1080/08989621.2023.2292043.
  • Meho, L. I. 2019. “Using Scopus’s CiteScore for Assessing the Quality of Computer Science Conferences.” Journal of Informetrics 13 (1): 419–433. https://doi.org/10.1016/j.joi.2019.02.006.
  • Memon, A. R. 2019. “Revisiting the Term Predatory Open Access Publishing.” Journal of Korean Medical Science 34 (13): e99. https://doi.org/10.3346/jkms.2019.34.e99.
  • Mertkan, S., G. Onurkan Aliusta, and N. Suphi. 2021a. “Profile of Authors Publishing in ‘Predatory’ Journals and Causal Factors Behind Their Decision: A Systematic Review.” Research Evaluation 30 (4): 470–483.
  • Mertkan, S., G. Onurkan Aliusta, and N. Suphi. 2021b. “Knowledge Production on Predatory Publishing: A Systematic Review.” Learned Publishing 34 (3): 407–413.
  • Merton, R. 1942. “A Note on Science and Democracy.” Journal of Legal and Political Sociology 1:115–126.
  • Mula-Falcón, J., K. Caballero, and J. D. Segovia. 2022. “Exploring Academics’ Identities in Today’s Universities: A Systematic Review.” Quality Assurance in Education 30 (1): 118–134. https://doi.org/10.1108/QAE-09-2021-0152.
  • Müller, R., and S. de Rijcke. 2017. “Thinking with Indicators: Exploring the Epistemic Impacts of Academic Performance Indicators in the Life Sciences.” Research Evaluation 26 (3): 157–168. https://doi.org/10.1093/reseval/rvx023.
  • Olk, P., and T. L. Griffith. 2004. “Creating and Disseminating Knowledge Among Organizational Scholars: The Role of Special Issues.” Organization Science 15 (1): 120–129. https://doi.org/10.1287/orsc.1030.0055.
  • Oravec, J. A. 2020. “Academic Metrics and the Community Engagement of Tertiary Education Institutions: Emerging Issues in Gaming, Manipulation, and Trust.” Tertiary Education and Management 26 (1): 5–17. https://doi.org/10.1007/s11233-019-09026-z.
  • Oviedo-García, M. Á. 2021. “Journal Citation Reports and the Definition of a Predatory Journal: The Case of the Multidisciplinary Digital Publishing Institute (MDPI).” Research Evaluation 30 (3): 405–419. https://doi.org/10.1093/reseval/rvab020.
  • Pecorari, D. 2021. “Predatory Conferences: What Are the Signs?” Journal of Academic Ethics 19:334–361.
  • Petrou, C. 2020. “MDPI’s Remarkable Growth.” Scholarly Kitchen. August 10. https://scholarlykitchen.sspnet.org/2020/08/10/guest-post-mdpis-remarkable-growth/.
  • Petrou, C. 2023. “Of Special Issues and Journal Purges.” Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2023/03/30/guest-post-of-special-issues-and-journal-purges.
  • Power, M. 2020. “Playing and Being Played by the Research Impact Game.” In Gaming the Metrics: Misconduct and Manipulation in Academic Research, edited by M. Biagioli and A. Lippman, 57–66. Cambridge, MA: MIT Press.
  • Repiso, R., J. Segarra‐Saavedra, T. Hidalgo‐Marí, and V. Tur‐Viñes. 2021. “The Prevalence and Impact of Special Issues in Communications Journals 2015–2019.” Learned Publishing 34 (4): 593–601. https://doi.org/10.1002/leap.1406.
  • Sala, F. G., J. O. Lluch, F. T. Gil, and M. P. Ortega. 2017. “Characteristics of Monographic Special Issues in Ibero-American Psychology Journals: Visibility and Relevance for Authors and Publishers.” Scientometrics 112 (2): 1069–1077. https://doi.org/10.1007/s11192-017-2372-4.
  • Sandy, W., and H. Shen. 2019. “Publish to Earn Incentives: How Do Indonesian Professors Respond to the New Policy?” Higher Education 77 (2): 247–263. https://doi.org/10.1007/s10734-018-0271-0.
  • Sismondo, S. 2009. “Ghosts in the Machine: Publication Planning in the Medical Sciences.” Social Studies of Science 39 (2): 171–198. https://doi.org/10.1177/0306312708101047.
  • Teixeira da Silva, J. A. T., S. Nazarovets, T. Daly, and G. Kendall. 2024. “The Chinese Early Warning Journal List: Strengths, Weaknesses and Solutions in the Light of China’s Global Scientific Rise.” The Journal of Academic Librarianship 50 (4): 102898. https://doi.org/10.1016/j.acalib.2024.102898.
  • UNESCO. 2021. UNESCO Recommendation on Open Science. Paris: UNESCO.
  • van Dalen, H., and K. Henkens. 2012. “Intended and Unintended Consequences of a Publish-Or-Perish Culture: A Worldwide Survey.” Journal of the American Society for Information Science and Technology 63 (7): 1282–1293. https://doi.org/10.1002/asi.22636.
  • Vostal, F. 2015. “Speed Kills, Speed Thrills: Constraining and Enabling Accelerations in Academic Work-Life.” Globalisation, Societies and Education 13 (3): 295–314. https://doi.org/10.1080/14767724.2014.959895.
  • Xie, Y., K. Wang, and Y. Kong. 2021. “Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis.” Science and Engineering Ethics 27 (4): 41. https://doi.org/10.1007/s11948-021-00314-9.