Editorial

New Year, New Initiatives for the Journal of Sex Research


In this editorial we are sharing details of an exciting new initiative for the journal – the publication of registered reports – along with updates on JSR’s statistics, metrics, and editorial board.

Registered Reports for Pre-Research Peer-Review Feedback and Editorial Decisions

With the onset of the Gregorian New Year comes a new submission format available to our authors: JSR has now joined the 300+ journals offering Registered Reports (RR; https://www.cos.io/initiatives/registered-reports). For those of us attuned to the discourse on replicability and open science, the emergence of support for RRs – an article type introduced nearly 10 years ago (Chambers, 2013; see Chambers & Tzavella, 2020 for a review) – at a mainstream sexuality journal may be met with some amount of excitement or dismissal, depending on one’s opinions. These readers may wish to skip to the subsection “The Vision of Registered Reports at JSR.” For most other readers of JSR, however, I suspect that the term “Registered Report” is fairly new, and some explanation is required about what RRs are and aren’t, what makes this kind of submission unique, and why one might consider it a desirable publishing avenue for their research. It is for these readers that I provide an overview in the next two subsections.

What are (And Aren’t) Registered Reports?

For those who are mostly or entirely new to RRs, I think it is often clarifying to draw a distinction between two frequently confused, but distinct, terms:

  • Preregistration: A preregistration (sometimes referred to informally as a “prereg,” other times a “registration”) is an unalterable, date/time-stamped document (or set of documents), filed before data collection has begun (using freely available online platforms, like the Open Science Framework and AsPredicted), that researchers use to verify (to others and/or themselves) that some element(s) of their research process – often including, but not limited to, predictions, measures and procedures, and analytic strategies – were determined a priori (for a review, see Nosek et al., 2018). Researchers usually provide a public link to their preregistration that accompanies the affiliated manuscript during peer review and/or publication.

  • Registered Report: An RR is a multi-stage article submission format that requires the use of preregistration and other novel editorial features (e.g., pre-data-collection peer review, transparent and high-evidential-value methods, acceptance “in principle,” commitments to publish null effects), in order to produce results-agnostic, credible research findings. Researchers submit a proposal consisting of an Introduction and Methods (referred to as the Stage 1 [or “S1”] Report), which is then peer-reviewed before data collection takes place. Helpful suggestions from reviewers are incorporated into an updated proposal, and if all goes well, an Editor gives the submission an “in-principle acceptance” (or “IPA”) – a commitment to publish the article regardless of what pattern of results is observed. The researcher then files a preregistration of the IPA’d S1 Report, conducts the research as planned, and submits an updated report with Results and Discussion (referred to as the Stage 2 [or “S2”] Report). So long as the researcher has maintained fidelity to the IPA’d protocol and presents a reasonable interpretation of their results (whatever they may be), the final RR is published.

With these definitions in mind, I offer three contrasts drawn between creating a preregistration (i.e., to preregister) and pursuing an RR:

First, to preregister requires no approval from any external party; it is entirely permissionless (not even a researcher’s own collaborators could stop them from preregistering). Registered Reports, in comparison, require a variety of approvals, including the permission of coauthors to submit a manuscript (an S1 or S2 Report) to the journal, and the approval of the reviewers and editors; if an S1 is insufficiently compelling, or an S2 departs from an IPA’d protocol or includes outlandish interpretations of results, then the RR can be rejected.

Second, to preregister at no point offers an author any guarantee of any editorial outcome, nor does it oblige a journal to any particular course of action. Journals vary in their support, enthusiasm, and incentivization for preregistration. Those preregistering must therefore face the uncertainty of the outcome of a standard peer review process, and risk having their technical research decisions rejected by their reviewers. In contrast, although Registered Reports involve some risk of rejection at the S1 Report (i.e., proposal) stage, once an S1 is IPA’d, researchers enjoy a much lower degree of uncertainty and risk regarding editorial decisions. If authors stay true to the IPA’d protocol, and nothing procedurally cataclysmic occurs (e.g., a randomizer failure in an experiment), then their expenditures of time, energy, and funding will find realized value, as their journal has committed to publishing their findings.

Third, and finally, to preregister offers no scholarly quality control before unalterable, binding scientific decisions have been made, which leads to ambiguities of accountability when things go poorly in the research and publication process. Brilliant studies can be preregistered as easily as bad studies. Moreover, journals often do not – or cannot – guarantee that preregistrations will be reviewed alongside their manuscripts. This makes it difficult to ensure that (un)intentional discrepancies between preregistration and submitted manuscript – whether trivial or critical – will be detected, reported, and explained. And when such discrepancies are later detected by readers, it is unclear which of the author(s), reviewers, and/or editor ought to be held accountable. RRs, meanwhile, ensure some minimal level of a priori quality control over the preregistered methodology and analyses featured in published research (because protocols are reviewed and improved prior to preregistration), and clearly position the reviewers and editor as responsible for ensuring fidelity between the preregistered protocol (i.e., the IPA’d S1 Report) and the final manuscript (i.e., the S2 Report) – indeed, this is one of the primary goals emphasized at the stage of reviewing S2 Reports.

It may seem apparent to the reader – and it is fair to say – that I am much more bullish on the benefits of Registered Reports (which include preregistration) to the researcher, reviewers, Editor, and our field than I am on “DIY preregistration” (i.e., preregistration without the other features of Registered Reports). This was not always the case, but it is a good thing that minds can change. My newfound preference for Registered Reports is further bolstered by their other unique benefits, which I now briefly detail.

Why Consider a Registered Report?

Some of the selling points of RRs are spelled out in my contrasting them against “DIY-preregistration.” Other benefits of RRs may be less clear.

Feeling Good During Peer Review

For example, though the evidence is anecdotal, many authors submitting RRs and reviewers appraising them describe the reviewing process as taking on a more constructive and positive vibe than traditional peer review. No longer must authors guess at what methodological and analytical decisions will later appease reviewers; reviewers will tell them in time for the authors to act on this information. This, in turn, transforms the perception of reviewers from (sometimes antagonizing) gatekeepers whom authors must mind-read or (if making decisions reviewers disagree with) outwit in their responses, into one of anonymous collaborators.

Meanwhile, because reviewers’ suggestions arrive in time to be incorporated into how the research is carried out, reviewers often find that authors are more open-minded about, and less defensive toward, suggestions for alternative and improved ways of doing things. I can attest to experiencing both feelings in the Registered Reports I have conducted and those that I have reviewed. And in my view, opportunities to improve the way that peer review functions and feels for both authors and reviewers are opportunities worth exploring!

More Certain Outcomes for Trainee-Led Research

Another motivation for considering RRs is when conducting research with trainees whose tolerance of publication uncertainty may be low, or whose timeline of availability is short. Honors students and graduate students on non-research-oriented training tracks, for example, may struggle to see a project through to publication when the chances of publication are unknown (or known and low) and the reception of their work is uncertain. Even research-oriented trainees can only stick with trying to publish a project for so long before it becomes rational to move on to greener pastures.

Through RRs, assurances could be secured for trainees in advance of their efforts (via IPA of an S1 Report), which may help them secure the motivation (and invest and protect the time) needed to see the completion and dissemination of their research through to publication. Integrating RRs as an option into thesis and dissertation workflows could be a gamechanger – though one surely requiring changes to institutional timelines and oversight – as trainees often already go through a proposal stage for their projects. With the approval of their supervisory committees, these defended proposals could be submitted as S1 Reports, improved through peer-review feedback, and then carried out. No longer would students need to feel a sense of dread over the prospect of non-significant focal effects in their theses and dissertations, as they could have publications (with high-quality null effects) to show for their considerable efforts.

When Controversial Lightning Strikes

A final instance in which RRs might be a particularly desirable publication format is when researchers are pursuing projects whose results may be particularly controversial or galvanizing within their fields. Such projects may, vis-à-vis the attention they generate, come under greater scrutiny, and it could be of considerable benefit to have such findings published as RRs, given some of their methodological features.

Indeed, in order for an RR to be published, the reviewers and the editor must have reached some degree of agreement that a given study was important, well designed, and likely to provide an informative outcome – whatever that outcome may be – otherwise the S1 Report would not have received an IPA. This, from my perspective, ought to generate a stronger sense of collective investment and good faith around the published article, and take certain attributions off the table: protocols were improved and agreed to in advance, S1 and S2 Reports were checked for adherence, data are either available or were evaluated in review, and so on. Although RRs do not guarantee the replicability of the findings or certify the published interpretations of a given set of effects, they should eliminate some of the more cynical possibilities from the minds of the readership.

With these characteristics in mind, I see RRs as a particularly promising publication mechanism through which some of the most pressing and complex questions in our field could be addressed. Further, these questions need not be pursued by individuals stanning one particular theoretical perspective or camp. Multi-lab RRs could be phenomenal sites for adversarial collaborations (see Nosek & Errington, 2020), whereby researchers who disagree must come to terms in advance (during preparation of an S1 Report) on what would constitute a persuasive design, and on which patterns of effects would corroborate or disconfirm their respective theoretical positions. There are, to be sure, many low-hanging fruit – questions of fieldwide importance that lack consensus on their replicability and/or meaning – that would be excellent candidates for this kind of RR (e.g., Conley et al., 2011 vs. Schmitt et al., 2012).

When Not to Consider a Registered Report?

The RR format has generated concerns among some researchers; some of these are quite understandable, whereas others are a bit flimsier (Chambers & Tzavella, 2020 do a good job of reviewing and dispelling some of the more frequent worries). This is not to say, however, that Registered Reports are (yet) suitable for all kinds of research. In my view, one type of paper – quantitative, but exploratory in nature – is likely to be perennially ill-suited to the RR approach, while another type – qualitative papers – may not yet be reconcilable with the RR approach under certain conditions.

Exploratory Quantitative Research

While RRs can, and often do, contain exploratory analyses of secondary importance (see Chambers & Tzavella, 2020), the modus operandi of RRs is to impose some a priori restrictions on methodological and analytic possibilities, in order to perform a high-evidential-value confirmatory test of one or more pre-specified focal effects. Thus, RRs are not particularly compatible with studies by researchers who have a collection of variables in which they are interested, but who are unsure of the variables’ structural relations, which they might attempt to better understand through data-driven exploration. My intent here is not to malign this kind of research – description and exploration are important goals of science – but simply to suggest that if one were to jettison the features of RRs that are irreconcilable with a primarily exploratory approach, one would be left with an article that would be difficult to justify as sufficiently “Registered Report-like” in its essence.

Qualitative Research

It does not yet seem clear to me – either within or beyond sexual science – what consensus has emerged around how simpatico and/or beneficial RRs are to qualitative researchers. The methods reform movement has been slow to consider the needs of its qualitative-leaning colleagues (see Lorenz & Holland, 2020). But given some of the movement’s values and definitions (e.g., certain forms of “reproducibility failures” in Nosek et al., 2021), a qualitative researcher of a certain philosophical perspective (e.g., one with a constructionist view, who sees researcher positionality as a feature, and not a reproducibility bug, in the research process) would, in my opinion, be entirely reasonable to view some methods reform pursuits as epistemologically hostile (see Sakaluk, 2021). I therefore think it best to let the dust settle from the discourse and see what organic appetite for RRs emerges from qualitative researchers considering JSR for their articles.

The Vision of Registered Reports at JSR

I see the addition of RRs at JSR as offering sexual scientists a new way – not better or worse than traditional publishing – of conducting and publishing their research. Traditional publishing offers researchers tremendous flexibility and autonomy that may lead to spontaneous discovery, at the expense of publication certainty and potential self-/other-deception if research flexibility is entirely unrestrained or not disclosed transparently. RRs, meanwhile, offer researchers some efficiency in resource expenditure and reduced outcome uncertainty while pursuing a high-evidential-value test of a critical hypothesis, at the expense of “bureaucratic tennis” and (many) more constraints on their autonomy. Both approaches (and others) have their place; methodological pluralism is desirable in a healthy science (a point made particularly effectively by Devezer et al., 2019), and there are reasonable concerns about the utility and meaning of blanket adoption of proposals for methodological reform (e.g., Ledgerwood, 2018; Szollosi et al., 2019). So let us find out, at JSR, how useful a format this is for the unique needs of sexual scientists.

At launch, I am tremendously proud to say that JSR will support four different kinds of RRs, in an attempt to make RRs worthy of consideration across some of the varied designs and scholarly goals within the sexual science community. These four supported RR types are:

  1. RRs of novel hypotheses/effects

  2. RRs attempting direct replications of effects published in JSR (à la Srivastava’s “Pottery Barn Rule,” 2012)

  3. RRs focused on measurement development and validation, and

  4. RRs focused on smaller-N designs (e.g., Smith & Little, 2018), with samples from particular sexuality-relevant communities of interest

RR reviewing procedures are better established for the first two of these types than for the latter two, but I see tremendous upside in emerging opportunities in RRs for good measurement-focused work (see also Samuel, 2019), and I would sincerely lament if RRs became a tool that estranged researchers doing valuable community-oriented work from publishing in JSR. And so, though we are a bit “late” to the RR game, we are attempting to be quick out of the gate in supporting these up-and-coming RR formats, while offering some innovative RR features (e.g., piloting a Rapid Review system for particularly high-quality and time-sensitive S1 Reports; Chambers & Tzavella, 2020). Interested readers can learn more on the Instructions for Authors page of the JSR website – https://www.tandfonline.com/action/authorSubmission?show=instructions&journalCode=hjsr20 – where you can find brief and detailed instructions for RR submissions (between the “Formatting and Templates” and “References” sections).

Let’s Get Started

I look forward to the next three years – my term limit as Registered Reports Editor – and to seeing what sexual scientists can make of the RR formats. Meanwhile, over the coming months, I’ll continue to do my best to increase awareness of RRs at JSR, to provide training on navigating the format (such as during our webinar later in January), and to promote the excellent S1 and S2 Reports that we accept. In the meantime, those with questions about RRs at JSR can reach me through the format’s new editorial e-mail account ([email protected]) or on Twitter (@JohnSakaluk) if they are looking for a more dynamic and public exchange.

Updates on Submissions, Metrics, and Editorial Board

Submissions and Disposition

By the end of 2021, we had received 603 new submissions to the journal. Submissions have nearly doubled in the last ten years (we received 311 new manuscripts in 2011). Despite the rise in the number of submissions, our mean turnaround time (time from submission to first decision letter) has remained fairly stable in recent years and is currently 64 days. Our current acceptance rate is 19%.

Also, partly in response to the increased number of submissions, the Associate Editors and I have revisited the aims and scope of the journal. The primary aim of JSR (“to stimulate research and promote an interdisciplinary understanding of the diverse topics in contemporary sexual science”) remains unchanged. However, because a large share of submissions are desk-rejected (56.8%), mainly because their focus is not on sexuality, we have more clearly delineated our scope on the journal website (https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=hjsr20) by adding this text: “JSR considers papers on the topic of gender when they include focus on some aspects of sexuality. Similarly, we consider papers on the topic of HIV/AIDS and sexually transmitted infections when they include focus on some aspects of sexuality. JSR does not consider papers that validate translations of existing measures.”

Regarding types of submissions, we consider empirical reports, brief reports (no specified word limit), theoretical essays, review articles, methodological articles, commentaries, letters to the editor, and now registered reports. In keeping with our interdisciplinary focus, we welcome manuscripts on research using diverse methodologies. We receive many submissions based on qualitative or mixed-methods approaches and, unlike many journals, do not have a fixed maximum page count (or minimum sample size); 16% of the articles published in the last 10 years have reported on studies using qualitative or mixed-methods approaches.

As stated in the Aims and Scope, submissions from researchers outside of North America are particularly welcomed. In the last ten years, 35% of the articles we have published have included at least one author from outside of North America.

Metrics

In an editorial published in 2008 (the year I started as Editor; Graham, 2008), two of my stated goals were to increase the visibility of JSR and maintain the high quality of the articles published. I think we are making good progress toward meeting these goals. In recent years we have seen a very large increase in the number of article downloads. The most recent figures available from the publisher show a 42% increase in year-to-date usage (download figures) from 2020 to 2021. Altmetric scores for JSR articles have also risen, with the most recent figures showing 21,565 mentions for research articles published in the journal in the last year.

The citation metrics for the journal have shown a linear increase over the last ten years. The traditional 2-year Impact Factor (IF) is calculated as the total number of citations received in a particular year to articles published in the previous two years, divided by the number of articles published by the journal in those two years. Our 2-year IF increased in 2020 from 3.683 to 5.141, and our 5-year IF from 4.213 to 5.672. Our rankings in the two categories in which JSR is included were #5/110 for Social Sciences, Interdisciplinary and #18/131 for Psychology, Clinical. It should be noted that last year Clarivate changed the IF calculation by including early-online papers (those published online but not yet assigned to a volume/issue) in the numerator of the IF calculation, but not in the denominator. This has inflated the IF for any journal that publishes online ahead of print, as most journals now do, but the large increase in our IF is unlikely to be solely due to this change and, as discussed below, our other metrics are also strong.
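To make the arithmetic of the 2-year IF – and of the numerator-only inclusion of early-online papers – concrete, here is a minimal sketch using hypothetical citation counts (not JSR’s actual data):

```python
# 2-year Impact Factor: citations received this year to items published in
# the previous two years, divided by the citable items published in those
# two years. All numbers below are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years, items_published):
    return citations_to_prior_two_years / items_published

# Hypothetical journal: 200 citable items over two years, 1,000 citations.
baseline_if = impact_factor(1000, 200)  # 5.0

# Under the revised calculation, citations to early-online papers (say, 100
# of them) join the numerator, but those papers are not yet assigned to a
# volume/issue, so the denominator stays at 200, inflating the IF.
inflated_if = impact_factor(1000 + 100, 200)  # 5.5
```

The asymmetry is plain from the sketch: extra citations enter the numerator with no offsetting growth in the denominator.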

A new metric introduced this year is the Journal Citation Indicator (JCI). The JCI calculation controls for different fields, document types (articles, reviews, etc.), and year of publication. The resulting number represents the relative citation impact of a given paper as the ratio of its citations to a global baseline. A value of 1.0 represents the world average, with values higher than 1.0 denoting higher-than-average citation impact (2.0 being twice the average) and values lower than 1.0 indicating less-than-average impact. Our JCI this year was 2.18, which means that JSR content is more than twice as well cited as content of a similar subject, age, and article type.
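The normalization idea behind the JCI can be sketched as follows – this is a simplified illustration (each article’s citations divided by a field/type/year baseline, then averaged) with hypothetical numbers, not Clarivate’s exact procedure:

```python
# Simplified sketch of a field-normalized journal indicator: divide each
# article's citation count by the expected (baseline) citations for its
# field, document type, and year, then average across articles.
# All numbers are hypothetical, for illustration only.

def normalized_citation_indicator(citations, baselines):
    normalized = [c / b for c, b in zip(citations, baselines)]
    return sum(normalized) / len(normalized)

# Three hypothetical articles, each compared against its own baseline:
value = normalized_citation_indicator([12, 6, 9], [4.0, 4.0, 6.0])  # 2.0
```

A resulting value of 2.0 would correspond to content cited twice as often as the world average for comparable content, matching the interpretation given above.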

Another fairly new metric is the Scopus CiteScore. The CiteScore reflects the yearly average number of citations to recent articles published in a journal (based on citations recorded in the Scopus database), calculated over articles published in the preceding 4 years. The 2020 CiteScore for JSR was 6.8 (compared with 6.5 in 2019). Our CiteScore placed JSR 1st out of 155 journals in Gender Studies.
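Using hypothetical figures (not Scopus’s actual counts for JSR), the CiteScore arithmetic amounts to a simple ratio over the 4-year window:

```python
# CiteScore (sketch): citations recorded in Scopus to documents published
# in the preceding 4-year window, divided by the number of documents
# published in that window. Hypothetical numbers, for illustration only.

def cite_score(citations_in_window, documents_in_window):
    return citations_in_window / documents_in_window

# Hypothetical journal: 340 documents over 4 years, 2,312 citations.
score = cite_score(2312, 340)  # 6.8
```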

Editorial Board

In response to the growing number of submissions, we have been growing the editorial board. In 2021, Marta Meana (University of Nevada, Las Vegas) and Kelly Davis (Arizona State University) joined the team as Associate Editors. In January 2022 we welcomed Amanda Gesselman (Indiana University and The Kinsey Institute) as a new statistical consultant, and we have 17 new Consulting Editors joining the board (Serena Corsini-Munt, Daphne van de Bongardt, George Gaither, Sam Hughes, Chelsea Kilimnik, Mark McCormack, Todd Morrison, Tom Nadarzynski, Zoe Peterson, Myeshia Price, Qazi Rahman, Eric Schrimshaw, Charlotte Tate, Rose Wesche, Keon West, Val Wongsomboon, and Melanie Zimmer-Gembeck).

The editorial board at JSR comprises a large team and I want to extend my gratitude to the seven Associate Editors (Kelly Davis, Christian Grov, Osmo Kontula, Marta Meana, Lucia O’Sullivan, Margaret Rosario, and Brian Willoughby); the Annual Review of Sex Research Editor (Kirstin Mitchell) and our two statistical consultants (Devon Hensel and John Sakaluk – now Registered Reports Editor). The work involved in all these roles is significant and as for everyone, the last year has been especially challenging.

Thank you also to the hard-working Consulting Editors on the Board and to our reviewers. Our contacts at the publishers – Rachel Wilson, Jason Jones, Melody Harris, and Lucia Garavaglia – provide amazing support. Last, but not least, thank you to Dawn Laubach and Sarah Erskine at SSSS for all of their work in supporting the journal.

Acknowledgments

We wish to thank Drs. Neil Lewis Jr., Roger Giner-Sorolla, and Jessica Wood for their helpful feedback on Registered Report types, submission guidelines, and this editorial.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

References

  • Chambers, C. D. (2013). Registered reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
  • Chambers, C., & Tzavella, L. (2020). The past, present, and future of registered reports. Nature Human Behaviour. https://doi.org/10.1038/s41562-021-01193-7
  • Conley, T. D., Moors, A. C., Matsick, J. L., Ziegler, A., & Valentine, B. A. (2011). Women, men, and the bedroom: Methodological and conceptual insights that narrow, reframe, and eliminate gender differences in sexuality. Current Directions in Psychological Science, 20(5), 296–300. https://doi.org/10.1177/0963721411418467
  • Devezer, B., Nardin, L. G., Baumgaertner, B., & Buzbas, E. O. (2019). Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. PLoS ONE, 14(5), e0216125. https://doi.org/10.1371/journal.pone.0216125
  • Graham, C. A. (2008). Editorial. Journal of Sex Research, 45(1), 1. https://doi.org/10.1080/00224490701848528
  • Ledgerwood, A. (2018). The preregistration revolution needs to distinguish between predictions and analyses. Proceedings of the National Academy of Sciences, 115(45), E10516–E10517. https://doi.org/10.1073/pnas.1812592115
  • Lorenz, T. K., & Holland, K. J. (2020). Response to Sakaluk (2020): Let’s get serious about including qualitative researchers in the open science conversation. Archives of Sexual Behavior, 49(8), 2761–2763. https://doi.org/10.1007/s10508-020-01851-3
  • Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
  • Nosek, B. A., & Errington, T. M. (2020). The best time to argue about what a replication means? Before you do it. Nature, 583(7817), 518–520. https://doi.org/10.1038/d41586-020-02142-6
  • Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., Fidler, F., Hilgard, J., Kline, M., Nuijten, M. B., Rohrer, J., Romero, F., Scheel, A., Scherer, L., Schönbrodt, F., & Vazire, S. (2021). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73(1), 719–748. https://doi.org/10.1146/annurev-psych-020821-114157
  • Sakaluk, J. K. (2021). Response to commentaries on Sakaluk (2020). Archives of Sexual Behavior, 50(5), 1847–1852. https://doi.org/10.1007/s10508-021-02020-w
  • Samuel, D. B. (2019). Incoming editorial. Assessment, 26(2), 147–150. https://doi.org/10.1177/1073191118822760
  • Schmitt, D. P., Jonason, P. K., Byerley, G. J., Flores, S. D., Illbeck, B. E., O’Leary, K. N., & Qudrat, A. (2012). A reexamination of sex differences in sexuality: New studies reveal old truths. Current Directions in Psychological Science, 21(2), 135–139. https://doi.org/10.1177/0963721412436808
  • Smith, P. L., & Little, D. R. (2018). Small is beautiful: In defense of the small-N design. Psychonomic Bulletin & Review, 25(6), 2083–2101. https://doi.org/10.3758/s13423-018-1451-8
  • Szollosi, A., Kellen, D., Navarro, D., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2019). Is preregistration worthwhile? Trends in Cognitive Sciences, 24(2), 94–95. https://doi.org/10.1016/j.tics.2019.11.009
