Opinion Piece

A call for replications of addiction research: which studies should we replicate and what constitutes a ‘successful’ replication?

Pages 89-97 | Received 04 Feb 2020, Accepted 31 Mar 2020, Published online: 01 May 2020

Abstract

Several prominent researchers in the problem gambling field have recently called for high-quality replications of existing gambling studies. This call should be extended to the entire field of addiction research: there is a need to ensure that the understanding of addiction and related phenomena gained through the extant literature is robust and replicable. This article discusses two important questions addictions researchers should consider before proceeding with replication studies: (1) which studies should we attempt to replicate, and (2) how should we interpret the findings of a replication study in relation to the original study? In answering these questions, the focus is placed on experimental research, though the discussion may still serve as a useful introduction to the topic of replication for addictions researchers using any methodology.

Acknowledgements

The author would like to thank Geoff Cumming, John Stapleton, Dylan Pickering, Peder Isager, Eric-Jan Wagenmakers and three anonymous reviewers for their helpful comments and suggestions for this article. The author also thanks Daniel Lakens and Peder Isager for kindly sharing the R code that was adapted to produce the figures presented here.

Disclosure statement

The author reports no conflicts of interest.

Data availability

Supplemental materials can be accessed via this project's Open Science Framework (OSF) project page: https://osf.io/5r7a9/. These include an annotated list of relevant articles and resources relating to the interpretation of replication outcomes, the data used in all calculations and figures presented here, the R code used to produce the figures, and the code for calculating effect sizes under the small telescopes approach (i.e., d33%).
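
For readers unfamiliar with the small telescopes benchmark, d33% is the effect size the original study would have had only 33% power to detect. The code shared on OSF is the authoritative version; purely as an illustration, and assuming the pwr package and a hypothetical original sample size, the benchmark can be obtained as follows:

# Hypothetical sketch (not the code shared on OSF): solve for the
# effect size the original study had only 33% power to detect.
library(pwr)

n_orig <- 40  # assumed per-group n of the original study
d_33 <- pwr.t.test(n = n_orig, power = 1/3, sig.level = 0.05,
                   type = "two.sample", alternative = "two.sided")$d
d_33  # replication estimates significantly below this benchmark argue
      # that the original study could not have detected the effect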

Notes

1 The focus here is on answering these questions rather than the “how?” and “why?” of replication, as the latter questions have received considerable attention in recent discussions (e.g., Wohl et al., 2019; Zwaan et al., 2018).

2 I use “addiction” here as shorthand for all addiction-related phenomena of interest, including harms, interventions, associated cognitions and so on.

3 This could be more formally achieved using the Altmetric service: https://www.altmetric.com.

4 The exact details of these calculations are shared on Open Science Framework: https://osf.io/5r7a9/.

5 Calculation: 100 − (.80 × .80 × 100) = 36. The probability of obtaining null effects can easily be calculated for different scenarios using the Shiny app provided by Lakens and Etz (2017): http://shiny.ieis.tue.nl/mixed_results_likelihood/
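
As an illustration of the arithmetic behind this footnote, the calculation generalizes to any number of equally powered, independent studies of a true effect. A minimal R sketch (the function name is hypothetical):

# Probability of observing at least one non-significant ("null")
# result across k independent studies of a true effect, each run
# with the same statistical power.
p_mixed <- function(power, k = 2) 1 - power^k

p_mixed(0.80)         # 0.36: two studies at 80% power, as above
p_mixed(0.80, k = 3)  # 0.488: three studies at 80% power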

6 Coles et al. (2018) recommend doing this in a pre-registered format, in collaboration with the authors of the original study wherever possible.

7 As an extensive review of all approaches is beyond the scope of this article, the interested reader is referred to an annotated list of relevant articles and resources shared on this project’s Open Science Framework (OSF) page: https://osf.io/5r7a9/

8 Cumming (2008) has further calculated that only 12% of the variance in replication ps can be explained by the original p value, with only very small ps (i.e., p < .001) providing useful predictive information about future replications.
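
This variability is easy to demonstrate by simulation. The sketch below is not Cumming's original code, and the per-group sample size and true effect are arbitrary values chosen for illustration:

# Spread of p values across 10,000 exact replications of a two-group
# study with a true effect of d = 0.5 and n = 32 per group.
set.seed(1)
n <- 32
d <- 0.5
p_rep <- replicate(10000, {
  x <- rnorm(n, mean = 0)
  y <- rnorm(n, mean = d)
  t.test(x, y)$p.value
})
quantile(p_rep, probs = c(.10, .50, .90))  # p spans orders of magnitude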

9 See Morey et al. (2016) for an important discussion of the fallacies that surround confidence intervals.

10 This is because the original parameter estimate and its confidence intervals may be a poor indication of the true population value, particularly if the sample size(s) studied were small.

11 This is not to say that addictions researchers should not conduct single replication studies, but rather that replication outcomes—like those of the original study—can be subject to a variety of influences (e.g., questionable research practices, biases [conscious and unconscious], contextual factors, statistical power, sampling variation etc.). Each study is valuable in the same way that individual pieces contribute to a puzzle—giving some but not complete insight into what the overall picture looks like. The more pieces (studies), the clearer the overall picture (the true effect).

12 The two one-sided tests (TOST) procedure can also be adapted for use by those wanting to employ the small telescopes approach.
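
A minimal base-R sketch of the TOST logic on a raw mean difference (the function name, data, and equivalence bounds are hypothetical, and this is not the implementation of any particular package):

# Two one-sided tests (TOST) for equivalence: 'low' and 'high' are
# the raw-score equivalence bounds. Equivalence is concluded only if
# BOTH one-sided tests reject (i.e., the observed difference lies
# reliably inside the bounds).
tost_raw <- function(x, y, low, high, alpha = 0.05) {
  p_lower <- t.test(x, y, mu = low,  alternative = "greater")$p.value
  p_upper <- t.test(x, y, mu = high, alternative = "less")$p.value
  list(p_lower = p_lower, p_upper = p_upper,
       equivalent = max(p_lower, p_upper) < alpha)
}

# Example: simulated replication data with bounds of +/-0.5 raw units
set.seed(2)
tost_raw(rnorm(50), rnorm(50), low = -0.5, high = 0.5)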

13 Cohen’s d effect size was used to set boundaries here (hence the difference in scales along the x-axes in the plots), though raw difference scores can also be used for this purpose.
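
For reference, Cohen's d for two independent groups is the standardized mean difference:

d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}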

14 For an excellent discussion of “who should replicate” and how we can align incentives to encourage replication studies, see Romero (2018).

15 These simply reflect hypothetical questions one may ask—determining whether an effect exists and estimating an effect are not the exclusive goals of direct and conceptual replication studies, respectively.
