
Replication is fundamental, but is it common? A call for scientific self-reflection and contemporary research practices in gambling-related research

Pages 362-368 | Received 05 Sep 2019, Accepted 22 Sep 2019, Published online: 30 Sep 2019

Researchers around the world have observed that in many fields the published peer-reviewed literature reflects a widespread publication bias that favours statistically significant and novel outcomes (e.g. Nosek & Lakens, 2014). These preferences relate to the replication crisis first uncovered by the Many Labs research initiative (Open Science Collaboration, 2015) (see Note 1). This initiative noted that successful replication occurred for about 40% of the effects tested from 100 published social science studies in select high-impact journals (i.e. Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition). Other studies have identified similar concerns. For example, a high-powered replication test (i.e. sample sizes approximately five times larger than the original publication sample sizes) of 21 social science publications from the journals Science and Nature observed a significant effect in the same direction for roughly 62% of findings; however, the effect sizes for those outcomes were about 50% of those in the original publications (Camerer et al., 2018). Other disciplines (e.g. artificial intelligence, medicine, economics, and marketing) also have identified similarly troubling replication rates or concerns (Berman, Pekelis, Aisling, & Van den Bulte, 2018; Camerer et al., 2016; Hutson, 2018; Kaiser, 2017). Altogether, these findings suggest that replication and publication bias are important issues that gambling researchers would be wise to investigate.

Researchers have identified a variety of likely methodological contributors to the replication crisis. Researcher degrees of freedom (see Wicherts et al., 2016), for example, consist of potentially unprincipled data analysis decisions and practices that favour methods and techniques that yield statistically significant outcomes (Mackinnon, 2013; Simmons, Nelson, & Simonsohn, 2011). One possible expression of this practice is failing to be transparent about searching for moderators in the absence of hypothesized main effects (e.g. testing for gender or age differences when a proposed main effect does not reach statistical significance). A second example is selectively reporting outcome variables (e.g. dropping non-significant measures that ‘do not work’ from a manuscript and reporting only significant findings). Another form of researcher degrees of freedom involves selecting analytic approaches that are more likely to yield statistically significant findings (e.g. opting to report one-tailed tests in the absence of well-founded directional hypotheses).
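To illustrate why such flexibility matters, the following is a minimal simulation sketch, not drawn from any of the cited papers; the number of studies, sample size, number of outcomes, and random seed are illustrative assumptions. When two groups come from the same population but several outcomes are tested and only the ‘best’ result is reported, the chance of a spurious significant finding rises well above the nominal 5%.

```python
# Minimal sketch: how one researcher degree of freedom (testing several
# outcomes and reporting whichever reaches significance) inflates false
# positives. Both groups are sampled from the SAME distribution, so every
# significant result is a type I error. All values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_studies, n_per_group, n_outcomes = 5000, 30, 4
false_positives = 0

for _ in range(n_studies):
    control = rng.normal(0, 1, size=(n_per_group, n_outcomes))
    treatment = rng.normal(0, 1, size=(n_per_group, n_outcomes))
    # 'Flexible' analysis: test every outcome, keep the smallest p value.
    p_values = [stats.ttest_ind(treatment[:, k], control[:, k]).pvalue
                for k in range(n_outcomes)]
    if min(p_values) < 0.05:
        false_positives += 1

print(f"False positive rate with selective reporting: "
      f"{false_positives / n_studies:.1%}")  # roughly 18-19%, not 5%
```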

Analytic decisions by researchers are not the only possible contributors to poor replication rates. HARKing (i.e. hypothesizing after the results are known; Kerr, 1998) occurs when researchers write their manuscripts with post hoc hypotheses as if these were developed a priori. This is problematic because the practice might advance and highlight findings that reflect type I errors. Likewise, small-N studies (i.e. studies with small sample sizes that risk both type I and type II errors; Anderson, Kelley, & Maxwell, 2017; Button et al., 2013; Wolf, Harrington, Clark, & Miller, 2013) dominate many areas of the published literature and therefore suggest a scientific foundation that is inherently shaky. Finally, even if researchers avoid methodological problems such as these, a biased literature might emerge due to publication bias during peer review (e.g. rejecting methodologically sound papers that report null findings or replication studies because they do not contribute something original to the literature; see Ferguson & Heene, 2012) (see Note 2).
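A brief simulation can likewise make the small-N problem concrete. The sketch below uses illustrative assumptions (a true standardized effect of 0.2, 20 participants per group, and an arbitrary seed) rather than values from any cited study; it shows that when an underpowered study does cross the significance threshold, the published estimate tends to exaggerate the true effect, which is consistent with replications recovering smaller effects than the originals.

```python
# Minimal sketch: significant results from small, underpowered studies
# overestimate the true effect. The effect size, sample size, and number of
# simulated studies below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
true_d, n_per_group, n_studies = 0.2, 20, 20000
significant_estimates = []

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:
        # Population SDs are 1, so the mean difference approximates d.
        significant_estimates.append(treatment.mean() - control.mean())

print(f"True effect: d = {true_d}")
print(f"Mean estimate among 'significant' studies: "
      f"d = {np.mean(significant_estimates):.2f}")  # typically around 0.7
```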

Although Many Labs’ replication findings, and others like them, were shocking to many researchers and remain the subject of continuing debate in academic and non-academic circles (e.g. Adam, 2019; Gilbert, King, Pettigrew, & Wilson, 2016; Kupferschmidt, 2018; Palmer, 2016), this situation has ushered in a new era of contemporary research practices designed to improve the likelihood of reproducible and replicable behavioural research. For instance, many social science researchers now engage in research pre-registration. Pre-registration at websites such as the Open Science Framework (https://osf.io), AsPredicted (https://aspredicted.org), and clinicaltrials.gov consists of proactive public documentation of research ideas, hypotheses, methods, and analytic plans prior to data collection. The idea is that, by preparing and publicizing their research plans, researchers will think through their research design and analytic approaches in advance of data collection; this strategy reduces the likelihood of even unintended researcher degrees of freedom decisions that increase the chance of identifying non-replicable findings. A similar activity is the use of registered reports: pre-registration documents that themselves undergo peer review before any research starts. Papers that follow an accepted registered report are published regardless of the statistical significance of their findings, though they might be rejected for other reasons (e.g. not following the registered research plan or misrepresenting results). In addition to these research development practices, contemporary practices include an increased preference for open data, open research materials/notebooks, and open publication practices. Many sites that host pre-registration documents also provide space and support for these other open science practices. Finally, replicability experts have suggested that researchers place greater emphasis upon designs that have appropriate statistical power for the effects under consideration, with general recommendations for using increased sample sizes (Fraley & Vazire, 2014; Loken & Gelman, 2017).
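As a concrete illustration of what adequate statistical power implies for sample size, the short sketch below uses the statsmodels library with conventional benchmarks (Cohen’s d of 0.2, 0.5, and 0.8; 80% power; alpha of .05); these values are illustrative choices rather than recommendations from the cited authors.

```python
# Minimal sketch: per-group sample sizes needed for an independent-samples
# t-test at 80% power and alpha = .05, across conventional effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):  # Cohen's d: small, medium, large
    n_required = analysis.solve_power(effect_size=effect_size,
                                      alpha=0.05, power=0.80,
                                      alternative='two-sided')
    print(f"d = {effect_size}: about {n_required:.0f} participants per group")
```

Under these assumptions, detecting a small effect requires several hundred participants per group, far more than typical small-N designs provide.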

To date, the extent to which gambling researchers have used the research practices that contributed to the replication crisis in other fields is unclear; there has been no systematic examination of the field for these practices. However, there is no compelling reason to assume that gambling researchers have avoided the common pitfalls that led to the replication crisis. Unfortunately, a gambling literature that is possibly littered with small-scale, non-pre-registered studies that liberally employ researcher degrees of freedom during design and analysis might very well be at risk for poor replication rates. The consequences of such a situation could be substantial. Gambling research often directly informs public gambling policies and programmes; therefore, a replicability crisis in the gambling field risks misdirecting attention and resources towards poorly supported actions. Scientific self-reflection related to this possibility is therefore paramount.

In some ways, the gambling research field is inching towards contemporary research practices that provide space for optimism. At the time of writing this editorial, there were 41 research registrations on the Open Science Framework (www.osf.io) for studies that included the search term ‘gambling’. This is promising. However, compared with the number of gambling-related publications that become available annually (e.g. there were roughly 916 PsycINFO citation search results including the keyword ‘gambling’ during 2018), the much smaller number of pre-registrations suggests that a concerted effort to normalize rigorous and transparent research practices is needed to move the field responsibly forward.

Beyond the small number of publicly available research registrations, other open science practices also are available. For example, roughly 10 years ago the Division on Addiction (Division) created and published the Transparency Project (www.thetransparencyproject.org; Shaffer et al., 2009), a web-based open data archive. There, the Division makes available the datasets for its published papers (especially those funded by private interests) and has extended an open invitation to others to include their own datasets. To date, researchers have used our archived data to examine gambling-related topics (e.g. Brosowski, Meyer, & Hayer, 2012; Coussement & De Bock, 2013; Percy, Franca, Dragicevic, & d’Avila Garcez, 2016). However, we have not received any inquiries from others regarding inclusion of their own original data in the archive. The Transparency Project and other data archives help investigators to replicate findings, create confidence about published outcomes, and accelerate scientific advancement. Greater use of such resources should become standard for the field. Currently, the Division is working to further advance its open science practices by committing to pre-register its new research projects (e.g. https://osf.io/tnseq/) and engaging in education activities (i.e. attending bi-annual trainings and holding monthly seminars devoted to open science practices) that will allow the Division to be recognized as an open science institution (see Note 3).

Although gambling researchers have yet to adopt contemporary research practices on a large scale, other fields continue to advance these practices, which makes a focus on open science in gambling research all the more urgent. Ironically, research suggests that prediction markets – in which participants bet for or against the successful replication of each study – can predict whether psychology studies will replicate (Yong, 2018). Specifically, one study observed that prediction markets accurately estimated the replication status of 29 of 41 tested studies (Dreber et al., 2015). The next steps for advancing contemporary research practices include support from the Defense Advanced Research Projects Agency (DARPA) to use artificial intelligence programs to improve upon what prediction markets can do and identify the likely reproducibility of research publications with a higher degree of accuracy (Center for Open Science, 2019; Rogers, 2019; Russell, 2019).

Despite the rapid advances that are occurring in other areas, there remain a number of more fundamental questions that gambling researchers must grapple with to ensure that the foundation upon which the field progresses is stable and valid. To start, to what extent does the gambling research field struggle with publication bias? Such bias is potentially extensive. Research that chronicled and ranked a variety of disciplines, from space science to psychology, suggested that, as scientific disciplines become progressively ‘softer,’ the likelihood of publishing results that confirm hypotheses increases (Fanelli, 2010). It is important to understand where gambling research stands with respect to such publication tendencies. Next, to what extent are open science practices employed across the field? To best understand this, I suggest that researchers engage in systematic reviews that assess the field’s standards and practices – specifically with respect to open science and replication. Finally, have there been systematic efforts to establish replication rates for key gambling research findings, particularly for experimental studies? Although so-called conceptual replication (i.e. testing a study’s hypothesis with a different methodological approach) might be a preferred replication strategy for behavioural research generally, as Nussbaum noted, ‘There is no substitute for direct replication – if you cannot reproduce the same result using the same methods then you cannot have a cumulative science’ (Nussbaum, 2012). Therefore, estimating actual replication rates for our most compelling ideas is important to verifying the strength of gambling research evidence. Although answering these questions might create some disciplinary angst, doing so ultimately will facilitate the emergence of a stronger science of gambling and gambling-related problems.

So, where do we go from here? The Many Labs research initiative and the similar efforts it has stimulated (e.g. Many Babies, Many Classes, Many Primates) (see Note 4) have directed more attention towards research fundamentals that are essential to a replicable scientific literature. What might this look like for gambling research? Perhaps the answer is a Many Casinos research replication project. Although the history of other such efforts suggests that a replication initiative might initially be met with some resistance, the Division is interested in pursuing such work in collaboration with other groups and doing so using contemporary open science practices.

At least two major benefits might emerge from a Many Casinos initiative. First, a research replicability initiative would provide a test of the gambling field’s robustness. Such replication might not be glamorous, but it is essential to assessing the value of gambling-related work. Second, advancing open science practices can provide a mechanism for increased confidence in gambling research. A recent Pew report observed increased public trust in research that uses modern practices and data sharing (Funk, Hefferon, Kennedy, & Johnson, 2019). This confidence is especially important given that the gambling research field currently is struggling with funding-related questions and concerns (e.g. Blaszczynski, 2018; Cowlishaw & Thomas, 2018). One unintended benefit of advancing open science practices, therefore, might be to help the field identify a satisfactory approach to research that is supported by diverse funding sources, including industry funding. Other addiction-related disciplines that currently are attempting to manage divergent and strongly held beliefs also recognize that these practices might help advance the development of an unbiased literature (Munafò, 2019; Przybylski, Weinstein, & Murayama, 2017). In sum, it is time for gambling-related research to self-reflect and elevate the value of fundamental research practices, including research replication and open science. Without open science and systematic research replication efforts, the value of the gambling research field remains unclear and the potential for a research crisis of our own making remains high.

Conflict of interest

I have no financial interests in the content discussed in this editorial. The Division on Addiction currently receives funding from DraftKings, Inc.; The Foundation for Advancing Alcohol Responsibility (FAAR); The Healing Lodge of the Seven Nations via the Indian Health Service, with funds approved by the National Institute of General Medical Sciences, National Institutes of Health; The Integrated Centre on Addiction Prevention and Treatment of the Tung Wah Group of Hospitals; the Gavin Foundation via the Substance Abuse and Mental Health Services Administration (SAMHSA); University of Nevada, Las Vegas via MGM Resorts International; and GVC Holdings, PLC. During the past 5 years, I have received speaker honoraria and travel support from the National Centre for Responsible Gaming and the National Collegiate Athletic Association. I am a volunteer board member of the New Hampshire Council on Problem Gambling.

Constraints on publishing

There are no contractual constraints on publishing this editorial. The funder did not review the topic or editorial prior to publishing.

Acknowledgements

I am grateful for comments and feedback from Howard J. Shaffer, Eric R. Louderback, and Sarah E. Nelson.

Additional information

Funding

I completed this editorial with funding support from GVC Holdings, PLC via a research grant to the Division on Addiction. GVC Holdings, PLC did not contribute to the development of this editorial.

Notes

1. See https://osf.io/89vqh/ for more information.

2. These brief descriptions just scratch the surface of possible contributors to the replication crisis. For a more thorough review, please see Wicherts et al. (2016).

3. For more information on becoming an open science institution, visit: https://osf.io/institutions.
