ABSTRACT
The use of non-probability Internet panels and crowdsource websites is increasing in gambling research. These paid online sampling methods offer a convenient and inexpensive recruitment strategy; however, the quality of the resulting data may be questionable due to careless responding and identity misrepresentation, which can bias study results. Investigating data quality is necessary given the important role of gambling research in guiding policy decisions, public health initiatives, and treatments. In this review article, we 1) critically analyze the advantages and limitations of paid online recruitment methods, including associated threats to data quality in the gambling literature; 2) present findings from a rapid review of gambling studies using online panel and crowdsource data; and 3) outline recommendations for maximizing data quality and the trustworthiness of findings. Substantially overinflated problem gambling rates were found in the 63 gambling studies we reviewed, and fewer than one-quarter had incorporated data quality checks and reported participation rates. Future studies should incorporate pre-registration of methodology and analysis plans, robust participant screening procedures, mid-survey attention and response-consistency items, and an analysis of response quality after data collection. Applying these recommendations to non-representative online panel and crowdsource-based studies may enhance the replicability of findings in subsequent studies using representative samples.
Acknowledgements
The authors wish to thank Dr Brittany Keen and Dr Robert Heirene for reviewing a draft version of the manuscript and providing valuable feedback.
Disclosure statement
No potential conflict of interest was reported by the authors.
Supplementary material
Supplemental data for this article can be accessed here.
Notes
1. Some examples of commercial non-probability companies are Qualtrics, Ipsos, and Dynata (formerly Research Now).
2. The Replication Crisis is a systematic problem in many scientific fields where published study results have failed to reproduce in subsequent replication projects. Questionable research practices are held at least partly responsible for the replication crisis, which has led to concerns about the credibility and accuracy of several important scientific findings (Fidler & Wilcox, 2018).
Additional information
Notes on contributors
Dylan Pickering
Dylan Pickering, PhD, is a researcher with the Gambling Treatment & Research Clinic at the University of Sydney, directed by Associate Professor Sally Gainsbury. Dylan’s research interests include the conceptualisation and measurement of recovery in Gambling Disorder, as well as the role of self-exclusion programs and online digital technologies in supporting recovery.
Alex Blaszczynski
Alex Blaszczynski, BA, MA, Dip. Psych., PhD, MAPSs, is an Emeritus Professor and the former director of the Gambling Treatment Clinic at the University of Sydney. He is a clinical psychologist by training. In 1995, Dr Blaszczynski received the American Council of Problem Gambling Directors Award; in 2004, the National Center for Responsible Gambling senior investigator’s research award; in 2013, the NSW Government’s Responsible Gambling Fund’s excellence award for contributions to gambling; and in 2014, the American National Council on Responsible Gambling’s Lifetime Research Achievement award. He is the Editor-in-Chief of International Gambling Studies.