References
- Adam, D. (2019). Psychology’s reproducibility solution fails first test. Science, 364(6443), 813.
- Anderson, S., Kelley, K., & Maxwell, S. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias. Psychological Science, 28(11), 1547–1562.
- Berman, R., Pekelis, L., Scott, A., & Van den Bulte, C. (2018). p-Hacking and false discovery in A/B testing. Retrieved from SSRN. doi:10.2139/ssrn.3204791
- Blaszczynski, A. (2018). Responsible gambling: The need for collaborative government, industry, community and consumer involvement. Sucht, 64(5–6), 307–315.
- Brosowski, T., Meyer, G., & Hayer, T. (2012). Analyses of multiple types of online gambling within one provider: An extended evaluation framework of actual online gambling behavior. International Gambling Studies, 12(3). doi:10.1080/14459795.2012.698295
- Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafo, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.
- Camerer, C. F., Dreber, A., Forsell, E., Ho, T., Huber, J., Johannesson, M., … Wu, H. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436.
- Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2, 637–644.
- Center for Open Science. (2019). Can machines determine the credibility of research claims? The Center for Open Science joins a new DARPA program to find out. Retrieved from https://cos.io/about/news/can-machines-determine-credibility-research-claims-center-open-science-joins-new-darpa-program-find-out/
- Coussement, K., & De Bock, K. W. (2013). Customer churn prediction in the online gambling industry: The beneficial effect of ensemble learning. Journal of Business Research, 66, 1629–1636.
- Cowlishaw, S., & Thomas, S. L. (2018). Industry interests in gambling research: Lessons learned from other forms of hazardous consumption. Addictive Behaviors, 78, 101–106.
- Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., … Johannesson, M. (2015). Using prediction markets to estimate the reproducibility of scientific research. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 112(50), 15343–15347.
- Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS One, 5(4), e10068.
- Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7(6), 555–561.
- Fraley, R. C., & Vazire, S. (2014). The N-pact factor: Evaluating the quality of empirical journals with respect to sample size and statistical power. PLoS One, 9(10), e109019.
- Funk, C., Hefferon, M., Kennedy, B., & Johnson, C. (2019). Trust and mistrust in Americans’ views of scientific experts. Retrieved from https://www.pewresearch.org/science/2019/08/02/trust-and-mistrust-in-americans-views-of-scientific-experts/
- Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on “Estimating the reproducibility of psychological science”. Science, 351(6277), 1037.
- Hutson, M. (2018). Artificial intelligence faces reproducibility crisis. Science, 359(6377), 725–729.
- Kaiser, J. (2017). Cancer studies pass reproducibility test. Science. doi:10.1126/science.aan7016
- Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217.
- Kupferschmidt, K. (2018). A recipe for rigor. Science, 361(6408), 1192–1193.
- Loken, E., & Gelman, A. (2017). Measurement error and the replication crisis. Science, 355(6325), 584–585.
- Mackinnon, S. (2013). Researcher degrees of freedom in data analysis. Retrieved from http://osc.centerforopenscience.org/2013/12/18/researcher-degrees-of-freedom/
- Munafo, M. R. (2019). E-cigarette research needs to adopt open science practices to improve quality. Addiction. doi:10.1111/add.14749
- Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141.
- Nussbaum, D. (2012, May). Conceptual replication. The Psychologist. Retrieved from http://thepsychologist.bps.org.uk/volume-25/edition-5/replication-replication-replication
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
- Palmer, K. M. (2016, March 3). Psychology is in crisis over whether it’s in crisis. Wired. Retrieved from https://www.wired.com/2016/03/psychology-crisis-whether-crisis/
- Percy, C., Franca, M., Dragicevic, S., & d’Avila Garcez, A. (2016). Predicting online gambling self-exclusion: An analysis of learning models. International Gambling Studies, 16(2), 193–210.
- Przybylski, A. K., Weinstein, N., & Murayama, K. (2017). Open scientific practices are the way forward for internet gaming disorder research: Response to Yao et al. The American Journal of Psychiatry, 174, 487.
- Rogers, A. (2019, February 15). DARPA wants to solve science’s reproducibility crisis with AI. Wired. Retrieved from https://www.wired.com/story/darpa-wants-to-solve-sciences-replication-crisis-with-robots/
- Russell, A. (2019). Systematizing confidence in open research and evidence (SCORE). Retrieved from https://www.darpa.mil/program/systematizing-confidence-in-open-research-and-evidence
- Shaffer, H. J., LaPlante, D. A., Chao, Y. E., Planzer, S., LaBrie, R. A., & Nelson, S. E. (2009, March). Division on addictions creates new data repository. World Online Gambling Law Report, 8(3), 7.
- Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
- Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1–12.
- Wolf, E. J., Harrington, K. M., Clark, S. L., & Miller, M. W. (2013). Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educational and Psychological Measurement, 73(6), 913–934.
- Yong, E. (2018, August 27). Online bettors can sniff out weak psychology studies. The Atlantic. Retrieved from https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/