Methodological Quality and Validity Issues in the Crime Prevention Literature

Pages 120–143 | Received 18 Feb 2021, Accepted 20 Aug 2021, Published online: 04 Nov 2021

References

  • Berk, R., Barnes, G., Ahlman, L., & Kurtz, E. (2010). When second-best is good enough: A comparison between a true experiment and a regression discontinuity quasi-experiment. Journal of Experimental Criminology, 6(2), 191–208. https://doi.org/10.1007/s11292-010-9095-3
  • Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.
  • Cook, T. D. (2014). Generalizing causal knowledge in the policy sciences: External validity as a task of both multiattribute representation and multiattribute extrapolation. Journal of Policy Analysis and Management, 33(2), 527–536. https://doi.org/10.1002/pam.21750
  • Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957
  • Eck, J. (2001). Learning from experience in problem-oriented policing and crime prevention: The positive function of weak evaluations and the negative function of strong ones. Crime Prevention Studies, 14, 93–117.
  • Eisner, M. (2009). No effects in independent prevention trials: Can we reject the cynical view? Journal of Experimental Criminology, 5(2), 163–183. https://doi.org/10.1007/s11292-009-9071-y
  • Fagan, A., & Buchanan, M. (2016). What works in crime prevention? Comparison and critical review of three crime prevention registries. Criminology & Public Policy, 15(3), 617–649. https://doi.org/10.1111/1745-9133.12228
  • Farrington, D. P. (1983). Randomized experiments on crime and justice. In M. Tonry & N. Morris (Eds.), Crime and Justice (Vol. 4, pp. 257–308). Chicago: University of Chicago Press.
  • Farrington, D. P. (2003). Methodological quality standards for evaluation research. The Annals of the American Academy of Political and Social Science, 587(1), 49–68. https://doi.org/10.1177/0002716202250789
  • Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2002a). The Maryland Scientific Methods Scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (pp. 13–21). New York, NY: Routledge.
  • Farrington, D. P., Loeber, R., Yin, Y., & Anderson, S. J. (2002b). Are within-individual causes of delinquency the same as between-individual causes? Criminal Behaviour and Mental Health, 12(1), 53–68. https://doi.org/10.1002/cbm.486
  • Farrington, D. P., & Petrosino, A. (2000). Systematic reviews of criminological interventions: The Campbell Collaboration Crime and Justice Group. International Annals of Criminology, 38, 49–66.
  • Farrington, D. P., & Petrosino, A. (2001). The Campbell Collaboration Crime and Justice Group. The Annals of the American Academy of Political and Social Science, 578(1), 35–49. https://doi.org/10.1177/000271620157800103
  • Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., … Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6(3), 151–175. https://doi.org/10.1007/s11121-005-5553-y
  • Gies, S., Healy, E., & Stephenson, R. (2020). The evidence of effectiveness: Beyond the methodological standards. Justice Evaluation Journal, 3(2), 155–177.
  • Gill, C. (2011). Missing links: How descriptive validity impacts the policy relevance of randomized controlled trials in criminology. Journal of Experimental Criminology, 7(3), 201–224. https://doi.org/10.1007/s11292-011-9122-z
  • Goldkamp, J. (2008). Missing the target and missing the point: “Successful” random assignment but misleading results. Journal of Experimental Criminology, 4(2), 83–115. https://doi.org/10.1007/s11292-008-9052-6
  • Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlations for planning group randomized experiments in rural education. Journal of Research in Rural Education, 22, 1–15.
  • Horne, C. S. (2017). Assessing and strengthening evidence-based program registries' usefulness for social service program replication and adaptation. Evaluation Review, 41(5), 407–435. https://doi.org/10.1177/0193841X15625014
  • Lazarsfeld, P., & Barton, A. (1982). Some functions of qualitative analysis in social research. In P. Kendall (Ed.), The varied sociology of Paul Lazarsfeld (pp. 239–285). New York: Columbia University Press.
  • Lösel, F., & Köferl, P. (1989). Evaluation research on correctional treatment in West Germany: A meta-analysis. In H. Wegener, F. Lösel, & J. Haisch (Eds.), Criminal behavior and the justice system (Research in Criminology). Berlin, Heidelberg: Springer.
  • Maruna, S. (2011). Mixed method research in criminology: Why not go both ways? In A. Piquero & D. Weisburd (Eds.), Handbook of quantitative criminology (pp. 123–140). New York, NY: Springer.
  • Miller, J. H., & Miller, H. V. (2015). Rethinking program fidelity for criminal justice. Criminology & Public Policy, 14(2), 339–349. https://doi.org/10.1111/1745-9133.12138
  • Morgan, C., Day, D., & Petrosino, P. (2017). Review of evidence-based registries relevant to crime prevention. Ottawa: Public Safety Canada.
  • Murray, J., Farrington, D. P., & Eisner, M. P. (2009). Drawing conclusions about causes from systematic reviews of risk factors: The Cambridge Quality Checklists. Journal of Experimental Criminology, 5(1), 1–23. https://doi.org/10.1007/s11292-008-9066-0
  • National Institute of Justice. (2021). Research and evaluation on the administration of justice, fiscal year 2021 [Request for proposal]. Washington, DC: Author.
  • Nelson, M., Wooditch, A., & Dario, L. (2015). Sample size, effect size, and statistical power: A replication study of Weisburd’s paradox. Journal of Experimental Criminology, 11(1), 141–163. https://doi.org/10.1007/s11292-014-9212-9
  • Perry, A. (2011). Descriptive validity and transparent reporting in randomised controlled trials. In A. Piquero & D. Weisburd (Eds.), Handbook of quantitative criminology (pp. 333–352). New York, NY: Springer.
  • Perry, A., & Johnson, M. (2008). Applying the consolidated standards of reporting trials (CONSORT) to studies of mental health provision for juvenile offenders: A research note. Journal of Experimental Criminology, 4(2), 165–185. https://doi.org/10.1007/s11292-008-9051-7
  • Perry, A., Weisburd, D., & Hewitt, C. (2010). Are criminologists describing randomized controlled trials in ways that allow us to assess them? Findings from a sample of crime and justice trials. Journal of Experimental Criminology, 6(3), 245–262. https://doi.org/10.1007/s11292-010-9099-z
  • Petrosino, A. (1995). The hunt for randomized experimental reports: Document search and efforts for a “what works?” meta-analysis. Journal of Crime and Justice, 18(2), 63–80. https://doi.org/10.1080/0735648X.1995.9721049
  • Petrosino, A. (2003). Standards for evidence and evidence for standards: The case of school-based drug prevention. The Annals of the American Academy of Political and Social Science, 587(1), 180–207.
  • Petrosino, A., & Boruch, R. (2014). Evidence-based policy in crime and justice. In G. Bruinsma & D. Weisburd (Eds.), Encyclopedia of criminology and criminal justice. New York, NY: Springer.
  • Petrosino, A., & Soydan, H. (2005). The impact of program developers as evaluators on criminal recidivism: Results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology, 1(4), 435–450. https://doi.org/10.1007/s11292-005-3540-8
  • Ryan, R. (2013). Cochrane Consumers and Communication Review Group: Data synthesis and analysis. Retrieved from http://cccrg.cochrane.org
  • Sampson, R. J. (2010). Gold standard myths: Observations on the experimental turn in quantitative criminology. Journal of Quantitative Criminology, 26(4), 489–500. https://doi.org/10.1007/s10940-010-9117-3
  • Schindler, H., & Yoshikawa, H. (2016). How to identify and communicate what works in evaluation science. Criminology & Public Policy, 15(3), 661–667. https://doi.org/10.1111/1745-9133.12238
  • Sherman, L., Gottfredson, D., MacKenzie, D., Eck, J., Reuter, P., & Bushway, S. (1997). Preventing crime: What works, what doesn’t, what’s promising. Washington, DC: US Department of Justice.
  • Sherman, L. W., & Strang, H. (2004). Verdicts or interventions? Interpreting results from randomized controlled trials in criminology. American Behavioral Scientist, 47(5), 575–607. https://doi.org/10.1177/0002764203259294
  • Sullivan, G., & Feinn, R. (2012). Using effect size—Or why the P value is not enough. Journal of Graduate Medical Education, 4(3), 279–282. https://doi.org/10.4300/JGME-D-12-00156.1
  • Tolan, P. (2014). Making and using lists of empirically tested programs: Value for violence interventions for progress and impact. In Institute of Medicine and National Research Council (Ed.), The evidence for violence prevention across the lifespan and around the world: Workshop summary (pp. 94–113). Washington, DC: The National Academies Press.
  • Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: The moral imperative for randomized trials. Evaluation Review, 27(3), 336–354. https://doi.org/10.1177/0193841X03027003007
  • Weisburd, D. (2010). Justifying the use of non-experimental methods and disqualifying the use of randomized controlled trials: Challenging the folklore in evaluation research in crime and justice. Journal of Experimental Criminology, 6(2), 209–227. https://doi.org/10.1007/s11292-010-9096-2
  • Weisburd, D., Lum, C., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The Annals of the American Academy of Political and Social Science, 578(1), 50–70. https://doi.org/10.1177/000271620157800104
  • Weisburd, D., Lum, C., & Yang, S. (2003). When can we conclude that treatments or programs “don’t work”? The Annals of the American Academy of Political and Social Science, 587(1), 31–48. https://doi.org/10.1177/0002716202250782
  • Weisburd, D., Petrosino, A., & Mason, G. (1993). Design sensitivity in criminal justice experiments. Crime and Justice, 17, 337–379. https://doi.org/10.1086/449216
  • Welsh, B. C., Farrington, D. P., & Gowar, B. R. (2015). Benefit-cost analysis of crime prevention programs. Crime and Justice, 44(1), 447–516. https://doi.org/10.1086/681556
