ABSTRACT
The “what works” movement has gained increasing recognition among correctional practitioners and traction within correctional settings. It is now well established that, in order to obtain meaningful recidivism reductions, programs must maintain fidelity to the principles of effective intervention. Despite more than four decades of research supporting the Risk-Need-Responsivity model, little is presently known about how well correctional programs have adopted and implemented evidence-based correctional practices on a larger scale. The current study examines the results from an extensive collection of correctional program assessments completed across the United States over the course of 14 years. Results offer practitioners and researchers insight into the current state of fidelity to evidence-based practices and highlight areas for improvement in correctional programming.
Acknowledgments
The authors gratefully acknowledge the many UCCI staff who were involved in the collection of CPC assessment data used for the current study. Additionally, the authors would like to acknowledge all of the corrections staff across the country who worked to conduct CPC assessments of their agencies and returned their scoresheets.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. Readers can reference Duriez et al. (2018) for a more detailed account of the creation of both the CPAI and the CPC.
2. Of note, analyses were performed on individual years, but the findings did not differ. Results are reported in the manner described in the text for ease of interpretation and in consideration of space limitations. Sample sizes of CPC assessments for each two-year interval were as follows: 86 in 2005–2006, 83 in 2007–2008, 91 in 2009–2010, 64 in 2011–2012, 84 in 2013–2014, 93 in 2015–2016, and 62 in 2017–2018.
3. A provision of the CPC training process stipulates that only government entities can be trained in the CPC.
4. The term “program” is used throughout the remainder of this paper to represent the different agencies/programs/facilities that were assessed and provided a CPC score.
5. This determination was made after consultation with the assessors who completed the site visits.
6. In 2015, Latessa and colleagues updated the CPC. The update combined related items (e.g., whether a program was assessing for risk factors, whether the tool being used was standardized and objective, and whether the tool produced a risk level or score were all condensed into a single item). Additionally, two items were added during the update. This resulted in a new total of 73 items worth 79 points (see Duriez et al., 2018). The updated version is not described above as the default because 82.59% (n = 465) of the assessments included in the analyses for this paper were conducted using the original tool.
7. Tests of equal variances among groups dictated which post hoc test was used for each ANOVA. This footnote also applies to pairwise comparisons presented in .