Original Articles

Measuring Practitioner Attitudes Toward Evidence-Based Treatments: A Validation Study

Pages 166-183 | Published online: 30 Mar 2011

Abstract

A better understanding of clinicians’ attitudes toward evidence-based treatments (EBTs) will presumably enhance the transfer of EBTs for substance-abusing adolescents from research to clinical application. The reliability and validity of two measures of therapist attitudes toward EBTs were examined: the Evidence-Based Practice Attitude Scale (Aarons, 2004) and the Attitudes Toward Psychotherapy Treatment Manuals Scale (Addis & Krasnow, 2000). Participants included 543 public sector, master's-level mental health and substance abuse therapists who treat adolescents. Factor analyses generally corroborated the factor structures found previously for these instruments. Beliefs that EBTs negatively affect the treatment process were associated with relatively low openness to new treatments and with beliefs that EBTs do not produce positive outcomes.

Acknowledgments

This manuscript was supported by grant R01DA17487 from the National Institute on Drug Abuse to Scott W. Henggeler. We are grateful to the directors of the DAODAS and DMH provider organizations for their support. State-level leadership was ably provided by Dr. George Gintoli, W. Lee Catoe, Louise Johnson, James Wilson, and Ruthie Johnson. We also acknowledge the hard work of the research staff: Kevin Armstrong, Ann Ashby, and Geneene Thompson.

Notes


Instead of looking solely at ICCs, L. K. Muthén (http://www.statmodel.com/discussion/messages/12/2620.html#POST16678) recommends calculating design effects, a statistic that gauges the likely impact of nonindependent data on statistical findings. Design effects (DEFFs) are based on ICCs and use the formula DEFF_i = 1 + (average cluster size − 1) × ICC_i. Although this formula does not specifically take differing cluster sizes into account, L. Muthén indicates that it gives a reasonable indication of whether ignoring nonindependence may produce biased results. Seven ATPTM items and four EBPAS items exceeded the recommended design effect cutoff of 2. Therefore, we opted for the conservative approach of calculating estimates that took the multilevel nature of the data into account.
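As an illustration, the minimal Python sketch below applies this screen: it computes a design effect for each item from its ICC and flags items exceeding the cutoff of 2. The average cluster size, item labels, and ICC values are hypothetical placeholders, not estimates from this study.

```python
# Sketch of the design-effect screen described in the note above.
# All numbers here are hypothetical placeholders, not study estimates.

AVG_CLUSTER_SIZE = 4.0  # assumed average number of therapists per agency


def design_effect(icc, avg_cluster_size=AVG_CLUSTER_SIZE):
    """Approximate design effect: DEFF = 1 + (average cluster size - 1) * ICC."""
    return 1.0 + (avg_cluster_size - 1.0) * icc


# Hypothetical per-item intraclass correlations.
item_iccs = {"ATPTM_07": 0.12, "ATPTM_11": 0.41, "EBPAS_03": 0.55}

# Items whose design effects exceed the recommended cutoff of 2.
flagged = {item: round(design_effect(icc), 2)
           for item, icc in item_iccs.items()
           if design_effect(icc) > 2.0}

print(flagged)  # {'ATPTM_11': 2.23, 'EBPAS_03': 2.65}
```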

A more customary approach would be to split the sample in half randomly and then conduct exploratory factor analyses with each subsample. With multilevel data, however, splitting the sample reduces the number of therapists within each agency, and some agencies might not be represented in a subsample at all, possibly distorting the results (see the sketch below). In light of these concerns, we checked the replicability of the solution by analyzing randomly split halves of the sample but omitting the nesting of therapists within agencies for the split samples. Ignoring agency membership in these analyses seemed warranted in that partitioning the variance in item responses for the sample as a whole indicated that most variance was attributable to items (86%) and therapists (14%), with almost none due to agencies (<1%). Analyses of the two subsamples showed identical factor structures and patterns of loadings comparable to those reported for the sample as a whole.
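For concreteness, here is a minimal sketch of the random split and the representation concern it raises, using an invented roster of therapists nested in agencies; the identifiers and counts are hypothetical, not the study's data.

```python
# Sketch of a random split-half check with therapists nested in agencies.
# Roster, identifiers, and counts are hypothetical.
import random

random.seed(0)

# Hypothetical roster: therapist id -> agency id (543 therapists, 30 agencies).
roster = {f"t{i:03d}": f"agency{i % 30}" for i in range(543)}

therapists = list(roster)
random.shuffle(therapists)
half = len(therapists) // 2
subsample_a, subsample_b = therapists[:half], therapists[half:]


def missing_agencies(subsample):
    """Agencies with no therapists in a given half (the concern noted above)."""
    represented = {roster[t] for t in subsample}
    return set(roster.values()) - represented


print(len(missing_agencies(subsample_a)), len(missing_agencies(subsample_b)))
# Each exploratory factor analysis would then be run separately on
# subsample_a and subsample_b, ignoring agency membership.
```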

Using multilevel procedures, we also calculated correlations among subscale scores rather than factor scores. In general, these correlations were smaller in magnitude, with the difference between subscale score and factor score correlations generally ranging between .03 and .06. For example, the correlation between the openness and appeal factors was .49, whereas the correlation between the corresponding subscale scores was .44. These differences would be expected because confirmatory factor analysis (CFA) procedures reduce the measurement error that contributes to subscale scores calculated by averaging item scores (see the illustration below). Using obtained subscale scores rather than factor scores in analyses did not change the substance of most conclusions, although a few of the weaker relationships were not significant when assessed with subscale scores.
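The attenuation at work here can be illustrated with the classical correction for measurement error, r_true = r_observed / sqrt(rel_x × rel_y). The reliabilities below are assumed values chosen only to show the direction and rough size of the gap; they are not this study's estimates.

```python
# Why factor correlations exceed subscale-score correlations: averaging
# items leaves measurement error in the scores, which attenuates observed
# correlations. Reliabilities here are assumed, not study estimates.
import math

r_subscale = 0.44                      # openness-appeal subscale correlation reported above
rel_openness, rel_appeal = 0.84, 0.80  # hypothetical subscale reliabilities

# Classical disattenuation formula.
r_corrected = r_subscale / math.sqrt(rel_openness * rel_appeal)
print(round(r_corrected, 2))  # 0.54 -- compare with the .49 factor correlation
```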

Subscale scores were used in these analyses instead of latent variables because these results are the most likely to be compared with those of studies in which subscale scores, rather than factor scores, are calculated and used. In addition, analyses of obtained scores (i.e., subscale scores) speak more directly to the validity of those scores as they are customarily used. Analyses of latent variables, in contrast, address the more generalized relationships among the constructs assessed by the items.

Interestingly, Addis and Krasnow (2000) also initially found a three-factor solution but forced a two-factor solution because the items that formed the third factor also cross-loaded on a second factor.
