Evidence Base Update

Assessment and the Journal of Clinical Child and Adolescent Psychology’s Evidence Base Updates Series: Evaluating the Tools for Gathering Evidence


Abstract

In 2014, Michael Southam-Gerow and Mitch Prinstein launched the Evidence Base Updates series. As invited contributors, authors of Evidence Base Updates articles offer the field an invaluable resource: regular evaluations of the latest data on tools for addressing the mental health needs of children and adolescents. Until now, authors of Evidence Base Updates articles have focused exclusively on evaluating treatment techniques. In this article, we outline how the Evidence Base Updates series will evolve to also include evaluations of assessment techniques. In our treatment-focused updates, contributors follow strict criteria when evaluating the evidence. Following these criteria allows authors of Evidence Base Updates articles to provide mental health professionals with clear “take-home messages” about the evidence underlying the treatments evaluated. Similarly, we outline the criteria that authors will follow when preparing Evidence Base Updates articles that evaluate assessments. We also highlight the formats of these articles, which will include evaluations of condition-focused measures (e.g., anxiety, conduct problems); transdiagnostic constructs (e.g., parenting, rumination); specific, widely used measures that cut across conditions; and updates on field-wide considerations regarding measurement (e.g., clinical utility, incremental validity).

Over the past few decades, a defining feature of the movement toward evidence-based mental health care practices has involved developing and implementing standards for evaluating research focused on testing clinical techniques (Nathan & Gorman, 2007). A central goal of these efforts is to provide mental health professionals in policy, practice, and research settings with clear conclusions on which techniques meet or surpass acceptable thresholds of evidentiary support, based on the available research (Weisz & Kazdin, 2010). Implementing evaluative standards and identifying clinical techniques that meet these standards allow mental health professionals to distinguish techniques that could be considered “evidence-based practices” from those that either await careful evaluation or have undergone evaluations that resulted in subthreshold evidentiary support (Chambless & Ollendick, 2001).

When it comes to techniques focused on addressing the mental health needs of children and adolescents, the Journal of Clinical Child and Adolescent Psychology (JCCAP) stands at the forefront of this movement. Beyond publishing scores of empirical articles evaluating clinical techniques, JCCAP devotes considerable attention to keeping the field apprised of the ever-growing evidence base on these techniques. Until recently, this attention took the form of research syntheses published as collections of articles in the journal, namely, as special issues (Lonigan, Elbert, & Johnson, 1998; Mash & Hunsley, 2005; Silverman & Hinshaw, 2008). As noted in a recent editorial (De Los Reyes, 2017), articles in these special issues historically served as required reading for trainees, clinicians, policymakers, and researchers alike. Yet the special issue format proved ill-suited for keeping the field apprised of the evidence base surrounding clinical techniques. This is because the evidence base grows at a rate that quickly outpaces the “sell-by” date of the research covered in the special issues. Indeed, a decade’s time separates the publication of the two major JCCAP special issues focused on treatment (Lonigan et al., 1998; Silverman & Hinshaw, 2008). Think about the hundreds of articles published between 1998 and 2008 that reported the findings of controlled trials of treatments!

Consequently, 4 years ago Southam-Gerow and Prinstein (2014) launched the Evidence Base Updates article series. As invited contributors, authors of Evidence Base Updates articles offer the field an invaluable resource: regular evaluations of the latest data on tools for addressing the mental health needs of children and adolescents. These articles cover a large swath of the field, addressing techniques for concerns as diverse as anxiety (Higa-McMillan, Francis, Rith-Najarian, & Chorpita, 2016), exposure to trauma (Dorsey et al., 2017), neurodevelopmental conditions (Evans, Owens, Wymbs, & Ray, 2018; Evans, Sarno Owens, & Bunford, 2014; Smith & Iadarola, 2015), depression (Weersing, Jeffreys, Do, Schwartz, & Bolano, 2017), substance use (Hogue, Henderson, Ozechowski, & Robbins, 2014), eating disorders (Lock, 2015), and externalizing concerns (Dopp, Borduin, Rothman, & Letourneau, 2017; Kaminski & Claussen, 2017; McCart & Sheidow, 2016).

In support of the growing influence these articles have on the field, we report data about the impact of Evidence Base Updates articles in Tables 1 and 2 and Figure 1. For comparative purposes, Table 1 includes citation data for all articles from the Evidence Base Updates series (2014–2017), articles from a second JCCAP invited article series, Future Directions (2012–2017), and all articles from JCCAP published within the time frame encompassing these two series (2012–2017). Similarly, in Table 2 we report 2-year impact factors for each of these article types, for those years where impact factor data either were available or could be calculated from the available citation data. In Figure 1, we report downloads in 2017 via the Taylor and Francis website for all JCCAP articles published between 2012 and 2017.

TABLE 1 Number of Annual Citations for all Articles in the Journal of Clinical Child and Adolescent Psychology and Evidence Base Updates and Future Directions Series (2012–2017)

TABLE 2 Impact Factors for all Articles in the Journal of Clinical Child and Adolescent Psychology and Articles in the Evidence Base Updates and Future Directions Series (2012–2016)

FIGURE 1 Downloads in 2017 of articles published between 2012 and 2017 in the Journal of Clinical Child and Adolescent Psychology (JCCAP). Note: We report download figures for all JCCAP articles downloaded at least once in 2017 (n = 436), as well as separate figures for articles in the Evidence Base Updates (published in volumes from 2014 to 2017; n = 16) and Future Directions (published in volumes from 2012 to 2017; n = 31) article series. Download data were provided to us by representatives of the Taylor and Francis Group, LLC.

In these data, we see that articles from the Evidence Base Updates series have seen substantial year-over-year increases in citation rates (Table 1). In Table 2, we see that in 2016 the Evidence Base Updates articles achieved an impact factor (i.e., number of citations in 2016 for Evidence Base Updates articles published in 2015 and 2014) that is several times higher than JCCAP’s overall impact factor from that same year. In fact, in the 2016 Journal Citation Reports of journals in the “Psychology” category, the only journals with an impact factor higher than that of the Evidence Base Updates articles were the Psychological Bulletin (16.79) and the Annual Review of Psychology (19.95). In the “Clinical Psychology” category, the Evidence Base Updates articles had a higher 2016 impact factor than the top-ranked journal in that category, the Annual Review of Clinical Psychology (12.13).
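For readers less familiar with this metric, the 2-year impact factor underlying these comparisons follows the standard Journal Citation Reports definition; for 2016, it is the simple ratio

\[
\mathrm{IF}_{2016} = \frac{\text{citations received in 2016 by articles published in 2014 and 2015}}{\text{number of articles published in 2014 and 2015}}
\]

computed here over the Evidence Base Updates articles rather than over the journal as a whole.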

Of course, citation and impact factor data only show the influence these articles have on researchers. Thus, the download data reported in Figure 1 greatly complement the citation data, in that download figures are not specific to interest in these articles stemming from a particular constituency or group (e.g., clinicians, parents, policymakers, researchers). In Figure 1 we see that, collectively, people downloaded JCCAP articles published between 2012 and 2017 (n = 436) over 100,000 times, averaging roughly 237 downloads per article. Among these articles, the 16 Evidence Base Updates articles published between 2014 and 2017 were downloaded over 33,000 times, averaging over 2,100 downloads per article.

We see clear signs that readers find great value in articles from the Evidence Base Updates series. The purpose of this article is to provide a road map for how JCCAP will add further value to the series. Indeed, as valuable to the field as the Evidence Base Updates series has been, it currently provides incomplete coverage of clinical techniques used by mental health professionals who work with children and adolescents. This is because all of the articles to date in the Evidence Base Updates series have focused on evaluating treatment techniques. Not a single article has focused on evaluating assessment techniques. Think about this: What do you need to determine the extent to which a treatment “works” in addressing the needs of a child or adolescent in your care? Evidence! That’s assessment: It’s the “evidence” in “evidence-based treatment”! Our confidence in the ability of treatments to improve the lives of the children and adolescents in our care should be only as strong as the quality of the tools we use to understand their concerns, plan their courses of treatment, and evaluate the effects of these treatments.

In this article, we delineate the criteria that contributors to the Evidence Base Updates series will follow when preparing articles that focus on evaluating assessments. We also describe the formats of these articles. Importantly, the formats of assessment-focused articles in the series will share some similarities with the treatment-focused articles in the series but will also differ in important ways.

CRITERIA FOR EVALUATING THE TOOLS FOR GATHERING EVIDENCE

At JCCAP we encourage the rigorous scientific study of child and adolescent mental health and we strive to make this research accessible and useful to clinicians and researchers. Our previous special issues on evidence-based assessments (see Mash & Hunsley, 2005) and the current Evidence Base Updates series exemplify these efforts. In line with these efforts, we expect our assessment-focused Evidence Base Updates to adopt specific criteria for evaluating the state of the science on clinical assessments. Specifically, the criteria set forth by Youngstrom and colleagues (2017), extending those initially established by Hunsley and Mash (2008), are clear and objective and thus permit consistent evaluation across measures and constructs. Next we summarize the development of these criteria.

Following several efforts to establish criteria to evaluate assessment instruments (e.g., Robinson, Shaver, & Wrightsman, 1991), Hunsley and Mash (2008, 2018) created a rubric to rate the psychometric properties of assessment instruments across nine categories: (a) norms, (b) internal consistency, (c) interrater reliability, (d) test–retest reliability, (e) content validity, (f) construct validity, (g) validity generalization, (h) treatment sensitivity, and (i) clinical utility. Each category includes a description of the quality of evidence required for a rating of adequate (minimal level of scientific rigor), good (solid scientific support), or excellent (extensive, high-quality support). Recently, Youngstrom and colleagues (2017) expanded upon the Hunsley and Mash rubric, adding categories of repeatability, discriminative validity, and prescriptive validity. We report in Table 3 the complete rubric for the evaluation of psychometric properties.

TABLE 3 Rubric for Evaluating Norms, Validity, and Utility (Hunsley & Mash, 2008, Extended by Youngstrom et al., 2017)
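To make the structure of the rubric concrete, the sketch below encodes its evaluation categories and rating levels as a simple Python data structure. The category and rating names come from the text above; the code itself (function and variable names, the example measure) is our illustrative scaffolding, not part of the published rubric.

```python
from enum import Enum

class Rating(Enum):
    ADEQUATE = "adequate"      # minimal level of scientific rigor
    GOOD = "good"              # solid scientific support
    EXCELLENT = "excellent"    # extensive, high-quality support

# Categories from Hunsley & Mash (2008), as extended by Youngstrom et al. (2017).
CATEGORIES = [
    "norms", "internal consistency", "interrater reliability",
    "test-retest reliability", "repeatability", "content validity",
    "construct validity", "discriminative validity",
    "validity generalization", "treatment sensitivity",
    "prescriptive validity", "clinical utility",
]

def summarize(ratings: dict) -> None:
    """Print each rubric category with its rating, or a placeholder if unrated."""
    for category in CATEGORIES:
        rating = ratings.get(category)
        print(f"{category}: {rating.value if rating else 'not yet evaluated'}")

# Hypothetical usage: partial ratings for an imaginary anxiety measure.
summarize({"norms": Rating.GOOD, "internal consistency": Rating.EXCELLENT})
```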

The rubric serves as the foundation for articles within the Evidence Base Updates series on assessment, but we encourage authors to extend their discussion beyond a justification of their ratings. For article formats that report on many instruments, authors should discuss the overall quality of available instruments on the listed psychometric categories. For instance, when evaluating measures of anxiety, individual instruments may vary in the degree to which they are sensitive to treatment-related change. However, the entire body of anxiety measures may demonstrate, as a whole, good or excellent ability to reflect changes in anxiety across time. For article formats that report on a smaller number of instruments, authors should discuss, in greater depth, any major and influential findings that have been based on a specific instrument, populations for which the instrument has particularly strong or weak support, and any cultural considerations when using the instrument. Across all article formats, authors should highlight what they view as the gaps and limitations of the evidence base, and propose directions for future research.

One element of the rubric warrants comment. Youngstrom and colleagues (2017) included a rating category of “too good” when evaluating norms and reliability and a category of “too excellent” when evaluating validity and utility. Such ratings can occur when weak study designs produce exaggerated results, or they can reflect well-designed measures that are specialized for a particular niche, among other reasons. Youngstrom and colleagues rightly noted that interpreting numerical indicators of assessment quality (e.g., Cronbach’s alpha) requires understanding the ways in which such indicators can be misleading. However, as Youngstrom and colleagues (2017) also noted, there are no clear, consistent criteria for determining what is “too good” or “too excellent,” and in our judgment this precludes (for now) the use of these categories in the Evidence Base Updates series. Thus, we currently omit the “too good” and “too excellent” categories from the rubric delineated in Table 3. We say “currently omit” because, in the context of the broader Evidence Base Updates series, we have two expectations regarding these categories. First, we will instruct authors to be mindful of the potential pitfalls of the evidence underlying assessments and to acknowledge them explicitly when appropriate. By “acknowledge,” we mean screen the psychometric studies included in their evaluations of measures for the presence of interpretive problems regarding reliability, validity, and/or utility evidence, and note for readers those studies that may warrant cautious interpretation. Second, through the Evidence Base Updates series we will invite authors to contribute pieces that focus on refining the criteria for identifying “too good” or “too excellent” findings. The long-term goal of this commissioned work will be to refine the criteria for the “too good” and “too excellent” categories for use in future contributions to the Evidence Base Updates series (i.e., revisions to the rubric that include clear, explicit criteria for these categories).
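As one concrete illustration of how a numerical indicator can mislead, consider Cronbach’s alpha, mentioned above. The sketch below computes alpha from an item-by-respondent score matrix and flags values so high that they may signal item redundancy rather than a better scale. The simulated data and the .95 flag threshold are our illustrative choices, not criteria from the rubric.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # per-item variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical data: 200 respondents answering 10 near-duplicate items,
# i.e., items that mostly restate one another.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(scale=0.2, size=(200, 10))  # highly redundant items

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}")
if alpha > 0.95:  # illustrative threshold, not a rubric criterion
    print("Very high alpha -- may reflect item redundancy, not a better scale.")
```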

FORMATS FOR EVIDENCE BASE UPDATES ARTICLES ABOUT ASSESSMENTS

When contemplating the format of assessment-focused articles in the series, we thought carefully about the nature and scope of assessment work with child and adolescent clients. Mental health professionals vary widely in terms of how they use assessments, the specific assessments they use, and the characteristics of these assessments. Thus, assessment-focused articles in the Evidence Base Updates series have to be sensitive to various factors, including but not limited to (a) types of clinical decisions (e.g., diagnosis and case conceptualization, treatment planning and monitoring), (b) measurement domains (e.g., symptoms, mechanisms and moderators of change), and (c) measurement modalities (e.g., multi-informant surveys and interviews, behavioral observations, performance-based tasks, physiological functioning). Indeed, as Youngstrom and colleagues (2017) noted, evaluating a measure should involve considering whether the research supports the use of scores from the measure to predict important clinical variables (e.g., diagnostic status, treatment response), prescribe clinical services (e.g., scores inform use of Treatment X vs. Treatment Y), and/or inform an understanding of the process of care (e.g., measuring changes in the “active ingredients” of the treatment administered to the client, tracking symptom change over the course of treatment, monitoring the therapeutic alliance; see also Youngstrom & Frazier, 2013).

The criteria we discussed previously take these factors into account. We also want to account for these factors in the formats of assessment-focused articles in the series. Therefore, next we briefly describe and provide a rationale for the kinds of assessment-focused articles that JCCAP will invite contributors to prepare for the Evidence Base Updates series.

Broad Updates about Assessments for Specific Conditions

As with treatment-focused articles in the series, we have strong clinical, empirical, and historical rationales for inviting authors to prepare Evidence Base Updates articles that evaluate assessments for specific conditions (see also Hunsley & Mash, 2018). Granted, current trends in treatment research include the advent of approaches that target transdiagnostic factors that cut across related conditions (e.g., negative affect characteristic of anxiety and mood disorders; McHugh, Murray, & Barlow, 2009) and flexible, modular therapies that include techniques for treating a diverse set of the most common child and adolescent conditions seen in clinic settings (e.g., anxiety, mood, and disruptive behavior disorders; Weisz et al., 2012). That being said, many of the field’s evidence-based treatments were originally developed to target specific conditions (e.g., Comer & Barlow, 2014). Thus, our field benefits from work that identifies evidence-based assessments that can sensitively track outcomes within and across specific conditions. Consequently, and consistent with the format of articles in the previous JCCAP special issue on evidence-based assessment (e.g., McMahon & Frick, 2005; Silverman & Ollendick, 2005), JCCAP will leverage the Evidence Base Updates series to commission broad reviews of the state of evidence-based assessments for specific conditions.

Broad Updates about Assessments for Transdiagnostic Constructs

We must also acknowledge that the rationale to focus on condition-specific assessments conflicts with emerging bodies of work. These include (a) an increasing focus on transdiagnostic domains that cut across traditional clinical conditions (e.g., Insel et al., 2010), (b) the psychometric superiority of dimensional measures of psychopathology relative to discrete measures (Markon, Chmielewski, & Miller, 2011), and (c) the clinical reality that children and adolescents who require mental health care most commonly meet diagnostic criteria for multiple conditions and rarely meet criteria for only one condition (Drabick & Kendall, 2010). At JCCAP, we want the Evidence Base Updates series to address the pressing and immediate needs of frontline mental health professionals, policymakers, and researchers. We also want elements of the series to dovetail with where the field is headed.

Thus, assessment-focused articles in the Evidence Base Updates series will take on at least three other formats beyond the condition-focused review format described previously. For example, we intend to commission articles focused on evaluating assessments of constructs that cut across traditional diagnostic boundaries. Many constructs of interest to JCCAP readers fall into this category. Regardless of the manual-based treatment you implement when working with a child or adolescent client, the primary condition(s) covered by your research program, or the grant portfolio you manage within your funding agency, you probably want to know about the latest evidence for measures of parenting. In your work, you likely need to know about measures of adaptive and/or maladaptive emotions (e.g., negative affect), thought patterns (e.g., rumination), or behaviors (e.g., risk taking). Perhaps you also wish to know about the evidence base for measures of other aspects of the social (e.g., peer relations, stigma) or physical environment (e.g., neighborhood disadvantage). If so, you will likely find great value in a second assessment-focused format within the Evidence Base Updates series: updates about the state of evidence-based assessments for transdiagnostic constructs.

Narrow Updates about Specific Assessments

Another format for assessment-focused articles in the Evidence Base Updates series focuses attention at the level of the individual measure. In our work, not only do particular constructs cut across diagnostic boundaries, but specific measures do as well. In the parlance of clinical assessment, some measures serve as Rosetta stones for the field. With their item content, response options, scoring algorithms, and thresholds for identifying concerns in the “clinical range,” these measures provide us not only with an index to assess psychosocial functioning but also with a common language through which to speak about the functioning of the children and adolescents in our care. They also facilitate communication among health professionals from diverse training backgrounds (e.g., nurses, pediatricians, psychiatrists, psychologists, school counselors, social workers). For frequently used measures in our field, we want up-to-date data about what scores from these measures can and cannot tell us about child and adolescent mental health and the facets of decision making that they are particularly equipped to inform (e.g., diagnosis and case conceptualization, treatment planning and monitoring). Thus, some assessment-focused contributions to the Evidence Base Updates series will take the form of evaluations of specific measures, across the circumstances in which prior work has evaluated their use.

Updates about Special Issues in Clinical, Methodological, or Psychometric Techniques

At JCCAP, we envision commissioning one other type of assessment-focused article for the Evidence Base Updates series. On occasion, our field may require an update on the state of the science about specific clinical, methodological, or psychometric techniques. This kind of article might focus on such topics as how scholars in the field are testing assessments for particular psychometric properties, or on which assessments display a property that facilitates decision making in the context of clinical care. Our standard evaluation rubric may not apply to all articles published within this format; however, we will ask contributors to apply the rubric when appropriate (e.g., when contributors discuss measures to illustrate novel methodological techniques). We highlight two examples to illustrate potential topics that may use this update format.

Clinical Utility

Scholars in evidence-based assessment have long noted the relative lack of attention to testing and understanding whether our measures display clinical utility: “the extent to which the use of assessment data leads to demonstrable improvements in clinical services and, accordingly, results in improvements in client functioning” (Hunsley & Mash, 2007, p. 32). This measurement characteristic shares some similarities with, but differs from, the psychometric properties of scores taken from a measure. In fact, a measure can yield psychometrically sound data and still do little to improve clinical decision making on key tasks (e.g., treatment planning, case conceptualization, monitoring treatment response), relative to no measure or alternative measures (see also Hunsley & Lee, 2014).

As an example of a measure with evidence supporting its clinical utility, consider research with adult clients completing the Outcome Questionnaire–45 (OQ-45): a weekly self-report measure of psychosocial functioning administered over the course of treatment (Lambert, 2007). Some innovative experiments using the OQ-45 involve its use among therapists in a clinic setting, who administer the OQ-45 to clients in order to track week-to-week changes in their functioning (e.g., Lambert et al., 2003). In these studies, researchers randomly assign therapists to two groups. The first group receives access to weekly OQ-45 data for each client on their caseload, coupled with feedback as to whether each client deteriorated, remained stable, or improved in functioning from week to week. The second group receives the assessment as usual, consisting of access to weekly OQ-45 data for each of their clients, without access to the feedback system. Meta-analytic reviews indicate that under some circumstances, data from the OQ-45 display clinical utility, in that relative to assessment as usual, feedback systems based on OQ-45 data reduce rates of client deterioration (Shimokawa, Lambert, & Smart, 2010).
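To make the logic of such a clinical utility trial concrete, the toy simulation below randomizes simulated caseloads to a feedback versus assessment-as-usual condition and compares deterioration rates. All parameter values (deterioration probabilities, sample sizes) are invented for illustration and are not estimates from the OQ-45 literature.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented, purely illustrative parameters -- NOT estimates from the
# OQ-45 literature: deterioration risk with and without feedback.
P_DETERIORATE_USUAL = 0.20
P_DETERIORATE_FEEDBACK = 0.12
N_PER_ARM = 500  # clients per condition

# Clients (via their randomized therapists) in each condition; each entry
# records whether that client deteriorated by the end of treatment.
usual = rng.random(N_PER_ARM) < P_DETERIORATE_USUAL
feedback = rng.random(N_PER_ARM) < P_DETERIORATE_FEEDBACK

print(f"Deterioration, assessment as usual: {usual.mean():.1%}")
print(f"Deterioration, with feedback:       {feedback.mean():.1%}")
```

A real trial would, of course, replace the invented probabilities with observed outcomes and test the between-arm difference formally.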

Important to note, there are clear reasons why mental health assessments might lack evidence for clinical utility. Indeed, it often takes years to build an evidence base for a measure’s psychometric properties. The developer of a measure might want to be confident of these properties before subjecting the measure to tests of clinical utility. Consequently, building the evidence necessary to deem a measure as displaying clinical utility is admittedly a high threshold. That being said, when scholars raise awareness about gaps in the assessment literature—a dearth of measures displaying clinical utility being one of them—other scholars take notice, and they conduct research that directly addresses these gaps in the literature. Clinical utility is an example of a kind of concept about which JCCAP will commission articles for the Evidence Base Updates series.

Incremental Validity

As another example, consider the crucial concept of incremental validity, defined by Hunsley and Meyer (2003, p. 446) as “Does a measure add to the prediction of a criterion above what can be predicted by other sources of data?” As others have noted (e.g., Pelham, Fabiano, & Massetti, 2005), scores on a measure may display acceptable levels of reliability and even yield data that can be validly inferred to tap the construct(s) that the measure was developed to assess, and yet this does not mean that the measure yields data that cannot be gleaned from an alternative instrument. Issues of incremental validity have important clinical implications. Picture yourself as a therapist in a low-resource clinic setting. You very much want to assess displays of conduct problems among adolescent clients on your caseload. You have available two well-supported measures designed to assess adolescent conduct problems. But these two measures vary considerably in the time it takes to administer them. They also differ in how difficult they are to score or interpret. And they vary in the level of clinical training required for their administration. Incremental validity tests can inform whether it is worth the time and effort to administer both of these measures or just the more efficient measure (see also Beidas et al., 2015; Youngstrom & De Los Reyes, 2015).
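A minimal sketch of how such a question is often tested, via hierarchical regression on simulated data: fit a model predicting the criterion from Measure A alone, then add Measure B and test whether the increment in explained variance is reliable. The variable names and simulated data are ours for illustration; statsmodels’ anova_lm performs the nested-model F test.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
n = 300

# Simulated scores for two hypothetical conduct-problem measures and a
# criterion; measure_b carries unique signal by construction.
measure_a = rng.normal(size=n)
measure_b = 0.5 * measure_a + rng.normal(size=n)
criterion = 0.6 * measure_a + 0.3 * measure_b + rng.normal(size=n)
data = pd.DataFrame({"a": measure_a, "b": measure_b, "y": criterion})

base = smf.ols("y ~ a", data=data).fit()      # criterion from Measure A alone
full = smf.ols("y ~ a + b", data=data).fit()  # add Measure B

print(f"delta R^2 = {full.rsquared - base.rsquared:.3f}")
print(anova_lm(base, full))  # nested-model F test for the increment
```

A meaningful delta R^2 (with a significant F test) would suggest the second measure adds predictive value beyond the first; a negligible one would argue for administering only the more efficient measure.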

However, as with other domains of validity (e.g., criterion-related validity), a key driver of the findings of incremental validity studies is the selection of criterion variables (Nunnally & Bernstein, 1994). In particular, incremental validity studies are susceptible to criterion contamination: Findings can be confounded when the criterion variable relies on the same informant or measurement modality as the measure being tested for incremental validity (for a review, see De Los Reyes et al., 2015). Stated another way, depending on the criterion measure(s) selected, it is quite easy to “stack the deck” for or against a measure being tested for incremental validity. Overall, concepts like incremental validity have profound implications for our field and may greatly influence the kinds of assessments used in clinical work and research. As such, this is another kind of concept for which JCCAP will commission articles for the Evidence Base Updates series. These articles might be expository in nature, focusing on principles, practices, and the state of the science on the use of incremental validity research to inform selection of assessments in clinical work and research with children and adolescents.

CONCLUDING COMMENTS

JCCAP’s Evidence Base Updates series provides our field with crucial, current information on the status of clinical techniques relevant to delivering mental health care to children and adolescents. We take great pride in the performance of these articles on relevant quantitative metrics (Tables 1 and 2; Figure 1). We take an equal amount of pride in the qualitative data: from clinicians who approach us to let us know that these articles serve as resources to guide their practice; from instructors who tell us at professional meetings that they included these articles in their course syllabi; and from colleagues whose messages thank us for sending out references to these articles on the e-mail Listservs of our scholarly societies. And of course, we get a big boost out of the “likes” we receive on social media when we send out word that our latest Evidence Base Updates article is “in press.”

As we infuse assessment-focused articles into this series, we hope to continue providing our field with updates on data relevant to important aspects of our professional lives. To those of you who deliver care to children and adolescents, we hope you use these articles to inform selection of measures in your assessment batteries. To policymakers, we hope you consider these articles when deciding on matters relevant to the state of the science on outcomes of evidence-based treatments. To researchers, we expect these articles to streamline the Method sections of your empirical work on child and adolescent mental health. To all of you, we welcome your ideas for contributions to the Evidence Base Updates series! And above all, we hope these articles inspire you to devote some of your thoughts, time, and professional work to addressing gaps in the evidence base of clinical child and adolescent assessments.

ACKNOWLEDGMENTS

We thank Ngoc Le and Alyse Taggart for providing us with the download figures reported in Figure 1. We also thank Eric Youngstrom for providing thoughtful feedback on an earlier draft of this article.

REFERENCES

  • Beidas, R. S., Stewart, R. E., Walsh, L., Lucas, S., Downey, M. M., Jackson, K., … Mandell, D. S. (2015). Free, brief, and validated: Standardized instruments for low-resource mental health settings. Cognitive and Behavioral Practice, 22, 5–19. doi:10.1016/j.cbpra.2014.02.002
  • Bland, J. M., & Altman, D. G. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, 1, 307–310. doi:10.1016/S0140-6736(86)90837-8
  • Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685–716. doi:10.1146/annurev.psych.52.1.685
  • Comer, J. S., & Barlow, D. H. (2014). The occasional case against broad dissemination and implementation: Retaining a role for specialty care in the delivery of psychological treatments. American Psychologist, 69, 1–18. doi:10.1037/a0033582
  • De Los Reyes, A. (2017). Inaugural editorial: Making the Journal of Clinical Child and Adolescent Psychology your “home journal”. Journal of Clinical Child and Adolescent Psychology, 46, 1–10. doi:10.1080/15374416.2016.1266649
  • De Los Reyes, A., Augenstein, T. M., Wang, M., Thomas, S. A., Drabick, D. A. G., Burgers, D., & Rabinowitz, J. (2015). The validity of the multi-informant approach to assessing child and adolescent mental health. Psychological Bulletin, 141, 858–900. doi:10.1037/a0038498
  • Dopp, A. R., Borduin, C. M., Rothman, D. B., & Letourneau, E. J. (2017). Evidence-based treatments for youths who engage in illegal sexual behaviors. Journal of Clinical Child and Adolescent Psychology, 46, 631–645. doi:10.1080/15374416.2016.1261714
  • Dorsey, S., McLaughlin, K. A., Kerns, S. E. U., Harrison, J. P., Lambert, H. K., Briggs, E. C., … Amaya-Jackson, L. (2017). Evidence base update for psychosocial treatments for children and adolescents exposed to traumatic events. Journal of Clinical Child and Adolescent Psychology, 46, 303–330. doi:10.1080/15374416.2016.1220309
  • Drabick, D. A. G., & Kendall, P. C. (2010). Developmental psychopathology and the diagnosis of mental health problems among youth. Clinical Psychology: Science and Practice, 17, 272–280. doi:10.1111/j.1468-2850.2010.01219.x
  • Evans, S. W., Owens, J. S., Wymbs, B. T., & Ray, A. R. (2018). Evidence-based psychosocial treatments for children and adolescents with attention deficit/hyperactivity disorder. Journal of Clinical Child and Adolescent Psychology, 47, 157–198. doi:10.1080/15374416.2017.1390757
  • Evans, S. W., Sarno Owens, J., & Bunford, N. (2014). Evidence-based psychosocial treatments for children and adolescents with attention-deficit/hyperactivity disorder. Journal of Clinical Child and Adolescent Psychology, 43, 527–551. doi:10.1080/15374416.2013.850700
  • Higa-McMillan, C. K., Francis, S. E., Rith-Najarian, L., & Chorpita, B. F. (2016). Evidence base update: 50 years of research on treatment for child and adolescent anxiety. Journal of Clinical Child and Adolescent Psychology, 45, 91–113. doi:10.1080/15374416.2015.1046177
  • Hogue, A., Henderson, C. E., Ozechowski, T. J., & Robbins, M. S. (2014). Evidence base on outpatient behavioral treatments for adolescent substance use: Updates and recommendations 2007–2013. Journal of Clinical Child and Adolescent Psychology, 43, 695–720. doi:10.1080/15374416.2014.915550
  • Hunsley, J., & Lee, C. M. (2014). Introduction to clinical psychology (2nd ed.). Hoboken, NJ: Wiley & Sons.
  • Hunsley, J., & Mash, E. J. (2007). Evidence-based assessment. Annual Review of Clinical Psychology, 3, 29–51. doi:10.1146/annurev.clinpsy.3.022806.091419
  • Hunsley, J., & Mash, E. J. (Eds.). (2008). A guide to assessments that work. New York, NY: Oxford University Press.
  • Hunsley, J., & Mash, E. J. (Eds.). (2018). A guide to assessments that work (2nd ed.). New York, NY: Oxford University Press.
  • Hunsley, J., & Meyer, G. J. (2003). The incremental validity of psychological testing and assessment: Conceptual, methodological, and statistical issues. Psychological Assessment, 15, 446–455. doi:10.1037/1040-3590.15.4.446
  • Insel, T., Cuthbert, B., Garvey, M., Heinssen, R., Pine, D. S., Quinn, K., … Wang, P. (2010). Research domain criteria (RDoC): Toward a new classification framework for research on mental disorders. The American Journal of Psychiatry, 167, 748–751. doi:10.1176/appi.ajp.2010.09091379
  • Kaminski, J. W., & Claussen, A. H. (2017). Evidence base update for psychosocial treatments for disruptive behaviors in children. Journal of Clinical Child and Adolescent Psychology, 46, 477–499. doi:10.1080/15374416.2017.1310044
  • Lambert, M. (2007). Presidential address: What we have learned from a decade of research aimed at improving psychotherapy outcome in routine care. Psychotherapy Research, 17, 1–14. doi:10.1080/10503300601032506
  • Lambert, M. J., Whipple, J. L., Hawkins, E. J., Vermeersch, D. A., Nielsen, S. L., & Smart, D. W. (2003). Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice, 10, 288–301. doi:10.1093/clipsy.bpg025
  • Lock, J. (2015). An update on evidence-based psychosocial treatments for eating disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 44, 707–721. doi:10.1080/15374416.2014.971458
  • Lonigan, C. J., Elbert, J. C., & Johnson, S. B. (1998). Empirically supported psychosocial interventions for children: An overview. Journal of Clinical Child Psychology, 27, 138–145. doi:10.1207/s15374424jccp2702_1
  • Markon, K. E., Chmielewski, M., & Miller, C. J. (2011). The reliability and validity of discrete and continuous measures of psychopathology: A quantitative review. Psychological Bulletin, 137, 856–879. doi:10.1037/a0023678
  • Mash, E. J., & Hunsley, J. (2005). Evidence-based assessment of child and adolescent disorders: Issues and challenges. Journal of Clinical Child and Adolescent Psychology, 34, 362–379. doi:10.1207/s15374424jccp3403_1
  • McCart, M. R., & Sheidow, A. J. (2016). Evidence-based psychosocial treatments for adolescents with disruptive behavior. Journal of Clinical Child and Adolescent Psychology, 45, 529–563. doi:10.1080/15374416.2016.1146990
  • McHugh, R. K., Murray, H. W., & Barlow, D. H. (2009). Balancing fidelity and adaptation in the dissemination of empirically-supported treatments: The promise of transdiagnostic interventions. Behaviour Research and Therapy, 47, 946–953. doi:10.1016/j.brat.2009.07.005
  • McMahon, R. J., & Frick, P. J. (2005). Evidence-based assessment of conduct problems in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 477–505. doi:10.1207/s15374424jccp3403_6
  • Nathan, P. E., & Gorman, J. M. (Eds.). (2007). A guide to treatments that work (3rd ed.). New York, NY: Oxford University Press.
  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
  • Pelham, W. E., Jr., Fabiano, G. A., & Massetti, G. M. (2005). Evidence-based assessment of attention deficit hyperactivity disorder in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 449–476. doi:10.1207/s15374424jccp3403_5
  • Robinson, J. P., Shaver, P. R., & Wrightsman, L. S. (1991). Criteria for scale selection and evaluation. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 1–16). New York, NY: Academic Press.
  • Shimokawa, K., Lambert, M. J., & Smart, D. W. (2010). Enhancing treatment outcome of patients at risk of treatment failure: Meta-analytic and mega-analytic review of a psychotherapy quality assurance system. Journal of Consulting and Clinical Psychology, 78, 298–311. doi:10.1037/a0019247
  • Silverman, W. K., & Hinshaw, S. P. (2008). The second special issue on evidence-based psychosocial treatments for children and adolescents: A 10-year update. Journal of Clinical Child and Adolescent Psychology, 37, 1–7. doi:10.1080/15374410701817725
  • Silverman, W. K., & Ollendick, T. H. (2005). Evidence-based assessment of anxiety and its disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 380–411. doi:10.1207/s15374424jccp3403_2
  • Smith, T., & Iadarola, S. (2015). Evidence base update for autism spectrum disorder. Journal of Clinical Child and Adolescent Psychology, 44, 897–922. doi:10.1080/15374416.2015.1077448
  • Southam-Gerow, M. A., & Prinstein, M. J. (2014). Evidence base updates: The evolution of the evaluation of psychological treatments for children and adolescents. Journal of Clinical Child and Adolescent Psychology, 43, 1–6. doi:10.1080/15374416.2013.855128
  • Vaz, S., Falkmer, T., Passmore, A. E., Parsons, R., & Andreou, P. (2013). The case for using the repeatability coefficient when calculating test-retest reliability. PLoS One, 8, e73990. doi:10.1371/journal.pone.0073990
  • Weersing, V. R., Jeffreys, M., Do, M. T., Schwartz, K. T. G., & Bolano, C. (2017). Evidence base update of psychosocial treatments for child and adolescent depression. Journal of Clinical Child and Adolescent Psychology, 46, 11–43. doi:10.1080/15374416.2016.1220310
  • Weisz, J. R., Chorpita, B. F., Palinkas, L. A., Schoenwald, S. K., Miranda, J., Bearman, S. K., & the Research Network on Youth Mental Health. (2012). Testing standard and modular designs for psychotherapy with youth depression, anxiety, and conduct problems: A randomized effectiveness trial. Archives of General Psychiatry, 69, 274–282. doi:10.1001/archgenpsychiatry.2011.147
  • Weisz, J. R., & Kazdin, A. E. (Eds.). (2010). Evidence-based psychotherapies for children and adolescents (2nd ed.). New York, NY: Guilford Press.
  • Youngstrom, E. A., & De Los Reyes, A. (2015). Commentary: Moving towards cost-effectiveness in using psychophysiological measures in clinical assessment: Validity, decision-making, and adding value. Journal of Clinical Child and Adolescent Psychology, 44, 352–361. doi:10.1080/15374416.2014.913252
  • Youngstrom, E. A., & Frazier, T. W. (2013). Evidence-based strategies for the assessment of children and adolescents: Measuring prediction, prescription, and process. In D. J. Miklowitz, W. E. Craighead, & L. Craighead (Eds.), Developmental psychopathology (2nd ed., pp. 36–79). New York, NY: Wiley.
  • Youngstrom, E. A., Van Meter, A., Frazier, T. W., Hunsley, J., Prinstein, M. J., Ong, M. L., & Youngstrom, J. K. (2017). Evidence-based assessment as an integrative model for applying psychological science to guide the voyage of treatment. Clinical Psychology: Science and Practice, 24, 331–363. doi:10.1111/cpsp.12207
