INTRODUCTION

Introduction to the Special Issue. A Dozen Years of Demonstrating That Informant Discrepancies are More Than Measurement Error: Toward Guidelines for Integrating Data from Multi-Informant Assessments of Youth Mental Health

ABSTRACT

Validly characterizing youth mental health phenomena requires evidence-based approaches to assessment. An evidence-based assessment cannot rely on a “gold standard” instrument but must instead draw on batteries of instruments. These batteries include multiple modalities of instrumentation (e.g., surveys, interviews, performance-based tasks, physiological readings, structured clinical observations). Among these instruments are those that require soliciting reports from multiple informants: People who provide psychometrically sound data about youth mental health (e.g., parents, teachers, youth themselves). The January 2011 issue of the Journal of Clinical Child and Adolescent Psychology (JCCAP) included a Special Section devoted to the most common outcome of multi-informant assessments of youth mental health, namely discrepancies across informants’ reports (i.e., informant discrepancies). The 2011 JCCAP Special Section revolved around a critical question: Might informant discrepancies contain data relevant to understanding youth mental health (i.e., domain-relevant information)? This Special Issue is a “sequel” to the 2011 Special Section. Since 2011, an accumulating body of work indicates that informant discrepancies often contain domain-relevant information. Ultimately, we designed this Special Issue to lay the conceptual, methodological, and empirical foundations of guidelines for integrating multi-informant data when informant discrepancies contain domain-relevant information. In this introduction to the Special Issue, we briefly review the last 12 years of research and theory on informant discrepancies. This review highlights limitations inherent to the most commonly used strategies for integrating multi-informant data in youth mental health. We also describe contributions to the Special Issue, including articles about informant discrepancies that traverse multiple content areas (e.g., autism, implementation science, measurement validation, suicide).

1985: One of the Guest Editors of this Special Issue (CCE) was a graduate student trainee and met with her very first client. Like any good graduate student, even back then, she obtained parent reports, teacher reports, and child self-reports, and conducted behavioral observations in the “playroom.” She triple-checked her hand-scored informants’ measures (as no computer programs for scoring were available then), plotted their scores on the respective informant’s profile sheet, listened to her cassette audiotape of her “behavioral observations” (as no video or digital recording was available then), and then attended her first supervision meeting. Her supervisor asked, “So what do we have?” She basically said, “I have no clue, the mother says the kid is very depressed, and nothing else; the teacher says the kid has significant aggressive behavior, and nothing else; the kid does not report a single symptom of anything; and in playing games with him it seemed to me that he was incredibly anxious about everything.” She remembers a lot about that first client. She remembers having to hospitalize the mother for high suicide risk the next time she met with the mother and boy. She remembers that the assessment “data” inspired her master’s thesis paper, master’s thesis research, and subsequent dissertation (and that the 1987 meta-analysis, Achenbach et al., Citation1987, was published just 2 years after her first client). She does not recall whether or not that discrepant assessment information was used in the treatment plan for that client (likely it was not). She does know that in her 29 years of supervising doctoral students in clinical practicum: 1) she always spent the first class covering suicide risk and management (“as clients are like a box of chocolates, you never know what you’ll get”); and 2) not once were informant discrepancies per se involved in treatment planning or case conceptualization (but they were almost always there!).

2006–2007: Roughly 20 years after the first story, one of the Guest Editors of this Special Issue (ADLR) was a graduate student trainee. He was making his way through interviews for placement in a psychology internship, a key requirement of doctoral training in clinical psychology. At an internship site that will remain nameless, during an interview with a faculty member who will remain nameless, the Guest Editor was asked to describe his research interests. His research interests were then as they are now: to understand why informants like those described in the first story often make discrepant reports when asked to rate the mental health of a child or adolescent (i.e., collectively referred to as youth unless otherwise specified). The faculty member’s response: “That is an area of research I am actively ignoring.” Incidentally, the faculty member’s expertise was in assessing and treating youth at risk for suicide.

Validly characterizing youth mental health phenomena requires evidence-based approaches to assessment (Hunsley & Mash, Citation2018). An evidence-based assessment cannot rely on scores taken from a “gold standard” instrument. Rather, evidence-based assessments produce multiple scores: the byproducts of administering batteries of instruments. These batteries include multiple modalities of instrumentation (e.g., surveys, interviews, performance-based tasks, physiological readings, structured clinical observations). Among these instruments are those that require soliciting reports from multiple informants: Untrained raters who provide psychometrically sound data about youth mental health (e.g., parents, teachers, youth themselves).

Each of the stories at the opening of this article highlights a ubiquitous observation when taking a multi-informant approach to assessing youth mental health: the presence of discrepant estimates or measurement outcomes regarding a given youth’s mental health, depending on the informants who provide reports about that youth and their psychological functioning (i.e., informant discrepancies). In fact, these stories are part of a long history of research and theory on informant discrepancies in this area of work. The earliest empirical observations of informant discrepancies trace back decades before the “1985 story” (e.g., Lapouse & Monk, Citation1958). Soon after this story, Achenbach et al. (Citation1987) published the seminal meta-analysis that first documented the robust nature of these discrepancies, based on the findings of over 100 studies. As part of that meta-analysis, Achenbach and colleagues advanced a notion they referred to as situational specificity: The informant discrepancies they observed in their meta-analytic findings reflected the idea that (a) informants like parents and teachers vary in terms of where they observe youth (i.e., home vs. school) and (b) youth vary in terms of where they display behaviors indicative of the mental health domains about which informants make reports. Stated another way, informant discrepancies reflect “real stuff”―data germane to understanding the very youth mental health domains about which informants provide reports (i.e., domain-relevant information). Yet, much of the history between the two stories (i.e., work published between 1985 and 2007) largely consisted of research and theory focused not on situational specificity, but rather on explaining away the informant discrepancies observed in youth mental health assessments purely as a function of measurement confounds, such as random error or rater biases (e.g., De Los Reyes, Citation2011; Richters, Citation1992).
Perhaps because of this history surrounding measurement confounds, many scholars―including the faculty member in the “2006–2007 story”―did not feel a need to probe whether these informant discrepancies ought to be understood as something more than what McGuire (Citation1969) referred to as research artifacts. By “research artifacts,” we mean errors or flaws inherent to the process of estimating the mental health needs of youth clients.

This article serves as the introduction to a Special Issue on informant discrepancies in youth mental health assessments. In fact, this Special Issue serves as a “sequel” to a Special Section on this same topic, published in Issue 1 of the 2011 volume of this very journal―the Journal of Clinical Child and Adolescent Psychology (JCCAP; De Los Reyes, Citation2011). In the 12 years since publication of this Special Section in JCCAP, we have seen a flourishing of research and theory on informant discrepancies as they manifest in youth mental health assessments, and as we explain below, this work indicates that informant discrepancies often contain domain-relevant information (for reviews, see De Los Reyes et al., Citation2013, Citation2015, Citation2020, Citation2022).

However, we must be cognizant of this key reality of youth mental health: In this area of work, the ubiquitous presence of informant discrepancies traverses research and service delivery settings, mental health domains, the instruments used to assess mental health domains, the developmental periods of youth undergoing evaluation, and the types of clinical decisions informed by the outcomes of mental health assessments such as diagnosis, treatment planning, and treatment response (e.g., Al Ghriwati et al., Citation2018; Augenstein et al., Citation2022; Becker-Haimes et al., Citation2018; Bonadio et al., Citation2022; De Los Reyes, Cook, et al., Citation2019; De Los Reyes, Lerner, et al., Citation2019; De Los Reyes, Ohannessian et al., Citation2019; Humphreys et al., Citation2017; Lerner et al., Citation2017; Makol et al., Citation2020; Nichols & Tanner-Smith, Citation2022; Trang & Yates, Citation2020; Watts et al., Citation2022; Zilcha-Mano et al., Citation2021). If these informant discrepancies affect all the work that the community of youth mental health scholars produces, then addressing the issues these discrepancies raise cannot be confined to a single set of research interests. We cannot expect the task of addressing these issues to fall on a few scholars who declare informant discrepancies as their primary area of expertise. The task of advancing knowledge about informant discrepancies falls on all scholars in this area. Consequently, we see a need for increased engagement among scholars in youth mental health who invariably encounter informant discrepancies in their work.

The central thesis of the current Special Issue is this: The informant discrepancies observed in youth mental health assessments cannot be ignored―they must be engaged. As we describe in this article, the consequences of failing to engage these informant discrepancies and understand their inner workings extend well beyond the findings of empirical studies. They affect the lives of the youth clients we serve, across all the settings where we use mental health assessments to make high-stakes decisions about the services these clients receive.

We see a great deal of remaining work on informant discrepancies in youth mental health assessments. In particular, we must address a key barrier to research on assessment and in tandem, evidence-based service delivery. The “1985 story” illustrates this barrier, namely the lack of guidelines for conducting multi-informant assessments, interpreting their outcomes, and integrating data derived from these assessments (see also Beidas et al., Citation2015; De Los Reyes, Tyrell, et al., Citation2022). Along these lines, in this introductory article we address a series of aims. First, we briefly describe research documenting the presence of informant discrepancies in assessments of all known mental health domains. Second, we summarize research on the degree to which informant discrepancies contain domain-relevant information, focusing on work produced in the intervening years between the 2011 JCCAP Special Section and this current Special Issue. Third, we discuss a key driver of this current Special Issue: The most widely used approaches to integrating multi-informant data assume that informant discrepancies cannot contain domain-relevant information. Fourth, we highlight the benefits of developing and testing approaches to integrating multi-informant data that capitalize on the domain-relevant information in informant discrepancies, by describing conceptual models designed to guide work on these very benefits. Fifth, we briefly describe the contributions to this Special Issue, and we close with a renewed call to action to probe informant discrepancies in youth mental health.

Informant Discrepancies Define the Outcomes of Youth Mental Health Assessments

We previously noted that the body of work on informant discrepancies in youth mental health assessments numbers in the hundreds of studies. As in other areas that have accumulated large bodies of evidence (e.g., controlled trials of intervention efficacy; see Smith & Glass, Citation1977; Weisz et al., Citation2017), several meta-analyses have sought to characterize the nature and extent of the informant discrepancies that occur when assessing youth mental health. We previously cited the first of these meta-analyses (Achenbach et al., Citation1987), which examined 119 studies published between 1960 and 1986, and estimated that, on average, informants’ reports of youth mental health domains (i.e., internalizing and externalizing concerns) correspond at low-to-moderate magnitudes (i.e., mean r = .28; Cohen, Citation1988). Remarkably, in the years that followed this meta-analysis, very little changed with regard to the magnitudes of these effects. In fact, a meta-analysis conducted 30 years later―and based on examinations of 341 studies published after 1987 (i.e., between 1989 and 2014)―detected the exact same mean correspondence estimate observed by Achenbach and colleagues (i.e., r = .28; De Los Reyes et al., Citation2015).
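The mean correspondence estimates cited above (r = .28 in both meta-analyses) are weighted averages of per-study correlations. As an illustrative sketch only (the study correlations and sample sizes below are invented, not the actual data from either meta-analysis), such pooling is commonly done by converting each study's r to Fisher's z, averaging with inverse-variance weights (n − 3), and back-transforming:

```python
import math

def pool_correlations(studies):
    """Pool per-study correlations via Fisher's r-to-z transform,
    weighting each study by n - 3 (the inverse variance of z)."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)

# Hypothetical (r, sample size) pairs -- illustrative only, not the
# actual studies from Achenbach et al. (1987) or De Los Reyes et al. (2015)
studies = [(0.22, 120), (0.31, 85), (0.28, 240), (0.35, 60)]
print(round(pool_correlations(studies), 2))  # -> 0.28
```

The Fisher transform stabilizes the sampling variance of r, which is why meta-analysts average in z units rather than averaging raw correlations directly.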

The stability of the effects reported by both Achenbach et al. (Citation1987) and De Los Reyes et al. (Citation2015) is further corroborated by several other elements of the meta-analytic literature on informant discrepancies. First, this stability is not simply a function of all of these studies being conducted on homogenous samples (i.e., WEIRD effects; see Henrich et al., Citation2010). Indeed, recent meta-analytic work finds remarkably stable levels of correspondence (i.e., mean r = .28, with a 95% confidence interval of r = .24–.31) across hundreds of studies conducted in over 30 countries and spread across all inhabited continents (De Los Reyes, Lerner, et al., Citation2019). Second, this stability is not simply a function of all of these studies focusing on the same youth mental health domains. Achenbach and colleagues and De Los Reyes and colleagues meta-analyzed cross-informant correspondence data related to multi-informant assessments of concerns on the internalizing and externalizing spectra. Beyond these two quantitative reviews, meta-analyses have revealed low-to-moderate magnitudes of correspondence when assessing (a) the mental health of youth with histories of maltreatment (Romano et al., Citation2018), (b) concerns on the autism spectrum (Stratis & Lecavalier, Citation2015), (c) associated features of mental health concerns (e.g., parenting; Hou et al., Citation2020; Korelitz & Garber, Citation2016), and (d) assessments of psychosocial strengths (e.g., social competence; Renk & Phares, Citation2004). Third, these informant discrepancies are not confined to any particular content area of youth mental health.
In fact, meta-analyses find that these informant discrepancies occur when interpreting meta-analytic estimates of the effects from controlled trials (Casey & Berman, Citation1985; De Los Reyes & Kazdin, Citation2005, Citation2006, Citation2008; Weisz et al., Citation1987, Citation1995, Citation2006, Citation2017), and when estimating links between youth mental health concerns and domains thought to contribute to the development and maintenance of these concerns (e.g., parenting, peer victimization, parental depression; Hawker & Boulton, Citation2000; Hou et al., Citation2020; Ivanova et al., Citation2022). Across all of these meta-analyses, we find that informant discrepancies occur regardless of the study design in which assessments are administered: from highly controlled laboratory settings to uncontrolled field settings (e.g., community samples, school-based services, community mental health clinics). Taken together, decades of research make clear that informant discrepancies define assessments of all known youth mental health domains.

When Informant Discrepancies Reflect Domain-Relevant Information

One of the key contemporary issues in mental health research, as well as in psychological research generally, has to do with the replicability and reproducibility of research findings (see Open Science Collaboration, Citation2015; Tackett et al., Citation2017). The logic goes: How might we arrive at conclusions about or draw principles from research findings if (a) independent research teams investigating the same phenomenon using the same procedures do not arrive at similar conclusions (i.e., replication; Schmidt, Citation2009) and/or (b) similar conclusions do not manifest when testing a hypothesis―within the same study or dataset―using varied procedures for hypothesis testing such as different analytic procedures, outcome measures, or measurement sources (i.e., reproduction; Schekman, Citation2016)? In this respect, informant discrepancies are an outlier in the broader literature on replication and reproduction. Perhaps ironically, the most replicable finding in youth mental health is that multiple informants’ reports of the same youth fail to reproduce the same estimates of that youth’s mental health.

Beyond the replicability of the informant discrepancies phenomenon, we see an interesting parallel between how researchers have interpreted informant discrepancies and what they reflect, and how researchers have interpreted low levels of replicability and reproducibility in psychological research findings. That is, in both cases interpretations have largely focused on research artifacts (i.e., McGuire, Citation1969). For example, researchers have attributed low rates of replication and/or reproduction in psychological research largely to the culture of research practices in Psychology, namely (a) a focus on the novelty of research findings, (b) an emphasis on reporting significant research findings, and thus (c) a reinforcement of research practices that increase the likelihood of observing “a” and “b” (e.g., p-hacking, selective reporting of findings, selective use of analytic procedures; see Boutron et al., Citation2010; Open Science Collaboration, Citation2015; Silberzahn & Uhlmann, Citation2015). In essence, these interpretations frame the lack of replication and/or reproducibility as stemming from factors that are irrelevant to understanding psychological phenomena. As such, this interpretation has had a large influence on recommendations to address replication and reproduction in Psychology, namely in changing the culture of research practices and decreasing researcher behaviors geared toward novelty and emphasizing reports of significant findings (for a review, see Nosek et al., Citation2012).

In ways that are analogous to research on replication and reproduction in Psychology, we previously mentioned that, for much of the history of informant discrepancies research in youth mental health, these discrepancies have been attributed to measurement confounds such as random error and/or rater biases (De Los Reyes, Citation2011; Richters, Citation1992). Crucially, this attribution rests on the weakest of empirical evidence (for a recent review, see De Los Reyes & Makol, Citation2022). Nevertheless, large swaths of researchers who have sought to understand and interpret informant discrepancies in youth mental health assessments have grounded their work on this very idea―that informant discrepancies reflect measurement confounds and nothing more (cf. the faculty member in the “2006-2007 story,” and for a recent review, see De Los Reyes, Tyrell, et al., Citation2022).

Earlier in this paper we also highlighted Achenbach et al. (Citation1987), namely their notion of situational specificity. What we have yet to mention is that, unlike the “measurement confounds interpretation” of informant discrepancies, a considerable body of evidence has rapidly accumulated over the last 15 years, supporting the idea that informant discrepancies often reflect the kinds of domain-relevant information indicated by situational specificity. We graphically depict this evidence in Figure 1. The studies cited in Figure 1 have been reviewed extensively in recent work (e.g., De Los Reyes, Lerner, et al., Citation2019; De Los Reyes, Ohannessian et al., Citation2019; De Los Reyes, Talbott, et al., Citation2022; De Los Reyes, Tyrell, et al., Citation2022).

Figure 1. Graphical depiction of studies supporting links between informant discrepancies and domain-relevant criterion variables.

Note: Citations supporting links described in this figure: Al Ghriwati et al. (Citation2018), Augenstein et al. (Citation2022), Becker-Haimes et al. (Citation2018), Bonadio et al. (Citation2022), Borelli et al. (Citation2016), Charamut et al. (Citation2022), De Los Reyes et al. (Citation2009), De Los Reyes, Lerner, et al. (Citation2013), De Los Reyes, Salas, et al. (Citation2013), De Los Reyes, Alfano, et al. (Citation2016), De Los Reyes, Ohannessian, et al. (Citation2016), De Los Reyes, Cook, et al. (Citation2022), Follet et al. (Citation2022), Human et al. (Citation2016), Humphreys et al. (Citation2017), Laird and De Los Reyes (Citation2013), Lerner et al. (Citation2017), Leung et al. (Citation2016), Luo et al. (Citation2020), Makol and Polo (Citation2018), Makol et al. (Citation2019), Makol et al. (Citation2020), Makol et al. (Citation2021), Nelemans et al. (Citation2016), Nichols and Tanner-Smith (Citation2022), Ohannessian and De Los Reyes (Citation2014), Ohannessian et al. (Citation2016), Okuno et al. (Citation2022), Trang and Yates (Citation2020), Van Heel et al. (Citation2019), Van Petegem et al. (Citation2020), and Zilcha-Mano et al. (Citation2021).

As we depict in Figure 1, several characteristics of this evidence enhance the interpretability of the findings. In particular, studies have demonstrated links between informant discrepancies and criterion variables that traverse multiple modalities, ruling out the possibility that shared method variance explains the findings (see Garb, Citation2003). Further, these links exist for informant discrepancies observed with assessments across multiple domains relevant to understanding youth mental health, including symptoms (e.g., internalizing, externalizing, autism spectrum), areas of impairment or life interference, parenting, and family relationships.

Here, we highlight studies published in the years following the 2011 JCCAP Special Section. For instance, Lerner et al. (Citation2017) studied an intake clinical assessment of 283 diagnosed youth receiving outpatient services for autism spectrum concerns. In this study, the authors leveraged person-centered models of data analysis (see Bartholomew et al., Citation2002) to characterize patterns of parent and teacher reports of youth autism spectrum concerns and identified parent-teacher dyads who agreed in their reports of such concerns as well as disagreed (i.e., parent > teacher; teacher > parent). These patterns of parent-teacher discrepancies demonstrated links to independent assessments of clinical severity as indexed by diagnostic observation tasks (i.e., the Autism Diagnostic Observation Schedule; Lord et al., Citation2012), such that greater agreement between parent and teacher reports predicted greater levels of clinical severity (as indexed by receiving medication, special education, and an ASD diagnosis). Thus, this study highlighted the ability of variations in informant discrepancies to reflect domain-relevant variations in clinical phenomena (i.e., clinical severity) that inform our understanding of the current functioning of youth undergoing evaluation. Similar findings for various combinations of informants (e.g., parent-teacher, parent-youth, mother-father) have been reported for informant discrepancies in assessments of such domains as depression (Makol & Polo, Citation2018), family relationships (Borelli et al., Citation2016), psychosocial impairments (De Los Reyes, Cook, et al., Citation2022), social anxiety (Deros et al., Citation2018), and broadband internalizing/externalizing concerns (De Los Reyes, Alfano, et al., Citation2016).
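Lerner et al. derived these dyad patterns with formal person-centered analyses (e.g., latent profile models). As a loose, rule-based stand-in for that approach (this is not their actual model; the T-scores and the 10-point discrepancy threshold below are invented for illustration), the grouping logic can be sketched as:

```python
def classify_dyad(parent_t, teacher_t, threshold=10):
    """Classify a parent-teacher dyad by its reporting pattern.
    Scores are T-scores (mean 50, SD 10); `threshold` is the
    discrepancy (in T-score points) treated as meaningful."""
    diff = parent_t - teacher_t
    if diff > threshold:
        return "parent > teacher"
    if diff < -threshold:
        return "teacher > parent"
    return "agree"

# Hypothetical dyads: (parent T-score, teacher T-score)
dyads = [(72, 70), (68, 50), (48, 66), (55, 58)]
print([classify_dyad(p, t) for p, t in dyads])
# -> ['agree', 'parent > teacher', 'teacher > parent', 'agree']
```

A latent profile analysis would instead let the data determine how many patterns exist and where their boundaries fall, but the resulting groups are interpreted in exactly these terms: agreement, parent-elevated, and teacher-elevated dyads.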

In the years following the 2011 JCCAP Special Section, we also learned about instances in which informant discrepancies can be used as tools for not only characterizing functioning concurrently as was demonstrated by Lerner et al. (Citation2017), but also for predicting youth outcomes longitudinally. Similar to Lerner and colleagues, Makol et al. (Citation2019) leveraged person-centered models of data analysis to characterize parent and youth reports of youth internalizing symptoms assessed at the intake assessments of 765 youth receiving psychiatric inpatient care. In this study, the authors not only observed patterns of agreement and discrepancies between parent and youth reports, but these patterns also predicted clinically relevant characteristics of the care delivered to these youth. For instance, when parents reported greater internalizing concerns than youth self-reported, youth were particularly likely to exhibit suicidality when admitted, and to be administered intensive treatment services during their inpatient stay (e.g., locked-door seclusion, standing antipsychotics). Patterns of agreement were also predictive, such that agreement between parents and youth on high levels of internalizing concerns was a marker of a relatively longer hospital stay. The predictive utility of discrepancies between parent and youth reports has been demonstrated with a range of predictors and outcomes, including (a) intake assessments of youth anxiety symptoms predicting treatment response (Becker-Haimes et al., Citation2018), (b) baseline assessments of parenting practices predicting internalizing and externalizing symptoms at 12-month follow-up among youth at risk for substance use relapse (Nichols & Tanner-Smith, Citation2022), and (c) baseline assessments of depressive symptoms predicting youth suicidal thoughts at three-month follow-up (Augenstein et al., Citation2022).
In sum, since the 2011 JCCAP Special Section on informant discrepancies in youth mental health assessments, researchers have identified a variety of instances in which informant discrepancies reflect domain-relevant information.

When Procedures for Integrating Multi-Informant Data Make Measurements Less Valid

In considering the findings depicted in Figure 1, we must make one thing clear: Informant discrepancies reflect unique variance. Specifically, among a set of informants’ reports about a youth’s mental health, unique variance refers to data that are unshared across informants and thus particular to each informant’s report. Informant discrepancies appear to reflect the vast majority of variance in youth mental health assessments (see also De Los Reyes et al., Citation2015). As such, learning that at least a portion of these discrepancies contains domain-relevant information should be welcome news for scholars in youth mental health. Yet, we must make another thing clear: Our infrastructure for understanding, scoring, and modeling multi-informant data was not built to capitalize on domain-relevant unique variance. In fact, the dominant procedures for integrating, modeling, or otherwise interpreting multi-informant data and the informant discrepancies they often contain are simply ill-equipped to handle these discrepancies in any substantive sense. In Table 1, we list these procedures and highlight the issues raised with using them, particularly when informant discrepancies contain domain-relevant information.

Table 1. Limitations of commonly implemented analytic procedures for integrating or modeling multi-informant data in youth mental health.

To start, it is important to keep in mind that what all of the procedures listed in Table 1 have in common is that they were developed during an era of conceptual and methodological paradigms that emphasized common variance: Among a set of informants’ reports about a youth’s mental health, common variance refers to data shared across informants’ reports. This is true of Converging Operations (Garner et al., Citation1956)―the conceptual paradigm that has dominated thinking for the last half-century on how to interpret patterns of research findings. Converging Operations holds that the degree to which multiple findings derived from tests of the same hypothesis point to the same conclusion should be seen as synonymous with the degree of empirical support for that hypothesis. For instance, much of the evidence supporting the efficacy of interventions tested within randomized controlled trials consists of outcome measures completed by multiple informants (see Weisz et al., Citation2005). Researchers use these outcome measures to gauge relative differences between the intervention examined in a trial and comparison conditions (e.g., wait-list control, alternative treatment), usually under an “umbrella hypothesis” that the intervention significantly improves functioning (e.g., reductions in symptoms of a specific mental health condition), relative to the improvements observed for the comparison condition (see De Los Reyes & Kazdin, Citation2006). Converging Operations holds that one should gauge an intervention’s efficacy based on the degree to which the multiple informants’ reports used as outcome measures in the study point to the same conclusion (e.g., intervention outperforms control condition).
Assumptions underlying use of Converging Operations also hold true for the measurement validation paradigm that has been similarly dominant over the last half-century―the Multi-Trait Multi-Method Matrix (MTMM; Campbell & Fiske, Citation1959)―which drew inspiration from Converging Operations (see Grace, Citation2001; Highhouse, Citation2009). With the MTMM, users assume that one should gauge measurement validity based on the degree to which a measure assessing a given domain (e.g., self-report of anxiety) yields estimates that converge with other measures of “fairly similar concepts and plausibly independent methods” (Fiske & Campbell, Citation1992, p. 393).

Together, both the Converging Operations and MTMM paradigms inspired users of the analytic procedures listed in Table 1 to (a) prioritize common variance and treat unique variance components like informant discrepancies as measurement confounds and/or (b) make assumptions about informant discrepancies that cannot be empirically verified within the procedures themselves. De Los Reyes, Wang et al. (Citation2023) review these procedures extensively, discuss their limitations with regard to domain-relevant informant discrepancies, and cite many examples of how they are currently implemented in youth mental health research.

In highlighting issues raised with applying the analytic procedures listed in Table 1, we must make yet another thing clear: Highlighting the issues we raise is not synonymous with labeling an analytic procedure as “bad” or a procedure that researchers should avoid using. Indeed, no one analytic procedure is inherently “right” or “wrong” (see also Levy, Citation1969). Rather, a user might be “wrong” in applying the procedure to data conditions that do not “fit” the assumptions underlying its use. In the case of the procedures listed in Table 1, their use within data conditions that contain domain-relevant informant discrepancies has the logical consequence of depressing measurement validity. Indeed, when a user leverages a procedure like composite scoring or structural models focused on estimating common variance, they must also assume that common variance is the only source of domain-relevant variance. If the data conditions indicate otherwise―that domain-relevant informant discrepancies exist―these procedures leave all that domain-relevant variance on the “cutting room floor.” If a great deal of variance is left out of the equation, then what other conclusion can one come to other than that these procedures―when applied to data conditions that contain domain-relevant informant discrepancies―make measurements less valid? In essence, two sources of valid variance (i.e., both common variance and domain-relevant informant discrepancies) are better than one (e.g., only common variance).
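The "cutting room floor" argument can be made concrete with a small simulation (illustrative only; the variables, weights, and effect sizes below are invented). When a criterion depends partly on informant-specific unique variance, a simple mean composite of two informants' reports captures only the common variance, while a discrepancy score recovers the part the composite discards:

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n = 500
common = [random.gauss(0, 1) for _ in range(n)]  # variance shared by informants
unique = [random.gauss(0, 1) for _ in range(n)]  # informant-specific variance
parent = [c + u for c, u in zip(common, unique)]
teacher = [c - u for c, u in zip(common, unique)]

# Hypothetical criterion driven partly by the discrepancy (unique variance)
criterion = [0.5 * c + 0.5 * u for c, u in zip(common, unique)]

composite = [(p + t) / 2 for p, t in zip(parent, teacher)]    # isolates common variance
discrepancy = [(p - t) / 2 for p, t in zip(parent, teacher)]  # isolates unique variance

print(round(pearson_r(composite, criterion), 2))    # tracks only the common part
print(round(pearson_r(discrepancy, criterion), 2))  # recovers the discarded part
```

In this toy setup the composite and the discrepancy each carry a comparable share of criterion-relevant variance, so a scoring procedure that retains only the composite forfeits roughly half of the predictive signal.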

When Informant Discrepancies Boost Measurement Validity

Taken together, these issues indicate that scholars in youth mental health currently lack an infrastructure for understanding, scoring, and integrating multi-informant data when those data contain domain-relevant informant discrepancies. Yet, we see hope on the horizon for building this infrastructure. In fact, another development since the 2011 JCCAP Special Section is that we now have a conceptual foundation for such an infrastructure, in the form of a framework―the Operations Triad Model (De Los Reyes, Thomas, et al., Citation2013)―for conceptualizing the patterns of data observed within multi-informant assessments. We graphically depict the Operations Triad Model in Figure 2. A key feature of the Operations Triad Model, unlike Converging Operations and the analytic procedures listed in , is that it acknowledges that domain-relevant variance can come not only from instances in which reports from multiple informants point to the same conclusion (i.e., Converging Operations; Figure 2A), but also from instances in which informants’ reports point to different conclusions. However, within the Operations Triad Model, not all informant discrepancies are created equal. That is, the Operations Triad Model distinguishes informant discrepancies that reflect domain-relevant information (i.e., Diverging Operations; Figure 2B) from those that reflect measurement confounds (i.e., Compensating Operations; Figure 2C).

Figure 2. The Operations Triad Model (Reproduced from De Los Reyes, Thomas, et al., Citation2013).

Note: The top half (A) represents Converging Operations: a set of measurement conditions for interpreting patterns of findings based on the consistency within which findings yield similar conclusions. The bottom half denotes two circumstances within which researchers identify discrepancies across empirical findings derived from multiple informants’ reports and thus discrepancies in the research conclusions drawn from these reports. On the left (B) is a graphical representation of Diverging Operations: a set of measurement conditions for interpreting patterns of inconsistent findings based on hypotheses about variations in the behavior(s) assessed. The solid lines linking informants’ reports, empirical findings derived from these reports, and conclusions based on empirical findings denote the systematic relations among these three study components. Further, the presence of dual arrowheads in the figure representing Diverging Operations conveys the idea that one ties meaning to the discrepancies among empirical findings and research conclusions and thus how one interprets informants’ reports to vary as a function of variation in the behaviors being assessed. Lastly, on the right (C) is a graphical representation of Compensating Operations: a set of measurement conditions for interpreting patterns of inconsistent findings based on methodological features of the study’s measures or informants. The dashed lines denote the lack of systematic relations among informants’ reports, empirical findings, and research conclusions. Originally published in De Los Reyes, Thomas et al. (Citation2013). © Annual Review of Clinical Psychology. Copyright 2012 Annual Reviews. All rights reserved. The Annual Reviews logo, and other Annual Reviews products referenced herein are either registered trademarks or trademarks of Annual Reviews. All other marks are the property of their respective owner and/or licensor.

The last decade of work demonstrating that informant discrepancies often contain domain-relevant information has largely been informed by the Operations Triad Model (see also De Los Reyes & Ohannessian, Citation2016). Behind the scenes of the findings depicted in are several innovations in measurement, study design, and analytic procedures, each of which highlights the potential for translating research about informant discrepancies into tangible improvements to measurement validity. We highlight one line of work here to illustrate these improvements.

The Operations Triad Model in Action: Assessments of Adolescent Social Anxiety

Mental health services for adolescent social anxiety are often informed by reports collected from adolescent clients as well as the stakeholder who often initiates services on their behalf, typically a parent or caregiver (De Los Reyes & Makol, Citation2019; Hunsley & Lee, Citation2014). When relying on just these two informants, not only do discrepancies commonly manifest between reports, but these discrepancies have also historically been interpreted as measurement confounds. Specifically, because adolescents who experience social anxiety often avoid situations where they may be perceived in a negative light, the idea was that these discrepancies could be explained by adolescents downplaying their concerns when completing self-reports (e.g., social desirability bias; see De Los Reyes et al., Citation2012, Citation2015). Yet, here too situational specificity may play a key role in explaining informant discrepancies, particularly when one considers two key features of adolescents and their social anxiety concerns. First, adolescents, relative to younger children, often spend considerably more time outside the home (Smetana, Citation2008), in places where they are quite likely to encounter the kinds of social interactions germane to social anxiety (i.e., interactions with unfamiliar people; see Alfano & Beidel, Citation2011). Second, one crucial set of social interactions germane to adolescents receiving care involves interacting with same-age unfamiliar peers (see Cannon et al., Citation2020; Hofmann et al., Citation1999). Taken together, these issues raise the question: Might discrepancies between parent and adolescent reports of adolescent social anxiety occur, in part, because adolescents often display social anxiety concerns within clinically relevant contexts outside of the home?

Addressing this question required a few innovations in measurement and study design. To start, researchers required an additional informant who observes adolescents within interactions with unfamiliar peers, in an effort to compare and contrast adolescents’ and parents’ reports with those of an observer outside of the home. Researchers also required a context for gathering such reports that could facilitate acquiring standardized data about these interactions, as well as data from additional, trained observers to reduce the impact of shared method variance on the interpretability of study findings. These considerations resulted in the development of the Unfamiliar Peer Paradigm (Cannon et al., Citation2020), a set of simulated social interactions in which adolescents interact with youthful-looking research personnel trained to “stand in” as same-age unfamiliar peers (i.e., peer confederates). Following these interactions, peer confederates complete parallel versions of the same sets of social anxiety surveys completed by parents and adolescents. Because peer confederates complete these reports naive to assessment training, they essentially serve as a “proxy informant” for estimating how same-age unfamiliar peers might perceive adolescents’ social anxiety.

The innovations in measurement and study design built into the Unfamiliar Peer Paradigm were not only informed by the Operations Triad Model, but also resulted in several advancements in our understanding of how to improve measurement validity in multi-informant assessments of adolescent social anxiety. Consistent with situational specificity―and by extension Diverging Operations (Figure 2B)―adolescents’ self-reports displayed distinct patterns of correlations with parents’ and peer confederates’ reports (Makol et al., Citation2020). Further, these three informants’ reports varied in their relations to trained independent observers’ assessments of adolescents’ reactions to the tasks within the Unfamiliar Peer Paradigm (Glenn et al., Citation2019). Moreover, all three informants’ reports demonstrated psychometric soundness, as evidenced by (a) links to established survey measures of adolescent social anxiety and (b) the ability to distinguish adolescents on a clinically relevant index, namely referral status (Deros et al., Citation2018).

All of these findings led us to ask: Might these reports, collectively, contain not only domain-relevant common variance but also domain-relevant unique variance, as reflected in the discrepancies among the reports? If so, might improvements to measurement validity come not only from using all three of these informants’ reports to assess adolescent social anxiety, but also from identifying ways to integrate these multi-informant data with methods that retain both common variance and domain-relevant unique variance? Using just such a method, developed by Kraemer et al. (Citation2003), Makol et al. (Citation2020) successfully integrated adolescent, parent, and peer confederate reports. Further, in this study, scores taken from the Kraemer et al. (Citation2003) method outperformed the composite scoring procedures described earlier when predicting trained independent observers’ ratings of adolescent social anxiety. This last finding has since informed additional innovations in measurement, namely tests of the psychometric soundness of reports taken from unfamiliar, untrained observers―informants who based their reports on videos of adolescents’ interactions within the Unfamiliar Peer Paradigm (see Charamut et al., Citation2022; Follet et al., Citation2022; Okuno et al., Citation2022; Rezeppa et al., Citation2021). In sum, this line of work illustrates the potential for informant discrepancies research informed by the Operations Triad Model to yield improvements to measurement validity.
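For readers curious how an integration method of this kind can retain both sources of variance, the sketch below illustrates the general logic of a principal-components approach in the spirit of Kraemer et al. (2003): the first component of informants’ standardized reports approximates a shared “trait” score, while later components carry discrepancy (context/perspective) information. The simulated data, informant loadings, and variable names are our hypothetical assumptions, not the published model’s exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical latent signals: a shared "trait" plus a context-specific signal.
trait = rng.normal(size=n)
context = rng.normal(size=n)

# Simulated reports from three informants (loadings are illustrative only).
adolescent  = trait + context + rng.normal(scale=0.5, size=n)
parent      = trait - context + rng.normal(scale=0.5, size=n)
confederate = trait + rng.normal(scale=0.5, size=n)

X = np.column_stack([adolescent, parent, confederate])
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each informant's report

# Principal components of the informant correlation matrix.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]          # sort components by variance explained
scores = Z @ eigvecs[:, order]

# First component ~ shared trait; second ~ context/perspective discrepancy.
trait_score, context_score = scores[:, 0], scores[:, 1]
```

A simple mean of the three reports would approximate only the first component; retaining the later components keeps the discrepancy information available for prediction, which parallels the property exploited in the studies described above.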

When Informant Discrepancies Optimize Clinical Decision-Making

Throughout this introductory article to the Special Issue, we have focused our attention on informant discrepancies as they manifest in research settings. However, we must return to a key reality of youth mental health assessments: These informant discrepancies are everywhere. As indicated by the “1985 story” that opened this introductory article, informant discrepancies often manifest when mental health professionals leverage multi-informant assessments to make clinical decisions in their work with individual clients (e.g., Fisher et al., Citation2017; Hawley & Weisz, Citation2003; Hoffman & Chu, Citation2015; Yeh & Weisz, Citation2001). Within these settings, what are the implications of informant discrepancies if here too they contain domain-relevant information?

Only recently have researchers laid the conceptual foundations for addressing questions surrounding the role of informant discrepancies in service delivery, and we graphically depict these foundations in Figures 3 and 4. In essence, these conceptual foundations serve as an extension of the Operations Triad Model into research and theory in implementation science. Specifically, researchers designed the Needs-to-Goals Gap framework to characterize how informant discrepancies impact the outcomes of youth mental health service delivery (De Los Reyes, Talbott, et al., Citation2022). To understand this impact, it is important to note key themes linking informant discrepancies as they manifest in research settings to those seen in service settings. We previously mentioned that informant discrepancies manifest across mental health domains studied in research (e.g., symptoms, impairments, associated features) as well as across topics of study (e.g., treatment outcomes, risk factors, basic laboratory research). Similarly, informant discrepancies occur across stages of care in service delivery. In this respect, these discrepancies factor into clinical decisions not only about the general domains for which a youth client might require services (i.e., needs assessments), but also about the specific aspects of these domains to target for intervention and the contexts in which these treatment targets manifest (i.e., goal-setting).

Figure 3. The Needs-to-Goals Gap framework (Reproduced from De Los Reyes, Talbott, et al., Citation2022).

Note: When professionals base their decisions regarding service goals on a single informant (e.g., client’s parent), such a decision would be evidence-based insofar as the client’s needs and the goals of services to address those needs manifest identically across domain-relevant contexts, such as home and school (A). However, when the evidence indicates that the client’s needs and/or goals of services manifest differently across contexts (B), or the evidence is inconclusive as to whether the client’s needs and/or goals of services manifest differently across contexts (C), professionals should base their decisions regarding service goals on the views of multiple informants (e.g., client’s parent, client’s teacher, client). A mismatch between assessment approaches (i.e., single vs. multiple informants) and the context(s) in which the client’s needs manifest is thought to increase the risk of creating a Needs-to-Goals Gap in service delivery, whereby the goals set for service delivery fail to meet the client’s needs.

Figure 4. Graphical depiction of how needs assessment and goal setting processes conducted as part of service delivery might appear, if conducted consistent with the science of multi-informant assessment and the Needs-to-Goals Gap framework (Reproduced from De Los Reyes, Talbott, et al., Citation2022).

Note: In this depiction, we assume that caregivers initiate referral on behalf of the youth client.

Understanding how informant discrepancies impact goal-setting is of particular relevance to treatment plans within the frameworks of evidence-based interventions for youth mental health. Generally, a core feature of these interventions involves tailoring therapeutic activities to the unique needs of the clients receiving them (see Alfano & Beidel, Citation2011; Beck, Citation1993; Kazdin, Citation2013; Skinner, Citation1953). This necessitates understanding both the specific goals of services (e.g., improve social skills, decrease oppositional behavior), and the specific maintaining factors (i.e., contingencies) present in clinically relevant contexts that, if properly addressed and/or manipulated, may facilitate meeting clients’ needs.

If informant discrepancies contain domain-relevant data of the kind indicated by situational specificity, then their utility in service delivery lies in revealing to service providers the specific contexts on which needs assessments and goal-setting processes ought to focus. As such, the Needs-to-Goals Gap framework highlights the value of informant discrepancies when delivering services. By extension, it also highlights the consequences of ignoring these discrepancies or constructing assessments that prevent one from detecting domain-relevant informant discrepancies. For example, it is common lore in service delivery that “optimal informants” exist for assessing particular mental health conditions, such as teachers for attention-deficit/hyperactivity disorder (ADHD; see Loeber et al., Citation1990). What if an intake assessment relies only on the teacher to assess ADHD-related needs? Within the Needs-to-Goals Gap framework, relying on a single informant could only be justified if the client truly displayed symptoms and impairments consistently across contexts (Figure 3A), such that one informant could serve as an effective proxy for how the client behaves across relevant contexts (e.g., home and school). However, as we discussed previously, these Converging Operations scenarios rarely occur, particularly within assessments conducted in service delivery settings. In fact, prior work indicates that informants such as parents and teachers cannot be relied upon to assess ADHD symptoms that manifest outside of their own context of observation (e.g., teacher for home symptoms, parent for school symptoms; see de Nijs et al., Citation2004; Talbott et al., Citation2021).

Under circumstances that reflect Diverging Operations scenarios, relying on a single informant would not only result in an inaccurate needs assessment, but would also have profound downstream effects on all aspects of services going forward. In particular, such an assessment would not have accurate, context-sensitive data available for goal-setting, let alone for accurately estimating treatment responses that, presumably, might manifest in context-sensitive ways (e.g., symptom reductions observed mainly at home). Consequently, the Needs-to-Goals Gap framework highlights the importance of conducting needs assessments and engaging in goal-setting processes with an eye toward understanding whether patterns of multi-informant assessments reflect Converging Operations, Diverging Operations, or Compensating Operations scenarios. Understanding which of these scenarios best fits the data obtained from informants facilitates accurate decision-making not only about crucial aspects of service delivery (e.g., goal-setting, monitoring intervention response), but also about which informants are best positioned to guide decisions about contexts and contingencies germane to the client’s needs and goals for the services they receive. Taken together, we expect the Needs-to-Goals Gap framework to play an important role in future work seeking to understand the role of informant discrepancies and the implementation of multi-informant approaches to assessment within the delivery of youth mental health services.

Overview of Contributions to the Special Issue

This Special Issue includes articles that traverse several content areas. The Special Issue begins with an article that reconceptualizes measurement validation in youth mental health, by highlighting problems with applying the MTMM (Campbell & Fiske, Citation1959) to understand and interpret the outcomes of youth mental health assessments, and advancing a validation paradigm designed to address these problems (De Los Reyes, Wang et al., Citation2023). Three contributions focus on informant discrepancies observed within implementation science and the study of therapeutic processes, namely the therapeutic alliance (Roest et al., Citation2023), treatment fidelity (McLeod et al., Citation2023), and shared decision-making (Fitzpatrick et al., Citation2023). Three contributions focus on informant discrepancies observed when assessing specific mental health domains, including autism spectrum (Kang et al., Citation2023), the limited prosocial emotions specifier linked to assessments of conduct disorder (Castagna & Waschbusch, Citation2023), and suicide risk (Spears et al., Citation2023). We close the Special Issue with an editorial statement that highlights key findings, recommendations, and future directions for the study of informant discrepancies in youth mental health assessments (De Los Reyes, Epkins et al., Citation2023). The 70 coauthors on the statement traverse disciplines, career stages, and areas of expertise, and include journal editors and presidents of scholarly societies.

Concluding Comments

My hope is that this work paves the way toward a strong consensus among clinical researchers and practitioners that our field would greatly benefit from carefully conducted studies of the basic science behind informant discrepancies. Ideally, these studies should seek to actively distinguish the circumstances in which informant discrepancies reflect real constructs that are capable of informing clinical research and practice from the circumstances in which they should, in fact, be treated as methodological artifacts. Our field can then use the findings from this basic research to develop methods for increasing the interpretative power of informant discrepancies within clinical assessments of children. I am very curious about the work that lies ahead. After reading these articles, I hope you are curious as well. (De Los Reyes, Citation2011, p. 8)

We opened our concluding comments to this introductory article with the concluding paragraph of the introductory article from the 2011 JCCAP Special Section. A dozen years later, we see an accumulating body of research that clearly articulates instances in which informant discrepancies reflect domain-relevant information―data pertinent to the very domains about which informants provide reports. In these last dozen years, we saw the advent of analytic models conducive to testing questions about informant discrepancies, as well as innovations in measurement and study design. A dozen years later, we now have conceptual models that help us (a) distinguish informant discrepancies that reflect domain-relevant information from discrepancies that reflect measurement confounds (Operations Triad Model; Figure 2), as well as (b) understand the implications of informant discrepancies when delivering mental health services to individual youth clients (Needs-to-Goals Gap; Figures 3 and 4). A key goal in 2011 was to question the prevailing approach to informant discrepancies. Are these discrepancies solely the byproducts of research artifacts as described by McGuire (Citation1969)? What if we revised our approach to informant discrepancies and framed them as topics of curiosity in their own right? If given appropriate empirical and theoretical attention, might informant discrepancies even improve our understanding of youth mental health? To the degree that those of us who study informant discrepancies took this revised approach and made others more curious about what these discrepancies might reflect, it appears we met the moment, so to speak.

Now, we all have a new moment to meet. Although we in youth mental health often encounter Diverging Operations scenarios when administering multi-informant assessments and interpreting their outcomes, we have all been carrying out this work in a “Converging Operations world.” In this world, our most commonly implemented analytic procedures for integrating multi-informant reports assume that Diverging Operations scenarios do not exist. This reality of our research arguably has had downstream effects on settings beyond the laboratory; settings in which, much like the laboratory, we take multi-informant approaches to assessment. Unfortunately, we have yet to learn how best to integrate data from multi-informant assessments when making clinical decisions. This, then, is the conflict we face in the current moment: We frequently encounter data conditions that are incompatible with the infrastructure we have in place for understanding, scoring, and integrating multi-informant data of all kinds—from the data collected in large-sample studies to the data resulting from clinical assessments administered to individual clients.

At the same time, we see a path to resolving this conflict. We see emerging work that sparks our curiosity and motivates us to do more. This motivation helps us build capacity to ask new questions and confront head-on the conflict we face. When informant discrepancies reflect domain-relevant information, what analytic procedures facilitate retaining this information? Do we need to develop entirely new analytic procedures, in addition to refining existing procedures? Might modified versions of procedures that were originally developed to integrate multi-informant data at the sample level facilitate such integration at the level of the individual client? To fully optimize these integrative procedures, should we also consider modifying the item content of our multi-informant instruments, or perhaps develop new instruments?

Are you curious about the answers to these questions? Do you think you have the motivation and will to address them, and perhaps extend the work of scholars who contributed to this Special Issue? Do you have new questions of your own that warrant addressing? If so, then we welcome your work on these questions. There is more than enough space for you to contribute. If informant discrepancies define the outcomes of youth mental health assessments, then we need every one of you who reads the articles in this Special Issue to put thought into questions surrounding these discrepancies. What more must we learn? What ideas must we explore? What tools must we develop? The days of actively ignoring these informant discrepancies are over. For the sake of youth mental health, we need your active engagement from here on out.

Disclosure Statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

Efforts by the first author were supported by grants from the Fulbright U.S. Scholars Program (Fulbright Canada Research Chair in Mental Health), and the Institute of Education Sciences [R324A180032].

References

  • Achenbach, T. M., McConaughy, S. H., & Howell, C. T. (1987). Child/adolescent behavioral and emotional problems: Implications of cross-informant correlations for situational specificity. Psychological Bulletin, 101(2), 213–232. https://doi.org/10.1037/0033-2909.101.2.213
  • Alfano, C. A., & Beidel, D. C. (2011). Social anxiety in adolescents and young adults: Translating developmental science into practice. American Psychological Association.
  • Al Ghriwati, N., Winter, M. A., Greenlee, J. L., & Thompson, E. L. (2018). Discrepancies between parent and self-reports of adolescent psychosocial symptoms: Associations with family conflict and asthma outcomes. Journal of Family Psychology, 32(7), 992–997. https://doi.org/10.1037/fam0000459
  • Augenstein, T. M., Visser, K. H., Gallagher, K., De Los Reyes, A., D’Angelo, E. A., & Nock, M. K. (2022). Multi-informant reports of depressive symptoms and suicidal ideation among adolescent inpatients. Suicide & Life-Threatening Behavior, 52(1), 99–109. https://doi.org/10.1111/sltb.12803
  • Bartholomew, D. J., Steele, F., Moustaki, I., & Galbraith, J. I. (2002). The analysis and interpretation of multivariate data for social scientists. Chapman & Hall/CRC.
  • Beck, A. T. (1993). Cognitive therapy: Past, present, and future. Journal of Consulting and Clinical Psychology, 61(2), 194–198. https://doi.org/10.1037/0022-006X.61.2.194
  • Becker-Haimes, E. M., Jensen Doss, A., Birmaher, B., Kendall, P. C., & Ginsburg, G. S. (2018). Parent–youth informant disagreement: Implications for youth anxiety treatment. Clinical Child Psychology and Psychiatry, 23(1), 42–56. https://doi.org/10.1177/1359104516689586
  • Beidas, R. S., Stewart, R. E., Walsh, L., Lucas, S., Downey, M. M., Jackson, K., Fernandez, T., & Mandell, D. S. (2015). Free, brief, and validated: Standardized instruments for low-resource mental health settings. Cognitive and Behavioral Practice, 22(1), 5–19. https://doi.org/10.1016/j.cbpra.2014.02.002
  • Bonadio, F. T., Evans, S. C., Cho, G. Y., Callahan, K. P., Chorpita, B. F., Weisz, J. R., & Research Network on Youth Mental Health. (2022). Whose outcomes come out? Patterns of caregiver- and youth-reported outcomes based on caregiver-youth baseline discrepancies. Journal of Clinical Child & Adolescent Psychology, 51(4), 469–483.
  • Borelli, J. L., Smiley, P. A., Rasmussen, H. F., & Gómez, A. (2016). Is it about me, you, or us? Stress reactivity correlates of discrepancies in we-talk among parents and preadolescent children. Journal of Youth and Adolescence, 45(10), 1996–2010. https://doi.org/10.1007/s10964-016-0459-5
  • Boutron, I., Dutton, S., Ravaud, P., & Altman, D. G. (2010). Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA, 303(20), 2058–2064. https://doi.org/10.1001/jama.2010.651
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait- multimethod matrix. Psychological Bulletin, 56(2), 81–105. https://doi.org/10.1037/h0046016
  • Cannon, C. J., Makol, B. A., Keeley, L. M., Qasmieh, N., Okuno, H., Racz, S. J., & De Los Reyes, A. (2020). A paradigm for understanding adolescent social anxiety with unfamiliar peers: Conceptual foundations and directions for future research. Clinical Child and Family Psychology Review, 23(3), 338–364. https://doi.org/10.1007/s10567-020-00314-4
  • Casey, R. J., & Berman, J. S. (1985). The outcomes of psychotherapy with children. Psychological Bulletin, 98(2), 388–400. https://doi.org/10.1037/0033-2909.98.2.388
  • Castagna, P., & Waschbusch, D. (2023). Multi-informant ratings of childhood limited prosocial emotions: Mother, father, and teacher perspectives. Journal of Clinical Child and Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2151452
  • Charamut, N. R., Racz, S. J., Wang, M., & De Los Reyes, A. (2022). Integrating multi-informant reports of youth mental health: A construct validation test of Kraemer and Colleagues’ (2003) Satellite Model. Frontiers in Psychology, 13, 911629. https://doi.org/10.3389/fpsyg.2022.911629
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Erlbaum.
  • De Los Reyes, A. (2011). More than measurement error: Discovering meaning behind informant discrepancies in clinical assessments of children and adolescents. Journal of Clinical Child & Adolescent Psychology, 40(1), 1–9. https://doi.org/10.1080/15374416.2011.533405
  • De Los Reyes, A., Aldao, A., Thomas, S. A., Daruwala, S., Swan, A. J., Van Wie, M., Goepel, K., & Lechner, W. V. (2012). Adolescent self-reports of social anxiety: Can they disagree with objective psychophysiological measures and still be valid? Journal of Psychopathology and Behavioral Assessment, 34(3), 308–322. https://doi.org/10.1007/s10862-012-9289-2
  • De Los Reyes, A., Alfano, C. A., Lau, S., Augenstein, T. M., & Borelli, J. L. (2016). Can we use convergence between caregiver reports of adolescent mental health to index severity of adolescent mental health concerns? Journal of Child and Family Studies, 25(1), 109–123. https://doi.org/10.1007/s10826-015-0216-5
  • De Los Reyes, A., Augenstein, T. M., Wang, M., Thomas, S. A., Drabick, D. A. G., Burgers, D., & Rabinowitz, J. (2015). The validity of the multi-informant approach to assessing child and adolescent mental health. Psychological Bulletin, 141(4), 858–900. https://doi.org/10.1037/a0038498
  • De Los Reyes, A., Cook, C. R., Gresham, F. M., Makol, B. A., & Wang, M. (2019). Informant discrepancies in assessments of psychosocial functioning in school-based services and research: Review and directions for future research. Journal of School Psychology, 74, 74–89. https://doi.org/10.1016/j.jsp.2019.05.005
  • De Los Reyes, A., Cook, C. R., Sullivan, M., Morrell, N., Hamlin, C., Wang, M., Gresham, F. M., Makol, B. M., Keeley, L. M., & Qasmieh, N. (2022). The work and social adjustment scale for youth: Psychometric properties of the teacher version and evidence of contextual variability in psychosocial impairments. Psychological Assessment, 34(8), 777–790. https://doi.org/10.1037/pas0001139
  • De Los Reyes, A., Drabick, D. A. G., Makol, B. A., & Jakubovic, R. (2020). Introduction to the special section: The research domain criteria’s units of analysis and cross-unit correspondence in youth mental health research. Journal of Clinical Child and Adolescent Psychology, 49(3), 279–296. https://doi.org/10.1080/15374416.2020.1738238
  • De Los Reyes, A., Epkins, C. C., Asmundson, G. J. G., Augenstein, T. M., Becker, K. D., Becker, S. P., Bonadio, F. T., Borelli, J. L., Boyd, R. C., Bradshaw, C. P., Burns, G. L., Casale, G., Causadias, J. M., Cha, C. B., Chorpita, B. F., Cohen, J. R., Comer, J. S., Crowell, S. E., Dirks, M. A., Youngstrom, E. A., et al. (2023). Editorial statement about JCCAP’s 2023 special issue on informant discrepancies in youth mental health assessments: Observations, guidelines, and future directions grounded in 60 years of research. Journal of Clinical Child & Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2158842
  • De Los Reyes, A., Henry, D. B., Tolan, P. H., & Wakschlag, L. S. (2009). Linking informant discrepancies to observed variations in young children’s disruptive behavior. Journal of Abnormal Child Psychology, 37(5), 637–652. https://doi.org/10.1007/s10802-009-9307-3
  • De Los Reyes, A., & Kazdin, A. E. (2005). Informant discrepancies in the assessment of childhood psychopathology: A critical review, theoretical framework, and recommendations for further study. Psychological Bulletin, 131(4), 483–509. https://doi.org/10.1037/0033-2909.131.4.483
  • De Los Reyes, A., & Kazdin, A. E. (2006). Conceptualizing changes in behavior in intervention research: The range of possible changes model. Psychological Review, 113(3), 554–583. https://doi.org/10.1037/0033-295X.113.3.554
  • De Los Reyes, A., & Kazdin, A. E. (2008). When the evidence says, “yes, no, and maybe so”: Attending to and interpreting inconsistent findings among evidence-based interventions. Current Directions in Psychological Science, 17(1), 47–51. https://doi.org/10.1111/j.1467-8721.2008.00546.x
  • De Los Reyes, A., Lerner, M. D., Keeley, L. M., Weber, R., Drabick, D. A. G., Rabinowitz, J., & Goodman, K. L. (2019). Improving interpretability of subjective assessments about psychological phenomena: A review and cross-cultural meta-analysis. Review of General Psychology, 23(3), 293–319. https://doi.org/10.1177/1089268019837645
  • De Los Reyes, A., Lerner, M. D., Thomas, S. A., Daruwala, S. E., & Goepel, K. A. (2013). Discrepancies between parent and adolescent beliefs about daily life topics and performance on an emotion recognition task. Journal of Abnormal Child Psychology, 41(6), 971–982. https://doi.org/10.1007/s10802-013-9733-0
  • De Los Reyes, A., & Makol, B. A. (2019). Evidence-based assessment. In T. H. Ollendick, L. Farrell, & P. Muris (Eds.), Innovations in CBT for childhood anxiety, OCD, and PTSD: Improving access and outcomes (pp. 28–51). Cambridge University Press.
  • De Los Reyes, A., & Makol, B. A. (2022). Informant reports in clinical assessment. In G. Asmundson (Ed.), Comprehensive clinical psychology (2nd ed., Vol. 4, pp. 105–122). Elsevier. https://doi.org/10.1016/B978-0-12-818697-8.00113-8
  • De Los Reyes, A., & Ohannessian, C. M. (2016). Introduction to the special issue: Discrepancies in adolescent-parent perceptions of the family and adolescent adjustment. Journal of Youth and Adolescence, 45(10), 1957–1972. https://doi.org/10.1007/s10964-016-0533-z
  • De Los Reyes, A., Ohannessian, C. M., & Laird, R. D. (2016). Developmental changes in discrepancies between adolescents’ and their mothers’ views of family communication. Journal of Child and Family Studies, 25(3), 790–797. https://doi.org/10.1007/s10826-015-0275-7
  • De Los Reyes, A., Ohannessian, C. M., & Racz, S. J. (2019). Discrepancies between adolescent and parent reports about family relationships. Child Development Perspectives, 13(1), 53–58. https://doi.org/10.1111/cdep.12306
  • De Los Reyes, A., Salas, S., Menzer, M. M., & Daruwala, S. E. (2013). Criterion validity of interpreting scores from multi-informant statistical interactions as measures of informant discrepancies in psychological assessments of children and adolescents. Psychological Assessment, 25(2), 509–519. https://doi.org/10.1037/a0032081
  • De Los Reyes, A., Talbott, E., Power, T., Michel, J., Cook, C. R., Racz, S. J., & Fitzpatrick, O. (2022). The Needs-to-Goals Gap: How informant discrepancies in youth mental health assessments impact service delivery. Clinical Psychology Review, 92, 102114. https://doi.org/10.1016/j.cpr.2021.102114
  • De Los Reyes, A., Thomas, S. A., Goodman, K. L., & Kundey, S. M. A. (2013). Principles underlying the use of multiple informants’ reports. Annual Review of Clinical Psychology, 9(1), 123–149. https://doi.org/10.1146/annurev-clinpsy-050212-185617
  • De Los Reyes, A., Tyrell, F. A., Watts, A. L., & Asmundson, G. J. G. (2022). Conceptual, methodological, and measurement factors that disqualify use of measurement invariance techniques to detect informant discrepancies in youth mental health assessments. Frontiers in Psychology, 13, 931296. https://doi.org/10.3389/fpsyg.2022.931296
  • De Los Reyes, A., Wang, M., Lerner, M. D., Makol, B. A., Fitzpatrick, O., & Weisz, J. R. (2023). The Operations Triad Model and youth mental health assessments: Catalyzing a paradigm shift in measurement validation. Journal of Clinical Child and Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2111684
  • de Nijs, P. F., Ferdinand, R. F., de Bruin, E. I., Dekker, M. C., van Duijn, C. M., & Verhulst, F. C. (2004). Attention-deficit/hyperactivity disorder (ADHD): Parents’ judgment about school, teachers’ judgment about home. European Child & Adolescent Psychiatry, 13(5), 315–320. https://doi.org/10.1007/s00787-004-0405-z
  • Deros, D. E., Racz, S. J., Lipton, M. F., Augenstein, T. M., Karp, J. N., Keeley, L. M., Qasmieh, N., Grewe, B. I., Aldao, A., & De Los Reyes, A. (2018). Multi-informant assessments of adolescent social anxiety: Adding clarity by leveraging reports from unfamiliar peer confederates. Behavior Therapy, 49(1), 84–98. https://doi.org/10.1016/j.beth.2017.05.001
  • Edwards, J. R. (1994). The study of congruence in organizational behavior research: Critique and a proposed alternative. Organizational Behavior and Human Decision Processes, 58(1), 51–100. https://doi.org/10.1006/obhd.1994.1029
  • Fisher, E., Bromberg, M. H., Tai, G., & Palermo, T. M. (2017). Adolescent and parent treatment goals in an internet-delivered chronic pain self-management program: Does agreement of treatment goals matter? Journal of Pediatric Psychology, 42(6), 657–666. https://doi.org/10.1093/jpepsy/jsw098
  • Fiske, D. W., & Campbell, D. T. (1992). Citations do not solve problems. Psychological Bulletin, 112(3), 393–395. https://doi.org/10.1037/0033-2909.112.3.393
  • Fitzpatrick, O., Holcomb, J. M., Weisz, J. R., & Langer, D. A. (2023). Shared decision-making as a tool for navigating multi-stakeholder discrepancies in youth psychotherapy. Journal of Clinical Child and Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2127105
  • Follet, L., Okuno, H., & De Los Reyes, A. (2022). Assessing peer-related impairments linked to adolescent social anxiety: Strategic selection of informants optimizes prediction of clinically relevant domains. Behavior Therapy. Advance online publication. https://doi.org/10.1016/j.beth.2022.06.010
  • Garb, H. N. (2003). Incremental validity and the assessment of psychopathology in adults. Psychological Assessment, 15(4), 508–520. https://doi.org/10.1037/1040-3590.15.4.508
  • Garner, W. R., Hake, H. W., & Eriksen, C. W. (1956). Operationism and the concept of perception. Psychological Review, 63(3), 149–159. https://doi.org/10.1037/h0042992
  • Glenn, L. E., Keeley, L. M., Szollos, S., Okuno, H., Wang, X., Rausch, E., Deros, D. E., Karp, J. N., Qasmieh, N., Makol, B. A., Augenstein, T. M., Lipton, M. F., Racz, S. J., Scharfstein, L., Beidel, D. C., & De Los Reyes, A. (2019). Trained observers’ ratings of adolescents’ social anxiety and social skills within controlled, cross-contextual social interactions with unfamiliar peer confederates. Journal of Psychopathology and Behavioral Assessment, 41(1), 1–15. https://doi.org/10.1007/s10862-018-9676-4
  • Grace, R. C. (2001). On the failure of operationism. Theory & Psychology, 11(1), 5–33. https://doi.org/10.1177/0959354301111001
  • Hawker, D. S., & Boulton, M. J. (2000). Twenty years’ research on peer victimization and psychosocial maladjustment: A meta-analytic review of cross-sectional studies. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 41(4), 441–455. https://doi.org/10.1111/1469-7610.00629
  • Hawley, K. M., & Weisz, J. R. (2003). Child, parent, and therapist (dis)agreement on target problems in outpatient therapy: The therapist’s dilemma and its implications. Journal of Consulting and Clinical Psychology, 71(1), 62–70. https://doi.org/10.1037/0022-006X.71.1.62
  • Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X
  • Highhouse, S. (2009). Designing experiments that generalize. Organizational Research Methods, 12(3), 554–566. https://doi.org/10.1177/1094428107300396
  • Hoffman, L. J., & Chu, B. C. (2015). Target problem (mis) matching: Predictors and consequences of parent–youth agreement in a sample of anxious youth. Journal of Anxiety Disorders, 31, 11–19. https://doi.org/10.1016/j.janxdis.2014.12.015
  • Hofmann, S. G., Albano, A. M., Heimberg, R. G., Tracey, S., Chorpita, B. F., & Barlow, D. H. (1999). Subtypes of social phobia in adolescents. Depression and Anxiety, 9(1), 15–18. https://doi.org/10.1002/(SICI)1520-6394(1999)9:1<15:AID-DA2>3.0.CO;2-6
  • Hou, Y., Benner, A. D., Kim, S. Y., Chen, S., Spitz, S., Shi, Y., & Beretvas, T. (2020). Discordance in parents’ and adolescents’ reports of parenting: A meta-analysis and qualitative review. The American Psychologist, 75(3), 329–348. https://doi.org/10.1037/amp0000463
  • Human, L. J., Dirks, M. A., DeLongis, A., & Chen, E. (2016). Congruence and incongruence in adolescents’ and parents’ perceptions of the family: Using response surface analysis to examine links with adolescents’ psychological adjustment. Journal of Youth and Adolescence, 45(10), 2022–2035. https://doi.org/10.1007/s10964-016-0517-z
  • Humphreys, K. L., Weems, C. F., & Scheeringa, M. S. (2017). The role of anxiety control and treatment implications of informant agreement on child PTSD symptoms. Journal of Clinical Child & Adolescent Psychology, 46(6), 903–914. https://doi.org/10.1080/15374416.2015.1094739
  • Hunsley, J., & Lee, C. M. (2014). Introduction to clinical psychology (2nd ed.). Wiley.
  • Hunsley, J., & Mash, E. J. (Eds.). (2018). A guide to assessments that work (2nd ed.). Oxford University Press.
  • Ivanova, M., Achenbach, T. M., & Turner, L. (2022). Associations of parental depression with children’s internalizing and externalizing: Meta-analyses of cross-sectional and longitudinal effects. Journal of Clinical Child & Adolescent Psychology, 51(6), 827–849. https://doi.org/10.1080/15374416.2022.2127104
  • Kang, E., Lerner, M. D., & Gadow, K. D. (2023). The importance of parent-teacher informant discrepancy in characterizing autistic youth: A replication latent profile analysis. Journal of Clinical Child and Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2154217
  • Kazdin, A. E. (2013). Behavior modification in applied settings (7th ed.). Waveland.
  • Korelitz, K. E., & Garber, J. (2016). Congruence of parents’ and children’s perceptions of parenting: A meta-analysis. Journal of Youth and Adolescence, 45(10), 1973–1995. https://doi.org/10.1007/s10964-016-0524-0
  • Kraemer, H. C., Measelle, J. R., Ablow, J. C., Essex, M. J., Boyce, W. T., & Kupfer, D. J. (2003). A new approach to integrating data from multiple informants in psychiatric assessment and research: Mixing and matching contexts and perspectives. The American Journal of Psychiatry, 160(9), 1566–1577. https://doi.org/10.1176/appi.ajp.160.9.1566
  • Laird, R. D. (2020). Analytical challenges of testing hypotheses of agreement and discrepancy: Comment on Campione-Barr, Lindell, and Giron (2020). Developmental Psychology, 56(5), 970–977. https://doi.org/10.1037/dev0000763
  • Laird, R. D., & De Los Reyes, A. (2013). Testing informant discrepancies as predictors of early adolescent psychopathology: Why difference scores cannot tell you what you want to know and how polynomial regression may. Journal of Abnormal Child Psychology, 41(1), 1–14. https://doi.org/10.1007/s10802-012-9659-y
  • Lapouse, R., & Monk, M. A. (1958). An epidemiologic study of behavior characteristics in children. American Journal of Public Health, 48(9), 1134–1144. https://doi.org/10.2105/AJPH.48.9.1134
  • Lerner, M. D., De Los Reyes, A., Drabick, D. G., Gerber, A. H., & Gadow, K. D. (2017). Informant discrepancy defines discrete, clinically useful autism spectrum disorder subgroups. Journal of Child Psychology and Psychiatry, 58(7), 829–839. https://doi.org/10.1111/jcpp.12730
  • Leung, J. T., Shek, D. T., & Li, L. (2016). Mother–child discrepancy in perceived family functioning and adolescent developmental outcomes in families experiencing economic disadvantage in Hong Kong. Journal of Youth and Adolescence, 45(10), 2036–2048. https://doi.org/10.1007/s10964-016-0469-3
  • Levy, P. (1969). Platonic true scores and rating scales: A case of uncorrelated definitions. Psychological Bulletin, 71(4), 276–277. https://doi.org/10.1037/h0026855
  • Loeber, R., Green, S. M., & Lahey, B. B. (1990). Mental health professionals’ perception of the utility of children, mothers, and teachers as informants on childhood psychopathology. Journal of Clinical Child Psychology, 19(2), 136–143. https://doi.org/10.1207/s15374424jccp1902_5
  • Lord, C., Rutter, M., DiLavore, P. C., Risi, S., Gotham, K., & Bishop, S. L. (2012). Autism diagnostic observation schedule, second edition (ADOS-2) manual (Part 1): Modules 1–4. Western Psychological Services.
  • Luo, R., Chen, F., Yuan, C., Ma, X., & Zhang, C. (2020). Parent–child discrepancies in perceived parental favoritism: Associations with children’s internalizing and externalizing problems in Chinese families. Journal of Youth and Adolescence, 49(1), 60–73. https://doi.org/10.1007/s10964-019-01148-2
  • Makol, B. A., De Los Reyes, A., Garrido, E., Harlaar, N., & Taussig, H. (2021). Assessing the mental health of maltreated youth with child welfare involvement using multi-informant reports. Child Psychiatry and Human Development, 52(1), 49–62. https://doi.org/10.1007/s10578-020-00985-8
  • Makol, B. A., De Los Reyes, A., Ostrander, R., & Reynolds, E. K. (2019). Parent-youth divergence (and convergence) in reports of youth internalizing problems in psychiatric inpatient care. Journal of Abnormal Child Psychology, 47(10), 1677–1689. https://doi.org/10.1007/s10802-019-00540-7
  • Makol, B. A., & Polo, A. J. (2018). Parent-child endorsement discrepancies among youth at chronic-risk for depression. Journal of Abnormal Child Psychology, 46(5), 1077–1088. https://doi.org/10.1007/s10802-017-0360-z
  • Makol, B. A., Youngstrom, E. A., Racz, S. J., Qasmieh, N., Glenn, L. E., & De Los Reyes, A. (2020). Integrating multiple informants’ reports: How conceptual and measurement models may address long-standing problems in clinical decision- making. Clinical Psychological Science, 8(6), 953–970. https://doi.org/10.1177/2167702620924439
  • McGuire, W. J. (1969). Suspiciousness of experimenter’s intent. In R. Rosenthal & R. L. Rosnow (Eds.), Artifact in behavioral research (pp. 13–57). Academic Press.
  • McLeod, B. D., Porter, N., Hogue, A., Becker-Haimes, E., & Jensen Doss, A. (2023). What is the status of multi-informant treatment fidelity research? Journal of Clinical Child and Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2151713
  • Nelemans, S. A., Branje, S. J., Hale, W. W., Goossens, L., Koot, H. M., Oldehinkel, A. J., & Meeus, W. H. (2016). Discrepancies between perceptions of the parent–adolescent relationship and early adolescent depressive symptoms: An illustration of polynomial regression analysis. Journal of Youth and Adolescence, 45(10), 2049–2063. https://doi.org/10.1007/s10964-016-0503-5
  • Nichols, L. M., & Tanner-Smith, E. E. (2022). Discrepant parent-adolescent reports of parenting practices: Associations with adolescent internalizing and externalizing symptoms. Journal of Youth and Adolescence, 51(6), 1153–1168. https://doi.org/10.1007/s10964-022-01601-9
  • Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
  • Offord, D. R., Boyle, M. H., Racine, Y., Szatmari, P., Fleming, J. E., Sanford, M., & Lipman, E. L. (1996). Integrating assessment data from multiple informants. Journal of the American Academy of Child and Adolescent Psychiatry, 35(8), 1078–1085. https://doi.org/10.1097/00004583-199608000-00019
  • Ohannessian, C. M., & De Los Reyes, A. (2014). Discrepancies in adolescents’ and their mothers’ perceptions of the family and adolescent anxiety symptomatology. Parenting: Science and Practice, 14(1), 1–18. https://doi.org/10.1080/15295192.2014.870009
  • Ohannessian, C. M., Laird, R. D., & De Los Reyes, A. (2016). Discrepancies in adolescents’ and their mother’s perceptions of the family and mothers’ psychological symptomatology. Journal of Youth and Adolescence, 45(10), 2011–2021. https://doi.org/10.1007/s10964-016-0477-3
  • Okuno, H., Rezeppa, T., Raskin, T., & De Los Reyes, A. (2022). Adolescent safety behaviors and social anxiety: Links to psychosocial impairments and functioning with unfamiliar peer confederates. Behavior Modification, 46(6), 1314–1345. https://doi.org/10.1177/01454455211054019
  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
  • Renk, K., & Phares, V. (2004). Cross-informant ratings of social competence in children and adolescents. Clinical Psychology Review, 24(2), 239–254. https://doi.org/10.1016/j.cpr.2004.01.004
  • Rezeppa, T., Okuno, H., Qasmieh, N., Racz, S. J., Borelli, J. L., & De Los Reyes, A. (2021). Unfamiliar untrained observers’ ratings of adolescent safety behaviors within social interactions with unfamiliar peer confederates. Behavior Therapy, 52(3), 564–576. https://doi.org/10.1016/j.beth.2020.07.006
  • Richters, J. E. (1992). Depressed mothers as informants about their children: A critical review of the evidence for distortion. Psychological Bulletin, 112(3), 485–499. https://doi.org/10.1037/0033-2909.112.3.485
  • Roest, J. J., Welmers-van de Poll, M. J., Van der Helm, G. P., Stams, G. J. J., & Hoeve, M. (2023). A meta-analysis on differences and associations between alliance ratings in child and adolescent psychotherapy. Journal of Clinical Child & Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2093210
  • Romano, E., Weegar, K., Babchishin, L., & Saini, M. (2018). Cross-informant agreement on mental health outcomes in children with maltreatment histories: A systematic review. Psychology of Violence, 8(1), 19–30. https://doi.org/10.1037/vio0000086
  • Rubio-Stipec, M., Fitzmaurice, G., Murphy, J., & Walker, A. (2003). The use of multiple informants in identifying the risk factors of depressive and disruptive disorders: Are they interchangeable? Social Psychiatry and Psychiatric Epidemiology, 38(2), 51–58. https://doi.org/10.1007/s00127-003-0600-0
  • Schekman, R. (2016). Introduction: The challenge of reproducibility. Annual Review of Cell and Developmental Biology, 32(1). https://doi.org/10.1146/annurev-cb-32-100316-100001
  • Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13(2), 90–100. https://doi.org/10.1037/a0015108
  • Silberzahn, R., & Uhlmann, E. L. (2015). Crowdsourced research: Many hands make tight work. Nature, 526(7572), 189–191. https://doi.org/10.1038/526189a
  • Skinner, B. F. (1953). Science and human behavior. Macmillan.
  • Smetana, J. G. (2008). ‘‘It’s 10 o’clock: Do you know where your children are?’’ Recent advances in understanding parental monitoring and adolescents’ information management. Child Development Perspectives, 2(1), 19–25. https://doi.org/10.1111/j.1750-8606.2008.00036.x
  • Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. The American Psychologist, 32(9), 752–760. https://doi.org/10.1037/0003-066X.32.9.752
  • Spears, A. P., Gratch, I., Nam, R. J., Goger, P., & Cha, C. B. (2023). Future directions in understanding and interpreting discrepant reports of suicidal thoughts and behaviors among youth. Journal of Clinical Child & Adolescent Psychology, 52(1). https://doi.org/10.1080/15374416.2022.2145567
  • Stratis, E. A., & Lecavalier, L. (2015). Informant agreement for youth with autism spectrum disorder or intellectual disability: A meta-analysis. Journal of Autism and Developmental Disorders, 45(4), 1026–1041. https://doi.org/10.1007/s10803-014-2258-8
  • Tackett, J. L., Lilienfeld, S. O., Patrick, C. J., Johnson, S. L., Krueger, R. F., Miller, J. D., Oltmanns, T. F., & Shrout, P. E. (2017). It’s time to broaden the replicability conversation: Thoughts for and from clinical psychological science. Perspectives on Psychological Science, 12(5), 742–756. https://doi.org/10.1177/1745691617690042
  • Talbott, E., De Los Reyes, A., Power, T., Michel, J., & Racz, S. J. (2021). A team-based collaborative care model for youth with attention deficit hyperactivity disorder in education and pediatric health care settings. Journal of Emotional and Behavioral Disorders, 29(1), 24–33. https://doi.org/10.1177/1063426620949987
  • Trang, D. T., & Yates, T. M. (2020). (In)congruent parent–child reports of parental behaviors and later child outcomes. Journal of Child and Family Studies, 29(7), 1845–1860. https://doi.org/10.1007/s10826-020-01733-1
  • Van Heel, M., Bijttebier, P., Colpin, H., Goossens, L., Van Den Noortgate, W., Verschueren, K., & Van Leeuwen, K. (2019). Adolescent-parent discrepancies in perceptions of parenting: Associations with adolescent externalizing problem behavior. Journal of Child and Family Studies, 28(11), 3170–3182. https://doi.org/10.1007/s10826-019-01493-7
  • Van Petegem, S., Antonietti, J. P., Eira Nunes, C., Kins, E., & Soenens, B. (2020). The relationship between maternal overprotection, adolescent internalizing and externalizing problems, and psychological need frustration: A multi-informant study using response surface analysis. Journal of Youth and Adolescence, 49(1), 162–177. https://doi.org/10.1007/s10964-019-01126-8
  • Watts, A. L., Makol, B. A., Palumbo, I. M., De Los Reyes, A., Olino, T. M., Latzman, R. D., DeYoung, C. G., Wood, P. K., & Sher, K. J. (2022). How robust is the p-factor? Using multitrait-multimethod modeling to inform the meaning of general factors of psychopathology in youth. Clinical Psychological Science, 10(4), 640–661. https://doi.org/10.1177/21677026211055170
  • Weisz, J. R., Jensen Doss, A., & Hawley, K. M. (2005). Youth psychotherapy outcome research: A review and critique of the evidence base. Annual Review of Psychology, 56(1), 337–363. https://doi.org/10.1146/annurev.psych.55.090902.141449
  • Weisz, J. R., Kuppens, S., Ng, M. Y., Eckshtain, D., Ugueto, A. M., Vaughn-Coaxum, R., Jensen Doss, A., Hawley, K. M., Krumholz Marchette, L. S., Chu, B. C., Weersing, V. R., & Fordwood, S. R. (2017). What five decades of research tells us about the effects of youth psychological therapy: A multilevel meta-analysis and implications for science and practice. The American Psychologist, 72(2), 79–117. https://doi.org/10.1037/a0040360
  • Weisz, J. R., McCarty, C. A., & Valeri, S. M. (2006). Effects of psychotherapy for depression in children and adolescents: A meta-analysis. Psychological Bulletin, 132(1), 132–149. https://doi.org/10.1037/0033-2909.132.1.132
  • Weisz, J. R., Weiss, B., Alicke, M. D., & Klotz, M. L. (1987). Effectiveness of psychotherapy with children and adolescents: A meta-analysis for clinicians. Journal of Consulting and Clinical Psychology, 55(4), 542–549. https://doi.org/10.1037/0022-006X.55.4.542
  • Weisz, J. R., Weiss, B., Han, S. S., Granger, D. A., & Morton, T. (1995). Effects of psychotherapy with children and adolescents revisited: A meta-analysis of treatment outcome studies. Psychological Bulletin, 117(3), 450–468. https://doi.org/10.1037/0033-2909.117.3.450
  • Yeh, M., & Weisz, J. R. (2001). Why are we here at the clinic? Parent-child (dis)agreement on referral problems at outpatient treatment entry. Journal of Consulting and Clinical Psychology, 69(6), 1018–1025. https://doi.org/10.1037/0022-006X.69.6.1018
  • Zilcha-Mano, S., Shimshoni, Y., Silverman, W. K., & Lebowitz, E. R. (2021). Parent-child agreement on family accommodation differentially predicts outcomes of child-based and parent-based child anxiety treatment. Journal of Clinical Child & Adolescent Psychology, 50(3), 427–439. https://doi.org/10.1080/15374416.2020.1756300
