Intervention integrity in the Low Countries: Interventions targeting social-emotional behaviors in the school

Abstract

The current study presents a review of intervention studies conducted in the Low Countries (i.e., the Netherlands and Flanders) focusing on social-emotional behaviors in the school. The primary purpose of this review was to assess whether studies included an operational definition of the intervention under study and reported data on its implementation. The former is important for replication, whereas the latter is required to support the claim that the intervention under study actually produced the changes in the behavior it targeted. A total of 102 studies, published in international or Dutch/Flemish journals between 2000 and 2015, were reviewed. Results indicate that a substantial proportion of studies included an operational definition of the intervention or referred to its source (82.3%), but only a minority of studies provided quantitative or qualitative intervention integrity data (32.4%). It was concluded that researchers in the Low Countries should pay greater attention to intervention integrity, while journals should adopt more stringent criteria for evaluating intervention integrity data.

Intervention programs applied within a school context should be evidence-based. In discussing issues involved in the application of evidence-based interventions in school practice, Kratochwill (Citation2004) argued that an intervention may receive the predicate "evidence-based" only when information about its contextual application is specified and its efficacy under the conditions of implementation is demonstrated (Kratochwill, Citation2004, p. 35). It is obviously impossible to estimate the extent to which an intervention is implemented as intended without an adequate assessment of its "integrity" or "fidelity" (Yeaton & Sechrest, Citation1981). The assessment of intervention integrity is therefore of vital importance to the evaluation of any program claiming to be evidence-based. In order to attribute changes in the target behavior to the intervention, it must be demonstrated that these changes result from manipulation of the independent variable (i.e., the critical components of the intervention). When intervention integrity is not evaluated, it is difficult to decide whether a poor outcome is due to a deficient conceptualization of the intervention or to an inadequate administration.

At face value, the definition of intervention integrity in terms of "implemented as intended" seems to capture the essence of an intervention that is put into practice faithfully. But the factors associated with the presence or absence of intervention integrity are more complex than this definition implies. In reviewing intervention studies published between 1980 and 1994, Dane and Schneider (Citation1998) pointed to the multidimensionality of intervention integrity. They distinguished between intervention "adherence" (i.e., the extent to which the intervention objectives are met), "exposure" (i.e., how much of the intervention is delivered), "quality of delivery" (i.e., how well critical intervention components are delivered), "responsiveness" (i.e., the engagement of the participant in the intervention program), and intervention "differentiation" (i.e., the uniqueness of intervention components vs. extraneous or irrelevant influences). In discussing the complexities of intervention integrity across conceptual models, Sanetti and Kratochwill (Citation2009) arrived at a working definition of intervention integrity as the extent to which essential intervention components are delivered in a comprehensive and consistent manner by an interventionist trained to deliver the intervention (Sanetti & Kratochwill, Citation2009, p. 448). This requires, at a minimum, an operational definition of the essential components of the intervention and a thorough assessment of its implementation in context (e.g., Gearing et al., Citation2011; Nelson, Cordray, Hulleman, Darrow, & Sommer, Citation2012).

Unfortunately, a rigorous assessment of the quality of implementation is frequently omitted in studies aiming to demonstrate that a particular intervention rests on solid evidence of its effectiveness. Gresham, Gansle, Noell, and Cohen (Citation1993) examined intervention integrity in child studies published in the Journal of Applied Behavior Analysis (JABA) between 1980 and 1990. These authors considered two major aspects of intervention integrity. One aspect referred to the operational definition of the intervention; that is, the study should provide enough information for adequate replication. The other aspect concerned a proper assessment of intervention integrity: studies should report some form of monitoring of intervention integrity (either a numerical index, the use of an intervention protocol, or simply a statement that integrity was verified). Of the 158 studies included in the assessment, only 34.2% reported an operational definition of the intervention under consideration, and only 15.8% reported some numerical index of intervention integrity. In a follow-up study, this group of authors examined the integrity of school-based interventions, delivered to individuals from 0 to 18 years, reported in studies published in JABA between 1991 and 2005 (McIntyre, Gresham, DiGennaro, & Reed, Citation2007). Nearly all of the 142 studies coded (95%) provided a definition of the independent variable, but only 30% of the studies provided intervention integrity data. Furthermore, the proportion of studies reporting intervention integrity data remained remarkably stable over the period under consideration.

The issues surrounding the integrity of school-based interventions received greater attention during the last decade (e.g., Bruhn, Hirsch, & Loyd, Citation2015; Fiske, Citation2008; Forman et al., Citation2013; Harn, Parisi, & Stoolmiller, Citation2013; Keller-Margulis, Citation2012; Kelly & Perkins, Citation2012; Maynard, Peters, Vaughn, & Sarteschi, Citation2013; McKenna, Flower, & Ciullo, Citation2014; Nelson et al., Citation2012; Owens et al., Citation2014; Sanetti & Kratochwill, Citation2014; Schulte, Easton, & Parker, Citation2009; Snyder, Hemmeter, Fox, Bishop, & Miller, Citation2013). The study reported by Sanetti, Gritter, and Dobey (Citation2011) provides a representative example of the increasing trend in reporting intervention integrity data. These authors examined all studies published in four school psychology journals between 1995 and 2008; a total of 223 studies met the inclusion criteria. Their data revealed that about one third of the studies reported an operational definition of the intervention under study (31.8%), whereas another third referred to another source providing information about the intervention (38.6%). The remainder of the studies did not provide an adequate definition of the intervention. Quantitative information regarding intervention integrity was lacking for about one third of the studies (37.2%), but there was a rise in reporting intervention integrity data from 39.7% for studies published during the 1995–1999 period to 59.1% for studies published during the 2004–2008 period.

More recently, the same group of authors performed a similar analysis of studies published in a single journal, School Psychology International, between 1995 and 2010 (Sanetti et al., Citation2014). A total of 26 studies were coded, only 10 of which provided an operational definition of intervention integrity, and only 2 studies reported quantitative intervention integrity data. These numbers are disturbingly low and considerably lower than the numbers reported in the authors' previous studies. The authors noted that they considered studies published in a journal with a mission to publish research from around the world and that, consequently, there might be a misalignment between the U.S.-centric coding criteria used in their study and the studies they reviewed (Sanetti et al., Citation2014, p. 380). Moreover, the limited number of studies included in their review precludes strong conclusions.

The aim of the current study was to examine intervention integrity data reported in studies of interventions targeting social-emotional behaviors in a school context published by Dutch and Flemish authors. The focus on studies from the Low Countries was intended as a case study of the hypothesis put forward by Sanetti et al. (Citation2014) that intervention integrity criteria might differ across academic communities. In the Netherlands, there is an increasing awareness that intervention integrity is of vital importance. The Netherlands Youth Institute requires a demonstration of intervention integrity before certifying an intervention as effective.Footnote1 Such a demonstration requires that interventions are (a) defined in sufficient detail, (b) based on solid evidence, and (c) effective in the designated context. In line with international practice, the intervention should be implemented as intended. In the current study, we coded intervention studies using the criteria, reported in the Sanetti et al. (Citation2014) study, concerned with the operational definition and implementation of the intervention. In view of the quality demands imposed by the Netherlands Youth Institute, we anticipated that the number of studies satisfying these intervention integrity criteria would be considerably larger than in the Sanetti et al. (Citation2014) study, coming close to the numbers reported by the same authors in School Psychology Review (Sanetti et al., Citation2011).

We focused on interventions targeting social-emotional problem behaviors, as these behaviors may greatly impede the child's learning (Zins, Elias, Greenberg & Weissberg, Citation2000). Social-emotional problem behaviors include externalizing behaviors (e.g., ADHD), aggressive behaviors directed toward peers (e.g., bullying), deficient social skills (e.g., antagonism), anxiety-related behaviors (e.g., failure anxiety, avoidance of people or places) and low self-concept. Efforts promoting social-emotional competencies may greatly enhance the child's school success (e.g., Durlak, Weissberg, Dymnicki, Taylor, & Schellinger, Citation2011; Lane & Pierson, Citation2001).

In coding the studies, we included several variables that may influence outcome. We anticipated that the number of studies presenting adequate information on intervention integrity would increase during the period under review (2000–2015), given the growing awareness of the importance of intervention integrity. In addition, we coded studies in terms of publication source: English-language versus Dutch-language journals. In view of the low number of studies paying attention to integrity issues published in School Psychology International (Sanetti et al., Citation2014), relative to the number of studies published in School Psychology Review (Sanetti et al., Citation2011), we expected that attention to intervention integrity would be more pronounced in studies published in internationally oriented English-language journals than in journals oriented toward a Dutch-language readership. Finally, we asked whether student age, intervention location, intervention agent, and dependent variable might influence the information provided on intervention integrity.

Method

In broad outline, the current study adopted the research strategy presented in Sanetti et al. (Citation2014). A total of 102 studies from 37 different Dutch- and English-language journals were coded (n = 8 and n = 29 journals, respectively). The list of coded studies is available in the supplementary materials. To be included, studies had to meet five criteria. First, the study had to be published between 2000 and 2015. Second, as we were interested in interventions in the school context, the students included in the study had to be younger than 18 years (i.e., the age limit of obligatory education in the Low Countries). Third, studies had to be experimental or quasi-experimental: an independent variable (i.e., the intervention) had to be manipulated to induce a change in the dependent variable (i.e., the target behavior). Furthermore, the study design, whether between-group, within-group, or single-case, had to meet internationally accepted standards. Fourth, studies had to be conducted in the Low Countries (i.e., the Netherlands and Flanders). Finally, studies had to be concerned with interventions targeting social-emotional behaviors in the school. It should be noted, however, that this criterion does not imply that the intervention was actually delivered in the school; it requires the intervention to be concerned with social-emotional behaviors manifested on the school premises.

Keywords for identifying pertinent studies were intervention, training, protocol, prevention, effectiveness, effect, school, education, pupil, student, social relations, interaction, bullying, emotional, emotion, self-confidence, self-concept, and anxiety, or their Dutch/Flemish equivalents for searching Dutch/Flemish scientific journals. The keywords were used in literature searches in Web of Knowledge and PsycINFO.

Coding

The primary focus of the current study was on (a) intervention integrity; that is, how interventions aimed at changing social-emotional behaviors in the school were operationally defined and how their implementation was measured; and (b) whether the intervention integrity information provided was related to publication year, publication source, intervention agent, target behavior (dependent variable), and student age. Coding procedures for each of these variables are described below.

Operational definition

For each study, it was determined whether the intervention had been operationally defined. This implied answering the question of whether the intervention could be replicated with the information provided (Sanetti et al., Citation2011). Studies were coded as "yes," "no," or "partial." Studies coded as "partial" did not provide sufficient information for replication but included a reference to a source (e.g., handbook, technical report, other study) reported to provide such information. The adequacy of the referenced source was not verified, however.

Intervention implementation

Studies were coded “yes,” “no,” or “partial.” Studies were coded “yes” when they specified a method for implementation measurement (e.g., observations, checklists, self-monitoring) and reported quantitative data (McKenna et al., Citation2014). Studies were coded “no” if there was no mention of intervention implementation assessment (Sanetti et al., Citation2011). Studies were coded “partial” if they indicated that intervention implementation was assessed but failed to provide quantitative data.

Publication year

The publication year of each study was coded (i.e., 2000–2015).

Publication source

All studies were coded with regard to publication source: English-language journals versus Dutch-language journals.

Intervention agent

The individual responsible for implementing the intervention was coded into one of the following categories: (a) teacher, (b) professional (e.g., psychologist), (c) paraprofessional (e.g., social worker), (d) researcher, or (e) other (e.g., psychology student). Teachers delivering the intervention were coded "teacher" when they had not received dedicated training and were coded as professional interventionists when they had received dedicated training for delivering the intervention. School psychologists or psychotherapists providing the training were likewise coded "professional." A member of the research team involved in collecting data for the intervention study was coded "researcher." Individuals who did not fit into any of the preceding categories (e.g., parents, fellow students) were coded "other."

Intervention location

The location of the intervention was coded into one of the following seven categories, corresponding to the Dutch/Flemish school system: (a) preschool, (b) regular primary school, (c) special-education primary school, (d) secondary school, (e) special-education secondary school, (f) other (therapeutic institute, university setting), and (g) multiple (more than one of these settings).

Student age

The age of the participants was coded into one of the following categories, corresponding to the Dutch/Flemish school system: (a) kindergarten age (4–6 years), (b) primary school age (6–12 years), (c) secondary school age (12–18 years), and (d) multiple (4–18 years).

Dependent variable

All reported dependent variables pertained to the social-emotional domain. They were coded into one of six categories: (a) disruptive behaviors lacking impulse control (e.g., ADHD); (b) bullying peers and classmates in school or on the school playground; (c) anxiety-related behaviors (e.g., irrational fears, failure anxiety, and excessive avoidance of people and places); (d) behaviors characterized by deficient social skills (e.g., antagonism and unwillingness to share); and (e) other social-emotional problem behaviors (e.g., low self-esteem and low perceived competence). Studies fitting into more than one category received the code (f) "multiple."
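For readers who want a concrete picture of the coding scheme described in the preceding sections, the following sketch shows one possible way to represent a coded study as a record. It is purely illustrative: the field names and category labels paraphrase the text and are not taken from the authors' actual coding manual.

```python
# Illustrative sketch only: a possible record structure for one coded study.
# Field names and category labels are paraphrased from the text, not the authors' manual.
from dataclasses import dataclass
from typing import Literal

ThreeLevel = Literal["yes", "partial", "no"]

@dataclass
class CodedStudy:
    study_id: str
    publication_year: int                                   # 2000-2015
    publication_source: Literal["english", "dutch"]         # journal language
    operational_definition: ThreeLevel                       # replicable description of the intervention
    implementation_assessment: ThreeLevel                    # quantitative ("yes") vs. qualitative ("partial") vs. none
    intervention_agent: Literal["teacher", "professional", "paraprofessional", "researcher", "other"]
    intervention_location: str                               # e.g., "regular primary school", "other", "multiple"
    student_age: Literal["kindergarten", "primary", "secondary", "multiple"]
    dependent_variable: str                                  # e.g., "behavior problems", "anxiety", "multiple"

# Hypothetical example record (not one of the 102 reviewed studies).
example = CodedStudy(
    study_id="S001", publication_year=2010, publication_source="english",
    operational_definition="partial", implementation_assessment="no",
    intervention_agent="professional", intervention_location="regular primary school",
    student_age="primary", dependent_variable="anxiety",
)
```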

Rater training and interrater reliability

The first author, a faculty member in school psychology, trained two advanced graduate students, the second and third authors, in the use of the coding criteria. The training consisted of three steps. First, the students examined the pertinent literature on intervention integrity in psychology provided by the first author and studied the coding manual prepared for the current study. The second step included independent coding by the students of a subsample of the selected studies (n = 20). Interrater agreement was 90% across the double-coded studies. The third and final step included a discussion of disagreement in ratings, clarification of confusing codes, and a revision of the coding manual. Double-coding was maintained after training for the complete set of studies (n = 102).
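As an aside, interrater agreement of the kind reported here is commonly computed as the proportion of double-coded studies on which the two raters assigned identical codes. A minimal sketch of that calculation, using made-up codes rather than the study's actual ratings, is given below.

```python
# Minimal sketch (hypothetical data): simple percentage agreement between two raters.
def percent_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters assigned the same code."""
    assert len(rater_a) == len(rater_b), "Both raters must code the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Illustrative codes for a small double-coded subsample (not the study's data).
rater_a = ["yes", "partial", "no", "yes", "no"]
rater_b = ["yes", "partial", "no", "partial", "no"]
print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 80% for these made-up codes
```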

Data analysis

Chi-square analyses were performed to statistically assess differences in the proportion of studies reporting an operational definition of the intervention or providing qualitative or quantitative data on intervention implementation in relation to publication year of the selected studies, intervention agent, location of the intervention, student age, or target behavior (dependent variable). The level of significance was set at .05 for all analyses.
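For illustration, a chi-square test of independence of this kind can be computed from a contingency table crossing a study characteristic with the three integrity-reporting codes ("yes," "partial," "no"). The sketch below uses scipy and hypothetical counts; it does not reproduce the study's data or reported statistics.

```python
# Minimal sketch (hypothetical counts): chi-square test of independence for
# integrity-reporting categories across a two-level study characteristic.
from scipy.stats import chi2_contingency

# Rows: publication source (English-language, Dutch-language journals)
# Columns: implementation reporting coded "yes", "partial", "no"
observed = [
    [15, 9, 23],  # hypothetical counts for English-language journals
    [5, 2, 48],   # hypothetical counts for Dutch-language journals
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # significance judged against alpha = .05
```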

Results

The results are presented in two sections. First, we summarize the data obtained from studies that met our inclusion criteria pertaining to the operational definition of the independent variable and intervention implementation. Second, we relate the intervention integrity data to the study characteristics: publication year, publication source, intervention agent, location of intervention, student age, and dependent variable.

Intervention integrity information

The proportion of studies presenting intervention integrity information is presented in Figure 1. It can be seen that the majority of the studies (82.3%) presented an operational definition of the intervention or referred to the source of such information. In contrast, only a minority of the studies presented quantitative or qualitative information on the implementation of the intervention (32.4%). Finally, only eight studies provided an operational definition and reported quantitative data on intervention integrity (7.8%).

Figure 1 Proportion of studies reporting information on the operational definition or the implementation of intervention. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Study characteristics

Publication year

Studies meeting our inclusion criteria were unequally distributed across publication years. We distinguished between an early and a late publication period (2000–2007 and 2008–2015, respectively). The majority of studies were published during the latter period (n = 72). The percentages of studies presenting information on the operational definition of the intervention across publication period are presented in the upper panel of Figure 2, and the percentages relating to intervention implementation are presented in the lower panel. It can be seen that there are only marginal differences between studies published during the early versus late period, ps > .05.

Figure 2 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of publication year. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Publication source

In Figure 3, the proportion of studies presenting intervention integrity information is presented separately for studies published in English-language and Dutch-language journals (n = 47 and n = 55, respectively). In the upper panel, it can be seen that the proportion of studies providing information on the operational definition of the intervention does not vary with publication source (p > .05). In contrast, the lower panel indicates that the proportion of studies reporting quantitative or qualitative data on the implementation of the intervention is substantially lower for studies reported in Dutch-language relative to English-language journals, χ2(2) = 17.06, p < .01.

Figure 3 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of publication source. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Intervention agent

The great majority of studies involved a professional intervention agent (n = 73). There was also a considerable number of studies reporting that a teacher delivered the intervention (n = 24). Studies concerned with other intervention agents (paraprofessional, n = 0; researcher, n = 4; or "other," n = 1) were not included in the data presented in Figure 4. In the upper panel of the figure, it can be seen that the proportion of studies including an operational definition or a referral to its source does not vary with the agent delivering the intervention. A similar pattern can be observed for the proportion of studies providing intervention implementation information (lower panel of Figure 4). This proportion is somewhat higher for studies with a teacher as intervention agent relative to studies with a professional intervention agent, but this difference was not significant (all ps > .05).

Figure 4 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of intervention agent. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Intervention location

We used seven categories for coding the location of interventions. The coding resulted in the following numbers of studies: preschool (n = 1), regular primary school (n = 31), special-education primary school (n = 3), regular secondary school (n = 10), special-education secondary school (n = 4), other locations (n = 46), and multiple locations (n = 7). We condensed these categories to interventions delivered in the school (n = 48) versus interventions delivered at nonschool locations (n = 54). In the upper panel of Figure 5, it can be seen that the proportion of studies providing information on the operational definition of the intervention differed between locations. The proportion of school-based intervention studies including an operational definition or a referral to its source was significantly higher than the proportion of studies reporting on interventions delivered elsewhere (e.g., university, therapeutic institute, preschool), χ2(2) = 9.27, p < .01. Location of intervention did not discriminate between the proportions of studies reporting on intervention implementation assessment and those failing to provide such information (lower panel of Figure 5; p > .05).

Figure 5 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of intervention location. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Student age

There were only 4 studies in which the intervention was delivered to kindergarten children. These studies were not included in the analysis; thus, the age categories were reduced from 4 to 3: the 6–12 age group (n = 49), the 12–18 age group (n = 22), and multiple age groups (n = 27). The data are plotted in Figure 6. It can be seen that the proportion of studies providing information on the operational definition (upper panel) or implementation (lower panel) of the intervention is not systematically related to student age, ps > .05.

Figure 6 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of student age. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Dependent variable

We considered the following coding categories: behavior problems (n = 41), anxiety (n = 21), and multiple (i.e., studies that fitted into multiple coding categories; n = 21). The remaining coding categories were lumped together as "other" (n = 19). In Figure 7, it can be seen that the proportion of studies providing information on the operational definition (upper panel) or intervention implementation (lower panel) is not systematically related to the dependent variable, ps > .05.

Figure 7 Proportion of studies providing information on the operational definition of the intervention (left panel) or implementation of the intervention (right panel) as a function of dependent variable. “Yes” refers to studies providing information, “no” refers to studies that do not provide information, and “partial” refers to studies referring to an external source of the operational definition or studies providing qualitative rather than quantitative information on the implementation of intervention. See text for further details.

Discussion

The primary goal of the current study was to assess intervention integrity in the Low Countries. More specifically, the focus was on studies examining interventions targeting social-emotional behaviors of students in schools. The major question was whether these studies reported an operational definition of the intervention, or a reference to a source from which such a definition could be obtained, and whether they reported quantitative or qualitative data on the assessment of intervention integrity. A set of more specific questions asked whether the attention paid to intervention integrity varied across publication years or between studies published in Dutch- versus English-language journals. Secondary questions related to the intervention agent, the location of the intervention, the age of the students participating in the study, and the specific behaviors targeted by the intervention.

A well-defined intervention is a prerequisite for the dissemination of effective interventions. The current review indicated that 43.1% of the studies provided an operational definition of the independent variable. This proportion is considerably higher than the 7.7% reported by Sanetti et al. (Citation2014) in School Psychology International. It should be noted, however, that the extremely low proportion reported by Sanetti et al. (Citation2014) refers to only 2 studies, whereas the proportion observed in our study refers to 44 studies. The current proportion of studies including an operational definition is also somewhat higher than the 31.8% (71 studies) reported by Sanetti et al. (Citation2011) in their review of studies in the school psychology literature (1995–2008) and the 34.2% (54 studies) reported previously by Gresham et al. (Citation1993), who examined treatment integrity of school-based interventions with children published in the Journal of Applied Behavior Analysis (JABA) from 1980 to 1990. McIntyre et al. (Citation2007) examined treatment integrity of studies published in the same journal from 1991 to 2005 and reported that 95% (144 studies) of the studies included a description of the independent variable, a tremendous increase in the proportion of studies including an operational definition. Unfortunately, this rapid increase seems rather specific to studies published in JABA. The growing awareness of the necessity to provide adequate information on the independent variable is less pronounced in other journals concerned with school-based interventions with children.

It could be argued that studies should provide information on the independent variable either by including an operational definition or by referring to an external source. The current review showed that 39.2% of the studies referred to an external source providing information on the independent variable. This percentage is similar to the 38.6% reported previously by Sanetti et al. (Citation2011) for studies in the school psychology literature. Together with the proportion of studies including an operational definition, the current analysis indicates that the great majority of studies (82.3%) provided information on the independent variable. It should be noted, however, that we did not check the external sources of information regarding the independent variable. Thus, we do not know whether the referenced description of the intervention is adequate or readily accessible. Moreover, a substantial proportion of studies (17.7%) failed to provide information on the independent variable. The lack of this information presents a serious threat to maintaining the quality of an intervention as intended by its developers (Dusenbury, Branningan, Hansen, Walsh, & Falco, Citation2004).

The current analysis revealed that the majority of studies (67.6%) reported neither quantitative nor qualitative data regarding intervention implementation. This percentage is much higher than the 37.2% reported by Sanetti et al. (Citation2011) for studies in the school psychology literature but lower than the 80.8% reported by Sanetti et al. (Citation2014) for studies published in School Psychology International. The current proportion comes close to the 61% reported by McIntyre et al. (Citation2007) for studies published in JABA. Only 26% of the current studies published quantitative data on the implementation of the intervention. The reasons for not reporting intervention implementation data are not entirely clear. McIntyre et al. (Citation2007) suggested that low rates may be due to editorial policies; that is, space limitations in journals may prevent the reporting of intervention implementation data. Although this might have been the case in the recent past, most journals now offer the possibility of posting supplementary material. An alternative explanation for the apparent lack of intervention implementation data is that authors themselves do not value the importance of such data, in particular when the intervention had the desired effect (Dusenbury et al., Citation2004).

The current analysis yielded few significant differences in operational definition and intervention implementation assessment across study characteristics. In view of the growing awareness of the importance of intervention integrity (e.g., Sanetti & Kratochwill, Citation2014), we anticipated an increase in the proportion of studies including an operational definition and providing a quantitative or qualitative assessment of intervention implementation. In contrast to expectations, however, the results failed to show such an increase for studies published during the 2008–2015 period relative to the 2000–2007 period. Previously, Sanetti et al. (Citation2011) reported a rising trend in the proportion of studies reporting intervention integrity data for studies published in the school psychology literature from 1995 to 2008. Such a trend was not reported for studies published in School Psychology International (Sanetti et al., Citation2014). It should be noted, however, that the total number of studies coded in the latter report was limited: only 26 studies. The current absence of a rising trend in reporting intervention integrity data is discouraging given the increasing focus on accountability and evidence-based practice by agencies in the Low Countries (e.g., What Works?, Netherlands Youth Institute) and elsewhere (e.g., What Works Clearinghouse, Institute of Education Sciences).

In discussing the pattern of findings obtained for studies published in School Psychology International, Sanetti et al. (Citation2014) conjectured that the relatively low proportion of studies including an operational definition or providing intervention implementation assessment data might be due to the application of U.S.-centric coding criteria to studies adhering to research foci or methodologies valued in numerous countries (cf. Sanetti et al., Citation2014, p. 380). An alternative interpretation would be that editorial policies regarding intervention integrity vary between journals (e.g., between School Psychology International and other journals in the school psychology domain). We asked whether the proportion of studies providing intervention integrity information would differ between English- and Dutch-language journals. The results showed that the proportion of studies failing to report intervention implementation data was significantly higher for studies published in Dutch-language than in English-language journals (87.5% vs. 50%). This contrast should not be interpreted as a difference in the awareness of the need to consider intervention integrity, as the proportion of studies including an operational definition or a referral to its source did not differ by publication source. Possibly, the standards for reporting intervention integrity information are more lenient for Dutch- than for English-language journals.

The proportion of studies including an operational definition or a referral to its source did not vary with intervention agent but was somewhat higher for studies concerned with school-based interventions than for studies reporting on interventions applied in a different setting. Intervention agent and location did not affect the proportion of studies providing information on intervention implementation assessment. Finally, the current analysis showed that the proportion of studies providing information on the operational definition or intervention implementation did not vary systematically with student age or dependent variable. This pattern of results is similar to the one reported by Sanetti et al. (Citation2011).

Conclusion

The major purpose of the present study was to examine intervention studies conducted in the Low Countries targeting social-emotional behaviors manifested by students in school. The overall pattern of results was similar to the findings reported previously by Sanetti et al. (Citation2011). That is, (a) the majority of studies included an operational definition or a reference to its source, (b) only a minority of studies presented quantitative or qualitative data on intervention implementation, and (c) the overall pattern was basically similar across study characteristics. The correspondence between the current data and the results reported by Sanetti et al. (Citation2011) is important for at least two reasons. First, the current review focused on intervention studies targeting social-emotional behaviors, whereas the review of Sanetti et al. (Citation2011) included intervention studies concerned with a wider range of behaviors, including academic and daily living skills. In this regard, the consistency across studies suggests that the dependent variable is not a major source of variability in reporting on intervention integrity.

Second, Sanetti et al. (Citation2014) conjectured that their application of U.S.-centric coding criteria to studies published in School Psychology International might have contributed to the alarmingly low proportion of studies reporting intervention monitoring data (19.2%). The results of the current study showed that 51% of the studies reported intervention monitoring data, which comes close to the 63.2% reported by Sanetti et al. (Citation2011) for studies in the school psychology literature. The current case study thus suggests that the Sanetti et al. (Citation2011, Citation2014) coding criteria mesh quite well with the research methodology of school-based intervention studies performed in the Low Countries. This conclusion should be qualified, however, in view of the significant difference between studies reported in English- versus Dutch-language journals. The proportion of studies reporting intervention integrity data was considerably higher for the former than for the latter (50% vs. 12.5%). The proportion of studies published in Dutch-language journals reporting intervention monitoring data is even lower than the proportion reported by Sanetti et al. (Citation2014) for studies published in School Psychology International. Thus, the current findings might suggest a misalignment between the Sanetti et al. (Citation2011, Citation2014) coding criteria and the research methodology adopted by at least some intervention researchers in the Low Countries, namely those who published in Dutch-language journals. Alternatively, the editorial standards regarding intervention integrity might be lower in Dutch- relative to English-language journals.

In line with the results reported by Sanetti et al. (Citation2011), and consistent with the growing awareness of the importance of intervention integrity, we anticipated that the proportion of studies reporting intervention monitoring data would steadily increase with publication date. In contrast to expectations, however, the current results showed that the proportion of studies reporting intervention monitoring data was similar for the early and later publication periods (2000–2007 vs. 2008–2015). This observation is disappointing, particularly in light of a meta-analysis by Durlak and DuPre (2008) indicating that higher levels of intervention integrity are associated with better outcomes. Indeed, without an adequate assessment of intervention integrity, the effectiveness of an intervention cannot be properly evaluated. The take-home message of the current case study is that researchers in the Low Countries should invest greater effort in evaluating intervention implementation, while journals should adopt more stringent integrity criteria for evaluating school-based intervention studies. The former issue (evaluating intervention implementation) has been addressed by Domitrovich et al. (Citation2008), who present a conceptual framework for maximizing the implementation quality of evidence-based (preventive) interventions in schools. These authors present a multilevel model (macro, school, and individual) to bridge the gap between research and school practice by identifying multiple factors influencing intervention implementation in school settings. The latter issue (evaluating studies concerned with intervention implementation) can be addressed by developing internationally accepted criteria for intervention integrity. Some journals solicit experts to write papers presenting guidelines for procedures relating to the collection, analysis, and interpretation of data (e.g., Berntson et al., Citation1997). The school psychology literature appears to be in need of a guideline paper presenting criteria for the appropriate analysis of intervention integrity. Journals in this domain could then use these criteria for the evaluation of intervention implementation studies.

Additional information

Notes on contributors

Margot Taal

Margot Taal is a retired associate professor in the Department of Developmental Psychology at the University of Amsterdam. Many of her publications concern the field of school psychology. At present, she is an associate professor at the Postgraduate Training Program for School Psychologists at the Regional Institute for Continuing Education and Training (RINO; Institute for Postgraduate Studies), Amsterdam, The Netherlands.

Elles Ekels

Elles Ekels, MS, graduated from the University of Amsterdam in 2013, obtaining her master's degree in Healthcare Psychology with distinction (cum laude). During her studies she specialized in clinical developmental psychology and school psychology. Currently, she works in developmental and school psychology, primarily in the field of dyslexia. Her main activities include diagnosing and treating children of primary school age.

Cindel van der Valk

Cindel van der Valk, MS, graduated from the University of Amsterdam. She studied clinical developmental psychology, with a specialization in school psychology. Since she graduated, she has been working in education, treating and supporting children with learning difficulties. Furthermore, she is studying to obtain a Bachelor of Education.

Maurits van der Molen

Maurits W. van der Molen is professor of psychology at the University of Amsterdam. He has been chair of the Developmental Psychology program and coordinator of the School Psychology track, which has been accredited by the International School Psychology Association (ISPA), and is founding director of the Cognitive Science Centre Amsterdam (now Amsterdam Brain and Cognition) and of EPOS, the graduate school in experimental psychology of a consortium of Dutch universities. He has been managing editor of Acta Psychologica and associate editor of Psychophysiology. He has published widely on the development of cognitive control and individual differences. He reviews for a wide range of international journals and funding agencies.

Notes

1. The Netherlands Youth Institute (Nederlands JeugdInstituut, NJI, http://www.nji.nl) is the Dutch equivalent of the U.S. What Works Clearinghouse (WWC, http://ies.ed.gov/ncee/wwc), which identifies studies providing credible and reliable evidence of the effectiveness of a given intervention.

References

  • Berntson, G. G., Bigger, J. T., Eckberg, D. L., Grossman, P., Kaufmann, P. G., Malik, M., … van der Molen, M. W. (1997). Heart rate variability: Origins, methods, and interpretive caveats. Psychophysiology, 31, 623–648.
  • Bruhn, A. L., Hirsch, S. E., & Loyd, J. W. (2015). Treatment integrity in school-wide programs: A review of the literature (1993–2012). Journal of Primary Prevention, 36, 335–349.
  • Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23–45.
  • Domitrovich, C. E., Bradshaw, C. P., Poduska, J. M., Hoagwood, K., Buckley, J. A., Olin, S., … Ialongo, N. S. (2008). Maximizing the implementation quality of evidence-based interventions in schools: A conceptual framework. Advances in School Mental Health Promotion, 1(3), 6–28.
  • Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students' social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405–432.
  • Dusenbury, L., Branningan, R., Hansen, W. B., Walsh, J., & Falco, M. (2004). Quality of implementation: Developing measures crucial to understanding the diffusion of preventive interventions. Health Education Research, 20(3), 308–313.
  • Fiske, K. E. (2008). Treatment integrity of school-based behavior analytic interventions: A review of the research. Behavior Analysis in Practice, 1(2), 19–25.
  • Forman, S. G., Shapiro, E. S., Codding, R. S., Gonzales, J. E., Reddy, L. A., Rosenfield, S. A., … Stoiber, K. C. (2013). Implementation science and school psychology. School Psychology Quarterly, 28(2), 77–100.
  • Gearing, R. E., El-Bassel, N., Ghesquiere, A., Baldwin, S., Gillies, J., & Ngeow, E. (2011). Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clinical Psychology Review, 31(1), 79–88.
  • Gresham, F. M., Gansle, K. A., Noell, G. H., & Cohen, S. (1993). Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychology Review, 22(2), 254–272.
  • Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79(2), 181–193.
  • Keller-Margulis, M. A. (2012). Fidelity of implementation framework: A critical need for response to intervention models. Psychology in the Schools, 49(4), 342–352.
  • Kelly, B. & Perkins, D. F. (2012). Handbook of implementation science for psychology in education. Cambridge, NY: Cambridge University Press.
  • Kratochwill, T. R. (2004). Evidence-based practice: Promoting evidence-based interventions in school psychology. School Psychology Review, 33(1), 34–49.
  • Lane, K. L., & Pierson, M. (2001). Designing effective interventions for children at-risk for antisocial behavior: An integrated model of components necessary for making valid inferences. Psychology in the Schools, 38(4), 365–379.
  • Maynard, B. R., Peters, K. E., Vaughn, M. G., & Sarteschi, C. M. (2013). Fidelity in after-school program intervention research: A systematic review. Research on Social Work Practice, 23(6), 613–623.
  • McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment integrity of school-based interventions with children in the Journal of Applied Behavior Analysis 1991–2005. Journal of Applied Behavior Analysis, 40(4), 659–672.
  • McKenna, J. W., Flower, A., & Ciullo, S. (2014). Measuring fidelity to improve intervention effectiveness. Intervention in School and Clinic, doi:10.1177/1053451214532348.
  • Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational behavioral interventions. Journal of Behavioral Health Services & Research, 39(4), 374–396.
  • Owens, J. S., Lyon, A. R., Brandt, N. E., Warner, C. M., Nadeem, E., Spiel, C., & Wagner, M. (2014). Implementation science in school mental health: Key constructs in a developing research agenda. School Mental Health, 6(2), 99–111.
  • Sanetti, L. M. H., Dobey, L. M., & Gallucci, J. (2014). Treatment integrity of interventions with children in School Psychology International from 1995–2010. School Psychology International, 35(4), 370–383.
  • Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40(1), 72–84.
  • Sanetti, L. M. H., & Kratochwill, T. R. (2009). Treatment integrity assessment in the schools: An evaluation of the Treatment Integrity Planning Protocol. School Psychology Quarterly, 24(1), 24–35.
  • Sanetti, L. M. H., & Kratochwill, T. R. (2014). Treatment integrity: A foundation for evidence-based practice in applied psychology. Washington, DC: American Psychological Association.
  • Schulte, A. C., Easton, J. E., & Parker, J. (2009). Advances in treatment integrity research: Multidisciplinary perspectives on the conceptualization, measurement, and enhancement of treatment integrity. School Psychology Review, 38(4), 460–475.
  • Snyder, P. A., Hemmeter, M. L., Fox, L., Bishop, C., & Miller, M. D. (2013). Developing and gathering psychometric evidence for a fidelity instrument: The teaching pyramid observation tool—pilot version. Journal of Early Intervention, 35(2), 150–172.
  • Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49(2), 156–167.
  • Zins, J. E., Elias, M. J., Greenberg, M. T., & Weissberg, R. P. (2000). Promoting social and emotional competence in children. In K. M. Minke & G. C. Bear (Eds.), Preventing school problems—promoting school success: Strategies and programs that work (pp. 71–99). Washington, DC: National Association of School Psychologists.