Abstract
In this paper, we report results from a multisite, student-level randomized controlled trial that examined the impact of the Building Assets, Reducing Risks (BARR) model on ninth-grade students. The BARR model is a comprehensive, strength-based approach that uses eight interlocking strategies to build intentional staff-to-staff, staff-to-student, and student-to-student relationships. This student-level RCT included approximately 4,000 ninth-grade students randomly assigned to treatment or control conditions in eleven schools. We examined six constructs of student experience: expectations and rigor, engagement, supportive relationships, social and emotional learning, sense of belonging, and grit. We also examined five measures of academic success: course failure, core credits earned, grade point average, and Northwest Evaluation Association (NWEA) Measures of Academic Progress (MAP) English Language Arts and Mathematics test scores. Findings suggest that BARR significantly increased core credits earned and mathematics achievement as measured by the NWEA MAP. Relative to the control group, BARR also improved several aspects of students' experiences in school, including an increased sense of supportive relationships and of teacher expectations and rigor.
Notes
1 Prior to randomization, schools were offered the option to exclude certain students from random assignment and from the study. In general, this exclusion was used only for students classified as special education participants who received services in self-contained classrooms.
2 Business as usual for control group teachers included working together as a fixed group of core subject teachers, similar to the BARR treatment group teachers. This is a consequence of the within-school random assignment design, which required two (or more) distinct blocks of students and teachers. However, the control group teachers did not receive special support or guidance on whether or how to collaborate within their blocks, and did not have access to a designated BARR coordinator, BARR training and coaching, or the I-Time curriculum.
3 We discovered these differences in response to a request by JREE reviewers of an earlier draft of this paper. We had not previously considered using teacher characteristics collected at follow-up to examine or control for pre-existing teacher background differences. We greatly appreciate the reviewers’ requests, which prompted us to re-examine these data.
4 Specifically, 63 percent of treatment group teachers had an advanced credential, compared to 72 percent of control group teachers. Similarly, 33 percent of treatment group teachers had more than 10 years of experience, compared to 55 percent of control group teachers.
5 Because of the way in which the teacher-level background variables were collected, we were unable to link data on individual teachers to individual student records. We therefore created teacher background variables at the school/research group level and controlled for the resulting aggregates at the student level, which also was the level of randomization.
6 For a more extensive discussion of program implementation and fidelity of implementation in this evaluation, please refer to Bos et al. (2019).
7 Most attrition was the result of students leaving the study school and moving to another high school within or outside the study districts. Attrition on the NWEA outcomes was higher because (a) those assessments were administered at the end of the school year, by which time more students would have left the school, and (b) some students did not show up to take the NWEA test. Please see Bos et al. (2019) for an extensive discussion of NWEA attrition in this study.
8 When attrition is reasonably assumed to be unrelated to the intervention, WWC reviewers use the optimistic boundaries (WWC, 2020). We believe this is a reasonable assumption for this study because the attrition was caused primarily by students not being available to take an assessment or complete the survey, rather than refusing to participate. Similar assumptions are often made in other studies involving high school students (WWC, 2020).
9 Levels of attrition for the survey outcomes were similar to those for the NWEA outcomes, and for similar reasons. By design, the surveys were administered late in the school year, by which time more of the students who left the school had already done so. The differential attrition reflects less interest in completing the survey among control group students than among BARR students. Thus, the control students who did respond may have been more motivated than the BARR respondents.
10 These comparisons of baseline characteristics only included students for whom these baseline data were not missing. We did not impute missing baseline characteristics for these equivalence analyses.
11 We realize that the overall number of impact estimates we include in this paper presents a risk of finding estimates that appear statistically significant only by chance. We did not apply "multiple comparison" adjustments to address this potential problem (e.g., Benjamini & Hochberg, 1995), but we consider all subgroup estimates to be exploratory in nature and have refrained from highlighting isolated nonzero impact estimates that are inconsistent with the overall pattern of effects within a given outcome dimension.