Abstract
The purpose of this study was to employ the meta-analytic method of Reliability Generalization to investigate the magnitude and variability of reliability estimates obtained across studies using Curriculum-Based Measurement reading aloud. Twenty-eight studies that met the inclusion criteria were used to calculate the overall mean reliability of Curriculum-Based Measurement reading aloud. The estimated mean alternate-form and test-retest reliability coefficients were .89 and .95, respectively. However, these high reliability estimates were difficult to generalize to scores on Curriculum-Based Measurement reading aloud because of significant variability between and within studies. Grade level, length of the testing interval, the universal screening versus progress monitoring distinction, and the proportion of students receiving special education services were significant moderator variables contributing to the variability found in the alternate-form reliability of Curriculum-Based Measurement reading aloud.
Note. *p < .05. **p < .01. ***p < .001.