Abstract
A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods for testing indirect effects in a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, in which the indirect effect is a fixed effect and a group-implemented treatment is hypothesized to affect a person-level outcome via a person-level mediator. Two methods—the bias-corrected parametric percentile bootstrap and the empirical-M test—had the best overall performance. Methods designed for nonnormal score distributions exhibited elevated Type I error rates and poorer confidence interval coverage under some conditions. Although preliminary, the findings suggest that new mediation analysis methods may provide robust tests of indirect effects.
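To make the bootstrap approach concrete, the sketch below illustrates a simple nonparametric percentile bootstrap confidence interval for an indirect effect estimated as the product of the a-path (treatment to mediator) and b-path (mediator to outcome) coefficients. This is a hedged, single-level illustration with simulated data: the variable names, effect sizes, and sample size are all hypothetical, and the code ignores the multilevel (clustered) structure that the stratified bootstrap variants in the study are designed to respect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical person-level data for illustration only:
# treatment x, mediator m, outcome y (true indirect effect = 0.4 * 0.5).
n = 200
x = rng.integers(0, 2, n).astype(float)
m = 0.4 * x + rng.normal(size=n)
y = 0.5 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    # a-path: OLS slope of m on x; b-path: partial OLS slope of y on m
    # controlling for x. The indirect effect is the product a * b.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def percentile_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05):
    # Nonparametric percentile bootstrap: resample cases with replacement,
    # recompute a * b each time, and take empirical percentiles as the CI.
    estimates = np.empty(n_boot)
    n = len(x)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        estimates[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

lo, hi = percentile_bootstrap_ci(x, m, y)
```

The test of the indirect effect rejects the null when the interval excludes zero; the bias-corrected variants studied in the article additionally adjust the percentile cut points for median bias in the bootstrap distribution.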
Notes
* indicates an error rate that is outside of Bradley's (1978) criterion of 92.5% to 97.5%. Dashes indicate that this method was not implemented for the given nonnormality condition. z = z test; Emp-M = empirical-M test; Boot = parametric percentile bootstrap; BC Boot = bias-corrected parametric percentile bootstrap; Robust z = z test with robust standard errors; Robust Emp-M = empirical-M test with robust standard errors; NP Boot = nonparametric percentile bootstrap; BCNP Boot = bias-corrected nonparametric percentile bootstrap; SNP Boot = stratified nonparametric percentile bootstrap; BCSNP Boot = bias-corrected stratified nonparametric percentile bootstrap.