Abstract
In applying the Satorra–Bentler scaling correction, the choice of normal-theory weight matrix (i.e., the model-predicted vs. the sample covariance matrix) used to compute the correction remains unclear, and different software programs use different matrices by default. This simulation study investigates the discrepancies attributable to the choice of weight matrix in the robust chi-square statistics, standard errors, and chi-square-based model fit indexes. The study varies the sample size (100, 200, 500, and 1,000), kurtosis (0, 7, and 21), and degree of model misspecification, measured by the population root mean square error of approximation (RMSEA; 0, .03, .05, .08, .10, and .15). The results favor the model-predicted covariance matrix because it yields lower false rejection rates under the correctly specified model, as well as more accurate standard errors across all conditions. For the sample-corrected robust RMSEA, comparative fit index (CFI), and Tucker–Lewis index (TLI), the two matrices produce negligible differences.
ACKNOWLEDGMENTS
The research reported in this article was performed while Yan Xia was a graduate intern at SAS Institute under the supervision of Yiu-Fai Yung. Yan Xia is currently a Ph.D. candidate at Florida State University. Any opinions expressed in this publication are those of the authors and not necessarily those of SAS Institute.