The Impact of Ignoring Cross-loadings on the Sensitivity of Fit Measures in Measurement Invariance Testing

Pages 64–80 | Received 28 Jan 2023, Accepted 06 Jun 2023, Published online: 28 Jul 2023

References

  • Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716–723. https://doi.org/10.1109/TAC.1974.1100705
  • Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16, 397–438. https://doi.org/10.1080/10705510903008204
  • Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246. https://doi.org/10.1037/0033-2909.107.2.238
  • Bollen, K. (1989). Structural equations with latent variables. Wiley.
  • Brannick, M. T. (1995). Critical comments on applying covariance structure modeling. Journal of Organizational Behavior, 16, 201–213. https://doi.org/10.1002/job.4030160303
  • Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21, 230–258. https://doi.org/10.1177/0049124192021002005
  • Cao, C., & Liang, X. (2022). The impact of model size on the sensitivity of fit measures in measurement invariance testing. Structural Equation Modeling: A Multidisciplinary Journal, 29, 744–754. https://doi.org/10.1080/10705511.2022.2056893
  • Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14, 464–504. https://doi.org/10.1080/10705510701301834
  • Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 9, 233–255. https://doi.org/10.1207/S15328007SEM0902_5
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Lawrence Erlbaum.
  • Cudeck, R., & O’Dell, L. L. (1994). Applications of standard error estimates in unrestricted factor analysis: Significance tests for factor loadings and correlations. Psychological Bulletin, 115, 475–487. https://doi.org/10.1037/0033-2909.115.3.475
  • Glanville, J. L., & Wildhagen, T. (2007). The measurement of school engagement: Assessing dimensionality and measurement invariance across race and ethnicity. Educational and Psychological Measurement, 67, 1019–1041. https://doi.org/10.1177/0013164406299126
  • Guo, J., Marsh, H. W., Parker, P. D., Dicke, T., Lüdtke, O., & Diallo, T. M. O. (2019). A systematic evaluation and comparison between exploratory structural equation modeling and Bayesian structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 26, 529–556. https://doi.org/10.1080/10705511.2018.1554999
  • Hayes, T., & Usami, S. (2020). Factor score regression in connected measurement models containing cross-loadings. Structural Equation Modeling: A Multidisciplinary Journal, 27, 942–951. https://doi.org/10.1080/10705511.2020.1729160
  • Hu, L., & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to unparameterized model misspecification. Psychological Methods, 3, 424–453. https://doi.org/10.1037/1082-989X.3.4.424
  • Jacobucci, R., & Grimm, K. J. (2018). Comparison of frequentist and Bayesian regularization in structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 25, 639–649. https://doi.org/10.1080/10705511.2017.1410822
  • Joo, S., & Kim, E. S. (2019). Impact of error structure misspecification when testing measurement invariance and latent-factor mean difference using MIMIC and multiple-group confirmatory factor analysis. Behavior Research Methods, 51, 2688–2699. https://doi.org/10.3758/s13428-018-1124-6
  • Kenny, D. A., & McCoach, D. B. (2003). Effect of the number of variables on measures of fit in structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 10, 333–351. https://doi.org/10.1207/S15328007SEM1003_1
  • Kim, S.-H., Cohen, A. S., Cho, S.-J., & Eom, H. J. (2019). Use of information criteria in the study of group differences in trace lines. Applied Psychological Measurement, 43, 95–112. https://doi.org/10.1177/0146621618772292
  • Kim, E. S., & Yoon, M. (2011). Testing measurement invariance: A comparison of multiple-group categorical CFA and IRT. Structural Equation Modeling: A Multidisciplinary Journal, 18, 212–228. https://doi.org/10.1080/10705511.2011.557337
  • Lai, K. (2019). A simple analytic confidence interval for CFI given nonnormal data. Structural Equation Modeling: A Multidisciplinary Journal, 26, 757–777. https://doi.org/10.1080/10705511.2018.1562351
  • Li, Y., Wen, Z., Hau, K.-T., Yuan, K.-H., & Peng, Y. (2020). Effects of cross-loadings on determining the number of factors to retain. Structural Equation Modeling: A Multidisciplinary Journal, 27, 841–863. https://doi.org/10.1080/10705511.2020.1745075
  • Liang, X. (2020). Prior sensitivity in Bayesian structural equation modeling for sparse factor loading structures. Educational and Psychological Measurement, 80, 1025–1058. https://doi.org/10.1177/0013164420906449
  • Liang, X., & Cao, C. (2022). Small-variance priors in Bayesian factor analysis with ordinal data. The Journal of Experimental Education. Advance online publication. 1–26. https://doi.org/10.1080/00220973.2022.2100731
  • Liang, X., & Luo, Y. (2020). A comprehensive comparison of model selection methods for testing factorial invariance. Structural Equation Modeling: A Multidisciplinary Journal, 27, 380–395. https://doi.org/10.1080/10705511.2019.1649983
  • Little, T. D. (1997). Mean and covariance structures (MACS) analyses of cross-cultural data: Practical and theoretical issues. Multivariate Behavioral Research, 32, 53–76. https://doi.org/10.1207/s15327906mbr3201_3
  • Lu, Z. H., Chow, S. M., & Loken, E. (2016). Bayesian factor analysis as a variable-selection problem: Alternative priors and consequences. Multivariate Behavioral Research, 51, 519–539. https://doi.org/10.1080/00273171.2016.1168279
  • Mai, Y., Zhang, Z., & Wen, Z. (2018). Comparing exploratory structural equation modeling and existing approaches for multiple regression with latent variables. Structural Equation Modeling: A Multidisciplinary Journal, 25, 737–749. https://doi.org/10.1080/10705511.2018.1444993
  • Marsh, H. W., Liem, G. A. D., Martin, A. J., Nagengast, B., & Morin, A. (2011). Methodological-measurement fruitfulness of exploratory structural equation modeling (ESEM): New approaches to key substantive issues in motivation and engagement. Journal of Psychoeducational Assessment, 29, 322–346. https://doi.org/10.1177/0734282911406657
  • Marsh, H. W., Lüdtke, O., Nagengast, B., Morin, A. J. S., & Von Davier, M. (2013). Why item parcels are (almost) never appropriate: Two wrongs do not make a right–Camouflaging misspecification with item parcels in CFA models. Psychological Methods, 18, 257–284. https://doi.org/10.1037/a0032773
  • Maydeu-Olivares, A. (2017). Assessing the size of model misfit in structural equation models. Psychometrika, 82, 533–558. https://doi.org/10.1007/s11336-016-9552-7
  • Maydeu-Olivares, A., Shi, D., & Rosseel, Y. (2018). Assessing fit in structural equation models: A Monte-Carlo evaluation of RMSEA versus SRMR confidence intervals and tests of close fit. Structural Equation Modeling: A Multidisciplinary Journal, 25, 389–402. https://doi.org/10.1080/10705511.2017.1389611
  • Meade, A. W., Johnson, E. C., & Braddy, P. W. (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. The Journal of Applied Psychology, 93, 568–592. https://doi.org/10.1037/0021-9010.93.3.568
  • Meade, A. W., & Lautenschlager, G. J. (2004). A Monte-Carlo study of confirmatory factor analytic tests of measurement equivalence/invariance. Structural Equation Modeling: A Multidisciplinary Journal, 11, 60–72. https://doi.org/10.1207/S15328007SEM1101_5
  • Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58, 525–543. https://doi.org/10.1007/BF02294825
  • Millsap, R. E. (2011). Statistical approaches to measurement invariance. Routledge.
  • Muthén, B., & Asparouhov, T. (2012). Bayesian structural equation modeling: A more flexible representation of substantive theory. Psychological Methods, 17, 313–335. https://doi.org/10.1037/a0026802
  • Muthén, L. K., & Muthén, B. O. (1998–2019). Mplus user’s guide (8th ed.). Muthén & Muthén.
  • Reis, D. (2019). Further insights into the German version of the Multidimensional Assessment of Interoceptive Awareness (MAIA): Exploratory and Bayesian structural equation modeling approaches. European Journal of Psychological Assessment, 35, 317–325. https://doi.org/10.1027/1015-5759/a000404
  • Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48, 1–36. https://doi.org/10.18637/jss.v048.i02
  • Savalei, V. (2012). The relationship between root mean square error of approximation and model misspecification in confirmatory factor analysis models. Educational and Psychological Measurement, 72, 910–932. https://doi.org/10.1177/0013164412452564
  • Schlotz, W., Yim, I. S., Zoccola, P. M., Jansen, L., & Schulz, P. (2011). The perceived stress reactivity scale: Measurement invariance, stability, and validity in three countries. Psychological Assessment, 23, 80–94. https://doi.org/10.1037/a0021148
  • Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464. https://doi.org/10.1214/aos/1176344136
  • Sclove, S. L. (1987). Application of model-selection criteria to some problems in multivariate analysis. Psychometrika, 52, 333–343. https://doi.org/10.1007/BF02294360
  • Shi, D., Maydeu-Olivares, A., & DiStefano, C. (2018). The relationship between the standardized root mean square residual and model misspecification in factor analysis models. Multivariate Behavioral Research, 53, 676–694. https://doi.org/10.1080/00273171.2018.1476221
  • Stark, S., Chernyshenko, O. S., & Drasgow, F. (2006). Detecting differential item functioning with confirmatory factor analysis and item response theory: Toward a unified strategy. The Journal of Applied Psychology, 91, 1292–1306. https://doi.org/10.1037/0021-9010.91.6.1292
  • Steiger, J. H. (2007). Understanding the limitations of global fit assessment in structural equation modeling. Personality and Individual Differences, 42, 893–898. https://doi.org/10.1016/j.paid.2006.09.017
  • Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 10–39). Sage.
  • Vrieze, S. I. (2012). Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychological Methods, 17, 228–243. https://doi.org/10.1037/a0027127
  • Wang, M.-T., Willett, J. B., & Eccles, J. S. (2011). The assessment of school engagement: Examining dimensionality and measurement invariance by gender and race/ethnicity. Journal of School Psychology, 49, 465–480. https://doi.org/10.1016/j.jsp.2011.04.001
  • Widaman, K. F., & Reise, S. P. (1997). Exploring the measurement invariance of psychological instruments: Applications in the substance use domain. In K. J. Bryant, M. Windle, & S. G. West (Eds.), The science of prevention: Methodological advances from alcohol and substance abuse research (pp. 281–324). American Psychological Association.
  • Xiao, Y., Liu, H., & Hau, K.-T. (2019). A comparison of CFA, ESEM, and BSEM in test structure analysis. Structural Equation Modeling: A Multidisciplinary Journal, 26, 665–677. https://doi.org/10.1080/10705511.2018.1562928
  • Ximénez, C., Maydeu-Olivares, A., Shi, D., & Revuelta, J. (2022). Assessing cutoff values of SEM fit indices: Advantages of the unbiased SRMR index and its cutoff criterion based on communality. Structural Equation Modeling: A Multidisciplinary Journal, 29, 368–380. https://doi.org/10.1080/10705511.2021.1992596
  • Ximénez, C., Revuelta, J., & Castañeda, R. (2022). What are the consequences of ignoring cross-loadings in bifactor models? A simulation study assessing parameter recovery and sensitivity of goodness-of-fit indices. Frontiers in Psychology, 13, 923877. https://doi.org/10.3389/fpsyg.2022.923877
  • Zhang, B., Luo, J., Sun, T., Cao, M., & Drasgow, F. (2023). Small but nontrivial: A comparison of six strategies to handle cross-loadings in bifactor predictor models. Multivariate Behavioral Research, 58, 115–132. https://doi.org/10.1080/00273171.2021.1957664
