Original Articles

The Impact of Moderate Priors for Bayesian Estimation and Testing of Item Factor Analysis Models When Maximum Likelihood Is Unsuitable

Pages 80-93 | Published online: 13 Aug 2018
 

Abstract

In psychological research, available data are often insufficient to estimate item factor analysis (IFA) models using traditional estimation methods, such as maximum likelihood (ML) or limited information estimators. Bayesian estimation with common-sense, moderately informative priors can greatly improve the efficiency of parameter estimates and stabilize estimation. There are a variety of methods available to evaluate model fit in a Bayesian framework; however, past work investigating Bayesian model fit assessment for IFA models has assumed flat priors, which have no advantage over ML in limited data settings. In this paper, we evaluated the impact of moderately informative priors on the ability to detect model misfit for several candidate indices: posterior predictive checks based on the observed score distribution, leave-one-out cross-validation, and the widely applicable information criterion (WAIC). We found that although Bayesian estimation with moderately informative priors is an excellent aid for estimating challenging IFA models, methods for testing model fit in these circumstances are inadequate.
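As a concrete illustration of the cross-validation and information-criterion indices named above, the sketch below computes LOO and WAIC from a fitted Stan model using the loo R package. It assumes the model's generated quantities block saves the pointwise log-likelihood under the name log_lik; the object names fit and log_lik are hypothetical placeholders, not necessarily the exact setup used in this study.

    # Minimal sketch: LOO and WAIC from a fitted Stan model ('fit' is a stanfit
    # object; a generated quantities block is assumed to save "log_lik").
    library(rstan)
    library(loo)

    log_lik <- extract_log_lik(fit, parameter_name = "log_lik", merge_chains = FALSE)
    r_eff   <- relative_eff(exp(log_lik))        # relative effective sample sizes for PSIS-LOO

    loo_result  <- loo(log_lik, r_eff = r_eff)   # leave-one-out cross-validation (PSIS-LOO)
    waic_result <- waic(log_lik)                 # widely applicable information criterion
    print(loo_result)
    print(waic_result)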

Notes

1 This is because limited information estimation is based on polychoric correlation coefficients, which are highly sensitive to low frequencies in item-by-item contingency tables (Olsson, 1979; Savalei, 2011).

2 However, the choice of (1) results in a model that is only locally identified because the signs for the factor loadings could be either positive or negative (Bollen & Bauldry, 2010; Loken, 2005).

3 Though more computationally demanding, ML estimation is superior to limited information approaches when frequencies are sparse because limited information estimators are sensitive to low frequencies in bivariate contingency tables, whereas ML estimation is sensitive to univariate (item-level) sparse frequencies (Wirth & Edwards, 2007).

4 This prior for factor loadings requires specifying the sign of the relationship between the latent factor and each indicator. This restriction can often reasonably be made a priori and has computational advantages, but it may be relaxed (Bainter, 2017; Fox & Glas, 2003, also constrain loadings to positive values).

5 “Moderately informative” is meant to distinguish from so-called “informative” and “uninformative” priors, which refer to peaked and diffuse priors, respectively. The informative versus uninformative labeling is itself problematic, as a flat prior may be counterintuitively influential in some cases, and the level of information in a particular prior varies on a case-by-case basis. For comparison, the default “non-informative” prior in Mplus for item loadings is N(0, ∞), but this prior is not recommended for parameters on the logit scale, especially when sample size is not large (Asparouhov & Muthén, 2010). In contrast, an “informative” prior may be centered on an expected value with variance weighted according to the strength of the prior belief or evidence (e.g., Gelman & Hill, 2007, pp. 392–393).
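To make the contrast in note 5 concrete, the short sketch below compares the prior implied on the probability scale by a very diffuse normal prior versus a moderately informative normal prior on the logit scale. The standard deviations are illustrative values chosen for this sketch, not values taken from the article: a very flat logit-scale prior piles its implied mass near probabilities of 0 and 1, while a moderate prior spreads it more evenly.

    # Implied priors on the probability scale for a logit-scale parameter.
    # The standard deviations are illustrative assumptions, not the article's values.
    set.seed(123)
    diffuse  <- rnorm(1e5, mean = 0, sd = 10)   # stand-in for a very flat prior
    moderate <- rnorm(1e5, mean = 0, sd = 2)    # a moderately informative prior

    par(mfrow = c(1, 2))
    hist(plogis(diffuse),  breaks = 50, main = "Diffuse N(0, 100)", xlab = "Implied probability")
    hist(plogis(moderate), breaks = 50, main = "Moderate N(0, 4)",  xlab = "Implied probability")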

6 We checked that 200 replications were sufficient to yield stable results by plotting the running median across converged replications in each condition.
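A minimal version of the stability check described in note 6 is sketched below; est is a hypothetical vector holding one estimate per converged replication within a condition.

    # Running median of an estimate across converged replications ('est' is hypothetical).
    running_med <- sapply(seq_along(est), function(r) median(est[1:r]))
    plot(running_med, type = "l", xlab = "Converged replication", ylab = "Running median")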

7 This restriction is computationally advantageous to avoid sign indeterminacy, but may be relaxed if not reasonable substantively (see Bainter, 2017).

8 We encountered divergent transitions after warm-up in a number of replications, depending on condition (some conditions are not reported here). Divergent transitions may indicate that results are untrustworthy and warn of potential pathologies (Monnahan, Thorson, & Branch, 2017). Decreasing the step size may eliminate divergent transitions, but it also increases computational time. We reran a subset of conditions with a decreased step size, which almost completely eliminated divergent transitions. We then compared replications that had one or more divergent transitions before the adjustment and zero divergent transitions after the adjustment, and we found no difference. Therefore, we retained our original results with the default step size for this simulation, but for a single replication (and when the population-generating model is not available for comparison), decreasing the step size can help eliminate divergent transitions in these conditions.
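In Stan, the step size is typically decreased by raising the adapt_delta target acceptance rate, as sketched below with rstan; compiled_model and stan_data are hypothetical placeholders for the IFA model and its data, not the authors' exact code.

    # Sketch: rerun with a smaller step size (higher adapt_delta) and count
    # post-warm-up divergent transitions. Object names are placeholders.
    library(rstan)

    fit <- sampling(compiled_model, data = stan_data, chains = 4, iter = 2000,
                    control = list(adapt_delta = 0.99))  # default 0.8; higher target = smaller steps

    sampler_params <- get_sampler_params(fit, inc_warmup = FALSE)
    sum(sapply(sampler_params, function(chain) sum(chain[, "divergent__"])))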

9 Limited information fit tests such as M2 are better, though they require the same sample sizes needed for stable model convergence; our models did not converge in FlexMIRT, so we could not obtain these fit indices.

10 To further investigate overall parameter estimate efficiency, we also examine root mean square error and the median absolute deviation about the median. These results are consistent with the pattern demonstrated above and with previous literature (Bainter, 2017), so they are omitted here for brevity; they are available to interested readers in the supplemental materials (Tables S1 and S2).

11 In the Stan output, this posterior standard deviation is labeled “sd.” The Monte Carlo standard error is also printed and labeled “se_mean”; this value quantifies the simulation accuracy of the posterior mean and is not related to the uncertainty of the parameters.
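For concreteness, the lines below show where these two quantities appear in rstan's summary output; the parameter name lambda is a hypothetical placeholder for the factor loadings, and fit is a stanfit object.

    # Posterior SD ("sd") versus Monte Carlo SE of the mean ("se_mean") in rstan output.
    # "lambda" is a hypothetical parameter name; 'fit' is a stanfit object.
    fit_summary <- summary(fit, pars = "lambda")$summary
    fit_summary[, c("mean", "se_mean", "sd", "2.5%", "97.5%", "n_eff", "Rhat")]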
