Latent Growth Curve Model Selection: The Potential of Individual Case Residuals

Pages 20-30 | Published online: 31 Jan 2014
 

Abstract

An individual case residual procedure for model comparison in longitudinal studies is discussed, which can be used as an aid to overall goodness-of-fit indexes in the process of choosing between rival models as means of data description. The method is based on estimates of subject-specific residuals associated with each repeated measure. The approach facilitates assessment of local fit of considered models and can be especially useful with large samples. The outlined procedure is similarly helpful in the general case for fit assessment of competing structural equation models. The method is readily applicable using the popular software Mplus and R, and is illustrated with a simulated data example as well as with empirical data from a large-scale study of older adults.
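As a rough illustration of the idea behind individual case residuals, the following Python sketch simulates data from a linear latent growth curve model and computes subject-specific residuals (observed scores minus model-implied scores based on regression-method factor score estimates). All numerical values are invented, and known population parameters are used in place of estimates purely for demonstration; in practice the parameter and factor score estimates would come from software such as Mplus or R, as the article notes.

```python
import numpy as np

# Hypothetical linear LGC model with T = 5 repeated measures.
T = 5
times = np.arange(T)
Lambda = np.column_stack([np.ones(T), times])  # loadings: intercept, slope

# Simulate n subjects (all values below are made up for illustration).
rng = np.random.default_rng(0)
n = 100
eta = rng.normal([50.0, 2.0], [5.0, 1.0], size=(n, 2))   # latent intercepts/slopes
Y = eta @ Lambda.T + rng.normal(0.0, 2.0, size=(n, T))   # observed repeated measures

# Regression-method factor scores, using (assumed known) model parameters:
# eta_hat_i = mu_eta + Psi * Lambda' * Sigma^{-1} * (y_i - mu_y).
Psi = np.diag([25.0, 1.0])        # latent covariance matrix (assumed)
Theta = 4.0 * np.eye(T)           # residual covariance matrix (assumed)
Sigma = Lambda @ Psi @ Lambda.T + Theta
mu_eta = np.array([50.0, 2.0])
mu_y = Lambda @ mu_eta
A = Psi @ Lambda.T @ np.linalg.inv(Sigma)
eta_hat = mu_eta + (Y - mu_y) @ A.T

# Individual case residuals: one residual per subject and assessment occasion.
ICR = Y - eta_hat @ Lambda.T
print(ICR.shape)
```

Comparing such residual matrices across rival models (e.g., linear vs. quadratic) is the kind of local, subject-level fit assessment the procedure supports.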

ACKNOWLEDGMENTS

We are grateful to R. E. Millsap for comments on LGC modeling and individual residuals, to L. Muthén for advice on factor score estimation in the software used, and to A. Zajacova for providing access to the empirical data employed in the illustration section. We thank the editor and two anonymous referees for valuable comments on an earlier version of the article, which contributed significantly to its improvement.

Notes

1 If, using overall goodness-of-fit indexes, one finds that a member of a set of rival LGC models (or structural equation models in general) is not plausible as an overall means of description for a given data set, that model can be excluded from further consideration and from the process of model selection. The use of ICRs later in the section for the linear model provides additional evidence of the meaningfulness of the decision to rule out this model from subsequent consideration, but in general there is no need to compare ICRs between models that are plausible for a particular data set and those that are not. We stress that the utility of the ICRs does not depend on overall model plausibility; the latter, however, is not assessed with ICRs in this article, where they are employed solely for model comparison purposes. The utility of ICRs is especially enhanced in empirical settings where the concern is to select from among a set (or subset) of models that are plausible as means of data description in a given study (see later, and Footnote 2). The ICRs can be particularly useful with very large samples (as in large-scale, nationally representative studies) when the overall fit indexes of the cubic model turn out to be significantly better than those of the quadratic model (unlike in this example), mostly due to excessively high statistical power.

2 As the preceding discussion implies, this article does not consider the process of model choice to be concerned with identification of the "true" model, but rather with selection of the most useful model from a given set of competing, plausible models as rival means for describing a particular data set. This selection, as is well known, is generally best based on substantive considerations, the research question, and an extended set of statistical indexes, including overall fit indexes, information criteria, and local fit indexes (see also Raykov & Zajacova, 2012). In this example, one could also rule out the cubic model on the grounds that its Bayesian information criterion (BIC) is higher than that of the quadratic model (although lower than that of the linear model). The value of the ICRs is especially enhanced in cases when, potentially due to a large sample and excessive statistical power, (a) the mean, variance, or both of the cubic parameter d are significant; or (b) a comparison of the models' BIC indexes might not suggest ruling out a more complex model. Last but not least, we stress that an informed model selection decision is arrived at by examining the ICRs at all assessment occasions (which in this example leads to the same final conclusion, as the same pattern of results is observed at all five measurement points).

3 This conclusion in favor of the quadratic model could be considered further supported by the observation that the BIC index reaches its minimum at that model: 190,576.163 for the linear model, 190,435.099 for the quadratic model, and 190,446.464 for the cubic model. We stress, however, that the difference in BIC between the quadratic and cubic models, only 11.365 here, does not seem large enough (especially given the sample size of more than 4,000 subjects) to warrant an informed preference for the quadratic over the cubic model based merely on the BIC index. With this in mind, the present case is an example of the ICRs being particularly helpful in model comparison, possibly over and above use of the BIC, which, unlike the ICRs, is an overall fit index.
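For reference, the BIC comparison in this note follows the standard definition BIC = −2 log L + k ln(N), with selection by minimum BIC. The sketch below illustrates that rule; the log-likelihoods and parameter counts are invented for demonstration (they are not the article's actual values), chosen only so that the quadratic model attains the minimum, mirroring the pattern reported above.

```python
import math

# BIC = -2 * log-likelihood + (number of parameters) * ln(sample size)
def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * math.log(n_obs)

n_obs = 4000  # hypothetical large sample, as in the empirical illustration
models = {
    "linear":    bic(-95271.0, 10, n_obs),  # invented values
    "quadratic": bic(-95190.0, 14, n_obs),
    "cubic":     bic(-95188.0, 19, n_obs),
}
best = min(models, key=models.get)
print(best)
```

As the note cautions, a small BIC difference between two models is weak evidence on its own, which is where occasion-by-occasion ICR comparisons add information.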
