ABSTRACT
This journal recently published a systematic review of simulation studies on the performance of Bayesian approaches for estimating latent variable models in small samples. The authors of this review highlighted that Bayesian approaches can perform poorly (i.e., exhibit bias) when the prior distributions are not thoughtfully constructed on the basis of previous knowledge. In this comment, we question whether bias is the most important criterion when the sample size is small. We argue that the variability of an estimator is more important and should therefore not be ignored. Moreover, because one of the most important selling points of Bayesian approaches was not addressed in the article, we argue that although somewhat biased, Bayesian approaches allow for more accurate estimates (i.e., a smaller mean squared error) than maximum likelihood (ML) in small samples, and we show one such approach that is more accurate than ML.
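The argument rests on the standard decomposition of the mean squared error (MSE) of an estimator into its squared bias and its variance; sketched here for a generic estimator \(\hat{\theta}\) of a parameter \(\theta\) (the notation is ours, not taken from the original article):

```latex
\mathrm{MSE}(\hat{\theta})
  = \mathbb{E}\!\left[(\hat{\theta} - \theta)^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{\theta}] - \theta\right)^2}_{\text{squared bias}}
  + \underbrace{\mathbb{E}\!\left[\left(\hat{\theta} - \mathbb{E}[\hat{\theta}]\right)^2\right]}_{\text{variance}}
```

Because the variance term tends to dominate in small samples, a slightly biased but stabilized estimator (e.g., one based on mildly informative priors) can yield a smaller MSE than an unbiased but highly variable one such as ML.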
Notes
1 Strictly speaking, what constitutes a small sample depends not only on the actual number of observations but also on many other factors (e.g., the number of latent variables in the model; see Smid et al., 2020).
2 One might be inclined to think that the use of priors to stabilize estimates and regularized estimation (Jacobucci et al., 2016) are similar, and in our own prior research (Zitzmann, 2018), even we established such a connection. However, despite many similarities at the technical level, we acknowledge that the two methods have somewhat different goals. Stabilization is used to lower the variability of an estimator and thus to reduce its MSE in small samples, whereas regularization is typically applied to select parameters and thus to create simpler models (see Liang & Jacobucci, in press; Serang et al., 2017).
3 The within-group slope describes the relationship between the predictor and the dependent variable at the individual level.
4 Notice, however, that the difference between this and the RMSE that emerges with is only marginal.