Theory and Method

Maximum Likelihood Computations with Repeated Measures: Application of the EM Algorithm

Nan M. Laird, Nicholas Lange & Daniel Stram
Pages 97-105 | Received 01 May 1985, Published online: 12 Mar 2012
 

Abstract

The purpose of this article is to consider the use of the EM algorithm (Dempster, Laird, and Rubin 1977) for both maximum likelihood (ML) and restricted maximum likelihood (REML) estimation in a general repeated measures setting using a multivariate normal data model with linear mean and covariance structure (Anderson 1973). Several models and methods of analysis have been proposed in recent years for repeated measures data; Ware (1985) presented an overview. Because the EM algorithm is a general-purpose, iterative method for computing ML estimates with incomplete data, it has often been used in this particular setting (Dempster et al. 1977; Andrade and Helms 1984; Jennrich and Schluchter 1985).
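
To fix notation, here is a minimal sketch of the model class under discussion: independent units y_i ~ N(X_i beta, Sigma(theta)), where Sigma(theta) has the linear structure of Anderson (1973), Sigma(theta) = theta_1 G_1 + ... + theta_m G_m for known symmetric matrices G_g. The function and argument names below are illustrative, not from the paper.

    import numpy as np
    from scipy.stats import multivariate_normal

    def log_likelihood(beta, theta, X_list, y_list, G_list):
        """Gaussian log-likelihood with linear mean X_i @ beta and linear
        covariance structure Sigma(theta) = sum_g theta_g * G_g."""
        Sigma = sum(t * G for t, G in zip(theta, G_list))
        return sum(
            multivariate_normal.logpdf(y, mean=X @ beta, cov=Sigma)
            for X, y in zip(X_list, y_list)
        )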

There are two apparently different approaches to using the EM algorithm in this setting. In one application, each experimental unit is observed under a standard protocol specifying measurements at each of n occasions (or under n conditions), and incompleteness implies that the number of measurements actually collected on each unit is less than the requisite n for at least some units. In this circumstance, incompleteness may be modeled if one regards the measurements actually collected as the observed data, the conceptual set of n measurements on each individual as the complete data, and the unobserved data as the missing measurements on those units with fewer than n observations. Application of the EM algorithm in this setting [referred to as “missing data” in Dempster et al. (1977) and “incomplete data” in Jennrich and Schluchter (1985)] was discussed by Orchard and Woodbury (1972), Beale and Little (1975), and Jennrich and Schluchter (1985).
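
The E step of this missing-data formulation rests on the standard conditional distribution of a partitioned multivariate normal vector: each missing entry is replaced by its conditional expectation given the observed entries. A sketch under that standard assumption, with illustrative names:

    import numpy as np

    def e_step_missing(y, observed, mu, Sigma):
        """Conditional mean and covariance of the missing entries of
        y ~ N(mu, Sigma), given the observed entries; 'observed' is a
        boolean mask of length n."""
        m = ~observed
        Soo = Sigma[np.ix_(observed, observed)]
        Smo = Sigma[np.ix_(m, observed)]
        Smm = Sigma[np.ix_(m, m)]
        resid = y[observed] - mu[observed]
        cond_mean = mu[m] + Smo @ np.linalg.solve(Soo, resid)
        cond_cov = Smm - Smo @ np.linalg.solve(Soo, Smo.T)
        return cond_mean, cond_cov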

One drawback of this approach in the longitudinal data setting is that the multivariate model with linear mean and covariance structure does not, in general, possess closed-form solutions even with complete data (Anderson 1973; Szatrowski 1980). Thus implementing the EM algorithm requires either an iterative M step within each EM iteration or the use of a generalized EM (GEM) algorithm that requires only that the complete data likelihood be increased rather than maximized at each M step. A second drawback is that this approach requires specification of the covariates for both the observed and the missing observations. If the covariates are unknown for the missing observations, arbitrary values must be specified, which may affect the rate but not the final point of convergence (Jennrich and Schluchter 1985).
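
As a schematic illustration of the GEM idea (not the paper's implementation), the M step can take a single iteration of a numerical optimizer, which increases the expected complete-data log-likelihood Q without maximizing it; neg_Q below denotes -Q and is an assumed callable:

    from scipy.optimize import minimize

    def gem_m_step(neg_Q, theta):
        """GEM M step: a single L-BFGS-B iteration on -Q, which increases
        the expected complete-data log-likelihood without maximizing it."""
        res = minimize(neg_Q, theta, method="L-BFGS-B",
                       options={"maxiter": 1})
        return res.x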

The second application of the EM algorithm arises naturally when we use mixed models to analyze serial measurements. In this setting, the incomplete data are modeled quite differently. The observed data are as before, that is, the measurements actually collected on each unit. The complete data, however, consist of the observed data plus the unobservable random parameters and error terms specified in the mixed model. Thus the missing data (the random parameters and error terms) would not be viewed as data in the traditional statistical sense. Laird and Ware (1982) and Andrade and Helms (1984) took this approach.
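
In this formulation, the E step computes the posterior mean and covariance of the random effects b_i in the Laird-Ware model y_i = X_i beta + Z_i b_i + e_i, with b_i ~ N(0, D) and e_i ~ N(0, sigma^2 I). A sketch under those standard assumptions (names illustrative):

    import numpy as np

    def e_step_random_effects(y, X, Z, beta, D, sigma2):
        """Posterior mean and covariance of b_i given y_i in the model
        y = X @ beta + Z @ b + e, b ~ N(0, D), e ~ N(0, sigma2 * I)."""
        V = Z @ D @ Z.T + sigma2 * np.eye(len(y))
        resid = y - X @ beta
        b_hat = D @ Z.T @ np.linalg.solve(V, resid)
        b_cov = D - D @ Z.T @ np.linalg.solve(V, Z @ D)
        return b_hat, b_cov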

This article shows that the latter approach is more general and encompasses the missing-data approach as a special case. This result has several important applications. First, it means that EM algorithms encoded for models with random effects can also be used for multivariate normal models with arbitrary covariance structure and missing data. Second, this approach avoids specification of covariates for missing observations. Finally, use of the general formulation means that closed-form solutions for the complete data maximization will exist for a much broader class of models, enabling one to avoid use of GEM or iterations within each M step.

For a certain class of multivariate growth curve models with random effects structure (Reinsel 1982), closed-form solutions exist for both ML and REML estimates of the mean and covariance parameters. Formulas for these closed-form solutions are given that are applicable whenever the solution is not on the boundary.

The choice of starting values for the EM iterations is important, since the EM algorithm will not, in general, converge from arbitrary starting values to the closed-form solution (if it exists) in one iteration. Several possibilities for starting values are given.
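
One common choice, given here only as an illustration and not as the paper's recommendation, starts from ordinary least squares: OLS for beta, the pooled residual variance for sigma^2, and a scaled identity for the random-effects covariance D.

    import numpy as np

    def starting_values(X_list, y_list, q):
        """OLS for beta, pooled residual variance for sigma^2, and a
        scaled identity for the q x q random-effects covariance D."""
        X = np.vstack(X_list)
        y = np.concatenate(y_list)
        beta0, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta0
        sigma2_0 = resid @ resid / (len(y) - X.shape[1])
        return beta0, sigma2_0, sigma2_0 * np.eye(q)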

The rate of convergence of the EM algorithm is generally linear. The actual speed of convergence in two data examples is shown to depend heavily on both the actual data set and the assumed structure for the covariance matrix. We discuss two methods for accelerating convergence, which we find are most useful when the covariance matrix is assumed to have a random effects structure. When the covariance matrix is assumed to be arbitrary, the EM iterations reduce to familiar iteratively reweighted least squares (IRLS) computations. The EM algorithm has the unusual property in this setting that when all of the data are complete (no missing observations), the iterations are still IRLS, but the rate of convergence changes from linear to quadratic.
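
A hedged sketch of the complete-data, arbitrary-covariance case the abstract refers to: the iterations alternate a generalized least squares step for beta with an update of Sigma from the current residuals. The names and the fixed iteration count are illustrative:

    import numpy as np

    def irls(X_list, y_list, n_iter=50):
        """Alternate a GLS step for beta with an update of the arbitrary
        covariance Sigma from the current residuals (complete data)."""
        N, n = len(y_list), len(y_list[0])
        Sigma = np.eye(n)
        for _ in range(n_iter):
            Si = np.linalg.inv(Sigma)
            A = sum(X.T @ Si @ X for X in X_list)
            b = sum(X.T @ Si @ y for X, y in zip(X_list, y_list))
            beta = np.linalg.solve(A, b)
            R = [y - X @ beta for X, y in zip(X_list, y_list)]
            Sigma = sum(np.outer(r, r) for r in R) / N
        return beta, Sigma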
