Abstract
Suppose that the normal model is used for data Y_1, …, Y_n, but that the true distribution is a t distribution with location and scale parameters ζ and σ and m degrees of freedom; the normal model corresponds to m = ∞. Using a local asymptotic framework in which m is allowed to increase with n, two classes of estimands are identified. One small class, which in particular contains the functions of ζ alone, is affected by t-ness only to the second order, and for such estimands maximum likelihood estimation in the two- and three-parameter models becomes equivalent. For all other estimands, it is shown that if m ≥ 1.458√n, then maximum likelihood estimation using the incorrect normal model is still more precise than using the correct three-parameter model. This is further shown to hold in regression models with t-distributed residuals. We also propose and analyze compromise estimators that in various ways interpolate between the normal and the nonnormal models. A separate section extends the t-ness results to general normal scale mixtures, in which case the tolerance radius around the normal error distribution takes the form of an upper bound 0.3429/√n on the variance of the scale mixture distribution. Proving our results requires somewhat nonstandard “corner asymptotics,” because the behavior of estimators must be studied when the crucial parameter γ = 1/m is close to 0, which is not an interior point of the parameter space, and the maximum likelihood estimator of m equals ∞ with positive probability.