Abstract
A prediction density function g* for the normal linear model is derived. This density is shown to dominate three well-known prediction densities: a class of densities containing all three is first constructed, and g* is then proved to be the optimal member of this class in the sense of minimizing a criterion based on the Kullback-Leibler divergence. The density g* coincides with the Bayesian prediction density under a diffuse prior.
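As a concrete illustration of the final claim, the sketch below computes the Bayesian prediction density for a new observation in the normal linear model under the standard diffuse prior p(beta, sigma^2) proportional to 1/sigma^2. It is a well-known result that this predictive density is a Student-t centered at the least-squares fitted value; the data, design matrix, and new covariate vector here are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
n, k = 30, 3

# Hypothetical design matrix and response for a normal linear model
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=n)

# Ordinary least-squares estimates
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)  # unbiased residual variance estimate

# Under the diffuse prior p(beta, sigma^2) ∝ 1/sigma^2, the predictive
# distribution of a new response at x_new is Student-t with n - k degrees
# of freedom, location x_new' beta_hat, and squared scale
# s2 * (1 + x_new' (X'X)^{-1} x_new).
x_new = np.array([1.0, 0.3, -0.2])  # hypothetical new covariate vector
loc = x_new @ beta_hat
scale = np.sqrt(s2 * (1.0 + x_new @ XtX_inv @ x_new))
predictive = student_t(df=n - k, loc=loc, scale=scale)

# Evaluate the prediction density at its center
print(predictive.pdf(loc))
```

The resulting density is a proper probability density in the future observation, which is what allows it to be compared to other prediction densities via a Kullback-Leibler criterion.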