Abstract
The theory presented in Mason, Keating, Sen and Blaylock (1990) and applied to the comparison of ridge regression estimators using Pitman's measure of closeness is illustrated on two data sets: one exhibiting strong collinearity and one that is nearly orthogonal. The regions of preference for the ridge regression estimator over the least squares estimator are established and shown to depend on the eigenvalues and eigenvectors of the correlation matrix of the predictor variables, the unknown coefficient vector, the error variance, and the value of the biasing constant in the ridge estimator.
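As a brief orientation, the following is a minimal sketch of the quantities the abstract refers to, assuming the conventional definitions of the ridge estimator and of Pitman's measure of closeness under a squared-error norm; these formulas are standard in the ridge regression literature and are not quoted from the paper itself. The ridge estimator with biasing constant $k$ is
\[
\hat{\beta}(k) = (X'X + kI)^{-1} X'y, \qquad k \ge 0,
\]
so that $k = 0$ recovers the least squares estimator $\hat{\beta} = (X'X)^{-1} X'y$. Under Pitman's measure of closeness, $\hat{\beta}(k)$ is preferred to $\hat{\beta}$ whenever
\[
\Pr\!\left( \lVert \hat{\beta}(k) - \beta \rVert^{2} < \lVert \hat{\beta} - \beta \rVert^{2} \right) > \tfrac{1}{2},
\]
a probability that, as the abstract indicates, depends on the eigenstructure of the predictors' correlation matrix, on the unknown coefficient vector $\beta$, on the error variance $\sigma^{2}$, and on $k$.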