Abstract
Variable-selection regression methods are oriented towards selecting a single model as the vehicle for further inferences. The appropriate inference about variables not present in the selected model is unclear: the conclusion that they are unimportant and have no effect is unsatisfactory and may be misleading. Moreover, there will often be a large number of subsets with nearly equal fit as measured by any of a number of regression criteria, making the selection of one subset all the more problematic. We propose that combining information across all models provides useful information that is usually ignored. Furthermore, we argue that the objective of the statistical analysis should not be to select a subset of variables for inclusion in a model, but to assess the role of each predictor variable in any relationship with the response; we think the term variable assessment is more descriptive of this objective. What is desired is not necessarily a single model, but an assessment of each variable. We develop a method for variable assessment that draws on Bayesian model-selection methodology and combines information from all possible models. Exact implementation of the method would require fitting all possible regression models; however, a Gibbs sampler approach to the computations yields substantial savings over fitting every possible model. Simulations suggest that combining information from all models, i.e., model averaging, has good statistical properties.
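The exact (enumerate-everything) version of this idea can be sketched in a few lines. The following is an illustrative sketch only, not the paper's method: it fits all 2^p subset regressions on simulated data, approximates posterior model probabilities with BIC weights rather than the paper's Bayesian machinery, and assesses each predictor by its model-averaged inclusion probability. All variable names and the BIC-weight approximation are assumptions for illustration.

```python
# Sketch: variable assessment by averaging over all 2^p subset models.
# BIC weights stand in for posterior model probabilities (an assumption,
# not the paper's exact Bayesian formulation).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 4
X = rng.standard_normal((n, p))
# True model uses only the first two predictors; x3 and x4 are inert.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

def bic(subset):
    """BIC of the least-squares fit using intercept + columns in `subset`."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

# Enumerate every subset of the p predictors (the "fit all models" route).
models = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
scores = np.array([bic(s) for s in models])
weights = np.exp(-0.5 * (scores - scores.min()))
weights /= weights.sum()              # approximate P(model | data)

# Variable assessment: total weight of the models containing each predictor.
inclusion = np.zeros(p)
for w, s in zip(weights, models):
    for j in s:
        inclusion[j] += w             # approximate P(variable j enters | data)
print(np.round(inclusion, 3))
```

The inclusion probabilities for the two active predictors come out near 1, while the inert predictors receive low probability; the Gibbs sampler approach described in the abstract targets the same quantities without visiting all 2^p models.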