Abstract
Ridge regression was introduced to address the instability of the ordinary least squares estimate under multicollinearity. It penalizes the least squares loss with a ridge penalty on the regression coefficients. The ridge penalty shrinks the coefficient estimates toward zero, but never exactly to zero; for this reason, ridge regression has long been criticized for being unable to perform variable selection. In this article, we propose a new variable selection method based on an individually penalized ridge regression, a slightly generalized version of ridge regression. An adaptive version is also provided. Simulations and a real data example show that the new methods perform competitively.
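To fix ideas, the standard ridge criterion and its closed-form solution are shown below, together with one natural reading of the "individually penalized" generalization in which each coefficient receives its own tuning parameter; the notation (response $y$, design matrix $X$, penalties $\lambda$ and $\lambda_1,\ldots,\lambda_p$) is a sketch and not taken from the article itself:

\begin{align*}
\hat{\beta}^{\mathrm{ridge}} &= \operatorname*{arg\,min}_{\beta}\; \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} \beta_j^2
  = (X^\top X + \lambda I_p)^{-1} X^\top y,\\
\hat{\beta}^{\mathrm{ind}} &= \operatorname*{arg\,min}_{\beta}\; \|y - X\beta\|_2^2 + \sum_{j=1}^{p} \lambda_j \beta_j^2
  = (X^\top X + \Lambda)^{-1} X^\top y, \qquad \Lambda = \operatorname{diag}(\lambda_1,\ldots,\lambda_p).
\end{align*}

Because $\lambda > 0$ only shrinks the solution, no coefficient is driven exactly to zero in the first criterion; letting individual $\lambda_j$ grow large in the second is what opens the door to variable selection.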
Supplementary Materials
Additional simulation results: A separate PDF file contains the results of the simulation example in Section 5.1 for the cases σ = 1, n = 40 and σ = 3, n = 40.
R code: A file (RSO.R) contains the R code for the ridge selection operator and another file (demoRSO.R) demonstrates how to apply the ridge selection operator and perform refitting.
Acknowledgments
We thank three reviewers, an associate editor, and the editor for their most helpful comments, which led to substantial improvements in the article.