Abstract
We propose to combine Huber's loss with an adaptive reversed version of it as a penalty function. The purpose is twofold: first, we propose an estimator that is robust to data subject to heavy-tailed errors or outliers; second, we aim to overcome the variable selection problem in the presence of highly correlated predictors. In this setting, the adaptive least absolute shrinkage and selection operator (lasso), although a popular technique for simultaneous estimation and variable selection, is not a fully satisfactory variable selection method. We call this new penalty the adaptive BerHu penalty. As with the elastic net penalty, small coefficients contribute to this penalty through their norm, while larger coefficients cause it to grow quadratically (as in ridge regression). We show that the estimator obtained by combining Huber's loss with the adaptive BerHu penalty enjoys good theoretical properties in the fixed design context. This approach is compared to existing regularisation methods, such as the adaptive elastic net, and is illustrated via simulation studies and real data.
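To make the piecewise behaviour described above concrete, here is a minimal sketch of the (non-adaptive) reversed Huber penalty, following Owen's standard definition: it equals the absolute value below a threshold `c` and grows quadratically beyond it, with the two branches matching at `c`. The threshold name `c` and the function name `berhu` are illustrative choices, not notation from this paper, and the adaptive weighting of coefficients is not reproduced here.

```python
import numpy as np

def berhu(x, c=1.0):
    """Reversed Huber (BerHu) penalty.

    Linear (L1-like) for |x| <= c, quadratic (ridge-like) for |x| > c.
    The quadratic branch (x**2 + c**2) / (2c) is chosen so the penalty
    is continuous and continuously differentiable at |x| = c.
    """
    ax = np.abs(np.asarray(x, dtype=float))
    return np.where(ax <= c, ax, (ax**2 + c**2) / (2.0 * c))
```

For example, with `c = 1`, a small coefficient such as 0.5 is penalised by its absolute value (0.5), while a large coefficient such as 2 incurs the quadratic penalty (2² + 1) / 2 = 2.5.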
Acknowledgments
We are grateful to Anestis Antoniadis for constructive and fruitful discussions. The authors would also like to thank the review team for their constructive comments, which significantly improved the quality of this paper.
Disclosure statement
No potential conflict of interest was reported by the authors.