Abstract
We study the estimation properties of the Elastic Net estimator in high-dimensional linear regression models where the number of parameters $p$ is comparable to or larger than the sample size $n$. In such a situation, one often assumes sparsity of the true regression coefficient vector $\beta^*$, i.e., that $\beta^*$ belongs to an $\ell_q$-ball with radius $R_q$, for some $q \in [0,1]$. In this paper, we provide $\ell_2$-estimation error bounds for the Elastic Net and naive Elastic Net estimators under a unified framework for high-dimensional analysis of M-estimators proposed by Negahban et al. [A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Adv Neural Inf Process Syst. 2009;22:1348–1356]. We show that in both cases of exact sparsity and weak sparsity, under the same conditions on the design matrix, the Elastic Net estimator achieves a slightly better error bound than the Lasso estimator by suitably choosing the tuning parameters.
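The setting above can be illustrated numerically. The sketch below is a minimal coordinate-descent solver for the (naive) Elastic Net objective $(1/2n)\|y - X\beta\|_2^2 + \lambda_1\|\beta\|_1 + (\lambda_2/2)\|\beta\|_2^2$, applied to a synthetic exactly sparse design; the problem sizes, tuning parameters, and helper names are illustrative assumptions, not choices made in the paper, and the $\ell_2$ estimation error $\|\hat\beta - \beta^*\|_2$ is reported only to show the quantity the bounds control.

```python
import numpy as np

def soft_threshold(a, t):
    """Soft-thresholding operator S(a, t) = sign(a) * max(|a| - t, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def elastic_net_cd(X, y, lam1, lam2, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2.

    Per-coordinate update: b_j = S(rho_j, lam1) / (||x_j||^2/n + lam2),
    where rho_j is the (scaled) correlation of column j with the partial residual.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-column scaling ||x_j||^2 / n
    resid = y - X @ beta                    # current residual (initially y)
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * beta[j]      # add back coordinate j's contribution
            rho = X[:, j] @ resid / n
            beta[j] = soft_threshold(rho, lam1) / (col_sq[j] + lam2)
            resid -= X[:, j] * beta[j]      # restore residual with updated b_j
    return beta

# Synthetic exactly sparse instance (q = 0 case), assumed for illustration:
# p > n, with s nonzero true coefficients.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
beta_star = np.zeros(p)
beta_star[:s] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta_star + 0.1 * rng.standard_normal(n)

beta_en = elastic_net_cd(X, y, lam1=0.05, lam2=0.01)
err = np.linalg.norm(beta_en - beta_star)   # the l2 estimation error
```

Setting `lam2=0` recovers the Lasso as a special case, so the same routine can be used to compare the two estimators' errors across tuning-parameter choices.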
Acknowledgments
The authors are grateful to the associate editor, two reviewers, and Professor Bin Yu for many insightful comments and helpful suggestions, which led to a substantially improved manuscript.
Disclosure statement
No potential conflict of interest was reported by the author(s).