References
- Breheny P. The group exponential lasso for bi-level variable selection. Biometrics. 2015;71(3):731–740. doi: 10.1111/biom.12300
- Breheny P, Huang J. Penalized methods for bi-level variable selection. Stat Interface. 2009;2:369–380. doi: 10.4310/SII.2009.v2.n3.a10
- Breheny P, Huang J. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. Ann Appl Stat. 2011;5(1):232–253. doi: 10.1214/10-AOAS388
- Breheny P, Huang J. Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors. Stat Comput. 2015;25(2):173–187. doi: 10.1007/s11222-013-9424-2
- Bühlmann P, van de Geer S. Statistics for high-dimensional data: methods, theory and applications. Berlin: Springer; 2011.
- Hastie T, Tibshirani R, Friedman J. The elements of statistical learning. 2nd ed. New York: Springer; 2009.
- Rish I, Grabarnik GY. Sparse modeling: theory, algorithms, and applications. Boca Raton: CRC Press; 2014.
- Fan J, Lv J. A selective overview of variable selection in high dimensional feature space. Stat Sin. 2010;20:101–148.
- Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol). 1996;58(1):267–288.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc. 2001;96(456):1348–1360. doi: 10.1198/016214501753382273
- Soubies E, Blanc-Féraud L, Aubert G. A unified view of exact continuous penalties for ℓ2-ℓ0 minimization. SIAM J Optim. 2017;27(3):2034–2060. doi: 10.1137/16M1059333
- Lv J, Fan Y. A unified approach to model selection and sparse recovery using regularized least squares. Ann Stat. 2009;37(6A):3498–3528. doi: 10.1214/09-AOS683
- Zou H, Li R. One-step sparse estimates in nonconcave penalized likelihood models. Ann Stat. 2008;36(4):1509–1533. doi: 10.1214/009053607000000802
- Lin W, Lv J. High-dimensional sparse additive hazards regression. J Am Stat Assoc. 2013;108(501):247–264. doi: 10.1080/01621459.2012.746068
- Shi Y, Cao Y, Jiao Y, et al. SICA for Cox's proportional hazards model with a diverging number of parameters. Acta Math Appl Sin Eng Ser. 2014;30(4):887–902. doi: 10.1007/s10255-014-0402-z
- Shi Y, Jiao Y, Yan L, et al. A modified BIC tuning parameter selector for SICA-penalized Cox regression models with diverging dimensionality. J Math. 2017;37(4):723–730.
- Shi Y, Wu Y, Xu D, et al. An ADMM with continuation algorithm for non-convex SICA-penalized regression in high dimensions. J Stat Comput Simul. 2018c;88(9):1826–1846. doi: 10.1080/00949655.2018.1448397
- Zhang S, Xin J. Minimization of transformed L1 penalty: closed form representation and iterative thresholding algorithms. Commun Math Sci. 2017;15(2):511–537. doi: 10.4310/CMS.2017.v15.n2.a9
- Zhang S, Xin J. Minimization of transformed L1 penalty: theory, difference of convex function algorithm, and robust application in compressed sensing. Math Program. 2018;169:307–336. doi: 10.1007/s10107-018-1236-x
- Fan Q, Jiao Y, Lu X. A primal dual active set algorithm with continuation for compressed sensing. IEEE Trans Signal Process. 2014;62(23):6276–6285. doi: 10.1109/TSP.2014.2362880
- Jiao Y, Jin B, Lu X. A primal dual active set with continuation algorithm for the ℓ0-regularized optimization problem. Appl Comput Harmon Anal. 2015;39(3):400–426. doi: 10.1016/j.acha.2014.10.001
- Jiao Y, Jin B, Lu X. Iterative soft/hard thresholding with homotopy continuation for sparse recovery. IEEE Signal Process Lett. 2017b;24(6):784–788. doi: 10.1109/LSP.2017.2693406
- Tseng P. Convergence of a block coordinate descent method for nondifferentiable minimization. J Optim Theory Appl. 2001;109(3):475–494. doi: 10.1023/A:1017501703105
- Huang J, Jiao Y, Jin B, et al. A unified primal dual active set algorithm for nonconvex sparse recovery; 2018a. arXiv preprint arXiv:1310.1147v4.
- Golub GH, Van Loan CF. Matrix computations. Vol. 3. Baltimore: JHU Press; 2012.
- Mazumder R, Friedman JH, Hastie T. SparseNet: coordinate descent with nonconvex penalties. J Am Stat Assoc. 2011;106(495):1125–1138. doi: 10.1198/jasa.2011.tm09738
- Shi Y, Cao Y, Yu J, et al. Variable selection via generalized SELO-penalized linear regression models. Appl Math J Chinese Univ. 2018a;33(2):145–162. doi: 10.1007/s11766-018-3496-x
- Huang J, Jiao Y, Lu X, et al. SNAP: a semismooth Newton algorithm for pathwise optimization with optimal local convergence rate and oracle properties; 2018b. arXiv preprint arXiv:1810.03814v1.
- Huang J, Jiao Y, Lu X, et al. Robust decoding from 1-bit compressive sampling with least squares. SIAM J Sci Comput. 2018c;40(4):A2062–A2086. doi: 10.1137/17M1154102
- Jiao Y, Jin B, Lu X. Group sparse recovery via the ℓ0(ℓ2) penalty: theory and algorithm. IEEE Trans Signal Process. 2017a;65(4):998–1012. doi: 10.1109/TSP.2016.2630028
- Shi Y, Huang J, Jiao Y, et al. Semi-smooth Newton algorithm for non-convex penalized linear regression; 2018b. arXiv preprint arXiv:1802.08895v2.
- Chen J, Chen Z. Extended Bayesian information criteria for model selection with large model spaces. Biometrika. 2008;95(3):759–771. doi: 10.1093/biomet/asn034
- Chen J, Chen Z. Extended BIC for small-n-large-P sparse GLM. Stat Sin. 2012;22:555–574.
- Kim Y, Kwon S, Choi H. Consistent model selection criteria on high dimensions. J Mach Learn Res. 2012;13:1037–1057.
- Wang H, Li B, Leng C. Shrinkage tuning parameter selection with a diverging number of parameters. J R Stat Soc Ser B (Stat Methodol). 2009;71(3):671–683. doi: 10.1111/j.1467-9868.2008.00693.x
- Wang H, Li R, Tsai C-L. Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika. 2007;94(3):553–568. doi: 10.1093/biomet/asm053
- Wang L, Kim Y, Li R. Calibrating nonconvex penalized regression in ultra-high dimension. Ann Stat. 2013;41(5):2505–2536. doi: 10.1214/13-AOS1159
- Becker S, Bobin J, Candès EJ. NESTA: a fast and accurate first-order method for sparse recovery. SIAM J Imaging Sci. 2011;4(1):1–39. doi: 10.1137/090756855
- Breheny P. Marginal false discovery rates for penalized regression models. Biostatistics. 2018. doi: 10.1093/biostatistics/kxy004
- Lv S, Lin H, Lian H, et al. Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space. Ann Stat. 2018;46(2):781–813. doi: 10.1214/17-AOS1567
- Yi C, Huang J. Semismooth Newton coordinate descent algorithm for elastic-net penalized Huber loss regression and quantile regression. J Comput Graph Stat. 2017;26(3):547–557. doi: 10.1080/10618600.2016.1256816
- Huang J, Horowitz JL, Wei F. Variable selection in nonparametric additive models. Ann Stat. 2010;38(4):2282–2313. doi: 10.1214/09-AOS781
- Huang J, Ma S, Zhang C-H. Adaptive lasso for sparse high-dimensional regression models. Stat Sin. 2008;18:1603–1618.
- Tan A, Huang J. Bayesian inference for high-dimensional linear regression under mnet priors. Can J Stat. 2016;44(2):180–197. doi: 10.1002/cjs.11283