References
- Alfons, A., C. Croux, and S. Gelper. 2013. Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics 7 (1):226–48. doi:https://doi.org/10.1214/12-AOAS575.
- Bar, H., J. Booth, and M. T. Wells. 2018. A scalable empirical Bayes approach to variable selection in generalized linear models. arXiv preprint, arXiv:1803.09735.
- Belloni, A., and V. Chernozhukov. 2011. L1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics 39 (1):82–130. doi:https://doi.org/10.1214/10-AOS827.
- Belloni, A., V. Chernozhukov, and K. Kato. 2015. Uniform post-selection inference for least absolute deviation regression and other z-estimation problems. Biometrika 102 (1):77–94. doi:https://doi.org/10.1093/biomet/asu056.
- Bradic, J., J. Fan, and W. Wang. 2011. Penalized composite quasi-likelihood for ultrahigh dimensional variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (3):325–49. doi:https://doi.org/10.1111/j.1467-9868.2010.00764.x.
- Bühlmann, P., M. Kalisch, and L. Meier. 2014. High-dimensional statistics with a view toward applications in biology. Annual Review of Statistics and Its Application 1 (1):255–78. doi:https://doi.org/10.1146/annurev-statistics-022513-115545.
- Chen, J., and Z. Chen. 2012. Extended BIC for small-n-large-P sparse GLM. Statistica Sinica 22:555–74. doi:https://doi.org/10.5705/ss.2010.216.
- Dicker, L., B. Huang, and X. Lin. 2013. Variable selection and estimation with the seamless-L0 penalty. Statistica Sinica 23:929–62. doi:https://doi.org/10.5705/ss.2011.074.
- Fan, J., and R. Li. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96 (456):1348–60. doi:https://doi.org/10.1198/016214501753382273.
- Fan, J., Y. Fan, and E. Barut. 2014. Adaptive robust variable selection. The Annals of Statistics 42 (1):324–51. doi:https://doi.org/10.1214/13-AOS1191.
- Friedman, J., T. Hastie, and R. Tibshirani. 2010. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33 (1):1–22. doi:https://doi.org/10.18637/jss.v033.i01.
- Hubert, M., and P. J. Rousseeuw. 1998. The catline for deep regression. Journal of Multivariate Analysis 66 (2):270–96. doi:https://doi.org/10.1006/jmva.1998.1751.
- Jiang, Y. 2015. Robust estimation in partially linear regression models. Journal of Applied Statistics 42 (11):2497–508. doi:https://doi.org/10.1080/02664763.2015.1043862.
- Jiang, Y. 2016. An exponential-squared estimator in the autoregressive model with heavy-tailed errors. Statistics and Its Interface 9 (2):233–8. doi:https://doi.org/10.4310/SII.2016.v9.n2.a10.
- Jiang, Y., G. Tian, and Y. Fei. 2017. A robust and efficient estimation method for partially nonlinear models via a new MM algorithm. Statistical Papers. doi:https://doi.org/10.1007/s00362-017-0909-5.
- Jiang, Y., Q. Ji, and B. Xie. 2017. Robust estimation for the varying coefficient partially nonlinear models. Journal of Computational and Applied Mathematics 326:31–43. doi:https://doi.org/10.1016/j.cam.2017.04.028.
- Jiang, Y., Y. G. Wang, L. Fu, and X. Wang. 2019. Robust estimation using modified Huber’s functions with new tails. Technometrics 61 (1):111–22. doi:https://doi.org/10.1080/00401706.2018.1470037.
- Karunamuni, R. J., L. Kong, and W. Tu. 2019. Efficient robust doubly adaptive regularized regression with applications. Statistical Methods in Medical Research 28 (7):2210–26. doi:https://doi.org/10.1177/0962280218757560.
- Koenker, R., and G. Bassett, Jr. 1978. Regression quantiles. Econometrica: Journal of the Econometric Society 46 (1):33–50. doi:https://doi.org/10.2307/1913643.
- Kong, D., H. Bondell, and Y. Wu. 2018. Fully efficient robust estimation, outlier detection and variable selection via penalized regression. Statistica Sinica. doi:https://doi.org/10.5705/ss.202016.0441.
- Lambert-Lacroix, S., and L. Zwald. 2011. Robust regression through the Huber’s criterion and adaptive lasso penalty. Electronic Journal of Statistics 5:1015–53. doi:https://doi.org/10.1214/11-EJS635.
- Lederer, J., and C. Müller. 2015. Don’t fall for tuning parameters: Tuning-free variable selection in high dimensions with the TREX. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
- Leng, C. 2010. Variable selection and coefficient estimation via regularized rank regression. Statistica Sinica 20:167–81.
- Li, G., H. Peng, and L. Zhu. 2011. Nonconcave penalized M-estimation with a diverging number of parameters. Statistica Sinica 21:391–419.
- Li, Y., and J. Zhu. 2008. L1-norm quantile regression. Journal of Computational and Graphical Statistics 17 (1):163–85. doi:https://doi.org/10.1198/106186008X289155.
- Neykov, N., P. Čížek, P. Filzmoser, and P. Neytchev. 2012. The least trimmed quantile regression. Computational Statistics & Data Analysis 56:1757–70. doi:https://doi.org/10.1016/j.csda.2011.10.023.
- Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58:267–88. doi:https://doi.org/10.1111/j.2517-6161.1996.tb02080.x.
- Wang, L., and R. Li. 2009. Weighted Wilcoxon-type smoothly clipped absolute deviation method. Biometrics 65 (2):564–71. doi:https://doi.org/10.1111/j.1541-0420.2008.01099.x.
- Wang, L., Y. Wu, and R. Li. 2012. Quantile regression for analyzing heterogeneity in ultra-high dimension. Journal of the American Statistical Association 107 (497):214–22. doi:https://doi.org/10.1080/01621459.2012.656014.
- Wang, S., B. Nan, S. Rosset, and J. Zhu. 2011. Random lasso. The Annals of Applied Statistics 5 (1):468–85. doi:https://doi.org/10.1214/10-AOAS377.
- Wang, X., Y. Jiang, M. Huang, and H. Zhang. 2013. Robust variable selection with exponential squared loss. Journal of the American Statistical Association 108 (502):632–43. doi:https://doi.org/10.1080/01621459.2013.766613.
- Wu, Y., and Y. Liu. 2009. Variable selection in quantile regression. Statistica Sinica 19:801–17.
- Xue, F., and A. Qu. 2017. Variable selection for highly correlated predictors. arXiv preprint, arXiv:1709.04840.
- Yali, F. 2015. Two-step variable selection in quantile regression models. Journal of Shanghai Normal University (Natural Sciences) 44:270–83.
- Zou, H. 2006. The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101 (476):1418–29. doi:https://doi.org/10.1198/016214506000000735.
- Zou, H., and T. Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2):301–20. doi:https://doi.org/10.1111/j.1467-9868.2005.00503.x.