References
- Ahmed, S. E. (2014). Penalty, shrinkage and pretest strategies: Variable selection and estimation. New York: Springer.
- Ahmed, S. E., Doksum, K. A., Hossain, S., & You, J. (2007). Shrinkage, pretest and absolute penalty estimators in partially linear models. Australian & New Zealand Journal of Statistics, 49, 435–454.
- Ahmed, S. E., Hossain, S., & Doksum, K. A. (2012). LASSO and shrinkage estimation in Weibull censored regression models. Journal of Statistical Planning and Inference, 142(6), 1273–1284. Retrieved from http://dx.doi.org/10.1016/j.jspi.2011.12.027
- Aldahmani, S., & Dai, H. (2015). Unbiased estimation for linear regression when n < v. International Journal of Statistics and Probability, 4(3), 61–73. Retrieved from http://dx.doi.org/10.5539/ijsp.v4n3p61
- Armagan, A., Dunson, D. B., & Lee, J. (2013). Generalized double Pareto shrinkage. Statistica Sinica, 23(1), 119–143.
- Bhattacharya, A., Pati, D., Pillai, N. S., & Dunson, D. B. (2012). Bayesian shrinkage. arXiv preprint arXiv:1212.6088. Retrieved from http://arxiv.org/abs/1212.6088
- Bickel, P. J., Ritov, Y., & Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37, 1705–1732. Retrieved from http://dx.doi.org/10.1214/08-AOS620
- Carvalho, C. M., Polson, N. G., & Scott, J. G. (2010). The horseshoe estimator for sparse signals. Biometrika, 97, 465–480.
- Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–499.
- Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
- Frank, I., & Friedman, J. (1993). A statistical view of some chemometrics regression tools (with discussion). Technometrics, 35, 109–148.
- Hansen, B. E. (2013). The risk of James-Stein and lasso shrinkage. Retrieved from http://www.ssc.wisc.edu/~bhansen/papers/lasso.pdf
- Huang, J., Ma, S., & Zhang, C.-H. (2008). Adaptive Lasso for sparse high-dimensional regression models. Statistica Sinica, 18(4), 1603–1618.
- Kim, Y., Choi, H., & Oh, H. S. (2008). Smoothly clipped absolute deviation on high dimensions. Journal of the American Statistical Association, 103(484), 1665–1673.
- Leng, C., Lin, Y., & Wahba, G. (2006). A note on the Lasso and related procedures in model selection. Statistica Sinica, 16, 1273–1284. Retrieved from http://www3.stat.sinica.edu.tw/statistica/oldpdf/A16n410.pdf
- Lu, T., Pan, Y., Kao, S. Y., Li, C., Kohane, I., Chan, J., & Yankner, B. A. (2004). Gene regulation and DNA damage in the ageing human brain. Nature, 429(6994), 883–891.
- Scheetz, T. E., Kim, K. Y. A., Swiderski, R. E., Philp, A. R., Braun, T. A., Knudtson, K. L., ...Sheffield, V. C. (2006). Regulation of gene expression in the mammalian eye and its relevance to eye disease. Proceedings of the National Academy of Sciences, 103(39), 14429–14434.
- Schelldorfer, J., Bühlmann, P., & van de Geer, S. (2011). Estimation for high dimensional linear mixed effects models using l1-penalization. Scandinavian Journal of Statistics, 38(2), 197–214. Retrieved from http://dx.doi.org/10.1111/j.1467-9469.2011.00740.x
- Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1), 267–288. Retrieved from http://statweb.stanford.edu/~tibs/lasso/lasso.pdf
- Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., & Knight, K. (2005). Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1), 91–108. Retrieved from http://dept.stat.lsa.umich.edu/~jizhu/pubs/Tibs-JRSSB05.pdf
- Tran, M. N. (2011). The loss rank criterion for variable selection in linear regression analysis. Scandinavian Journal of Statistics, 38(3), 466–479.
- Wang, H., & Leng, C. (2007). Unified LASSO estimation by least squares approximation. Journal of the American Statistical Association, 102(479), 1039–1048. Retrieved from http://dx.doi.org/10.1198/016214507000000509
- Yuan, M., & Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1), 49–67.
- Zhang, C. H. (2010). Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics, 38, 894–942.
- Zhang, C. H., & Zhang, S. S. (2014). Confidence intervals for low-dimensional parameters in high-dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1), 217–242.
- Zhao, P., & Yu, B. (2006). On model selection consistency of Lasso. Journal of Machine Learning Research, 7, 2541–2563.
- Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429. Retrieved from http://dx.doi.org/10.1198/016214506000000735
- Zuber, V., & Strimmer, K. (2011). High-dimensional regression and variable selection using CAR scores. Statistical Applications in Genetics and Molecular Biology, 10(1), Article 34, 27pp. Retrieved from http://dx.doi.org/10.2202/1544-6115.1730