Inference for sparse linear regression based on the leave-one-covariate-out solution path

Pages 6640–6657 | Received 17 Jul 2021, Accepted 17 Jan 2022, Published online: 02 Feb 2022

References

  • Albert, A., and J. A. Anderson. 1984. On the existence of maximum likelihood estimates in logistic regression models. Biometrika 71 (1):1–10. doi:10.1093/biomet/71.1.1.
  • Alon, U., N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. 1999. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences of the United States of America 96 (12):6745–50. doi:10.1073/pnas.96.12.6745.
  • Breiman, L. 2001. Random forests. Machine Learning 45 (1):5–32. doi:10.1023/A:1010933404324.
  • Bühlmann, P., M. Kalisch, and L. Meier. 2014. High-dimensional statistics with a view toward applications in biology. Computational Statistics 29 (3–4):407–30. doi:10.1007/s00180-013-0436-3.
  • Candes, E., and T. Tao. 2007. The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics 35 (6):2313–51.
  • Chatterjee, A., and S. N. Lahiri. 2011. Bootstrapping lasso estimators. Journal of the American Statistical Association 106 (494):608–25. doi:10.1198/jasa.2011.tm10159.
  • Chatterjee, A., and S. N. Lahiri. 2013. Rates of convergence of the adaptive lasso estimators to the oracle distribution and higher order refinements by the bootstrap. The Annals of Statistics 41 (3):1232–59. doi:10.1214/13-AOS1106.
  • Das, D., K. Gregory, and S. Lahiri. 2019. Perturbation bootstrap in adaptive lasso. The Annals of Statistics 47 (4):2080–116. doi:10.1214/18-AOS1741.
  • Dezeure, R., P. Bühlmann, L. Meier, and N. Meinshausen. 2015. High-dimensional inference: Confidence intervals, p-values and R-software hdi. Statistical Science 30 (4):533–58. doi:10.1214/15-STS527.
  • Dudoit, S., J. Fridlyand, and T. P. Speed. 2002. Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association 97 (457):77–87. doi:10.1198/016214502753479248.
  • Fan, J., Y. Feng, and R. Song. 2011. Nonparametric independence screening in sparse ultra-high dimensional additive models. Journal of the American Statistical Association 106 (494):544–57. doi:10.1198/jasa.2011.tm09779.
  • Fan, J., and R. Li. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96 (456):1348–60. doi:10.1198/016214501753382273.
  • Fan, J., and J. Lv. 2008. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (5):849–911. doi:10.1111/j.1467-9868.2008.00674.x.
  • Fan, J., and R. Song. 2010. Sure independence screening in generalized linear models with NP-dimensionality. The Annals of Statistics 38 (6):3567–604. doi:10.1214/10-AOS798.
  • Fisher, A., C. Rudin, and F. Dominici. 2018. All models are wrong but many are useful: Variable importance for black-box, proprietary, or misspecified prediction models, using model class reliance. arXiv preprint arXiv:1801.01489.
  • Friedman, J., T. Hastie, H. Höfling, and R. Tibshirani. 2007. Pathwise coordinate optimization. The Annals of Applied Statistics 1 (2):302–32. doi:10.1214/07-AOAS131.
  • Friedman, J., T. Hastie, and R. Tibshirani. 2010. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software 33 (1):1–22. doi:10.18637/jss.v033.i01.
  • Golub, T. R., D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, et al. 1999. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286 (5439):531–7. doi:10.1126/science.286.5439.531.
  • Javanmard, A., and A. Montanari. 2014. Confidence intervals and hypothesis testing for high-dimensional regression. The Journal of Machine Learning Research 15 (1):2869–909.
  • Ke, T., J. Jin, and J. Fan. 2014. Covariance assisted screening and estimation. The Annals of Statistics 42 (6):2202–42.
  • Lei, J., M. G’Sell, A. Rinaldo, R. J. Tibshirani, and L. Wasserman. 2018. Distribution-free predictive inference for regression. Journal of the American Statistical Association 113 (523):1094–111. doi:10.1080/01621459.2017.1307116.
  • Lockhart, R., J. Taylor, R. J. Tibshirani, and R. Tibshirani. 2014. A significance test for the lasso. The Annals of Statistics 42 (2):413–68.
  • Mammen, E. 2012. When does bootstrap work? Asymptotic results and simulations. Vol. 77. Berlin: Springer Science & Business Media.
  • Meinshausen, N., L. Meier, and P. Bühlmann. 2009. P-values for high-dimensional regression. Journal of the American Statistical Association 104 (488):1671–81. doi:10.1198/jasa.2009.tm08647.
  • Saldana, D. F., and Y. Feng. 2018. SIS: An R package for sure independence screening in ultrahigh-dimensional statistical models. Journal of Statistical Software 83 (2):1–25. doi:10.18637/jss.v083.i02.
  • Singh, D., P. G. Febbo, K. Ross, D. G. Jackson, J. Manola, C. Ladd, P. Tamayo, A. A. Renshaw, A. V. D'Amico, J. P. Richie, et al. 2002. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell 1 (2):203–9. doi:10.1016/S1535-6108(02)00030-2.
  • Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) 58 (1):267–88. doi:10.1111/j.2517-6161.1996.tb02080.x.
  • Van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure. 2014. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics 42 (3):1166–202. doi:10.1214/14-AOS1221.
  • Wasserman, L., and K. Roeder. 2009. High dimensional variable selection. The Annals of Statistics 37 (5A):2178–201.
  • Zhang, C.-H. 2010. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38 (2):894–942. doi:10.1214/09-AOS729.
  • Zhang, C.-H., and S. S. Zhang. 2014. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1):217–42. doi:10.1111/rssb.12026.
  • Zou, H. 2006. The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101 (476):1418–29. doi:10.1198/016214506000000735.
