
High-dimensional statistical inference via DATE

Pages 65-79 | Received 04 Jan 2020, Accepted 20 Mar 2021, Published online: 05 Apr 2021

References

  • Belloni, A., V. Chernozhukov, and K. Kato. 2019. Valid post-selection inference in high-dimensional approximately sparse quantile regression models. Journal of the American Statistical Association 114 (526):749–58. doi:10.1080/01621459.2018.1442339.
  • Berk, R., L. Brown, A. Buja, K. Zhang, and L. Zhao. 2013. Valid post-selection inference. The Annals of Statistics 41 (2):802–37. doi:10.1214/12-AOS1077.
  • Bickel, P. J., Y. A. Ritov, and A. B. Tsybakov. 2009. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics 37 (4):1705–32. doi:10.1214/08-AOS620.
  • Candès, E., and T. Tao. 2007. The Dantzig selector: Statistical estimation when p is much larger than n (with discussion). The Annals of Statistics 35 (6):2313–404.
  • Fan, J., and J. Lv. 2011. Nonconcave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory 57 (8):5467–84. doi:10.1109/TIT.2011.2158486.
  • Fan, J., and R. Li. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96 (456):1348–60. doi:10.1198/016214501753382273.
  • Fan, Y., and J. Lv. 2013. Asymptotic equivalence of regularization methods in thresholded parameter space. Journal of the American Statistical Association 108 (503):1044–61. doi:10.1080/01621459.2013.803972.
  • Fan, Y., E. Demirkaya, and J. Lv. 2019. Nonuniformity of p-values can occur early in diverging dimensions. Journal of Machine Learning Research 20:1–33.
  • Hao, N., Y. Feng, and H. Zhang. 2018. Model selection for high-dimensional quadratic regression via regularization. Journal of the American Statistical Association 113 (522):615–25. doi:10.1080/01621459.2016.1264956.
  • Javanmard, A., and A. Montanari. 2014. Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research 15:2869–909.
  • Javanmard, A., and J. Lee. 2020. A flexible framework for hypothesis testing in high dimensions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 82 (3):685–718. doi:10.1111/rssb.12373.
  • Kong, Y., Z. Zheng, and J. Lv. 2016. The constrained Dantzig selector with enhanced consistency. Journal of Machine Learning Research 17:1–22.
  • Lee, J., D. Sun, Y. Sun, and J. Taylor. 2016. Exact post-selection inference with application to the Lasso. The Annals of Statistics 44 (3):907–27. doi:10.1214/15-AOS1371.
  • Lv, J., and Y. Fan. 2009. A unified approach to model selection and sparse recovery using regularized least squares. The Annals of Statistics 37 (6A):3498–528.
  • Meinshausen, N., and P. Bühlmann. 2010. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4):417–73. doi:10.1111/j.1467-9868.2010.00740.x.
  • Meinshausen, N., L. Meier, and P. Bühlmann. 2009. p-values for high-dimensional regression. Journal of the American Statistical Association 104 (488):1671–81. doi:10.1198/jasa.2009.tm08647.
  • Minnier, J., L. Tian, and T. Cai. 2011. A perturbation method for inference on regularized regression estimates. Journal of the American Statistical Association 106 (496):1371–82. doi:10.1198/jasa.2011.tm10382.
  • Sun, T., and C.-H. Zhang. 2012. Scaled sparse linear regression. Biometrika 99 (4):879–98. doi:10.1093/biomet/ass043.
  • Tibshirani, R. 1996. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 58:267–88.
  • Tibshirani, R., A. Rinaldo, R. Tibshirani, and L. Wasserman. 2018. Uniform asymptotic inference and the bootstrap after model selection. The Annals of Statistics 46 (3):1255–87. doi:10.1214/17-AOS1584.
  • van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure. 2014. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics 42 (3):1166–202. doi:10.1214/14-AOS1221.
  • Wasserman, L., and K. Roeder. 2009. High dimensional variable selection. The Annals of Statistics 37 (5A):2178–201. doi:10.1214/08-AOS646.
  • Zhang, C.-H., and J. Huang. 2010. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38 (2):894–942. doi:10.1214/09-AOS729.
  • Zhang, C.-H., and S. Zhang. 2014. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1):217–42. doi:10.1111/rssb.12026.
  • Zheng, Z., Y. Fan, and J. Lv. 2014. High-dimensional thresholded regression and shrinkage effect. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (3):627–49. doi:10.1111/rssb.12037.
  • Zou, H., and R. Li. 2008. One-step sparse estimates in nonconcave penalized likelihood models (with discussion). The Annals of Statistics 36:1509–66.
  • Zou, H., and T. Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2):301–20. doi:10.1111/j.1467-9868.2005.00503.x.
