
D-Trace estimation of a precision matrix with eigenvalue control

Pages 1231-1247 | Received 10 Jul 2018, Accepted 04 Feb 2019, Published online: 12 Mar 2019

References

  • Avagyan, V., A. M. Alonso, and F. J. Nogales. 2016. D-trace estimation of a precision matrix using adaptive lasso penalties. Advances in Data Analysis and Classification 12 (2):425–47. doi: 10.1007/s11634-016-0272-8.
  • Avagyan, V., A. M. Alonso, and F. J. Nogales. 2017. Improving the graphical lasso estimation for the precision matrix through roots of the sample covariance matrix. Journal of Computational and Graphical Statistics 26 (4):865–72. doi: 10.1080/10618600.2017.1340890.
  • Banerjee, O., L. El Ghaoui, A. d’Aspremont, and G. Natsoulis. 2006. Convex optimization techniques for fitting sparse Gaussian graphical models. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA.
  • Cai, T., W. Liu, and X. Luo. 2011. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 (494):594–607. doi: 10.1198/jasa.2011.tm10155.
  • Chicco, D. 2017. Ten quick tips for machine learning in computational biology. BioData Mining 10 (1):35. doi: 10.1186/s13040-017-0155-3.
  • Dempster, A. 1972. Covariance selection. Biometrics 28 (1):157–75. doi: 10.2307/2528966.
  • Fan, J., J. Feng, and Y. Wu. 2009. Network exploration via the adaptive lasso and SCAD penalties. The Annals of Applied Statistics 3 (2):521–41. doi: 10.1214/08-AOAS215SUPP.
  • Friedman, J., T. Hastie, and R. Tibshirani. 2008. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 (3):432–41. doi: 10.1093/biostatistics/kxm045.
  • Goto, S., and Y. Xu. 2015. Improving mean variance optimization through sparse hedging restrictions. Journal of Financial and Quantitative Analysis 50 (6):1415–41. doi: 10.1017/S0022109015000526.
  • Hannan, E. J., and B. G. Quinn. 1979. The determination of the order of an autoregression. Journal of the Royal Statistical Society: Series B 41 (2):190–5. doi: 10.1111/j.2517-6161.1979.tb01072.x.
  • Lauritzen, S. 1996. Graphical models. Oxford: Clarendon Press.
  • Liu, H., L. Wang, and T. Zhao. 2014. Sparse covariance matrix estimation with eigenvalue constraints. Journal of Computational and Graphical Statistics 23 (2):439–59. doi: 10.1080/10618600.2013.782818.
  • Matthews, B. W. 1975. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta 405 (2):442–51. doi: 10.1016/0005-2795(75)90109-9.
  • Maurya, A. 2014. A joint convex penalty for inverse covariance matrix estimation. Computational Statistics & Data Analysis 75:15–27. doi: 10.1016/j.csda.2014.01.015.
  • McLachlan, G. J. 2004. Discriminant analysis and statistical pattern recognition. New York: Wiley-Interscience.
  • Meinshausen, N., and P. Bühlmann. 2006. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics 34 (3):1436–62. doi: 10.1214/009053606000000281.
  • Powers, D. M. 2011. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. Journal of Machine Learning Technologies 2 (1):37–63.
  • Rothman, A., P. Bickel, E. Levina, and J. Zhu. 2008. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics 2:494–515. doi: 10.1214/08-EJS176.
  • Ryali, S., T. Chen, K. Supekar, and V. Menon. 2012. Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty. NeuroImage 59 (4):3852–61. doi: 10.1016/j.neuroimage.2011.11.054.
  • Stifanelli, P. F., T. M. Creanza, R. Anglani, V. C. Liuzzi, S. Mukherjee, F. P. Schena, and N. Ancona. 2013. A comparative study of covariance selection models for the inference of gene regulatory networks. Journal of Biomedical Informatics 46 (5):894–904. doi: 10.1016/j.jbi.2013.07.002.
  • Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B 58 (1):267–88. doi: 10.1111/j.2517-6161.1996.tb02080.x.
  • van der Pas, S. L., and P. D. Grünwald. 2014. Almost the best of three worlds: Risk, consistency and optional stopping for the switch criterion in single parameter model selection. arXiv preprint arXiv:1408.5724.
  • van Wieringen, W. N., and C. F. Peeters. 2016. Ridge estimation of inverse covariance matrices from high-dimensional data. Computational Statistics & Data Analysis 103:284–303. doi: 10.1016/j.csda.2016.05.012.
  • Yuan, M. 2010. High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research 11:2261–86.
  • Yuan, M., and Y. Lin. 2007. Model selection and estimation in the Gaussian graphical model. Biometrika 94 (1):19–35. doi: 10.1093/biomet/asm018.
  • Zerenner, T., P. Friederichs, K. Lehnertz, and A. Hense. 2014. A Gaussian graphical model approach to climate networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 24 (2):023103. doi: 10.1063/1.4870402.
  • Zhang, T., and H. Zou. 2014. Sparse precision matrix estimation via lasso penalized D-trace loss. Biometrika 101 (1):103–20.