References
- Amit, Y., Fink, M., Srebro, N., and Ullman, S. (2007), “Uncovering Shared Structures in Multiclass Classification,” in Proceedings of the 24th International Conference on Machine Learning, ICML ’07, New York: ACM, pp. 17–24.
- Anitescu, M., Chen, J., and Wang, L. (2012), “A Matrix-free Approach for Solving the Parametric Gaussian Process Maximum Likelihood Problem,” SIAM Journal on Scientific Computing, 34, A240–A262.
- Araki, Y., and Hattori, S. (2013), “Efficient Regularization Parameter Selection via Information Criteria,” Communications in Statistics: Simulation and Computation, 42, 280–293.
- Argyriou, A., Evgeniou, T., and Pontil, M. (2007), “Multi-Task Feature Learning,” in Advances in Neural Information Processing Systems (Vol. 19), eds. B. Schölkopf, J. C. Platt, and T. Hoffman, Cambridge, MA: MIT Press, pp. 41–48.
- Avron, H., and Toledo, S. (2011), “Randomized Algorithms for Estimating the Trace of an Implicit Symmetric Positive Semi-Definite Matrix,” Journal of the ACM, 58, 1–34.
- Burer, S., and Monteiro, R. D. C. (2003), “A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-Rank Factorization,” Mathematical Programming, 95, 329–357.
- Byrd, R. H., Hansen, S. L., Nocedal, J., and Singer, Y. (2016), “A Stochastic Quasi-Newton Method for Large-Scale Optimization,” SIAM Journal on Optimization, 26, 1008–1031.
- Cai, D., He, X., Han, J., and Huang, T. S. (2011), “Graph Regularized Nonnegative Matrix Factorization for Data Representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 1548–1560.
- Cai, J.-F., Candès, E. J., and Shen, Z. (2010), “A Singular Value Thresholding Algorithm for Matrix Completion,” SIAM Journal on Optimization, 20, 1956–1982.
- Candès, E. J., and Plan, Y. (2010), “Matrix Completion with Noise,” Proceedings of the IEEE, 98, 925–936.
- Candès, E. J., and Recht, B. (2009), “Exact Matrix Completion via Convex Optimization,” Foundations of Computational Mathematics, 9, 717–772.
- Candès, E. J., Sing-Long, C. A., and Trzasko, J. D. (2013), “Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators,” IEEE Transactions on Signal Processing, 61, 4643–4657.
- Chen, P., and Suter, D. (2004), “Recovering the Missing Components in a Large Noisy Low-Rank Matrix: Application to SFM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 26, 1051–1063.
- Chi, E. C., Allen, G. I., and Baraniuk, R. G. (2017), “Convex Biclustering,” Biometrics, 73, 10–19.
- Chi, E. C., Zhou, H., Chen, G. K., Del Vecchyo, D. O., and Lange, K. (2013), “Genotype Imputation via Matrix Completion,” Genome Research, 23, 509–518.
- Craven, P., and Wahba, G. (1978), “Smoothing Noisy Data with Spline Functions,” Numerische Mathematik, 31, 377–403.
- Efron, B. (2004), “The Estimation of Prediction Error,” Journal of the American Statistical Association, 99, 619–632.
- Fazel, M. (2002), “Matrix Rank Minimization with Applications,” Ph.D. dissertation, Stanford University, Stanford, CA.
- George, A. (1973), “Nested Dissection of a Regular Finite Element Mesh,” SIAM Journal on Numerical Analysis, 10, 345–363.
- Golub, G. H., Heath, M., and Wahba, G. (1979), “Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter,” Technometrics, 21, 215–223.
- Hutchinson, M. F. (1989), “A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines,” Communications in Statistics: Simulation and Computation, 18, 1059–1076.
- Kalofolias, V., Bresson, X., Bronstein, M., and Vandergheynst, P. (2014), “Matrix Completion on Graphs,” in NIPS Workshop on “Out of the Box: Robustness in High Dimension”, available at https://arxiv.org/abs/1408.1717.
- Konishi, S., and Kitagawa, G. (1996), “Generalised Information Criteria in Model Selection,” Biometrika, 83, 875–890.
- Koren, Y., Bell, R., and Volinsky, C. (2009), “Matrix Factorization Techniques for Recommender Systems,” Computer, 42, 30–37.
- Mazumder, R., Hastie, T., and Tibshirani, R. (2010), “Spectral Regularization Algorithms for Learning Large Incomplete Matrices,” Journal of Machine Learning Research, 11, 2287–2322.
- Price, B. S., Geyer, C. J., and Rothman, A. J. (2015), “Ridge Fusion in Statistical Learning,” Journal of Computational and Graphical Statistics, 24, 439–454.
- Rao, N., Yu, H.-F., Ravikumar, P. K., and Dhillon, I. S. (2015), “Collaborative Filtering with Graph Information: Consistency and Scalable Methods,” in Advances in Neural Information Processing Systems (Vol. 28), eds. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Red Hook, NY: Curran Associates, Inc., pp. 2107–2115.
- R Core Team (2013), R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing.
- Roosta-Khorasani, F., and Ascher, U. (2015), “Improved Bounds on Sample Size for Implicit Matrix Trace Estimators,” Foundations of Computational Mathematics, 15, 1187–1212.
- Sadhanala, V., Wang, Y.-X., and Tibshirani, R. (2016), “Graph Sparsification Approaches for Laplacian Smoothing,” in Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (Vol. 51), Cadiz, Spain: PMLR, pp. 1250–1259.
- Schwarz, G. (1978), “Estimating the Dimension of a Model,” Annals of Statistics, 6, 461–464.
- Shahid, N., Perraudin, N., Kalofolias, V., Puy, G., and Vandergheynst, P. (2016), “Fast Robust PCA on Graphs,” IEEE Journal of Selected Topics in Signal Processing, 10, 740–756.
- Srebro, N., Rennie, J., and Jaakkola, T. S. (2005), “Maximum-Margin Matrix Factorization,” in Advances in Neural Information Processing Systems (Vol. 17), eds. L. K. Saul, Y. Weiss, and L. Bottou, Cambridge, MA: MIT Press, pp. 1329–1336.
- Wickham, H. (2009), ggplot2: Elegant Graphics for Data Analysis, New York: Springer-Verlag.
- Wu, T. T., and Lange, K. (2015), “Matrix Completion Discriminant Analysis,” Computational Statistics & Data Analysis, 92, 115–125.