References
- Amari, S. (1982), “Differential Geometry of Curved Exponential Families — Curvatures and Information Loss,” Annals of Statistics, 10, 357–385.
- Ashikhmin, A., and Calderbank, A. R. (2010), “Grassmannian Packings From Operator Reed–Muller Codes,” IEEE Transactions on Information Theory, 56, 5689–5714. DOI: https://doi.org/10.1109/TIT.2010.2070192.
- Bache, K., and Lichman, M. (2013), “UCI Machine Learning Repository.” Available at: http://archive.ics.uci.edu/ml.
- Banerjee, A., Dhillon, I. S., Ghosh, J., and Sra, S. (2005), “Clustering on the Unit Hypersphere Using von Mises-Fisher Distributions,” Journal of Machine Learning Research, 6, 1345–1382.
- Belkin, M., and Niyogi, P. (2003), “Laplacian Eigenmaps for Dimensionality Reduction and Data Representation,” Neural Computation, 15, 1373–1396. DOI: https://doi.org/10.1162/089976603321780317.
- Bendich, P., Mukherjee, S., and Wang, B. (2012), “Local Homology Transfer and Stratification Learning,” ACM-SIAM Symposium on Discrete Algorithms. DOI: https://doi.org/10.1137/1.9781611973099.107.
- Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003), “Latent Dirichlet Allocation,” Journal of Machine Learning Research, 3, 993–1022.
- Carvalho, C. M., Chang, J., Lucas, J. E., Nevins, J. R., Wang, Q., and West, M. (2008), “High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics,” Journal of the American Statistical Association, 103, 1438–1456. DOI: https://doi.org/10.1198/016214508000000869.
- Chandra, K., Canale, A., and Dunson, D. (2020), “Escaping the Curse of Dimensionality in Bayesian Model Based Clustering,” arXiv:2006.02700.
- Chen, A., De, A., and Vijayaraghavan, A. (2021), “Learning a Mixture of Two Subspaces Over Finite Fields,” Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR, pp. 481–504.
- Conway, J. H., Hardin, R. H., and Sloane, N. J. A. (1996), “Packing Lines, Planes, Etc.: Packings in Grassmannian Spaces,” Experimental Mathematics, 5, 139–159.
- Cook, R. (2007), “Fisher Lecture: Dimension Reduction in Regression,” Statistical Science, 22, 1–26. DOI: https://doi.org/10.1214/088342306000000682.
- Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990), “Indexing by Latent Semantic Analysis,” Journal of the American Society for Information Science, 41, 391–407. DOI: https://doi.org/10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9.
- Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J., Sandhu, S., Guppy, K., Lee, S., and Froelicher, V. (1989), “International Application of a New Probability Algorithm for the Diagnosis of Coronary Artery Disease,” American Journal of Cardiology, 64, 304–310. DOI: https://doi.org/10.1016/0002-9149(89)90524-9.
- Donoho, D., and Grimes, C. (2003), “Hessian Eigenmaps: New Locally Linear Embedding Techniques for High-Dimensional Data,” Proceedings of the National Academy of Sciences, 100, 5591–5596. DOI: https://doi.org/10.1073/pnas.1031596100.
- Efron, B. (1978), “The Geometry of Exponential Families,” Annals of Statistics, 6, 362–376.
- Elhamifar, E., and Vidal, R. (2009), “Sparse Subspace Clustering,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790–2797.
- Fisher, R. A. (1953), “Dispersion on a Sphere,” Proceedings of the Royal Society of London, Series A, 217, 295–305.
- Geiger, D., Heckerman, D., King, H., and Meek, C. (2001), “Stratified Exponential Families: Graphical Models and Model Selection,” Annals of Statistics, 29, 505–529.
- Giné, E., and Koltchinskii, V. (2006), “Empirical Graph Laplacian Approximation of Laplace-Beltrami Operators: Large Sample Results,” in High-Dimensional Probability, Vol. 51 of IMS Lecture Notes Monogr. Ser., 238–259. Beachwood, OH: Inst. Math. Statist.
- Golub, G., and Van Loan, C. (2013), Matrix Computations (4th ed.), Baltimore, MD: Johns Hopkins University Press.
- Goresky, M., and MacPherson, R. (1988), Stratified Morse Theory, Berlin: Springer-Verlag.
- Green, P. J. (1995), “Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination,” Biometrika, 82, 711–732. DOI: https://doi.org/10.1093/biomet/82.4.711.
- Hamm, J., and Lee, D. D. (2008), “Grassmann Discriminant Analysis: A Unifying View on Subspace-Based Learning,” Proceedings of the 25th International Conference on Machine Learning, pp. 376–383.
- Hansen, T. F., and Houle, D. (2008), “Measuring and Comparing Evolvability and Constraint in Multivariate Characters,” Journal of Evolutionary Biology, 21, 1201–1219. DOI: https://doi.org/10.1111/j.1420-9101.2008.01573.x.
- Haro, G., Randall, G., and Sapiro, G. (2007), “Stratification Learning: Detecting Mixed Density and Dimensionality in High Dimensional Point Clouds,” Advances in Neural Information Processing Systems, 19, 553–560.
- Hoff, P. D. (2009), “Simulation of the Matrix Bingham–von Mises–Fisher Distribution, With Applications to Multivariate and Relational Data,” Journal of Computational and Graphical Statistics, 18, 438–456. DOI: https://doi.org/10.1198/jcgs.2009.07177.
- Hofmann, T. (1999), “Probabilistic Latent Semantic Indexing,” Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 50–57.
- Huang, K., Ma, Y., and Vidal, R. (2004), “Minimum Effective Dimension for Mixtures of Subspaces: A Robust GPCA Algorithm and Its Applications,” IEEE Conference on Computer Vision and Pattern Recognition, Vol. II, pp. 631–638.
- Jiang, W., and Tanner, M. A. (2008), “Gibbs Posterior for Variable Selection in High-Dimensional Classification and Data Mining,” Annals of Statistics, 36, 2207–2231.
- Kendall, D. G. (1984), “Shape Manifolds, Procrustean Metrics, and Complex Projective Spaces,” Bulletin of the London Mathematical Society, 16, 81–121. DOI: https://doi.org/10.1112/blms/16.2.81.
- Laaksonen, J., and Oja, E. (1996), “Subspace Dimension Selection and Averaged Learning Subspace Method in Handwritten Digit Classification,” in Proceedings of the International Conference on Artificial Neural Networks, pp. 227–232.
- Lande, R. (1979), “Quantitative Genetic Analysis of Multivariate Evolution, Applied to Brain: Body Size Allometry,” Evolution, 33, 402–416. DOI: https://doi.org/10.2307/2407630.
- Lerman, G., and Zhang, T. (2011), “Probabilistic Recovery of Multiple Subspaces in Point Clouds by Geometric ℓp Minimization,” Annals of Statistics, 39, 2686–2715.
- Lipor, J., Hong, D., Tan, Y. S., and Balzano, L. (2021), “Subspace Clustering Using Ensembles of K-Subspaces,” arXiv:1709.04744.
- Liu, G., Lin, Z., and Yu, Y. (2010), “Robust Subspace Segmentation by Low-Rank Representation,” International Conference on Machine Learning, pp. 663–670.
- Lu, C., Min, H., Zhao, Z., Zhu, L., Huang, D., and Yan, S. (2012), “Robust and Efficient Subspace Segmentation Via Least Squares Regression,” European Conference on Computer Vision, 7578, 347–360.
- Mangasarian, O. L., and Wolberg, W. H. (1990), “Cancer Diagnosis Via Linear Programming,” SIAM News, 23, 1 and 18.
- McCallum, A. K. (2002), MALLET: A Machine Learning for Language Toolkit. Available at: http://mallet.cs.umass.edu.
- Mukherjee, S., Zhou, D-X., and Wu, Q. (2010), “Learning Gradients and Feature Selection on Manifolds,” Bernoulli, 16, 181–207. DOI: https://doi.org/10.3150/09-BEJ206.
- NSF 2010 Awards. Available at: http://www.nsf.gov/awardsearch/download.jsp.
- Page, G., Bhattacharya, A., and Dunson, D. B. (2013), “Classification Via Bayesian Nonparametric Learning of Affine Subspaces,” Journal of the American Statistical Association, 108, 187–201. DOI: https://doi.org/10.1080/01621459.2013.763566.
- Pimentel-Alarcón, D., Balzano, L., Marcia, R., Nowak, R., and Willett, R. (2017), “Mixture Regression as Subspace Clustering,” 2017 International Conference on Sampling Theory and Applications (SampTA), Tallinn, pp. 456–459. DOI: https://doi.org/10.1109/SAMPTA.2017.8024386.
- Pritchard, J. K., Stephens, M., and Donnelly, P. (2000), “Inference of Population Structure Using Multilocus Genotype Data,” Genetics, 155, 945–959. DOI: https://doi.org/10.1093/genetics/155.2.945.
- Rao, C. R. (1945), “Information and Accuracy Obtainable in the Estimation of Statistical Parameters,” Bulletin of the Calcutta Mathematical Society, 37, 81–91.
- Reisinger, J., Waters, A., Silverthorn, B., and Mooney, R. J. (2010), “Spherical Topic Models,” Proceedings of the 27th International Conference on Machine Learning.
- Roweis, S., and Saul, L. (2000), “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, 290, 2323–2326. DOI: https://doi.org/10.1126/science.290.5500.2323.
- Schwartz, L. (1965), “On Bayes Procedures,” Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 4, 10–26. DOI: https://doi.org/10.1007/BF00535479.
- Siebert, J. P. (1987), “Vehicle Recognition Using Rule Based Methods,” Turing Institute Research Memorandum TIRM-87-018, Glasgow, Scotland: Turing Institute.
- Soltanolkotabi, M., and Candès, E. (2012), “A Geometric Analysis of Subspace Clustering With Outliers,” Annals of Statistics, 40, 2195–2238.
- Vidal, R., Ma, Y., and Sastry, S. (2005), “Generalized Principal Component Analysis (GPCA),” IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 1945–1959. DOI: https://doi.org/10.1109/TPAMI.2005.244.
- Vidal, R., and Favaro, P. (2014), “Low Rank Subspace Clustering (LRSC),” Pattern Recognition Letters, 43, 47–61. DOI: https://doi.org/10.1016/j.patrec.2013.08.006.
- Wu, H. C., Luk, R. W. P., Wong, K. F., and Kwok, K. L. (2008), “Interpreting TF-IDF Term Weights as Making Relevance Decisions,” ACM Transactions on Information Systems, 26, 1–37. DOI: https://doi.org/10.1145/1361684.1361686.
- Zheng, L., and Tse, D. N. C. (2002), “Communication on the Grassmann Manifold: A Geometric Approach to the Noncoherent Multiple-Antenna Channel,” IEEE Transactions on Information Theory, 48, 359–383. DOI: https://doi.org/10.1109/18.978730.
- Zheng, H. (2011), “Mixture of Subspace Learning With Adaptive Dimensionality: A Self-Organizing Approach,” in Foundations of Intelligent Systems. Advances in Intelligent and Soft Computing, eds. Y. Wang and T. Li, Vol. 122. Berlin: Springer.
- Zhong, S., and Ghosh, J. (2005), “Generative Model-Based Document Clustering: A Comparative Study,” Knowledge and Information Systems, 8, 374–384. DOI: https://doi.org/10.1007/s10115-004-0194-1.