Abstract
Previous versions of sparse principal component analysis (PCA) have presumed that the eigen-basis (a p × k matrix) is approximately sparse. We propose a method that presumes the p × k matrix becomes approximately sparse after a k × k rotation. The simplest version of the algorithm initializes with the leading k principal components. Then, the principal components are rotated with a k × k orthogonal rotation to make them approximately sparse. Finally, soft-thresholding is applied to the rotated principal components. This approach differs from prior approaches because it uses an orthogonal rotation to approximate a sparse basis. One consequence is that a sparse component need not be a leading eigenvector but can instead be a mixture of them. In this way, we propose a new (rotated) basis for sparse PCA. In addition, our approach avoids “deflation” and the multiple tuning parameters it requires. Our sparse PCA framework is versatile; for example, it extends naturally to a two-way analysis of a data matrix for simultaneous dimensionality reduction of rows and columns. We provide evidence showing that, for the same level of sparsity, the proposed sparse PCA method is more stable and explains more variance than alternative methods. Through three applications (sparse coding of images, analysis of transcriptome sequencing data, and large-scale clustering of social networks), we demonstrate the modern usefulness of sparse PCA in exploring multivariate data. An R package, epca, and the supplementary materials for this article are available online.
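The simplest version described in the abstract can be sketched in a few lines of R. This is a minimal illustration only, not the epca implementation: the function name sparse_pca_sketch and the threshold gamma are hypothetical, and varimax is used as the sparsity-seeking orthogonal rotation (cf. note 5).

```r
# Minimal sketch: leading k PCs, a k x k orthogonal (varimax) rotation,
# then entrywise soft-thresholding. Names and defaults are illustrative.
sparse_pca_sketch <- function(x, k, gamma = 0.1) {
  v <- svd(scale(x, scale = FALSE), nu = 0, nv = k)$v  # leading k PCs (p x k)
  rot <- varimax(v, normalize = FALSE)$rotmat          # k x k orthogonal rotation
  v_rot <- v %*% rot                                   # rotated components
  sign(v_rot) * pmax(abs(v_rot) - gamma, 0)            # soft-threshold entries
}
```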
Acknowledgments
We thank Sündüz Keleş, Sébastien Roch, Po-Ling Loh, Michael A Newton, Yini Zhang, Muzhe Zeng, Alex Hayes, E Auden Krauska, Jocelyn Ostrowski, Daniel Conn, and Shan Lu for all the helpful discussions.
Disclosure Statement
The authors report there are no competing interests to declare.
Notes
1 The ℓ1-norm constraint could be replaced by other sparsity constraints, for example, the ℓ0-norm analogue.
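For intuition, the two constraint types correspond to different thresholding rules. A minimal sketch in R, with an illustrative threshold gamma (pairing hard-thresholding with the ℓ0 analogue is a standard fact, not a claim about the article's algorithm):

```r
# Soft-thresholding arises from l1-type constraints (shrink, then zero out);
# hard-thresholding is the l0-type analogue (zero out without shrinkage).
soft_threshold <- function(v, gamma) sign(v) * pmax(abs(v) - gamma, 0)
hard_threshold <- function(v, gamma) v * (abs(v) > gamma)
```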
2 MRE measures the unexplained variation in the data, akin to the sum of squared errors in regression.
3 This restricted formulation is essentially a low-rank SVD with an additional sparsity constraint on the right singular vectors.
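Schematically, and in generic notation not taken from the article, such a formulation reads:

```latex
% Schematic only: the symbols and the l1 bound \gamma are illustrative.
\min_{U,\,D,\,V}\ \big\| X - U D V^{\top} \big\|_F^2
\quad \text{subject to} \quad
U^{\top}U = I_k,\quad V^{\top}V = I_k,\quad D~\text{diagonal},\quad \|V\|_1 \le \gamma
```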
4 This condition ensures that the constraint set intersects the Stiefel manifold.
5 More investigation is needed to understand the statistical properties of PRS. For example, in a recent paper (Rohe and Zeng 2020), we showed that PCA with the varimax rotation is a consistent estimator for a broad class of modern factor models that includes the degree-corrected stochastic block model (Karrer and Newman 2011).
6 The original article considers the PMD with k = 1; the PMD finds multiple factors sequentially using a deflation technique.
7 We provide an R package, epca (exploratory principal component analysis), which implements SCA and SMA with various algorithmic options. The package is available from CRAN (https://CRAN.R-project.org/package=epca).
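A brief usage sketch (the data are random, and the call shown assumes the documented interface; see the package manual for the full set of arguments):

```r
# install.packages("epca")
library(epca)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)  # toy data matrix
fit <- sca(x, k = 5)                   # sparse component analysis with 5 components
fit                                    # print the sparse (rotated) loadings
```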
8 We also experimented with alternative settings; the results are comparable.
9 The coefficient 2.5 is calculated assuming that the 16 sparse PCs have equally distributed ℓ1-norm.
10 Since SPCA and SPCAvRP perform worse than SPC and GPower (Zou and Xue 2018), we excluded these two methods from this simulation for simplicity.
11 The columns of A are neither centered nor scaled. One alternative is to use a normalized version of A. For example, define the regularized graph Laplacian as L = D_r^{-1/2} A D_c^{-1/2} with D_r = diag(r_i + r̄) and D_c = diag(c_j + c̄), where r_i is the sum of the ith row of A and c_j is the sum of the jth column of A. Here, r̄ and c̄ are the means of the r_i's and c_j's, respectively (Zhang and Rohe 2018).
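A minimal R sketch of this normalization, following the definitions in the note:

```r
# Regularized graph Laplacian L = D_r^{-1/2} A D_c^{-1/2}, where the row and
# column sums are regularized by their respective means, as described above.
regularized_laplacian <- function(A) {
  rs <- rowSums(A)               # r_i: sum of the ith row of A
  cs <- colSums(A)               # c_j: sum of the jth column of A
  dr <- 1 / sqrt(rs + mean(rs))  # row scaling with regularizer r-bar
  dc <- 1 / sqrt(cs + mean(cs))  # column scaling with regularizer c-bar
  A * outer(dr, dc)              # same as diag(dr) %*% A %*% diag(dc)
}
```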
12 Generally, B = (Z^T Z)^{-1} Z^T X Y (Y^T Y)^{-1} if Z and Y are full-rank, or B = Z^+ X (Y^+)^T if either Z or Y is singular, where A^+ is the Moore–Penrose inverse of matrix A.
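In code, both cases of this note are covered by the Moore–Penrose inverse. A hedged sketch, assuming an approximation of the form X ≈ Z B Y^T (that decomposition form is an assumption here, and solve_B is a hypothetical helper):

```r
# Compute B = Z^+ X (Y^+)^T via MASS::ginv; this reduces to the full-rank
# formula when Z and Y have full column rank, and remains valid otherwise.
library(MASS)
solve_B <- function(X, Z, Y) {
  ginv(Z) %*% X %*% t(ginv(Y))
}
```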