Dimension Reduction and Sparse Modeling

A New Basis for Sparse Principal Component Analysis

Pages 421-434 | Received 16 Jan 2022, Accepted 19 Jul 2023, Published online: 09 Nov 2023
 

Abstract

Previous versions of sparse principal component analysis (PCA) have presumed that the eigen-basis (a p × k matrix) is approximately sparse. We propose a method that presumes the p × k matrix becomes approximately sparse after a k × k rotation. The simplest version of the algorithm initializes with the leading k principal components. Then, the principal components are rotated with a k × k orthogonal rotation to make them approximately sparse. Finally, soft-thresholding is applied to the rotated principal components. This approach differs from prior approaches because it uses an orthogonal rotation to approximate a sparse basis. One consequence is that a sparse component need not be a leading eigenvector, but can instead be a mixture of them. In this way, we propose a new (rotated) basis for sparse PCA. In addition, our approach avoids the "deflation" step and the multiple tuning parameters it requires. Our sparse PCA framework is versatile; for example, it extends naturally to a two-way analysis of a data matrix for simultaneous dimensionality reduction of rows and columns. We provide evidence showing that for the same level of sparsity, the proposed sparse PCA method is more stable and explains more variance than alternative methods. Through three applications (sparse coding of images, analysis of transcriptome sequencing data, and large-scale clustering of social networks), we demonstrate the modern usefulness of sparse PCA in exploring multivariate data. An R package, epca, and the supplementary materials for this article are available online.
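
To make the three-step recipe concrete, here is a minimal sketch in base R. It illustrates only the initialize-rotate-threshold idea from the abstract; the function name sparse_pca_sketch and the fixed threshold lambda are ours, and this is not the epca implementation (which iterates these steps and selects the sparsity level differently).

    # Minimal sketch of the rotate-then-threshold idea (not the epca code).
    sparse_pca_sketch <- function(X, k, lambda) {
      X <- scale(X, center = TRUE, scale = FALSE)        # column-center the data
      V <- svd(X, nu = 0, nv = k)$v                      # leading k principal components (p x k)
      R <- stats::varimax(V, normalize = FALSE)$rotmat   # k x k orthogonal rotation toward sparsity
      Y <- V %*% R                                       # rotated components
      sign(Y) * pmax(abs(Y) - lambda, 0)                 # entrywise soft-thresholding
    }

For example, sparse_pca_sketch(matrix(rnorm(2000), 100, 20), k = 4, lambda = 0.05) returns a 20 × 4 sparse loading matrix.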

Acknowledgments

We thank Sündüz Keleş, Sébastien Roch, Po-Ling Loh, Michael A. Newton, Yini Zhang, Muzhe Zeng, Alex Hayes, E. Auden Krauska, Jocelyn Ostrowski, Daniel Conn, and Shan Lu for all the helpful discussions.

Disclosure Statement

The authors report there are no competing interests to declare.

Notes

1 The $\ell_1$-norm constraint could be replaced by other sparsity constraints, for example, an $\ell_0$-norm analogue.
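
Roughly speaking, the $\ell_1$ constraint leads to entrywise soft-thresholding, whereas an $\ell_0$ constraint leads to hard-thresholding. A two-line R sketch (the function names are ours):

    soft_threshold <- function(y, t) sign(y) * pmax(abs(y) - t, 0)   # l1: shrink, then zero out
    hard_threshold <- function(y, t) y * (abs(y) > t)                # l0: keep or kill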

2 MRE measures the unexplained variation in the data, akin to the sum of squared errors in regression.

3 This restricted formulation is essentially a low-rank SVD with an additional sparsity constraint on the right singular vectors.
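
In symbols, such a formulation can be written as follows, where the notation ($X$ for the data, $U$, $D$, $V$ for the SVD factors, and $\gamma$ for the sparsity budget) is illustrative rather than the paper's:

    \min_{U, D, V} \; \| X - U D V^{\top} \|_F^2
    \quad \text{s.t.} \quad
    U^{\top} U = I_k, \quad V^{\top} V = I_k, \quad D \text{ diagonal}, \quad \| V \|_1 \le \gamma.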

4 This condition ensures that the set $\{Y \in \mathbb{R}^{p \times k} : \|Y\|_1 = \gamma\}$ intersects the Stiefel manifold $\mathbb{V}(p, k)$.

5 More investigation is needed to understand the statistical properties of PRS. For example, in a recent paper (Rohe and Zeng 2020), we showed that PCA with the varimax rotation is a consistent estimator for a broad class of modern factor models that includes the degree-corrected stochastic block model (Karrer and Newman 2011).

6 The article originally considers the PMD with k = 1. The PMD finds multiple factors sequentially using a deflation technique.

7 We provide an R package epca, for exploratory principal component analysis, which implements SCA and SMA with various algorithmic options. The package is available from CRAN (https://CRAN.R-project.org/package=epca).

8 We also experimented with $\gamma = \sqrt{pk} = 40$. The results are comparable.

9 The coefficient 2.5 is calculated as $\lambda/16$, assuming that the 16 sparse PCs have equally distributed $\ell_1$-norms.

10 Since SPCA and SPCAvRP perform worse than SPC and GPower (Zou and Xue 2018), we excluded these two methods from this simulation for simplicity.

11 The columns of A are neither centered nor scaled. One alternative is to use a normalized version of A. For example, define the regularized graph Laplacian $L \in \mathbb{R}^{n \times p}$ with $L_{ij} = A_{ij} / \sqrt{(r_i + \bar{r})(c_j + \bar{c})}$, where $r_i = \sum_j A_{ij}$ is the sum of the $i$th row of A and $c_j = \sum_i A_{ij}$ is the sum of the $j$th column of A. Here, $\bar{r}$ and $\bar{c}$ are the means of the $r_i$'s and the $c_j$'s, respectively (Zhang and Rohe 2018).
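
A compact R sketch of this normalization, assuming A is a dense matrix (the function name is ours; sparse-matrix code would differ):

    regularized_laplacian <- function(A) {
      r <- rowSums(A)   # row sums r_i
      c <- colSums(A)   # column sums c_j
      # L_ij = A_ij / sqrt((r_i + rbar)(c_j + cbar))
      A / sqrt(outer(r + mean(r), c + mean(c)))
    }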

12 Generally, $B^* = (Z^{\top}Z)^{-1} Z^{\top} X Y (Y^{\top}Y)^{-1}$ if Z and Y are both full-rank, or $B^* = (Z^{\top}Z)^{+} Z^{\top} X Y (Y^{\top}Y)^{+}$ if either Z or Y is rank-deficient, where $M^{+}$ denotes the Moore–Penrose inverse of a matrix $M$.
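
The pseudoinverse form translates directly into R via MASS::ginv; this transcribes the formula above (the function name is ours, not epca's):

    library(MASS)  # ginv() is the Moore-Penrose inverse
    update_B <- function(X, Z, Y) {
      ginv(crossprod(Z)) %*% t(Z) %*% X %*% Y %*% ginv(crossprod(Y))
    }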

Additional information

Funding

The authors gratefully acknowledge National Science Foundation grants DMS-1612456 and DMS-1916378 and Army Research Office grant W911NF-15-1-0423.
