Abstract
Principal fitted component (PFC) models are a class of likelihood-based inverse regression methods that yield a so-called sufficient reduction of the random p-vector of predictors X given the response Y. Assuming that a large number of the predictors have no information about Y, we aim to obtain an estimate of the sufficient reduction that 'purges' these irrelevant predictors and thus selects the most useful ones. We devise a procedure that uses observed significance values from the univariate fits to yield a sparse PFC, a purged estimate of the sufficient reduction. The performance of the method is compared to that of penalized forward linear regression models for variable selection in high-dimensional settings.
Acknowledgements
The authors thank the two referees and the associate editor for their constructive comments and suggestions, which helped substantially improve this paper. The authors are grateful to Professor Anindya Roy for his earlier comments and suggestions, and to Dr. Heather L. White for proofreading the manuscript.
Disclosure statement
No potential conflict of interest was reported by the authors.