Abstract
It is common in modern prediction problems for many predictor variables to be counts of rarely occurring events. This leads to design matrices in which many columns are highly sparse. The challenge posed by such “rare features” has received little attention despite its prevalence in diverse areas, ranging from natural language processing (e.g., rare words) to biology (e.g., rare species). We show, both theoretically and empirically, that not explicitly accounting for the rareness of features can greatly reduce the effectiveness of an analysis. We next propose a framework for aggregating rare features into denser features in a flexible manner that creates better predictors of the response. Our strategy leverages side information in the form of a tree that encodes feature similarity. We apply our method to data from TripAdvisor, in which we predict the numerical rating of a hotel based on the text of the associated review. Our method achieves high accuracy by making effective use of rare words; by contrast, the lasso is unable to identify highly predictive words if they are too rare. A companion R package, called rare, implements our new estimator, using the alternating direction method of multipliers. Supplementary materials for this article are available online.
Supplementary Materials
Supplementary materials for “Rare Feature Selection in High Dimensions”: proofs of theorems, lemmas, and corollaries (and intermediate results); ADMM algorithm for our method.
Acknowledgments
The authors thank Andy Clark for calling our attention to the challenge of rare features.
Notes
1 For example, in the R text mining library tm (Feinerer and Hornik 2017), removeSparseTerms is a commonly used function for removing any terms with sparsity level above a given threshold.
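The effect of such a sparsity filter can be sketched in a few lines of Python. This is a hypothetical analogue of tm's removeSparseTerms, not its actual implementation: a term (column) is dropped when its fraction of zero entries exceeds the chosen threshold.

```python
import numpy as np

def remove_sparse_terms(X, max_sparsity):
    """Drop columns of a document-term matrix X whose fraction of zero
    entries exceeds max_sparsity. A hypothetical helper analogous in
    spirit to tm::removeSparseTerms, not the tm implementation."""
    sparsity = (X == 0).mean(axis=0)  # per-term fraction of documents with count 0
    return X[:, sparsity <= max_sparsity]

# Toy document-term matrix: 4 documents x 3 terms.
X = np.array([[1, 0, 0],
              [2, 0, 1],
              [0, 0, 0],
              [1, 0, 0]])

# Keep only terms that are zero in at most 50% of documents;
# here the second and third terms are too sparse and are removed.
X_dense = remove_sparse_terms(X, 0.5)
```

Such hard thresholding discards rare terms entirely, which is exactly the loss of information the paper's tree-based aggregation is designed to avoid.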
2 Data source: http://times.cs.uiuc.edu/~wang296/Data/
3 Data source: http://nlp.stanford.edu/data/glove.6B.zip