Abstract
Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error for the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, we show that the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies for each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme that estimates this variance more accurately, and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail. Lastly, our analysis also shows that when producing confidence intervals for prediction accuracy with simple data splitting, one should not refit the model on the combined data, since this invalidates the confidence intervals. Supplementary materials for this article are available online.
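To make the abstract's point concrete, the following is a minimal sketch (not the paper's method) of the naive confidence-interval construction being critiqued: K-fold cross-validation for OLS on synthetic data, followed by the usual standard error that treats the n held-out errors as independent. All sizes, seeds, and variable names here are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic linear-model data.
n, p = 100, 5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Standard K-fold cross-validation with OLS refit on each training split.
K = 5
folds = np.array_split(rng.permutation(n), K)
errors = np.empty(n)  # one squared error per held-out point
for fold in folds:
    train = np.setdiff1d(np.arange(n), fold)
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    errors[fold] = (y[fold] - X[fold] @ coef) ** 2

# The naive interval treats the n errors as i.i.d.; because each point
# is reused for both training and testing, the errors are correlated and
# this standard error is too small -- the miscoverage the paper studies.
err_hat = errors.mean()
se_naive = errors.std(ddof=1) / np.sqrt(n)
ci = (err_hat - 1.96 * se_naive, err_hat + 1.96 * se_naive)
print(f"CV estimate: {err_hat:.3f}, naive 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The nested cross-validation scheme introduced in the paper replaces `se_naive` with a variance estimate that accounts for these correlations.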
Acknowledgments
The authors would like to acknowledge Frank Harrell for a seminar and personal correspondence alerting them to the miscoverage of cross-validation in the high-dimensional logistic regression model. We would like to thank Alexandre Bayle, Michael Celentano, Bradley Efron, Lester Mackey, Adam Smoulder, Ryan Tibshirani, Larry Wasserman, and three anonymous reviewers/editors for helpful comments on earlier versions of this manuscript.
Disclosure Statement
The authors report there are no competing interests to declare.
Notes
1 We thank an anonymous reviewer for feedback on this topic.
2 The width in is reported relative to the version of cross-validation that holds out two folds at a time, since this is what is computed internally during NCV. In table and elsewhere, we instead report widths relative to the usual K-fold CV.