Abstract
We study the validation of prediction rules, such as regression models and classification algorithms, through two out-of-sample strategies: cross-validation and accumulated prediction error. We use the framework of Efron (1983), in which measures of prediction error are defined as sample averages of expected errors, and show through exact finite-sample calculations that cross-validation and accumulated prediction error yield different smoothing parameter choices in nonparametric regression. The difference between the choices does not vanish as the sample size increases.
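To make the contrast concrete, the following is a minimal sketch (not the paper's exact setup) of the two strategies applied to bandwidth selection for a Nadaraya-Watson estimator: leave-one-out cross-validation predicts each observation from all the others, while accumulated prediction error predicts each observation sequentially from the preceding ones. The simulated data, kernel, and bandwidth grid are illustrative assumptions.

```python
import numpy as np

def nw_predict(x0, x, y, h):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel and bandwidth h
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.dot(w, y) / w.sum()

def loo_cv_error(x, y, h):
    # Leave-one-out cross-validation: predict each y_i from all other points
    errs = [(y[i] - nw_predict(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
            for i in range(len(x))]
    return np.mean(errs)

def accumulated_prediction_error(x, y, h, burn_in=10):
    # Accumulated prediction error: predict y_t from the first t-1 points only
    errs = [(y[t] - nw_predict(x[t], x[:t], y[:t], h)) ** 2
            for t in range(burn_in, len(x))]
    return np.mean(errs)

# Illustrative simulated regression data (assumed, not from the paper)
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)

# Each criterion selects the bandwidth minimizing its own error estimate
bandwidths = np.linspace(0.02, 0.3, 15)
h_cv = bandwidths[np.argmin([loo_cv_error(x, y, h) for h in bandwidths])]
h_ape = bandwidths[np.argmin([accumulated_prediction_error(x, y, h)
                              for h in bandwidths])]
print("CV bandwidth:", h_cv, " APE bandwidth:", h_ape)
```

The two criteria need not agree on a single sample; the paper's point, established by exact finite-sample calculation, is that the discrepancy in the selected smoothing parameter persists as n grows.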
Acknowledgments
We acknowledge the financial support of the Swedish Research Council (grant 70246501, the Ageing and Living Condition Program and the Swedish Initiative for Microdata Research in the Medical and Social Sciences). The simulations were run on facilities provided by the High Performance Computing Center North at Umeå University.