Abstract
Out-of-sample prediction is the acid test of predictive models, yet an independent test dataset is often not available for assessment of the prediction error. For this reason, out-of-sample performance is commonly estimated using data splitting algorithms such as cross-validation or the bootstrap. For quantitative outcomes, the ratio of variance explained to total variance can be summarized by the coefficient of determination or in-sample R2, which is easy to interpret and to compare across different outcome variables. As opposed to in-sample R2, out-of-sample R2 has not been well defined, and the variability of the out-of-sample R2 estimate has been largely ignored. Usually only its point estimate is reported, hampering formal comparison of the predictability of different outcome variables. Here we explicitly define out-of-sample R2 as a comparison of two predictive models, provide an unbiased estimator, and exploit recent theoretical advances on the uncertainty of data splitting estimates to provide a standard error for the out-of-sample R2. The performance of the estimators of R2 and its standard error is investigated in a simulation study. We demonstrate our new method by constructing confidence intervals and comparing models for prediction of quantitative Brassica napus and Zea mays phenotypes based on gene expression data. Our method is available in the R-package oosse.
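The core idea of an out-of-sample R2 as a comparison of two predictive models can be sketched as follows: both the candidate model and a mean-only benchmark are evaluated on held-out data, and R2 is one minus the ratio of their out-of-sample mean squared errors. The sketch below (in Python for illustration; the paper's actual method, including the unbiased estimator and standard error, is implemented in the R-package oosse) uses K-fold cross-validation and simple least-squares regression as the candidate model. All function names and the toy data are illustrative assumptions, not part of the oosse API.

```python
# Illustrative sketch (NOT the oosse implementation): estimate out-of-sample R2
# as 1 - MSE_model / MSE_mean, with both mean squared errors estimated by
# K-fold cross-validation. The candidate model is simple least-squares regression;
# the benchmark model predicts the training-set mean for every observation.
import random

def simple_ols(x, y):
    """Return (intercept, slope) of the least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def oos_r2_cv(x, y, k=5, seed=1):
    """K-fold CV point estimate of out-of-sample R2 = 1 - MSE_model / MSE_mean."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]        # K disjoint test folds
    sse_model, sse_mean = 0.0, 0.0
    for fold in folds:
        test = set(fold)
        train = [i for i in idx if i not in test]
        xt, yt = [x[i] for i in train], [y[i] for i in train]
        a, b = simple_ols(xt, yt)                # candidate model, fit on training fold
        ybar = sum(yt) / len(yt)                 # mean-only benchmark model
        for i in fold:                           # accumulate squared errors out-of-sample
            sse_model += (y[i] - (a + b * x[i])) ** 2
            sse_mean += (y[i] - ybar) ** 2
    return 1.0 - sse_model / sse_mean

# Toy data: y depends linearly on x plus unit-variance noise (true R2 = 0.8)
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [2 * xi + rng.gauss(0, 1) for xi in x]
print(oos_r2_cv(x, y))
```

The estimate lands close to the true value of 0.8; unlike a naive in-sample R2, it does not reward overfitting, because every squared error is computed on data not used for fitting.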
Supplementary Materials
R-package: The R-code for calculation of the out-of-sample R2 and its standard error is available in the R-package oosse from CRAN (https://cran.r-project.org/web/packages/oosse). (url)
R-code: R-code for running all simulations and analyses is available at https://github.com/maerelab/Rsquared. (url)
Supplementary material: Exhaustive simulation results, proofs, and software versions. (pdf)