Abstract
In the predictive modeling context, the credibility estimator is a point predictor: it is easy to compute and asymptotically avoids model misspecification risk, but it provides no quantification of inferential uncertainty. A Bayesian prediction interval quantifies prediction uncertainty, but it often requires expensive computation and remains subject to model misspecification risk even asymptotically. Is there a way to get the best of both worlds? Building on a powerful machine learning strategy called conformal prediction, this article proposes a method that converts the credibility estimator into a conformal prediction credibility interval. This interval contains the credibility estimator, is computationally simple, and guarantees finite-sample validity at a pre-assigned coverage level.
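To make the recipe concrete, the following is a minimal split-conformal sketch in Python. It is not the paper's exact construction: the function name `split_conformal_interval`, the absolute-residual nonconformity score, and the stand-in predictor `predict` are illustrative assumptions, with any fitted point predictor (such as a credibility estimator) playing the role of `predict`.

```python
import numpy as np

def split_conformal_interval(predict, x_cal, y_cal, x_new, alpha=0.10):
    """Split-conformal interval around a point predictor (illustrative sketch).

    `predict` is any fitted point predictor, e.g. a credibility estimator;
    (x_cal, y_cal) is a held-out calibration set exchangeable with x_new.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(np.asarray(y_cal) - predict(x_cal))
    n = len(scores)
    # Finite-sample-valid rank: the ceil((n + 1) * (1 - alpha))-th smallest
    # score; if this exceeds n, the exact guarantee requires an infinite
    # interval, and we clamp to the largest score as a practical shortcut.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, n) - 1]
    center = predict(x_new)
    # The interval is centered at the point prediction, so it contains the
    # point predictor by construction, and under exchangeability its coverage
    # is at least 1 - alpha in finite samples.
    return center - q, center + q

# Hypothetical usage with a stand-in fitted predictor:
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
predict = lambda x: 2.0 * x  # placeholder for a fitted credibility estimator
lo, hi = split_conformal_interval(predict, x[:100], y[:100], x[100:])
```

Because the interval is centered at the point prediction with a data-driven half-width, it inherits the two properties the abstract highlights: it always contains the point estimator, and the order-statistic quantile delivers the finite-sample coverage guarantee without any distributional model.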
Acknowledgments
Thanks are due to the co-editor and the two anonymous reviewers for many helpful comments and suggestions.
Notes
1 Note that is open in though it is not open in