Isotonic recalibration under a low signal-to-noise ratio

References

  • Ayer, M., Brunk, H. D., Ewing, G. M., Reid, W. T. & Silverman, E. (1955). An empirical distribution function for sampling with incomplete information. Annals of Mathematical Statistics 26, 641–647.
  • Balabdaoui, F., Durot, C. & Jankowski, H. (2019). Least squares estimation in the monotone single index model. Bernoulli 25, 3276–3310.
  • Barlow, R. E., Bartholomew, D. J., Bremner, J. M. & Brunk, H. D. (1972). Statistical inference under order restrictions. Wiley.
  • Barlow, R. E. & Brunk, H. D. (1972). The isotonic regression problem and its dual. Journal of the American Statistical Association 67(337), 140–147.
  • Blier-Wong, C., Lamontagne, L. & Marceau, E. (2022). A representation-learning approach for insurance pricing with images. Insurance Data Science Conference, 15–17 June 2022, Milan.
  • Brunk, H. D., Ewing, G. M. & Utz, W. R. (1957). Minimizing integrals in certain classes of monotone functions. Pacific Journal of Mathematics 7, 833–847.
  • Busing, F. M. T. A. (2022). Monotone regression: a simple and fast O(n) PAVA implementation. Journal of Statistical Software 102, 1–25.
  • de Leeuw, J., Hornik, K. & Mair, P. (2009). Isotone optimization in R: pool-adjacent-violators algorithm (PAVA) and active set methods. Journal of Statistical Software 32(5), 1–24.
  • Denuit, M., Charpentier, A. & Trufin, J. (2021). Autocalibration and Tweedie-dominance for insurance pricing with machine learning. Insurance: Mathematics & Economics 101, 485–497.
  • Dimitriadis, T., Dümbgen, L., Henzi, A., Puke, M. & Ziegel, J. (2022). Honest calibration assessment for binary outcome predictions. arXiv:2203.04065.
  • Dutang, C. & Charpentier, A. (2018). CASdatasets R Package Vignette. Reference Manual. Version 1.0-8, packaged 2018-05-20.
  • Gneiting, T. (2011). Making and evaluating point forecasts. Journal of the American Statistical Association 106(494), 746–762.
  • Gneiting, T. & Resin, J. (2022). Regression diagnostics meets forecast evaluation: conditional calibration, reliability diagrams and coefficient of determination. arXiv:2108.03210v3.
  • Henzi, A., Kleger, G.-R. & Ziegel, J. F. (2023). Distributional (single) index models. Journal of the American Statistical Association 118(541), 489–503.
  • Henzi, A., Ziegel, J. F. & Gneiting, T. (2021). Isotonic distributional regression. Journal of the Royal Statistical Society Series B: Statistical Methodology 83, 963–993.
  • Hosmer, D. W. & Lemeshow, S. (1980). Goodness of fit tests for the multiple logistic regression model. Communications in Statistics - Theory and Methods 9, 1043–1069.
  • Karush, W. (1939). Minima of functions of several variables with inequalities as side constraints [MSc thesis]. Department of Mathematics, University of Chicago.
  • Kingma, D. P. & Welling, M. (2019). An introduction to variational autoencoders. Foundations and Trends in Machine Learning 12(4), 307–392.
  • Krüger, F. & Ziegel, J. F. (2021). Generic conditions for forecast dominance. Journal of Business & Economic Statistics 39(4), 972–983.
  • Kruskal, J. B. (1964). Nonmetric multidimensional scaling: a numerical method. Psychometrika 29, 115–129.
  • Kuhn, H. W. & Tucker, A. W. (1951). Nonlinear programming. Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. University of California Press. P. 481–492.
  • Lindholm, M., Lindskog, F. & Palmquist, J. (2023). Local bias adjustment, duration-weighted probabilities, and automatic construction of tariff cells. Scandinavian Actuarial Journal, published online: 14 Feb 2023.
  • Loader, C. (1999). Local regression and likelihood. Springer.
  • Lundberg, S. M. & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30. Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (Eds.). Curran Associates. P. 4765–4774.
  • Mayer, M., Meier, D. & Wüthrich, M. V. (2023). SHAP for actuaries: explain any model. SSRN Manuscript ID 4389797.
  • Menon, A. K., Jiang, X., Vembu, S., Elkan, C. & Ohno-Machado, L. (2012). Predicting accurate probabilities with ranking loss. ICML'12: Proceedings of the 29th International Conference on Machine Learning. Omnipress, Madison, WI, United States. P. 659–666.
  • Meyer, M. & Woodroofe, M. (2000). On the degrees of freedom in shape-restricted regression. Annals of Statistics 28(4), 1083–1104.
  • Miles, R. E. (1959). The complete amalgamation into blocks, by weighted means, of a finite set of real numbers. Biometrika 46, 317–327.
  • Murphy, A. H. (1973). A new vector partition of the probability score. Journal of Applied Meteorology 12(4), 595–600.
  • Peiris, H., Jeong, H. & Kim, J.-K. (2023). Integration of traditional and telematics data for efficient insurance claims prediction. SSRN Manuscript ID 4344952.
  • Savage, L. J. (1971). Elicitation of personal probabilities and expectations. Journal of the American Statistical Association 66(336), 783–801.
  • Schervish, M. J. (1989). A general method of comparing probability assessors. The Annals of Statistics 17(4), 1856–1879.
  • Tasche, D. (2021). Calibrating sufficiently. Statistics: A Journal of Theoretical and Applied Statistics 55(6), 1356–1386.
  • Therneau, T. M. & Atkinson, E. J. (2022). An introduction to recursive partitioning using the RPART routines. R Vignettes, version of October 21, 2022. Rochester: Mayo Foundation.
  • Tibshirani, R. J., Hoefling, H. & Tibshirani, R. (2011). Nearly-isotonic regression. Technometrics 53(1), 54–61.
  • Wüthrich, M. V. (2023). Model selection with Gini indices under auto-calibration. European Actuarial Journal 13(1), 469–477.
  • Wüthrich, M. V. & Merz, M. (2023). Statistical foundations of actuarial learning and its applications. Springer Actuarial.
  • Zadrozny, B. & Elkan, C. (2002). Transforming classifier scores into accurate multiclass probability estimates. Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, New York, NY, United States. P. 694–699.