References
- Achille, A., and Soatto, S. (2018), “Emergence of Invariance and Disentanglement in Deep Representations,” Journal of Machine Learning Research, 19, 1–34.
- Breiman, L. (2001), “Statistical Modeling: The Two Cultures,” Statistical Science, 16, 199–231. DOI: 10.1214/ss/1009213726.
- Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984), Classification and Regression Trees, Wadsworth Statistics/Probability Series, Belmont, CA: Wadsworth Advanced Books and Software.
- Brinker, T. J., Hekler, A., Enk, A. H., Berking, C., Haferkamp, S., Hauschild, A., Weichenthal, M., Klode, J., Schadendorf, D., Holland-Letz, T., von Kalle, C., Fröhling, S., Schilling, B., and Utikal, J. S. (2019), “Deep Neural Networks Are Superior to Dermatologists in Melanoma Image Classification,” European Journal of Cancer, 119, 11–17. DOI: 10.1016/j.ejca.2019.05.023.
- Efron, B. (2009), “Empirical Bayes Estimates for Large-Scale Prediction Problems,” Journal of the American Statistical Association, 104, 1015–1028. DOI: 10.1198/jasa.2009.tm08523.
- ——— (2010), Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction (Vol. 1), Institute of Mathematical Statistics Monographs, Cambridge: Cambridge University Press.
- ——— (2011), “Tweedie’s Formula and Selection Bias,” Journal of the American Statistical Association, 106, 1602–1614.
- Efron, B., and Feldman, D. (1991), “Compliance as an Explanatory Variable in Clinical Trials,” Journal of the American Statistical Association, 86, 9–17. DOI: 10.1080/01621459.1991.10474996.
- Efron, B., and Hastie, T. (2016), Computer Age Statistical Inference: Algorithms, Evidence, and Data Science, Institute of Mathematical Statistics Monographs, Cambridge: Cambridge University Press.
- Hara, S., and Hayashi, K. (2016), “Making Tree Ensembles Interpretable,” arXiv no. 1606.05390.
- Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2019), “Surprises in High-Dimensional Ridgeless Least Squares Interpolation,” arXiv no. 1903.08560.
- Hastie, T., Tibshirani, R., and Friedman, J. (2009), The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics (2nd ed.), New York: Springer.
- Ikram, M. K., Xueling, S., Jensen, R. A., Cotch, M. F., Hewitt, A. W., Ikram, M. A., Wang, J. J., Klein, R., Klein, B. E., Breteler, M. M., and Cheung, N. (2010), “Four Novel Loci (19q13, 6q24, 12q24, and 5q14) Influence the Microcirculation In Vivo,” PLOS Genetics, 6, 1–12. DOI: 10.1371/annotation/841bfadf-85d1-4059-894f-2863d73fa963.
- Johnstone, I. M., and Silverman, B. W. (2004), “Needles and Straw in Haystacks: Empirical Bayes Estimates of Possibly Sparse Sequences,” Annals of Statistics, 32, 1594–1649. DOI: 10.1214/009053604000000030.
- Mediratta, R., Tazebew, A., Behl, R., Efron, B., Narasimhan, B., Teklu, A., Shehibo, A., Ayalew, M., and Kache, S. (2019), “Derivation and Validation of a Prognostic Score for Neonatal Mortality Upon Admission to a Neonatal Intensive Care Unit in Gondar, Ethiopia” (submitted).
- Mosteller, F., and Tukey, J. (1977), Data Analysis and Regression: A Second Course in Statistics, Addison-Wesley Series in Behavioral Science: Quantitative Methods, Reading, MA: Addison-Wesley.
- Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., and Yu, B. (2019), “Interpretable Machine Learning: Definitions, Methods, and Applications,” arXiv no. 1901.04592.
- Qian, J., Du, W., Tanigawa, Y., Aguirre, M., Tibshirani, R., Rivas, M. A., and Hastie, T. (2019), “A Fast and Flexible Algorithm for Solving the Lasso in Large-Scale and Ultrahigh-Dimensional Problems,” bioRxiv no. 630079.
- Schmidt, C. (2019), “Real-Time Flu Tracking,” Nature, 573, S58–S59. DOI: 10.1038/d41586-019-02755-6.
- Tibshirani, R. (1996), “Regression Shrinkage and Selection via the Lasso,” Journal of the Royal Statistical Society, Series B, 58, 267–288. DOI: 10.1111/j.2517-6161.1996.tb02080.x.
- Vellido, A., Martín-Guerrero, J. D., and Lisboa, P. J. G. (2012), “Making Machine Learning Models Interpretable,” in Proceedings of the 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2012), Bruges, Belgium, April 25–27, 2012, pp. 163–172.
- Wager, S., Hastie, T., and Efron, B. (2014), “Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife,” Journal of Machine Learning Research, 15, 1625–1651.
- Yu, B., and Kumbier, K. (2019), “Three Principles of Data Science: Predictability, Computability, and Stability (PCS),” arXiv no. 1901.08152.