References
- Andrews, D. W. (1997), “A Conditional Kolmogorov Test,” Econometrica: Journal of the Econometric Society, 65, 1097–1128. DOI: 10.2307/2171880.
- Bai, J. (2003), “Testing Parametric Conditional Distributions of Dynamic Models,” Review of Economics and Statistics, 85, 531–549. DOI: 10.1162/003465303322369704.
- Barber, R. F., Candès, E. J., Ramdas, A., and Tibshirani, R. J. (2019), “Predictive Inference with the Jackknife+,” arXiv preprint arXiv:1905.02928.
- Bates, S., Candès, E., Lei, L., Romano, Y., and Sesia, M. (2021), “Testing for Outliers with Conformal p-values,” arXiv preprint arXiv:2104.08279.
- Bickel, S., Brückner, M., and Scheffer, T. (2007), “Discriminative learning for differing training and test distributions,” in Proceedings of the 24th International Conference on Machine Learning, pp. 81–88. DOI: 10.1145/1273496.1273507.
- Bickel, S., Brückner, M., and Scheffer, T. (2009), “Discriminative Learning Under Covariate Shift,” Journal of Machine Learning Research, 10, 2137–2155.
- Bühlmann, P. (2020), “Invariance, Causality and Robustness,” Statistical Science, 35, 404–426. DOI: 10.1214/19-STS721.
- Cheng, K. F., and Chu, C.-K. (2004), “Semiparametric Density Estimation Under a Two-Sample Density Ratio Model,” Bernoulli, 10, 583–604. DOI: 10.3150/bj/1093265631.
- Chernozhukov, V., Wüthrich, K., and Zhu, Y. (2021), “Distributional Conformal Prediction,” Proceedings of the National Academy of Sciences, 118, e2107794118. DOI: 10.1073/pnas.2107794118.
- Corradi, V., and Swanson, N. R. (2006), “Bootstrap Conditional Distribution Tests in the Presence of Dynamic Misspecification,” Journal of Econometrics, 133, 779–806. DOI: 10.1016/j.jeconom.2005.06.013.
- Csurka, G. (2017), Domain Adaptation in Computer Vision Applications (Vol. 2), Cham: Springer.
- DiCiccio, C. J., DiCiccio, T. J., and Romano, J. P. (2020), “Exact Tests via Multiple Data Splitting,” Statistics & Probability Letters, 166, 108865. DOI: 10.1016/j.spl.2020.108865.
- Dua, D., and Graff, C. (2019), “UCI Machine Learning Repository.”
- Fan, Y., Li, Q., and Min, I. (2006), “A Nonparametric Bootstrap Test of Conditional Distributions,” Econometric Theory, 22, 587–613. DOI: 10.1017/S0266466606060294.
- Fedorova, V., Gammerman, A., Nouretdinov, I., and Vovk, V. (2012), “Plug-in Martingales for Testing Exchangeability On-line,” arXiv preprint arXiv:1204.3251.
- Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., and Schölkopf, B. (2009), “Covariate Shift by Kernel Mean Matching,” in Dataset Shift in Machine Learning, eds. J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, pp. 131–160, Cambridge, MA: MIT Press.
- Guan, L., and Tibshirani, R. (2019), “Prediction and Outlier Detection in Classification Problems,” arXiv preprint arXiv:1905.04396.
- Hall, P., and Hart, J. D. (1990), “Bootstrap Test for Difference between Means in Nonparametric Regression,” Journal of the American Statistical Association, 85, 1039–1049. DOI: 10.1080/01621459.1990.10474974.
- Härdle, W., and Marron, J. S. (1990), “Semiparametric Comparison of Regression Curves,” The Annals of Statistics, 18, 63–89. DOI: 10.1214/aos/1176347493.
- Kanamori, T., Hido, S., and Sugiyama, M. (2009), “A Least-Squares Approach to Direct Importance Estimation,” Journal of Machine Learning Research, 10, 1391–1445.
- Kim, B., Xu, C., and Barber, R. F. (2020), “Predictive Inference is Free with the Jackknife+-after-Bootstrap,” arXiv preprint arXiv:2002.09025.
- Kivaranovic, D., Johnson, K. D., and Leeb, H. (2020), “Adaptive, Distribution-Free Prediction Intervals for Deep Networks,” in International Conference on Artificial Intelligence and Statistics, PMLR, pp. 4346–4356.
- Kouw, W. M., and Loog, M. (2018), “An Introduction to Domain Adaptation and Transfer Learning,” arXiv preprint arXiv:1812.11806.
- Kuchibhotla, A. K., and Ramdas, A. K. (2019), “Nested Conformal Prediction and the Generalized Jackknife+,” arXiv preprint arXiv:1910.10562.
- Kulasekera, K. (1995), “Comparison of Regression Curves Using Quasi-Residuals,” Journal of the American Statistical Association, 90, 1085–1093. DOI: 10.1080/01621459.1995.10476611.
- Kulasekera, K., and Wang, J. (1997), “Smoothing Parameter Selection for Power Optimality in Testing of Regression Curves,” Journal of the American Statistical Association, 92, 500–511. DOI: 10.1080/01621459.1997.10474003.
- Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., and Wasserman, L. (2018), “Distribution-Free Predictive Inference for Regression,” Journal of the American Statistical Association, 113, 1094–1111. DOI: 10.1080/01621459.2017.1307116.
- Lei, J., Rinaldo, A., and Wasserman, L. (2015), “A Conformal Prediction Approach to Explore Functional Data,” Annals of Mathematics and Artificial Intelligence, 74, 29–43. DOI: 10.1007/s10472-013-9366-6.
- Lei, J., Robins, J., and Wasserman, L. (2013), “Distribution-Free Prediction Sets,” Journal of the American Statistical Association, 108, 278–287. DOI: 10.1080/01621459.2012.751873.
- Lei, J., and Wasserman, L. (2014), “Distribution-Free Prediction Bands for Non-parametric Regression,” Journal of the Royal Statistical Society, Series B, 76, 71–96. DOI: 10.1111/rssb.12021.
- Li, S., Sesia, M., Romano, Y., Candès, E., and Sabatti, C. (2021), “Searching for Robust Associations with a Multi-Environment Knockoff Filter,” Biometrika, 109, 611–629. DOI: 10.1093/biomet/asab055.
- Meinshausen, N., Meier, L., and Bühlmann, P. (2009), “P-values for High-Dimensional Regression,” Journal of the American Statistical Association, 104, 1671–1681. DOI: 10.1198/jasa.2009.tm08647.
- Neumeyer, N., and Dette, H. (2003), “Nonparametric Comparison of Regression Curves: An Empirical Process Approach,” The Annals of Statistics, 31, 880–920. DOI: 10.1214/aos/1056562466.
- Pan, S. J., and Yang, Q. (2009), “A Survey on Transfer Learning,” IEEE Transactions on Knowledge and Data Engineering, 22, 1345–1359. DOI: 10.1109/TKDE.2009.191.
- Pardo-Fernández, J. C., Jiménez-Gamero, M. D., and El Ghouch, A. (2015), “Tests for the Equality of Conditional Variance Functions in Nonparametric Regression,” Electronic Journal of Statistics, 9, 1826–1851. DOI: 10.1214/15-EJS1058.
- Peters, J., Bühlmann, P., and Meinshausen, N. (2016), “Causal Inference by Using Invariant Prediction: Identification and Confidence Intervals,” Journal of the Royal Statistical Society, Series B, 78, 947–1012. DOI: 10.1111/rssb.12167.
- Qin, J. (1998), “Inferences for Case-Control and Semiparametric Two-Sample Density Ratio Models,” Biometrika, 85, 619–630. DOI: 10.1093/biomet/85.3.619.
- Rinaldo, A., Wasserman, L., and G’Sell, M. (2019), “Bootstrapping and Sample Splitting for High-Dimensional, Assumption-Lean Inference,” The Annals of Statistics, 47, 3438–3469. DOI: 10.1214/18-AOS1784.
- Romano, Y., Patterson, E., and Candès, E. (2019), “Conformalized Quantile Regression,” in Advances in Neural Information Processing Systems (Vol. 32).
- Sesia, M., and Romano, Y. (2021), “Conformal Prediction using Conditional Histograms,” in Advances in Neural Information Processing Systems (Vol. 34).
- Shimodaira, H. (2000), “Improving Predictive Inference Under Covariate Shift by Weighting the Log-Likelihood Function,” Journal of Statistical Planning and Inference, 90, 227–244. DOI: 10.1016/S0378-3758(00)00115-4.
- Sollich, P. (2000), “Probabilistic Methods for Support Vector Machines,” in Advances in Neural Information Processing Systems, pp. 349–355.
- Sugiyama, M., and Kawanabe, M. (2012), Machine Learning in Non-stationary Environments: Introduction to Covariate Shift Adaptation, Cambridge, MA: MIT Press.
- Sugiyama, M., Krauledat, M., and Müller, K.-R. (2007), “Covariate Shift Adaptation by Importance Weighted Cross Validation,” Journal of Machine Learning Research, 8, 985–1005.
- Sugiyama, M., Nakajima, S., Kashima, H., Buenau, P. V., and Kawanabe, M. (2008), “Direct Importance Estimation with Model Selection and its Application to Covariate Shift Adaptation,” in Advances in Neural Information Processing Systems, pp. 1433–1440.
- Tibshirani, R. J., Barber, R. F., Candès, E., and Ramdas, A. (2019), “Conformal Prediction Under Covariate Shift,” in Advances in Neural Information Processing Systems, pp. 2526–2536.
- Tsuboi, Y., Kashima, H., Hido, S., Bickel, S., and Sugiyama, M. (2009), “Direct Density Ratio Estimation for Large-Scale Covariate Shift Adaptation,” Journal of Information Processing, 17, 138–155. DOI: 10.2197/ipsjjip.17.138.
- Vovk, V. (2019), “Testing Randomness,” arXiv preprint arXiv:1906.09256.
- Vovk, V. (2020), “Testing for Concept Shift Online,” arXiv preprint arXiv:2012.14246.
- Vovk, V., Gammerman, A., and Shafer, G. (2005), Algorithmic Learning in a Random World, New York: Springer.
- Vovk, V., Nouretdinov, I., and Gammerman, A. (2003), “Testing Exchangeability On-line,” in Proceedings of the 20th International Conference on Machine Learning, pp. 768–775.
- Vovk, V., Petej, I., Nouretdinov, I., Ahlberg, E., Carlsson, L., and Gammerman, A. (2021), “Retrain or Not Retrain: Conformal Test Martingales for Change-Point Detection,” in Conformal and Probabilistic Prediction and Applications, pp. 191–210, PMLR.
- Wasserman, L., and Roeder, K. (2009), “High Dimensional Variable Selection,” The Annals of Statistics, 37, 2178–2201. DOI: 10.1214/08-AOS646.
- Zheng, J. X. (2000), “A Consistent Test of Conditional Parametric Distributions,” Econometric Theory, 16, 667–691. DOI: 10.1017/S026646660016503X.
- Zhu, J., and Hastie, T. (2005), “Kernel Logistic Regression and the Import Vector Machine,” Journal of Computational and Graphical Statistics, 14, 185–205. DOI: 10.1198/106186005X25619.