References
- Agresti, A. (2015). Foundations of linear and generalized linear models. John Wiley & Sons.
- Alvarez-Melis, D., & Jaakkola, T. S. (2018). On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049.
- Apley, D. W., & Zhu, J. (2020). Visualizing the effects of predictor variables in black box supervised learning models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(4), 1059–1086. https://doi.org/10.1111/rssb.12377
- Beyer, K., Goldstein, J., Ramakrishnan, R., & Shaft, U. (1999). When is “nearest neighbor” meaningful? In International Conference on Database Theory (pp. 217–235). Springer.
- Billingsley, P. (2008). Probability and measure. John Wiley & Sons.
- Brame, R., Paternoster, R., Mazerolle, P., & Piquero, A. (1998). Testing for the equality of maximum-likelihood regression coefficients between two independent equations. Journal of Quantitative Criminology, 14(3), 245–261. https://doi.org/10.1023/A:1023030312801
- Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324
- Craven, M., & Shavlik, J. W. (1996). Extracting tree-structured representations of trained networks. In Advances in neural information processing systems (pp. 24–30).
- European Banking Authority (EBA). (2020). Report on big data and advanced analytics. https://eba.europa.eu/file/609786/
- Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29(5), 1189–1232. https://doi.org/10.1214/aos/1013203451
- Goldstein, A., Kapelner, A., Bleich, J., & Pitkin, E. (2015). Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. Journal of Computational and Graphical Statistics, 24(1), 44–65. https://doi.org/10.1080/10618600.2014.907095
- Gosiewska, A., & Biecek, P. (2019). iBreakDown: Uncertainty of model explanations for non-additive predictive models. arXiv preprint arXiv:1903.11420.
- Greene, W. H. (2003). Econometric analysis. Pearson Education India.
- Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 93. https://doi.org/10.1145/3236009
- Hall, P., & Gill, N. (2018). An introduction to machine learning interpretability - Dataiku Version. O’Reilly Media, Incorporated.
- Hand, D. J. (2001). Modelling consumer credit risk. IMA Journal of Management Mathematics, 12(2), 139–155. https://doi.org/10.1093/imaman/12.2.139
- Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference, and prediction. Springer Science & Business Media.
- High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
- Hoerl, A. E., & Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67. https://doi.org/10.1080/00401706.1970.10488634
- Johnston, J., & DiNardo, J. (1972). Econometric methods (Vol. 2).
- Kingston, J. (2017). Using artificial intelligence to support compliance with the general data protection regulation. Artificial Intelligence and Law, 25(4), 429–443. https://doi.org/10.1007/s10506-017-9206-9
- Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523), 1094–1111. https://doi.org/10.1080/01621459.2017.1307116
- Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017, Long Beach, United States, 4–9 December (pp. 4765–4774).
- Molnar, C. (2020a). Interpretable machine learning. Lulu.com.
- Molnar, C. (2020b). Limitations of interpretable machine learning methods. https://compstat-lmu.github.io/iml_methods_limitations/.
- Moscatelli, M., Parlapiano, F., Narizzano, S., & Viggiano, G. (2020). Corporate default forecasting with machine learning. Expert Systems with Applications, 161, 113567. https://doi.org/10.1016/j.eswa.2020.113567
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, United States, 2–7 February 2018. AAAI.
- Shankaranarayana, S. M., & Runje, D. (2019). ALIME: Autoencoder based approach for local interpretability. In International Conference on Intelligent Data Engineering and Automated Learning (pp. 454–463). Springer.
- van Wieringen, W. N. (2015). Lecture notes on ridge regression. arXiv preprint arXiv:1509.09169.
- Visani, G., Bagli, E., & Chesani, F. (2020). OptiLIME: Optimized LIME explanations for diagnostic computer algorithms. In 2020 International Conference on Information and Knowledge Management Workshops, CIKMW 2020, Galway, Ireland, 19–23 October 2020.
- Visani, G., Chesani, F., Bagli, E., Capuzzo, D., & Poluzzi, A. (2019). Explanations of machine learning predictions: A mandatory step for its application to operational processes. https://arxiv.org/pdf/2012.15103.pdf.
- Zafar, M. R., & Khan, N. M. (2019). DLIME: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. In Proceedings of Anchorage ’19: ACM SIGKDD Workshop on Explainable AI/ML (XAI) for Accountability, Fairness, and Transparency.
- Zhou, Y., & Hooker, G. (2016). Interpreting models via single tree approximation. arXiv preprint arXiv:1610.09036.