References
- Amer, M., & Goldstein, M. (2012). Nearest-neighbor and clustering based anomaly detection algorithms for RapidMiner [Paper presentation]. Proceedings of the 3rd RapidMiner Community Meeting and Conference (pp. 1–12).
- Banasik, J., & Crook, J. (2005). Credit scoring, augmentation and lean models. Journal of the Operational Research Society, 56(9), 1072–1081. https://doi.org/10.1057/palgrave.jors.2602017
- Banasik, J., Crook, J., & Thomas, L. (2003). Sample selection bias in credit scoring models. Journal of the Operational Research Society, 54(8), 822–832. https://doi.org/10.1057/palgrave.jors.2601578
- Bicego, M., & Figueiredo, M. A. (2009). Soft clustering using weighted one-class support vector machines. Pattern Recognition, 42(1), 27–32. https://doi.org/10.1016/j.patcog.2008.07.004
- Breunig, M. M., Kriegel, H. P., Ng, R. T., & Sander, J. (2000, May). LOF: Identifying density-based local outliers [Paper presentation]. Proceedings of the 2000 ACM SIGMOD International Conference on Management of data (pp. 93–104).
- Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system [Paper presentation]. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785–794).
- Crook, J., & Banasik, J. (2004). Does reject inference really improve the performance of application scoring models? Journal of Banking & Finance, 28(4), 857–874. https://doi.org/10.1016/S0378-4266(03)00203-6
- Hand, D. J., & Henley, W. E. (1993). Can reject inference ever work? IMA Journal of Management Mathematics, 5(1), 45–55. https://doi.org/10.1093/imaman/5.1.45
- Hosmer, D. W., Jr., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression (Vol. 398). Wiley.
- Hsieh, N. C. (2005). Hybrid mining approach in the design of credit scoring models. Expert Systems with Applications, 28(4), 655–665. https://doi.org/10.1016/j.eswa.2004.12.022
- Karlos, S., Fazakis, N., Kotsiantis, S., & Sgarbas, K. (2015, September). Self-train LogitBoost for semi-supervised learning. In International Conference on Engineering Applications of Neural Networks (pp. 139–148). Springer.
- Khan, S. S., & Madden, M. G. (2009, August). A survey of recent trends in one class classification. In Irish Conference on Artificial Intelligence and Cognitive Science (pp. 188–197). Springer.
- Kim, A., & Cho, S. B. (2019). An ensemble semi-supervised learning method for predicting defaults in social lending. Engineering Applications of Artificial Intelligence, 81, 193–199. https://doi.org/10.1016/j.engappai.2019.02.014
- Li, Z., Tian, Y., Li, K., Zhou, F., & Yang, W. (2017). Reject inference in credit scoring using semi-supervised support vector machines. Expert Systems with Applications, 74, 105–114. https://doi.org/10.1016/j.eswa.2017.01.011
- Liu, Y., Li, X., & Zhang, Z. (2020). A new approach in reject inference of using ensemble learning based on global semi-supervised framework. Future Generation Computer Systems, 109, 382–391. https://doi.org/10.1016/j.future.2020.03.047
- Mancisidor, R. A., Kampffmeyer, M., Aas, K., & Jenssen, R. (2020). Deep generative models for reject inference in credit scoring. Knowledge-Based Systems, 196, 105758. https://doi.org/10.1016/j.knosys.2020.105758
- Paleologo, G., Elisseeff, A., & Antonini, G. (2010). Subagging for credit scoring models. European Journal of Operational Research, 201(2), 490–499. https://doi.org/10.1016/j.ejor.2009.03.008
- Rassokhin, D. N., & Agrafiotis, D. K. (2000). Kolmogorov-Smirnov statistic and its application in library design. Journal of Molecular Graphics & Modelling, 18(4–5), 368–382. https://doi.org/10.1016/s1093-3263(00)00063-2
- Refaat, M. (2011). Credit risk scorecard: Development and implementation using SAS. Lulu.com.
- Shen, F., Zhao, X., & Kou, G. (2020). Three-stage reject inference learning framework for credit scoring using unsupervised transfer learning and three-way decision theory. Decision Support Systems, 137, 113366. https://doi.org/10.1016/j.dss.2020.113366
- Tax, D. M., & Duin, R. P. (2004). Support vector data description. Machine Learning, 54(1), 45–66. https://doi.org/10.1023/B:MACH.0000008084.60811.49
- Tian, Y., Yong, Z., & Luo, J. (2018). A new approach for reject inference in credit scoring using kernel-free fuzzy quadratic surface support vector machines. Applied Soft Computing, 73, 96–105. https://doi.org/10.1016/j.asoc.2018.08.021
- Tsai, C. F., & Chen, M. L. (2010). Credit rating by hybrid machine learning techniques. Applied Soft Computing, 10(2), 374–380. https://doi.org/10.1016/j.asoc.2009.08.003
- Xia, Y. (2019). A novel reject inference model using outlier detection and gradient boosting technique in peer-to-peer lending. IEEE Access, 7, 92893–92907. https://doi.org/10.1109/ACCESS.2019.2927602
- Xia, Y., Yang, X., & Zhang, Y. (2018). A rejection inference technique based on contrastive pessimistic likelihood estimation for P2P lending. Electronic Commerce Research and Applications, 30, 111–124. https://doi.org/10.1016/j.elerap.2018.05.011
- Zhu, X., & Goldberg, A. B. (2009). Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1), 1–130. https://doi.org/10.2200/S00196ED1V01Y200906AIM006