References
- AICPA Staff. 2014. Reimagining auditing in a wired world. Zurich: University of Zurich, Department of Informatics.
- Bradley, A. 1997. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition 30 (7):1145–59. doi:10.1016/S0031-3203(96)00142-2.
- Chambers, J. M. 1977. Computational methods for data analysis, 152–89. New York: Wiley.
- Cosserat, G. 2009. Accepting the engagement and planning the audit. In Modern auditing, 73436. Chichester: John Wiley & Sons.
- Dietterich, T. G. 2000. Ensemble methods in machine learning. In International workshop on multiple classifier systems, 1–15. Berlin, Heidelberg: Springer.
- Fanning, K. 1998. Neural network detection of management fraud using published financial data. International Journal of Intelligent Systems in Accounting, Finance & Management 7 (1):21–41. doi:10.1002/(SICI)1099-1174(199803)7:1<21::AID-ISAF138>3.0.CO;2-K.
- Green, B. 1997. Assessing the risk of management fraud through neural network technology. Auditing: A Journal of Practice & Theory 16 (1):14.
- Hooda, N. 2018. Fraudulent firm classification: A case study of an external audit. Applied Artificial Intelligence 32 (1):49–51. doi:10.1080/08839514.2018.1451032.
- Iba, W., and P. Langley. 1992. Induction of one-level decision trees. In Proceedings of the ninth international conference on machine learning, 233–40. Aberdeen, Scotland.
- Keerthi, S. S., and E. G. Gilbert. 2002. Convergence of a generalized SMO algorithm for SVM classifier design. Machine Learning 46 (1–3):351–60. doi:10.1023/A:1012431217818.
- Kotsiantis, S. 2006. Forecasting fraudulent financial statements using data mining. International Journal of Computational Intelligence 3 (2):104–10.
- Liaw, A., and M. Wiener. 2002. Classification and regression by randomForest. R News 2 (3):18–22.
- Behzadian, M. 2012. A state-of-the-art survey of TOPSIS applications. Expert Systems with Applications 39 (17):13051–69. doi:10.1016/j.eswa.2012.05.056.
- Qi, Y. 2012. Random forest for bioinformatics. In Ensemble machine learning: Methods and applications, 307–23. Springer.
- Quinlan, J. R. 1986. Induction of decision trees. Machine Learning 1 (1):81–106. doi:10.1007/BF00116251.
- Ravisankar, P. 2011. Detection of financial statement fraud and feature selection using data mining techniques. Decision Support Systems 50 (2):491–500. doi:10.1016/j.dss.2010.11.006.
- Rish, I. 2001. An empirical study of the naive Bayes classifier. In IJCAI 2001 workshop on empirical methods in artificial intelligence 3 (22):41–46.
- Quinlan, J. R. 1996. Improved use of continuous attributes in C4.5. Journal of Artificial Intelligence Research 4:77–90. doi:10.1613/jair.279.
- Russell, S., and P. Norvig. 2003. Artificial intelligence: A modern approach, 134–81. Upper Saddle River, NJ: Prentice Hall.
- Schapire, R. E. 1999. A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence 14 (5):771–80.
- Spathis, C. T. 2002. Detecting false financial statements using published data: Some evidence from Greece. Managerial Auditing Journal 17 (4):179–91. doi:10.1108/02686900210424321.
- Triantaphyllou, E. 2000. Multi-criteria decision making methods. In Multi-criteria decision making methods: A comparative study, 5–21. Boston, MA: Springer.
- Wolpert, D. H., and W. G. Macready. 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1):67–82. doi:10.1109/4235.585893.
- Zhang, C., and Y. Ma. 2012. Ensemble machine learning: Methods and applications, 11–34. Springer.