References
- Adler, M. (1985). Stardom and talent. American Economic Review, 75(1), 208–212.
- Apostolou, K., & Tjortjis, C. (2019). Sports analytics algorithms for performance prediction. In 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Patras, Greece. https://doi.org/10.1109/IISA.2019.8900754
- Batista, G. E. A. P. A., Prati, R. C., & Monard, M. C. (2004). A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explorations Newsletter, 6(1), 20–29. https://doi.org/10.1145/1007730.1007735
- Batuwita, R., & Palade, V. (2009). microPred: Effective classification of pre-miRNAs for human miRNA gene prediction. Bioinformatics, 25(8), 989–995. https://doi.org/10.1093/bioinformatics/btp107
- Berri, D. J., & Schmidt, M. B. (2006). On the road with the National Basketball Association’s superstar externality. Journal of Sports Economics, 7(4), 347–358. https://doi.org/10.1177/1527002505275094
- Blagus, R., & Lusa, L. (2013). SMOTE for high-dimensional class-imbalanced data. BMC Bioinformatics, 14(1), 106. https://doi.org/10.1186/1471-2105-14-106
- Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992, July 27 - 29). A training algorithm for optimal margin classifiers. Proceedings of the 5th Annual Workshop on Computational Learning Theory (COLT’92), Pittsburgh.
- Bottou, L., & Bousquet, O. (2012). The tradeoffs of large scale learning. In S. Sra, S. Nowozin, & S. J. Wright (Eds.), Optimization for machine learning (pp. 351–368). MIT Press. ISBN 978-0-262-01646-9.
- Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123–140. https://doi.org/10.1007/BF00058655
- Bromley, J., Bentz, J. W., Bottou, L., Guyon, I., LeCun, Y., Moore, C., Säckinger, E., & Shah, R. (1993). Signature verification using a “siamese” time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4), 669–688.
- Brownlee, J. (2016). XGBoost with Python: Gradient boosted trees with XGBoost and Scikit-Learn (pp. 10–11). Machine Learning Mastery.
- Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 121–167. https://doi.org/10.1023/A:1009715923555
- Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321–357. https://doi.org/10.1613/jair.953
- Chawla, N. V., Japkowicz, N., & Kolcz, A. (2004). Editorial: Special issue on learning from imbalanced data sets. ACM SIGKDD Explorations Newsletter, 6(1), 1–6. https://doi.org/10.1145/1007730.1007733
- Chen, C., Liaw, A., & Breiman, L. (2004). Using random forest to learn imbalanced data. University of California, Berkeley, 110, 1–12.
- Cieslak, D. A., Chawla, N. W., & Striegel, A. (2006). Combating imbalance in network intrusion datasets. In Proceedings of the IEEE International Conference on Granular Computing, Atlanta, Georgia, USA.
- Colet, E., & Parker, J. (1997). Advanced Scout: Data mining and knowledge discovery in NBA data. Data Mining and Knowledge Discovery, 1(1), 121–125. https://doi.org/10.1023/A:1009782106822
- Drucker, H., Burges, C. J., Kaufman, L., Smola, A., & Vapnik, V. (1997). Support vector regression machines. Advances in Neural Information Processing Systems, 9, 155–161.
- Drummond, C., & Holte, R. C. (2003). C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. In Workshop on Learning from Imbalanced Datasets II (Vol. 11). Citeseer.
- Fallahi, A., & Jafari, S. (2011). An expert system for detection of breast cancer using data pre-processing and Bayesian network. International Journal of Advanced Science and Technology, 34, 65–70.
- Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139. https://doi.org/10.1006/jcss.1997.1504
- Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., & Herrera, F. (2011). A review on ensembles for the class imbalance problem: Bagging-, boosting-, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(4), 463–484. https://doi.org/10.1109/TSMCC.2011.2161285
- Guo, H., & Viktor, H. L. (2004). Learning from imbalanced data sets with boosting and data generation. ACM SIGKDD Explorations Newsletter, 6(1), 30–39. https://doi.org/10.1145/1007730.1007736
- He, H., & Garcia, E. A. (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263–1284. https://doi.org/10.1109/TKDE.2008.239
- Hosmer, D. W., & Lemeshow, S. (2000). Applied logistic regression. Wiley-Interscience.
- Hothorn, T., Hornik, K., & Zeileis, A. (2006). Unbiased recursive partitioning: A conditional inference framework. Journal of Computational and Graphical Statistics, 15(3), 651–674. https://doi.org/10.1198/106186006X133933
- Hulse, J. V., Khoshgoftaar, T. M., & Napolitano, A. (2007). Experimental perspectives on learning from imbalanced data. In Proceedings of the 24th International Conference on Machine Learning (ICML 2007) (pp. 935–942). ACM.
- Humphreys, B. R., & Johnson, C. (2020). The effect of superstars on game attendance: Evidence from the NBA. Journal of Sports Economics, 21(2), 152–175. https://doi.org/10.1177/1527002519885441
- Kahn, J. (2003). Neural network prediction of NFL football games.
- Kubat, M., & Matwin, S. (1997). Addressing the curse of imbalanced data sets: One-sided sampling. In Proceedings of the 14th International Conference on Machine Learning (pp. 179–186). Morgan Kaufmann.
- Langley, P., Iba, W., & Thompson, K. (1992). An analysis of Bayesian classifiers. In Proceedings of the Tenth National Conference on Artificial Intelligence (pp. 223–228). AAAI Press. https://doi.org/10.5555/1867135.1867170
- Leung, C. K., & Joseph, K. W. (2014). Sports data mining: Predicting results for the college football games. Procedia Computer Science, 35, 710–719. https://doi.org/10.1016/j.procs.2014.08.153
- Ling, C. X., & Li, C. (1998). Data mining for direct marketing: Problems and solutions. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining (KDD’98). AAAI Press.
- Madhavan, V. (2016). Predicting NBA game outcomes with hidden Markov models. University of California, Berkeley.
- Maher, M. J. (1982). Modelling association football scores. Statistica Neerlandica, 36(3), 109–118. https://doi.org/10.1111/j.1467-9574.1982.tb00782.x
- Marcus, G. (2018). Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631.
- McCabe, A., & Trevathan, J. (2008). Artificial intelligence in sports prediction. In Fifth International Conference on Information Technology: New Generations (ITNG 2008) (pp. 1194–1197). https://doi.org/10.1109/ITNG.2008.203
- Miljković, D., Gajić, L., Kovačević, A., & Konjović, Z. (2010). The use of data mining for basketball matches outcomes prediction. In IEEE 8th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia. https://doi.org/10.1109/SISY.2010.5647440
- Nguyen, N., Ma, B., & Hu, J. (2020). Predicting National Basketball Association players performance and popularity: A data mining approach. In Computational Collective Intelligence (ICCCI 2020), Da Nang, Vietnam, Nov 28–Dec 3. Lecture Notes in Computer Science, Vol. 12496. Springer, Cham. https://doi.org/10.1007/978-3-030-63007-2_23
- Pifer, N. D., Mak, J., Bae, W., & Zhang, J. (2015). Examining the relationship between star player characteristics and brand equity in professional sport teams. Marketing Management Journal, 25, 88–106.
- Forbes Press Releases. (2019, February 6). Forbes releases 21st annual NBA team valuations. Forbes. Retrieved May 26, 2021, from www.forbes.com/sites/forbespr/2019/02/06/forbes-releases-21st-annual-nba-team-valuations/?sh=72543d3511a7
- Rotshtein, A. P., Posner, M., & Rakityanskaya, A. B. (2005). Football predictions based on a fuzzy model with genetic and neural tuning. Cybernetics and Systems Analysis, 41(4). https://doi.org/10.1007/s10559-005-0098-4
- Schwenk, H., & Bengio, Y. (1997). Artificial Neural Networks — ICANN’97 (Lecture Notes in Computer Science, Vol. 1327). Springer. https://doi.org/10.1007/BFb0020278
- Thabtah, F., Zhang, L., & Abdelhamid, N. (2019). NBA game result prediction using feature analysis and machine learning. Annals of Data Science, 6(1), 103. https://doi.org/10.1007/s40745-018-00189-x
- Tichy, W. (2016). Changing the Game: ‘Dr. Dave’ Schrader.
- Wirth, R., & Hipp, J. (2000, April). CRISP-DM: Towards a standard process model for data mining. In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, Citeseer.
- Yan, X., & Su, X. (2009). Linear regression analysis: Theory and computing (pp. 2–3). World Scientific. https://doi.org/10.1142/6986
- Yanofsky, N. (2015). Probably approximately correct: Nature’s algorithms for learning and prospering in a complex world. Common Knowledge, 21(2), 340. https://doi.org/10.1215/0961754X-2872666
- Zimmermann, A., Moorthy, S., & Shi, Z. (2013). Predicting college basketball match outcomes using machine learning techniques: some results and lessons learned. arXiv preprint arXiv:1310.3607.
- Zuccolotto, P., Manisera, M., & Sandri, M. (2018). Big data analytics for modelling scoring probability in basketball: The effect of shooting under high-pressure conditions. International Journal of Sports Science & Coaching, 13(4), 569–589. https://doi.org/10.1177/1747954117737492