Original Articles

Statistical, machine learning and deep learning forecasting methods: Comparisons and ways forward

Pages 840-859 | Received 24 Jul 2020, Accepted 17 Aug 2022, Published online: 05 Sep 2022

References

  • Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., … Zheng, X. (2016). TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265–283). USENIX Association.
  • Adya, M., & Collopy, F. (1998). How effective are neural networks at forecasting and prediction? A review and evaluation. Journal of Forecasting, 17(5–6), 481–495. https://doi.org/10.1002/(SICI)1099-131X(1998090)17:5/6<481::AID-FOR709>3.0.CO;2-Q
  • Ahmed, N. K., Atiya, A. F., Gayar, N. E., & El-Shishiny, H. (2010). An empirical comparison of machine learning models for time series forecasting. Econometric Reviews, 29(5-6), 594–621. https://doi.org/10.1080/07474938.2010.481556
  • Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S. S., Salinas, D., Schulz, J., Stella, L., Türkmen, A. C., & Wang, Y. (2019). GluonTS: Probabilistic time series models in Python. CoRR, abs/1906.05264.
  • Bandara, K., Bergmeir, C., & Smyl, S. (2020). Forecasting across time series databases using recurrent neural networks on groups of similar series: A clustering approach. Expert Systems with Applications, 140, 112896. https://doi.org/10.1016/j.eswa.2019.112896
  • Barker, J. (2020). Machine learning in M4: What makes a good unstructured model? International Journal of Forecasting, 36(1), 150–155. https://doi.org/10.1016/j.ijforecast.2019.06.001
  • Ben Taieb, S., Bontempi, G., Atiya, A. F., & Sorjamaa, A. (2012). A review and comparison of strategies for multi-step ahead time series forecasting based on the NN5 forecasting competition. Expert Systems with Applications, 39(8), 7067–7083. https://doi.org/10.1016/j.eswa.2012.01.039
  • Ben Taieb, S., Sorjamaa, A., & Bontempi, G. (2010). Multiple-output modelling for multi-step-ahead time series forecasting. Neurocomputing, 73, 1950–1957.
  • Bergmeir, C., Hyndman, R. J., & Benítez, J. M. (2016). Bagging exponential smoothing methods using STL decomposition and Box–Cox transformation. International Journal of Forecasting, 32(2), 303–312. https://doi.org/10.1016/j.ijforecast.2015.07.002
  • Bergstra, J. S., Bardenet, R., Bengio, Y., & Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Advances in neural information processing systems (pp. 2546–2554).
  • Bergstra, J., Komer, B., Eliasmith, C., Yamins, D., & Cox, D. (2015). Hyperopt: A Python library for model selection and hyperparameter optimization. Computational Science & Discovery, 8(1), 014008. https://doi.org/10.1088/1749-4699/8/1/014008
  • Bojer, C. S., & Meldgaard, J. P. (2021). Kaggle forecasting competitions: An overlooked learning opportunity. International Journal of Forecasting, 37(2), 587–603. https://doi.org/10.1016/j.ijforecast.2020.07.007
  • Box, G., & Jenkins, G. (1970). Time series analysis: Forecasting and control. Holden-Day.
  • Chae, Y. T., Horesh, R., Hwang, Y., & Lee, Y. M. (2016). Artificial neural network model for forecasting sub-hourly electricity usage in commercial buildings. Energy and Buildings, 111, 184–194. https://doi.org/10.1016/j.enbuild.2015.11.045
  • Chatfield, C. (1993). Neural networks: Forecasting breakthrough or passing fad? International Journal of Forecasting, 9(1), 1–3. https://doi.org/10.1016/0169-2070(93)90043-M
  • Claeskens, G., Magnus, J. R., Vasnev, A. L., & Wang, W. (2016). The forecast combination puzzle: A simple theoretical explanation. International Journal of Forecasting, 32(3), 754–762. https://doi.org/10.1016/j.ijforecast.2015.12.005
  • Crone, S. F., Hibon, M., & Nikolopoulos, K. (2011). Advances in forecasting with neural networks? Empirical evidence from the NN3 competition on time series prediction. International Journal of Forecasting, 27(3), 635–660. https://doi.org/10.1016/j.ijforecast.2011.04.001
  • Dantas, T. M., & Cyrino Oliveira, F. L. (2018). Improving time series forecasting: An approach combining bootstrap aggregation, clusters and exponential smoothing. International Journal of Forecasting, 34(4), 748–761. https://doi.org/10.1016/j.ijforecast.2018.05.006
  • Dekker, M., van Donselaar, K., & Ouwehand, P. (2004). How to use aggregation and combined forecasting to improve seasonal demand forecasts. International Journal of Production Economics, 90(2), 151–167. https://doi.org/10.1016/j.ijpe.2004.02.004
  • Deng, L. (2014). A tutorial survey of architectures, algorithms, and applications for deep learning – Erratum. APSIPA Transactions on Signal and Information Processing, 3(1), 1-29. https://doi.org/10.1017/ATSIP.2013.9
  • Faust, O., Hagiwara, Y., Hong, T. J., Lih, O. S., & Acharya, U. R. (2018). Deep learning for healthcare applications based on physiological signals: A review. Computer Methods and Programs in Biomedicine, 161, 1–13.
  • Fildes, R., & Petropoulos, F. (2015). Simple versus complex selection rules for forecasting many time series. Journal of Business Research, 68(8), 1692–1701. https://doi.org/10.1016/j.jbusres.2015.03.028
  • Fiorucci, J. A., Pellegrini, T. R., Louzada, F., Petropoulos, F., & Koehler, A. B. (2016). Models for optimising the theta method and their relationship to state space models. International Journal of Forecasting, 32(4), 1151–1161. https://doi.org/10.1016/j.ijforecast.2016.02.005
  • Fischer, T., & Krauss, C. (2018). Deep learning with long short-term memory networks for financial market predictions. European Journal of Operational Research, 270(2), 654–669. https://doi.org/10.1016/j.ejor.2017.11.054
  • Fry, C., & Brundage, M. (2020). The M4 forecasting competition – A practitioner’s view. International Journal of Forecasting, 36(1), 156–160. https://doi.org/10.1016/j.ijforecast.2019.02.013
  • Gardner, E. S. (2006). Exponential smoothing: The state of the art—Part II. International Journal of Forecasting, 22(4), 637–666. https://doi.org/10.1016/j.ijforecast.2006.03.005
  • Hamzaçebi, C., Akay, D., & Kutay, F. (2009). Comparison of direct and iterative artificial neural network forecast approaches in multi-periodic time series forecasting. Expert Systems with Applications, 36(2), 3839–3844. https://doi.org/10.1016/j.eswa.2008.02.042
  • Hyndman, R. J. (2020). A brief history of forecasting competitions. International Journal of Forecasting, 36(1), 7–14. https://doi.org/10.1016/j.ijforecast.2019.03.015
  • Hyndman, R., & Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for R. Journal of Statistical Software, 26, 1–22.
  • Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679–688. https://doi.org/10.1016/j.ijforecast.2006.03.001
  • Hyndman, R. J., Koehler, A. B., Snyder, R. D., & Grose, S. (2002). A state space framework for automatic forecasting using exponential smoothing methods. International Journal of Forecasting, 18(3), 439–454. https://doi.org/10.1016/S0169-2070(01)00110-8
  • Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., & Callot, L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting, 36(1), 167–177. https://doi.org/10.1016/j.ijforecast.2019.05.008
  • Jin, H., Song, Q., & Hu, X. (2019). Auto-Keras: An efficient neural architecture search system [Paper presentation]. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1946–1956). ACM.
  • Kang, Y., Hyndman, R. J., & Smith-Miles, K. (2017). Visualising forecasting algorithm performance using time series instance spaces. International Journal of Forecasting, 33(2), 345–358. https://doi.org/10.1016/j.ijforecast.2016.09.004
  • Koning, A. J., Franses, P. H., Hibon, M., & Stekler, H. (2005). The M3 competition: Statistical tests of the results. International Journal of Forecasting, 21(3), 397–409. https://doi.org/10.1016/j.ijforecast.2004.10.003
  • Kourentzes, N., Barrow, D. K., & Crone, S. F. (2014). Neural network ensemble operators for time series forecasting. Expert Systems with Applications, 41(9), 4235–4244. https://doi.org/10.1016/j.eswa.2013.12.011
  • Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097–1105.
  • Li, F.-F., & Li, J. (2018). Cloud AutoML: Making AI accessible to every business.
  • Makridakis, S. (1993). Accuracy measures: Theoretical and practical concerns. International Journal of Forecasting, 9(4), 527–529. https://doi.org/10.1016/0169-2070(93)90079-3
  • Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006
  • Makridakis, S., & Hibon, M. (2000). The M3-Competition: Results, conclusions and implications. International Journal of Forecasting, 16(4), 451–476. https://doi.org/10.1016/S0169-2070(00)00057-1
  • Makridakis, S., Hibon, M., & Moser, C. (1979). Accuracy of forecasting: An empirical investigation (with discussion). Journal of the Royal Statistical Society A, 142(2), 97–145. https://doi.org/10.2307/2345077
  • Makridakis, S., Hyndman, R. J., & Petropoulos, F. (2020a). Forecasting in social settings: The state of the art. International Journal of Forecasting, 36(1), 15–28. https://doi.org/10.1016/j.ijforecast.2019.05.011
  • Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2018). Statistical and machine learning forecasting methods: Concerns and ways forward. PLoS ONE, 13(3), e0194889. https://doi.org/10.1371/journal.pone.0194889
  • Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020b). The M4 Competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36(1), 54–74. https://doi.org/10.1016/j.ijforecast.2019.04.014
  • Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020c). Predicting/hypothesizing the findings of the M4 Competition. International Journal of Forecasting, 36(1), 29–36. https://doi.org/10.1016/j.ijforecast.2019.02.012
  • Makridakis, S., Spiliotis, E., Assimakopoulos, V., Chen, Z., Gaba, A., Tsetlin, I., & Winkler, R. L. (2020d). The M5 Uncertainty competition: Results, findings and conclusions. Working paper.
  • Markham, I. S., & Rakes, T. R. (1998). The effect of sample size and variability of data on the comparative performance of artificial neural networks and regression. Computers & Operations Research, 25(4), 251–263. https://doi.org/10.1016/S0305-0548(97)00074-9
  • Moghaddam, A. H., Moghaddam, M. H., & Esfandyari, M. (2016). Stock market index prediction using artificial neural network. Journal of Economics, Finance and Administrative Science, 21(41), 89–93. https://doi.org/10.1016/j.jefas.2016.07.002
  • Montero-Manso, P., Athanasopoulos, G., Hyndman, R. J., & Talagala, T. S. (2020). Fforma: Feature-based forecast model averaging. International Journal of Forecasting, 36(1), 86–92. https://doi.org/10.1016/j.ijforecast.2019.02.011
  • Montero-Manso, P., & Hyndman, R. J. (2021). Principles and algorithms for forecasting groups of time series: Locality and globality. International Journal of Forecasting, 37(4), 1632–1653. https://doi.org/10.1016/j.ijforecast.2021.03.004
  • Nikolopoulos, K., & Petropoulos, F. (2018). Forecasting for big data: Does suboptimality matter? Computers & Operations Research, 98, 322–329.
  • Oreshkin, B. N., Carpov, D., Chapados, N., & Bengio, Y. (2019). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. CoRR, abs/1905.10437.
  • Pak, M., & Kim, S. (2017). A review of deep learning in image recognition [Paper presentation]. In 2017 4th International Conference on Computer Applications and Information Processing Technology (CAIPT), (pp. 1–3). https://doi.org/10.1109/CAIPT.2017.8320684
  • Petropoulos, F., Makridakis, S., Assimakopoulos, V., & Nikolopoulos, K. (2014). ’Horses for Courses’ in demand forecasting. European Journal of Operational Research, 237(1), 152–163. https://doi.org/10.1016/j.ejor.2014.02.036
  • Petropoulos, F., & Svetunkov, I. (2020). A simple combination of univariate models. International Journal of Forecasting, 36(1), 110–115. https://doi.org/10.1016/j.ijforecast.2019.01.006
  • Robinson, C., Dilkina, B., Hubbs, J., Zhang, W., Guhathakurta, S., Brown, M. A., & Pendyala, R. M. (2017). Machine learning approaches for estimating commercial building energy consumption. Applied Energy, 208, 889–904. https://doi.org/10.1016/j.apenergy.2017.09.060
  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A., & Fei-Fei, L. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252. https://doi.org/10.1007/s11263-015-0816-y
  • Salinas, D., Flunkert, V., Gasthaus, J., & Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3), 1181–1191. https://doi.org/10.1016/j.ijforecast.2019.07.001
  • Seide, F., & Agarwal, A. (2016). CNTK: Microsoft’s open-source deep-learning toolkit [Paper presentation]. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining KDD ’16, New York, NY (p. 2135). Association for Computing Machinery.
  • Semenoglou, A.-A., Spiliotis, E., Makridakis, S., & Assimakopoulos, V. (2021). Investigating the accuracy of cross-learning time series forecasting methods. International Journal of Forecasting, 37(3), 1072–1084. https://doi.org/10.1016/j.ijforecast.2020.11.009
  • Smyl, S. (2020). A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. International Journal of Forecasting, 36(1), 75–85. https://doi.org/10.1016/j.ijforecast.2019.03.017
  • Smyl, S., & Kuber, K. (2016). Data preprocessing and augmentation for multiple short time series forecasting with recurrent neural networks. In 36th International Symposium on Forecasting, Santander, 2016 (pp. 1–13).
  • Spiliotis, E., Assimakopoulos, V., & Makridakis, S. (2020a). Generalizing the theta method for automatic forecasting. European Journal of Operational Research, 284(2), 550–558. https://doi.org/10.1016/j.ejor.2020.01.007
  • Spiliotis, E., Assimakopoulos, V., & Nikolopoulos, K. (2019a). Forecasting with a hybrid method utilizing data smoothing, a variation of the theta method and shrinkage of seasonal factors. International Journal of Production Economics, 209, 92–102. https://doi.org/10.1016/j.ijpe.2018.01.020
  • Spiliotis, E., Kouloumos, A., Assimakopoulos, V., & Makridakis, S. (2020b). Are forecasting competitions data representative of the reality? International Journal of Forecasting, 36(1), 37–53. https://doi.org/10.1016/j.ijforecast.2018.12.007
  • Spiliotis, E., Makridakis, S., Semenoglou, A.-A., & Assimakopoulos, V. (2022). Comparison of statistical and machine learning methods for daily SKU demand forecasting. Operational Research, 22(3), 3037–3061. https://doi.org/10.1007/s12351-020-00605-2
  • Spiliotis, E., Nikolopoulos, K., & Assimakopoulos, V. (2019b). Tales from tails: On the empirical distributions of forecasting errors and their implication to risk. International Journal of Forecasting, 35(2), 687–698. https://doi.org/10.1016/j.ijforecast.2018.10.004
  • Tashman, L. J. (2000). Out-of-sample tests of forecasting accuracy: An analysis and review. International Journal of Forecasting, 16(4), 437–450. https://doi.org/10.1016/S0169-2070(00)00065-0
  • van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. CoRR, abs/1609.03499.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need [Paper presentation]. In Proceedings of the 31st International Conference on Neural Information Processing Systems NIPS’17, Red Hook, NY (pp. 6000–6010). Curran Associates Inc.
  • Voyant, C., Notton, G., Kalogirou, S., Nivet, M.-L., Paoli, C., Motte, F., & Fouilloy, A. (2017). Machine learning methods for solar radiation forecasting: A review. Renewable Energy, 105, 569–582. https://doi.org/10.1016/j.renene.2016.12.095
  • Wang, H., Lei, Z., Zhang, X., Zhou, B., & Peng, J. (2019). A review of deep learning for renewable energy forecasting. Energy Conversion and Management, 198, 111799. https://doi.org/10.1016/j.enconman.2019.111799
  • Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing [review article]. IEEE Computational Intelligence Magazine, 13(3), 55–75. https://doi.org/10.1109/MCI.2018.2840738
  • Zhang, G., Eddy Patuwo, B., & Hu, M. Y. (1998). Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, 14(1), 35–62. https://doi.org/10.1016/S0169-2070(97)00044-7
  • Zotteri, G., & Kalchschmidt, M. (2007). A model for selecting the appropriate level of aggregation in forecasting processes. International Journal of Production Economics, 108(1–2), 74–83. https://doi.org/10.1016/j.ijpe.2006.12.030
