Research Article

IMA(1,1) as a new benchmark for forecast evaluation

ABSTRACT

Many forecasting studies compare the forecast accuracy of new methods or models against a benchmark model. Often, this benchmark is the random walk model. In this note, I argue that for various reasons an IMA(1,1) model is a better benchmark in many cases.

I. Introduction

It is common practice to compare the forecast performance of a new model or method with that of a benchmark model. This holds in particular today, when many new and advanced econometric models are put forward, such as various versions of dynamic factor models, and when many studies emerge that use novel machine learning methods; see Kim and Swanson (2018) for a recent extensive survey and application.

Typically, one chooses as the benchmark for one-step-ahead forecasts a simple autoregressive time series model, and most often one chooses a random walk model. When $y_t$ denotes the time series to be predicted, the random walk forecast for $t+1$ is

$\hat{y}_{t+1|t} = y_t$

which is based on the random walk model

$y_t = y_{t-1} + \varepsilon_t$

where $\varepsilon_t$ is a mean-zero white noise process with variance $\sigma_\varepsilon^2$. One motivation to consider this model is of course that there is no parameter to estimate, and hence no effort is involved in creating this forecast.

In many situations, however, the random walk model does not fit the actual data well. For financial time series one may perhaps encounter this model, as it associates with asset price movements, but for many other time series, as in macroeconomics or business, the random walk model does not provide a good fit. In this note I therefore propose to replace the random walk benchmark model with another model, one which has more face value for a wider range of economic variables. This new benchmark model is the Integrated Moving Average model of order (1,1) [with acronym: IMA(1,1)], which reads as

$y_t = y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$ (1)

This IMA(1,1) model is basically a random walk model with an additional lagged error term $\theta \varepsilon_{t-1}$. The parameter $\theta$, which can be positive or negative and which is usually bounded by −1 and 1, can be estimated using Maximum Likelihood or Iterative Least Squares. As an example, Nelson and Plosser (1982) and Rossana and Seater (1995) find much empirical evidence for this model for a range of macroeconomic variables.
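
To make this concrete, here is a minimal sketch of how such a benchmark could be estimated and used, assuming the statsmodels Python library; the simulated series and the value $\theta = -0.4$ are illustrative choices, not from the paper.

```python
# Minimal sketch: estimating an IMA(1,1) benchmark by Maximum Likelihood
# with statsmodels; the simulated series and theta value are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
eps = rng.normal(size=501)
theta = -0.4
y = np.cumsum(eps[1:] + theta * eps[:-1])  # simulate an IMA(1,1) path

fit = ARIMA(y, order=(0, 1, 1)).fit()      # ARIMA(0,1,1) = IMA(1,1)
print(fit.params)                          # MA(1) estimate near theta
print(fit.forecast(steps=1))               # one-step-ahead benchmark forecast
```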

Writing

$u_t = \varepsilon_t + \theta \varepsilon_{t-1}$

the variance of $u_t$, $\gamma_0^u$, is

$\gamma_0^u = (1 + \theta^2) \sigma_\varepsilon^2$

using the methods outlined in Chapter 3 of Franses, van Dijk, and Opschoor (2014), and the first-order autocovariance, $\gamma_1^u$, is

$\gamma_1^u = \theta \sigma_\varepsilon^2$

This means that the first-order autocorrelation of $u_t$, $\rho_1^u$, is

$\rho_1^u = \frac{\gamma_1^u}{\gamma_0^u} = \frac{\theta}{1 + \theta^2}$

When $\theta > 0$, then $\rho_1^u > 0$, and when $\theta < 0$, then $\rho_1^u < 0$.
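
As a worked numerical illustration (not in the original), take $\theta = 0.5$:

$\rho_1^u = \frac{0.5}{1 + 0.5^2} = \frac{0.5}{1.25} = 0.4$

so the first differences of the series are positively autocorrelated at lag one, unlike under the random walk, where $\rho_1^u = 0$.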

In this note, I will show that the IMA(1,1) model follows naturally in a variety of settings. First, there will be some theoretical arguments. Next, I provide two additional, empirics-based, arguments. The last section concludes.

II. How can an IMA(1,1) model arise?

This section shows that an IMA(1,1) model can follow from temporal aggregation of a random walk process, that it can follow from a simple basic structural model, that it associates with a time series process that experiences permanent and transitory shocks, and that it can be viewed as a simple and sensible forecast updating process associated with exponential smoothing.

Aggregation of a random walk

Suppose that there is a variable $y_\tau$, where $\tau$ is of a higher frequency than $t$. For example, $\tau$ can concern months, while $t$ concerns years. Suppose further that the variable at the higher frequency $\tau$ obeys a random walk model, that is,

$y_\tau = y_{\tau-1} + \varepsilon_\tau$

where $\varepsilon_\tau$ is a mean-zero white noise process with some variance. Suppose that this high-frequency random walk is temporally aggregated to a variable with frequency $t$, and suppose that this aggregation involves $m$ steps. So, aggregation from months to years implies that $m = 12$. Working (1960) shows that such temporal aggregation results in the following model:

$y_t = y_{t-1} + u_t$

where the first-order autocorrelation of $u_t$, say $\rho_1^u$, is the only non-zero-valued autocorrelation, and this autocorrelation is

$\rho_1^u = \frac{m^2 - 1}{2(2m^2 + 1)}$

When $m \to \infty$, $\rho_1^u \to \frac{1}{4}$. When $m = 2$, $\rho_1^u = \frac{1}{6}$. In other words, aggregation of a high-frequency random walk leads to an IMA(1,1) model with a positive-valued $\theta$.
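
A small simulation can illustrate Working's result; this sketch assumes aggregation by averaging $m$ consecutive high-frequency observations, as in Working (1960), with all settings chosen for illustration.

```python
# Sketch: temporal aggregation (by averaging) of a monthly random walk.
# The first difference of the annual averages should have a lag-one
# autocorrelation near (m^2 - 1) / (2 * (2m^2 + 1)) ~ 0.247 for m = 12.
import numpy as np

rng = np.random.default_rng(1)
m, n_years = 12, 20_000
y_monthly = np.cumsum(rng.normal(size=m * n_years))    # monthly random walk
y_annual = y_monthly.reshape(n_years, m).mean(axis=1)  # averages over m months
u = np.diff(y_annual)

rho1_hat = np.corrcoef(u[:-1], u[1:])[0, 1]
rho1_theory = (m**2 - 1) / (2 * (2 * m**2 + 1))
print(rho1_hat, rho1_theory)
```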

Basic structural model

Consider the basic structural time series model (Harvey 1989)

$y_t = \mu_{t-1} + \varepsilon_t$

with

$\mu_t = \mu_{t-1} + \beta \varepsilon_t$

Writing the latter expression as

$\mu_t = \frac{\beta \varepsilon_t}{1 - L}$

where $L$ is the familiar lag operator, we have

$y_t = \frac{\beta \varepsilon_{t-1}}{1 - L} + \varepsilon_t$

Multiplying both sides by $1 - L$ and rearranging terms gives the joint expression for $y_t$:

$y_t = y_{t-1} + \varepsilon_t + (\beta - 1) \varepsilon_{t-1}$

Here the IMA(1,1) model in (1) appears with $\theta = \beta - 1$. The MA(1) parameter $\theta$ is negative when $\beta < 1$, and it is positive when $\beta > 1$. Note that even when the error source in the two equations of the basic structural model is not the same $\varepsilon_t$, the IMA(1,1) model still appears; see Harvey and Koopman (2000).
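
As a check on the algebra, the following sketch simulates the basic structural model and fits an IMA(1,1); the value $\beta = 0.4$ is an illustrative choice, so the MA(1) estimate should be near $\beta - 1 = -0.6$ (again assuming statsmodels).

```python
# Sketch: simulate y_t = mu_{t-1} + e_t with mu_t = mu_{t-1} + beta * e_t,
# then fit an IMA(1,1); the MA(1) estimate should be close to beta - 1.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
beta, n = 0.4, 2_000
e = rng.normal(size=n)
mu = np.cumsum(beta * e)                   # mu_t = mu_{t-1} + beta * e_t
y = np.concatenate(([0.0], mu[:-1])) + e   # y_t = mu_{t-1} + e_t, with mu_0 = 0

fit = ARIMA(y, order=(0, 1, 1)).fit()
print(fit.params)                          # ma.L1 near beta - 1 = -0.6
```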

Permanent and temporary shocks

Another, related way to arrive at an IMA(1,1) model is the following. Suppose that a time series can be decomposed into a part with permanent shocks and a part with only transitory shocks, as in

$y_t = \frac{v_t}{1 - L} + w_t$

As such, the white noise shocks $v_t$ with variance $\sigma_v^2$ have a permanent effect, because of the $\frac{1}{1 - L}$ operator, and the white noise shocks $w_t$ with variance $\sigma_w^2$ have a temporary (immediate) effect. Multiplying both sides by $1 - L$ results in

$(1 - L) y_t = v_t + (1 - L) w_t$

This is

$y_t = y_{t-1} + u_t$

with the variance of $u_t$ equal to

$\gamma_0^u = \sigma_v^2 + 2 \sigma_w^2$

The first-order autocovariance is equal to

$\gamma_1^u = -\sigma_w^2$

and hence

$\rho_1^u = \frac{-\sigma_w^2}{\sigma_v^2 + 2 \sigma_w^2}$

which is non-zero and negative because of the positive-valued variance $\sigma_w^2$.
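
To connect this to the $\theta$ of the IMA(1,1) model, one can equate this autocorrelation with $\rho_1^u = \theta / (1 + \theta^2)$. As a worked example (with variances chosen purely for illustration), take $\sigma_v^2 = \sigma_w^2 = 1$:

$\rho_1^u = \frac{-1}{1 + 2} = -\frac{1}{3}, \qquad \frac{\theta}{1 + \theta^2} = -\frac{1}{3} \;\Rightarrow\; \theta^2 + 3\theta + 1 = 0 \;\Rightarrow\; \theta = \frac{-3 + \sqrt{5}}{2} \approx -0.38$

where the invertible root is selected.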

Forecast updates

A final simple motivation to favour an IMA(1,1) model as a benchmark is that it can be written as a simple random walk forecast update, but now one in which past forecast errors are accommodated, while the prediction interval can still be computed easily (Chatfield 1993). Consider again

$y_t = y_{t-1} + \varepsilon_t + \theta \varepsilon_{t-1}$

The one-step-ahead forecast is based on

$\hat{y}_{t+1|t} = y_t + \theta \varepsilon_t$

The error term can be viewed as the forecast error from the previous forecast, that is

$\varepsilon_t = y_t - \hat{y}_{t|t-1}$

Hence,

$\hat{y}_{t+1|t} = y_t + \theta (y_t - \hat{y}_{t|t-1})$

There are now four possible cases in terms of forecast updates, depending on the sign of $\theta$ and on the sign of $y_t - \hat{y}_{t|t-1}$. Note that this expression can be rewritten as $\hat{y}_{t+1|t} = (1 + \theta) y_t - \theta \hat{y}_{t|t-1}$, which associates with a so-called simple exponential smoothing model with smoothing parameter $1 + \theta$ (Chatfield et al. 2001).
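
The update rule translates directly into code; a minimal sketch, with a hypothetical series and an assumed (rather than estimated) $\theta$:

```python
# Sketch of the recursive IMA(1,1) forecast update:
# yhat_{t+1|t} = y_t + theta * (y_t - yhat_{t|t-1}).
import numpy as np

def ima_one_step_forecasts(y, theta):
    """Return yhat, where yhat[t] is the forecast of y[t]; theta assumed known."""
    yhat = np.empty(len(y) + 1)
    yhat[0] = y[0]                 # initialize: first forecast error is zero
    for t in range(len(y)):
        error = y[t] - yhat[t]     # previous one-step-ahead forecast error
        yhat[t + 1] = y[t] + theta * error
    return yhat

y = np.array([10.0, 10.5, 10.2, 10.8, 11.0])
print(ima_one_step_forecasts(y, theta=-0.4))
```

Setting theta = 0 recovers the random walk forecast, so the random walk benchmark is nested as a special case.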

III. Further arguments

Two further arguments which would make the IMA(1,1) model a better benchmark are the following. First, as Hyndman and Billah (2003) show, the IMA(1,1) model has the same forecasting function as the so-called ‘Theta’ method, proposed in Assimakopoulos and Nikolopoulos (2000). The Theta method is a simple benchmark that performs well in forecasting competitions like the M3 and M4; see Makridakis and Hibon (2000) and Makridakis, Spiliotis, and Assimakopoulos (2019), respectively.

Finally, an IMA(1,1) process can have autocorrelations that associate with long memory. At the same time, long memory associates with aggregation across time series variables (Granger 1980) and structural breaks (Granger and Hyung 2004). Consider again

yt=yt1+εt+θεt1

Using the lag operator, this can be written as

$(1 - L) y_t = (1 + \theta L) \varepsilon_t$

and hence

$\frac{1 - L}{1 + \theta L} y_t = \varepsilon_t$

This can be written as

$(1 - L)\left( y_t - \theta y_{t-1} + \theta^2 y_{t-2} - \theta^3 y_{t-3} + \cdots \right) = \varepsilon_t$

or

$y_t - (\theta + 1) y_{t-1} + (\theta^2 + \theta) y_{t-2} - (\theta^3 + \theta^2) y_{t-3} + \cdots = \varepsilon_t$

Put more simply, the approximate infinite autoregression reads as

$y_t = \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \alpha_3 y_{t-3} + \cdots + \varepsilon_t$

with

$\alpha_1 = \theta + 1, \quad \alpha_2 = -(\theta^2 + \theta), \quad \alpha_3 = \theta^3 + \theta^2, \quad \alpha_4 = -(\theta^4 + \theta^3), \quad \ldots$

that is, $\alpha_j = (-\theta)^{j-1} (1 + \theta)$.

Now consider the fractionally integrated model

$(1 - L)^d y_t = \varepsilon_t$

with $0 < d < 1$; see Granger and Joyeux (1980). Franses, van Dijk, and Opschoor (2014, 91) show that this can again be written as an infinite autoregression

$y_t = \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \alpha_3 y_{t-3} + \cdots + \varepsilon_t$

where now

$\alpha_1 = d, \quad \alpha_2 = \frac{d(1 - d)}{2!}, \quad \alpha_3 = \frac{d(1 - d)(2 - d)}{3!}, \quad \alpha_4 = \frac{d(1 - d)(2 - d)(3 - d)}{4!}, \quad \ldots$

For particular values of $\theta$ and $d$, the patterns of the autoregressive parameters of the IMA(1,1) and the fractionally integrated process can look very similar. Consider for example Figure 1, which gives the first 10 autoregressive parameters, that is $\alpha_1$ to $\alpha_{10}$, for $\theta = -0.9$ and $d = 0.3$.

Figure 1. The first 10 autoregressive parameters in an approximate autoregressive model, that is $\alpha_1$ to $\alpha_{10}$, for $\theta = -0.9$ and $d = 0.3$.
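
As a sketch (not in the original), the two coefficient sequences can be computed directly from the closed forms above:

```python
# Sketch: first 10 AR coefficients implied by an IMA(1,1) with theta = -0.9,
# alpha_j = (-theta)^(j-1) * (1 + theta), versus a fractionally integrated
# model with d = 0.3, where alpha_1 = d and alpha_{k+1} = alpha_k * (k - d) / (k + 1).
import numpy as np

theta, d = -0.9, 0.3
j = np.arange(1, 11)

alpha_ima = (-theta) ** (j - 1) * (1 + theta)

alpha_frac = np.empty(10)
alpha_frac[0] = d
for k in range(1, 10):
    alpha_frac[k] = alpha_frac[k - 1] * (k - d) / (k + 1)

for jj, a, b in zip(j, alpha_ima, alpha_frac):
    print(f"alpha_{jj:<2d}  IMA(1,1): {a:.4f}   frac. integrated: {b:.4f}")
```

For these values both sequences are positive and decay slowly, which is the similarity that Figure 1 illustrates.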

IV. Conclusion

In this note, I proposed to replace the random walk benchmark model in forecast evaluations with another model, one which has more face value for many economic variables. This new benchmark model is the Integrated Moving Average model of order (1,1). I have put forward six arguments why this IMA(1,1) model is a suitable benchmark model in practice.

Disclosure statement

No potential conflict of interest was reported by the author.

References

  • Assimakopoulos, V., and K. Nikolopoulos. 2000. “The Theta Model: A Decomposition Approach to Forecasting.” International Journal of Forecasting 16: 521–530. doi:10.1016/S0169-2070(00)00066-2.
  • Chatfield, C. 1993. “Calculating Interval Forecasts (With discussion).” Journal of Business and Economic Statistics 11: 121–144.
  • Chatfield, C., A. B. Koehler, J. K. Ord, and R. D. Snyder. 2001. “A New Look at Models for Exponential Smoothing.” The Statistician 50: 146–159.
  • Franses, P. H., D. van Dijk, and A. Opschoor. 2014. Time Series Models for Business and Economic Forecasting. Cambridge UK: Cambridge University Press.
  • Granger, C. W. J. 1980. “Long Memory Relationships and the Aggregation of Dynamic Models.” Journal of Econometrics 14: 227–238. doi:10.1016/0304-4076(80)90092-5.
  • Granger, C. W. J., and N. Hyung. 2004. “Occasional Structural Breaks and Long Memory with an Application to the S&P 500 Absolute Stock Returns.” Journal of Empirical Finance 11: 399–421. doi:10.1016/j.jempfin.2003.03.001.
  • Granger, C. W. J., and R. Joyeux. 1980. “An Introduction to Long-memory Time Series Models and Fractional Differencing.” Journal of Time Series Analysis 1: 15–39. doi:10.1111/j.1467-9892.1980.tb00297.x.
  • Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge UK: Cambridge University Press.
  • Harvey, A. C., and S. J. Koopman. 2000. “Signal Extraction and the Formulation of Unobserved Components Models.” Econometrics Journal 3: 84–107. doi:10.1111/1368-423X.00040.
  • Hyndman, R. J., and B. Billah. 2003. “Unmasking the Theta Method.” International Journal of Forecasting 19: 287–290. doi:10.1016/S0169-2070(01)00143-1.
  • Kim, H. H., and N. R. Swanson. 2018. “Mining Big Data Using Parsimonious Factor, Machine Learning, Variable Selection and Shrinkage Methods.” International Journal of Forecasting 34: 339–354. doi:10.1016/j.ijforecast.2016.02.012.
  • Makridakis, S., and M. Hibon. 2000. “The M3-Competition: Results, Conclusions and Implications.” International Journal of Forecasting 16: 451–476. doi:10.1016/S0169-2070(00)00057-1.
  • Makridakis, S., E. Spiliotis, and V. Assimakopoulos. 2019. “The M4 Competition: 100,000 Time Series and 61 Forecasting Methods.” International Journal of Forecasting, in press. doi:10.1016/j.ijforecast.2019.04.014.
  • Nelson, C. R., and C. I. Plosser. 1982. “Trends and Random Walks in Macroeconomic Time Series: Some Evidence and Implications.” Journal of Monetary Economics 10: 139–162. doi:10.1016/0304-3932(82)90012-5.
  • Rossana, R., and J. Seater. 1995. “Temporal Aggregation and Economic Time Series.” Journal of Business and Economic Statistics 13: 441–451.
  • Working, H. 1960. “Note on the Correlation of First Differences of Averages in a Random Chain.” Econometrica 28: 916–918. doi:10.2307/1907574.