TIME SERIES ECONOMETRICS

Identification-robust moment-based tests for Markov switching in autoregressive models


ABSTRACT

This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (Citation2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of USA output growth.


1. Introduction

The extension of the linear autoregressive model proposed by Hamilton (Citation1989) allows the mean and variance of a time series to depend on the outcome of a latent process, assumed to follow a Markov chain. The evolution over time of the latent state variable gives rise to an autoregressive process with a mean and variance that switch according to the transition probabilities of the Markov chain. Hamilton (Citation1989) applies the Markov-switching model to USA output growth rates and argues that it encompasses the linear specification. This class of models has also been used to model potential regime shifts in foreign exchange rates (Engel and Hamilton, Citation1990), stock market volatility (Hamilton and Susmel, Citation1994), real interest rates (Garcia and Perron, Citation1996), corporate dividends (Timmermann, Citation2001), the term structure of interest rates (Ang and Bekaert, Citation2002b), portfolio allocation (Ang and Bekaert, Citation2002a), and government policy (Davig, Citation2004). A comprehensive treatment of Markov-switching models and many references are found in Kim and Nelson (Citation1999), and more recent surveys of this class of models are provided by Guidolin (Citation2011) and Hamilton (Citation2016).

A fundamental question in the application of such models is whether the data-generating process (DGP) is indeed characterized by regime changes in its mean or variance. Statistical testing of this hypothesis poses serious difficulties for conventional likelihood-based methods because two important assumptions underlying standard asymptotic theory are violated under the null hypothesis of no regime change. Indeed, if a two-regime model is fitted to a single-regime linear process, the parameters which describe the second regime are unidentified. Moreover, the derivatives of the likelihood function with respect to the mean and variance are identically zero when evaluated at the constrained maximum under both the null and alternative hypotheses. These difficulties combine features of the statistical problems discussed in Davies (Citation1977, Citation1987), Watson and Engle (Citation1985), and Lee and Chesher (Citation1986). The end result is that the information matrix is singular under the null hypothesis, and the usual likelihood-ratio test does not have an asymptotic chi-squared distribution in this case. Conventional likelihood-based inference in the context of Markov-switching models can thus be very misleading in practice. Indeed, the simulation results reported by Psaradakis and Sola (Citation1998) reveal just how poor the first-order asymptotic approximations to the finite-sample distribution of the maximum-likelihood (ML) estimates can be.

Hansen (Citation1992, Citation1996) and Garcia (Citation1998) proposed likelihood-ratio tests specifically tailored to deal with the kind of violations of the regularity conditions which arise in Markov-switching models. Their methods differ in terms of which parameters are considered of interest and those taken as nuisance parameters. Both methods require a search over the intervening nuisance parameter space with an evaluation of the Markov-switching likelihood function at each considered grid point, which makes them computationally expensive. Carrasco et al. (Citation2014) derive asymptotically optimal tests for Markov-switching parameters. These information matrix-type tests only require estimating the model under the null hypothesis, which is a clear advantage over Hansen (Citation1992, Citation1996) and Garcia (Citation1998). However, the asymptotic distribution of the optimal tests is not free of nuisance parameters, so Carrasco et al. (Citation2014) suggest a parametric bootstrap procedure to find the critical values.

In this paper, we propose new tests for Markov-switching models which, just like the Carrasco et al. (Citation2014) tests, circumvent the statistical problems and computational costs of likelihood-based methods. Specifically, we first propose computationally simple test statistics, based on least-squares residual moments, for the hypothesis of no Markov switching (or linearity) in autoregressive models. The residual moment statistics considered include statistics focusing on the mean, variance, skewness, and excess kurtosis of estimated least-squares residuals. The different statistics are combined through the minimum or the product of approximate marginal p-values.

Second, we exploit the computational simplicity of the test statistics to obtain exact and asymptotically valid test procedures, which do not require deriving the asymptotic distribution of the test statistics and automatically deal with the identification difficulties associated with such models. Even if the distributions of these combined statistics may be difficult to establish analytically, the level of the corresponding test is perfectly controlled. This is made possible through the use of Monte Carlo (MC) test methods. When no new nuisance parameter appears in the null distribution of the test statistic, such methods allow one to control perfectly the level of a test, irrespective of the distribution of the test statistic, as long as the latter can be simulated under the null hypothesis; see Dwass (Citation1957), Barnard (Citation1963), Birnbaum (Citation1974), and Dufour (Citation2006). This feature holds for a fixed number of replications, which can be quite small. For example, 19 replications of the test statistic are sufficient to obtain a test with exact level 0.05. A larger number of replications decreases the sensitivity of the test to the underlying randomization and typically leads to power gains. Dufour et al. (Citation2004), however, find that increasing the number of replications beyond 100 has only a small effect on power.

Furthermore, when nuisance parameters are present—as in the case of linearity tests studied here—the procedure can be extended through the use of maximized Monte Carlo (MMC) tests (Dufour, Citation2006). Two variants of this procedure are described: a fully exact version which requires maximizing a p-value function over the nuisance parameter space under the null hypothesis (here, the autoregressive coefficients), and an approximate one based on a (potentially much smaller) consistent set estimator of the autoregressive parameters. Both procedures are valid (in finite samples or asymptotically) without any need to establish the asymptotic distribution of the fundamental test statistics (here, residual moment-based statistics) or the convergence of the empirical distribution of the simulated test statistics toward the asymptotic distribution of the fundamental test statistic used (as in bootstrapping).

When the nuisance-parameter set on which the p-values are computed is reduced to a single point—a consistent estimator of the nuisance parameters under the null hypothesis—the MC test can be interpreted as a parametric bootstrap. The implementation of this type of procedure is also considerably simplified through the use of our moment-based test statistics. It is important to emphasize that evaluating the p-value function is far simpler to do than computing the likelihood function of the Markov-switching model, as required by the methods of Hansen (Citation1992, Citation1996) and Garcia (Citation1998). The MC tests are also far simpler to compute than the information matrix-type tests of Carrasco et al. (Citation2014), which require a grid search for a supremum-type statistic (or numerical integration for an exponential-type statistic) over a priori measures of the distance between potentially regime-switching parameters and another parameter characterizing the serial correlation of the Markov chain under the alternative.

Third, we conduct simulation experiments to examine the performance of the proposed tests using the optimal tests of Carrasco et al. (Citation2014) as the benchmark for comparisons. The new moment-based tests are found to perform remarkably well when compared with the asymptotically optimal ones, especially when the variance is subject to regime changes. Finally, the proposed methods are illustrated by revisiting the question of whether USA real GNP growth can be described as an autoregressive model with Markov-switching means and variances using the original Hamilton (Citation1989) data set from 1952 to 1984, as well as an extended data set from 1952 to 2010. We find that the empirical evidence does not justify a rejection of the linear model over the period 1952–1984. However, the linear autoregressive model is firmly rejected over the extended time period.

The paper is organized as follows. Section 2 describes the autoregressive model with Markov-switching means and variances. Section 3 presents the moments of normal mixtures implied by the regime-switching process and the test statistics we propose to combine for capturing those moments. Section 3 also explains how the MC test techniques can be used to deal with the presence of an autoregressive component in the model specification. Section 4 examines the performance of the developed MC tests in simulation experiments using the optimal tests for Markov-switching parameters of Carrasco et al. (Citation2014) as the benchmark for comparison purposes. Section 5 then presents the results of the empirical application to USA output growth and Section 6 concludes.

2. Markov-switching model

We consider an autoregressive model with Markov-switching means and variances defined by

(1)  $y_t = \mu_{s_t} + \sum_{k=1}^{r} \phi_k\,(y_{t-k} - \mu_{s_{t-k}}) + \sigma_{s_t}\varepsilon_t$
where the innovation terms $\{\varepsilon_t\}$ are independently and identically distributed (i.i.d.) according to the $N(0,1)$ distribution. The time-varying mean and variance parameters of the observed variable $y_t$ are functions of a latent first-order Markov chain process $\{S_t\}$. The unobserved random variable $S_t$ takes integer values in the set $\{1,2\}$ such that $\Pr(S_t=j)=\sum_{i=1}^{2} p_{ij}\Pr(S_{t-1}=i)$, with $p_{ij}=\Pr(S_t=j \mid S_{t-1}=i)$. The one-step transition probabilities are collected in the matrix:
$P=\begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}$
where $\sum_{j=1}^{2} p_{ij}=1$ for $i = 1,2$. Furthermore, $S_t$ and $\varepsilon_\tau$ are assumed independent for all $t,\tau$.
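To make the DGP concrete, the following minimal sketch simulates a sample path from model (1) in Python; the function name and the parameter values at the bottom are purely illustrative and are not taken from the paper.

```python
import numpy as np

def simulate_ms_ar(T, mu, sigma, phi, P, burn=200, seed=0):
    """Simulate model (1) with a two-state Markov chain and AR order r = len(phi)."""
    rng = np.random.default_rng(seed)
    r = len(phi)
    n = T + burn + r
    # draw the latent Markov chain {S_t}
    s = np.zeros(n, dtype=int)
    s[0] = rng.choice(2)                        # arbitrary initial state
    for t in range(1, n):
        s[t] = rng.choice(2, p=P[s[t - 1]])     # row of P for the previous state
    # y_t = mu_{s_t} + sum_k phi_k (y_{t-k} - mu_{s_{t-k}}) + sigma_{s_t} eps_t
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[:r] = mu[s[:r]]                           # start the recursion at the regime means
    for t in range(r, n):
        dev = sum(phi[k] * (y[t - k - 1] - mu[s[t - k - 1]]) for k in range(r))
        y[t] = mu[s[t]] + dev + sigma[s[t]] * eps[t]
    return y[-T:], s[-T:]

# illustrative (hypothetical) parameter values
mu = np.array([0.5, -0.5]); sigma = np.array([0.8, 1.5]); phi = [0.3]
P = np.array([[0.9, 0.1], [0.2, 0.8]])
y, s = simulate_ms_ar(500, mu, sigma, phi, P)
```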

The model in (1) can also be conveniently expressed as

(2)  $y_t=\sum_{i=1}^{2}\mu_i\,\mathbb{I}[S_t=i]+\sum_{k=1}^{r}\phi_k\Big(y_{t-k}-\sum_{i=1}^{2}\mu_i\,\mathbb{I}[S_{t-k}=i]\Big)+\sum_{i=1}^{2}\sigma_i\,\mathbb{I}[S_t=i]\,\varepsilon_t$
where $\mathbb{I}[A]$ is the indicator function of event $A$, which is equal to 1 when $A$ occurs and 0 otherwise. Here, $\mu_i$ and $\sigma_i^2$ are the conditional mean and variance given the regime $S_t = i$.

The model parameters are collected in the vector $\theta=(\mu_1,\mu_2,\sigma_1,\sigma_2,\phi_1,\ldots,\phi_r,p_{11},p_{22})$. The sample (log) likelihood, conditional on the first $r$ observations of $y_t$, is then given by

(3)  $L_T(\theta)=\log f(y_1^T \mid y_{-r+1}^{0};\theta)=\sum_{t=1}^{T}\log f(y_t \mid y_{-r+1}^{t-1};\theta)$
where $y_{-r+1}^{t}=\{y_{-r+1},\ldots,y_t\}$ denotes the sample of observations up to time $t$, and
$f(y_t \mid y_{-r+1}^{t-1};\theta)=\sum_{s_t=1}^{2}\sum_{s_{t-1}=1}^{2}\cdots\sum_{s_{t-r}=1}^{2} f(y_t, S_t=s_t, S_{t-1}=s_{t-1},\ldots,S_{t-r}=s_{t-r} \mid y_{-r+1}^{t-1};\theta).$

Hamilton (Citation1989) proposes an algorithm for making inferences about the unobserved state variable St given observations on yt. His algorithm also yields an evaluation of the sample likelihood in (3), which is needed to find the maximum likelihood estimates of 𝜃.
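For readers who want to see the recursion, the following sketch evaluates the sample likelihood by the filtering idea Hamilton (1989) describes, written here for the no-AR case to keep it short (with autoregressive terms, the recursion must additionally track the r most recent states); the function and variable names are ours.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter_loglik(y, mu, sigma, P):
    """Log-likelihood of a two-state Markov-switching model without AR terms
    (a sketch of the filtering recursion behind (3))."""
    # ergodic (stationary) probabilities of the two-state chain
    pi1 = (1 - P[1, 1]) / (2 - P[0, 0] - P[1, 1])
    pred = np.array([pi1, 1 - pi1])                  # Pr(S_1 = j) before seeing data
    loglik = 0.0
    for t in range(len(y)):
        dens = norm.pdf(y[t], loc=mu, scale=sigma)   # f(y_t | S_t = j)
        joint = pred * dens                          # Pr(S_t = j, y_t | past)
        f_t = joint.sum()                            # f(y_t | past)
        loglik += np.log(f_t)
        filt = joint / f_t                           # Pr(S_t = j | data up to t)
        pred = P.T @ filt                            # Pr(S_{t+1} = j | data up to t)
    return loglik
```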

The sample likelihood $L_T(\theta)$ in (3) has several unusual features that make it notoriously difficult for standard optimizers to explore. In particular, the likelihood function has several modes of equal height. These modes correspond to the different ways of reordering the state labels: there is no difference between the likelihood at $(\mu_1,\mu_2,\sigma_1,\sigma_2)=(\mu_1^*,\mu_2^*,\sigma_1^*,\sigma_2^*)$ and the likelihood at $(\mu_1,\mu_2,\sigma_1,\sigma_2)=(\mu_2^*,\mu_1^*,\sigma_2^*,\sigma_1^*)$. Rossi (Citation2014, Chapter 1) provides a nice discussion of these issues in the context of normal mixtures, which is the special case implied by (2) when the $\phi$'s are zero. He shows that the likelihood is unbounded at numerous points of the parameter space. Furthermore, the likelihood function also has saddle points and local maxima. This means that standard numerical optimizers are likely to converge to a local maximum and will therefore need to be started from several points in a constrained parameter space to find the ML estimates.

3. Tests of linearity

The Markov-switching model in (2) nests the following linear autoregressive specification as a special case:

(4)  $y_t=c+\sum_{k=1}^{r}\phi_k y_{t-k}+\sigma_1\varepsilon_t$,
where $c=\mu_1\big(1-\sum_{k=1}^{r}\phi_k\big)$. Here, $\mu_1$ and $\sigma_1^2$ refer to the single-regime mean and variance parameters. It is well known that the conditional ML estimates of the linear model can be obtained from an ordinary least-squares (OLS) regression (Hamilton, Citation1994, Chapter 5). A problem with the ML approach is that the likelihood function will always increase when moving from the linear model in (4) to the two-regime model in (2) as any increase in flexibility is always rewarded. To avoid overfitting, it is therefore desirable to test whether the linear specification provides an adequate description of the data.

Given model (2), the null hypothesis of linearity can be expressed as ($\mu_1=\mu_2$, $\sigma_1=\sigma_2$), or ($p_{11}=1$, $p_{21}=1$), or ($p_{12}=1$, $p_{22}=1$). It is easy to see that if ($\mu_1=\mu_2$, $\sigma_1=\sigma_2$), then the transition probabilities are unidentified. Conversely, if ($p_{11}=1$, $p_{21}=1$), then it is $\mu_2$ and $\sigma_2$ which become unidentified, whereas if ($p_{12}=1$, $p_{22}=1$) then $\mu_1$ and $\sigma_1$ become unidentified. One of the regularity conditions underlying the usual asymptotic distributional theory of ML estimates is that the information matrix be nonsingular; see, for example, Gouriéroux and Monfort (Citation1995, Chapter 7). Under the null hypothesis of linearity, this condition is violated because the likelihood function in (3) is flat with respect to the unidentified parameters at the optimum. A singular information matrix also results from another, less obvious, problem: the derivatives of the likelihood function with respect to the mean and variance are identically zero when evaluated at the constrained maximum; see Hansen (Citation1992) and Garcia (Citation1998).

3.1. Mixture model

We begin by considering the mean-variance switching model:

(5)  $y_t=\mu_1\mathbb{I}[S_t=1]+\mu_2\mathbb{I}[S_t=2]+\big(\sigma_1\mathbb{I}[S_t=1]+\sigma_2\mathbb{I}[S_t=2]\big)\varepsilon_t$,
where $\varepsilon_t\sim$ i.i.d. $N(0,1)$. The Markov chain governing $S_t$ is assumed ergodic, and we denote the ergodic probability associated with state $i$ by $\pi_i$. Note that a two-state Markov chain is ergodic provided that $p_{11}<1$, $p_{22}<1$, and $p_{11}+p_{22}>0$ (Hamilton, Citation1994, p. 683). As we already mentioned, the null hypothesis of linearity (no regime changes) can be expressed as
$H_0(\mu,\sigma):\ \mu_1=\mu_2 \text{ and } \sigma_1=\sigma_2,$
and a relevant alternative hypothesis states that the mean and/or variance is subject to first-order Markov switching. The tests of $H_0(\mu,\sigma)$ we develop exploit the fact that the marginal distribution of $y_t$ is a mixture of two normal distributions. Indeed, under the maintained assumption of an ergodic Markov chain we have
(6)  $y_t\sim \pi_1 N(\mu_1,\sigma_1^2)+\pi_2 N(\mu_2,\sigma_2^2),$
where $\pi_1=(1-p_{22})/(2-p_{11}-p_{22})$ and $\pi_2=1-\pi_1$. In the spirit of Cho and White (Citation2007) and Carter and Steigerwald (Citation2012, Citation2013), the suggested approach ignores the Markov property of $S_t$.

The marginal distribution of $y_t$ given in (6) is a weighted average of two normal distributions. Timmermann (Citation2000) shows that the mean ($\mu$), unconditional variance ($\sigma^2$), skewness coefficient ($b_1$), and excess kurtosis coefficient ($b_2$) associated with (6) are given by

(7)  $\mu=\pi_1\mu_1+\pi_2\mu_2,$
(8)  $\sigma^2=\pi_1\sigma_1^2+\pi_2\sigma_2^2+\pi_1\pi_2(\mu_2-\mu_1)^2,$
(9)  $b_1=\dfrac{\pi_1\pi_2(\mu_1-\mu_2)\big\{3(\sigma_1^2-\sigma_2^2)+(1-2\pi_1)(\mu_2-\mu_1)^2\big\}}{\big(\pi_1\sigma_1^2+\pi_2\sigma_2^2+\pi_1\pi_2(\mu_2-\mu_1)^2\big)^{3/2}},$
(10)  $b_2=a/b,$
where
$a=3\pi_1\pi_2(\sigma_2^2-\sigma_1^2)^2+6(\mu_2-\mu_1)^2\pi_1\pi_2(2\pi_1-1)(\sigma_2^2-\sigma_1^2)+\pi_1\pi_2(\mu_2-\mu_1)^4(1-6\pi_1\pi_2),\qquad b=\big(\pi_1\sigma_1^2+\pi_2\sigma_2^2+\pi_1\pi_2(\mu_2-\mu_1)^2\big)^2.$

When compared with a bell-shaped normal distribution, the expressions in (7)–(10) imply that a mixture distribution can be characterized by any of the following features: the presence of two peaks, right or left skewness, or excess kurtosis. The extent to which these characteristics will be manifest depends on the relative values of $\pi_1$ and $\pi_2$ by which the component distributions in (6) are weighted and on the distance between the component distributions. This distance can be characterized either by the separation between the respective means, $\Delta\mu=\mu_2-\mu_1$, or by the separation between the respective standard deviations, $\Delta\sigma=\sigma_2-\sigma_1$, where we adopt the convention that $\mu_2>\mu_1$ and $\sigma_2>\sigma_1$. For example, if $\Delta\sigma=0$, then the skewness and the relative difference between the two peaks of the mixture distribution depend on $\Delta\mu$ and the weights $\pi_1$ and $\pi_2$. When $\pi_1=\pi_2$, the mixture distribution is symmetric, with two modes becoming more distinct as $\Delta\mu$ increases. Conversely, if $\Delta\mu=0$, then the mixture distribution will have heavy tails depending on the difference between the component standard deviations and their relative weights. See Hamilton (Citation1994, Chapter 22), Timmermann (Citation2000), and Rossi (Citation2014, Chapter 1) for more on these effects.
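The moment formulas (7)–(10) are straightforward to evaluate numerically; the following sketch is a direct transcription (the function name and the example values are ours).

```python
import numpy as np

def mixture_moments(mu1, mu2, s1, s2, p11, p22):
    """Mean, variance, skewness and excess kurtosis of the normal mixture (6),
    as given by (7)-(10)."""
    pi1 = (1 - p22) / (2 - p11 - p22)      # ergodic probability of regime 1
    pi2 = 1 - pi1
    d = mu2 - mu1
    mean = pi1 * mu1 + pi2 * mu2                                        # (7)
    var = pi1 * s1**2 + pi2 * s2**2 + pi1 * pi2 * d**2                  # (8)
    b1 = (pi1 * pi2 * (mu1 - mu2)
          * (3 * (s1**2 - s2**2) + (1 - 2 * pi1) * d**2)) / var**1.5    # (9)
    a = (3 * pi1 * pi2 * (s2**2 - s1**2)**2
         + 6 * d**2 * pi1 * pi2 * (2 * pi1 - 1) * (s2**2 - s1**2)
         + pi1 * pi2 * d**4 * (1 - 6 * pi1 * pi2))
    b2 = a / var**2                                                     # (10)
    return mean, var, b1, b2

# example: equal means but different variances give a symmetric, fat-tailed mixture
print(mixture_moments(0.0, 0.0, 1.0, 2.0, 0.9, 0.9))
```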

To test $H_0(\mu,\sigma)$, we propose a combination of four test statistics based on the theoretical moments in (7)–(10). The four individual statistics are computed from the residual vector $\hat\varepsilon=(\hat\varepsilon_1,\hat\varepsilon_2,\ldots,\hat\varepsilon_T)$ comprising the residuals $\hat\varepsilon_t=y_t-\bar y$, themselves computed as the deviations from the sample mean. Each statistic is meant to detect a specific characteristic of mixture distributions. The first of these statistics is

(11)  $M(\hat\varepsilon)=\dfrac{|m_2-m_1|}{\sqrt{s_2^2+s_1^2}},$
where
$m_2=\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t\,\mathbb{I}[\hat\varepsilon_t>0]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t>0]},\qquad s_2^2=\dfrac{\sum_{t=1}^{T}(\hat\varepsilon_t-m_2)^2\,\mathbb{I}[\hat\varepsilon_t>0]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t>0]},$
and
$m_1=\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t\,\mathbb{I}[\hat\varepsilon_t<0]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t<0]},\qquad s_1^2=\dfrac{\sum_{t=1}^{T}(\hat\varepsilon_t-m_1)^2\,\mathbb{I}[\hat\varepsilon_t<0]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t<0]}.$

The statistic in (11) is a standardized difference between the means of the observations situated above the sample mean and those below the sample mean. The next statistic partitions the observations on the basis of the sample variance $\hat\sigma^2=T^{-1}\sum_{t=1}^{T}\hat\varepsilon_t^2$. Specifically, we consider

(12)  $V(\hat\varepsilon)=\dfrac{v_2(\hat\varepsilon)}{v_1(\hat\varepsilon)},$
where
$v_2=\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t^2\,\mathbb{I}[\hat\varepsilon_t^2>\hat\sigma^2]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t^2>\hat\sigma^2]},\qquad v_1=\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t^2\,\mathbb{I}[\hat\varepsilon_t^2<\hat\sigma^2]}{\sum_{t=1}^{T}\mathbb{I}[\hat\varepsilon_t^2<\hat\sigma^2]},$
so that $v_2>v_1$. Note that we partition on the basis of average values because (6) is a two-component mixture. The last two statistics are the absolute values of the coefficients of skewness and excess kurtosis:
(13)  $S(\hat\varepsilon)=\left|\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t^3}{T(\hat\sigma^2)^{3/2}}\right|$
and
(14)  $K(\hat\varepsilon)=\left|\dfrac{\sum_{t=1}^{T}\hat\varepsilon_t^4}{T(\hat\sigma^2)^{2}}-3\right|,$
which were also considered in Cho and White (Citation2007). Observe that the statistics in (11)–(14) can only be nonnegative and are each likely to be larger in value under the alternative hypothesis. Taken together, they constitute a potentially useful battery of statistics to test $H_0(\mu,\sigma)$ by capturing characteristics of the first four moments of normal mixtures. As one would expect, the power of the tests based on (11)–(14) will generally be increasing with the frequency of regime changes.
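A compact sketch of the four statistics, assuming the residuals passed in have already been demeaned (the function name is ours):

```python
import numpy as np

def moment_statistics(e):
    """Residual moment statistics (11)-(14), computed from residuals e that are
    deviations from the sample mean."""
    e = np.asarray(e, dtype=float)
    T = e.size
    pos, neg = e[e > 0], e[e < 0]
    m2, m1 = pos.mean(), neg.mean()
    s2_sq, s1_sq = ((pos - m2) ** 2).mean(), ((neg - m1) ** 2).mean()
    M = np.abs(m2 - m1) / np.sqrt(s2_sq + s1_sq)         # (11)
    sig2 = (e ** 2).mean()                               # sample variance
    hi, lo = e[e ** 2 > sig2], e[e ** 2 < sig2]
    V = (hi ** 2).mean() / (lo ** 2).mean()              # (12)
    S = np.abs((e ** 3).sum() / (T * sig2 ** 1.5))       # (13)
    K = np.abs((e ** 4).sum() / (T * sig2 ** 2) - 3.0)   # (14)
    return np.array([M, V, S, K])
```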

It is easy to see that the statistics in (11)–(14) are exactly pivotal: they all involve ratios and can each be computed from the vector of standardized residuals $\hat\varepsilon/\hat\sigma$, which is scale and location invariant under the null of linearity. That is, the vector of statistics $(M(\hat\varepsilon),V(\hat\varepsilon),S(\hat\varepsilon),K(\hat\varepsilon))$ is distributed like $(M(\hat\eta),V(\hat\eta),S(\hat\eta),K(\hat\eta))$, where $\eta\sim N(0,I_T)$ and $\hat\eta=\eta-\bar\eta$. The null distribution of the proposed test statistics can thus be simulated to any degree of precision, thereby paving the way for an MC test as follows.

First, compute each of the statistics in (11)–(14) with the actual data to obtain $(M(\hat\varepsilon),V(\hat\varepsilon),S(\hat\varepsilon),K(\hat\varepsilon))$. Then generate $N-1$ mutually independent $T\times 1$ vectors $\eta_i$, $i=1,\ldots,N-1$, where $\eta_i\sim N(0,I_T)$. For each such vector, compute $\hat\eta_i=(\hat\eta_{i1},\hat\eta_{i2},\ldots,\hat\eta_{iT})$ with typical element $\hat\eta_{it}=\eta_{it}-\bar\eta_i$, where $\bar\eta_i$ is the sample mean, and compute the statistics in (11)–(14) based on $\hat\eta_i$ so as to obtain $N-1$ statistics vectors $(M(\hat\eta_i),V(\hat\eta_i),S(\hat\eta_i),K(\hat\eta_i))$, $i=1,\ldots,N-1$. Let $\xi$ denote any one of the above four statistics, $\xi_0$ its original data-based value, and $\xi_i$, $i=1,\ldots,N-1$, the corresponding simulated values. The individual MC p-values are then given by

(15)  $G_\xi[\xi_0;N]=\dfrac{N+1-R_\xi[\xi_0;N]}{N},$
where $R_\xi[\xi_0;N]$ is the rank of $\xi_0$ when $\xi_0,\xi_1,\ldots,\xi_{N-1}$ are placed in increasing order. The associated MC critical regions are defined as
$W_N(\xi)=\big\{R_\xi[\xi_0;N]\ge c_N(\alpha_\xi)\big\}$
with
$c_N(\alpha_\xi)=N-I[N\alpha_\xi]+1,$
where $I[x]$ denotes the largest integer not exceeding $x$. These MC critical regions are exact for any given sample size, $T$. Further discussion and applications of the MC test technique can be found in Dufour and Khalaf (Citation2001) and Dufour (Citation2006).
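The MC p-value in (15) reduces to a rank computation; a minimal sketch is given below (ties are resolved conservatively here, and they occur with probability zero for the continuous statistics above).

```python
import numpy as np

def mc_pvalue(stat0, stats_sim):
    """MC p-value (15): based on the rank of the observed statistic among the
    N values {observed, N-1 simulated}, large values counting against the null."""
    stats_sim = np.asarray(stats_sim, dtype=float)
    N = stats_sim.size + 1
    rank = 1 + np.sum(stats_sim < stat0)   # rank of stat0 in increasing order
    return (N + 1 - rank) / N
```

With N = 20 (the observed statistic plus 19 simulated ones), rejecting whenever this p-value is at most 0.05 gives the exact 5% test mentioned in the Introduction.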

Note that the MC p-values $G_M[M(\hat\varepsilon);N]$, $G_V[V(\hat\varepsilon);N]$, $G_S[S(\hat\varepsilon);N]$, and $G_K[K(\hat\varepsilon);N]$ are not statistically independent and may in fact have a complex dependence structure. Nevertheless, if we choose the individual levels such that $\alpha_M+\alpha_V+\alpha_S+\alpha_K=\alpha$ then, for $TS=\{M,V,S,K\}$, we have by the Boole–Bonferroni inequality:

$\Pr\Big(\bigcup_{\xi\in TS} W_N(\xi)\Big)\le\alpha,$
so the induced test, which consists in rejecting $H_0(\mu,\sigma)$ when any of the individual tests rejects, has level $\alpha$. For example, if we set each individual test level at 2.5%, so that we reject if $G_\xi[\xi_0;N]\le 2.5\%$ for any $\xi\in\{M,V,S,K\}$, then the overall probability of committing a Type I error does not exceed 10%. Such Bonferroni-type adjustments, however, can be quite conservative and lead to power losses; see Savin (Citation1984) for a survey of these issues.

To resolve these multiple comparison issues, we propose an MC test procedure based on combining individual p-values. The idea is to treat the combination like any other (pivotal) test statistic for the purpose of MC resampling. As with double bootstrap schemes (MacKinnon, Citation2009), this approach can be computationally expensive because it requires a second layer of simulations to obtain the p-value of the combined (first-level) p-values. Here, we can ease the computational burden using approximate p-values in the first level. A remarkable feature of the MC test combination procedure is that it remains exact even if the first-level p-values are only approximate. Indeed, the MC procedure implicitly accounts for the fact that the p-value functions may not be individually exact and yields an overall p-value for the combined statistics which itself is exact. For this procedure, we make use of approximate distribution functions taking the simple logistic form:

(16)  $\hat F[x]=\dfrac{\exp(\hat\gamma_0+\hat\gamma_1 x)}{1+\exp(\hat\gamma_0+\hat\gamma_1 x)},$
whose estimated coefficients are given in Table 1 for selected sample sizes. These coefficients were obtained by the method of nonlinear least-squares (NLS) applied to simulated distribution functions comprising a million draws for each sample size. The approximate p-value of, say, $M(\hat\varepsilon)$ is then computed as $\hat G_M[M(\hat\varepsilon)]=1-\hat F_M[M(\hat\varepsilon)]$, where $\hat F_M[x]$ is given by (16) with the associated $\hat\gamma$'s from Table 1. The other p-values $\hat G_V$, $\hat G_S$, and $\hat G_K$ are computed in a similar way.

Table 1. Coefficients of approximate distribution functions.
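The approximation in (16) can be fitted by NLS to a simulated null distribution, as was done to produce Table 1. The sketch below shows the mechanics on whatever number of simulated draws one generates (the paper uses a million per sample size); the use of `curve_fit` and the function names are our choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_cdf(x, g0, g1):
    # approximate distribution function (16), in a numerically stable form
    return 1.0 / (1.0 + np.exp(-(g0 + g1 * x)))

def fit_approximate_cdf(sim_stats):
    """Fit the logistic form (16) to the empirical CDF of statistics simulated
    under the null, by nonlinear least squares."""
    x = np.sort(np.asarray(sim_stats, dtype=float))
    ecdf = np.arange(1, x.size + 1) / x.size
    (g0, g1), _ = curve_fit(logistic_cdf, x, ecdf, p0=[0.0, 1.0])
    return g0, g1

# approximate first-level p-value of an observed statistic value, e.g. for M:
# p_M = 1 - logistic_cdf(M_obs, g0_M, g1_M)
```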

We consider two methods for combining the individual p-values. The first one rejects the null when at least one of the p-values is sufficiently small so that the decision rule is effectively based on the statistic

(17)  $F_{\min}(\hat\varepsilon)=1-\min\big\{\hat G_M[M(\hat\varepsilon)],\,\hat G_V[V(\hat\varepsilon)],\,\hat G_S[S(\hat\varepsilon)],\,\hat G_K[K(\hat\varepsilon)]\big\}.$

The criterion in (17) was suggested by Tippett (Citation1931) and Wilkinson (Citation1951) for combining inferences obtained from independent studies. The second method, suggested by Fisher (Citation1932) and Pearson (Citation1933), again for independent test statistics, is based on the product (rather than the minimum) of the p-values:

(18)  $F_{\times}(\hat\varepsilon)=1-\hat G_M[M(\hat\varepsilon)]\times\hat G_V[V(\hat\varepsilon)]\times\hat G_S[S(\hat\varepsilon)]\times\hat G_K[K(\hat\varepsilon)].$

The MC p-value of the combined statistic in (17), for example, is then given by

(19)  $G_{F_{\min}}[F_{\min}(\hat\varepsilon);N]=\dfrac{N+1-R_{F_{\min}}[F_{\min}(\hat\varepsilon);N]}{N},$
where $R_{F_{\min}}[F_{\min}(\hat\varepsilon);N]$ is the rank of $F_{\min}(\hat\varepsilon)$ when $F_{\min}(\hat\varepsilon),F_{\min}(\hat\eta_1),\ldots,F_{\min}(\hat\eta_{N-1})$ are placed in ascending order. Although the statistics which enter into the computation of (17) and (18) may have a rather complex dependence structure, the MC p-values computed as in (19) are provably exact. See Dufour et al. (Citation2004) and Dufour et al. (Citation2014) for further discussion and applications of these test combination methods.
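Putting the pieces together, the following sketch computes the second-level MC p-value (19) for either combination rule; it reuses `moment_statistics` and `mc_pvalue` from the sketches above, and `approx_pval` stands for the user-supplied mapping from (M, V, S, K) to the four first-level p-values obtained from (16).

```python
import numpy as np

def combined_mc_test(e_obs, approx_pval, N=100, rule="min", seed=0):
    """Second-level MC p-value (19) for the combined statistics (17)-(18)."""
    rng = np.random.default_rng(seed)
    T = len(e_obs)

    def combine(e):
        p = approx_pval(moment_statistics(e))                 # first-level p-values
        return 1 - (p.min() if rule == "min" else p.prod())   # (17) or (18)

    F0 = combine(np.asarray(e_obs, dtype=float))
    F_sim = np.empty(N - 1)
    for i in range(N - 1):
        eta = rng.standard_normal(T)
        F_sim[i] = combine(eta - eta.mean())   # simulated under H0(mu, sigma)
    return mc_pvalue(F0, F_sim)                # MC p-value as in (19)
```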

3.2. Autoregressive dynamics

In this section, we extend the proposed MC tests to Markov-switching models with state-independent autoregressive dynamics. To keep the presentation simple, we describe in detail the test procedure in the case of models with a first-order autoregressive component. Models with higher-order autoregressive components are dealt with by a straightforward extension of the AR(1) case. For convenience, the Markov-switching model with AR(1) component that we treat is given here as

(20)  $y_t=\mu_{s_t}+\phi(y_{t-1}-\mu_{s_{t-1}})+\sigma_{s_t}\varepsilon_t$
where
$\mu_{s_t}=\mu_1\mathbb{I}[S_t=1]+\mu_2\mathbb{I}[S_t=2],\qquad \sigma_{s_t}=\sigma_1\mathbb{I}[S_t=1]+\sigma_2\mathbb{I}[S_t=2].$

The tests exploit the fact that, given the true value of $\phi$, the simulation-based procedures of the previous section can be validly applied to a transformed model. The idea is that if $\phi$ in (20) were known, we could test whether $z_t(\phi)=y_t-\phi y_{t-1}$, defined for $t=2,\ldots,T$, follows a mixture of at least two normals.

Indeed, when $\mu_1\neq\mu_2$ (with $\mu_1,\mu_2\neq 0$), the random variable $z_t(\phi)$ follows a mixture of two normals (when $\phi=0$), three normals (when $|\phi|=1$), or four normals otherwise. That is, when $\phi y_{t-1}$ is subtracted from both sides of (20), the result is a model with a mean that switches between four states according to

$z_t(\phi)=\mu_1^*\mathbb{I}[S_t^*=1]+\mu_2^*\mathbb{I}[S_t^*=2]+\mu_3^*\mathbb{I}[S_t^*=3]+\mu_4^*\mathbb{I}[S_t^*=4]+\big(\sigma_1\mathbb{I}[S_t=1]+\sigma_2\mathbb{I}[S_t=2]\big)\varepsilon_t$
where
(21)  $\mu_1^*=\mu_1(1-\phi),\quad \mu_2^*=\mu_2-\phi\mu_1,\quad \mu_3^*=\mu_1-\phi\mu_2,\quad \mu_4^*=\mu_2(1-\phi)$
and $S_t^*$ is a first-order, four-state Markov chain with transition probability matrix
$P^*=\begin{bmatrix} p_{11} & p_{12} & 0 & 0 \\ 0 & 0 & p_{21} & p_{22} \\ p_{11} & p_{12} & 0 & 0 \\ 0 & 0 & p_{21} & p_{22} \end{bmatrix}.$

If $\mu_1\neq\mu_2$, the quantities in (21) admit either two distinct values (when $\phi=0$), three distinct values (when $\phi=1$ or $-1$), or four distinct values otherwise. Under $H_0(\mu,\sigma)$, the filtered observations $z_t(\phi)$, $t=2,\ldots,T$, are i.i.d. when evaluated at the true value of the autoregressive parameter.

To deal with the fact that $\phi$ is unknown, we use the extension of the MC test technique proposed in Dufour (Citation2006) for handling nuisance parameters. Treating $\phi$ as a nuisance parameter means that the proposed test statistics become functions of $\hat\varepsilon_t(\phi)$, where $\hat\varepsilon_t(\phi)=z_t(\phi)-\bar z(\phi)$. Let $\Omega_\phi$ denote the set of admissible values for $\phi$ which are compatible with the null hypothesis. Depending on the context, the set $\Omega_\phi$ may be $\mathbb{R}$ itself, the open interval $(-1,1)$, the closed interval $[-1,1]$, or any other appropriate subset of $\mathbb{R}$. In light of a minimax argument (Savin, Citation1984), the null hypothesis may then be viewed as a union of point null hypotheses, where each point hypothesis specifies an admissible value for $\phi$. In this case, the statistic in (19) yields a test of $H_0(\mu,\sigma)$ with level $\alpha$ provided the null is rejected only when

$G_{F_{\min}}[F_{\min}(\hat\varepsilon(\phi));N]\le\alpha,\quad \text{for all }\phi\in\Omega_\phi,$
or, equivalently, when
$\sup_{\phi\in\Omega_\phi} G_{F_{\min}}[F_{\min}(\hat\varepsilon(\phi));N]\le\alpha.$

In words, the null is rejected whenever, for all admissible values of $\phi$ under the null, the corresponding point null hypothesis is rejected. Therefore, if $\alpha N$ is an integer, we have under $H_0(\mu,\sigma)$,

$\Pr\Big[\sup\big\{G_{F_{\min}}[F_{\min}(\hat\varepsilon(\phi));N]:\phi\in\Omega_\phi\big\}\le\alpha\Big]\le\alpha,$

i.e., the critical region $\sup\big\{G_{F_{\min}}[F_{\min}(\hat\varepsilon(\phi));N]:\phi\in\Omega_\phi\big\}\le\alpha$ has level $\alpha$. This procedure is called an MMC test. It should be noted that the optimization is done over $\Omega_\phi$ holding fixed the values of the simulated $T\times 1$ vectors $\eta_i$, $i=1,\ldots,N-1$, with $\eta_i\sim N(0,I_T)$, from which the simulated statistics are obtained.

The maximization involved in the MMC test can be numerically challenging for Newton-type methods because the simulated p-value function is discontinuous in $\phi$. Search methods for nonsmooth objectives which do not rely on gradients are therefore necessary. A computationally simplified procedure can be based on a consistent set estimator $C_T$ of $\phi$; i.e., one for which $\lim_{T\to\infty}\Pr[\phi\in C_T]=1$. For example, if $\hat\phi_T$ is a consistent point estimate of $\phi$ and $c$ is any positive number, then the set

$C_T=\big\{\phi\in\Omega_\phi:|\hat\phi_T-\phi|<c\big\}$
is a consistent set estimator of $\phi$; i.e., $\lim_{T\to\infty}\Pr\big[|\hat\phi_T-\phi|<c\big]=1$, for all $c>0$. Under $H_0(\mu,\sigma)$, the critical region based on (19) satisfies
$\lim_{T\to\infty}\Pr\Big(\sup\big\{G_{F_{\min}}[F_{\min}(\hat\varepsilon(\phi));N]:\phi\in C_T\big\}\le\alpha\Big)\le\alpha.$

The procedure may even be based on the singleton set CT={ϕ̂T}, which yields a local MC (LMC) test based on a consistent point estimate. See Dufour (Citation2006) for additional details.
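The LMC and MMC procedures for the AR(1) case can then be sketched as follows, reusing `combined_mc_test` from Section 3.1; the grid of $\phi$ values here spans two OLS standard errors on each side of the point estimate (the interval used in the implementation described in Section 4), and all names are ours.

```python
import numpy as np

def lmc_mmc_ar1(y, approx_pval, N=100, n_grid=41, rule="min", seed=0):
    """LMC and MMC p-values for the AR(1) case: filter the data with candidate
    values of phi and apply combined_mc_test to z_t(phi). The common seed keeps
    the simulated eta vectors fixed across the grid, as the MMC argument requires."""
    y = np.asarray(y, dtype=float)
    # OLS estimation of the null AR(1) model: regress y_t on a constant and y_{t-1}
    X = np.column_stack([np.ones(y.size - 1), y[:-1]])
    b, res, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    phi_hat = b[1]
    s2 = res[0] / (y.size - 1 - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

    def pval(phi):
        z = y[1:] - phi * y[:-1]                  # filtered observations z_t(phi)
        return combined_mc_test(z - z.mean(), approx_pval, N=N, rule=rule, seed=seed)

    lmc = pval(phi_hat)                           # LMC: consistent point estimate
    grid = np.linspace(phi_hat - 2 * se, phi_hat + 2 * se, n_grid)
    mmc = max(pval(phi) for phi in grid)          # MMC: maximize over the grid
    return lmc, mmc
```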

4. Simulation evidence

This section presents simulation evidence on the performance of the proposed MC tests using model (20) as the DGP. As a benchmark for comparison purposes, we take the optimal tests for Markov-switching parameters developed by Carrasco et al. (Citation2014) (CHP). To describe these tests, let $\ell_t=\ell_t(\theta_0)$ denote the log of the predictive density of the $t$th observation under the null hypothesis of a linear model. For model (20), the parameter vector under the null hypothesis becomes $\theta_0=(c,\phi,\sigma^2)$ and we have

$\ell_t=-\dfrac{1}{2}\log(2\pi\sigma^2)-\dfrac{(y_t-c-\phi y_{t-1})^2}{2\sigma^2}.$

Let $\hat\theta_0$ denote the conditional maximum likelihood estimates under the null hypothesis (which can be obtained by OLS) and define

$\ell_t^{(1)}=\left.\dfrac{\partial\ell_t}{\partial\theta}\right|_{\theta=\hat\theta_0}\quad\text{and}\quad \ell_t^{(2)}=\left.\dfrac{\partial^2\ell_t}{\partial\theta\,\partial\theta'}\right|_{\theta=\hat\theta_0}.$

The CHP information matrix-type tests are calculated with

$\Gamma_T=\Gamma_T(h,\rho)=\dfrac{1}{\sqrt{T}}\sum_t \mu_{2,t}(h,\rho)$
where
$\mu_{2,t}(h,\rho)=\dfrac{1}{2}h'\Big[\ell_t^{(2)}+\ell_t^{(1)}\ell_t^{(1)\prime}+2\sum_{s<t}\rho^{t-s}\ell_t^{(1)}\ell_s^{(1)\prime}\Big]h.$

Here, the elements of vector $h$ are a priori measures of the distance between the corresponding switching parameters under the alternative hypothesis, and the scalar $\rho$ characterizes the serial correlation of the Markov chain. To ensure identification, the vector $h$ needs to be normalized such that $\|h\|=1$. For given values of $h$ and $\rho$, let $\hat\varepsilon=\hat\varepsilon(h,\rho)$ denote the residuals of an OLS regression of $\mu_{2,t}(h,\rho)$ on $\ell_t^{(1)}$.

Following the suggestion in CHP, $h$ in the case of model (20) is a three-dimensional vector whose first and third elements (corresponding to a switching mean and variance) are generated uniformly over the unit sphere, and $\rho$ takes values in the interval $[\underline{\rho},\bar{\rho}]=[-0.7,0.7]$. The nuisance parameters in $h$ and $\rho$ can be dealt with in two ways. The first is with a supremum-type test statistic:

$\sup TS=\sup_{\{h,\rho:\ \|h\|=1,\ \underline{\rho}<\rho<\bar{\rho}\}}\ \dfrac{1}{2}\left(\max\left(0,\dfrac{\Gamma_T}{\sqrt{\hat\varepsilon'\hat\varepsilon}}\right)\right)^2$
and the second is with an exponential-type statistic (based on an exponential prior):
$\exp TS=\int_{\{\|h\|=1,\ \underline{\rho}<\rho<\bar{\rho}\}}\Psi(h,\rho)\,dh\,d\rho$
where
$\Psi(h,\rho)=\begin{cases}\sqrt{2\pi}\,\exp\Big[\dfrac{1}{2}\Big(\dfrac{\Gamma_T}{\sqrt{\hat\varepsilon'\hat\varepsilon}}-1\Big)^2\Big]\,\Phi\Big(\dfrac{\Gamma_T}{\sqrt{\hat\varepsilon'\hat\varepsilon}}-1\Big) & \text{if } \hat\varepsilon'\hat\varepsilon\neq 0,\\ 1 & \text{otherwise.}\end{cases}$

Here, $\Phi(\cdot)$ stands for the standard normal cumulative distribution function. CHP suggest using a parametric bootstrap to assess the statistical significance of these statistics because their asymptotic distributions are not free of nuisance parameters. This is done by generating data from the linear AR model with $\hat\theta_0$ and calculating supTS and expTS with each artificial sample. We implemented this procedure using 500 bootstrap replications.
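As a rough illustration of the bootstrap step only (not CHP's actual code), the sketch below fits the null AR(1) model by OLS, simulates artificial samples from it, and recomputes a generic statistic on each; `statistic` is a placeholder for a function such as supTS or expTS, and the p-value convention used is one common choice.

```python
import numpy as np

def parametric_bootstrap_pvalue(y, statistic, B=500, seed=0):
    """Parametric bootstrap p-value for a statistic of the data, with artificial
    samples drawn from the AR(1) model fitted under the null hypothesis."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(y.size - 1), y[:-1]])
    (c, phi), res, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    sigma = np.sqrt(res[0] / (y.size - 1))
    stat0 = statistic(y)
    count = 0
    for _ in range(B):
        yb = np.empty(y.size)
        yb[0] = y[0]                                   # condition on the first observation
        for t in range(1, y.size):
            yb[t] = c + phi * yb[t - 1] + sigma * rng.standard_normal()
        count += statistic(yb) >= stat0
    return (1 + count) / (B + 1)
```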

In the following tables, LMC and MMC stand for the local and maximized MC procedures, respectively. The first-level p-values are computed from the estimated distribution functions in Table 1, and the subscript "min" is used to indicate that the first-level p-values are combined via their minimum, whereas the subscript "×" indicates that they are combined via their product. The MC tests were implemented with N = 100, and the MMC test was performed by maximizing the MC p-value by grid search over an interval defined by taking two standard errors on each side of $\hat\phi_0$, the OLS estimate of $\phi$. The simulation experiments are based on 1000 replications of each DGP configuration.

Table 2. Empirical size of tests for Markov-switching.

For a nominal 5% level, Table 2 reports the empirical size (in percentage) of the LMC, MMC, supTS, and expTS tests for $\phi$ = 0.1, 0.9 and T = 100, 200. The MMC tests are seen to perform according to the developed theory, with empirical rejection rates no greater than 5% under the null hypothesis. The LMC tests based on $\hat\phi_0$ perform remarkably well, revealing an empirical size close to the nominal 5% level in each case. The same can be said about the bootstrap supTS and expTS tests, even though they seem to be less stable than the LMC tests.

Table 3. Empirical power of tests for Markov-switching with ϕ = 0.1.

Table 4. Empirical power of tests for Markov-switching with ϕ = 0.9.

Tables 3 and 4 report the empirical power (in percentage) of the tests for $\phi$ = 0.1 and $\phi$ = 0.9, respectively. The DGP configurations vary the separation between the means $\Delta\mu=\mu_2-\mu_1$ and standard deviations $\Delta\sigma=\sigma_2-\sigma_1$ as $(\Delta\mu,\Delta\sigma)$ = (2,0), (0,1), (2,2); the sample size as T = 100, 200; and the transition probabilities as $(p_{11},p_{22})$ = (0.9,0.9), (0.9,0.5), (0.9,0.1).

As expected, the power of the proposed tests increases with $\Delta\mu$ and $\Delta\sigma$, and with the sample size. For given values of $\Delta\mu$ and $\Delta\sigma$, test power tends to increase with the frequency of regime switches. For example, when $\Delta\mu$ = 2 and $\Delta\sigma$ = 1, the power of the MC tests increases when $p_{22}$ decreases (increases) from 0.9 (0.1) to 0.5. Comparing the LMCmin and MMCmin to LMC× and MMC×, respectively, reveals that there is a power gain in most cases from using the product rule to combine the first-level p-values in the MC procedure. Not surprisingly, the LMC procedures (based on the point estimate $\hat\phi_0$) have better power than the MMC procedures, which maximize the MC p-value over a range of admissible values for $\phi$ to hedge the risk of committing a Type I error.

The supTS and expTS tests generally tend to be more powerful than the MC tests, particularly when there are regimes only in the mean (e.g., $\Delta\mu$ = 2, $\Delta\sigma$ = 0). Nevertheless, it is quite remarkable that the LMC tests have power approaching that of the supTS and expTS tests as soon as the variance is also subject to regime changes. In some cases, the LMC tests even appear to outperform the optimal CHP tests; for instance, this can be observed in the middle portion of these tables, where $\Delta\mu$ = 0 and $\Delta\sigma$ = 1. Another important remark is that the proposed moment-based MC tests are far easier to compute than the information matrix-type bootstrap tests.

5. Empirical illustration

In this section, we present an application of our test procedures to the study by Hamilton (Citation1989) who suggested modeling USA output growth with a Markov-switching specification as in (2) with r = 4 and where only the mean is subject to regime changes. With this model specification, business cycle expansions and contractions can be interpreted as a process of switching between states of high and low growth rates. Hamilton estimated his model by the method of maximum likelihood with quarterly data ranging from 1952Q2 to 1984Q4. Probabilistic inferences on the state of the economy were then calculated and compared with the business-cycle dates as established by the National Bureau of Economic Research. On the basis of simulated residual autocorrelations, Hamilton argued that his Markov-switching model encompasses the linear AR(4) specification.

We applied our proposed MC procedures to formally test the linear AR(4) specification. In this context, the LMC and MMC procedures are based on the filtered observations:

$z_t(\phi)=y_t-\phi_1 y_{t-1}-\phi_2 y_{t-2}-\phi_3 y_{t-3}-\phi_4 y_{t-4},$
where $y_t$ is 100 times the change in the logarithm of USA real GNP. Following Carrasco et al. (Citation2014), we considered Hamilton's original data set (135 observations of $y_t$) and an extended data set including observations from 1952Q2 to 2010Q4 (239 observations of $y_t$). The $\phi$ values used in $z_t(\phi)$ for the LMC procedure are obtained by an OLS regression of $y_t$ on a constant and four of its lags. The MMC test procedure maximizes the MC p-value by grid search over a four-dimensional box defined by taking two standard errors on each side of the OLS parameter estimates. To ensure stationarity of the solutions, we only considered grid points for which the roots of the autoregressive polynomial $1-\phi_1 z-\phi_2 z^2-\phi_3 z^3-\phi_4 z^4=0$ lie outside the unit circle. The number of MC replications was set as N = 100.
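The stationarity screen applied to each grid point amounts to a root check on the AR(4) polynomial; a minimal sketch is given below (the grid construction itself follows the two-standard-error box described above).

```python
import numpy as np

def is_stationary_ar4(phi):
    """True if all roots of 1 - phi1*z - phi2*z^2 - phi3*z^3 - phi4*z^4 = 0
    lie outside the unit circle; numpy orders coefficients from highest degree."""
    coeffs = [-phi[3], -phi[2], -phi[1], -phi[0], 1.0]
    roots = np.roots(coeffs)
    return np.all(np.abs(roots) > 1.0)
```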

Table 5. MC test results: USA real GNP growth.

Table 5 shows the test results for the LMC and MMC procedures based on the minimum and product combination rules. For the MMC statistics, the table reports the maximal MC p-value, the $\phi$ values that maximized the p-value function, and the smallest modulus of the roots of $1-\phi_1 z-\phi_2 z^2-\phi_3 z^3-\phi_4 z^4=0$. These points on the grid with the highest MMC p-values can be interpreted as Hodges–Lehmann-style estimates of the autoregressive parameters (Hodges and Lehmann, Citation1963). In the case of the LMC statistics, the reported $\phi$ values are simply the OLS point estimates.

For Hamilton's data, the results clearly show that the null hypothesis of linearity cannot be rejected at usual levels of significance. Furthermore, the retained values of the autoregressive component yield covariance-stationary representations of output growth. This shows that the GNP data from 1952 to 1984 are entirely compatible with a linear and stationary autoregressive model. It is interesting to note from Table 5 that the MMCmin and MMC× procedures find $\phi$ values yielding p-values equal to 1 for the period 1952Q2–1984Q4. Our MC tests, however, reject the stationary linear AR(4) model with p-values ≤ 0.06 over the extended sample period from 1952 to 2010, which agrees with the findings of Carrasco et al. (Citation2014). The results presented here are also consistent with the evidence in Kim and Nelson (Citation1999) and McConnell and Perez-Quiros (Citation2000) about a structural decline in the volatility of business cycle fluctuations starting in the mid-1980s, the so-called Great Moderation.

6. Conclusion

We have shown how the MC test technique can be used to obtain provably exact and useful tests of linearity in the context of autoregressive models with Markov-switching means and variances. The developed procedure is robust to the identification issues that plague conventional likelihood-based inference methods, because all the required computations are done under the null hypothesis. Another advantage of our MC test procedure is that it is easy to implement and computationally inexpensive.

The suggested test statistics exploit the fact that, under the Markov-switching alternative, the observations unconditionally follow a mixture of at least two normal distributions once the autoregressive component is properly filtered out. Four statistics, each one meant to detect a specific feature of normal mixtures, are combined either through the minimum or the product of their individual p-values. Of course, one may combine any subset of the proposed test statistics or even include others not considered here. As long as the individual statistics are pivotal under the null of linearity, the proposed MC test procedure will control the overall size of the combined test.

The provably exact MMC tests require the maximization of a p-value function over the space of admissible values for the autoregressive parameters. A simplified version (LMC test) limits the maximization to a consistent set estimator. Strictly speaking, the LMC tests are no longer exact in finite samples. Nevertheless, the level constraint will be satisfied asymptotically under much weaker conditions than those typically required for the bootstrap. In terms of both the size and power, the LMC tests based on a consistent point estimate of the autoregressive parameters were found to perform remarkably well in comparison with the bootstrap tests of Carrasco et al. (Citation2014).

The developed approach can also be extended to allow for nonnormal mixtures. Indeed, it is easy to see that the standardized residuals $\hat\varepsilon/\hat\sigma$ remain pivotal under the null of linearity as long as $\varepsilon_t$ in (5) has a completely specified distribution. As in Beaulieu et al. (Citation2007), the MMC test technique can be used to further allow the distribution of $\varepsilon_t$ to depend on unknown nuisance parameters. Such extensions go beyond the scope of the present paper and are left for future work.

References

  • Ang, A., Bekaert, G. (2002a). International asset allocation with regime shifts. Review of Financial Studies 15:1137–1187.
  • Ang, A., Bekaert, G. (2002b). Regime switches in interest rates. Journal of Business and Economic Statistics 20:163–182.
  • Barnard, G. (1963). Comment on 'The spectral analysis of point processes' by M. S. Bartlett. Journal of the Royal Statistical Society (Series B) 25:294.
  • Beaulieu, M.-C., Dufour, J.-M., Khalaf, L. (2007). Multivariate tests of mean-variance efficiency with possible non-Gaussian errors: An exact simulation-based approach. Journal of Business and Economic Statistics 25:398–410.
  • Birnbaum, Z. (1974). Computers and unconventional test-statistics. In: Proschan, F., Serfling, R., eds. Reliability and Biometry. Philadelphia: SIAM, pp. 441–458.
  • Carrasco, M., Hu, L., Ploberger, W. (2014). Optimal test for Markov switching parameters. Econometrica 82(2):765–784.
  • Carter, A., Steigerwald, D. (2012). Testing for regime switching: A comment. Econometrica 80:1809–1812.
  • Carter, A., Steigerwald, D. (2013). Markov regime-switching tests: Asymptotic critical values. Journal of Econometric Methods 2:25–34.
  • Cho, J., White, H. (2007). Testing for regime switching. Econometrica 75:1671–1720.
  • Davies, R. (1977). Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika 64:247–254.
  • Davies, R. (1987). Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika 74:33–43.
  • Davig, T. (2004). Regime-switching debt and taxation. Journal of Monetary Economics 51:837–859.
  • Dufour, J.-M. (2006). Monte Carlo tests with nuisance parameters: A general approach to finite-sample inference and nonstandard asymptotics in econometrics. Journal of Econometrics 133:443–477.
  • Dufour, J.-M., Khalaf, L. (2001). Monte Carlo test methods in econometrics. In: Baltagi, B., ed. Companion to Theoretical Econometrics. Oxford, UK: Basil Blackwell.
  • Dufour, J.-M., Khalaf, L., Bernard, J.-T., Genest, I. (2004). Simulation-based finite-sample tests for heteroskedasticity and arch effects. Journal of Econometrics 122:317–347.
  • Dufour, J.-M., Khalaf, L., Voia, M. (2014). Finite-sample resampling-based combined hypothesis tests, with applications to serial correlation and predictability. Communications in Statistics - Simulation and Computation 44:2329–2347.
  • Dwass, M. (1957). Modified randomization tests for nonparametric hypotheses. Annals of Mathematical Statistics 28:181–187.
  • Engel, C., Hamilton, J. (1990). Long swings in the dollar: Are they in the data and do markets know it? American Economic Review 80:689–713.
  • Fisher, R. (1932). Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.
  • Garcia, R. (1998). Asymptotic null distribution of the likelihood ratio test in Markov switching models. International Economic Review 39:763–788.
  • Garcia, R., Perron, P. (1996). An analysis of the real interest rate under regime shifts. Review of Economics and Statistics 78:111–125.
  • Gouriéroux, C., Monfort, A. (1995). Statistics and Econometric Models, Vol. 1. Cambridge, UK: Cambridge University Press.
  • Guidolin, M. (2011). Markov switching models in empirical finance. In: Drukker, D., ed. Missing Data Methods: Time-Series Methods and Applications (Advances in Econometrics, Volume 27 Part 2). Bingley, UK: Emerald Group Publishing Limited.
  • Hamilton, J. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57:357–384.
  • Hamilton, J. (1994). Time Series Analysis. Princeton, New Jersey: Princeton University Press.
  • Hamilton, J. (2016). Macroeconomic regimes and regime shifts. In: Taylor, J., Uhlig, H., eds. Handbook of Macroeconomics, Vol. 2. Amsterdam, The Netherlands: Elsevier Science Publishers.
  • Hamilton, J., Susmel, R. (1994). Autoregressive conditional heteroskedasticity and changes in regime. Journal of Econometrics 64:307–333.
  • Hansen, B. (1992). The likelihood ratio test under nonstandard conditions: Testing the Markov switching model of GNP. Journal of Applied Econometrics 7:S61–S82.
  • Hansen, B. (1996). Erratum: The likelihood ratio test under nonstandard conditions: Testing the Markov switching model of GNP. Journal of Applied Econometrics 11:195–198.
  • Hodges, J., Lehmann, E. (1963). Estimates of location based on rank tests. The Annals of Mathematical Statistics 35:598–611.
  • Kim, C., Nelson, C. (1999). Has the U.S. economy become more stable? A Bayesian approach based on a Markov-switching model of the business cycle. Review of Economics and Statistics 81:608–616.
  • Lee, L.-F., Chesher, A. (1986). Specification testing when score statistics are identically zero. Journal of Econometrics 31:121–149.
  • MacKinnon, J. (2009). Bootstrap hypothesis testing. In: Belsley, D., Kontoghiorghes, J., eds. Handbook of Computational Econometrics. West Sussex, UK: John Wiley & Sons.
  • McConnell, M., Perez-Quiros, G. (2000). Output fluctuations in the United States: What has changed since the early 1980’s? American Economic Review 90:1464–1476.
  • Pearson, K. (1933). On a method of determining whether a sample of size n supposed to have been drawn from a parent population having a known probability integral has probably been drawn at random. Biometrika 25:379–410.
  • Psaradakis, Z., Sola, M. (1998). Finite-sample properties of the maximum likelihood estimator in autoregressive models with Markov switching. Journal of Econometrics 86:369–386.
  • Rossi, P. (2014). Bayesian Non- and Semi-parametric Methods and Applications. Princeton, NJ: Princeton University Press.
  • Savin, N. (1984). Multiple hypothesis testing. In: Griliches, Z., Intriligator, M., eds. Handbook of Econometrics, Vol. 2. Amsterdam, The Netherlands: Elsevier Science Publishers.
  • Timmermann, A. (2000). Moments of Markov switching models. Journal of Econometrics 96:75–111.
  • Timmermann, A. (2001). Structural breaks, incomplete information and stock prices. Journal of Business and Economic Statistics 19:299–315.
  • Tippett, L. (1931). The Method of Statistics. London: Williams & Norgate.
  • Watson, M., Engle, R. (1985). Testing for regression coefficient stability with a stationary AR(1) alternative. Review of Economics and Statistics 67:341–346.
  • Wilkinson, B. (1951). A statistical consideration in psychological research. Psychological Bulletin 48:156–158.