
The Deaton paradox in a long memory context with structural breaks

Pages 3309-3322 | Published online: 14 Jun 2011
 

Abstract

This article contributes to the Permanent Income Hypothesis (PIH) and excess consumption smoothness debate in the context of fractional integration. We show that the excess consumption smoothness result is a consequence of the quarterly data frequency commonly employed in the empirical work. In fact, the I(1) hypothesis is rejected for the income process with monthly data in favour of a fractional integration order lower than 1. Moreover, if a structural break is taken into account, we observe a substantial reduction in the degree of consumption smoothness, especially after the break found in 1975.

Acknowledgements

L.A. Gil-Alana and A. Moreno gratefully acknowledge the financial support from the Spanish Ministerio de Ciencia y Tecnologia (ECO2008-03035, ECO2009-11151 ECON Y FINANZAS, respectively). Comments from the Editor and from two anonymous referees are gratefully acknowledged.

Notes

1 See Gil-Alana and Hualde (Citation2009) for an updated revision of fractional integration in macroeconomic time series.

2 The GPH method is based on a log-periodogram regression; it has been refined in recent years by Robinson (Citation1995), Velasco (Citation1999) and Phillips and Shimotsu (Citation2002), among others.
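The log-periodogram regression behind the GPH method can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the bandwidth choice m = sqrt(n) are our assumptions, and the regression is the standard one of log I(λ_j) on log(4 sin²(λ_j/2)), whose OLS slope equals -d.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak (1983) log-periodogram estimate of the
    fractional integration order d (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))  # a common, though not unique, bandwidth choice
    # periodogram at the Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())
    I = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    # regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)); slope = -d
    X = np.log(4 * np.sin(lam / 2) ** 2)
    Y = np.log(I)
    slope = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
    return -slope

# example: a pure random walk should yield an estimate of d near 1
rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.standard_normal(2000))
d_hat = gph_estimate(random_walk)
```

The small-m bandwidth keeps the regression local to the zero frequency, where the long-memory behaviour dominates; refinements such as Robinson (Citation1995) trim or pool frequencies to improve efficiency.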

3 We use disposable income rather than labour income because no monthly data are available for the latter series.

4 For the purpose of this work, an I(0) process is defined as a covariance stationary process with a spectral density function that is positive and bounded at the zero frequency. Thus, it includes the stationary ARMA models.
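The fractional integration framework underlying this definition can be stated compactly (standard notation, not reproduced from the article): a process $x_t$ is said to be I(d) if

$$(1 - L)^d x_t = u_t, \qquad t = 1, 2, \ldots,$$

where $L$ is the lag operator and $u_t$ is I(0) in the sense above, with the fractional difference defined through the binomial expansion

$$(1 - L)^d = \sum_{j=0}^{\infty} \binom{d}{j} (-L)^j .$$

For $0 < d < 1$ the process displays long memory but shocks die out, unlike the unit-root case $d = 1$; this is the sense in which the article's estimates of $d < 1$ for income weaken the excess-smoothness result.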

5 Very similar results were obtained with time domain estimation procedures (e.g. Sowell, Citation1992b; Tanaka, Citation1999).

6 Note, however, that the quarterly data start in 1947Q1 while the monthly data start in 1959M1. We performed the analogous analysis with quarterly data starting in 1959Q1 and the results were very similar: for the original series, d was estimated to be 0.991 and 1.111 for the white noise and AR(1) cases respectively, whereas we obtained 1.016 and 1.186 for the log-transformed data.

7 The AR(1) model also outperforms other higher AR order specifications.

8 The Ohanissian et al. (Citation2008) method was also applied using the more refined estimate of Robinson (Citation1995), and the test statistics were practically the same as with the GPH (Citation1983) approach.

9 Given the sample sizes used in this article, the inclusion of more than one break would result in relatively short subsamples, thus invalidating the analysis based on fractional integration.

10 We also applied the Ohanissian et al. (Citation2008) method to each of the two subsamples in both the linear and the log-linear cases. The corresponding statistics were 1.342 (linear) and 0.00058 (log-linear) for the first subsample, and 0.210 (linear) and 0.00178 (log-linear) for the second subsample. Thus, in all cases the evidence suggests that the long memory property found in the data is not a spurious phenomenon.

