Abstract
This paper examines a version of P. M. Robinson's tests for unit roots and other fractionally integrated hypotheses in the context of autocorrelated disturbances. We use Monte Carlo simulations to examine the finite-sample behaviour of the tests when the disturbances follow AR(1) processes. Using the asymptotic critical values of the normal distribution, we show that the power of the tests is extremely low in some cases. We then compute finite-sample critical values, but the problem persists unless these critical values are calculated for specific values of the AR parameters. An empirical application illustrating this problem is carried out at the end of the article.
Acknowledgments
The author gratefully acknowledges financial support from the European TMR Grant No. ERBFMRX-CT-98-0213. The comments of an anonymous referee are also acknowledged. The usual disclaimers apply.