Abstract
It is well documented in the literature that the sample skewness and excess kurtosis can be severely biased in finite samples. In this paper, we derive analytical results for their finite-sample biases up to the second order. In general, the bias results depend on the cumulants (up to the sixth order) as well as the dependency structure of the data. Using an AR(1) process for illustration, we show that a feasible bias-correction procedure based on our analytical results works remarkably well for reducing the bias of the sample skewness. Bias-correction also works reasonably well for the sample kurtosis under a moderate degree of dependency. In terms of hypothesis testing, bias-correction offers power improvement when testing for normality, and bias-correction under the null also provides size improvement. However, for testing nonzero skewness and/or excess kurtosis, there exist nonnegligible size distortions in finite samples, and bias-correction may not help.
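To make the quantities under study concrete, the following sketch simulates a Gaussian AR(1) process and computes the (biased) moment estimators of skewness and excess kurtosis. This is an illustration of the estimators the abstract refers to, not the paper's bias-correction procedure; the function names are ours.

```python
import numpy as np

def sample_skew_kurt(x):
    """Sample skewness g1 and excess kurtosis g2 (biased moment estimators)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d**2)
    m3 = np.mean(d**3)
    m4 = np.mean(d**4)
    return m3 / m2**1.5, m4 / m2**2 - 3.0

def simulate_ar1(n, rho, sigma=1.0, burn=500, rng=None):
    """Simulate a Gaussian AR(1) process x_t = rho * x_{t-1} + e_t."""
    rng = np.random.default_rng(rng)
    e = rng.normal(scale=sigma, size=n + burn)
    x = np.empty(n + burn)
    x[0] = e[0] / np.sqrt(1.0 - rho**2)  # start near stationarity
    for t in range(1, n + burn):
        x[t] = rho * x[t - 1] + e[t]
    return x[burn:]

# Population skewness and excess kurtosis are zero for this Gaussian process,
# so any systematic deviation in repeated samples reflects finite-sample bias.
x = simulate_ar1(200, rho=0.5, rng=42)
g1, g2 = sample_skew_kurt(x)
```

In repeated simulations at small n, averaging g1 and g2 across replications exhibits the finite-sample bias that the paper's analytical results characterize.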
ACKNOWLEDGMENT
The author is grateful to an anonymous referee for constructive and detailed feedback that greatly improved the paper. The author benefited from discussions with Anil Bera and seminar participants at Purdue (statistics), and from conference participants at the 2009 Midwest Econometrics Group meeting (West Lafayette) and the 2010 International Symposium on Econometric Theory and Applications (Singapore). Bai and Ng's (2005) GAUSS code was used (with some modification) to calculate the Newey–West-type covariance estimator for V in Section 3.
Notes
A natural alternative is to use the bootstrap for possible bias reduction. We nevertheless think it is useful to follow the present approach and develop analytical results, because doing so should eventually enable a better understanding of the actual nature of finite-sample problems, especially in cases where the design of appropriate resampling schemes is complicated. In Section 2, however, we do include the (model-based) bootstrap results for comparison purposes, following the referee's suggestion.
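A model-based bootstrap bias correction of the kind mentioned here can be sketched as follows. This is a minimal illustration under an assumed Gaussian AR(1) model, not the paper's procedure; the function names and tuning choices (B, burn-in length) are ours.

```python
import numpy as np

def sample_skewness(x):
    """Biased moment estimator of skewness."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2)**1.5

def bootstrap_bias_corrected_skewness(x, B=199, burn=100, seed=0):
    """Parametric (model-based) bootstrap bias correction, assuming an AR(1)
    with Gaussian innovations. Fits x_t = rho * x_{t-1} + e_t by least
    squares, simulates B series from the fitted model, and subtracts the
    estimated bias. Under the fitted Gaussian model the true skewness is 0,
    so the mean bootstrap skewness estimates the bias directly."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    rho = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])
    resid = d[1:] - rho * d[:-1]
    sigma = resid.std(ddof=1)
    rng = np.random.default_rng(seed)
    g1_hat = sample_skewness(x)
    boot = np.empty(B)
    for b in range(B):
        e = rng.normal(scale=sigma, size=n + burn)
        xb = np.empty(n + burn)
        xb[0] = e[0]
        for t in range(1, n + burn):
            xb[t] = rho * xb[t - 1] + e[t]
        boot[b] = sample_skewness(xb[burn:])
    return g1_hat - boot.mean()

# Usage on a simulated Gaussian AR(1) series:
rng = np.random.default_rng(7)
e = rng.normal(size=300)
x = np.empty(300)
x[0] = e[0]
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + e[t]
g1_bc = bootstrap_bias_corrected_skewness(x, B=99)
```

Designing such a scheme requires committing to a model for the dependence, which is one reason analytical bias results remain useful when the appropriate resampling design is unclear.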
Other choices of the number of subsamples were tried, and the results reported in Table are quite robust to the choice.
To establish diagonality of V, a somewhat stronger assumption than unconditional normality of x_t is needed, namely, that the process x_t is Gaussian (joint normality of any vector of x at different times). This assumption was implicit in the proof of Theorem 4 in Bai and Ng (2005). It was explicit in Lobato and Velasco (2004) in establishing their G test.
Bai and Ng (2005) and Bontemps and Meddahi (2005) also looked directly at the centered sample third and fourth moments instead of the sample skewness and kurtosis. However, the test of Lobato and Velasco (2004) turns out to be relatively easier to implement.
For bias-correcting nonzero γ1, the nuisance parameters under the null include not only ρ, but also γ2 and γ3. Since we do not have an analytical bias result for , we only bias-correct and . This is also the case for all the 's in Tables and , where only , and are bias-corrected (when applicable) and the other nuisance-parameter estimators are not bias-corrected.