Abstract
This article proposes more efficient estimators for the leverage effect than the existing ones. The idea is to allow for nonuniform kernel functions in the spot volatility estimates or the aggregated returns. This finding highlights a critical difference between the leverage effect and integrated volatility functionals, where the uniform kernel is optimal. Another distinction between these two cases is that the overlapping estimators of the leverage effect are more efficient than the nonoverlapping ones. We offer two perspectives to explain these differences: one is based on the “effective kernel” and the other on the correlation structure of the nonoverlapping estimators. The simulation study shows that the proposed estimator with a nonuniform kernel substantially increases the estimation efficiency and testing power relative to the existing ones.
Supplementary Materials
The supplementary materials include additional discussion of various issues related to the main text and the proofs of all the theorems in the article.
Acknowledgments
I would like to thank the editor, the associate editor, and the anonymous referees for their thoughtful comments and suggestions, which led to significant improvements. I would also like to thank Jia Li for helpful discussions and suggestions.
Notes
1 The cited article proposes both a price-only realized leverage (PRL) estimator and an instrument-based realized leverage (IRL) estimator. Here, we discuss only the IRL estimator, because the other cited articles and the current article consider only the price-only type of estimator.
2 If we do not introduce the kernel g but instead set it to be , then it is not possible to include the estimator considered by Wang and Mykland (2014) as a special case of . Moreover, since the efficiency bound is easy to find in this restricted case, imposing it may obscure the complex nature of the problem.
3 The cited article focuses on the integrated leverage, which is defined differently. However, the integral of the process , which is equivalent to S defined in this article with f being the identity function, corresponds to the leverage effect considered by other articles in the literature (e.g., Wang and Mykland 2014; Aït-Sahalia and Jacod 2014).
4 The continuous leverage effect therein is defined in the same way as in the current article. The cited article also discusses the discontinuous leverage effect, which is a measure of price and volatility co-jumps, and the “leverage parameter,” which is the standardized version of the continuous leverage effect. Since these two additional estimators are fundamentally different from those considered in this article, we do not compare them here.
5 According to the proof given in Appendix chap. B.2.4 of Aït-Sahalia and Jacod (2014), the factor in front of the squared leverage term in (8.38) of the cited book should be 23/15, rather than 23/30. The authors list the evaluation of , where , in (B.94). The second-to-last line reads “ if .” (β therein is the square root of ι here.) We think the case is missing. The term is a sum of the products of and defined in the top part of p. 541. According to the bottom part of the same page, both and are nonzero for the values of m given there. One can calculate from those expressions that rather than zero.
6 It certainly cannot be attained when h is nonnegative and bounded. According to the definition (2.11), we have and . In addition, H is continuous for nonnegative and bounded h. Hence, it is not possible to make H almost everywhere constant on , that is, proportional to .
7 In particular, this allows us to calculate the asymptotic variance of the estimator in an easier way. The main focus of Kalnina and Xiu (2017) is the leverage parameter. Hence, the asymptotic variance provided in the article is not that of .
8 We thank an anonymous reviewer for pointing this out.
9 Aït-Sahalia and Xiu (2019) consider three levels for the variance of noise in their simulation study: (large), (medium), and (small). Here, we zoom in to the medium-to-large range to evaluate the impact of noise.
10 Sometimes, a transaction occurs at a price level that is very different from the prices before and after it. We identified and removed many such outliers (it is hard to find all of them); consequently, the volatility signature plot becomes much flatter at high sampling frequencies. More specifically, we first set a threshold value to determine which returns are subject to scrutiny (0.2% for 1-sec returns). We then check how long the log-price stays at a similar level. If the price bounces back to the previous level within only a few seconds (e.g., one or two), we further compare this return with three times the sample standard deviation on that day computed from the non-suspicious log-price changes. If it exceeds the latter, we identify the transaction as an outlier. One can add more criteria to refine this procedure. For example, if the previous return is also relatively large, then this large return might not be an outlier but simply a reflection of the price-discovery process with large disagreement among traders. See the supplementary materials for more details.
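The screening procedure in this note can be sketched in code. The following is a minimal illustrative implementation, not the authors' actual cleaning script: the function name, the bounce-back window, and the tolerance for "a similar level" are assumptions made here for concreteness.

```python
import numpy as np

def flag_outliers(log_prices, threshold=0.002, bounce_window=2, k=3.0):
    """Screen a day's 1-sec log-price series for outlier transactions.

    Illustrative sketch only; parameter names and defaults (threshold,
    bounce_window, k) are assumptions, not the article's exact choices.
    """
    returns = np.diff(log_prices)
    # Step 1: returns exceeding the threshold are subject to scrutiny.
    suspicious = np.abs(returns) > threshold
    # Sample std of the day's non-suspicious log-price changes.
    clean_std = returns[~suspicious].std()

    outliers = np.zeros(len(log_prices), dtype=bool)
    for i in np.where(suspicious)[0]:
        pre = log_prices[i]  # price level just before the suspicious move
        # Step 2: does the price bounce back to the previous level
        # within a few seconds (here: bounce_window subsequent ticks)?
        post = log_prices[i + 2 : i + 2 + bounce_window]
        bounced = post.size > 0 and np.any(np.abs(post - pre) < threshold / 2)
        # Step 3: compare the return with k times the clean-sample std.
        if bounced and np.abs(returns[i]) > k * clean_std:
            outliers[i + 1] = True  # the transaction at index i+1 is flagged
    return outliers

# Toy example: a noisy log-price path with one injected erroneous print
# that immediately bounces back to the previous level.
rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0, 2e-4, 200))
log_prices[100] += 0.01
flags = flag_outliers(log_prices)
```

Note that the second leg of the bounce (the return from the bad print back to the normal level) is deliberately not flagged: its pre-move level is the spiked price, so it fails the bounce-back check, and only the erroneous transaction itself is removed.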
11 Marston and Perry (1996) also state that their results call into question the application of the Hamada adjustment using industry classes.
12 Note that can be decomposed into one part related to and another part related to . The latter term is not significantly negative, but it contributes to the overall asymptotic variance. Hence, the rejection rates against will be smaller than those against .