Abstract
Nonparametric test procedures in predictive regressions retain their limiting null distributions under both low and high regressor persistence, but suffer from low local power compared with misspecified linear predictive regressions. We argue that IV inference is better suited, in terms of local power, to analyzing additive predictive models with uncertain predictor persistence. We then propose a two-step procedure for out-of-sample prediction. For the current estimation window, one first tests for predictability; if the test rejects, one predicts using a nonlinear regression model, otherwise the historical average of stock returns is used. This two-step approach performs better than its competitors, though not by a large margin, in a pseudo-out-of-sample prediction exercise for the S&P 500.
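The two-step forecasting rule described above can be sketched as follows. This is a minimal illustration only: a plain OLS t-test and a linear fit stand in for the article's IV-based predictability test and nonlinear regression model, and the function name and significance level are our own choices.

```python
import numpy as np

def two_step_forecast(y, x):
    """One-step-ahead forecast of the return from the current window.

    Step 1: test for predictability of y[t+1] by the lagged regressor x[t]
            (here a simple OLS t-test as a stand-in for the IV-based test).
    Step 2: if the test rejects, use the fitted regression forecast;
            otherwise fall back to the historical average of y.
    """
    n = len(y)
    X = np.column_stack([np.ones(n - 1), x[:-1]])      # intercept + lagged x
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # OLS fit
    resid = y[1:] - X @ beta
    s2 = resid @ resid / (n - 1 - 2)                   # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])    # s.e. of slope
    t_stat = beta[1] / se
    if abs(t_stat) > 1.96:                             # 5% two-sided test
        return beta[0] + beta[1] * x[-1]               # regression forecast
    return y.mean()                                    # historical average
```

In a pseudo-out-of-sample exercise, this function would be applied to a rolling or expanding estimation window, so that the predictability decision is re-made at every forecast origin.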
Supplementary Materials
The supplementary materials provide additional details on the empirical findings, further simulation results for OLS-based tests, and technical proofs of the formal results given in the article.
Acknowledgments
The authors thank two anonymous referees and an associate editor, as well as Jörg Breitung, Danvee Floro, Michael Funke, Paulo Rodrigues, and Julian Schröder, for very helpful comments. They furthermore thank Ulrich Homm for collaborating on a previous version of the article and for many useful suggestions.