Abstract
The vector autoregressive moving average (VARMA) model is fundamental to the theory of multivariate time series; however, identifiability issues have led practitioners to abandon it in favor of the simpler but more restrictive vector autoregressive (VAR) model. We narrow this gap with a new optimization-based approach to VARMA identification built upon the principle of parsimony. Among all equivalent data-generating models, we use convex optimization to seek the parameterization that is simplest in a certain sense. A user-specified strongly convex penalty measures model simplicity, and that same penalty then defines an estimator that can be computed efficiently. We establish consistency of our estimators in a double-asymptotic regime. Our nonasymptotic error bound analysis accommodates both the model specification and parameter estimation steps, a feature that is crucial for studying large-scale VARMA algorithms. Our analysis also yields new results on penalized estimation of infinite-order VAR models and on elastic net regression under a singular covariance structure of the regressors, which may be of independent interest. We illustrate the advantage of our method over VAR alternatives on three real data examples.
Supplementary Material
The supplementary material contains all proofs of the theoretical results presented in Section 4, as well as the results of several numerical experiments.
Acknowledgments
We thank the editor and reviewers for their thorough review and greatly appreciate their comments, which substantially improved the quality of the manuscript. We also wish to thank Profs. Christophe Croux, George Michailidis, Suhasini Subba Rao, and Ruey S. Tsay for stimulating discussions and helpful comments.