Abstract
This article introduces a high-dimensional linear IV regression for data sampled at mixed frequencies. We show that the high-dimensional slope parameter of a high-frequency covariate can be identified and accurately estimated by leveraging a low-frequency instrumental variable. The distinguishing feature of the model is that it allows handling high-dimensional datasets without imposing approximate sparsity restrictions. We propose a Tikhonov-regularized estimator and study its large-sample properties for time series data. The estimator has a closed-form expression that is easy to compute and demonstrates excellent performance in our Monte Carlo experiments. We also provide confidence bands and incorporate exogenous covariates via the double/debiased machine learning approach. In our empirical illustration, we estimate the real-time price elasticity of supply in the Australian electricity spot market. Our estimates suggest that supply is relatively inelastic throughout the day.
Supplementary Materials
The Supplementary Material contains detailed proofs of Theorems 3.2 and 3.4.
Acknowledgments
This article is a substantially revised Chapter 2 of my Ph.D. thesis. I’m deeply indebted to my advisor Jean-Pierre Florens, as well as Eric Gautier, Ingrid van Keilegom, and Timothy Christensen, for helpful discussions and suggestions. I thank participants at the Conference on Inverse Problems in Econometrics at Northwestern University, the ENTER seminar at Tilburg University, the 48èmes Journées de Statistique de la SFdS, the 3rd ISNPS Conference, and the Recent Advances in Econometrics Conference at TSE Toulouse, as well as Aleksandra Babii, David Benatia, Bruno Biais, Pascal Lavergne, and Mario Rothfelder. I’m also grateful to the anonymous referees whose comments helped me significantly improve the article. All remaining errors are mine.
Appendix. Proofs
To prove Theorem 3.1, we need an additional lemma that bounds the expected norm of the sample mean of a covariance stationary, zero-mean, Hilbert space-valued stochastic process by the norm of its autocovariance function $\gamma_h$.
Lemma A.1.1.
Suppose that $(X_t)_{t \in \mathbb{Z}}$ is a zero-mean covariance stationary process in a Hilbert space $H$ with absolutely summable autocovariance function $\gamma_h = \mathbb{E}\langle X_t, X_{t+h} \rangle$, where $\sum_{h \in \mathbb{Z}} |\gamma_h| < \infty$. Then
$$\mathbb{E}\left\| \frac{1}{n} \sum_{t=1}^{n} X_t \right\|^2 \leq \frac{1}{n} \sum_{h \in \mathbb{Z}} |\gamma_h|.$$
Proof.
We have
$$\mathbb{E}\left\| \frac{1}{n} \sum_{t=1}^{n} X_t \right\|^2 = \frac{1}{n^2}\, \mathbb{E}\left\langle \sum_{t=1}^{n} X_t, \sum_{s=1}^{n} X_s \right\rangle = \frac{1}{n^2} \sum_{t=1}^{n} \sum_{s=1}^{n} \mathbb{E}\langle X_t, X_s \rangle = \frac{1}{n^2} \sum_{t=1}^{n} \sum_{s=1}^{n} \gamma_{t-s} \leq \frac{1}{n} \sum_{h \in \mathbb{Z}} |\gamma_h|,$$
where the second equality follows by the bilinearity of the inner product and Fubini’s theorem and the third by covariance stationarity. □
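The bound in Lemma A.1.1 can be checked numerically in the scalar case. The sketch below is purely illustrative: it simulates a stationary AR(1) process with unit innovation variance, for which $\sum_{h} |\gamma_h| = (1-\rho)^{-2}$, and compares the Monte Carlo estimate of $\mathbb{E}|\bar X_n|^2$ with the bound $n^{-1} \sum_h |\gamma_h|$.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, reps = 0.5, 200, 3000

# Simulate x_t = rho * x_{t-1} + e_t from a stationary start and record
# the sample mean in each replication.
means = np.empty(reps)
for r in range(reps):
    e = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = rng.standard_normal() / np.sqrt(1 - rho**2)  # stationary initial draw
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    means[r] = x.mean()

lhs = np.mean(means**2)                    # Monte Carlo estimate of E|xbar_n|^2
rhs = (1.0 / n) / (1.0 - rho)**2           # n^{-1} * sum_h |gamma_h| for AR(1)
```

For this positively autocorrelated process all $\gamma_h$ are positive, so the bound is nearly tight: the left side falls just below the right side.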
The following lemma allows controlling estimation errors appearing in the proof of Theorem 3.1 in terms of more primitive quantities.
Lemma A.1.2.
Suppose that are square-integrable. Then
and
Proof.
By the definition of the operator norm and the Cauchy–Schwarz inequality,
(A.1)
For the second part, use , and the estimate in Equation (A.1). □
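The Cauchy–Schwarz step behind Equation (A.1) is the standard device of bounding the operator norm of an integral operator by its Hilbert–Schmidt norm: $\|Af\| \leq \|k\|_{L^2} \|f\|$ when $A$ has kernel $k$. The sketch below is illustrative only, using the matrix analogue in which the spectral norm is dominated by the Frobenius (Hilbert–Schmidt) norm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Matrix analogue of the Cauchy-Schwarz estimate: the operator (spectral)
# norm is bounded by the Hilbert-Schmidt (Frobenius) norm.
checks = []
for _ in range(5):
    A = rng.standard_normal((30, 20))
    op = np.linalg.norm(A, 2)          # operator (spectral) norm
    hs = np.linalg.norm(A, 'fro')      # Hilbert-Schmidt (Frobenius) norm
    checks.append(op <= hs)

all_hold = all(checks)
```

This is the inequality that lets the proof pass from operator-norm estimates to more primitive second-moment quantities.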
Proof of Theorem 3.1.
The proof is based on the following decomposition:
with
To see that this decomposition holds, note that
Note also that
Therefore, if we can show the desired order for , the conclusion of the theorem would follow. Under Assumption 3.1 by Lemma A.1.1
and
Since
it is sufficient to control each of the four terms separately.
The fourth term is a regularization bias, and its order follows directly from Assumption 3.2 and the isometry of the functional calculus:
There are two cases depending on the value of . For
, the function
attains its maximum at
. For
, the function
is strictly increasing on
, attaining its maximum at the end of the spectrum
. Therefore, since
, we have
This gives .
Next, note that is a finite-rank operator and hence compact. Therefore,
where we use Lemma A.1.2. Similarly,
Lastly, similar computations yield
Combining all the estimates, we obtain . □
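The regularization-bias term controlled above admits a simple spectral illustration. In the sketch below (purely illustrative: the spectrum `sigma` and the coefficients `b` are hypothetical), each coefficient of the population Tikhonov solution is shrunk by the factor $\alpha/(\alpha + \sigma_i^2)$, so the bias decreases monotonically to zero as $\alpha \to 0$.

```python
import numpy as np

# Spectral form of the Tikhonov regularization bias: with the population
# operator known exactly, the solution differs from beta only through the
# per-coefficient shrinkage alpha / (alpha + sigma_i^2).
p = 30
sigma = np.linspace(1.0, 0.05, p)      # hypothetical spectrum of the operator
b = np.linspace(1.0, 0.0, p) ** 2      # hypothetical coefficients of beta

def reg_bias(alpha):
    # Norm of the bias vector (alpha I + K*K)^{-1} K*K beta - beta
    # expressed in the spectral basis.
    return np.linalg.norm(alpha / (alpha + sigma**2) * b)

biases = [reg_bias(a) for a in (1e-1, 1e-3, 1e-6)]
```

In the proof this bias is traded off against the estimation-error terms, which instead deteriorate as $\alpha \to 0$; the rate in Theorem 3.1 comes from balancing the two.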
Proof of Theorem 3.3.
Decompose
By Theorem 3.1, we know that
. Consequently, it remains to control the discretization error
. To that end, note that if
solves
, then
. Therefore,
. Next, decompose
(A.2)
Then
The expression inside the operator norm is an integral operator such that for every
Therefore, by the same computations as in Equation (A.1) and the triangle inequality,
Under Assumption 3.4(i),
and whence
This shows that . □
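The discretization error in Theorem 3.3 stems from replacing the integral operator by its counterpart on a finite sampling grid. The toy sketch below (illustrative only: the kernel, test function, and grid sizes are hypothetical) shows the midpoint-rule approximation of $(Kf)(s) = \int_0^1 k(s,t) f(t)\,dt$ improving as the grid is refined.

```python
import numpy as np

# Toy discretization of an integral operator: approximate
# (Kf)(s) = int_0^1 k(s, t) f(t) dt by a midpoint Riemann sum on m points.
k = lambda s, t: np.exp(-(s - t) ** 2)     # hypothetical smooth kernel
f = lambda t: np.sin(2 * np.pi * t)        # hypothetical test function

def discretized(m, s):
    t = (np.arange(m) + 0.5) / m           # midpoint grid on [0, 1]
    return np.mean(k(s, t) * f(t))

# A very fine grid serves as a proxy for the exact integral.
exact = discretized(100000, 0.3)
errors = [abs(discretized(m, 0.3) - exact) for m in (10, 40, 160)]
```

The error shrinks rapidly with the number of grid points, which is the finite-sample analogue of the discretization error vanishing under Assumption 3.4.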