High-Dimensional Mixed-Frequency IV Regression

Pages 1470-1483 | Published online: 21 Jul 2021
 

Abstract

This article introduces a high-dimensional linear IV regression for data sampled at mixed frequencies. We show that the high-dimensional slope parameter of a high-frequency covariate can be identified and accurately estimated by leveraging a low-frequency instrumental variable. The distinguishing feature of the model is that it allows handling high-dimensional datasets without imposing approximate sparsity restrictions. We propose a Tikhonov-regularized estimator and study its large sample properties for time series data. The estimator has a closed-form expression that is easy to compute and demonstrates excellent performance in our Monte Carlo experiments. We also provide confidence bands and incorporate the exogenous covariates via the double/debiased machine learning approach. In our empirical illustration, we estimate the real-time price elasticity of supply on the Australian electricity spot market. Our estimates suggest that the supply is relatively inelastic throughout the day.
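Since no code accompanies this page, here is a minimal numerical sketch (not the author's implementation) of the closed-form Tikhonov estimator $\hat\beta=(\alpha I+\hat K^*\hat K)^{-1}\hat K^*\hat r$ on discretized grids. The function name `tikhonov_iv`, the uniform grids, and the quadrature weights are illustrative assumptions; the paper's confidence bands and double/debiased machine learning steps are omitted.

```python
import numpy as np

def tikhonov_iv(Y, Z, Psi, alpha):
    """Sketch of the closed-form Tikhonov estimator on uniform grids.

    Y   : (T,)   low-frequency outcome
    Z   : (T, m) high-frequency covariate Z_t(s_j) on m grid points in [0, 1]
    Psi : (T, q) instrument transforms Psi(u_l, W_t) on q grid points in [0, 1]
    Returns the discretized slope beta_hat(s_j), j = 1..m.
    """
    T, m = Z.shape
    q = Psi.shape[1]
    ds, du = 1.0 / m, 1.0 / q              # uniform quadrature weights
    k_hat = Z.T @ Psi / T                  # sample kernel k_hat(s_j, u_l)
    r_hat = Psi.T @ Y / T                  # sample moment r_hat(u_l)
    KstarK = (k_hat @ k_hat.T) * du * ds   # discretized K_hat* K_hat acting on L2(s)
    Kstar_r = (k_hat @ r_hat) * du         # discretized K_hat* r_hat
    # closed form: beta_hat = (alpha I + K* K)^{-1} K* r_hat
    return np.linalg.solve(alpha * np.eye(m) + KstarK, Kstar_r)
```

For fixed $\alpha>0$ the estimator is a single linear solve, which is what makes the closed form cheap to compute; as $\alpha\to\infty$ it shrinks to zero.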

Supplementary Materials

The Supplementary Material contains detailed proofs of Theorems 3.2 and 3.4.

Acknowledgments

This article is a substantially revised Chapter 2 of my Ph.D. thesis. I’m deeply indebted to my advisor Jean-Pierre Florens as well as Eric Gautier, Ingrid van Keilegom, and Timothy Christensen for helpful discussions and suggestions. I thank participants at the Conference on Inverse Problems in Econometrics at Northwestern University, the ENTER seminar at Tilburg University, the 48èmes Journées de Statistique de la SFdS, the 3rd ISNPS Conference, and the Recent Advances in Econometrics Conference at TSE Toulouse, as well as Aleksandra Babii, David Benatia, Bruno Biais, Pascal Lavergne, and Mario Rothfelder. I’m also grateful to the anonymous referees whose comments helped me significantly improve the article. All remaining errors are mine.

Appendix. Proofs

To prove Theorem 3.1, we need an additional lemma that bounds the expected squared norm of the sample mean of a covariance-stationary zero-mean $L^2(S)$-valued stochastic process $(X_t)_{t\in\mathbb{Z}}$ by the norm of its autocovariance function $\gamma_h$.

Lemma A.1.1.

Suppose that $(X_t)_{t\in\mathbb{Z}}$ is a zero-mean covariance-stationary process in $L^2(S)$ with absolutely summable autocovariance function, $\sum_{h\in\mathbb{Z}}\|\gamma_h\|_1<\infty$, where $\|\gamma_h\|_1=\int_S|\gamma_h(s,s)|\,ds$. Then $$\mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T X_t\right\|^2\le\frac{1}{T}\sum_{h\in\mathbb{Z}}\|\gamma_h\|_1.$$

Proof.

We have $$\begin{aligned}\mathbb{E}\left\|\frac{1}{T}\sum_{t=1}^T X_t\right\|^2&=\frac{1}{T^2}\,\mathbb{E}\left\langle\sum_{t=1}^T X_t,\sum_{k=1}^T X_k\right\rangle\\&=\frac{1}{T^2}\sum_{t,k=1}^T\int_S\mathbb{E}[X_t(s)X_k(s)]\,ds\\&=\frac{1}{T^2}\sum_{t,k=1}^T\int_S\gamma_{t-k}(s,s)\,ds\\&=\frac{1}{T}\sum_{|h|<T}\frac{T-|h|}{T}\int_S\gamma_h(s,s)\,ds\\&\le\frac{1}{T}\sum_{h\in\mathbb{Z}}\int_S|\gamma_h(s,s)|\,ds,\end{aligned}$$ where the second line follows by the bilinearity of the inner product and Fubini’s theorem, and the third by covariance stationarity. □
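The bound in Lemma A.1.1 can be sanity-checked numerically in the simplest scalar case (a one-point $S$, so the $L^2(S)$ norm is just the absolute value). The AR(1) specification, seed, and sample sizes below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of Lemma A.1.1 in the scalar case: an AR(1) process
# X_t = rho X_{t-1} + e_t has gamma_h = rho^|h| / (1 - rho^2), so
# sum_h |gamma_h| = 1 / (1 - rho)^2 and the lemma predicts
# E | (1/T) sum_t X_t |^2 <= 1 / (T (1 - rho)^2).
rng = np.random.default_rng(0)
rho, T, reps = 0.5, 200, 5000
x = rng.normal(0.0, 1.0 / np.sqrt(1 - rho**2), size=reps)  # stationary start
total = np.zeros(reps)
for _ in range(T):
    x = rho * x + rng.normal(size=reps)
    total += x
mc_second_moment = np.mean((total / T)**2)
bound = 1.0 / (T * (1 - rho)**2)
print(mc_second_moment, bound)
```

For these parameter values the bound is nearly sharp: the Monte Carlo second moment sits just below $1/(T(1-\rho)^2)$, reflecting the small $(T-|h|)/T$ correction in the proof.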

The following lemma allows controlling estimation errors appearing in the proof of Theorem 3.1 in terms of more primitive quantities.

Lemma A.1.2.

Suppose that $\hat k, k, \hat r, r, \beta$ are square-integrable. Then $$\mathbb{E}\|\hat K-K\|^2\le\mathbb{E}\|\hat k-k\|^2\quad\text{and}\quad\mathbb{E}\|\hat r-\hat K\beta\|^2\le 2\,\mathbb{E}\|\hat r-r\|^2+2\|\beta\|^2\,\mathbb{E}\|\hat k-k\|^2.$$

Proof.

By the definition of the operator norm and the Cauchy–Schwarz inequality, $$\mathbb{E}\|\hat K-K\|^2=\mathbb{E}\Big[\sup_{\|\phi\|\le 1}\|\hat K\phi-K\phi\|^2\Big]=\mathbb{E}\Big[\sup_{\|\phi\|\le 1}\int\Big|\int\phi(s)\big(\hat k(s,u)-k(s,u)\big)\,ds\Big|^2du\Big]\le\mathbb{E}\|\hat k-k\|^2.\tag{A.1}$$

For the second part, use $r=K\beta$, the inequality $\|a+b\|^2\le 2\|a\|^2+2\|b\|^2$, the bound $\|(\hat K-K)\beta\|\le\|\hat K-K\|\,\|\beta\|$, and the estimate in Equation (A.1): $$\mathbb{E}\|\hat r-\hat K\beta\|^2\le 2\,\mathbb{E}\|\hat r-r\|^2+2\,\mathbb{E}\|\hat K\beta-K\beta\|^2\le 2\,\mathbb{E}\|\hat r-r\|^2+2\|\beta\|^2\,\mathbb{E}\|\hat k-k\|^2.\ \square$$

Proof of Theorem 3.1.

The proof is based on the following decomposition: $$\hat\beta-\beta=(\alpha I+K^*K)^{-1}K^*(\hat r-\hat K\beta)+\underbrace{R_1+R_2+R_3+R_4}_{\equiv R_T}$$ with $$\begin{aligned}R_1&=\big[(\alpha I+\hat K^*\hat K)^{-1}\hat K^*-(\alpha I+K^*K)^{-1}K^*\big](\hat r-\hat K\beta)\\ R_2&=\alpha(\alpha I+\hat K^*\hat K)^{-1}\hat K^*(\hat K-K)(\alpha I+K^*K)^{-1}\beta\\ R_3&=\alpha(\alpha I+\hat K^*\hat K)^{-1}(\hat K^*-K^*)K(\alpha I+K^*K)^{-1}\beta\\ R_4&=(\alpha I+K^*K)^{-1}K^*K\beta-\beta.\end{aligned}$$

To see that this decomposition holds, note that $$\begin{aligned}R_2+R_3&=\alpha(\alpha I+\hat K^*\hat K)^{-1}\big[\hat K^*\hat K-K^*K\big](\alpha I+K^*K)^{-1}\beta\\&=\alpha(\alpha I+\hat K^*\hat K)^{-1}\big[(\alpha I+\hat K^*\hat K)-(\alpha I+K^*K)\big](\alpha I+K^*K)^{-1}\beta\\&=\alpha(\alpha I+K^*K)^{-1}\beta-\alpha(\alpha I+\hat K^*\hat K)^{-1}\beta\\&=\big[I-\alpha(\alpha I+\hat K^*\hat K)^{-1}\big]\beta+\big[\alpha(\alpha I+K^*K)^{-1}-I\big]\beta\\&=(\alpha I+\hat K^*\hat K)^{-1}\hat K^*\hat K\beta-(\alpha I+K^*K)^{-1}K^*K\beta.\end{aligned}$$

Note also that $$\hat r-\hat K\beta=\frac{1}{T}\sum_{t=1}^T U_t\,\Psi(\cdot,W_t).$$

Therefore, if we can show the desired order for $\mathbb{E}\|R_T\|^2$, the conclusion of the theorem follows. Under Assumption 3.1, by Lemma A.1.1, $$\mathbb{E}\|\hat r-r\|^2=\mathbb{E}\Big\|\frac{1}{T}\sum_{t=1}^T\big\{Y_t\Psi(\cdot,W_t)-\mathbb{E}[Y_t\Psi(\cdot,W_t)]\big\}\Big\|^2\lesssim\frac{1}{T}$$ and $$\mathbb{E}\|\hat k-k\|^2=\mathbb{E}\Big\|\frac{1}{T}\sum_{t=1}^T\big\{Z_t(\cdot)\Psi(\cdot,W_t)-\mathbb{E}[Z(\cdot)\Psi(\cdot,W)]\big\}\Big\|^2\lesssim\frac{1}{T}.$$

Since $\|R_T\|\le\|R_1\|+\|R_2\|+\|R_3\|+\|R_4\|$, it suffices to control each of the four terms separately.

The fourth term is the regularization bias, and its order follows directly from Assumption 3.2 and the isometry of the functional calculus: $$\|R_4\|=\big\|\big[(\alpha I+K^*K)^{-1}K^*K-I\big]\beta\big\|\lesssim\big\|\big[I-(\alpha I+K^*K)^{-1}K^*K\big](K^*K)^\gamma\big\|=\sup_{\lambda\in\sigma(K^*K)}\Big|\Big(1-\frac{\lambda}{\alpha+\lambda}\Big)\lambda^\gamma\Big|=\sup_{\lambda\in\sigma(K^*K)}\Big|\frac{\lambda^\gamma}{\alpha+\lambda}\Big|\,\alpha.$$

There are two cases depending on the value of $\gamma>0$. For $\gamma\in(0,1)$, the function $\lambda\mapsto\lambda^\gamma/(\alpha+\lambda)$ attains its maximum at $\lambda=\alpha\gamma/(1-\gamma)$. For $\gamma\ge 1$, the function $\lambda\mapsto\lambda^\gamma/(\alpha+\lambda)$ is strictly increasing on $[0,\infty)$, attaining its maximum at the end of the spectrum, $\lambda=\|K^*K\|$. Therefore, since $\gamma^\gamma(1-\gamma)^{1-\gamma}\le 1$ for $\gamma\in(0,1)$, we have $$\sup_{\lambda\in\sigma(K^*K)}\frac{\lambda^\gamma}{\alpha+\lambda}\lesssim\begin{cases}\|K^*K\|^{\gamma-1},&\gamma\ge 1\\ \alpha^{\gamma-1},&\gamma\in(0,1).\end{cases}$$

This gives $\|R_4\|\lesssim\alpha^{\gamma\wedge 1}$.
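The two-case spectral bound just used can be verified numerically. The particular values of $\alpha$ and $\gamma$ below are illustrative assumptions; the check confirms that for $\gamma\in(0,1)$ the supremum of $\lambda^\gamma/(\alpha+\lambda)$ equals $\gamma^\gamma(1-\gamma)^{1-\gamma}\alpha^{\gamma-1}\le\alpha^{\gamma-1}$.

```python
import numpy as np

# Numerical check of the spectral bound behind the regularization bias:
# for gamma in (0, 1), sup_{lambda >= 0} lambda^gamma / (alpha + lambda)
# is attained at lambda = alpha * gamma / (1 - gamma), with value
# gamma^gamma * (1 - gamma)^(1 - gamma) * alpha^(gamma - 1) <= alpha^(gamma - 1).
alpha, gamma = 1e-3, 0.4                      # illustrative values
lam = np.linspace(1e-12, 10.0, 2_000_000)     # fine grid over the "spectrum"
grid_sup = (lam**gamma / (alpha + lam)).max()
closed_form = gamma**gamma * (1 - gamma)**(1 - gamma) * alpha**(gamma - 1)
print(grid_sup, closed_form, alpha**(gamma - 1))
```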

Next, note that $\hat K$ is a finite-rank operator and hence compact. Therefore, $$\|R_2\|\le\big\|(\alpha I+\hat K^*\hat K)^{-1}\hat K^*\big\|\,\|\hat K-K\|\,\big\|\alpha(\alpha I+K^*K)^{-1}(K^*K)^\gamma\big\|\le\sup_{\lambda\in\sigma(\hat K^*\hat K)}\Big|\frac{\lambda^{1/2}}{\alpha+\lambda}\Big|\,\|\hat k-k\|\sup_{\lambda\in\sigma(K^*K)}\Big|\frac{\lambda^\gamma}{\alpha+\lambda}\Big|\,\alpha\lesssim_P\frac{\alpha^{\gamma\wedge 1}}{\sqrt{\alpha T}},$$ where we use Lemma A.1.2. Similarly, $$\|R_3\|\le\big\|(\alpha I+\hat K^*\hat K)^{-1}\big\|\,\|\hat K^*-K^*\|\,\alpha\,\big\|K(\alpha I+K^*K)^{-1}(K^*K)^\gamma\big\|\le\frac{1}{\alpha}\,\|\hat k-k\|\sup_{\lambda\in\sigma(K^*K)}\Big|\frac{\lambda^{\gamma+1/2}}{\alpha+\lambda}\Big|\,\alpha\lesssim_P\frac{\alpha^{\gamma\wedge(1/2)}}{\sqrt{\alpha T}}.$$

Lastly, similar computations yield $$\begin{aligned}\|R_1\|&\le\big\|(\alpha I+\hat K^*\hat K)^{-1}\hat K^*-(\alpha I+K^*K)^{-1}K^*\big\|\,\|\hat r-\hat K\beta\|\\&=\big\|(\alpha I+\hat K^*\hat K)^{-1}(\hat K^*-K^*)+\big[(\alpha I+\hat K^*\hat K)^{-1}-(\alpha I+K^*K)^{-1}\big]K^*\big\|\,\|\hat r-\hat K\beta\|\\&\lesssim_P\big\|(\alpha I+\hat K^*\hat K)^{-1}(K^*K-\hat K^*\hat K)(\alpha I+K^*K)^{-1}K^*\big\|\frac{1}{\sqrt T}+\frac{1}{\alpha T}\\&\le\big\|(\alpha I+\hat K^*\hat K)^{-1}\hat K^*(\hat K-K)(\alpha I+K^*K)^{-1}K^*\big\|\frac{1}{\sqrt T}+\big\|(\alpha I+\hat K^*\hat K)^{-1}(\hat K^*-K^*)K(\alpha I+K^*K)^{-1}K^*\big\|\frac{1}{\sqrt T}+\frac{1}{\alpha T}\\&\lesssim\frac{1}{\alpha\sqrt T}\,\|\hat K-K\|+\frac{1}{\alpha\sqrt T}\,\|\hat K^*-K^*\|+\frac{1}{\alpha T}\lesssim_P\frac{1}{\alpha T}.\end{aligned}$$

Combining all the estimates, we obtain $$\|R_T\|\lesssim_P\frac{1}{\alpha T}+\frac{\alpha^{\gamma\wedge(1/2)}}{\sqrt{\alpha T}}+\alpha^{\gamma\wedge 1}.\ \square$$
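The proof above repeatedly uses resolvent bounds of the form $\|(\alpha I+KK^*)^{-1}\|\le 1/\alpha$ and $\|K^*(\alpha I+KK^*)^{-1}\|=\sup_\sigma \sigma/(\alpha+\sigma^2)\le 1/(2\sqrt\alpha)$. They can be checked on a finite-rank stand-in for the operator; the random matrix, its dimensions, and $\alpha$ below are illustrative assumptions.

```python
import numpy as np

# Spectral check of the resolvent bounds used in the proof: for any
# matrix K (standing in for a finite-rank operator) and alpha > 0,
#   ||K^T (alpha I + K K^T)^{-1}||_2 = max_sigma sigma / (alpha + sigma^2)
#                                   <= 1 / (2 sqrt(alpha)),
#   ||(alpha I + K K^T)^{-1}||_2    <= 1 / alpha.
rng = np.random.default_rng(1)
K = rng.normal(size=(40, 60))
alpha = 0.05
R = np.linalg.inv(alpha * np.eye(40) + K @ K.T)
norm_KstarR = np.linalg.norm(K.T @ R, 2)   # spectral norm
norm_R = np.linalg.norm(R, 2)
print(norm_KstarR, norm_R)
```

The first bound follows from $\sigma/(\alpha+\sigma^2)\le 1/(2\sqrt\alpha)$ by the AM–GM inequality, which is exactly where the $\sqrt\alpha$ gains in the proof come from.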

Proof of Theorem 3.3.

Decompose $\hat\beta_m-\beta=\hat\beta_m-\hat\beta+\hat\beta-\beta$. By Theorem 3.1, we know that $$\|\hat\beta-\beta\|\lesssim_P\frac{1}{\sqrt{\alpha T}}+\frac{\alpha^{\gamma\wedge(1/2)}}{\sqrt{\alpha T}}+\alpha^{\gamma\wedge 1}.$$ Consequently, it remains to control the discretization error $\|\hat\beta_m-\hat\beta\|$. To that end, note that if $\hat\psi_m$ solves $(\alpha I+\hat K_m\hat K^*)\hat\psi_m=\hat r$, then $\hat\beta_m=\hat K^*\hat\psi_m$. Therefore, $\hat\beta_m=\hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r$. Next, decompose $$\begin{aligned}\hat\beta_m-\hat\beta&=\hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r-\hat K^*(\alpha I+\hat K\hat K^*)^{-1}\hat r\\&=\hat K^*\big[(\alpha I+\hat K_m\hat K^*)^{-1}-(\alpha I+\hat K\hat K^*)^{-1}\big]\hat r\\&=\hat K^*(\alpha I+\hat K\hat K^*)^{-1}\big[\hat K\hat K^*-\hat K_m\hat K^*\big](\alpha I+\hat K_m\hat K^*)^{-1}\hat r\\&=\hat K^*(\alpha I+\hat K\hat K^*)^{-1}(\hat K-\hat K_m)\hat K^*(\alpha I+\hat K_m\hat K^*)^{-1}\hat r.\end{aligned}\tag{A.2}$$

Then $$\|\hat\beta_m-\hat\beta\|\le\big\|\hat K^*(\alpha I+\hat K\hat K^*)^{-1}\big\|\,\big\|(\hat K-\hat K_m)\hat K^*\big\|\,\big\|(\alpha I+\hat K_m\hat K^*)^{-1}\big\|\,\|\hat r\|\lesssim_P\frac{1}{\alpha^{3/2}}\,\big\|(\hat K-\hat K_m)\hat K^*\big\|.$$

The expression inside the operator norm is an integral operator such that for every $\psi\in L^2$, $$\big[(\hat K-\hat K_m)\hat K^*\psi\big](v)=\int\psi(u)\Big(\int\hat k(s,v)\hat k(s,u)\,ds-\sum_{j=1}^m\hat k(s_j,v)\hat k(s_j,u)\,\delta_j\Big)du.$$

Therefore, by the same computations as in Equation (A.1) and the triangle inequality, $$\begin{aligned}\big\|(\hat K-\hat K_m)\hat K^*\big\|&\le\Big\|\int\hat k(s,\cdot)\otimes\hat k(s,\cdot)\,ds-\sum_{j=1}^m\hat k(s_j,\cdot)\otimes\hat k(s_j,\cdot)\,\delta_j\Big\|\\&=\Big\|\frac{1}{T^2}\sum_{t=1}^T\sum_{k=1}^T\Psi(\cdot,W_t)\otimes\Psi(\cdot,W_k)\Big\{\int Z_t(s)Z_k(s)\,ds-\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j\Big\}\Big\|\\&\le\frac{1}{T^2}\sum_{t=1}^T\sum_{k=1}^T\|\Psi(\cdot,W_t)\|\,\|\Psi(\cdot,W_k)\|\,\Big|\int Z_t(s)Z_k(s)\,ds-\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j\Big|\\&\le\max_{1\le t\le T}\|\Psi(\cdot,W_t)\|^2\max_{1\le t,k\le T}\Big|\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j-\int Z_t(s)Z_k(s)\,ds\Big|.\end{aligned}$$

Under Assumption 3.4(i), $$|Z_t(s_j)Z_k(s_j)-Z_t(s)Z_k(s)|\le|Z_t(s_j)-Z_t(s)|\,|Z_k(s_j)|+|Z_t(s)|\,|Z_k(s_j)-Z_k(s)|\le 2L^2|s_j-s|^\kappa,$$ whence $$\begin{aligned}\Big|\sum_{j=1}^m Z_t(s_j)Z_k(s_j)\,\delta_j-\int Z_t(s)Z_k(s)\,ds\Big|&=\Big|\sum_{j=1}^m\int_{s_{j-1}}^{s_j}\big\{Z_t(s_j)Z_k(s_j)-Z_t(s)Z_k(s)\big\}\,ds\Big|\\&\le\sum_{j=1}^m\int_{s_{j-1}}^{s_j}|Z_t(s_j)Z_k(s_j)-Z_t(s)Z_k(s)|\,ds\\&\le 2L^2\sum_{j=1}^m\int_{s_{j-1}}^{s_j}|s_j-s|^\kappa\,ds\lesssim 2L^2\max_{1\le j\le m}\delta_j^\kappa.\end{aligned}$$

This shows that $\|\hat\beta_m-\hat\beta\|\lesssim_P\Delta_m^\kappa/\alpha^{3/2}$, where $\Delta_m=\max_{1\le j\le m}\delta_j$. □
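The Riemann-sum error bound that drives the discretization rate can be checked directly in the Lipschitz case $\kappa=1$. The choice $Z_t=\cos$, $Z_k=\sin$ on $[0,1]$ (both bounded by $L=1$ and $L$-Lipschitz) is an illustrative assumption.

```python
import numpy as np

# Check of the Riemann-sum error bound from the proof, with kappa = 1:
# the right-endpoint quadrature error for Z_t(s) Z_k(s) = cos(s) sin(s)
# should be at most 2 * L^2 * max_j delta_j with L = 1.
m = 100
s = np.linspace(0.0, 1.0, m + 1)      # partition 0 = s_0 < s_1 < ... < s_m = 1
delta = np.diff(s)                    # cell widths delta_j
riemann = np.sum(np.cos(s[1:]) * np.sin(s[1:]) * delta)
exact = np.sin(1.0)**2 / 2            # integral of cos(s) sin(s) over [0, 1]
err = abs(riemann - exact)
print(err, 2 * delta.max())
```

In fact the proof gives the slightly sharper constant $2L^2/(1+\kappa)$, so the observed error sits well inside the bound.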
