
Sequential and efficient GMM estimation of dynamic short panel data models

Pages 1007-1037 | Published online: 05 Aug 2021
 

Abstract

This paper considers generalized method of moments (GMM) and sequential GMM (SGMM) estimation of dynamic short panel data models. The efficient GMM, motivated by the quasi maximum likelihood (QML), can avoid the use of many instrumental variables (IVs) for estimation. It can be asymptotically as efficient as the maximum likelihood estimator (MLE) when disturbances are normal, and can be more efficient than the QML estimator when disturbances are not normal. The SGMM, which also incorporates many IVs, generalizes the minimum distance estimation that originated in Hsiao et al. (2002). By focusing on the estimation of the parameters of interest, the SGMM saves the computational burden caused by nuisance parameters such as the variances of disturbances. It is asymptotically as efficient as the corresponding GMM. In particular, the SGMM based on QML scores can generate a closed-form root estimator for the dynamic parameter, which is asymptotically as efficient as the QML estimator. Nuisance parameters can also be estimated efficiently by an additional SGMM step if they are of interest.


Acknowledgements

We are grateful to the Editor Esfandiar Maasoumi, Co-Editor Tong Li, and two anonymous referees for their valuable and helpful comments.

Notes

1 In the following, the ML or QML estimates for fixed effects DPD models all refer to those based on the first-differenced equations of the dependent variable.

2 We note that “stationarity” here refers to the situation in which the process started a long time ago.

3 For stationary random effects DPD models, the quasi log likelihood function can be decomposed as the sum of the quasi log likelihood function of the within equations and that of the between equations (Lee and Yu, 2020). We use this decomposition to derive simple moment vectors, which yield GMM estimators that are asymptotically as efficient as ML estimators under normal disturbances, and can be more efficient than QML estimators.

4 On the other hand, if $\operatorname{tr}(F_T A_T H_T F_T A_T H_T)$ is not equal to zero, $\operatorname{Var}(Q_{nT}' e_{nT})$ would not necessarily equal $\sigma_{v0}^2 E[Q_{nT}'(H_T \otimes I_n) Q_{nT}]$.

5 If the disturbances are not normal, the proof of Theorem 3(iii) shows that the asymptotic variance of the IV estimate $\hat{\theta}_{1,iv}$ equals that of an optimal GMM estimator. We show in the supplementary file that there is no best GMM under non-normal disturbances, so there is no best IV under non-normal disturbances either.

6 Initial consistent parameter estimates for various models considered in this paper are given in the supplementary file.

7 As in Hsiao et al. (2002), when the process $\{y_{it}\}$ starts from a finite past, $\gamma_0$ can be 1. We thank an anonymous referee for pointing this out.

8 When T goes to infinity, the second component is dominated by the first one, so the asymptotic precision of the MLE equals that of the best IV estimate. Best IV estimation is feasible by ignoring the first row of $H_T(\omega)$, or by simply replacing $H_T(\omega)$ with $H_T(2)$, i.e., with 2 replacing ω. The approximation or replacement will be good when T becomes large.
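As a numerical sketch of this replacement (assuming, as in Hsiao et al. (2002), that $H_T(\omega)$ is tridiagonal with ω in the (1,1) entry, 2 elsewhere on the diagonal, and −1 on the first off-diagonals; the function name below is ours), $H_T(2)$ is exactly the scaled covariance matrix of first-differenced i.i.d. disturbances:

```python
import numpy as np

def H_T(T, omega):
    # Assumed form of H_T(omega): tridiagonal, omega in the (1,1) entry,
    # 2 elsewhere on the diagonal, -1 on the first off-diagonals.
    H = 2.0 * np.eye(T) - np.eye(T, k=1) - np.eye(T, k=-1)
    H[0, 0] = omega
    return H

# For i.i.d. disturbances v_0, ..., v_T with unit variance, the first
# differences Delta v_t = v_t - v_{t-1} have covariance D @ D.T,
# where D is the T x (T+1) differencing matrix.
T = 5
D = np.eye(T, T + 1, k=1) - np.eye(T, T + 1)
print(np.allclose(D @ D.T, H_T(T, 2.0)))  # True: H_T(2) = Var(Delta v)/sigma_v^2
```

The (1,1) entry ω is what the initial condition of the process affects; every other entry of the first-difference covariance is pinned down regardless of the starting period.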

9 Recall that $\Delta Y_{nT}=[\Delta Y_{n1},\ldots,\Delta Y_{nT}]$ and $\Delta Y_{n,T-1}=[0,\Delta Y_{n1},\ldots,\Delta Y_{n,T-1}]$.

10 We note that, in contrast to the later sequential GMM estimation, these moments are quadratic in $e_{nT}$ but not quadratic in the parameter γ, because $B_T(\gamma)$ is nonlinear in γ.

11 See the supplementary file for a proof.

12 We thank a referee for raising this issue.

13 We may show that $(B_{\omega T} B_T^{-1})^s = B_T J_T B_T$. See the proof of Theorem 3 in the supplementary file.

14 If the disturbances are not normally distributed, we show in the supplementary file that the limiting variance of the GMM estimator $\hat{\theta}_{2,gmm}$ has a lower bound by the generalized Schwarz inequality, but the lower bound cannot be achieved. The reason is that $D_{T,T+1}' B_T C_{jT}^s B_T D_{T,T+1}$ would need to be a diagonal matrix for some $C_{jT}^s$, but this cannot be the case given the specific form of $D_{T,T+1}$.

15 If the best moments are exactly identifying, and the score vector and the best moment vector are linear transformations of each other, then the two estimators are the same. In this exactly identified case, the best GMM estimator has no asymptotic gain.
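The exactly identified case can be illustrated with a hypothetical linear IV toy model (not the model of this paper): with one moment and one parameter, any GMM weighting solves the same moment equation exactly, so the estimator is unchanged by the weight.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta0 = 5000, 0.7
z = rng.normal(size=n)                    # instrument
x = z + 0.5 * rng.normal(size=n)          # regressor
y = theta0 * x + rng.normal(size=n)       # outcome

# One moment, one parameter: g(theta) = z'(y - theta*x)/n. GMM minimizes
# w * g(theta)^2 for any weight w > 0, but the minimizer always solves
# g(theta) = 0 exactly, so weighting is irrelevant when just identified.
theta_hat = (z @ y) / (z @ x)             # the unique root of g(theta) = 0
print(abs(z @ (y - theta_hat * x)) / n)   # numerically zero
```

Efficiency gains from an optimal weight matrix can only arise when there are more moments than parameters, so that different weightings trade off the moments differently.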

16 The explicit expression of $E\big(\frac{1}{n}\frac{\partial \ln L_w(\theta_0)}{\partial \theta}\frac{\partial \ln L_w(\theta_0)}{\partial \theta'}\big)$ can be derived similarly to that of the variance matrix of $\sqrt{n}\,g_{nT}(\theta_{20})$, so we omit it for simplicity. We can see that it does not depend on n.

17 Instead of moments based on the score vector, one may use the best moments in (2.25) to obtain an SGMM estimate of γ. But as the moments in (2.25) are over-identifying for γ, the corresponding SGMM estimator would not have a tractable explicit expression. Such an SGMM estimation approach will be considered in a subsequent section on models with exogenous regressors. The moments in (2.25) can be regarded as a special case of the estimation with $\iota_{nT}$ as a regressor vector.

18 The estimate of κ for given γ and ω is $[\iota_{nT}'(H_T^{-1}(\omega)\otimes I_n)\iota_{nT}]^{-1}\iota_{nT}'(H_T^{-1}(\omega)\otimes I_n)(\Delta Y_{nT}-\gamma\Delta Y_{n,T-1}) = \frac{1}{n}\, l_n'\Delta Y_{n1}+\frac{1}{n}\sum_{t=2}^{T}\big(1-\frac{t-1}{T}\big)\, l_n'(\Delta Y_{nt}-\gamma\Delta Y_{n,t-1})$, which does not depend on ω.

19 In our case, the concentration works on the solution of scores, so it is a method of elimination and substitution in solving a system of equations.
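A generic sketch of this elimination-and-substitution idea, using a toy Gaussian regression rather than the paper's model: the σ² score is solved in closed form given the slope, and substituting that solution into the slope score leaves a single equation in the slope alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma0 = 2000, 0.5
x = rng.normal(size=n)
y = gamma0 * x + rng.normal(size=n)

# Gaussian scores for (gamma, sigma2) in the toy model y = gamma*x + v:
#   score_gamma:  x'(y - gamma*x) / sigma2 = 0
#   score_sigma2: -n/(2*sigma2) + e(gamma)'e(gamma)/(2*sigma2**2) = 0
# Elimination: the sigma2 score gives sigma2(gamma) = e(gamma)'e(gamma)/n.
# Substitution into the gamma score leaves x'(y - gamma*x) = 0, a single
# equation in gamma with a closed-form root.
gamma_hat = (x @ y) / (x @ x)
sigma2_hat = np.mean((y - gamma_hat * x) ** 2)

# The pair (gamma_hat, sigma2_hat) solves the original two-equation system.
e = y - gamma_hat * x
print(abs(x @ e / sigma2_hat))                                      # ~0
print(abs(-n / (2 * sigma2_hat) + (e @ e) / (2 * sigma2_hat**2)))   # ~0
```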

20 The SGMM in Jin and Lee (2021) is asymptotically equivalent to the approach in Trognon and Gouriéroux (1990) applied to the GMM, which is derived by a first-order Taylor expansion of the moment vector at the nuisance parameter estimator.

21 Root estimators for spatial autoregressive models are considered in Jin and Lee (2012).

22 The consistent estimation of $\operatorname{plim}_{n\to\infty}\frac{\partial g_{nT,\gamma}(\gamma_0,\tau_0)}{\partial \tau'}\big(\frac{\partial g_{nT,\tau}(\gamma_0,\tau_0)}{\partial \tau'}\big)^{-1}$ by $\hat{C}_{nT,\gamma}$ would not have an asymptotic influence on the moment equation, due to its role as coefficients for linear combinations of valid moments.

23 The probability limit of $s_{nT,4}$ depends on $\omega_0$, $\gamma_0$, T and $\sigma_{v0}^2$, and it can be positive or negative.

24 This estimate can be further simplified to $\frac{1}{n}\, l_n'\Delta Y_{n1}+\frac{1}{n}\sum_{t=2}^{T}\big(1-\frac{t-1}{T}\big)\, l_n'(\Delta Y_{nt}-\gamma\Delta Y_{n,t-1})$, which does not depend on ω. Using the form $[\iota_{nT}'(H_T^{-1}(\omega)\otimes I_n)\iota_{nT}]^{-1}\iota_{nT}'(H_T^{-1}(\omega)\otimes I_n)(\Delta Y_{nT}-\gamma\Delta Y_{n,T-1})$ simplifies the presentation of the concentrated moments.

25 As in Section 2, we can also directly follow the approach in Jin and Lee (2021) to construct an SGMM estimator of γ using moment conditions derived from the QML first-order conditions. On the other hand, we do not use $g_{nT}(\delta,\alpha_1)$ to construct an SGMM estimator of γ alone, due to an identification issue. As shown below, by using the concentrated moments derived from the QML first-order conditions, we can obtain closed-form roots for γ and investigate which root is consistent.
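A hypothetical toy example (not the paper's moments) illustrates how closed-form roots arise and why one must check which root is consistent: with a residual e(γ) = y − γx that is linear in γ, the quadratic moment m(γ) = (x′e(γ))(z′e(γ))/n² has two closed-form roots, the OLS and IV estimates; under endogeneity of x, only the IV root is consistent.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma0 = 20000, 0.5
u = rng.normal(size=n)                      # disturbance, correlated with x
z = rng.normal(size=n)                      # valid instrument
x = z + 0.8 * u + 0.3 * rng.normal(size=n)  # endogenous regressor
y = gamma0 * x + u

# m(gamma) = (x'e)(z'e)/n^2 with e = y - gamma*x is quadratic in gamma:
# it factors as (x'y - gamma*x'x)(z'y - gamma*z'x)/n^2, so its two
# closed-form roots are the OLS root x'y/x'x and the IV root z'y/z'x.
a = (x @ x) * (z @ x)
b = -((x @ x) * (z @ y) + (x @ y) * (z @ x))
c = (x @ y) * (z @ y)
roots = np.roots([a, b, c])

# Only the IV root is near gamma0 = 0.5; the OLS root is biased away
# from 0.5 because x is positively correlated with u.
print(np.sort(np.real(roots)))
```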

26 κ no longer exists, and the mean of $\Delta Y_{n1}$ is zero.

27 Detailed Monte Carlo results for other values of $\gamma_0$ (0.2, 0.8, 0.9) are presented in the supplementary file.

28 Due to space limits, we present the detailed simulation results under non-normal disturbances for $\gamma_0 = 0.2, 0.5, 0.8, 0.9$ in the supplementary file.

Additional information

Funding

Fei Jin gratefully acknowledges the financial support from the National Natural Science Foundation of China (No. 71973030 and No. 71833004) and Program for Innovative Research Team of Shanghai University of Finance and Economics. Jihai Yu gratefully acknowledges the financial support from the National Natural Science Foundation of China (No. 71925006 and No. 92046021) and support from the Center for Statistical Science of Peking University and the Key Laboratory of Mathematical Economics and Quantitative Finance (Peking University), the Ministry of Education.
