ABSTRACT
In recent years, dynamic stochastic general equilibrium (DSGE) models have come to play an increasing role in central banks as an aid in the formulation of monetary policy (and, increasingly after the global crisis, in maintaining financial stability). Compared to other widely prevalent econometric models (such as vector autoregressive or large-scale econometric models), DSGE models are less a-theoretic and rest on secure micro-foundations based on the optimizing behaviour of rational economic agents. Additionally, in spite of being strongly tied to theory, these models can be ‘taken to the data’ in a meaningful way. A major feature of these models is that their theoretical underpinnings lie in what has now come to be called the New Consensus Macroeconomics (NCM). This paper concentrates on the econometric structure underpinning such models. Identification, estimation and evaluation issues are discussed at length, with special emphasis on the role of Bayesian maximum likelihood methods.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes
1. Some of these elements could be zero.
2. In control theory, a state space formulation is defined by two equations: (i) x(t + 1) = A₁x(t) + B₁u(t) + ε(t) and (ii) y(t) = A₂x(t) + B₂u(t), where x(t) is the state variable vector, u(t) is the control variable vector, y(t) is the observable variable vector, the A’s and B’s are matrices of appropriate dimension and ε(t) is a shock to the state variable equation. The state variables are often unobserved but can be recovered from the observable variables. Our models (2) and (3) are special cases of this state space format in which two of these matrices are zero. Hence, the control variables u(t) can themselves be treated as the observable variables.
3. This, of course, bypasses the issue of stochastic singularity altogether, since no estimation is involved.
4. The ABCD representation is discussed in Fernandez-Villaverde et al. (Citation2007).
5. We use a distinct symbol to denote the likelihood here, since another symbol has already been used to denote the log-likelihood in Equation (14) earlier.
6. The usual candidate densities are the exponential family of distributions, the Gamma and Beta distributions, the multivariate t-distribution, etc.
7. In view of Equation (18), a high value of that expression implies that the probability of the candidate draw belonging to the posterior distribution is high.
8. This need not always be true (see Gamerman (Citation1997, 124) for a counterexample). Hence, this condition needs to be verified in applications.
9. In simple terms, we are taking averages of the sampled values over an initial block of iterations after the candidate burn-in iteration and over a final block of iterations.
10. See Cox and Miller (Citation2001) for a definition of this concept.
11. Geyer (Citation1992) cites the case of sampling from Polson’s (Citation1992) ‘witch’s hat distribution’ to substantiate this point.
12. Gelman and Rubin (Citation1992) substantiate this point with the help of the Lenz-Ising lattice model of statistical mechanics (see Brush Citation1967).
13. The iterations themselves may be based on the Gibbs sampler or the Metropolis–Hastings scheme.
14. The simulations are performed using one of the MCMC algorithms discussed above or the procedures presented in Niederreiter (Citation1988).
15. This step involves evaluation of integrals by simulation which can be done by standard methods (see Weinberg and Kyprianou (Citation2005) for a recent discussion).
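The generic state space recursion described in note 2 can be sketched numerically. A minimal illustration follows; the particular matrices, dimensions and control path below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative state space system: x(t+1) = A1 x(t) + B1 u(t) + eps(t),
#                                  y(t)   = A2 x(t) + B2 u(t).
rng = np.random.default_rng(0)
A1 = np.array([[0.9, 0.1], [0.0, 0.8]])  # state transition (stable: eigenvalues < 1)
B1 = np.array([[0.5], [0.2]])            # control-to-state loading
A2 = np.array([[1.0, 0.0]])              # observation loading on the state
B2 = np.array([[0.3]])                   # observation loading on the control

T = 200
x = np.zeros((T + 1, 2))                 # state variable vector x(t)
y = np.zeros(T)                          # observable variable y(t)
for t in range(T):
    u = np.array([np.sin(0.1 * t)])      # an arbitrary control path
    eps = 0.1 * rng.standard_normal(2)   # shock to the state equation
    x[t + 1] = A1 @ x[t] + B1 @ u + eps
    y[t] = (A2 @ x[t] + B2 @ u)[0]

print(y.shape)  # (200,)
```

When A₂ and the state-dependence of the observation equation vanish, y(t) reduces to a function of u(t) alone, which is the sense in which the controls can be treated as observables in the note.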
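The averaging described in note 9 resembles Geweke's convergence diagnostic: compare the mean of an early block of post-burn-in draws with that of a late block. The sketch below is a simplified version using naive standard errors; the full diagnostic replaces these with spectral-density estimates at frequency zero to account for autocorrelation, and all parameter choices here are illustrative.

```python
import numpy as np

def split_mean_diagnostic(draws, burn_in, first_frac=0.1, last_frac=0.5):
    """Z-score comparing early- and late-block means of a chain.
    A large |z| suggests the chain has not yet converged. Naive
    variance estimate -- the full Geweke diagnostic uses spectral
    density estimates to handle autocorrelated draws."""
    chain = draws[burn_in:]
    a = chain[: int(first_frac * len(chain))]   # early block
    b = chain[-int(last_frac * len(chain)):]    # late block
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

rng = np.random.default_rng(1)
stationary = rng.standard_normal(5000)  # a chain already at its target
z = split_mean_diagnostic(stationary, burn_in=500)
print(round(float(z), 2))
```

For a chain that has converged, z behaves approximately as a standard normal draw, so values far outside (−2, 2) are a warning sign.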
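Notes 13 and 14 refer to the Metropolis–Hastings scheme. A minimal random-walk variant can be sketched as follows; the standard normal target is a toy choice for illustration only, not the posterior of any model in the paper.

```python
import numpy as np

def log_target(theta):
    """Log of an (unnormalised) standard normal density -- a toy target."""
    return -0.5 * theta ** 2

def random_walk_mh(n_draws, step=1.0, seed=2):
    """Random-walk Metropolis: propose theta' = theta + step * N(0,1),
    accept with probability min(1, p(theta') / p(theta))."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    theta = 0.0
    for i in range(n_draws):
        proposal = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal  # accept the proposal
        draws[i] = theta      # on rejection, repeat the current value
    return draws

draws = random_walk_mh(20000)
post = draws[5000:]           # discard a burn-in block
print(round(post.mean(), 2), round(post.std(), 2))
```

After discarding burn-in draws, the retained sample mean and standard deviation should approximate the target's 0 and 1; because the proposal is symmetric, the Hastings correction term drops out of the acceptance ratio.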
Additional information
Notes on contributors
Dilip Nachane
Dilip Nachane is currently Hon. Professor at the IGIDR. He has been successively Director, Department of Economics, University of Mumbai, Director IGIDR and Chancellor, University of Manipur, Imphal. He also served briefly (2012–2014) on the Prime Minister's Economic Advisory Council, Government of India and on the Technical Advisory Committee on Monetary Policy, Reserve Bank of India (2005–2011). He has authored/edited 10 books and written about a hundred articles in professional journals. His fields of specialization are Macroeconomics and Quantitative Methods.