
Bayesian input–output table update using a benchmark LASSO prior

Pages 413-427 | Received 02 Mar 2018, Accepted 17 Dec 2019, Published online: 09 Jan 2020
 

ABSTRACT

We propose updating a multiplier matrix subject to final demand and total output constraints, where the prior multiplier matrix is weighted against a LASSO prior. We update elements of the Leontief inverse, from which we can derive posterior densities of the entries in input–output tables. As the number of parameters to estimate far exceeds the number of available observations, and many entries are zero, the tabulation is sparse. We address this problem with a new statistical model in which we adopt a LASSO prior. We develop novel numerical techniques and perform a detailed Monte Carlo study to examine the performance of the new approach under different configurations of the input–output table. The new techniques are applied to a 196 × 196 U.S. input–output table for 2012.
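As a quick illustration of the accounting identity that any such update must respect, the Leontief inverse L = (I − A)⁻¹ maps final demand f to total output x. The sketch below uses a hypothetical 2-sector coefficient matrix, not data from the paper:

```python
import numpy as np

# Leontief accounting identity: total output x = (I - A)^{-1} f,
# where A is the technical-coefficient matrix and f is final demand.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])          # hypothetical 2-sector coefficients
f = np.array([100.0, 50.0])        # hypothetical final demand
L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse (multiplier matrix)
x = L @ f

# the constraint side: (I - A) x must reproduce final demand f
assert np.allclose((np.eye(2) - A) @ x, f)
```

The paper's update operates on the entries of L directly, with f and x acting as the constraints the posterior draws must satisfy.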

JEL CODES:

Acknowledgements

The author is indebted to the Editor Michael Lahr, three anonymous reviewers and Gerald Steele for providing useful comments on earlier versions. The usual disclaimer applies.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 Writing b = I_kβ + v instead of β ∼ N(b, ω²I_k), which is equivalent to β = I_kb + v, is, of course, unacceptable from the point of view of both frequentist and Bayesian purists. Taking this ‘leap of faith' can actually be traced back to Fisher (1939) who, in a simpler context, called this procedure ‘fiducial inference'. Nonetheless, β̂ is exactly equal to the posterior mean obtained through formal Bayesian analysis. As a matter of fact, in the author's view, this is a good way for students or novices to be introduced to the ‘apocrypha' of Bayesian analysis.
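The note's claim — that folding the normal prior in as k pseudo-observations yields an estimator identical to the formal Bayesian posterior mean — can be checked numerically. A minimal sketch (all data simulated, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
sigma, omega = 1.0, 2.0                 # error and prior standard deviations
X = rng.standard_normal((n, k))
y = X @ np.array([1.0, -0.5, 0.3]) + sigma * rng.standard_normal(n)
b = np.zeros(k)                         # prior mean

# Formal posterior mean under beta ~ N(b, omega^2 I_k):
post = np.linalg.solve(X.T @ X / sigma**2 + np.eye(k) / omega**2,
                       X.T @ y / sigma**2 + b / omega**2)

# 'Leap of faith': write b = I_k beta + v and treat the prior as k
# extra observations appended to the regression, then run plain OLS.
lam = sigma / omega
X_aug = np.vstack([X, lam * np.eye(k)])
y_aug = np.concatenate([y, lam * b])
beta_hat, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

assert np.allclose(beta_hat, post)      # identical, as the note asserts
```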

2 For any matrix M, the vec operator stacks the rows of M into a vector.
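Note that this row-stacking convention corresponds to row-major (C-order) flattening, whereas the more common textbook vec stacks columns. A small illustration:

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])

vec_M = M.reshape(-1)        # row-major: stacks rows, as defined in the note
vec_col = M.T.reshape(-1)    # the conventional column-stacking vec, for contrast
# vec_M   -> [1, 2, 3, 4]
# vec_col -> [1, 3, 2, 4]
```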

3 For each parameter, say ϖ, we construct a grid of 100 values on a support that is adapted every 500 MCMC iterations during the burn-in phase. The cdf is computed from the density values on the grid and then normalized; inverting this cdf yields a random draw from the respective posterior conditional distribution. This is known as the ‘griddy Gibbs sampler’.
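The grid-evaluate-normalize-invert step described above can be sketched as follows; `griddy_draw` and the standard-normal test density are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def griddy_draw(log_dens, support, rng, n_grid=100):
    """One inverse-cdf draw from a univariate conditional evaluated on a grid.
    `log_dens`: unnormalized log conditional density; `support`: (lo, hi)."""
    grid = np.linspace(*support, n_grid)
    ld = log_dens(grid)
    dens = np.exp(ld - ld.max())        # subtract max for numerical stability
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]                      # normalize so cdf ends at 1
    u = rng.uniform()
    return grid[np.searchsorted(cdf, u)]  # invert the cdf at a uniform draw

rng = np.random.default_rng(1)
# sanity check with a standard-normal log density on [-5, 5]
draws = np.array([griddy_draw(lambda x: -0.5 * x**2, (-5.0, 5.0), rng)
                  for _ in range(5000)])
```

In the full sampler this draw replaces one conditional step of the Gibbs cycle; the support endpoints would be adapted during burn-in as the note describes.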
