
Bayesian input–output table update using a benchmark LASSO prior

Pages 413-427 | Received 02 Mar 2018, Accepted 17 Dec 2019, Published online: 09 Jan 2020

ABSTRACT

We propose updating a multiplier matrix subject to final demand and total output constraints, where the prior multiplier matrix is weighted against a LASSO prior. We update elements of the Leontief inverse, from which we can derive posterior densities of the entries in input–output tables. As the number of parameters to be estimated far exceeds the available observations, and many zero entries render the tabulation sparse, we address the problem with a new statistical model in which we adopt a LASSO prior. We develop novel numerical techniques and perform a detailed Monte Carlo study to examine the performance of the new approach under different configurations of the input–output table. The new techniques are applied to a 196 × 196 U.S. input–output table for 2012.


Acknowledgements

The author is indebted to the Editor Michael Lahr, three anonymous reviewers and Gerald Steele for providing useful comments on earlier versions. The usual disclaimer applies.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 Writing b = I_k β + v instead of β ∼ N(b, ω²I_k), which is equivalent to β = I_k b + v, is, of course, unacceptable from the point of view of both frequentist and Bayesian purists. Taking this ‘leap of faith' can actually be traced back to Fisher (1939), who, in a simpler context, called this procedure ‘fiducial inference'. Nonetheless, β̂ is exactly equal to the posterior mean obtained through formal Bayesian analysis! As a matter of fact, in the author's view, this is a good way for students or novices to be introduced to the ‘apocrypha' of Bayesian analysis.
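The note's claim — that treating the prior as extra ‘data rows' and running least squares reproduces the formal Bayesian posterior mean — can be verified numerically. The following is a minimal sketch assuming a Gaussian linear model y = Xβ + ε with ε ∼ N(0, σ²I) and prior β ∼ N(b, ω²I_k); the variable names and simulated data are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 5
sigma, omega = 1.0, 0.5
X = rng.normal(size=(n, k))
beta_true = rng.normal(size=k)
y = X @ beta_true + sigma * rng.normal(size=n)
b = np.zeros(k)  # prior mean

# (i) Formal Bayesian posterior mean under beta ~ N(b, omega^2 I_k):
#     (X'X/sigma^2 + I/omega^2)^{-1} (X'y/sigma^2 + b/omega^2)
A = X.T @ X / sigma**2 + np.eye(k) / omega**2
post_mean = np.linalg.solve(A, X.T @ y / sigma**2 + b / omega**2)

# (ii) Treat the prior as extra observations b = I_k beta + v
#      (Theil–Goldberger mixed estimation): stack weighted rows
#      and run ordinary least squares on the augmented system.
X_aug = np.vstack([X / sigma, np.eye(k) / omega])
y_aug = np.concatenate([y / sigma, b / omega])
beta_hat, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

# The two estimators coincide (up to floating-point error).
print(np.allclose(post_mean, beta_hat))  # → True
```

The augmented least-squares objective expands to exactly the posterior precision and shift in (i), which is why the ‘leap of faith' delivers the correct posterior mean despite its dubious interpretation.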

2 For any matrix M, the vec operator stacks the rows of M into a vector.
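A one-line illustration of this row-stacking convention (note it differs from the more common column-stacking vec): NumPy's default row-major flattening reproduces it directly.

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])

# Row-stacking vec, matching the note's convention: NumPy's
# default C (row-major) order flattens row by row.
vec_M = M.reshape(-1)

print(vec_M)  # → [1 2 3 4]
```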

3 For each parameter, say ϖ, we construct a grid of 100 values on a support that is adapted every 500 MCMC iterations during the burn-in phase. The cdf is computed from the density values on the grid and then normalized. In turn, we invert the cdf to obtain a random draw from the respective posterior conditional distribution. This is known as the ‘griddy Gibbs sampler'.
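The grid-evaluate-normalize-invert step described above can be sketched as follows. This is a generic illustration of a griddy Gibbs draw, not the paper's implementation: the function name is hypothetical, and the adaptive updating of the support during burn-in is not shown.

```python
import numpy as np

def griddy_gibbs_draw(log_density, lo, hi, n_grid=100, rng=None):
    """One inverse-cdf draw from a univariate conditional posterior:
    evaluate the (unnormalized) log density on a grid, exponentiate,
    form the cdf, and invert it by interpolation."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.linspace(lo, hi, n_grid)
    logd = log_density(grid)
    dens = np.exp(logd - logd.max())  # subtract max for numerical stability
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]                    # normalize so the cdf ends at 1
    return np.interp(rng.uniform(), cdf, grid)  # invert the cdf

# Example: draws from a standard normal conditional on a [-5, 5] grid
rng = np.random.default_rng(0)
draws = np.array([griddy_gibbs_draw(lambda x: -0.5 * x**2, -5.0, 5.0, rng=rng)
                  for _ in range(2000)])
```

Within a Gibbs sweep, `log_density` would be the full conditional of ϖ given the current values of all other parameters, re-evaluated at each iteration.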
