Convergence of Griddy Gibbs sampling and other perturbed Markov chains

Pages 1379-1400 | Received 16 Oct 2015, Accepted 21 Nov 2016, Published online: 08 Dec 2016

ABSTRACT

Griddy Gibbs sampling was proposed by Ritter and Tanner [Facilitating the Gibbs Sampler: the Gibbs Stopper and the Griddy-Gibbs Sampler. J Am Stat Assoc. 1992;87(419):861-868] as a computationally efficient approximation of the well-known Gibbs sampling method. The algorithm is simple and effective and has been used successfully to address problems in various fields of applied science. However, its approximate nature has prevented it from being widely used: the Markov chains generated by the Griddy Gibbs sampling method are not reversible in general, so the existence and uniqueness of its invariant measure is not guaranteed. Even when such an invariant measure uniquely exists, there was no estimate of the distance between it and the probability distribution of interest, hence no way to ensure the validity of the algorithm as a means to sample from the true distribution. In this paper, we show, subject to some fairly natural conditions, that the Griddy Gibbs method has a unique invariant measure. Moreover, we provide Lp estimates on the distance between this invariant measure and the corresponding measure obtained from Gibbs sampling. These results provide a theoretical foundation for the use of the Griddy Gibbs sampling method. We also establish a more general result about the sensitivity of invariant measures under small perturbations of the transition probability: if we replace the transition probability P of any Markov chain Monte Carlo method by another transition probability Q that is close to P, we can still estimate the distance between the two invariant measures. The distinguishing feature between our approach and previous work on the convergence of perturbed Markov chains is that, by considering the invariant measures as fixed points of linear operators on function spaces, we do not need to impose any further conditions on the rate of convergence of the Markov chain. For example, the results derived in this paper can address the case when the Markov chains under consideration are not uniformly ergodic.
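The abstract describes Griddy Gibbs sampling only at a high level. As a minimal sketch of the idea (not the authors' implementation): at each coordinate update, the unnormalized full conditional is evaluated on a fixed grid and a draw is made from the resulting discrete approximation instead of from the exact conditional. All function and variable names below are illustrative, and the bivariate normal target is chosen purely as an example.

```python
import numpy as np

def griddy_gibbs(log_density, x0, grid, n_samples, seed=0):
    """Sketch of a Griddy Gibbs sampler.

    Each coordinate is updated in turn by evaluating its unnormalized
    conditional density on `grid` and drawing from the normalized
    discrete approximation, rather than from the exact conditional.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = len(x)
    samples = np.empty((n_samples, d))
    for t in range(n_samples):
        for i in range(d):
            # Evaluate the unnormalized conditional of coordinate i on the grid.
            logp = np.array(
                [log_density(np.concatenate([x[:i], [g], x[i + 1:]]))
                 for g in grid]
            )
            # Normalize in a numerically stable way and draw a grid point.
            p = np.exp(logp - logp.max())
            p /= p.sum()
            x[i] = rng.choice(grid, p=p)
        samples[t] = x
    return samples

# Illustrative target: a correlated bivariate normal with rho = 0.5.
rho = 0.5

def log_density(z):
    x, y = z
    return -(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2))

grid = np.linspace(-4.0, 4.0, 201)
draws = griddy_gibbs(log_density, [0.0, 0.0], grid, 2000)
```

Because each draw comes from a grid approximation of the conditional, the chain's invariant measure (when it exists) differs from the target distribution; the paper's contribution is to bound that difference without assuming uniform ergodicity.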

Additional information

Funding

All authors of this paper are partially supported by NSF grant DMS-0900277.
