Abstract
Nonreversible Markov chain Monte Carlo methods often outperform their reversible counterparts in terms of asymptotic variance of ergodic averages and mixing properties. Lifting the state-space is a generic technique for constructing such samplers. The idea is to think of the random variables we want to generate as position variables and to associate to them direction variables, so as to design Markov chains that do not exhibit the diffusive behavior often displayed by reversible schemes. In this article, we explore the benefits of using such ideas in the context of Bayesian model choice for nested models, a class of models for which the model indicator variable is an ordinal random variable. By lifting this model indicator variable, we obtain nonreversible jump algorithms, a nonreversible version of the popular reversible jump algorithms. This simple algorithmic modification provides samplers which can empirically outperform their reversible counterparts at no extra computational cost. The code to reproduce all experiments is available online.1
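To illustrate the lifting idea in the simplest setting, the sketch below runs a lifted sampler on an ordinal indicator k in {1, ..., K} augmented with a direction v in {-1, +1}: the chain proposes k + v, accepts with the usual Metropolis ratio, and reverses direction on rejection (including at the boundaries 1 and K). This is a hedged toy sketch, not the article's algorithm: the target `pi` is a hypothetical unnormalized posterior over models chosen only for illustration.

```python
import random

# Toy lifted (nonreversible) sampler on a model indicator k in {1, ..., K}.
# The unnormalized target pi over models is a made-up example: pi(k) = k.
K = 10
pi = [float(k) for k in range(1, K + 1)]  # pi[k-1] is the weight of model k

def lifted_step(k, v):
    """One step: propose k + v; accept with min(1, pi(k+v)/pi(k)),
    otherwise keep k and reverse the direction v."""
    kp = k + v
    if 1 <= kp <= K and random.random() < min(1.0, pi[kp - 1] / pi[k - 1]):
        return kp, v   # accepted: keep moving in the same direction
    return k, -v       # rejected or out of bounds: flip direction

random.seed(0)
k, v = 1, 1
counts = [0] * (K + 1)
for _ in range(200_000):
    k, v = lifted_step(k, v)
    counts[k] += 1  # empirical frequencies should approach pi, normalized
```

The chain satisfies a skew detailed balance condition, so its marginal over k still targets the normalized `pi`, while the persistent direction suppresses the back-and-forth diffusive behavior of a reversible random walk over models.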
Supplementary Materials
Title: Nonreversible jump algorithms for Bayesian nested model selection—supplementary materials (pdf).
Section 1: We present the proofs of Proposition 1, Theorem 1, and Corollary 1.
Section 2: We present weak convergence results for the ideal samplers as the size of the state-space increases.
Section 3: Details of the multiple change-point example of Section 5.2 are provided.
Acknowledgments
The authors thank three anonymous referees for helpful suggestions that led to an improved article.
Notes
1. The difference is that instead of systematically changing direction at 1 and , the sampler changes direction probabilistically after on average steps, making it aperiodic.
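The probabilistic direction change described in this note can be sketched as follows: instead of reflecting the direction deterministically at the boundaries, the direction is reversed with a small probability at every step, so that flips occur after a geometrically distributed number of steps. The mean holding time `m = 50` is a placeholder value, since the note's actual average is not reproduced here.

```python
import random

# Aperiodic variant: flip the direction with probability 1/m each step,
# so the direction is reversed on average every m steps.
m = 50  # placeholder mean holding time; the note's value is not given here

def maybe_flip(v):
    """Reverse direction v with probability 1/m (geometric holding time)."""
    return -v if random.random() < 1.0 / m else v

random.seed(1)
v, flips, N = 1, 0, 100_000
for _ in range(N):
    nv = maybe_flip(v)
    if nv != v:
        flips += 1
    v = nv
avg_run = N / flips  # average number of steps between direction changes
```

Because the flip times are random rather than tied to hitting the boundary states, the resulting chain is aperiodic.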
2. We used accurate approximations to the posterior model probabilities. We verified that the total variation (TV) distance goes to 0 for all algorithms as the number of iterations increases.