ABSTRACT
Models with intractable normalizing functions arise frequently in statistics. Common examples of such models include exponential random graph models for social networks and Markov point processes for ecology and disease modeling. Inference for these models is complicated because the normalizing functions of their probability distributions include the parameters of interest. In Bayesian analysis, they result in so-called doubly intractable posterior distributions which pose significant computational challenges. Several Monte Carlo methods have emerged in recent years to address Bayesian inference for such models. We provide a framework for understanding the algorithms, and elucidate connections among them. Through multiple simulated and real data examples, we compare and contrast the computational and statistical efficiency of these algorithms and discuss their theoretical bases. Our study provides practical recommendations for practitioners along with directions for future research for Markov chain Monte Carlo (MCMC) methodologists. Supplementary materials for this article are available online.
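To make the "doubly intractable" structure concrete, the standard formulation is a likelihood whose normalizing function depends on the parameter of interest (the symbols below are generic notation for illustration, not taken from the article):

```latex
p(x \mid \theta) = \frac{h(x,\theta)}{Z(\theta)},
\qquad
Z(\theta) = \int h(x,\theta)\, dx ,
\qquad
\pi(\theta \mid x) \propto \frac{p(\theta)\, h(x,\theta)}{Z(\theta)} .
```

Because $Z(\theta)$ is intractable, a standard Metropolis–Hastings acceptance ratio for a proposal $\theta'$ contains the unavailable ratio $Z(\theta)/Z(\theta')$ on top of the usual intractable posterior normalizing constant, hence "doubly" intractable.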
Supplementary Materials
The supplementary material provides details on how particles are selected in the adaptive exchange (AEX) algorithm and a description of the Russian roulette algorithm. It also describes how the computational complexity calculations for the various algorithms were carried out.
Acknowledgment
The authors are grateful to Anne-Marie Lyne, Ick Hoon Jin, and Yves Atchade for providing useful sample code and advice, and to Faming Liang, Galin Jones and John Hughes for helpful comments.