Bayesian networks: regenerative Gibbs samplings

Pages 7554-7564 | Received 04 Nov 2019, Accepted 16 Oct 2020, Published online: 01 Jan 2021
