Bayesian Computing

Connecting the Dots: Numerical Randomized Hamiltonian Monte Carlo with State-Dependent Event Rates

Pages 1238-1253 | Received 24 Feb 2021, Accepted 06 Apr 2022, Published online: 19 May 2022

References

  • Abraham, M. J., Murtola, T., Schulz, R., Páll, S., Smith, J. C., Hess, B., and Lindahl, E. (2015), “GROMACS: High Performance Molecular Simulations Through Multi-Level Parallelism from Laptops to Supercomputers,” SoftwareX, 1–2, 19–25. DOI: 10.1016/j.softx.2015.06.001.
  • Andersen, H. C. (1980), “Molecular Dynamics Simulations at Constant Pressure and/or Temperature,” The Journal of Chemical Physics, 72, 2384–2393. DOI: 10.1063/1.439486.
  • Bierkens, J., Fearnhead, P., and Roberts, G. (2019), “The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data,” The Annals of Statistics, 47, 1288–1320. DOI: 10.1214/18-AOS1715.
  • Bierkens, J., Grazzi, S., Kamatani, K., and Roberts, G. (2020), “The Boomerang Sampler,” arXiv:2006.13777.
  • Bou-Rabee, N., and Eberle, A. (2020), “Couplings for Andersen Dynamics,” arXiv:2009.14239.
  • Bou-Rabee, N., and Eberle, A. (2021), “Mixing Time Guarantees for Unadjusted Hamiltonian Monte Carlo,” arXiv:2105.00887.
  • Bou-Rabee, N., and Sanz-Serna, J. M. (2017), “Randomized Hamiltonian Monte Carlo,” Annals of Applied Probability, 27, 2159–2194.
  • Bou-Rabee, N., and Sanz-Serna, J. M. (2018), “Geometric Integrators and the Hamiltonian Monte Carlo Method,” Acta Numerica, 27, 113–206.
  • Bou-Rabee, N., and Schuh, K. (2020), “Convergence of Unadjusted Hamiltonian Monte Carlo for Mean-Field Models,” arXiv:2009.08735.
  • Bouchard-Côté, A., Vollmer, S. J., and Doucet, A. (2018), “The Bouncy Particle Sampler: A Nonreversible Rejection-Free Markov Chain Monte Carlo Method,” Journal of the American Statistical Association, 113, 855–867. DOI: 10.1080/01621459.2017.1294075.
  • Cambanis, S., Huang, S., and Simons, G. (1981), “On the Theory of Elliptically Contoured Distributions,” Journal of Multivariate Analysis, 11, 368–385. DOI: 10.1016/0047-259X(81)90082-8.
  • Cancès, E., Legoll, F., and Stoltz, G. (2007), “Theoretical and Numerical Comparison of Some Sampling Methods for Molecular Dynamics,” ESAIM: Mathematical Modelling and Numerical Analysis, 41, 351–389. DOI: 10.1051/m2an:2007014.
  • Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. (2017), “Stan: A Probabilistic Programming Language,” Journal of Statistical Software, 76, 1–32. DOI: 10.18637/jss.v076.i01.
  • Chen, Z., and Vempala, S. S. (2019), “Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20–22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA, Volume 145 of LIPIcs, eds. D. Achlioptas and L. A. Végh, pp. 64:1–64:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
  • Cheng, X., Chatterji, N. S., Bartlett, P. L., and Jordan, M. I. (2018), “Underdamped Langevin MCMC: A Non-Asymptotic Analysis,” in Proceedings of the 31st Conference On Learning Theory, Volume 75 of Proceedings of Machine Learning Research, eds. S. Bubeck, V. Perchet, and P. Rigollet, pp. 300–323. PMLR.
  • Chopin, N., and Ridgway, J. (2017), “Leave Pima Indians Alone: Binary Regression as a Benchmark for Bayesian Computation,” Statistical Science, 32, 64–87. DOI: 10.1214/16-STS581.
  • Cotter, S., House, T., and Pagani, F. (2020), “The NuZZ: Numerical ZigZag Sampling for General Models,” arXiv:2003.03636.
  • Davis, M. H. A. (1984), “Piecewise-Deterministic Markov Processes: A General Class of Non-diffusion Stochastic Models,” Journal of the Royal Statistical Society, Series B, 46, 353–376. DOI: 10.1111/j.2517-6161.1984.tb01308.x.
  • Davis, M. H. A. (1993), Markov Models and Optimization, London: Chapman & Hall.
  • Deligiannidis, G., Paulin, D., Bouchard-Côté, A., and Doucet, A. (2018), “Randomized Hamiltonian Monte Carlo as Scaling Limit of the Bouncy Particle Sampler and Dimension-Free Convergence Rates,” arXiv:1808.04299.
  • Dormand, J., and Prince, P. (1987), “Runge-Kutta-Nystrom Triples,” Computers & Mathematics with Applications, 13, 937–949.
  • Fang, Y., Sanz-Serna, J. M., and Skeel, R. D. (2014), “Compressible Generalized Hybrid Monte Carlo,” The Journal of Chemical Physics, 140, 174108. DOI: 10.1063/1.4874000.
  • Fearnhead, P., Bierkens, J., Pollock, M., and Roberts, G. O. (2018), “Piecewise Deterministic Markov Processes for Continuous-Time Monte Carlo,” Statistical Science, 33, 386–412. DOI: 10.1214/18-STS648.
  • Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. (2014), Bayesian Data Analysis, (3rd ed.), Boca Raton, FL: CRC Press.
  • Geyer, C. J. (1992), “Practical Markov Chain Monte Carlo,” Statistical Science, 7, 473–483. DOI: 10.1214/ss/1177011137.
  • Giles, M. B. (2015), “Multilevel Monte Carlo Methods,” Acta Numerica, 24, 259–328. DOI: 10.1017/S096249291500001X.
  • Girolami, M., and Calderhead, B. (2011), “Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods,” Journal of the Royal Statistical Society, Series B, 73, 123–214. DOI: 10.1111/j.1467-9868.2010.00765.x.
  • Goldstein, H., Poole, C., and Safko, J. (2002), Classical Mechanics (3rd ed.), Boston: Addison Wesley.
  • Golosnoy, V., Gribisch, B., and Liesenfeld, R. (2012), “The Conditional Autoregressive Wishart Model for Multivariate Stock Market Volatility,” Journal of Econometrics, 167, 211–223. DOI: 10.1016/j.jeconom.2011.11.004.
  • Grothe, O., Kleppe, T. S., and Liesenfeld, R. (2019), “The Gibbs Sampler with Particle Efficient Importance Sampling for State-Space Models,” Econometric Reviews, 38, 1152–1175. DOI: 10.1080/07474938.2018.1536098.
  • Hairer, E., Nørsett, S. P., and Wanner, G. (1993), Solving Ordinary Differential Equations I (2nd Rev. Ed.): Nonstiff Problems, Berlin, Heidelberg: Springer-Verlag.
  • Hoffman, M. D., and Gelman, A. (2014), “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo,” Journal of Machine Learning Research, 15, 1593–1623.
  • Horowitz, A. M. (1991), “A Generalized Guided Monte Carlo Algorithm,” Physics Letters B, 268, 247–252. DOI: 10.1016/0370-2693(91)90812-5.
  • Kleppe, T. S. (2016), “Adaptive Step Size Selection for Hessian-based Manifold Langevin Samplers,” Scandinavian Journal of Statistics, 43, 788–805. DOI: 10.1111/sjos.12204.
  • Lee, Y. T., Song, Z., and Vempala, S. S. (2018), “Algorithmic Theory of ODEs and Sampling from Well-Conditioned Logconcave Densities,” arXiv:1812.06243.
  • Leimkuhler, B., and Matthews, C. (2015), Molecular Dynamics With Deterministic and Stochastic Numerical Methods, New York: Springer.
  • Leimkuhler, B., and Reich, S. (2004), Simulating Hamiltonian Dynamics, Cambridge: Cambridge University Press.
  • Li, D. (2007), “On the Rate of Convergence to Equilibrium of the Andersen Thermostat in Molecular Dynamics,” Journal of Statistical Physics, 129, 265–287. DOI: 10.1007/s10955-007-9391-0.
  • Livingstone, S., Faulkner, M. F., and Roberts, G. O. (2019), “Kinetic Energy Choice in Hamiltonian/Hybrid Monte Carlo,” Biometrika, 106, 303–319. DOI: 10.1093/biomet/asz013.
  • Lu, J., and Wang, L. (2020), “On Explicit L2-Convergence Rate Estimate for Piecewise Deterministic Markov Processes in MCMC Algorithms,” arXiv:2007.14927.
  • Mackenzie, P. B. (1989), “An Improved Hybrid Monte Carlo Method,” Physics Letters B, 226, 369–371. DOI: 10.1016/0370-2693(89)91212-4.
  • Mangoubi, O., and Smith, A. (2017), “Rapid Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions,” arXiv:1708.07114.
  • Mangoubi, O., and Smith, A. (2019), “Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions 2: Numerical Integrators,” in Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, Volume 89 of Proceedings of Machine Learning Research, eds. K. Chaudhuri and M. Sugiyama, pp. 586–595. PMLR.
  • Michie, D., Spiegelhalter, D. J., and Taylor, C. C. (eds.) (1994), Machine Learning, Neural and Statistical Classification, Series in Artificial Intelligence, Hemel Hempstead, Hertfordshire: Ellis Horwood.
  • Neal, R. M. (2003), “Slice Sampling,” The Annals of Statistics, 31, 705–767. DOI: 10.1214/aos/1056562461.
  • Neal, R. M. (2010), “MCMC Using Hamiltonian Dynamics,” in Handbook of Markov Chain Monte Carlo, eds. S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, pp. 113–162, Boca Raton, FL: CRC Press.
  • Nishimura, A., and Dunson, D. (2020), “Recycling Intermediate Steps to Improve Hamiltonian Monte Carlo,” Bayesian Analysis, 15, 1087–1108. DOI: 10.1214/19-BA1171.
  • Osmundsen, K. K., Kleppe, T. S., and Liesenfeld, R. (2021), “Importance Sampling-Based Transport Map Hamiltonian Monte Carlo for Bayesian Hierarchical Models,” Journal of Computational and Graphical Statistics, forthcoming. DOI: 10.1080/10618600.2021.1923519.
  • Pakman, A., and Paninski, L. (2014), “Exact Hamiltonian Monte Carlo for Truncated Multivariate Gaussians,” Journal of Computational and Graphical Statistics, 23, 518–542. DOI: 10.1080/10618600.2013.788448.
  • Robert, C. P., and Casella, G. (2004), Monte Carlo Statistical Methods (2nd ed.), New York: Springer.
  • Roberts, G. O., and Rosenthal, J. S. (1998), “Optimal Scaling of Discrete Approximations to Langevin Diffusions,” Journal of the Royal Statistical Society, Series B, 60, 255–268. DOI: 10.1111/1467-9868.00123.
  • Rudolf, D., and Schweizer, N. (2018), “Perturbation Theory for Markov Chains via Wasserstein Distance,” Bernoulli, 24, 2610–2639. DOI: 10.3150/17-BEJ938.
  • Sanz-Serna, J., and Calvo, M. (1994), Numerical Hamiltonian Problems, New York: Dover Publications Inc.
  • Stan Development Team. (2017), “Stan Modeling Language Users Guide and Reference Manual version 2.17.0.”
  • Vanetti, P., Bouchard-Côté, A., Deligiannidis, G., and Doucet, A. (2018), “Piecewise-Deterministic Markov Chain Monte Carlo,” arXiv:1707.05296v2.
  • Weinan, E., and Li, D. (2008), “The Andersen Thermostat in Molecular Dynamics,” Communications on Pure and Applied Mathematics, 61, 96–136. DOI: 10.1002/cpa.20198.
  • Welling, M., and Teh, Y. W. (2011), “Bayesian Learning via Stochastic Gradient Langevin Dynamics,” in Proceedings of the 28th International Conference on International Conference on Machine Learning, Madison, WI, USA, pp. 681–688, Omnipress.
  • Wu, C., Stoehr, J., and Robert, C. P. (2018), “Faster Hamiltonian Monte Carlo by Learning Leapfrog Scale,” arXiv:1810.04449.