Automatic Massively Parallel MCMC with Quantifiable Error

Tuesday, June 25, 2024 - 11:00 to 12:00
Miguel Biron-Lattes, UBC Statistics PhD student
ESB 4192 / Zoom

To join this seminar virtually via Zoom, please register here.

Abstract: Simulated Tempering (ST) is an MCMC algorithm for complex target distributions that operates on a path between the target and an amenable reference distribution. Crucially, if the reference enables i.i.d. sampling, ST is regenerative and therefore embarrassingly parallel. However, the difficulty of tuning ST has hindered its widespread adoption. In this work, we develop a simple nonreversible ST (NRST) algorithm, a general theoretical analysis of ST, and an automated tuning procedure for ST. This procedure enables straightforward integration of NRST into existing probabilistic programming languages. We provide extensive experimental evidence that our tuning scheme improves the performance and robustness of NRST algorithms on a diverse set of probabilistic models.
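To make the idea concrete, here is a minimal sketch of (reversible, untuned) simulated tempering, not the authors' NRST algorithm or tuning procedure. It interpolates between a standard normal reference and an illustrative bimodal target along a geometric path, alternates local exploration moves with temperature moves, and regenerates with an i.i.d. draw whenever the chain reaches the reference. The path, grid, and zeroed affinity constants `c` are assumptions for illustration; estimating those constants well is precisely the tuning problem the talk addresses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Annealing path between a N(0,1) reference (beta = 0) and a bimodal
# target (beta = 1): log pi_beta(x) = (1-beta) log pi0(x) + beta log pi1(x)
def log_ref(x):
    return -0.5 * x**2

def log_target(x):
    # Unnormalized mixture of N(-4, 1) and N(4, 1)
    return np.logaddexp(-0.5 * (x - 4.0)**2, -0.5 * (x + 4.0)**2)

def log_path(x, beta):
    return (1.0 - beta) * log_ref(x) + beta * log_target(x)

betas = np.linspace(0.0, 1.0, 11)  # temperature grid
# c[k] should approximate the log normalizing constants of pi_beta;
# zero is the naive untuned choice used here only for illustration.
c = np.zeros(len(betas))

x, k = rng.standard_normal(), 0
regenerations = 0
samples = []
for _ in range(20000):
    # Explorer move: random-walk Metropolis within the current distribution
    y = x + rng.standard_normal()
    if np.log(rng.uniform()) < log_path(y, betas[k]) - log_path(x, betas[k]):
        x = y
    # Temperature move: propose a neighboring beta on the grid
    j = k + rng.choice([-1, 1])
    if 0 <= j < len(betas):
        log_a = (log_path(x, betas[j]) - c[j]) - (log_path(x, betas[k]) - c[k])
        if np.log(rng.uniform()) < log_a:
            k = j
    # Regeneration: at the reference the chain restarts i.i.d., so the
    # tours between visits to beta = 0 are independent (parallelizable).
    if k == 0:
        x = rng.standard_normal()
        regenerations += 1
    if k == len(betas) - 1:
        samples.append(x)  # draws attributed to the target
```

Each segment of the trajectory between regenerations is an independent "tour", which is what makes a well-tuned ST embarrassingly parallel.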

NRST can be seen as a meta-MCMC algorithm, in that an explorer Markov chain is required to make local moves within distributions in the path, while NRST orchestrates movement along the path. Gradient-based methods like the Metropolis-adjusted Langevin algorithm (MALA) produce Markov chains that scale favorably with dimension. However, MALA depends critically on a step size parameter, and tuning it requires too much work to be practical within NRST. To resolve this issue we introduce autoMALA, an improved version of MALA that automatically sets its step size at each iteration based on the local geometry of the target distribution. We prove that autoMALA preserves the target measure despite continual adjustments of the step size. Our experiments demonstrate that autoMALA is competitive with related state-of-the-art MCMC methods in terms of the number of density evaluations per effective sample, and that it outperforms state-of-the-art samplers on targets with varying geometries.
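For readers unfamiliar with the explorer being adapted, here is a minimal sketch of one plain MALA step with a fixed step size, not the autoMALA variant presented in the talk. The target (a standard normal) and the step size value are assumptions for illustration; the point is that the Langevin proposal and its Metropolis correction both depend on `step_size`, which is why poor tuning degrades performance.

```python
import numpy as np

def mala_step(x, log_density, grad_log_density, step_size, rng):
    """One Metropolis-adjusted Langevin step with a fixed step size."""
    # Langevin proposal: drift along the gradient plus Gaussian noise
    y = (x + step_size * grad_log_density(x)
         + np.sqrt(2.0 * step_size) * rng.standard_normal(x.shape))

    def log_q(a, b):
        # log density (up to a constant) of proposing a from b
        diff = a - (b + step_size * grad_log_density(b))
        return -np.sum(diff**2) / (4.0 * step_size)

    # Metropolis-Hastings correction keeps the target invariant
    log_alpha = (log_density(y) - log_density(x)
                 + log_q(x, y) - log_q(y, x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False

# Example: standard normal target in 2 dimensions
rng = np.random.default_rng(0)
logp = lambda x: -0.5 * np.sum(x**2)
grad = lambda x: -x
x, accepts = np.zeros(2), 0
for _ in range(1000):
    x, acc = mala_step(x, logp, grad, step_size=0.5, rng=rng)
    accepts += acc
rate = accepts / 1000  # acceptance rate depends strongly on step_size
```

Rerunning this with a much larger or smaller `step_size` shows the acceptance rate collapsing toward 0 or the chain barely moving, which illustrates why a per-iteration, geometry-aware step size (as in autoMALA) is attractive.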