Stochastic gradient methods for estimating expectations

Thursday, July 2, 2015 - 11:00
Sebastian Vollmer, University of Oxford
Statistics Seminar
Room 4192, Earth Science Building, 2207 Main Mall

Stochastic gradient methods have had a great impact on the tuning of large-scale models such as deep neural networks. This talk describes recent results on the use of stochastic gradient methods for approximating expectations with respect to probability distributions. Applying standard Markov chain Monte Carlo (MCMC) algorithms to large data sets is computationally expensive: both the calculation of the acceptance probability and the construction of informed proposals usually require an iteration through the whole data set. The recently proposed stochastic gradient Langevin dynamics (SGLD) method circumvents this problem by generating proposals based on only a subset of the data and by skipping the accept-reject step. The talk surveys two recent preprints providing rigorous foundations for SGLD with decreasing and non-decreasing step sizes, respectively.
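To make the idea concrete, here is a minimal sketch of the SGLD update on a toy model (data x_i ~ N(theta, 1) with prior theta ~ N(0, 10)); the model, function names, and parameter values are illustrative assumptions, not taken from the talk. Each iteration estimates the full-data gradient from a minibatch, rescales it by N / batch_size, and injects Gaussian noise with variance equal to the step size, with no accept-reject step.

```python
# Minimal SGLD sketch on a toy Gaussian model (illustrative, not from the talk).
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(2.0, 1.0, size=N)  # synthetic data with true mean 2.0

def grad_log_prior(theta):
    # d/dtheta of log N(theta; 0, 10)
    return -theta / 10.0

def grad_log_lik(theta, batch):
    # d/dtheta of sum_i log N(x_i; theta, 1) over the minibatch
    return np.sum(batch - theta)

def sgld(n_iters=5000, batch_size=100, step=1e-5):
    theta = 0.0
    samples = []
    for _ in range(n_iters):
        batch = rng.choice(data, size=batch_size, replace=False)
        # Unbiased stochastic estimate of the full-data gradient,
        # rescaled by N / batch_size; no accept-reject correction.
        grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
        # Langevin step: drift plus Gaussian noise with variance = step size.
        theta += 0.5 * step * grad + rng.normal(0.0, np.sqrt(step))
        samples.append(theta)
    return np.array(samples)

samples = sgld()
print(np.mean(samples[1000:]))  # after burn-in, close to the posterior mean near 2.0
```

The decreasing-step-size variant analyzed in the talk would shrink `step` over the iterations (e.g. step_t proportional to t^(-1/3)), trading slower mixing for vanishing discretization bias; the constant-step version above keeps a small asymptotic bias.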