Link to Zoom Webinar: https://washington.zoom.us/j/93146867819
Webinar ID: 931 4686 7819
UBC Speaker: Trevor Campbell, Assistant Professor
UW Speaker: Alex Luedtke, Assistant Professor
Title (Campbell): Parallel Tempering on Optimized Paths
Title (Luedtke): Using Deep Adversarial Learning to Construct Optimal Statistical Procedures
Abstract (Campbell): Parallel tempering (PT) is a class of Markov chain Monte Carlo algorithms that constructs a path of distributions annealing between a tractable reference and an intractable target, and then interchanges states along the path to improve mixing in the target. The performance of PT depends on how quickly a sample from the reference distribution makes its way to the target, which in turn depends on the particular path of annealing distributions. However, past work on PT has used only simple paths constructed from convex combinations of the reference and target log-densities. In this talk I'll show that this path performs poorly in the common setting where the reference and target are nearly mutually singular. To address this issue, I'll present an extension of the PT framework to general families of paths, formulate the choice of path as an optimization problem that admits tractable gradient estimates, and present a flexible new family of spline interpolation paths for use in practice. Theoretical and empirical results will demonstrate that the proposed methodology breaks previously established upper performance limits for traditional paths.
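For readers unfamiliar with the baseline the talk builds on, here is a minimal Python sketch of parallel tempering with the standard linear (convex-combination) path between the reference and target log-densities. The toy densities, function names, and tuning constants are illustrative assumptions; the optimized spline paths that are the subject of the talk are not reproduced here.

import numpy as np

def log_ref(x):
    # tractable reference: standard normal log-density (up to a constant)
    return -0.5 * x**2

def log_target(x):
    # toy "hard" target: a well-separated bimodal mixture (up to a constant)
    return np.logaddexp(-0.5 * (x - 4.0)**2, -0.5 * (x + 4.0)**2)

def annealed_logpdf(x, beta):
    # linear path: convex combination of reference and target log-densities
    return (1.0 - beta) * log_ref(x) + beta * log_target(x)

def parallel_tempering(betas, n_iter=5000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    chains = np.zeros(len(betas))   # one chain per annealing level
    samples = []
    for _ in range(n_iter):
        # local random-walk Metropolis move at each annealing level
        for k, beta in enumerate(betas):
            prop = chains[k] + step * rng.normal()
            log_acc = annealed_logpdf(prop, beta) - annealed_logpdf(chains[k], beta)
            if np.log(rng.uniform()) < log_acc:
                chains[k] = prop
        # swap (communication) moves between adjacent levels
        for k in range(len(betas) - 1):
            b0, b1 = betas[k], betas[k + 1]
            x0, x1 = chains[k], chains[k + 1]
            log_acc = (annealed_logpdf(x1, b0) + annealed_logpdf(x0, b1)
                       - annealed_logpdf(x0, b0) - annealed_logpdf(x1, b1))
            if np.log(rng.uniform()) < log_acc:
                chains[k], chains[k + 1] = x1, x0
        samples.append(chains[-1])  # the chain at beta = 1 targets the intractable density
    return np.array(samples)

samples = parallel_tempering(np.linspace(0.0, 1.0, 10))

How quickly states injected at beta = 0 reach beta = 1 depends on the swap acceptance rates along the path, which is the quantity the talk's optimized paths are designed to improve.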
Abstract (Luedtke): Traditionally, statistical procedures have been derived via analytic calculations whose validity often relies on sample size growing to infinity. We use tools from deep learning to develop a new approach, adversarial Monte Carlo meta-learning, for constructing optimal statistical procedures. Statistical problems are framed as two-player games in which Nature adversarially selects a distribution that makes it difficult for a Statistician to answer the scientific question using data drawn from this distribution. The players' strategies are parameterized via neural networks, and optimal play is learned by modifying the network weights over many repetitions of the game. In numerical experiments and data examples, this approach performs favorably compared to standard practice in point estimation, individual-level predictions, and interval estimation, without requiring specialized statistical knowledge.
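As a concrete illustration of the two-player framing (not the authors' implementation), here is a minimal, hypothetical PyTorch sketch for point estimation of a bounded normal mean: a Nature network generates difficult mean values, a Statistician network maps a sample to an estimate, and the two are trained by alternating gradient steps on Monte Carlo estimates of the squared-error risk. The architectures, sample size, and bounded-mean setup are assumptions chosen only to keep the example short.

import torch
import torch.nn as nn

n = 10        # hypothetical sample size seen by the Statistician
batch = 64    # games played per gradient step

# Nature maps noise to a mean in (-1, 1); the Statistician maps a sample of size n to an estimate.
nature = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())
statistician = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, 1))

opt_nature = torch.optim.Adam(nature.parameters(), lr=1e-3)
opt_stat = torch.optim.Adam(statistician.parameters(), lr=1e-3)

def play(freeze_nature):
    # One Monte Carlo estimate of the Statistician's squared-error risk against Nature.
    mu = nature(torch.randn(batch, 1))
    if freeze_nature:
        mu = mu.detach()                # hold Nature fixed during the Statistician's update
    x = mu + torch.randn(batch, n)      # data X_1, ..., X_n ~ N(mu, 1), one row per game
    est = statistician(x)
    return ((est - mu) ** 2).mean()

for step in range(2000):
    # Statistician update: minimize the risk against the current Nature.
    risk = play(freeze_nature=True)
    opt_stat.zero_grad()
    risk.backward()
    opt_stat.step()

    # Nature update: maximize the risk against the current Statistician.
    risk = play(freeze_nature=False)
    opt_nature.zero_grad()
    (-risk).backward()
    opt_nature.step()

The alternating minimize/maximize updates mirror standard adversarial (GAN-style) training; in the actual method, the statistical problem, risk, and network classes would differ from this toy setup.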
Bio (Campbell): Trevor Campbell is an Assistant Professor of Statistics at the University of British Columbia. His research focuses on automated, scalable Bayesian inference algorithms, Bayesian nonparametrics, streaming data, and Bayesian theory. He was previously a Postdoctoral Associate advised by Tamara Broderick in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a Ph.D. candidate under Jonathan How in the Laboratory for Information and Decision Systems (LIDS) at MIT; before that, he studied in the Engineering Science program at the University of Toronto.
Bio (Luedtke): Alex Luedtke is an Assistant Professor in the Department of Statistics at the University of Washington, with an affiliate appointment in the Vaccine and Infectious Disease Division at the Fred Hutchinson Cancer Research Center. He received his Sc.B. in Applied Math from Brown University in 2012 and completed his Ph.D. in Biostatistics at the University of California, Berkeley in 2016 under the supervision of Mark van der Laan. His research focuses on quantifying uncertainty about population-level effects while making minimal assumptions about how the data were generated. He works with both clinical trial data and observational data; when working with observational data, he applies methods from causal inference to elicit the assumptions under which a causal effect of an intervention can be estimated.