Will Welch's Suggested Papers (Updated 2025/02/03)


A. Sauer, R. B. Gramacy and D. Higdon (2023), "Active Learning for Deep Gaussian Process Surrogates", Technometrics, 65, 4--18, https://doi.org/10.1080/00401706.2021.2008505

The paper reviews Deep Gaussian Processes (DGPs) and applies them to active learning, i.e., sequential design of experiments. It is sufficient to focus on DGPs with non-adaptive designs, though perhaps varying the sample size, and to ignore the active-learning component of the paper. The authors' R package deepgp implements DGPs.
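To build intuition for what the hidden layers do, it can help to simulate from a two-layer DGP prior. The sketch below is not from the paper or the deepgp package; it is a minimal numpy illustration in which a latent GP draw W(x) warps the inputs before a second GP produces the output. The kernel and lengthscale choices are arbitrary assumptions for display purposes.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    # Squared-exponential kernel on 1-D inputs (illustrative choice)
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)

# Layer 1: latent warping W(x) drawn from a GP prior over x
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))
w = rng.multivariate_normal(np.zeros(len(x)), K)

# Layer 2: output Y drawn from a GP prior over the warped inputs W(x)
Kw = rbf_kernel(w, w) + 1e-8 * np.eye(len(x))
y = rng.multivariate_normal(np.zeros(len(x)), Kw)
```

Where W(x) is nearly flat, successive inputs are mapped close together and Y varies slowly; where W(x) changes quickly, Y wiggles. This is the mechanism by which a DGP can express nonstationarity that a single stationary GP cannot.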

There are two related tasks.

  1. Find a small number of example functions, say from the Virtual Library of Simulation Experiments (https://www.sfu.ca/~ssurjano/), where a 2- or 3-layer DGP has better, worse, or similar prediction accuracy compared with a regular GP with no hidden layers. Remember that your study is itself a simulation experiment, and you need to employ the principles of statistical design and analysis of experiments in your comparisons. The deepgp package has many options, and comparing options in an organized way could be part of your experiment too. Given the limited time available, it would be reasonable to vary only a few options you argue are likely to be most relevant.
  2. Use your examples to illustrate the advantages, or otherwise, of DGPs. There is no need to reproduce the math of the authors' paper. More important is to explain why there is an advantage or not in the specific context of each example. "Explain" will likely involve some conjecture, as the GPs in the hidden layers are indeed hidden. Some imaginative plotting of the predictions might nonetheless reveal structure that supports your conjectures.
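The skeleton of the comparison in task 1 might look like the following. This is a hedged numpy sketch, not the authors' method: it scores a plain zero-mean GP (with fixed, illustrative hyperparameters rather than fitted ones) on the 1-D Forrester test function from the Virtual Library, replicating over random designs so the RMSE comparison respects design-of-experiments principles. A DGP competitor, e.g. fit with deepgp in R, would be run on the same designs and hold-out grid for a paired comparison.

```python
import numpy as np

def forrester(x):
    # 1-D test function from the Virtual Library of Simulation Experiments
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def gp_predict(X, y, Xs, ls=0.15, var=25.0, nug=1e-6):
    # Zero-mean GP predictive mean with a squared-exponential kernel.
    # The hyperparameters are illustrative assumptions, not fitted values.
    def k(a, b):
        return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(X, X) + nug * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 200)            # hold-out grid for scoring
rmses = []
for rep in range(20):                   # replicate over random designs
    X = rng.uniform(0, 1, 15)           # 15-point random design
    y = forrester(X)
    mu = gp_predict(X, y, xs)
    rmses.append(np.sqrt(np.mean((mu - forrester(xs)) ** 2)))
# The DGP would be scored on the same (X, xs) pairs, and the paired
# RMSE differences analyzed, rather than comparing single runs.
```

Replication matters here because a single design can flatter either model by chance; the distribution of paired RMSE differences across designs is the quantity worth reporting.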
