To join this seminar virtually: Please request Zoom connection details from ea [at] stat.ubc.ca
Abstract: With rapid improvements in the computational performance of modern computing hardware, Bayesian inference has become applicable to a far wider range of problems, and Bayesian statistical analysis is used more frequently. In Bayesian inference, most computational resources are devoted to running Markov chain Monte Carlo (MCMC) algorithms to obtain samples from posterior distributions. MCMC is the main route to implementing Bayesian inference, as it allows flexible, high-dimensional sampling. At the same time, however, researchers can encounter poor computational performance when Bayesian inference is applied to certain families of models, namely partially identified models, because the good performance of MCMC algorithms is not guaranteed in this setting. The parameters of a partially identified model are not uniquely identified, which makes it hard for off-the-shelf MCMC algorithms to sample from the posterior distribution. Importance sampling with transparent reparameterization (ISTP) is an effective computational remedy for posterior inference with partially identified models. With the ISTP algorithm, researchers can obtain better and more stable computational performance while retaining samples in the original parameterization. In this talk, we first examine scenarios in which computational performance deteriorates with partially identified models, and we compare the results of ISTP with those of an off-the-shelf MCMC algorithm. We then discuss the general applicability of ISTP and develop a diagnostic method for models suspected of partial or weak identification. Alongside ISTP, we introduce an R package for Bayesian inference with partially identified models. Lastly, we discuss what was accomplished, its limitations, and possible future improvements.