Joseph Sakaya defends his PhD thesis From Approximations to Decisions

On Wednesday the 17th of February 2021, M.Sc. Joseph Sakaya will defend his doctoral thesis From Approximations to Decisions. The thesis is related to research done in the Department of Computer Science and in the Multi-source Probabilistic Inference group at the University of Helsinki.

M.Sc. Joseph Sakaya defends his doctoral thesis From Approximations to Decisions on Wednesday the 17th of February 2021 at 16:00 in the University of Helsinki Chemicum building, Hall A129 (A. I. Virtasen aukio 1, 1st floor). His opponent is University Lecturer José Miguel Hernández-Lobato (University of Cambridge, United Kingdom), and the custos is Professor Petri Myllymäki (University of Helsinki). The defence will be held in English. It is possible to follow the defence as a live stream at https://helsinki.zoom.us/j/64731375374?pwd=WmJmTkJQQ0J6Y1hVZ09zNHhwck10QT09.

Sakaya's supervisor has been Assistant Professor Arto Klami (University of Helsinki).

From Approximations to Decisions

Bayesian models capture the intrinsic variability of a data-generating process as a posterior distribution over the parameters of a model of that process. Decisions that are optimal for a user-defined loss are obtained by minimizing the expectation of the loss over the posterior. Because posterior inference is often intractable, approximations of the posterior are obtained either by sampling with Markov chain Monte Carlo methods or through variational methods, which minimize a discrepancy measure between an approximation and the true posterior. Probabilistic programming offers practitioners tools that combine easy model specification with automatic approximate inference techniques. However, these techniques do not yet accommodate posterior calibrations that yield decisions that are optimal under the posterior expected loss.
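In standard notation (ours, not necessarily the thesis's), the Bayes-optimal decision a* minimizes the posterior expectation of a loss l, and variational inference replaces the intractable posterior with the closest member q* of a tractable family Q:

    a^{\ast} = \arg\min_{a} \mathbb{E}_{p(\theta \mid \mathcal{D})}\!\left[\ell(a, \theta)\right]
             = \arg\min_{a} \int \ell(a, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta,

    q^{\ast} = \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\!\left(q(\theta) \,\|\, p(\theta \mid \mathcal{D})\right),

where the Kullback-Leibler divergence is the usual choice of discrepancy, minimized in practice by maximizing the evidence lower bound.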

This thesis develops efficient and flexible variational approximations, as well as density transformations for flexible modeling of skewed data, for use in probabilistic programs. It also proposes extensions to the Bayesian decision framework and a suite of automatic loss-sensitive inference techniques for decision-making under posterior approximations. Briefly, we make four concrete contributions. First, we exploit importance sampling to approximate the objective gradient and show how to speed up convergence of stochastic gradient and stochastic average gradient descent for variational inference. Second, we propose a new way to model skewed data in probabilistic programs by introducing an improved version of the Lambert W distribution that is amenable to gradient-based inference. Finally, we propose two new techniques that better integrate decision-making into probabilistic programs: a gradient-based optimization routine for the loss-calibrated variational objective, specifically for the challenging case of continuous losses, and a combination of learning theory and Bayesian decision theory that uses a separate decision-making module to map the posterior to decisions minimizing the empirical risk.
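As a rough illustration of the starting point for the second contribution, the basic skew Lambert W transform (in Goerg's original formulation, not the improved version the thesis develops) pushes standard Gaussian noise u through u * exp(gamma * u) to produce skewed samples, and the Lambert W function inverts the transform so that densities, and hence gradients, remain available. The parameters gamma, mu, and sigma below are illustrative:

    import numpy as np
    from scipy.special import lambertw

    rng = np.random.default_rng(1)
    gamma, mu, sigma = 0.3, 0.0, 1.0

    # Forward transform: skew standard Gaussian noise.
    u = rng.standard_normal(10_000)
    # u -> u * exp(gamma * u) is monotone only for u > -1/gamma; we keep
    # the sketch in that region, where the principal branch of the
    # Lambert W function inverts the transform exactly.
    u = u[u > -1.0 / gamma + 1e-6]
    y = u * np.exp(gamma * u) * sigma + mu

    # Inverse transform: recover the latent Gaussian via Lambert W.
    z = (y - mu) / sigma
    u_rec = np.real(lambertw(gamma * z)) / gamma
    print(np.allclose(u, u_rec, atol=1e-6))  # True

And a toy sketch of the decision step itself (not the thesis's method): given samples from an approximate posterior, the Bayes decision under a user-defined loss can be estimated by Monte Carlo averaging over the samples. The model and the asymmetric loss below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical draws from an approximate posterior q(theta),
    # e.g. from MCMC or a fitted variational approximation.
    theta_samples = rng.normal(loc=1.0, scale=0.5, size=5_000)

    def loss(decision, theta):
        # Illustrative asymmetric loss: overshooting theta costs three
        # times as much as undershooting it.
        err = decision - theta
        return np.where(err > 0, 3.0 * err, -err)

    # The Bayes-optimal decision minimizes the posterior expectation of
    # the loss; estimate it by averaging over samples on a grid of
    # candidate decisions.
    candidates = np.linspace(-1.0, 3.0, 401)
    expected_loss = [loss(a, theta_samples).mean() for a in candidates]
    best = candidates[int(np.argmin(expected_loss))]
    print(f"Estimated Bayes decision: {best:.3f}")

Under an asymmetric loss like this one the optimal decision is not the posterior mean, which is precisely why inference that ignores the loss can produce suboptimal decisions.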

Availability of the dissertation

An electronic version of the doctoral dissertation is available on the e-thesis site of the University of Helsinki at http://urn.fi/URN:ISBN:978-951-51-7000-2.

Printed copies will be available on request from Joseph Sakaya: joseph.sakaya@helsinki.fi.