We are hiring!
- Postdoc (or doctoral) position in Probabilistic Machine Learning (FCAI co-supervised).
Please see job postings below.
Our group is committed to principles of diversity, equality and inclusion. We encourage applications from women, racial and ethnic minorities, and other under-represented individuals.
Postdoc (or doctoral) position in Probabilistic Machine Learning
The Machine and Human Intelligence research group has a potential opening for a co-supervised postdoc position (or for an excellent PhD candidate) funded by the Finnish Center for Artificial Intelligence (FCAI). Projects in our group relate to sample-efficient and active-sampling approaches in probabilistic machine learning. The candidate will join a research team of other FCAI postdocs and professors based at the University of Helsinki and Aalto University (Finland).
Examples of research topics in our group are listed below. Please note that these are presented as examples; we would also consider related projects. Please get in touch if interested.
- Fast active-sampling approximate Bayesian inference for everyone.
In recent years, a new approach to approximate Bayesian inference has emerged alongside the traditional workhorses (MCMC and variational inference). Active-sampling Bayesian inference aims to build posterior distributions (and approximations of the model evidence) in a sample-efficient way by constructing a statistical surrogate of the posterior (or likelihood), for example via a Gaussian process, and then actively evaluating the log-likelihood or log-joint distribution where needed to efficiently update the surrogate model. This approach is similar to Bayesian optimization, but differs in its goal: learning the posterior distribution (and/or the marginal likelihood) rather than only locating an optimum. Crucially, thanks to recent advances in Bayesian nonparametrics, this approach is no longer limited to “expensive” models; it could become part of the standard Bayesian workflow for many models, affording cheap, uncertainty-aware posterior approximations with only a small number of evaluations. This project is concerned with pushing the state of the art of active-sampling Bayesian inference algorithms, in terms of both theory and implementation, to obtain a new instrument for approximate inference that is widely accessible, fast, and fail-safe. The ideal candidate has prior experience with Gaussian processes and active learning (e.g., Bayesian optimization), both in theory and with modern software implementations (e.g., GPyTorch).
Potential co-supervisors: Luigi Acerbi (University of Helsinki), Aki Vehtari (Aalto University), Samuel Kaski (Aalto University), Arto Klami (University of Helsinki).
Keywords: Bayesian inference; active learning; Gaussian processes; Bayesian optimization.
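To give a flavor of the idea, here is a minimal toy sketch of the active-sampling loop described above: a Gaussian-process surrogate of a (pretend-expensive) log-likelihood is refined by repeatedly evaluating the target where the surrogate is most uncertain. All names, kernel choices, and the variance-based acquisition rule are illustrative simplifications, not the methods developed in the project.

```python
import numpy as np

# Toy 1-D target: an "expensive" log-likelihood (here just a quadratic).
def log_lik(theta):
    return -0.5 * (theta - 1.0) ** 2

# Squared-exponential kernel between two 1-D point sets.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Exact GP regression posterior (mean and pointwise variance) at Xs.
def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.maximum(np.diag(rbf(Xs, Xs) - v.T @ v), 0.0)
    return mu, var

grid = np.linspace(-3.0, 4.0, 200)
X = np.array([-2.0, 0.0, 3.0])           # small initial design
y = log_lik(X)

# Active-sampling loop: evaluate the target where the surrogate
# is most uncertain, then update the surrogate.
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    theta_next = grid[np.argmax(var)]
    X = np.append(X, theta_next)
    y = np.append(y, log_lik(theta_next))

mu, var = gp_posterior(X, y, grid)
# After ~13 evaluations, the surrogate mean tracks the true
# log-likelihood, and its peak sits near the true mode (theta = 1).
```

A real algorithm would use a principled acquisition function and exponentiate the surrogate to obtain posterior and evidence estimates; the point here is only the sample-efficient "evaluate, update surrogate, repeat" structure.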
- Bayesian deep active learning for amortized inference of simulator models.
Recent approaches to inference in simulator models exploit the power of flexible deep neural density estimators to iteratively learn a direct mapping from summary statistics of the data to the posterior distribution, skipping the intermediate steps of approximate inference. In some limited cases, the trained networks can be immediately used on new data, achieving the holy grail of amortized Bayesian inference — inference at virtually no cost at runtime. However, training these networks requires a very large number of samples from the model, and the mapping to the posterior has no notion of uncertainty, meaning that the network could fail silently in unseen regions of parameter space. This project is concerned with applying Bayesian principles of uncertainty estimation and active learning to develop a new generation of algorithms for sample-efficient training of robust, safe emulator networks for simulator-based inference. The ideal candidate has prior experience with deep learning and Bayesian methods.
Potential co-supervisors: Luigi Acerbi (University of Helsinki), Jukka Corander (University of Helsinki), Samuel Kaski (Aalto University).
Keywords: Simulator-based inference; Bayesian deep learning; active learning; neural density estimators; amortized inference.
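The amortization idea above can be illustrated with a deliberately tiny toy: instead of a deep neural density estimator, a linear regressor is trained offline on simulated (parameter, data-summary) pairs, after which "inference" on new data is a single function evaluation. The simulator, prior, and summary statistic below are illustrative stand-ins, not the project's intended models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: n observations x ~ N(theta, 1).
def simulate(theta, n=20):
    return rng.normal(theta, 1.0, size=n)

# Offline training phase: draw theta from the prior N(0, 2^2),
# run the simulator, and record a data summary (the sample mean).
thetas = rng.normal(0.0, 2.0, size=5000)
summaries = np.array([simulate(t).mean() for t in thetas])

# Fit the "amortized" map from summary to posterior mean
# (a linear stand-in for a neural density estimator).
A = np.vstack([summaries, np.ones_like(summaries)]).T
w, _, _, _ = np.linalg.lstsq(A, thetas, rcond=None)

# Runtime phase: new data arrives; inference is now a cheap
# evaluation of the trained map, with no further simulation.
x_new = simulate(1.5)
theta_hat = w[0] * x_new.mean() + w[1]
```

For this conjugate Gaussian toy the learned slope approaches the analytic posterior-mean coefficient n/(n + 1/4) with n = 20, so the amortized map recovers the correct Bayesian shrinkage; the project concerns the much harder setting where the posterior map is a deep network and training samples are expensive, motivating active learning and uncertainty estimation.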
To apply for the position, please send:
- your CV with list of publications (or link to Google Scholar);
- a motivation letter explicitly specifying why you are applying for this position;
- transcripts of your relevant degrees; and
- contact information for at least two referees (who could provide a letter of recommendation) to firstname.lastname@example.org.
Please also get in touch for informal inquiries. Applications will be considered until the position is filled.