Learned operator correction in inverse problems

Andreas Hauptmann (University of Oulu), 25.04.2022, Exactum B120 (hybrid via Zoom), 2pm-4pm

For Zoom access, please contact Bjørn Jensen.

Iterative model-based reconstruction approaches for high-dimensional problems with non-trivial forward operators can be highly time-consuming. It is therefore desirable to employ model reduction techniques, both to speed up reconstructions in variational approaches and to enable the training of learned model-based techniques. However, reduced or approximate models can degrade reconstruction quality, and this degradation needs to be accounted for. In this talk we discuss the possibility of learning a data-driven explicit model correction for inverse problems, and whether such a correction can be used within a variational framework to obtain regularized reconstructions. We will discuss the conceptual difficulty of learning such a forward model correction and derive conditions under which solutions to the variational problem with a learned correction converge to solutions obtained with the accurate model.
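The idea can be illustrated with a minimal toy example. Here the accurate operator A, the cheap approximation A_tilde, and the regularization weight alpha are all made up for illustration, and the learned correction F is just a linear map fit by least squares on training pairs — a stand-in for the neural-network corrections considered in the talk, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accurate forward operator A and a cheap, inexact
# approximation A_tilde (in practice A could be an expensive PDE solver).
n, m = 20, 30
A = rng.standard_normal((m, n))
A_tilde = A + 0.3 * rng.standard_normal((m, n))  # perturbed model

# "Learn" an explicit correction F so that F(A_tilde x) ~ A x.
# Here F is linear and fit by least squares on simulated training data.
X_train = rng.standard_normal((n, 200))
F = (A @ X_train) @ np.linalg.pinv(A_tilde @ X_train)

# Variational reconstruction with the corrected model:
#   min_x ||F A_tilde x - y||^2 + alpha ||x||^2,
# which for this quadratic problem has a closed-form solution.
x_true = rng.standard_normal(n)
y = A @ x_true                      # data generated by the accurate model
B = F @ A_tilde                     # corrected forward operator
alpha = 1e-3
x_rec = np.linalg.solve(B.T @ B + alpha * np.eye(n), B.T @ y)

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3e}")
```

In this linear toy setting the correction can be learned exactly, so the corrected reconstruction nearly recovers x_true; the interesting questions addressed in the talk arise precisely when F is a nonlinear learned map and such exactness is unavailable.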

This talk is based on joint work with Simon Arridge, Carola Schönlieb, Tanja Tarvainen, and Sebastian Lunz.