Ozan Öktem – KTH Royal Institute of Technology
Task adapted reconstruction for inverse problems
We consider the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. Several such tasks have been approached using deep neural networks, and recent advancements in image reconstruction using learned iterative schemes now enable a fully differentiable, end-to-end trainable imaging pipeline. The suggested framework is adaptable, with a plug-and-play structure for adjusting to both the inverse problem and the task at hand. The approach is demonstrated on joint tomographic image reconstruction and semantic segmentation.
See the related pre-print "Task adapted reconstruction for inverse problems".
This is joint work with Jonas Adler, Sebastian Lunz, Olivier Verdier and Carola-Bibiane Schönlieb.
Andreas Hauptmann – University College London
Learned image reconstruction for high-resolution tomographic imaging
Recent advances in deep learning for tomographic reconstructions have shown great potential to create accurate and high quality images with a considerable speed-up of reconstruction time. In this talk I will discuss two common approaches to combine deep learning methods, in particular convolutional neural networks (CNN), with model-based reconstruction techniques. These approaches are illustrated with two conceptually different imaging modalities:
For accelerated dynamic cardiovascular magnetic resonance, we can train a CNN to remove noise and aliasing artefacts from an initial reconstruction to obtain clinically useful information. For the more challenging problem of limited-view photoacoustic tomography, we instead need to train a network that performs an iterative reconstruction, feeding the model information back into the reconstruction algorithm to successively negate limited-view artefacts.
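The feedback structure of such a model-based learned iteration can be sketched in a few lines of NumPy. This is a toy illustration, not the method from the talk: `update_block` is an untrained stand-in (a plain damped gradient step) for the CNN block that would be trained in practice, and the forward operator `A` is an arbitrary matrix.

```python
import numpy as np

def data_gradient(A, x, y):
    """Gradient of the data-fit term 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

def update_block(grad, x):
    """Stand-in for a trained CNN update block of a learned
    iterative scheme; here just a damped gradient step."""
    return x - 0.1 * grad

def learned_iterative_recon(A, y, n_iter=5000):
    """Iterative reconstruction that feeds the forward-model
    gradient back into the (here untrained) learned update."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = update_block(data_gradient(A, x, y), x)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10)) / np.sqrt(20)   # toy forward operator
x_true = rng.normal(size=10)
y = A @ x_true
x_hat = learned_iterative_recon(A, y)
```

With a trained network in place of `update_block`, each iteration can also remove modality-specific artefacts rather than merely descend on the data fit.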
Sebastian Lunz – University of Cambridge
Adversarial Regularizers in Inverse Problems
We propose a framework for applying neural networks to the variational approach for inverse problems by learning a regularisation functional. The network learns to discriminate between the distribution of ground truth images and the distribution of unregularized reconstructions, leading to a functional that suppresses characteristic noise. Once trained, the network is applied to the inverse problem by solving the corresponding variational problem. Unlike other data-based approaches, the algorithm can be applied even if only unpaired training data is available. The approach is demonstrated on computed tomography reconstruction for lung scans.
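Once the critic is trained, reconstruction amounts to minimising a standard variational objective. The sketch below substitutes a hand-crafted smoothness penalty for the learned regulariser (the trained network is not available here) and solves the variational problem by plain gradient descent:

```python
import numpy as np

def regulariser(x):
    """Stand-in for the learned critic R_theta: a smoothness
    penalty (sum of squared finite differences)."""
    return np.sum(np.diff(x) ** 2)

def regulariser_grad(x):
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= 2 * d
    g[1:] += 2 * d
    return g

def variational_solve(A, y, lam=0.1, step=0.05, n_iter=5000):
    """Gradient descent on ||A x - y||^2 + lam * R(x)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (2 * A.T @ (A @ x - y) + lam * regulariser_grad(x))
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(15, 10)) / np.sqrt(15)   # toy forward operator
y = A @ np.linspace(0, 1, 10) + 0.01 * rng.normal(size=15)
x_hat = variational_solve(A, y)
```

The point of the adversarial training is precisely to replace the hand-crafted `regulariser` above with a functional learned from unpaired data.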
Jonas Adler – KTH
Deep Bayesian Inversion
The ability to better characterize statistical properties of solutions to many inverse problems is essential for decision making. Bayesian inversion offers a tractable framework for such an analysis, but current approaches are computationally infeasible for most realistic imaging applications in the clinic, and asymptotic characterizations rely on unrealistic assumptions.
We show how deep learning can be used for Bayesian inversion by introducing two novel methods: a sampling-based method using GANs and a direct approach that trains a neural network using a novel loss function. We demonstrate the capabilities of both methods by performing uncertainty quantification on ultra-low-dose 3D helical CT. We estimate the posterior mean and standard deviation of the 3D images and furthermore perform a Bayesian hypothesis test to assess the presence of a "dark spot" in the liver of a cancer-stricken patient.
This is joint work with Ozan Öktem.
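On a toy linear-Gaussian model the posterior is available in closed form, so sampling-based estimates of the posterior mean, standard deviation and a hypothesis-test probability can be illustrated directly. The exact sampler below is a stand-in for the conditional GAN sampler of the talk; the "spot" component and all model sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-Gaussian model: x ~ N(0, I), y = A x + noise.
n, m, s = 5, 8, 0.1
A = rng.normal(size=(m, n))
x_true = np.array([0.0, 0.0, 1.5, 0.0, 0.0])   # component 2 plays the "spot"
y = A @ x_true + s * rng.normal(size=m)

# Closed-form Gaussian posterior (for real CT data a trained
# conditional GAN would replace this exact sampler).
prec = A.T @ A / s ** 2 + np.eye(n)        # posterior precision
cov = np.linalg.inv(prec)
mean = cov @ A.T @ y / s ** 2              # posterior mean

samples = rng.multivariate_normal(mean, cov, size=5000)
post_mean = samples.mean(axis=0)
post_std = samples.std(axis=0)

# Bayesian hypothesis test: posterior probability that the spot
# intensity exceeds a threshold of 1.
p_spot = np.mean(samples[:, 2] > 1.0)
```

The same pipeline — draw posterior samples, then compute means, standard deviations and exceedance probabilities — carries over unchanged once the sampler is a deep generative model.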
Axel Ringh – KTH
Learning to solve inverse problems using Wasserstein loss
We propose using the Wasserstein loss for training in inverse problems. In particular, we consider a learned primal-dual reconstruction scheme for ill-posed inverse problems using the Wasserstein distance as loss function in the learning. This is motivated by misalignments in training data, which, when using a standard mean squared error loss, could severely degrade reconstruction quality. We prove that training with the Wasserstein loss gives a reconstruction operator that correctly compensates for misalignments in certain cases, whereas training with the mean squared error gives a smeared reconstruction. Moreover, we demonstrate these effects by training a reconstruction algorithm using both mean squared error and optimal transport loss for a problem in computerized tomography.
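In one dimension the Wasserstein-1 distance has a closed form as the L1 distance between cumulative distribution functions, which makes the contrast with the mean squared error easy to demonstrate on a toy example: W1 grows with the misalignment of two spikes, while the squared error cannot tell a small shift from a large one.

```python
import numpy as np

def wasserstein1(u, v, dx=1.0):
    """W1 distance between two normalised 1-D densities on a common
    grid, via the closed form W1 = integral of |CDF_u - CDF_v|."""
    return np.sum(np.abs(np.cumsum(u) - np.cumsum(v))) * dx

def spike(i, n=100):
    """Unit point mass at grid position i."""
    e = np.zeros(n)
    e[i] = 1.0
    return e

# W1 grows linearly with the misalignment of the spike ...
w_small = wasserstein1(spike(50), spike(52))
w_large = wasserstein1(spike(50), spike(70))

# ... while the squared error is blind to how far the spike moved.
mse_small = np.sum((spike(50) - spike(52)) ** 2)
mse_large = np.sum((spike(50) - spike(70)) ** 2)
```

Averaging over misaligned training pairs under the squared error therefore smears the reconstruction, whereas the transport loss penalises the displacement itself.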
Sebastian Banert – KTH
How to accelerate convex optimisation with machine learning
In this talk, we present some ideas on how to design algorithms for convex optimisation with possibly nonsmooth functions and how to choose optimal parameters for them. The main point of the talk is that one can still obtain convergence guarantees for the neural networks resulting from this procedure. We will demonstrate their performance in variational regularisation of inverse problems in imaging.
This talk will present joint work with Axel Ringh, Jonas Adler, Jevgenija Rudzusīka, Johan Karlsson, and Ozan Öktem.
Roy Lederman – Yale University
Heterogeneity in Cryo-Electron Microscopy: High-Dimensional Movies of Molecules
Cryo-Electron Microscopy (cryo-EM) is an imaging technology that is revolutionizing structural biology; the Nobel Prize in Chemistry 2017 was recently awarded to Jacques Dubochet, Joachim Frank and Richard Henderson “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”. Cryo-electron microscopes produce a large number of very noisy two-dimensional projection images of individual frozen molecules. Unlike related methods, such as computed tomography (CT), the viewing direction of each image is unknown. The unknown directions, together with extreme levels of noise and additional technical factors, make the determination of the structure of molecules challenging. While other methods for structure determination, such as x-ray crystallography and nuclear magnetic resonance (NMR), measure ensembles of molecules, cryo-electron microscopes produce images of individual molecules. Therefore, cryo-EM could potentially be used to study mixtures of different conformations of molecules. Indeed, current algorithms have been very successful at analyzing homogeneous samples, and can recover some distinct conformations mixed in solutions, but the determination of multiple conformations, and in particular continua of similar conformations (continuous heterogeneity), remains one of the open problems in cryo-EM. In practice, some of the key components in “molecular machines” are mobile and therefore appear as very blurry regions in 3-D reconstructions of macro-molecular structures that are otherwise stunning in resolution and detail.
We will discuss “hyper-molecules,” the mathematical formulation of heterogeneous 3-D objects as higher-dimensional objects, and the machinery that goes into recovering these “hyper-objects” from data. We will discuss some of the information and computation challenges, and the role of models and learned models in recovering the structure of these heterogeneous components.
This is joint work with Joakim Andén and Amit Singer.
Axel Böhm – University of Vienna
A variable smoothing algorithm for convex optimization problems using stochastic gradients
We aim to solve a structured convex optimization problem in which a non-smooth function is composed with a linear operator. When opting for full splitting schemes, primal-dual type methods are usually employed, as they are effective and well studied. However, under the additional assumption of Lipschitz continuity of parts of the objective function, we can derive novel algorithms through regularization via the Moreau envelope. Applications can be found, e.g., in inverse problems, which lend themselves to stochastic methods by means of gradient estimators.
Rien Lagerwerf – CWI
Neural Network Feldkamp-Davis-Kress Algorithm
Computed Tomography (CT) is a broadly used non-destructive imaging modality, with applications in industrial quality assessment, materials science and medical imaging. In applications like these there are often limitations on the scanning time or x-ray dosage, resulting in measurements with a low number of projection angles or high noise levels. Moreover, with the development of high-resolution X-ray detectors, the projection data and the reconstructed volume become challenging to handle both memory-wise and computationally, creating a need for efficient reconstruction methods that can handle such data. We propose a neural network adaptation of the Feldkamp-Davis-Kress algorithm (NN-FDK). Here we design a multilayer perceptron (MLP) network such that the weights of the MLP coincide with the filters used in the FDK algorithm. The proposed method combines the computational efficiency of the FDK algorithm with the accuracy of iterative reconstruction methods. Moreover, due to the limited number of learnable parameters, training times are low, even for high-resolution cases.
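The filtering step that NN-FDK makes learnable can be sketched as follows. The classical Ram-Lak (ramp) filter supplies a natural initial weight vector, and applying a weight vector to every projection row in the Fourier domain is the role the MLP weights would play after training. The sinogram here is random data, for illustration only.

```python
import numpy as np

def ramp_filter(n):
    """Discrete Ram-Lak (ramp) filter in the Fourier domain; in an
    NN-FDK-style method, weights like these become the learnable
    parameters of the network."""
    return np.abs(np.fft.fftfreq(n))

def filter_projections(sinogram, weights):
    """Apply a 1-D filter to every projection row in the Fourier
    domain, as in the filtering step of FDK/FBP."""
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * weights, axis=1))

sino = np.random.default_rng(2).normal(size=(30, 64))   # 30 angles x 64 pixels
filtered = filter_projections(sino, ramp_filter(64))
```

Backprojecting `filtered` would complete an FDK-style reconstruction; learning `weights` from data replaces the fixed analytic filter while keeping the cheap FDK structure.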
Allard Hendriksen – CWI
Deep learning for CT with little training data
In recent years, several imaging fields, including computed tomography, have benefited from the use of deep learning methods. Nonetheless, successful practical application of these techniques is often inhibited by the lack of sufficient training data. In this talk, we present several approaches for applying deep neural networks to tomography problems where little or no training data is available. These neural networks can for instance be used to improve reconstruction quality, enabling analysis of more challenging samples than is currently possible. Results will be shown for various types of objects, and practical considerations, such as computational requirements and generalizability, will be discussed.
Hector Andrade Loarca – Technical University of Berlin
Learned wavefront set extractor for inverse problem regularization
The study of singularities plays an important role in several areas of imaging science, and since images are formed mostly by anisotropic features, the orientation of these singularities must be studied as well. The mathematical concept that captures the singularities of an image and their orientations is the wavefront set, which has been widely studied in computed tomography reconstruction. Like other pseudo-differential operators, the X-ray transform relates the wavefront set of the image to that of the sinogram; however, due to its continuous nature, it is not straightforward to compute the wavefront set from real data. While a problem like limited-angle tomography is typically severely ill-posed, an alternative formulation that only recovers an image with the same wavefront set is merely mildly ill-posed. In this talk I will present a method that uses fully convolutional neural networks and shearlets to compute the wavefront set of an image, and a path towards using it as a post-processing step for CT reconstruction.
Yoeri Boink – University of Twente
Joint Photoacoustic Reconstruction and Segmentation with a Partially Learned Algorithm
In this talk we will show that the learned primal-dual algorithm (L-PD) can be employed for a joint reconstruction and segmentation problem in tomography. In photoacoustic tomography there is great interest in the reconstruction of vascular geometries. Generally one reconstructs the initial pressure, which has a much lower intensity deep in the tissue than close to the surface. Post-processing a filtered backprojection is not a useful approach, since lower-intensity regions will be overshadowed by high-intensity streaks, making it almost impossible to reconstruct these parts of the vascular geometry. First, we empirically show the sensitivity of the L-PD method to changes in images and photoacoustic system settings. Second, we use this knowledge to set up experiments in which we learn a joint reconstruction and segmentation of blood vessels. Results are shown for sensor settings in both the limited-angle and the limited-view problem.
Carl Jidling – Uppsala University
Probabilistic approach for tomographic reconstruction
We consider the problem of using tomographic methods for reconstructing the internal structure of an object. To that end we propose a probabilistic approach in which the unknown quantity is modelled as a Gaussian process, which comes with high flexibility and automatic tuning of hyperparameters. The computational complexity is significantly reduced by utilising an approximation scheme well suited for the problem. The performance is illustrated on real-world problems from continuum mechanics and x-ray tomography.
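For a linear forward operator, the GP posterior mean is available in closed form by Gaussian conditioning. The sketch below uses a fixed squared-exponential kernel and toy "rays" that average a few neighbouring pixels; the automatic hyperparameter tuning and the complexity-reducing approximation scheme from the talk are omitted.

```python
import numpy as np

def gp_reconstruction(A, y, grid, noise_var=1e-4, ell=0.2):
    """Posterior mean of f ~ GP(0, k) observed as y = A f + noise,
    via closed-form Gaussian conditioning with a squared-exponential
    kernel (hyperparameters fixed by hand here)."""
    K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / ell ** 2)
    S = A @ K @ A.T + noise_var * np.eye(A.shape[0])
    return K @ A.T @ np.linalg.solve(S, y)

grid = np.linspace(0, 1, 50)
f_true = np.sin(2 * np.pi * grid)
A = np.zeros((10, 50))
for i in range(10):
    A[i, 5 * i:5 * i + 5] = 0.2      # each toy "ray" averages 5 pixels
y = A @ f_true
f_hat = gp_reconstruction(A, y, grid)
```

Because the measurement is linear, the whole reconstruction is one linear solve; the flexibility of the approach comes from choosing and tuning the kernel.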
Nikita Moriakov – Radboud University Medical Center
Deep Learning for Digital Breast Tomosynthesis
Digital breast tomosynthesis is rapidly replacing digital mammography as the basic x-ray technique for evaluation of the breasts. However, the sparse sampling and limited angular range give rise to different artifacts, which manufacturers try to solve in several ways. In this study we propose an extension of the Learned Primal-Dual algorithm for digital breast tomosynthesis. We extend the architecture by providing breast thickness measurements as a mask to the neural network and allow it to learn how to use this thickness mask. We have trained the algorithm on digital phantoms and the corresponding noise-free/noisy projections, and then tested the algorithm on digital phantoms for varying levels of noise. Reconstruction performance of the algorithms was compared visually, and using MSE loss and the Structural Similarity Index. Results indicate that the proposed algorithm outperforms the baseline iterative reconstruction algorithm in terms of reconstruction quality for both breast edges and internal structures, and is robust to noise. As an application, we will discuss using the model for computing patient-specific dose accumulation estimates.
Lassi Roininen – Lappeenranta University of Technology
Hierarchical Stochastic Partial Differential Representation of Deep Gaussian Processes in Computed Tomography
We consider the computed tomography (CT) problem of reconstructing the internal structure of an object from limited x-ray projections. In this work, we formulate the problem in a Bayesian framework in which the target function is modeled as a Gaussian process (GP) with a hierarchy of non-stationary Matérn class covariance functions. The hierarchical non-stationary Matérn GP with spatially varying length scale can be seen as an incarnation of a deep GP, and it is employed to optimally capture the edges or rapid changes of the target function. Unlike algorithms commonly used in limited x-ray tomography problems, in which tuning the prior parameters is required, the proposed GP method offers an easier setup, as it treats the prior parameters as part of the estimation. Simulated and real data are tested, and the advantages of the method are demonstrated with respect to more classical algorithms.
This is a collaboration with Simo Särkkä, Zenith Purisha, Karla Monterrubio-Gómez and Sari Lasanen.
Daniel Otero Baguer – University of Bremen
Deep image prior approaches for inverse problems
Deep image priors (DIP) have recently been introduced as a machine learning approach for some tasks in image processing. Usually, such machine learning approaches utilize large sets of training data; hence, it was somewhat surprising that deep image priors are based on a single data set. The success is partially based on quite complex network architectures, and so far the presented results are mainly experimental. In this talk we will show some theoretical analysis for rather specific network designs and linear operators.
Kai Lønning – Dutch Cancer Institute, Spinoza Centre for Neuroimaging
Reconstructing Sparsely Sampled MRI with Recurrent Inference Machines
MR images are reconstructed from measurements of the Fourier transform of the object within the scanner. Sampling this function requires shifting the magnetic gradients produced by the scanner, which is subject to both physiological and mechanical time constraints, making the imaging process slow relative to other modalities like CT. To decrease scan times, fewer samples are acquired than are necessary to reconstruct the true signal, producing aliasing artifacts in the image that must be removed in a post-processing step. The algorithms used for this purpose typically require a long time to retrieve a high-quality scan, but by using neural networks to learn the reconstruction process from fully sampled MR data, we can now achieve higher-quality reconstructions in just a fraction of the time, making real-time MR imaging a goal within reach. This talk will present results from structural imaging experiments as a first step toward this goal. A recurrent network architecture is used to learn an iterative optimization scheme to reconstruct sparsely sampled MR images, and its performance is demonstrated across different acceleration factors, contrast mechanisms and anatomical regions.
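Retrospective undersampling of k-space and the zero-filled baseline reconstruction can be simulated directly with the FFT; a learned reconstruction such as a recurrent inference machine would start from this aliased estimate and refine it iteratively. The image and sampling mask below are arbitrary toy choices.

```python
import numpy as np

def undersample_kspace(image, mask):
    """Simulate accelerated MRI: keep only the masked k-space samples."""
    return np.fft.fft2(image) * mask

def zero_filled_recon(kspace):
    """Baseline reconstruction: inverse FFT of the zero-filled
    k-space; a learned iterative scheme would refine this estimate."""
    return np.real(np.fft.ifft2(kspace))

rng = np.random.default_rng(4)
image = rng.random((32, 32))
mask = np.zeros((32, 32))
mask[:, ::4] = 1.0                # 4x acceleration: keep every 4th k-space line
recon = zero_filled_recon(undersample_kspace(image, mask))
```

With the full mask the reconstruction is exact; with the undersampled mask the regular line pattern folds shifted copies of the image on top of each other, which is precisely the aliasing the network is trained to remove.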
Markus Haltmeier – University of Innsbruck
Regularizing inverse problems with deep neural networks
We will analyze NETT (network Tikhonov) for the regularization of inverse problems. In this approach we use Tikhonov regularization with a regularizer defined by a neural network.
Johannes Schwab – University of Innsbruck
Deep Null Space Learning for Inverse Problems: Convergence Analysis and Rates
Recently, deep learning based methods have appeared as a new paradigm for solving inverse problems. These methods empirically show excellent performance but lack theoretical justification. We propose to use a trained deep neural network, called a null space network, combined with a classical regularization method as a reconstruction layer. The proposed deep null space learning approach is shown to be a regularization method, and convergence rates are derived.
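The key algebraic property of a null space network is easy to verify numerically: the learned component is projected onto the null space of the forward operator, so it can add detail without changing data consistency. Below, `network_stub` is a hypothetical placeholder for the trained network, and `A` is a toy underdetermined operator.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 10))          # toy underdetermined forward operator
A_pinv = np.linalg.pinv(A)
P_null = np.eye(10) - A_pinv @ A      # orthogonal projector onto null(A)

def network_stub(x):
    """Hypothetical placeholder for the trained null space network."""
    return np.tanh(x)

def null_space_reconstruction(x_reg):
    """Add the learned component only inside null(A), so the data
    consistency of the classical reconstruction is untouched."""
    return x_reg + P_null @ network_stub(x_reg)

x_reg = A_pinv @ (A @ rng.normal(size=10))   # classical (minimum-norm) recon
x_new = null_space_reconstruction(x_reg)
```

Because the added term lies in null(A), the regularization properties of the classical method carry over to the combined scheme, which is what enables the convergence analysis.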
Holger Kohr – Thermo Fisher Scientific
Cryo Electron Tomography: Introduction and Deep Learning Opportunities
This talk will give an introduction to both mathematical and practical aspects of electron tomography for life sciences. Major challenges in modelling, reconstruction and alignment will be discussed, and a few areas will be highlighted where deep learning techniques could help solve outstanding issues, or even enable new workflows.
The presentation is intended to spark interest and initiate discussions on promising future developments.
Maureen van Eijnatten – CWI
Deep learning for personalized medicine: applications in medical imaging and 3D printing
Over the last decade, advances in image processing algorithms and graphical processing power have extended the role of medical imaging far beyond traditional 2D visualization. The spatial information embedded in CT and MRI scans is being increasingly used to personalize treatments by employing technologies such as virtual surgical planning, 3D printing of personalized constructs such as anatomical models, surgical saw guides or implants, virtual and augmented reality, and robot-guided surgery. The occasional use of these emerging technologies has resulted in better treatment outcomes and reduced operating times and costs. However, the widespread use of these technologies is impeded by the vast amount of expert knowledge and time-consuming manual tasks required in current medical image-based workflows. Deep learning presents a major opportunity to automate such tedious image-related tasks. In this presentation I will give a clinical perspective on the developments in deep learning and I will discuss some recent applications in medical imaging (image segmentation and registration) and 3D printing.
Leonardo Rundo – University of Cambridge
PGGAN-based Data Augmentation for Brain Tumor Detection on MR Images
Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive data augmentation techniques, such as geometric/intensity transformations of the original images; unfortunately, the transformed images have a distribution very similar to the original ones, leading to limited performance improvement. Recently it has been shown that the synthesis of new images, realistic but completely different from the original ones, can be tackled using Generative Adversarial Networks (GANs). This contribution presents an application to brain contrast-enhanced Magnetic Resonance (MR) images, exploiting Progressive Growing of GANs (PGGANs) to generate original-sized MR images for Convolutional Neural Network based brain tumor detection.
Lukas Mosser – Imperial College London
Stochastic seismic waveform inversion using generative adversarial networks as a geological prior
We present an application of deep generative models in the context of partial-differential equation (PDE) constrained inverse problems. We combine a generative adversarial network (GAN) representing an a priori model that creates subsurface geological structures and their petrophysical properties, with the numerical solution of the PDE governing the propagation of acoustic waves within the earth's interior. We perform Bayesian inversion using an approximate Metropolis-adjusted Langevin algorithm (MALA) to sample from the posterior given seismic observations. Gradients with respect to the model parameters governing the forward problem are obtained by solving the adjoint of the acoustic wave equation. Gradients of the mismatch with respect to the latent variables are obtained by leveraging the differentiable nature of the deep neural network used to represent the generative model. We show that approximate MALA sampling allows efficient Bayesian inversion of model parameters obtained from a prior represented by a deep generative model, obtaining a diverse set of realizations that reflect the observed seismic response.
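The Metropolis-adjusted Langevin step itself is independent of the seismic forward model. The sketch below runs MALA on a standard Gaussian stand-in for the latent-space posterior; the real log-posterior would combine the PDE misfit (via the adjoint wave equation) with the GAN latent prior, and its gradient would come from those adjoint and network gradients.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(z):
    """Stand-in log-posterior over latent variables (standard normal);
    in the talk this would be the seismic misfit plus latent prior."""
    return -0.5 * np.sum(z ** 2)

def grad_log_post(z):
    return -z

def mala_step(z, eps):
    """One Metropolis-adjusted Langevin step."""
    prop = z + eps * grad_log_post(z) + np.sqrt(2 * eps) * rng.normal(size=z.shape)

    def log_q(a, b):   # log density of proposing a from b
        return -np.sum((a - b - eps * grad_log_post(b)) ** 2) / (4 * eps)

    log_alpha = log_post(prop) + log_q(z, prop) - log_post(z) - log_q(prop, z)
    return prop if np.log(rng.uniform()) < log_alpha else z

z = np.zeros(2)
samples = np.empty((5000, 2))
for k in range(5000):
    z = mala_step(z, eps=0.2)
    samples[k] = z
```

Pushing each accepted latent sample through the generator then yields the diverse set of subsurface realizations described above.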
Joakim Andén – Flatiron Institute
Capturing Signal Structure with Scattering Networks
Convolutional networks have enjoyed great success extracting relevant structural information from signals in applications ranging from classification to inverse problems. However, since their filters are learned from optimizing over a training set, analyzing their properties and predicting their behavior poses difficulties. We consider the scattering network, a convolutional network with fixed wavelet filters that performs comparably to fully learned networks for several classification tasks. The fixed nature of the network lets us analyze how it captures different signal structures in one dimension, including amplitude modulation, harmonic structure, and frequency modulation. We also study the synthesis of signals from a set of target scattering coefficients. Both results illustrate the expressive power of the scattering network in representing commonly occurring structures. We conclude with a discussion of potential applications to inverse problems.
Joint work with Vincent Lostanlen and Stéphane Mallat.
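A first-order scattering coefficient is simply the modulus of a wavelet convolution followed by averaging. The minimal sketch below uses a crude Gabor-type filter as an illustrative stand-in for a proper Morlet filter bank, and shows that the coefficients localise the frequency of a pure tone.

```python
import numpy as np

def gabor(n, xi):
    """Crude complex Gabor filter at centre frequency xi (rad/sample);
    a stand-in for a proper Morlet wavelet filter bank."""
    t = np.arange(n) - n // 2
    return np.exp(1j * xi * t) * np.exp(-0.5 * (t / (n / 8)) ** 2)

def scattering_first_order(x, freqs, n=65):
    """First-order scattering coefficients: modulus of each filter
    convolution, followed by global averaging (the low-pass step)."""
    return np.array([np.mean(np.abs(np.convolve(x, gabor(n, xi), mode="same")))
                     for xi in freqs])

t = np.arange(512)
x = np.cos(0.5 * t)                         # pure tone at 0.5 rad/sample
S = scattering_first_order(x, freqs=[0.1, 0.5, 1.0])
```

Iterating the same modulus-and-filter operation on the wavelet moduli yields second-order coefficients, which capture amplitude- and frequency-modulation structure invisible to the first order.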
Camille Pouchol – KTH
Joakim da Silva – KTH
Pär Kurlberg – KTH
Muhamed Barakovic – EPFL
Jevgenija Rudzusīka – KTH
Massimiliano Colarieti Tosti – KTH
Subhadip Mukherjee – KTH
Jens Sjölund – Elekta
David Marlevi – KTH
Adriaan Graas – CWI
Anna-Lena Robisch – Georg-August-University Göttingen
Marina Eckermann – Institute for X-ray Physics, Göttingen University
Jasper Frohn – Institute for X-ray Physics, Göttingen University
Ivan Yashchuk – VTT Technical Research Centre of Finland
José Carlos Gutiérrez Pérez – University of Bremen
Renat Sibgatulin – University Hospital Jena
Sari Lasanen – Lappeenranta University of Technology
Daniel Klosa – University of Bremen
Johannes Leuschner – University of Bremen
Maximilian Schmidt – University of Bremen
Haiwen Zhang – Institute for Numerical and Applied Mathematics, University of Göttingen
Roeland Dilz – NKI Amsterdam (Dutch Cancer Institute)
Gustav Zickert – KTH
Zelalem Berihun Asfaw – Uppsala University
Martina Scolamiero – KTH
Anna Persson – KTH
Manuch Soleimani – University of Bath
Felix Lucka – CWI
Niklas Gunnarsson – Elekta
Kenneth Lau – Elekta
Niek Huttinga – University Medical Center Utrecht