Paper • Open access

Computed tomography reconstruction using deep image prior and learned reconstruction methods


Published 2 September 2020 © 2020 The Author(s). Published by IOP Publishing Ltd
Special Issue on Modern Challenges in Imaging. Citation: Daniel Otero Baguer et al 2020 Inverse Problems 36 094004. DOI: 10.1088/1361-6420/aba415


Abstract

In this paper we investigate the application of deep learning methods to low-dose and sparse-angle computed tomography using small training datasets. To motivate our work, we review some of the existing approaches and obtain quantitative results after training them with different amounts of data. We find that the learned primal-dual method offers outstanding reconstruction quality and data efficiency. However, in general, end-to-end learned methods have two deficiencies: (a) a lack of classical guarantees in inverse problems and (b) a lack of generalization when trained with insufficient data. To overcome these problems, we introduce the deep image prior approach in combination with classical regularization and an initial reconstruction. The proposed methods achieve the best results in the low-data regime in three challenging scenarios.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Deep learning approaches to solving ill-posed inverse problems currently achieve state-of-the-art reconstruction quality. However, they require large amounts of training data, i.e., pairs of ground truths and measurements, and it is not clear how much is necessary to achieve good generalization. For ill-posed inverse problems arising in medical imaging, such as magnetic resonance imaging (MRI), positron emission tomography (PET), magnetic particle imaging, or computed tomography (CT), obtaining such large amounts of training data is challenging. In particular, ground truth data is difficult to obtain as it is impossible to take a photograph of the inside of the human body. What learned methods usually consider as ground truths are phantoms or high-dose reconstructions obtained with classical methods, such as filtered back-projection (FBP). These methods work well when a large number of low-noise measurements is available. In MRI, it is possible to obtain these reconstructions, but the data acquisition process requires a great deal of time. Therefore, one potential benefit of learned approaches in MRI is the reduction of data acquisition times [30]. In other applications such as CT, it would be necessary to expose patients to high doses of x-ray radiation to obtain the required training ground truths.

There is another approach called deep image prior (DIP) [31] that also uses deep neural networks, for example, a U-Net [45]. However, there is a remarkable difference: the DIP does not involve any learning from data, i.e., the weights of the network are not trained on a dataset but are optimized for each measurement individually. This approach seems to have limited applicability because, in contrast to learned methods, it requires a lot of time for each reconstruction. In the applications initially considered, for example, inpainting, denoising, and super-resolution, it is much easier to obtain or simulate data, which allows for the use of learned methods, and the DIP does not seem to have an advantage.

In this paper, we aim to explore the application of the DIP together with other deep learning methods for obtaining CT reconstructions when little training data is available. The structure of the paper and the main contributions are organized as follows. In section 2, we briefly describe the CT reconstruction problem. Section 3 provides a summary of related articles and approaches, together with some background and observations that we use as motivation for our work. In section 4, we introduce the combination of the DIP with classical regularization methods and discuss under which assumptions the classical regularization results still hold. In section 5, we propose a similar approach to the DIP but using an initial reconstruction given by any end-to-end learned method. Finally, in section 6, we present a benchmark of the different methods that we have analyzed using varying amounts of data from two standard datasets.

2. CT

CT is one of the most valuable technologies in modern medical imaging [9]. It allows for a non-invasive acquisition of the inside of the human body using x-rays. Since the introduction of CT in the 1970s, technical innovations such as new scan geometries have extended the limits on speed and resolution. Current research focuses on reducing the amount of potentially harmful radiation to which a patient is exposed during the scan [9]. These innovations include making measurements using lower intensity x-rays or at fewer angles. Both approaches introduce particular challenges for reconstruction methods that can severely reduce the image quality. In our work, we compare several reconstruction methods in these low-dose scenarios for a basic 2D parallel beam geometry (cf figure 1).

Figure 1. Parallel beam geometry.

In this case, the forward operator is given by the 2D Radon transform [43] and models the attenuation of the x-ray when passing through a body. We can parameterize the path of an x-ray beam by the distance from the origin $s\in \mathbb{R}$ and angle φ ∈ [0, π]:

$L_{s,\varphi}(t) = s\,(\cos\varphi,\ \sin\varphi)^{\mathrm{T}} + t\,(-\sin\varphi,\ \cos\varphi)^{\mathrm{T}}, \quad t \in \mathbb{R}$  (1)

The Radon transform then calculates the integral along the line for parameters s and φ:

$A x(s, \varphi) = \int_{\mathbb{R}} x\left(L_{s,\varphi}(t)\right)\,\mathrm{d}t$  (2)

According to Beer–Lambert's law, the result is the logarithm of the ratio of the intensity, I0, at the x-ray source to the intensity, I1, at the detector

$A x(s, \varphi) = \ln\left(\frac{I_0}{I_1(s, \varphi)}\right)$  (3)

Calculating the transform for all pairs (s, φ) results in a so-called sinogram, which we also call an observation. To get a reconstruction $\hat{x}$ from the sinogram, we have to invert the forward model. Since the Radon transform is linear and compact, the inverse problem is ill-posed in the sense of Nashed [39, 40].
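For illustration, the following minimal sketch simulates a sparse-angle sinogram and an FBP baseline with scikit-image's parallel-beam Radon transform. The phantom, the number of angles and the noise level are illustrative choices (and assume a recent scikit-image version); they do not reproduce the simulation setup of section 6.

```python
# Minimal sketch of the parallel-beam forward model and an FBP baseline.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

x_true = resize(shepp_logan_phantom(), (128, 128))     # ground-truth image
angles = np.linspace(0.0, 180.0, 30, endpoint=False)   # sparse-angle geometry (30 angles)

sinogram = radon(x_true, theta=angles)                 # observation y = Ax (Radon transform)
noise = 0.025 * np.abs(sinogram).mean() * np.random.randn(*sinogram.shape)
y_delta = sinogram + noise                             # noisy observation y^delta

# Filtered back-projection as a classical (approximate) inverse of the Radon transform
x_fbp = iradon(y_delta, theta=angles, filter_name='hann')
```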

3. Related approaches and motivation

In this section, we first review and describe some of the existing data-driven and classical methods for solving ill-posed inverse problems that have also been applied to CT reconstruction. Following this, we review the DIP approach and related works.

In inverse problems one aims at obtaining an unknown quantity, in this case the image of the interior of the human body, from indirect measurements that frequently contain noise [16, 36, 44]. The problem is modeled by an operator A : X → Y between Banach or Hilbert spaces X and Y and the measured noisy data or observation:

$y^{\delta} = A x^{\dagger} + \tau$  (4)

The aim is to obtain an approximation $\hat{x}$ of the true solution ${x}^{\dagger}$, where τ, with ||τ|| ⩽ δ, describes the noise in the measurement.

Classical approaches to solving inverse problems include linear pseudo inverses given by filter functions [36] or non-linear regularized inverses given by the variational approach

$\hat{x} \in \underset{x \in \mathcal{D}}{\mathrm{arg\,min}}\ \mathcal{S}\left(Ax, y^{\delta}\right) + \alpha J(x)$  (5)

where $\mathcal{S}:Y{\times}Y\to \mathbb{R}$ is the data discrepancy, $J:X\to \mathbb{R}\cup \left\{\infty \right\}$ is the regularizer, $\mathcal{D}{:=}\mathcal{D}\left(A\right)\cap \mathcal{D}\left(J\right)$ and $\mathcal{D}\left(A\right)$, $\mathcal{D}\left(J\right)$ are the domains of A and J, respectively. Examples of hand-crafted regularizers/priors are ||x||2, ||x||1 and total variation (TV). The value of the regularization parameter α should be carefully selected. One way to do that, given a validation dataset with some ground truth and observation pairs, is to perform a line search and select the α that yields the best performance on average, assuming a uniform noise level. Given validation data ${\left\{{x}_{i}^{{\dagger}},\enspace {y}_{i}^{\delta }\right\}}_{i=1}^{N}$, the data-driven parameter choice would be

$\hat{\alpha} \in \underset{\alpha > 0}{\mathrm{arg\,max}}\ \frac{1}{N}\sum_{i=1}^{N} \ell\left(\hat{x}_{\alpha}\left(y_i^{\delta}\right),\ x_i^{\dagger}\right)$  (6)

where $\ell :X{\times}X\to \mathbb{R}$ is some similarity measure, such as the peak signal-to-noise ratio (PSNR) or the structural similarity (SSIM), and $\hat{x}_{\alpha}\left(y_i^{\delta}\right)$ denotes a minimizer of (5) for the observation $y_i^{\delta}$ and regularization parameter α.
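A minimal sketch of this data-driven parameter choice is given below; `reconstruct_tv` is a hypothetical placeholder for any solver of the variational problem (5) and is assumed to be available.

```python
# Line search for the regularization parameter alpha that maximizes the mean
# PSNR on a small validation set of (ground truth, observation) pairs (equation (6)).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio as psnr

def choose_alpha(val_pairs, reconstruct_tv, alphas=np.logspace(-4, 1, 20)):
    """val_pairs: list of (x_true, y_delta) tuples; reconstruct_tv(y, alpha) -> image."""
    scores = []
    for alpha in alphas:
        recos = [reconstruct_tv(y, alpha) for _, y in val_pairs]
        scores.append(np.mean([
            psnr(x, r, data_range=x.max() - x.min())
            for (x, _), r in zip(val_pairs, recos)
        ]))
    return alphas[int(np.argmax(scores))]
```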

Data-driven regularized inversion methods for solving inverse problems in imaging have recently had great success in terms of reconstruction quality [6]. Three main classes of methods are: end-to-end learned methods [1, 3, 8, 21, 28, 46], learned regularizers [34, 37] and generative networks [2, 7, 13]. For the study described in this paper, we only focus on the end-to-end learned methods.

3.1. End-to-end learned methods

In this section, we briefly review some of the most successful end-to-end learned methods. Most of them were implemented and included in our benchmark.

3.1.1. Post-processing

This method aims at improving the quality of the FBP reconstructions from noisy or few measurements by applying learned post-processing. Recent works [11, 28, 42, 48] have successfully used a convolutional neural network (CNN), such as the U-Net [45], to remove artifacts from FBP reconstructions. In mathematical terms, given a possibly regularized FBP operator ${\mathcal{T}}_{\text{FBP}}$, the reconstruction is computed using a network D_θ : X → X as

$\mathcal{T}_{\theta}\left(y^{\delta}\right) = D_{\theta}\left(\mathcal{T}_{\text{FBP}}\left(y^{\delta}\right)\right)$  (7)

with parameters θ of the network that are learned from data.
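A hedged sketch of this post-processing structure in PyTorch is shown below; `fbp` and `denoiser` are placeholders for a fixed (possibly regularized) FBP operator and a trainable CNN such as a U-Net.

```python
# Sketch of the post-processing approach of equation (7): a trainable CNN
# D_theta is applied to the (fixed) FBP reconstruction.
import torch.nn as nn

class FBPPostProcessing(nn.Module):
    def __init__(self, fbp, denoiser: nn.Module):
        super().__init__()
        self.fbp = fbp              # T_FBP : Y -> X, fixed (not trained)
        self.denoiser = denoiser    # D_theta : X -> X, trainable

    def forward(self, y_delta):
        x_fbp = self.fbp(y_delta)   # initial FBP reconstruction
        return self.denoiser(x_fbp) # learned artifact removal
```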

3.1.2. Fully learned

Methods of this type aim at directly learning the inversion process from data while keeping the network architecture as general as possible. This idea was successfully applied in MRI by the AUTOMAP architecture [49]. The main building blocks consist of fully connected layers. Depending on the problem, the number of parameters can grow quickly with the data dimension. For mapping from sinogram to reconstruction in the LoDoPaB-CT dataset [32] (see section 6.1), such a layer would have over 1000 × 513 × 362² ≈ 67 × 10⁹ parameters. This makes the naive approach infeasible for large CT data.

He et al [22] introduced an adapted two-part network, called iRadonMap. The first part reproduces the structure of the FBP. A fully connected layer is applied along s and shared over the rotation angle dimension φ, playing the role of the filtering. For each reconstruction pixel (i, j) only sinogram values on the sinusoid s = i  cos(φ) + j sin(φ) have to be considered and are multiplied by learned weights. For the example above, the number of parameters in this layer reduces to 513² + 362² × 1000 ≈ 13 × 10⁷. The second part consists of a post-processing network. We choose the U-Net architecture for our experiments, which allows for a direct comparison with the FBP + U-Net approach.

3.1.3. Learned iterative schemes

Another series of works [1, 3, 20, 21] use CNNs to improve iterative schemes commonly used in inverse problems for solving (5), such as gradient descent, proximal gradient descent or hybrid primal-dual algorithms. For example, the proximal gradient descent is given by the iteration

$x^{(k+1)} = \phi_{J,\alpha,\lambda}\left(x^{(k)} - \lambda A^{*}\left(A x^{(k)} - y^{\delta}\right)\right)$  (8)

for k = 0, ..., L − 1, where $\phi_{J,\alpha,\lambda}$ : X → X is the proximal operator or projector. In [20], the authors replace the projector by a CNN that is trained to project perturbed reconstructions to the set of clean reconstructions. However, this approach is not end-to-end because the network is first trained to do the projection and then inserted into the iterative scheme.

The idea behind end-to-end learned iterative methods is to unroll these schemes with a small number of iterations, and to replace some operators by CNNs with parameters that are trained using ground truth and observation data pairs. Each iteration is performed by a convolutional network ${\psi }_{{\theta }_{k}}$ that receives the current iterate and the gradients of the data discrepancy and of the regularizer as input. Moreover, the number of iterations is fixed and small, e.g., L = 10. The reconstruction operator is given by ${\mathcal{T}}_{\theta }:Y\to X$ with ${\mathcal{T}}_{\theta }\left({y}^{\delta }\right)={x}^{\left(L\right)}$ and

$x^{(k+1)} = \psi_{\theta_k}\left(x^{(k)},\ \nabla_x \mathcal{S}\left(A x^{(k)}, y^{\delta}\right),\ \nabla J\left(x^{(k)}\right)\right), \qquad x^{(0)} = A^{+} y^{\delta},$

for any pseudo inverse A+ of the operator A and θ = (θ0, ..., θL−1). Alternatively, x(0) could be just randomly initialized.

Similarly, more sophisticated algorithms, such as hybrid primal-dual algorithms, can be unrolled and trained in the same fashion. In this work, we used an implementation of the learned gradient descent [1] and the learned primal-dual method [3].
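The following sketch illustrates the general structure of such an unrolled scheme (a simplified learned gradient descent, not the exact architectures of [1, 3]); `op` and `op_adj` are assumed callables applying A and its adjoint to image batches.

```python
# Sketch of an unrolled learned gradient scheme: L iterations, each performed by
# its own small CNN that receives the current iterate and A*(Ax - y) as input channels.
import torch
import torch.nn as nn

class LearnedGradientDescent(nn.Module):
    def __init__(self, op, op_adj, L=10, channels=32):
        super().__init__()
        self.op, self.op_adj, self.L = op, op_adj, L
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1),
            ) for _ in range(L)
        ])

    def forward(self, y_delta, x0):
        x = x0                                               # e.g. FBP of y_delta
        for k in range(self.L):
            grad = self.op_adj(self.op(x) - y_delta)         # gradient of 0.5*||Ax - y||^2
            x = x + self.blocks[k](torch.cat([x, grad], dim=1))  # learned update
        return x
```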

The above mentioned approaches all rely on a parameterized operator ${\mathcal{T}}_{\theta }:Y\to X$, whose parameters θ are optimized using a training set of N ground truth samples ${x}_{i}^{{\dagger}}$ and their corresponding noisy observations ${y}_{i}^{\delta }$. Usually, the empirical mean squared error is minimized, i.e.,

$\hat{\theta} \in \underset{\theta}{\mathrm{arg\,min}}\ \frac{1}{N}\sum_{i=1}^{N} {\left\Vert \mathcal{T}_{\theta}\left(y_i^{\delta}\right) - x_i^{\dagger}\right\Vert}^{2}$  (9)

After training, the reconstruction $\hat{x}\in X$ from a noisy observation y^δ ∈ Y is given by $\hat{x}={\mathcal{T}}_{\hat{\theta }}\left({y}^{\delta }\right)$. The main disadvantage of most of these approaches is that they do not enforce data consistency. As a consequence, some information in the observation could be ignored, yielding a result that might lack important features of the image. In medical imaging, this is critical since it might remove an indication of a lesion. Recent works [4, 19] also show that some methods, such as those which are fully learned or follow the post-processing approach, are unstable, which means that tiny perturbations in the ground truth or the measurements may result in severe artifacts in the reconstructions. These are the main motivations for the approach we introduce in section 5. Nevertheless, there exist other methods [46] that do enforce data consistency and may not suffer from these instabilities.
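For completeness, a minimal supervised training loop for the objective in equation (9) could look as follows; dataset handling, batch size and learning rate are placeholders, not the settings of appendix B.

```python
# Minimal training loop for any end-to-end reconstructor T_theta, minimizing the
# empirical mean squared error of equation (9).
import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=100, lr=1e-3, batch_size=16, device='cuda'):
    model = model.to(device)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for y_delta, x_true in loader:          # (observation, ground truth) pairs
            y_delta, x_true = y_delta.to(device), x_true.to(device)
            loss = torch.mean((model(y_delta) - x_true) ** 2)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return model
```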

3.2. DIP

The DIP is similar to the generative networks approach and the variational method. However, instead of having a regularization term J(x), the regularization is incorporated by the reparametrization x = φ(θ, z), where φ is a deep generative network, for example a U-Net, with randomly initialized weights θ ∈ Θ, and z is a fixed input such as random white noise. The approach is depicted in figure 2 and consists in solving

$\hat{\theta} \in \underset{\theta \in \Theta}{\mathrm{arg\,min}}\ \mathcal{S}\left(A\varphi(\theta, z),\ y^{\delta}\right), \qquad \hat{x} = \varphi(\hat{\theta}, z)$  (10)

The weights are optimized by a gradient descent method to minimize the data discrepancy of the output of the network. In the original method, the authors use gradient descent with early stopping to avoid reproducing noise. This is necessary because the network is overparameterized and can eventually fit the noise. The regularization is a combination of early stopping (similar to the Landweber iteration) and the architecture [14]. The drawback is that it is not clear how to choose when to stop. In the original work, the authors do this using a validation set and select the number of iterations that performs best on average in terms of PSNR.
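A minimal sketch of this optimization loop is given below; `phi` (the untrained generator), `op` (the forward operator), the fixed input `z` and the fixed number of iterations are assumptions of the sketch, not the exact implementation used in [31].

```python
# Basic DIP loop for equation (10): fit the weights of an untrained generator to a
# single observation by minimizing the data discrepancy; early stopping is realized
# by a fixed iteration budget chosen on a small validation set.
import torch

def dip_reconstruct(phi, op, y_delta, z, n_iter=5000, lr=1e-3):
    """phi: untrained generator, op: forward operator A, z: fixed random input."""
    optim = torch.optim.Adam(phi.parameters(), lr=lr)
    for it in range(n_iter):
        x = phi(z)
        loss = torch.mean((op(x) - y_delta) ** 2)   # data discrepancy only
        optim.zero_grad()
        loss.backward()
        optim.step()
    return phi(z).detach()
```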

Figure 2. Illustration of the DIP approach. We use a U-Net architecture with 128 channels at every layer. Some layers additionally have skip channels (indicated by the dashed arrows). We always use either 4 or 0 skip channels.

The prior is related to the implicit structural bias of this kind of deep convolutional network. In the original DIP paper [31], and more recently in [10, 24], it is shown that convolutional image generators, optimized with gradient descent, fit natural images faster than noise and learn to construct them from low to high frequencies. This effect is illustrated in figure 3.

Figure 3. Intermediate reconstructions of the DIP approach for CT (ellipses dataset, see section 6.2). At the beginning, the coefficients are randomly initialized from a prior distribution. The method reconstructs the image from global to local details.

3.2.1. Related work

The DIP approach has inspired many other researchers to improve it by combining it with other methods [35, 38, 47], to use it for a wide range of applications [17, 18, 26, 27] and to offer different perspectives and explanations of why it works [10, 12, 14]. In [38], the concept of regularization by denoising (RED) is introduced and it is shown how the two (DIP and RED) can be merged into a highly effective unsupervised recovery process. Another series of works also adds explicit priors, but on the weights of the network. In [47], this is done in the form of a multivariate Gaussian prior whose mean and covariance matrix are learned from a small dataset. In [12], a Bayesian perspective on the DIP is introduced by also incorporating a prior on the weights θ and conducting the posterior inference using stochastic gradient Langevin dynamics.

So far, the DIP has been used for denoising, inpainting, super-resolution, image decomposition [17], compressed sensing [47], PET [18] and MRI [27], among other applications. A similar idea [26] was also used for structural optimization, which is a popular method for designing objects such as bridge trusses, airplane wings, and optical devices. Rather than directly optimizing densities on a grid, the authors optimize the parameters of a neural network that outputs those densities.

3.2.2. Network architecture

In the paper by Ulyanov et al [31], several architectures were considered, for example, ResNet [23], encoder–decoder (autoencoder) and U-Net [45]. For inpainting large regions, the encoder–decoder with depth 6 performed best, whereas for denoising a modified U-Net achieved the best results. The regularization happens mainly due to the architecture of the network, which reduces the search space but also steers the optimization process toward more natural images. Therefore, for each application, it is crucial to choose the appropriate architecture and to tune hyper-parameters, such as the network's depth and the number of channels per layer. Optimizing the hyper-parameters is the most time-consuming part. In figure 4 we show some reconstructions from the ellipses dataset (see section 6.2) with different hyper-parameter choices. In this case, the U-Net without skip connections and with depth 5 (encoder–decoder) achieves the best performance. One can see that when the number of channels is too low, the network does not have enough representation power. Also, if there are no skip channels, the higher the number of scales (equivalent to the depth), the stronger the regularization effect. The extraordinary success of this approach demonstrates that the architecture of the network has a significant influence on the performance of deep learning approaches that use similar kinds of networks.

Figure 4. CT reconstructions after 5000 iterations using the DIP with a U-Net architecture and different scales (depths), channels per layer (the network has the same number of channels at every layer) and numbers of skip connections (the first two rows do not use skip connections, i.e., skip: [0, 0, 0, 0, 0]). In the last row all reconstructions use 5 scales and 128 channels.

3.2.3. Early-stopping

As mentioned earlier, in [31] it is shown that early stopping has a positive impact on the reconstruction results. It was observed that in some applications, such as denoising, the loss decreases rapidly while the network output moves toward a natural image, but it takes much longer to fit the noise. This empirical observation helps to determine when to stop. In figure 5, one can observe how the similarity with respect to the ground truth (measured by the PSNR and the SSIM metrics) reaches a maximum and then deteriorates during the optimization process.

Figure 5. Training loss and true similarity (PSNR and SSIM) of CT reconstructions using the DIP approach. The training was done over 15 000 iterations and the architecture is an encoder–decoder (no skip channels) with 5 scales and 128 channels per layer.

4. DIP and classical regularization

In this section we analyze the DIP in combination with classical regularization, i.e., we include a regularization term $J:X\to \mathbb{R}\cup \left\{\infty \right\}$, such as TV. We give necessary assumptions under which we are able to obtain standard guarantees in inverse problems, such as existence of a solution, convergence, and convergence rates.

In the general case, we consider X and Y to be Banach spaces, and A : X → Y a continuous linear operator. To simplify notation, we use φ(⋅) instead of φ(⋅, z), since the input to the network is fixed. Additionally, we assume that Θ is a Banach space, and φ : Θ → X is a continuous mapping.

The proposed method aims at finding

$\hat{\theta} \in \underset{\theta \in \Theta}{\mathrm{arg\,min}}\ \mathcal{S}\left(A\varphi(\theta),\ y^{\delta}\right) + \alpha J\left(\varphi(\theta)\right)$  (11)

to obtain

$\hat{x} = \varphi(\hat{\theta})$  (12)

With this approach, we eliminate the need for early stopping, i.e., the need to find an optimal number of iterations. However, we introduce the problem of finding an optimal α, which is a classical issue in inverse problems. These problems are similar since both choices depend on the noise level of the observation data. The higher the noise, the higher the value of α or the smaller the number of iterations for obtaining optimal results.
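The objective (11) only adds a regularization term to the DIP loss. A sketch with a simple anisotropic TV penalty (one possible choice of J; the actual experiments use an isotropic variant) is given below.

```python
# Sketch of the DIP + TV objective of equation (11) with an l2 data discrepancy.
import torch

def tv(x):
    """Anisotropic total variation of a batch of images with shape (N, C, H, W)."""
    return (torch.abs(x[..., 1:, :] - x[..., :-1, :]).sum()
            + torch.abs(x[..., :, 1:] - x[..., :, :-1]).sum())

def dip_tv_loss(phi, op, y_delta, z, alpha):
    x = phi(z)                                           # network output phi(theta, z)
    return torch.mean((op(x) - y_delta) ** 2) + alpha * tv(x)
```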

If the range of φ is Ω := rg(φ) = X, i.e.,

$\forall\, x \in X\ \ \exists\, \theta \in \Theta:\ \varphi(\theta) = x,$  (13)

this is equivalent to the standard variational approach in equation (5). However, although the network can fit some noise, it cannot fit, in general, any arbitrary x ∈ X. This depends on the chosen architecture, and it is mainly because we do not use any fully connected layers. Nevertheless, the minimization in (11) is similar to the setting in equation (5) if we restrict the domain of A to be $\tilde {\mathcal{D}}\left(A\right){:=}\mathcal{D}\left(A\right)\cap {\Omega}$

$\hat{x} \in \underset{x \in \tilde{\mathcal{D}}}{\mathrm{arg\,min}}\ \mathcal{S}\left(Ax, y^{\delta}\right) + \alpha J(x)$  (14)

where $\tilde {\mathcal{D}}{:=}\tilde {\mathcal{D}}\left(A\right)\cap \mathcal{D}\left(J\right)$. If the following assumptions are satisfied, then all the classical theorems, namely well-posedness, stability, convergence, and convergence rates, still hold, see [25].

Assumption 1. The range of φ with respect to θ (the parameters of the network), namely Ω, is closed, i.e., if there is a convergent sequence $\left\{{x}_{k}\right\}\subset {\Omega}$ with limit $\tilde {x}$, it holds that $\tilde {x}\in {\Omega}$.

Definition 1. An element ${x}^{{\dagger}}\in \tilde {\mathcal{D}}$ is called a J-minimizing solution if $A{x}^{{\dagger}} = y$ and $\forall \enspace x\in \tilde {\mathcal{D}}:\enspace J\left({x}^{{\dagger}}\right){\leqslant}J\left(x\right)$, where y is the perfect noiseless data.

Assumption 2. There exists a J-minimizing solution ${x}^{{\dagger}}\in \tilde {\mathcal{D}}$ with $J\left({x}^{{\dagger}}\right) < \infty$.

Assumption 1 guarantees that the restricted domain of A is closed, whereas assumption 2 guarantees that there is a J-minimizing solution in the restricted domain. In appendix A, we analyze in which cases these conditions hold.

5. DIP with initial reconstruction

In this section, we propose a two-step approach based on the method from the previous section. The idea is to take the result of any end-to-end learned method $\mathcal{T}:Y\to X$ as an initial reconstruction (first step) and to further enforce data consistency by optimizing over its deep-neural parameterization (second step).

Definition 2  (Deep-neural parameterization). Given an untrained network φ : Θ × Z → X and a fixed input z ∈ Z, the deep-neural parameterization of an element x ∈ X with respect to φ and z is

$\theta_0 \in \underset{\theta \in \Theta}{\mathrm{arg\,min}}\ {\left\Vert \varphi(\theta, z) - x\right\Vert}^{2}$  (15)

The projection onto the range of the network is possible because of assumption 1, i.e., the range is closed. If φ is a deep convolutional network, for example, a U-Net, the deep-neural parameterization has similarities with other signal representations, such as the wavelet and Fourier transforms [26]. For image processing, such domains are usually more convenient than the classical pixel representation.

As shown in figure 6, one way to enforce data consistency is to project the initial reconstruction onto the set where ||Ax − yδ || ⩽ δ. The problem is that, due to the ill-posedness of the problem, the new solution (red point) will very likely have artifacts. The proposed approach first obtains the deep-neural parameterization θ0 of the initial reconstruction $\mathcal{T}\left({y}^{\delta }\right)$ and then uses it as a starting point to minimize

$\mathcal{L}(\theta) = \mathcal{S}\left(A\varphi(\theta, z),\ y^{\delta}\right) + \alpha J\left(\varphi(\theta, z)\right)$  (16)

over θ via gradient descent. The iterative process is continued until ||Aφ(θ, z) − yδ || ⩽ δ or for a given fixed number of iterations K determined by means of a validation dataset. This approach seems to force the reconstruction to stay close to the set of natural images because of the structural bias of the deep-neural parameterization. The procedure is listed in algorithm 1 and a graphical representation is shown in figure 6.

Figure 6. Graphical illustration of the DIP approach with initial reconstruction. The blue area refers to an approximation of some part of the space of natural images.

Algorithm 1. DIP with initial reconstruction.

1: ${x}_{0}{\leftarrow}\mathcal{T}\left({y}^{\delta }\right)$
2: z ← noise
3: ${\theta }_{0}\in \underset{\theta }{\text{arg}\;\text{min}}{\Vert}\varphi \left(\theta ,z\right)-{x}_{0}{{\Vert}}^{2}$
4: for k ← 0 to K − 1 do
5:  $\omega \in \partial \mathcal{L}\left({\theta }_{k}\right)$
6:  θk+1θk ηω
7: end for
8: $\hat{\mathcal{T}}\left({y}^{\delta }\right){\leftarrow}\varphi \left({\theta }_{K},z\right)$

The new method $\hat{\mathcal{T}}:Y\to X$ is similar to other image enhancement approaches. For example, related methods [15] first compute the wavelet transform (parameterization), and then repeatedly perform smoothing or shrinking of the coefficients (further optimization).
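A hedged sketch of algorithm 1 is given below; the iteration counts, the learning rate and the regularizer `reg` (e.g. the TV penalty used in section 4) are illustrative placeholders, not the hyper-parameters of appendix B.

```python
# Sketch of algorithm 1 (DIP with initial reconstruction): first fit the deep-neural
# parameterization of x0 using only the identity operator, then minimize the loss of
# equation (16) with the forward operator.
import torch

def dip_with_initial_reco(phi, op, y_delta, x0, z, alpha, reg,
                          n_param=4000, n_main=1000, lr=1e-4):
    optim = torch.optim.Adam(phi.parameters(), lr=lr)
    for _ in range(n_param):                     # step 3: theta_0 = argmin ||phi(theta, z) - x0||^2
        loss = torch.mean((phi(z) - x0) ** 2)
        optim.zero_grad(); loss.backward(); optim.step()
    for _ in range(n_main):                      # steps 4-7: gradient steps on L(theta), eq. (16)
        x = phi(z)
        loss = torch.mean((op(x) - y_delta) ** 2) + alpha * reg(x)
        optim.zero_grad(); loss.backward(); optim.step()
    return phi(z).detach()                       # step 8: final reconstruction
```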

6. Benchmark setup and results

For the benchmark, we implemented the end-to-end learned methods described in section 3.1. We trained them on different data sizes and compared them with classical methods, such as FBP and TV regularization, and with the proposed methods. The datasets we use were recently released to benchmark deep learning methods for CT reconstruction [32]. They are accessible through the DIVαℓ python library [33]. We also provide the code and the trained methods in the following GitHub repository: https://github.com/oterobaguer/dip-ct-benchmark.

6.1. The LoDoPaB-CT dataset

The low-dose parallel beam (LoDoPaB) CT dataset [32] consists of more than 40 000 two-dimensional CT images and corresponding simulated low-intensity measurements. Human chest CT reconstructions from the LIDC/IDRI database [5] are used as virtual ground truth. Each image has a resolution of 362 × 362 pixels. For the simulation setup, a simple parallel beam geometry with 1000 angles and 513 projection beams is used. To simulate low intensity, Poisson noise is applied to the projection data. The noise amount corresponds to an x-ray source that on average emits 4096 photons per detector pixel. We use the standard dataset split defining in total 35 820 training pairs, 3522 validation pairs and 3553 test pairs. In addition, we analyze another dataset, LoDoPaB (200), obtained by uniformly sampling 200 angles from the original 1000 without any further modification.

6.2. Ellipses dataset

As a synthetic dataset for imaging problems, random phantoms of combined ellipses are commonly used. We use the 'ellipses' standard dataset from the DIVαℓ python library (as provided in version 0.4) [33]. The images have a resolution of 128 × 128 pixels. Measurements are simulated with a parallel beam geometry with only 30 angles and 183 projection beams. In addition to the sparse-angle setup, moderate Gaussian noise with a standard deviation of 2.5% of the mean absolute value of the projection data is added to the projection data. In total, the training set contains 32 000 pairs, while the validation and test set consist of 3200 pairs each.
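Both datasets can be accessed through the DIVαℓ python library [33]; a minimal sketch is shown below (method names may differ between library versions, so this is not a verbatim API reference).

```python
# Sketch of loading the benchmark data through the DIVaL library.
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses')          # or 'lodopab'
for y_delta, x_true in dataset.generator(part='train'):
    # (observation, ground truth) pairs, e.g. fed into a training loop
    pass
```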

6.3. Implementation details

For the DIP with initial reconstruction, we used the learned primal-dual, which has the best performance among the compared methods (see the results in figure 7). For each data size, we chose different hyper-parameters, namely the step-size η, the TV regularization parameter α, and the number of iterations K, based on the available validation dataset.

Figure 7. Benchmark results of several existing methods and the proposed approaches (DIP + TV, learned primal-dual + DIP) on the Ellipses, LoDoPaB (200) and LoDoPaB datasets. The horizontal lines indicate the performance of data-free methods.

Minimizing $\mathcal{L}\left(\theta \right)$ in (16) is not trivial because TV is not differentiable. In our implementation we use the PyTorch automatic differentiation framework [41] and the ADAM [29] optimizer. For the Ellipses dataset we use the ℓ2-discrepancy term, whereas for LoDoPaB we use the Poisson loss.
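The two discrepancy terms can be sketched as follows. The Poisson term is written here via Beer–Lambert's law (equation (3)) under the assumption of un-normalized line integrals and an average of 4096 photons per detector pixel; the actual implementation may differ in normalization details.

```python
# Sketch of the l2 and Poisson data discrepancies used in the experiments.
import torch

def l2_discrepancy(y_pred, y_delta):
    return torch.sum((y_pred - y_delta) ** 2)

def poisson_discrepancy(y_pred, y_delta, n0=4096):
    # expected and observed photon counts via Beer-Lambert's law, assuming y are
    # line integrals and n0 photons leave the source per detector pixel (an assumption)
    counts_pred = n0 * torch.exp(-y_pred)
    counts_obs = n0 * torch.exp(-y_delta)
    # Poisson negative log-likelihood, up to terms independent of y_pred
    return torch.sum(counts_pred - counts_obs * torch.log(counts_pred))
```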

6.4. Numerical results

We trained all the methods with different dataset sizes. For example, 0.1% on the ellipses dataset means we trained the model with 0.1% (32 data-pairs) of the available training data and 0.1% (3 data-pairs) of the validation data. Afterward, we tested the performance of the method on the first 100 samples of the test dataset (in the original order, i.e., not sorted by patient). This reduced test dataset was used because some of the methods require a lot of time for reconstruction, and the mean performance on 100 samples already allows for accurate benchmarking. The results are depicted in figure 7 and more details can be found in appendix B.

As expected, the fully learned method (iRadonMap) requires a large amount of data to achieve acceptable performance. On the ellipses and LoDoPaB (200) datasets, it outperformed TV using 100% of the data, whereas on the LoDoPaB dataset, it performed just slightly better than the FBP. The learned post-processing (FBP + U-Net) required much less data. It outperformed TV with only 10% of the ellipses dataset and 0.1% of the LoDoPaB dataset. On the other hand, we find that the learned primal-dual is very data efficient and achieved the best performance. In figure 8, we show some results from the test set for different data sizes.

Figure 8. Reconstructions of test samples using the learned primal-dual method trained with different amounts of data from the ellipses and LoDoPaB datasets. The ℓ2 data error measures the discrepancy between the noisy observation and the noise-free projection of the (reconstructed) image.

The DIP + TV approach achieved the best results among the data-free methods. On average, it outperforms TV by 1 dB on all the analyzed datasets. In figures 9 and 10, it can be observed that TV tends to produce flat regions but also strong staircase effects at the edges. On the other hand, the combination with the DIP produces more realistic edges. For the first two smaller data sizes of the ellipses and LoDoPaB (200) datasets, it performs better than all the end-to-end learned methods.

Figure 9. Reconstructions obtained with the FBP method, isotropic TV regularization and the DIP approach combined with TV, for test samples from the ellipses and LoDoPaB datasets. The ℓ2 data error measures the discrepancy between the noisy observation and the noise-free projection of the (reconstructed) image.
Figure 10. Reconstructions obtained with the FBP method, isotropic TV regularization and the DIP approach combined with TV, for test samples from the LoDoPaB (200) dataset. The ℓ2 data error measures the discrepancy between the noisy observation and the noise-free projection of the (reconstructed) image.

The DIP + TV with initial reconstruction improved the results in the low-data regime for the ellipses and LoDoPaB (200) datasets. For the larger data sizes and the LoDoPaB dataset, it did not yield reconstructions of higher quality than those already obtained by the DIP + TV or learned primal-dual methods. We believe that this approach is most useful when the measurements are sparse and only little training data is available.

In figures 11 and 12, we show some reconstructions obtained using this method for the LoDoPaB (200) and ellipses datasets. The reconstructions have a better data consistency with respect to the observed data (ℓ2-discrepancy) and higher quality, both visually and in terms of the PSNR and SSIM measures. Moreover, this approach is in general much faster, even if we also count the iterations required to obtain the deep-neural parameterization of the initial reconstruction. These initial iterations are much faster because they only use the identity operator instead of the Radon transform. For example, for the Ellipses dataset, the DIP + TV approach needs 8000 iterations to obtain optimal performance on a validation dataset (five ground truth and observation pairs). On the other hand, with an initial reconstruction, it needs 4000 iterations with the identity operator and only 1000 with the Radon transform operator, which results in a speed-up of roughly 2×.

Figure 11. Examples of reconstructions obtained with the DIP + TV approach, the learned primal-dual method trained with 0.01% and 0.1% of the LoDoPaB (200) dataset and the DIP + TV approach with initial reconstruction. The ℓ2 data error measures the discrepancy between the noisy observation and the noise-free projection of the (reconstructed) image.
Figure 12. Examples of reconstructions obtained with the DIP approach combined with TV, the learned primal-dual method trained with 0.1% and 0.2% of the Ellipses dataset (32 and 64 data-pairs, respectively) and the DIP approach with initial reconstruction. The ℓ2 data error measures the discrepancy between the noisy observation and the noise-free projection of the (reconstructed) image.

7. Conclusions

In this work, we study the combination of classical regularization, deep-neural parameterization, and deep learning approaches for CT reconstruction. We benchmark the investigated methods and evaluate how they behave in low-data regimes. Among the data-free approaches, the DIP + TV method achieves the best results. However, it is computationally slow and does not benefit from having a small dataset with reference reconstructions. On the other hand, the learned primal-dual is very data efficient, but it lacks data consistency when not trained with enough data. These issues motivate us to adjust the reconstruction obtained with the learned primal-dual so that it matches the observed data, which we achieve without introducing artifacts through a combination of classical regularization and the DIP.

The results presented in this paper offer several baselines for future comparisons with other approaches. Moreover, the proposed methods could be applied to other imaging modalities.

Acknowledgments

The authors acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG) within the framework of GRK 2224/1 'π3: Parameter Identification—Analysis, Algorithms, Applications'. The authors also thank Jonas Adler, Jens Behrmann, Sören Dittmer, Peter Maass and Michael Pidcock for useful comments and discussions.

Appendix A.: DIP and classical regularization

The mapping φ : Θ → X has a neural network structure, with a fixed input $z\in {\mathbb{R}}^{{n}_{0}}$, and can be expressed as a composition of affine mappings and activation functions:

$\varphi(\theta) = \sigma^{(L)}\left(\mathcal{K}^{(L)}\left(\sigma^{(L-1)}\left(\cdots \sigma^{(1)}\left(\mathcal{K}^{(1)}(z)\right)\cdots\right)\right)\right)$  (A.1)

where ${\mathcal{K}}^{\left(i\right)}\left(x\right){:=}{W}^{\left(i\right)}x+{b}^{\left(i\right)}$, ${W}^{\left(i\right)}\in {G}^{\left(i\right)}\subseteq {\mathbb{R}}^{{n}_{i}{\times}{n}_{i-1}}$, ${b}^{\left(i\right)}\in {B}^{\left(i\right)}\subseteq {\mathbb{R}}^{{n}_{i}}$, ${\sigma }^{\left(i\right)}:\mathbb{R}\to \mathbb{R}$ (applied component-wise), and θ = (W(L), b(L), ..., W(1), b(1)) ∈ G(L) × B(L) × ⋯ × G(1) × B(1) = Θ. In the following we analyze under which conditions we can guarantee that the range of φ (with respect to Θ) is closed.

Definition 3. An activation function $\sigma :\mathbb{R}\to \mathbb{R}$ is valid if it is continuous, monotone, and bounded in the sense that there exists c > 0 such that $\forall\, x \in \mathbb{R}:\ \vert \sigma(x)\vert \leqslant c\vert x\vert$.

Lemma 1. Let φ be a neural network φ : Θ → X with L layers. If Θ is a compact set, and the activation functions σ(i) are valid, then the range of φ is closed.

Proof. In order to prove the result, we show that the range after each layer of the network is compact.

  • (a)  
    Let the set $V=\left\{Wu:\enspace W\in G\subset {\mathbb{R}}^{m{\times}n},\enspace u\in U\subset {\mathbb{R}}^{n}\right\}$, where G and U are compact sets, i.e., bounded and closed. Since G and U are bounded, it follows that V is bounded. Let the sequence $\left\{{W}^{\left(k\right)}{u}^{\left(k\right)}\right\}$, with ${W}^{\left(k\right)}\in G$ and ${u}^{\left(k\right)}\in U$, converge to v. Since $\left\{{W}^{\left(k\right)}\right\}$ and $\left\{{u}^{\left(k\right)}\right\}$ are bounded, there is a subsequence $\left\{{\overline{W}}^{\left(k\right)}{\bar{u}}^{\left(k\right)}\right\}$, where both $\left\{{\overline{W}}^{\left(k\right)}\right\}$ and $\left\{{\bar{u}}^{\left(k\right)}\right\}$ converge to $\overline{W}\in G$ and $\bar{u}\in U$ respectively. It follows that $\left\{{\overline{W}}^{\left(k\right)}{\bar{u}}^{\left(k\right)}\right\}$ converges to $\overline{W}\bar{u}$, therefore, $v=\overline{W}\bar{u}\in V$, which shows that V is closed. Thus, V is compact.

  • (b)  
    From (a), the fact that G(i), B(i) are compact sets, and assuming ${U}^{\left(i\right)}\subset {\mathbb{R}}^{{n}_{i-1}}$ is also compact, it follows that ${V}^{\left(i\right)}=\left\{Wu+b:\enspace W\in {G}^{\left(i\right)},\enspace u\in {U}^{\left(i\right)},\enspace b\in {B}^{\left(i\right)}\right\}\subset {\mathbb{R}}^{{n}_{i}}$ is compact.
  • (c)  
    It is easy to show that if the pre-image of a valid activation σ is compact, then its image is also compact.

In the first layer, U0 = {z}, which is compact; thus, using (a), (b), and (c) it can be shown by induction that the range of φ : Θ → Ω is closed. □

All activation functions commonly used in the literature, for example, sigmoid, hyperbolic tangent, and piece-wise linear activations, are valid. The bounds on the weights of the network can be ensured by clipping the weights after each gradient update.
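A minimal sketch of such a clipping step in PyTorch is shown below; the bound c is an arbitrary illustrative choice.

```python
# Project the network parameters back onto a box after each optimizer step,
# which keeps the parameter set Theta bounded.
import torch

def clip_weights(model: torch.nn.Module, c: float = 1.0):
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-c, c)
```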

Remark 1. An alternative condition to the bound on the weights is to use only valid activation functions with closed range, for example, ReLU or leaky ReLU. However, it would then not be possible to use the sigmoid or hyperbolic tangent activations. In our experiments, we observed that having a sigmoid activation in the last layer of the DIP network performs better than having a ReLU.

Appendix B.: Dataset details, hyper-parameters and results

In this appendix, we present in tables B3–B10 all the hyper-parameters that were selected for each method using a validation set. Tables B1 and B2 list the numbers of samples used for training and validation in each case.

Table B1. Amounts of training and validation pairs from the ellipses dataset used for the benchmark in section 6.

%      | 0.1 | 0.2 | 0.5 | 1.0 | 2.0 | 5.0  | 10.0 | 25.0 | 50.0   | 100.0
#train | 32  | 64  | 160 | 320 | 640 | 1600 | 3200 | 8000 | 16 000 | 32 000
#val   | 3   | 6   | 16  | 32  | 64  | 160  | 320  | 800  | 1600   | 3200

Table B2. Amounts of training and validation pairs from the LoDoPaB dataset used for the benchmark in section 6. The last two lines denote the numbers of patients of whom images are included.

%               | 0.01 | 0.1 | 1.0 | 10.0 | 100.0
#train          | 3    | 35  | 358 | 3582 | 35 820
#val            | 1    | 3   | 35  | 352  | 3522
#patients train | 1    | 1   | 7   | 64   | 632
#patients val   | 1    | 1   | 1   | 6    | 60

Table B3. FBP hyper-parameters and results.

Dataset       | Filter type | Low-pass cut-off | PSNR (dB) | SSIM
Ellipses      | Hann        | 0.7051           | 24.18     | 0.5939
LoDoPaB (200) | Hann        | 0.5000           | 28.38     | 0.6492
LoDoPaB       | Hann        | 0.6410           | 30.37     | 0.7386

Table B4. TV hyper-parameters and results. The step size is set to 10⁻³.

Dataset       | Loss function | α            | PSNR (dB) | SSIM
Ellipses      | ℓ2            | 7.743 × 10⁻⁴ | 27.84     | 0.8495
LoDoPaB (200) | Poisson       | 12.63        | 30.89     | 0.7563
LoDoPaB       | Poisson       | 20.55        | 32.95     | 0.8034

Table B5. DIP + TV hyper-parameters and results. For all experiments the number of channels is set to 128 at every scale. A sigmoid activation is used for the output.

Dataset       | Loss func. | Scales | Skip channels      | α            | Step size | PSNR (dB) | SSIM
Ellipses      | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 3.162 × 10⁻⁴ | 1 × 10⁻³  | 28.94     | 0.8855
LoDoPaB (200) | Poisson    | 6      | (0, 0, 0, 0, 4, 4) | 4.0          | 5 × 10⁻⁴  | 32.51     | 0.7803
LoDoPaB       | Poisson    | 6      | (0, 0, 0, 0, 4, 4) | 7.0          | 5 × 10⁻⁴  | 34.44     | 0.8143

Table B6. DIP + TV (with initial reconstruction given by the learned primal-dual method). For all experiments the number of channels is set to 128 at every scale. A sigmoid activation is used for the output.

Dataset       | Data size (%) | Loss func. | Scales | Skip channels      | α            | PSNR (dB) | SSIM
Ellipses      | 0.1           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 3.162 × 10⁻⁴ | 29.23     | 0.8915
              | 0.2           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 2.154 × 10⁻⁴ | 29.39     | 0.8911
              | 0.5           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 2.154 × 10⁻⁴ | 29.85     | 0.904
              | 1.0           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 2.154 × 10⁻⁴ | 30.39     | 0.915
              | 2.0           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 2.154 × 10⁻⁴ | 30.99     | 0.9253
              | 5.0           | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 2.154 × 10⁻⁴ | 31.44     | 0.9285
              | 10.0          | ℓ2         | 5      | (0, 0, 0, 0, 0)    | 1.292 × 10⁻⁴ | 31.78     | 0.9337
LoDoPaB (200) | 0.01          | Poisson    | 6      | (0, 0, 0, 0, 4, 4) | 4.0          | 32.52     | 0.7822
              | 0.1           | Poisson    | 6      | (0, 0, 0, 0, 4, 4) | 3.0          | 32.78     | 0.7821

Table B7. FBP + U-Net. The input FBP reconstruction uses a Hann filter with no additional low-pass filter. Common hyper-parameters: scales = 5, skip channels = 4, linear output (i.e. no sigmoid activation). The maximum learning rate is set to 10⁻² or 10⁻³ and scheduled with either cosine annealing or a one-cycle policy.

Dataset       | Data size (%) | Channels                | Batch size | Epochs | PSNR (dB) | SSIM
Ellipses      | 0.1           | (32, 32, 64, 64, 128)   | 16         | 5000   | 26.33     | 0.7895
              | 0.2           | (32, 32, 64, 64, 128)   | 16         | 5000   | 26.59     | 0.8042
              | 0.5           | (32, 32, 64, 64, 128)   | 16         | 5000   | 26.80     | 0.8114
              | 1.0           | (32, 32, 64, 64, 128)   | 16         | 5000   | 27.12     | 0.8321
              | 2.0           | (32, 32, 64, 64, 128)   | 16         | 2500   | 27.44     | 0.8323
              | 5.0           | (32, 32, 64, 64, 128)   | 16         | 1000   | 27.97     | 0.8604
              | 10.0          | (64, 64, 128, 128, 256) | 16         | 700    | 28.49     | 0.8751
              | 25.0          | (64, 64, 128, 128, 256) | 16         | 280    | 28.80     | 0.8872
              | 50.0          | (64, 64, 128, 128, 256) | 16         | 140    | 29.10     | 0.8940
              | 100.0         | (64, 64, 128, 128, 256) | 16         | 70     | 29.36     | 0.8987
LoDoPaB (200) | 0.01          | (32, 32, 64, 64, 128)   | 32         | 5000   | 29.33     | 0.7143
              | 0.1           | (32, 32, 64, 64, 128)   | 32         | 5000   | 31.58     | 0.7616
              | 1.0           | (32, 32, 64, 64, 128)   | 32         | 2000   | 32.60     | 0.7818
              | 10.0          | (32, 32, 64, 64, 128)   | 32         | 500    | 33.19     | 0.7931
              | 100.0         | (32, 32, 64, 64, 128)   | 32         | 250    | 33.55     | 0.7994
LoDoPaB       | 0.01          | (32, 32, 64, 64, 128)   | 32         | 5000   | 31.36     | 0.7727
              | 0.1           | (32, 32, 64, 64, 128)   | 32         | 5000   | 33.27     | 0.7982
              | 1.0           | (32, 32, 64, 64, 128)   | 32         | 2000   | 34.62     | 0.8209
              | 10.0          | (32, 32, 64, 64, 128)   | 32         | 500    | 35.18     | 0.8313
              | 100.0         | (32, 32, 64, 64, 128)   | 32         | 250    | 35.48     | 0.8371

Table B8. Learned gradient descent. For all experiments the number of iterations is set to L = 10. The output of the network is linear, i.e. no sigmoid activation is used.

Dataset       | Data size (%) | Channels | Batch size | Epochs | lr   | PSNR (dB) | SSIM
Ellipses      | 0.1           | 32       | 32         | 5000   | 10⁻³ | 27.81     | 0.8580
              | 0.2           | 32       | 32         | 5000   | 10⁻³ | 28.40     | 0.8769
              | 0.5           | 32       | 32         | 5000   | 10⁻³ | 29.15     | 0.8955
              | 1.0           | 32       | 32         | 5000   | 10⁻³ | 29.55     | 0.9027
              | 2.0           | 32       | 32         | 2500   | 10⁻³ | 29.70     | 0.9051
              | 5.0           | 32       | 32         | 1000   | 10⁻³ | 29.84     | 0.9077
              | 10.0          | 32       | 32         | 500    | 10⁻³ | 29.88     | 0.9082
              | 25.0          | 32       | 32         | 200    | 10⁻³ | 29.95     | 0.9094
              | 50.0          | 32       | 32         | 100    | 10⁻³ | 30.07     | 0.9121
              | 100.0         | 32       | 32         | 50     | 10⁻³ | 30.30     | 0.9162
LoDoPaB (200) | 0.01          | 32       | 20         | 5000   | 10⁻⁴ | 29.87     | 0.7151
              | 0.1           | 32       | 20         | 5000   | 10⁻⁵ | 31.28     | 0.7473
              | 1.0           | 32       | 20         | 500    | 10⁻⁵ | 31.83     | 0.7602
              | 10.0          | 64       | 1          | 200    | 10⁻⁵ | 32.41     | 0.7724
              | 100.0         | 64       | 1          | 20     | 10⁻⁵ | 32.41     | 0.7724
LoDoPaB       | 0.01          | 32       | 1          | 5000   | 10⁻³ | 32.70     | 0.7860
              | 0.1           | 32       | 1          | 5000   | 10⁻³ | 33.81     | 0.8043
              | 1.0           | 32       | 1          | 500    | 10⁻³ | 34.29     | 0.8103
              | 10.0          | 64       | 1          | 100    | 10⁻⁴ | 34.34     | 0.8115
              | 100.0         | 64       | 1          | 10     | 10⁻⁴ | 34.36     | 0.8122

Table B9. Learned primal-dual. For all experiments the number of iterations is set to L = 10. The output of the network is linear, i.e. no sigmoid activation is used.

Dataset       | Data size (%) | Channels | Batch size | Epochs | lr   | PSNR (dB) | SSIM
Ellipses      | 0.1           | 32       | 5          | 5000   | 10⁻³ | 28.09     | 0.8621
              | 0.2           | 32       | 5          | 5000   | 10⁻³ | 28.45     | 0.8778
              | 0.5           | 32       | 5          | 5000   | 10⁻³ | 29.35     | 0.8997
              | 1.0           | 32       | 5          | 5000   | 10⁻³ | 30.11     | 0.9124
              | 2.0           | 32       | 5          | 2500   | 10⁻³ | 30.84     | 0.9258
              | 5.0           | 32       | 5          | 1000   | 10⁻³ | 31.44     | 0.9282
              | 10.0          | 32       | 5          | 500    | 10⁻³ | 31.84     | 0.9360
              | 25.0          | 32       | 5          | 200    | 10⁻³ | 32.15     | 0.9367
              | 50.0          | 32       | 5          | 100    | 10⁻³ | 32.21     | 0.9390
              | 100.0         | 32       | 5          | 50     | 10⁻³ | 32.27     | 0.9403
LoDoPaB (200) | 0.01          | 32       | 1          | 5000   | 10⁻³ | 29.65     | 0.7343
              | 0.1           | 32       | 1          | 5000   | 10⁻³ | 32.48     | 0.7771
              | 1.0           | 32       | 1          | 500    | 10⁻³ | 33.21     | 0.7929
              | 10.0          | 64       | 1          | 100    | 10⁻⁴ | 33.53     | 0.7990
              | 100.0         | 64       | 1          | 10     | 10⁻⁴ | 33.64     | 0.8020
LoDoPaB       | 0.01          | 32       | 1          | 5000   | 10⁻³ | 32.68     | 0.7842
              | 0.1           | 32       | 1          | 5000   | 10⁻³ | 34.65     | 0.8227
              | 1.0           | 32       | 1          | 500    | 10⁻³ | 35.27     | 0.8303
              | 10.0          | 64       | 1          | 100    | 10⁻⁴ | 35.63     | 0.8401
              | 100.0         | 64       | 1          | 10     | 10⁻⁴ | 35.73     | 0.8426

Table B10. iRadonMap. The U-Net part of the network has the same hyper-parameters for all experiments: scales = 5, skip channels = 4, channels = (32, 32, 64, 64, 128). The learning rate is set to 10⁻². Selection of the sigmoid output is based on the validation performance; the difference on LoDoPaB with and without sigmoid is marginal.

Dataset       | Data size (%) | Batch size | Epochs | Sigmoid output | PSNR (dB) | SSIM
Ellipses      | 0.1           | 64         | 1000   |                | 17.83     | 0.2309
              | 0.2           | 64         | 1000   |                | 18.35     | 0.2837
              | 0.5           | 64         | 1000   |                | 21.41     | 0.5378
              | 1.0           | 64         | 1000   |                | 22.64     | 0.6312
              | 2.0           | 64         | 1000   |                | 23.62     | 0.7042
              | 5.0           | 64         | 1000   |                | 24.77     | 0.7444
              | 10.0          | 64         | 1000   |                | 25.61     | 0.8051
              | 25.0          | 64         | 400    |                | 26.56     | 0.8389
              | 50.0          | 64         | 200    |                | 27.36     | 0.8615
              | 100.0         | 64         | 100    |                | 28.02     | 0.8766
LoDoPaB (200) | 0.01          | 32         | 150    |                | 14.61     | 0.3529
              | 0.1           | 32         | 150    |                | 18.77     | 0.4492
              | 1.0           | 32         | 150    |                | 24.63     | 0.6031
              | 10.0          | 32         | 150    |                | 31.27     | 0.7569
              | 100.0         | 32         | 30     |                | 32.45     | 0.7781
LoDoPaB       | 0.01          | 2          | 150    |                | 14.82     | 0.3737
              | 0.1           | 2          | 150    |                | 17.67     | 0.4438
              | 1.0           | 2          | 150    |                | 22.73     | 0.5361
              | 10.0          | 2          | 150    |                | 28.69     | 0.6929
              | 100.0         | 2          | 15     |                | 30.99     | 0.7486

For the data-free baseline approaches, i.e. FBP and TV, we used 100 samples for selecting the optimal hyper-parameters. In the low-data regime this by far exceeds the number of samples used by the learned approaches, leading to a slight bias of the comparison in favor of the data-free baseline approaches. For the DIP + TV we used at most 5 samples for validation and selection of hyper-parameters.

For the learned methods, the numbers of epochs listed in the tables denote the maximum numbers; the model with the best mean PSNR on the validation set reached during training is selected. In some cases we used a learning rate scheduler, which improved the training. More details can be found at https://github.com/oterobaguer/dip-ct-benchmark.
