Paper

A multi-scale geometric flow method for molecular structure reconstruction


Published 26 March 2015 © 2015 IOP Publishing Ltd
Citation: Guoliang Xu et al 2015 Comput. Sci. Discov. 8 014002. DOI: 10.1088/1749-4680/8/1/014002


Abstract

We have previously reported an L2-gradient flow (L2GF) method for cryo-electron tomography and single-particle reconstruction, which performs reasonably well. The aim of this paper is to further improve both the computational efficiency and the accuracy of the L2GF method. In a finite-dimensional space spanned by radial basis functions, a minimization problem combining a fourth-order geometric flow with an energy-decreasing constraint is solved by a bi-gradient method. The bi-gradient method involves a free parameter $\beta \in [0,1]$. As β increases from 0 to 1, structures of the reconstructed function are captured from coarse to fine. The experimental results show that the proposed method yields more desirable results.


1. Introduction

In recent decades, cryo-electron microscope imaging techniques have been established as indispensable tools for determining the three-dimensional (3D) structures of large macromolecules and biological machinery. These techniques can be separated into several imaging modalities, including single-particle analysis (multiple copies of a low- or high-symmetry unit), electron tomography (single copy of a low-symmetry unit) and electron crystallography (multiple copies of a low-symmetry unit symmetrically arranged in a lattice) (see [21]).

In this paper, we focus on single-particle analysis, in which multiple copies of identical particles are imaged at different, randomly chosen orientations. We assume that the alignment problem has already been solved using existing methods (see [6, 11, 12, 17, 18, 20]). The next step is the 3D image reconstruction, which is the problem addressed by the algorithm introduced in this paper. The reconstruction algorithms are often weighted back-projection (WBP) methods [16], direct Fourier methods [16] or iterative methods, including the algebraic reconstruction technique [9], the simultaneous iterative reconstruction technique [8] and the simultaneous algebraic reconstruction technique [1]. To accelerate the convergence of these iterative algorithms, block iterative techniques have been proposed (see [9, 14]).

The problem of producing 3D reconstructions from a series of two-dimensional (2D) projection images is an inverse problem. Theoretically, an object can be reconstructed uniquely from its projections when all of its projections are known. In practice, however, only a limited number of projection images can be obtained, so in most cases the inverse problem is ill-posed, and constraints must be imposed to make it well-posed. One technique that imposes such constraints is projection onto convex sets, which assumes that the object f belongs to the intersection of some closed convex sets (see [19, 25]). Another technique for finding a well-posed approximate solution is regularization; in particular, classic Tikhonov regularization has been employed in tomography reconstruction [15].

An L2-gradient flow (L2GF) method has been presented previously [13, 24] for cryo-electron tomography reconstruction (see [24]) and single-particle analysis (see [13]). By minimizing an energy functional consisting of a fidelity term and a regularization term, an L2GF is derived. The flow is solved by a finite-element discretization in the B-spline function space in the spatial direction [22] and an explicit Euler scheme in the temporal direction. The experimental results show that the method is stable, reliable and robust. A complete theoretical analysis of the convergence has been published previously [5], and the convergence analysis for solving the L2GF by a semi-implicit finite-element method is given in [4]. The L2GF method is therefore effective.

The purpose of this paper is to further enhance both the computational speed and the accuracy of the L2GF method, using B-spline radial basis functions rather than tensor-product B-spline basis functions, and a fourth-order geometric flow as a regularizer. We present a bi-gradient method to solve the resulting variational model in a finite-dimensional space spanned by the B-spline radial basis functions. Using the B-spline radial basis functions has several advantages. First, we obtain a C2-smooth object, and the local support of the B-spline basis accelerates the reconstruction process. Second, the x-ray transform of the B-spline radial basis has a closed-form representation; hence no numerical integration is required, which results in a remarkable reduction in computational cost. Third, as an electric potential, f is the solution of a Poisson–Boltzmann equation; this solution can be approximated by a Gaussian map, which is a sum of certain radial basis functions, so a radial basis is the more natural choice.

The paper is organized as follows. In section 2, the basic setting is defined, including the image size, the B-spline radial basis function grid and the volume grid. The geometric flow used and its level-set formulation are presented in section 3. In section 4, we explain the algorithms in detail, while section 5 gives some illustrative examples and discussions. Finally, we conclude the paper in section 6. Some detailed derivations are presented in appendices A and B.

2. Problem setting

Let $\{{{g}_{{{{\bf d}}_{l}}}}({\bf u})\}_{l=1}^{p}$, ${{{\bf d}}_{l}}\in {{S}^{2}},\ {\bf u}\in {{\mathbb{R}}^{2}}$, be a set of 2D projections measured from an unknown 3D function (electric potential) f by the x-ray transform ${{X}_{{{{\bf d}}_{l}}}}$, i.e.

where ${\bf e}_{{{{\bf d}}_{l}}}^{(1)}$ and ${\bf e}_{{{{\bf d}}_{l}}}^{(2)}$ are two directions in ${\bf d}_{l}^{\bot }$ satisfying

Equation (1)

$\parallel \cdot \parallel $ stands for the Euclidean norm of a vector in ${{\mathbb{R}}^{3}}$, and ${\bf d}_{l}^{\bot }$ is the space consisting of all vectors perpendicular to ${{{\bf d}}_{l}}$. ${\bf e}_{{{{\bf d}}_{l}}}^{(1)}$ and ${\bf e}_{{{{\bf d}}_{l}}}^{(2)}$ also determine the in-plane rotation of the projection. Our problem is to reconstruct $f({\bf x})$, ${\bf x}\in \Omega \subset {{\mathbb{R}}^{3}}$, such that ${{X}_{{{{\bf d}}_{l}}}}f({\bf u})$ is as close to ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ as possible in the following sense.

where

Equation (2)
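The pair ${\bf e}_{{\bf d}}^{(1)},{\bf e}_{{\bf d}}^{(2)}$ is fixed by (1) only up to an in-plane rotation, which in practice comes from the alignment step. As a minimal illustration (Python is used for all code sketches below; the tie-breaking choice of helper axis is ours and is not prescribed by (1)), one such frame can be constructed as follows.

```python
import numpy as np

def projection_frame(d):
    """Build an orthonormal pair (e1, e2) spanning the plane perpendicular
    to the unit direction d. Any in-plane rotation of (e1, e2) satisfies
    the orthonormality constraint (1); this is one representative."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    helper = np.array([1.0, 0.0, 0.0])      # any axis not parallel to d
    if abs(d @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)                    # unit length, since d is orthogonal to e1
    return e1, e2
```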

We assume that all the measured images have the same size $(n+1)\times (n+1)$ and that the pixel values ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ of each image are defined on the integer grid points ${{(i,j)}^{T}}\in {{\left[ -\frac{n}{2},\frac{n}{2} \right]}^{2}}$ (we assume n is even). Since the values of ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ are projections of f, we define a ball Ω as

For simplicity, we embed the ball Ω in the cube ${{\Omega }_{c}}={{\left[ -\frac{n}{2}-1,\frac{n}{2}+1 \right]}^{3}}$, with the assumption that

Then the image values ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ at the grid points are defined as

for the unknown function f.

Because the measured images ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ are affected by heavy noise, a regularization mechanism is absolutely necessary. To prevent the regularization from destroying the accuracy of the reconstruction, we reconstruct the electric potential f in the following two stages.

Stage one. Compute an approximate minimizer

Equation (3)

using our bi-gradient iterative method (see section 4.3), where V is a given function space (we use the radial basis function space defined by B-splines; see section 4.1).

Stage two. Compute a regularized function of f1, denoted as f, using a geometric flow with the following constraint (see section 4.6)

Equation (4)

Remark 2.1. We should point out that stage two is not simply a smoothing or post-processing of the first-stage result: the second stage additionally requires that the energy functional does not increase.

3. Gradient and geometric flow

We want to minimize J(f) by adjusting f iteratively. This goal is achieved by a bi-gradient flow in a radial basis function space, combined with a geometric flow. In the following, we first compute the gradient and then introduce the geometric flow. Suppose f is represented as

Then it is easy to see that

The gradient of J(f) with respect to the coefficients ${{f}_{{\bf i}}}$ is denoted $\nabla J(f)$.

Owing to the insufficient number of projection directions, the minimization problem (3) is not well-posed, meaning there may be an infinite number of solutions. To overcome this difficulty, we combine the minimization process with the surface diffusion flow. In the parametric form, the surface diffusion flow is represented as (see [23], p 56),

Equation (5)

where ${\bf x}$ is the surface point, ${\bf n}$ is the surface normal, H is the mean curvature and ${{\Delta }_{s}}$ denotes the Laplace–Beltrami operator on the surface. For a closed evolving surface, the surface diffusion flow is volume-preserving and area-shrinking; hence, this flow has a very desirable regularization effect. In level-set form, the surface diffusion flow is represented as (see [23], p 75)

Equation (6)

where ${{\Delta }_{f}}$ stands for the Laplace–Beltrami operator on the level-set ${{\Gamma }_{c}}=\{{\bf x}\in {{\mathbb{R}}^{3}}:\ f({\bf x})=c\}$ (see [23], p 28). The weak form of the surface diffusion flow can be written as

Equation (7)

where ${{\nabla }^{2}}\phi $ is the Hessian matrix of ϕ and

is the mean curvature of the level-set surface ${{\Gamma }_{c}}$. The derivation of this weak form is given in appendix A. The surface diffusion flow is a fourth-order equation; using the weak form, only second-order derivatives are required, and the tri-cubic B-spline functions have sufficient smoothness. This is the main reason for using the weak-form equations.
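For intuition, the mean curvature entering (6) and (7) can be evaluated on a sampled volume by finite differences. The sketch below uses the divergence-of-unit-normal convention $H=\nabla \cdot (\nabla f/\parallel \nabla f\parallel )$; conventions differ by a factor of one half, so the normalization (but not the structure) of this sketch is an assumption.

```python
import numpy as np

def mean_curvature(f, h=1.0, eps=1e-12):
    """Mean curvature H = div(grad f / |grad f|) of the level sets of a
    sampled volume f, by central finite differences with spacing h."""
    gx, gy, gz = np.gradient(f, h)
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + eps   # avoid division by zero
    return (np.gradient(gx / norm, h, axis=0)
            + np.gradient(gy / norm, h, axis=1)
            + np.gradient(gz / norm, h, axis=2))

# Sanity check on f(x) = |x|: its level sets are spheres of radius r,
# for which div(n) = 2 / r.
ax = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
H = mean_curvature(np.sqrt(X**2 + Y**2 + Z**2), h=ax[1] - ax[0])
```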

We denote the right-hand side of (7) by $\delta (f,\phi )$. Taking ϕ as ${{\phi }_{{\bf i}}}$, we arrange $\delta (f,{{\phi }_{{\bf i}}})$ as a vector, which we regard as the negative gradient $-\nabla G(f)$ of a certain unknown functional G(f). In our minimization method, we need only this vector, not the energy itself.

Note that $\delta (f,\phi )$ is linear with respect to ϕ. For a given vector ${\bf h}$, let

Then if we take $\phi =h$, we have $\delta (f,h)=-\nabla G{{(f)}^{T}}{\bf h}$. Therefore, the directional derivative of G(f) in the direction ${\bf h}$ can be computed.

4. Numerical minimizations

In this section, we discretize the minimization problem in a function space. Then we solve the discretized minimization problem by the bi-gradient method.

4.1. Discretization

Given an even integer $m\geqslant 4$, suppose the domain ${{\Omega }_{c}}={{\left[ -\frac{n}{2}-1,\frac{n}{2}+1 \right]}^{3}}$ is uniformly partitioned with grid points

where $h=\frac{n+2}{m}$. The function f is represented as

Equation (8)

where

and N(s) is a cubic B-spline basis function. The cubic B-spline basis function, defined on the uniform knots $-2,-1,0,1,2$, is

Equation (9)

The support of $N(s/h)$ is the interval $(-2h,2h)$. Hence, the support of ${{\phi }_{{\bf i}}}({\bf x},h)$ is $\{{\bf x}\in {{\mathbb{R}}^{3}}:\parallel {\bf x}-{\bf i}h\parallel \lt 2h\}$. It is not difficult to show that

Theorem 4.1. The functions in the set ${{\{{{\phi }_{{\bf i}}}\}}_{-\frac{m}{2}+2\leqslant {\bf i}\leqslant \frac{m}{2}-2}}$ are linearly independent.

Therefore, they form a basis of a function space. Since N(s) is a C2-continuous function, ${{\phi }_{{\bf i}}}({\bf x},h)$ is also C2-continuous and

where ${\bf y}={\bf x}-{\bf i}h={{[{{y}_{1}},{{y}_{2}},{{y}_{3}}]}^{T}}$.
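For concreteness, the following sketch evaluates N(s) (taking the standard centered cubic B-spline on the knots $-2,-1,0,1,2$ as the form of equation (9)) and the radial basis function ${{\phi }_{{\bf i}}}({\bf x},h)=N(\parallel {\bf x}-{\bf i}h\parallel /h)$, consistent with the support statement above.

```python
import numpy as np

def cubic_bspline(s):
    """Centered cubic B-spline on the uniform knots -2, -1, 0, 1, 2
    (the standard C^2 form, taken here as the N(s) of equation (9))."""
    s = np.abs(np.asarray(s, dtype=float))
    return np.where(s <= 1.0,
                    2.0 / 3.0 - s**2 + 0.5 * s**3,
                    np.where(s < 2.0, (2.0 - s)**3 / 6.0, 0.0))

def phi(x, i, h):
    """Radial basis phi_i(x, h) = N(||x - i*h|| / h); its support is the
    ball ||x - i*h|| < 2h."""
    r = np.linalg.norm(np.asarray(x, dtype=float)
                       - np.asarray(i, dtype=float) * h)
    return float(cubic_bspline(r / h))
```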

The volumetric electron density maps of molecules are often approximated by Gaussian maps in the literature (see [2, 10, 26]). In such approximations, each atom is modelled as a sphere, and radial basis functions capture this spherical character ideally. The left panel of figure 1 shows the iso-contours of a bi-cubic B-spline basis function: as the iso-value approaches 0, the iso-contours differ greatly from circles. The right panel shows the iso-contours of the cubic B-spline radial basis function. Furthermore, the projections of the tensor-product cubic B-spline basis functions have no closed form, while the projections of the cubic B-spline radial basis functions can be evaluated exactly from their closed forms. This greatly increases computational efficiency.


Figure 1. Left: iso-contours of the bi-cubic B-spline basis. Right: iso-contours of the cubic B-spline radial basis.


4.2. Fast computation of partial derivatives of J(f)

In our bi-gradient method, we need to compute the partial derivatives of J(f) with respect to the coefficients of f. Hence, we consider first the computation of these partial derivatives. It is easy to see that

Equation (10)

Now let us explain how each of the terms in (10) is efficiently computed.

Computing ${{X}_{{{{\bf d}}_{l}}}}{{\phi }_{{\bf i}}}$.

Let ${\bf d}\in {{S}^{2}}$ be a given direction. Then the projection of ${{\phi }_{{\bf i}}}$ in the direction ${\bf d}$ is a 2D function, defined as

For the cubic B-spline radial basis function ${{\phi }_{i}}$, we have

where

${\bf e}_{{\bf d}}^{(1)}$ and ${\bf e}_{{\bf d}}^{(2)}$ are two directions defined by (1), which span the (u,v)-plane in space ${{\mathbb{R}}^{3}}$. The integrations above can be exactly computed using the expression (9) and the following integral formulas

Computing ${{X}_{{{{\bf d}}_{l}}}}f$. We propose the following efficient approach.

where $({{X}_{{{{\bf d}}_{l}}}}{{\phi }_{{\bf i}}})(u,v)$ is computed as above. Notice that N(s) is locally supported. The cost of computing ${{X}_{{{{\bf d}}_{l}}}}f$ is $O({{m}^{3}})$, and the total cost of the projection is $O(p{{m}^{3}})$, where p denotes the total number of projections. Compared with using the fast Fourier transform, the cost of this method is one order higher; however, the resulting accuracy is much better. The computation can be accelerated by removing small coefficients.

where $\epsilon \gt 0$ is a given small number.
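A direct (unoptimized) sketch of this projection step follows, reusing cubic_bspline from the sketch in section 4.1. Because ${{\phi }_{{\bf i}}}$ is radial, ${{X}_{{\bf d}}}{{\phi }_{{\bf i}}}$ is itself a radial 2D profile; the paper evaluates it in closed form, whereas this sketch falls back on numerical quadrature, and the threshold ε enters as a simple skip of small coefficients.

```python
import numpy as np

def radial_projection(rho, h, nquad=64):
    """P(rho) = integral of N(sqrt(rho^2 + t^2) / h) dt over t; the x-ray
    transform of the radial basis depends only on the in-plane distance
    rho. Quadrature stands in for the paper's closed form."""
    if rho >= 2.0 * h:                      # outside the support ball
        return 0.0
    tmax = np.sqrt((2.0 * h)**2 - rho**2)
    t = np.linspace(-tmax, tmax, nquad)
    return float(np.trapz(cubic_bspline(np.sqrt(rho**2 + t**2) / h), t))

def project_f(coeffs, centers, h, e1, e2, uv_points, eps=0.0):
    """X_d f(u, v) = sum_i f_i * P(distance from (u, v) to the projected
    center of phi_i), skipping coefficients with |f_i| <= eps."""
    img = np.zeros(len(uv_points))
    for f_i, c in zip(coeffs, centers):
        if abs(f_i) <= eps:                 # thresholding of small coefficients
            continue
        cu, cv = c @ e1, c @ e2             # center projected onto the (u,v)-plane
        for k, (u, v) in enumerate(uv_points):
            img[k] += f_i * radial_projection(np.hypot(u - cu, v - cv), h)
    return img
```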

4.3. Refinement of the B-spline radial basis functions

Let

Equation (11)

In the following, the vector of coefficients of $f({\bf x})$ is denoted ${\bf f}$. Now we refine $f({\bf x})$ by replacing h and m with $\frac{h}{2}$ and $2m$, respectively. To obtain a representation in the form

from (11), we need refinement formulas for $N(s,h)$. It is easy to derive that

Hence

where the values ${{n}_{{\bf i}}}$ are computed by interpolation. A total of 125 basis functions are involved, and their coefficients can be determined by interpolation at the 125 points of the grid ${{\{-h,-h/2,0,h/2,h\}}^{3}}$.
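A sketch of this interpolation-based refinement follows: the coarse basis $\phi (\cdot ;h)$ is matched against the 125 fine-scale basis functions $\phi (\cdot ;h/2)$ centered at the interpolation points themselves. This is our reading of the construction; we solve the collocation system in the least-squares sense for robustness.

```python
import numpy as np
from itertools import product

def refinement_coefficients(h):
    """Coefficients n_i expressing phi(x; h) approximately as a combination
    of the 125 fine basis functions phi(x - c; h/2), c in {-h,-h/2,0,h/2,h}^3,
    by interpolation at those same 125 points."""
    pts = np.array(list(product([-h, -h / 2, 0.0, h / 2, h], repeat=3)))
    # A[a, b]: fine basis centered at pts[b], evaluated at pts[a].
    A = np.array([[float(cubic_bspline(np.linalg.norm(p - c) / (h / 2)))
                   for c in pts] for p in pts])
    rhs = np.array([float(cubic_bspline(np.linalg.norm(p) / h)) for p in pts])
    n_i, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pts, n_i
```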

4.4. L2-gradient flow

One method to minimize the energy J(f) is to use the following L2GF

In matrix form, the flow can be written as

where $M=\left[ {{\int }_{{{\mathbb{R}}^{3}}}}{{\phi }_{{\bf i}}}{{\phi }_{{\bf j}}}{\rm d}{\bf x} \right]$ is a sparse matrix and $\nabla J(f)=R\ {\bf f}-G$, with R and G defined by the right-hand side of (10). We call the vector ${{M}^{-1}}\nabla J(f)$ the L2-gradient of J(f). To compute ${{M}^{-1}}\nabla J(f)$ efficiently, the matrix elements of M are approximated by

Using this approximation, ${{M}^{-1}}$ can be approximately computed quickly.
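The specific entry-wise approximation of M is in the (omitted) display above. As a plainly-labeled substitute with the same purpose, the sketch below uses row-sum (mass) lumping, which likewise makes the application of ${{M}^{-1}}$ a cheap component-wise division; it is not the paper's own approximation.

```python
import numpy as np

def approx_mass_inverse(M):
    """Approximate application of M^{-1} by row-sum (mass) lumping,
    M ~ diag(sum_j M_ij) -- a standard substitute, not the paper's own
    entry-wise approximation."""
    lumped = np.asarray(M.sum(axis=1)).ravel()
    return lambda r: r / lumped

# Example with a tiny Gram-like matrix.
M = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])
apply_Minv = approx_mass_inverse(M)
h = apply_Minv(np.array([1.0, 1.0, 1.0]))   # approximates M^{-1} r
```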

4.5. Stage one: minimize J(f)

We present the main steps of the bi-gradient method. The method depends on a parameter $\beta \in [0,1]$.

Algorithm 4.1. Bi-gradient method.

  • (i)  
    Compute a threshold ${{\varepsilon }_{{\rm stop}}}\gt 0$ of J(f) for stopping the iteration.
  • (ii)  
    Given an initial value ${{f}^{(0)}}=0$. Set k = 0.
  • (iii)  
    Compute ${{{\bf r}}_{k}}:=-\nabla J({{f}^{(k)}})$ and then ${{{\bf h}}_{k}}:={{M}^{-1}}{{{\bf r}}_{k}}$.
  • (iv)  
    Compute ${{\alpha }_{k}}$ and ${{\beta }_{k}}$ such that
    Equation (12)
    where ${{r}^{(k)}}$ and ${{h}^{(k)}}$ are spline functions with ${{{\bf r}}_{k}}$ and ${{{\bf h}}_{k}}$ as their coefficient vectors, respectively. The real numbers ${{\alpha }_{k}}$ and ${{\beta }_{k}}$ are obtained by solving a 2 × 2 linear system derived from (12).
  • (v)  
    Set ${{d}_{k}}=({{\alpha }_{k}}/{{\beta }_{k}}){{r}^{(k)}}+{{h}^{(k)}}$. Then for the given $\beta =\frac{\alpha }{1+\alpha }\in [0,1]$, determine a ${{\tau }_{k}}$, such that
  • (vi)  
    Compute
  • (vii)  
    If the following stopping condition
    Equation (13)
is satisfied, stop the iteration. Then ${{f}_{1}}:={{f}^{(k+1)}}$ is the required result. Otherwise, set k to $k+1$ and go back to step (iii). The integer $M\gt 1$ in (13) is a given bound on the iteration.
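The following skeleton mirrors the steps above. The displays (12) and (13) are not reproduced here, so the 2 × 2 system of step (iv) and the step-size rule of step (v) are passed in as callables, and the update ${{f}^{(k+1)}}={{f}^{(k)}}+{{\tau }_{k}}{{d}_{k}}$ together with the stop test $J({{f}^{(k+1)}})\leqslant {{\varepsilon }_{{\rm stop}}}$ is our reading of steps (vi) and (vii).

```python
import numpy as np

def bi_gradient_stage_one(n_coeffs, beta, eps_stop, max_iter,
                          J, grad_J, apply_Minv, solve_alpha_beta, step_size):
    """Skeleton of algorithm 4.1; the problem-specific pieces are callables."""
    f = np.zeros(n_coeffs)                          # step (ii): f^(0) = 0
    for k in range(max_iter):                       # the bound M > 1 of (13)
        r = -grad_J(f)                              # step (iii)
        h = apply_Minv(r)                           # L2-gradient direction
        alpha_k, beta_k = solve_alpha_beta(f, r, h) # step (iv), system (12)
        d = (alpha_k / beta_k) * r + h              # step (v)
        tau = step_size(f, d, beta)                 # tau_k for the given beta
        f = f + tau * d                             # step (vi), assumed update
        if J(f) <= eps_stop:                        # step (vii), reading of (13)
            break
    return f
```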

Computing stopping threshold. Let f* be the exact function to be reconstructed, which is unknown. Then the measured image ${{g}_{{{{\bf d}}_{l}}}}$ can be represented as

where ${{n}_{{{{\bf d}}_{l}}}}({\bf u})$ are additive noise images. Therefore

Equation (14)

Hence we can take

To compute ${{\varepsilon }_{{\rm stop}}}$, we need to estimate ${{n}_{{{{\bf d}}_{l}}}}({\bf u})$. Though ${{n}_{{{{\bf d}}_{l}}}}({\bf u})$ is unknown in general, some parts of it are present in the image ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$, and we can use these known parts to estimate ${{\varepsilon }_{{\rm stop}}}$. In this paper, we regard the part ${{[-R,R]}^{2}}\setminus \{{\bf u}:\parallel {\bf u}\parallel \leqslant R\}$ of ${{g}_{{{{\bf d}}_{l}}}}({\bf u})$ as noise. We therefore compute ${{\varepsilon }_{{\rm stop}}}$ as follows

where ${{[-R,R]}^{2}}$ is the domain of the measured images. The integrations above are approximated by summations of the image values over the integer grid points.
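A sketch of this estimate: the pixels of each image lying in the corner region $\parallel {\bf u}\parallel \gt R$ are treated as pure noise, their mean power is extrapolated to the whole image, and the contributions are summed over all images. The extrapolation constant is our assumption, since the defining display is not reproduced above.

```python
import numpy as np

def estimate_eps_stop(images, R):
    """Noise-based stopping threshold from the corner pixels of each
    (n+1) x (n+1) image on the centered integer grid."""
    eps = 0.0
    for g in images:
        n = g.shape[0] - 1
        idx = np.arange(-n // 2, n // 2 + 1)
        U, V = np.meshgrid(idx, idx, indexing='ij')
        corner = U**2 + V**2 > R**2             # noise-only region of the image
        eps += np.mean(g[corner]**2) * g.size   # extrapolate to the full image
    return eps
```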

4.6. Stage two: combining the geometric flow

The second stage for reconstructing f consists of the following steps.

Algorithm 4.2. Combining geometric flow.

  • (i)  
Given an initial value ${{f}^{(0)}}={{f}_{1}}$, where f1 is the output of algorithm 4.1. Set k = 0.
  • (ii)  
    Compute $\nabla J({{f}^{(k)}})$ and $\nabla G({{f}^{(k)}})$, where J is defined by (2) and G is defined by the geometric flow in section 3.
  • (iii)  
    Compute combined direction ${{{\bf h}}_{k}}=-{{\alpha }_{k}}\nabla J({{f}^{(k)}})-{{\beta }_{k}}\nabla G({{f}^{(k)}})$ (${{\lambda }_{k}}=\frac{{{\beta }_{k}}}{{{\alpha }_{k}}}$, see section 4.6.1 for detail).
  • (iv)  
    Compute ${{\tau }_{k}}$ (see (21) and section 4.6.1 for detail) and then compute
    Equation (15)
    where ${{h}^{(k)}}$ is a linear combination of the B-spline radial basis function with ${{{\bf h}}_{k}}$ as its coefficient vector.
  • (v)  
    If the following stopping condition
    Equation (16)
is satisfied, stop the iteration. Then $f:={{f}^{(k+1)}}$ is the final result. Otherwise, set $k=k+1$ and go back to step (ii). In (16), ε is a small number; we take it to be ${{10}^{-5}}$. The integer $N\gt 1$ in (16) is a given bound on the iteration.

4.6.1. Combined direction

Let

If ${\bf r}_{k}^{T}{{{\bf g}}_{k}}=1$ (${{{\bf g}}_{k}}={{{\bf r}}_{k}}$), the combined direction is taken as ${{{\bf r}}_{k}}$. If ${\bf g}_{k}^{T}{{{\bf r}}_{k}}=-1$ (${{{\bf g}}_{k}}=-{{{\bf r}}_{k}}$), the space spanned by ${{{\bf g}}_{k}}$ and ${{{\bf r}}_{k}}$ is one-dimensional (1D); we then replace ${{{\bf g}}_{k}}$ by ${{{\bf h}}_{k-1}}$ and restart the determination of the combined direction. In the following, we assume $|{\bf r}_{k}^{T}{{{\bf g}}_{k}}|\lt 1$. Let

where ${{\alpha }_{k}}={\bf r}_{k}^{T}{{{\bf g}}_{k}}$. Then it is easy to see that

Let

Then

In the plane spanned by the vectors ${{{\bf g}}_{k}}$ and ${{{\bf r}}_{k}}$, we first introduce an angle θ measured from vector ${{{\bf r}}_{k}}$ to vector ${{{\bf g}}_{k}}$. Then we define the two step-size curves

Equation (17)

Equation (18)

for reducing the energies J(f) and G(f) in the directions

respectively. The numbers $a_r$, $b_r$, $c_r$ and $d_r$ are given by (24)–(27); the constants $a_g$, $b_g$, $c_g$ and $d_g$ are given by (29)–(32). Note that

Let

Then the curve $\tau (\theta )$ is the right step-size for the direction ${\bf r}(\theta )$: it makes J(f) not increase and G(f) decrease. We want to find a ${{\theta }^{*}}\in [{{\theta }_{0}},\pi /2]$ that leads to the maximal reduction of the energy G(f). Since

Omitting the higher order term $O({{\tau }^{2}}(\theta ))$, we determine our best ${{\theta }^{*}}$ as follows

Then the combined direction is given as

Equation (19)

Remark 4.2. Since $\tilde{{\bf r}}_{k}^{T}{{{\bf r}}_{k}}=0$, we know from (19) that

Equation (20)

Hence ${{{\bf h}}_{k}}$ is a descent direction of J(f).

Having ${{\theta }^{*}}$ and the combined direction ${{{\bf h}}_{k}}$, we compute the step-size in the direction ${{{\bf h}}_{k}}$ as follows

Equation (21)
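A sampling-based stand-in for this construction is sketched below. Two points are our assumptions rather than transcriptions: we take $\tau (\theta )={\rm min} \{{{\tau }_{r}}(\theta ),{{\tau }_{g}}(\theta )\}$, consistent with the requirement that J(f) not increase and G(f) decrease, and we maximize the first-order reduction of G over sampled θ instead of using the closed-form rule in the (omitted) display above.

```python
import numpy as np

def combined_direction(r, g, tau_r, tau_g, dG_along, theta0=0.0, n_theta=64):
    """Search span{r, g} for the direction of maximal first-order reduction
    of G; r and g are unit descent directions of J and G with |r.g| < 1,
    tau_r/tau_g are the step-size curves (17)/(18), and dG_along(v) is the
    first-order change of G per unit step along v."""
    r_tilde = g - (r @ g) * r                   # Gram-Schmidt: r_tilde orthogonal to r
    r_tilde /= np.linalg.norm(r_tilde)
    best, h_k = -np.inf, r
    for theta in np.linspace(theta0, np.pi / 2, n_theta):
        direction = np.cos(theta) * r + np.sin(theta) * r_tilde
        tau = min(tau_r(theta), tau_g(theta))   # assumed combination rule
        reduction = tau * (-dG_along(direction))
        if reduction > best:
            best, h_k = reduction, direction
    return h_k
```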

4.6.2. Step-size curves

We first consider the step-size curve ${{\tau }_{r}}(\theta )$. Given a descent direction h, from minimizing $J({{f}^{(k)}}+\tau h)$, we have

from which we can determine τ as

with

Equation (22)

Equation (23)

To make J(f) not increase, we redefine

Taking

then we obtain (17) with

Equation (24)

Equation (25)

Equation (26)

Equation (27)

Now we consider the step-size curve ${{\tau }_{g}}(\theta )$. Since

from $G^{\prime} (f+\tau g(\theta ))=0$, omitting the higher-order term $O({{\tau }^{2}})$, we obtain

Equation (28)

Substituting

into (28), we obtain (18), with

Equation (29)

Equation (30)

Equation (31)

Equation (32)

where ${{G}_{{{g}_{k}}}}({{f}^{(k)}})$ and ${{G}_{{{g}_{k}}{{g}_{k}}}}({{f}^{(k)}})$ are the first- and second-order variations of G with respect to ${{g}_{k}}$. Appendix B gives the computational details.

4.6.3. Discussions

Now we show that the f produced by algorithm 4.2 satisfies condition (4). To illustrate this, let us consider a functional of the general form

Then its partial derivative with respect to the coefficients of the spline function f is

Let

Then the rate of change of $\mathcal{E}(f)$ in the direction r is

Hence, $\mathcal{E}(f)$ is non-increasing in the direction r. Let $h={{\sum }_{{\bf i}}}{{h}_{{\bf i}}}{{\phi }_{{\bf i}}}$ be another function. If

Equation (33)

then it is easy to derive that

Hence, $\mathcal{E}(f)$ is non-increasing in the direction h.

From the discussion above, we know that J(f) is non-increasing in the direction ${{{\bf h}}_{k}}$ at ${{f}^{(k)}}$, as (20) is satisfied. Furthermore, the step-size ${{\tau }_{k}}$ given by (21) makes $J({{f}^{(k+1)}})\leqslant J({{f}^{(k)}})$. Therefore, the sequence $\{{{f}^{(k)}}\}$ produced by algorithm 4.2 yields a monotonically non-increasing sequence $\{J({{f}^{(k)}})\}$.

5. Numerical experiments and discussions

To evaluate the performance of our reconstruction algorithms, we present several examples in this section. We first examine the performance for clean data in section 5.1, and then for noisy data in sections 5.2 and 5.3. We compare our results with those of WBP.

5.1. Numerical experiments for clean data

Given volume data $F=\{{{f}_{ijk}}\}$ of size $131\times 131\times 131$, we first generate 5000 projection images $\{{{I}_{i}}\}_{i=1}^{5000}$ of size 131 × 131 at randomly chosen projection directions $\{{{{\bf d}}_{i}}\}_{i=1}^{5000}$ on the unit sphere (figure 2 shows the first five projection images). Then we reconstruct F using different reconstruction algorithms. Let ${{R}^{(l)}}=\{r_{ijk}^{(l)}\}_{ijk=1}^{131}$, $l=1,\ldots ,5$, be the volumes reconstructed by WBP and by our bi-gradient method with $\beta =0$, $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$, respectively. Table 1 lists the L2-errors ${{E}^{(l)}}$, where ${{E}^{(l)}}$ is defined as

Since the data are clean, we do not use the regularization term in the reconstruction. The L2-errors decrease monotonically as the iteration number increases, and as $\beta$ increases from 0 to 1 the errors decrease. Choosing $\beta =1$ gives the most accurate result. The next best result is given by WBP, with L2-error 0.08131, followed by $\beta =\frac{1}{2}$, $\beta =\frac{1}{3}$ and $\beta =0$. More iterations make the case $\beta =\frac{1}{2}$ better than WBP; for instance, its L2-error is 0.07855 after 40 iterations. In the next subsection, we will show that the most accurate method may not be the best method for data with high noise.
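The defining display for ${{E}^{(l)}}$ is not reproduced above; the magnitudes in table 1 are consistent with the usual relative L2-error, which the following sketch computes.

```python
import numpy as np

def relative_l2_error(recon, truth):
    """E = ||R - F||_2 / ||F||_2, the usual relative L2-error convention
    (assumed to match the definition of E^{(l)})."""
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.linalg.norm(recon - truth) / np.linalg.norm(truth)
```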


Figure 2. Five projection images of the volume data F.


Table 1.  L2-errors ${{E}^{(2)}},\ldots ,{{E}^{(5)}}$ for different iteration numbers for the clean data. ${{E}^{(1)}}=0.08131$.

Iteration number $\beta =0$ $\beta =\frac{1}{3}$ $\beta =\frac{1}{2}$ $\beta =1$
5 0.15519 0.15368 0.14793 0.07803
10 0.13726 0.13419 0.12313 0.05790
15 0.12838 0.12416 0.10977 0.05686
20 0.12167 0.11635 0.09931 0.05674
25 0.11733 0.11115 0.09246 0.05672
30 0.11354 0.10652 0.08653 0.05671

Figure 3 shows the central slices of the reconstructed volume data, where panel (a) is from the initial data and panels (b)–(f) are from the reconstruction results of WBP and of the bi-gradient method with $\beta =0$, $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$, respectively. These figures support the same conclusions as the numerical results.


Figure 3. Central slices of the reconstructed volume data from the clean projections. (a) is from the initial data. (b) is from WBP. (c)–(f) are from the reconstruction results of our bi-gradient method with $\beta =0$, $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$, respectively, after 30 iterations.


Remark 5.3. The increase in the accuracy of the bi-gradient method as β increases from 0 to 1 is due to the fact that when $\beta =0$ we search for the minimizer in the 1D space ${\rm span}\;[{{{\bf r}}_{k}}]$, whereas when $\beta =1$ we search in the 2D space ${\rm span}\;[{{{\bf r}}_{k}},{{{\bf h}}_{k}}]$. When $\beta \in (0,1)$, the method interpolates between the cases $\beta =0$ and $\beta =1$.

5.2. Numerical experiments for noisy data with SNR = 1.0

We now generate a new set of data by adding white Gaussian noise with SNR = 1.0 to each of the images ${{I}_{i}}$ produced in the previous subsection. Figure 4 shows noisy versions of five of the images shown in figure 2. We then reconstruct f from the noisy images using the different reconstruction algorithms. Because the data are noisy, a regularization term is used. As before, we use ${{R}^{(l)}}$ to denote the reconstructed volumes. For our bi-gradient method, we iterate 30 steps. In table 2, we list the energies, defined by (2), of the reconstruction results after 30 iterations. The energies show that the bi-gradient method with $\beta =1$ yields the minimal energy; hence, it is indeed the most accurate method. However, the most accurate method may not lead to the most meaningful results.


Figure 4. Five noisy projection images of the volume data F for SNR = 1.0.


Table 2.  Energies of different methods after 30 iterations.

SNR $\beta =0$ $\beta =\frac{1}{3}$ $\beta =\frac{1}{2}$ $\beta =1$
1.0 10174781.537409 10155248.877989 10091605.869403 9908452.694144

Figure 5 shows the central slices of the reconstructed volume data. The figures, from the first to the fifth, are the central slices of the volumes reconstructed by WBP and by our bi-gradient method with $\beta =0$, $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$, respectively. It can be clearly observed that the bi-gradient method with $\beta =1$ gives the noisiest result, though with the most detailed structures, followed successively by $\beta =\frac{1}{2}$, $\beta =\frac{1}{3}$ and $\beta =0$.


Figure 5. Central slices of the reconstructed volume data for the noisy data with SNR = 1.0. The figures from the first to the fifth are from the reconstruction results of WBP and of our bi-gradient method after 30 iterations with $\beta =0$, $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$, respectively.


In table 3, we list the L2-errors between the reconstructed volumes and the exact initial volume data for iterations 5, 10, 15, 20, 25 and 30. The errors do not decrease monotonically as the iteration number increases. It is easy to see that the method that is most accurate for clean data ($\beta =1$) now leads to the maximal L2-error, while the other three cases lead to similar L2-errors, all of which are better than WBP for iterations 10, 15, 20, 25 and 30. The iso-surfaces of the reconstructed volume data, as shown in figure 6, demonstrate that:


Figure 6. Iso-surfaces of the reconstructed volume data at the middle iso-values. Figure (a) is from WBP. Figures (b)–(e) are from the reconstruction results of the bi-gradient method after 30 iterations with $\beta =0,\frac{1}{3},\frac{1}{2}$ and 1, respectively.


Table 3.  L2-errors ${{E}^{(2)}},\ldots ,{{E}^{(5)}}$ for different iteration numbers. The data are noisy with SNR = 1.0. ${{E}^{(1)}}=0.15005$.

Iteration number ${{E}^{(2)}}$ ${{E}^{(3)}}$ ${{E}^{(4)}}$ ${{E}^{(5)}}$
5 0.15521 0.15379 0.15071 0.36863
10 0.13770 0.13567 0.14074 0.45722
15 0.13374 0.12946 0.13406 0.45476
20 0.13394 0.12671 0.12680 0.45221
25 0.13419 0.12618 0.12378 0.45170
30 0.13428 0.12571 0.12203 0.45145

  • (i)  
    The bi-gradient method with $\beta =0$ gives the coarsest structure, followed by $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =1$.
  • (ii)  
    The bi-gradient method with $\beta =1$ does not yield useful iso-surfaces.

Remark 5.4. The noise is added to the 2D images ${{I}_{i}}$, not to the volume data F. If a 2D image is highly polluted, such that the noise level is much higher than the signal, the volume reconstructed from such images differs greatly from the initial volume data F. Hence, it is not reasonable to require the reconstructed volume to be close to the initial one, and the L2-error between the initial data and the reconstructed data may not be a reasonable measure for evaluating the reconstruction method. In the next subsection, we use Fourier shell correlation (FSC) to evaluate the performance of the reconstruction methods.

Remark 5.5. In the previous subsection, we showed that, for the clean data set, the accuracy of the bi-gradient method increases as β increases from 0 to 1. In this subsection, we observe that $\beta =0$ leads to the smoothest result. Now we explain the reason. From our previous work [5], we know that if we choose ${{{\bf f}}^{(0)}}=0$ and search for the minimizer in the direction ${{{\bf r}}_{k}}$, ${{{\bf f}}^{(k)}}$ converges to the minimal solution ${{{\bf f}}^{*}}$ of the equation $R{\bf f}=G$ in the space $N{{(R)}^{\bot }}=\{{\bf x}:{{{\bf x}}^{T}}{\bf y}=0,\forall {\bf y}\in N(R)\}$ in the sense of the Euclidean norm $\parallel {\bf f}\parallel =\sqrt{{{{\bf f}}^{T}}{\bf f}}$, where N(R) is the null space of the matrix R. Searching for the minimizer in the direction ${{{\bf h}}_{k}}$ instead, ${{{\bf f}}^{(k)}}$ converges to the minimal solution ${{{\bf f}}^{*}}$ of $R{\bf f}=G$ in the space $N{{(R)}^{{{\bot }^{M}}}}=\{{\bf x}:{{{\bf x}}^{T}}M{\bf y}=0,\forall {\bf y}\in N(R)\}$ in the sense of the norm $\parallel {\bf f}{{\parallel }_{M}}=\sqrt{{{{\bf f}}^{T}}M{\bf f}}$. This minimality in the Euclidean norm is why the case $\beta =0$ yields the smoothest result. Table 4 lists some Euclidean norms for the cases $\beta =0,\frac{1}{3},\frac{1}{2},1$; as β increases, the Euclidean norm also increases.

5.3. Numerical experiments for noisy data with SNR = 0.1

Given volume data $F=\{{{f}_{ijk}}\}$ of size $151\times 151\times 151$, we first generate 20 000 projection images $\{{{I}_{i}}\}_{i=1}^{20\,000}$ of size 151 × 151 at randomly chosen projection directions $\{{{{\bf d}}_{i}}\}_{i=1}^{20\,000}$ on the unit sphere. These images are polluted by adding white Gaussian noise with SNR = 0.1 (figure 7 shows five of these noisy images). The noisy images are split randomly into data sets A and B, each containing 10 000 images. Then we reconstruct F using our bi-gradient method with $\beta =0,\frac{1}{3},\frac{1}{2},\frac{2}{3}$ for each of the data sets (we do not take $\beta =1$ here, as the examples in the previous subsection have shown that it does not lead to desirable results). Because the data are extremely noisy, a regularization term is used. Splitting the data set into two parts allows us to examine the correlation of the reconstructed results using FSC.


Figure 7. Five noisy projection images of the volume data F for SNR = 0.1.


Table 4.  Euclidean norms of the reconstruction results. For WBP, the norm is 4.067898e+02.

Iteration number $\beta =0$ $\beta =\frac{1}{3}$ $\beta =\frac{1}{2}$ $\beta =1$
5 3.046299e+02 3.048806e+02 3.061394e+02 3.759363e+02
10 3.225103e+02 3.231880e+02 3.270741e+02 4.281541e+02
15 3.321209e+02 3.335966e+02 3.378084e+02 4.239439e+02
20 3.340637e+02 3.356739e+02 3.410555e+02 4.227851e+02
25 3.355638e+02 3.377481e+02 3.444411e+02 4.227860e+02
30 3.360086e+02 3.384513e+02 3.451244e+02 4.227148e+02

As before, we use ${{R}^{(l)}}$ to denote the reconstructed volumes. For each value of $\beta $, the algorithm iterates 30 steps. In table 5, we list the energies of the reconstruction results after 30 iterations. The energies decrease as $\beta $ increases, and $\beta =\frac{2}{3}$ yields the minimal energy for both data sets.

Table 5.  Energies of different methods after 30 iterations.

Data $\beta =0$ $\beta =\frac{1}{3}$ $\beta =\frac{1}{2}$ $\beta =\frac{2}{3}$
Data set A 135429907.779 135138283.715 134896618.185 134531987.307
Data set B 135409723.391 135117692.553 134875769.350 134510933.838

Figure 8 shows the central slices of the volume data. The first to fifth columns show the reconstruction results of WBP and of our bi-gradient method with $\beta =0,\frac{1}{3},\frac{1}{2},\frac{2}{3}$, respectively. It can be clearly observed that the case $\beta =\frac{2}{3}$ gives the noisiest result, though with the most detailed structures, followed successively by $\beta =\frac{1}{2},\frac{1}{3},0$.


Figure 8. Central slices of the volume data. The first and second rows are for the noisy data sets A and B, respectively. The first to fifth columns are from the reconstruction results of WBP and of the bi-gradient method after 30 iterations with $\beta =0,\frac{1}{3},\frac{1}{2}$ and $\frac{2}{3}$, respectively.


In tables 6 and 7, we list the L2-errors between the reconstructed volumes and the exact initial volume data for iterations 5, 10, 15, 20, 25 and 30. It is easy to see that the bi-gradient method with $\beta =0$ leads to the minimal L2-error, followed by the cases $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =\frac{2}{3}$, all of which are better than WBP after 30 iterations. The iso-surfaces of the reconstructed volume data, as shown in figure 9, demonstrate that the bi-gradient method with $\beta =0$ gives the coarsest structure, followed by $\beta =\frac{1}{3}$, $\beta =\frac{1}{2}$ and $\beta =\frac{2}{3}$.


Figure 9. Iso-surfaces of the reconstructed volume data at the middle iso-values. The first row is for the noisy data set A, the second row for the noisy data set B. Column (a) is from WBP. Columns (b)–(e) are from the reconstruction results of the bi-gradient method with $\beta =0,\frac{1}{3},\frac{1}{2}$ and $\frac{2}{3}$ after 30 iterations, respectively.


Table 6.  L2-errors ${{E}^{(2)}},\ldots ,{{E}^{(5)}}$ for different iteration numbers, for data set A. ${{E}^{(1)}}=0.29193$.

Iteration number ${{E}^{(2)}}$ ${{E}^{(3)}}$ ${{E}^{(4)}}$ ${{E}^{(5)}}$
5 0.17066 0.16985 0.17464 0.19615
10 0.14953 0.16200 0.19536 0.28074
15 0.14286 0.15583 0.19542 0.29516
20 0.14033 0.14994 0.18813 0.28671
25 0.13990 0.14878 0.18605 0.28404
30 0.13981 0.15085 0.18804 0.28388

Table 7.  L2-errors ${{E}^{(2)}},\ldots ,{{E}^{(5)}}$ for different iteration numbers, for data set B. ${{E}^{(1)}}=0.29208$.

Iteration number ${{E}^{(2)}}$ ${{E}^{(3)}}$ ${{E}^{(4)}}$ ${{E}^{(5)}}$
5 0.17064 0.16984 0.17466 0.19621
10 0.14956 0.16210 0.19553 0.28105
15 0.14290 0.15600 0.19576 0.29572
20 0.14035 0.15018 0.18841 0.28768
25 0.13992 0.14895 0.18635 0.28452
30 0.13984 0.15097 0.18836 0.28435

Table 8 lists the resolutions for each of the reconstructed volume pairs after 10, 20 and 30 iterations. The resolution is read off at the spatial frequency where the FSC curve crosses 0.5. In most cases, the resolution of the reconstruction is higher than that produced by WBP. Figure 10 shows the FSC curves for the five cases after different numbers of iterations.
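A sketch of the FSC computation between two half-set reconstructions follows; the conversion from the 0.5-crossing shell to a resolution in Å uses the usual convention $d=n\cdot {\rm voxel}/s$ for shell index s, which we assume matches the table.

```python
import numpy as np

def fourier_shell_correlation(vol_a, vol_b):
    """FSC over integer frequency shells between two cubic volumes."""
    Fa = np.fft.fftshift(np.fft.fftn(vol_a))
    Fb = np.fft.fftshift(np.fft.fftn(vol_b))
    n = vol_a.shape[0]
    idx = np.arange(n) - n // 2
    KX, KY, KZ = np.meshgrid(idx, idx, idx, indexing='ij')
    shell = np.rint(np.sqrt(KX**2 + KY**2 + KZ**2)).astype(int)
    fsc = np.zeros(n // 2)
    for s in range(n // 2):
        m = shell == s
        num = np.real(np.sum(Fa[m] * np.conj(Fb[m])))
        den = np.sqrt(np.sum(np.abs(Fa[m])**2) * np.sum(np.abs(Fb[m])**2))
        fsc[s] = num / max(den, 1e-12)
    return fsc

def resolution_at_half(fsc, n, voxel_size):
    """First shell where the FSC drops below 0.5, converted to a length
    via d = n * voxel_size / s (assumed convention)."""
    for s in range(1, len(fsc)):
        if fsc[s] < 0.5:
            return n * voxel_size / s
    return 2.0 * voxel_size                 # Nyquist if never below 0.5
```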


Figure 10. The FSC curves of WBP and our iterative method after 10 (left), 20 (middle) and 30 (right) iterations, respectively.


Table 8.  Resolution of the different methods. The resolution of WBP is 12.362563 Å.

Iteration number $\beta =0$ $\beta =\frac{1}{3}$ $\beta =\frac{1}{2}$ $\beta =\frac{2}{3}$
10 12.055628 Å 12.107415 Å 12.152935 Å 12.227004 Å
20 12.079924 Å 12.199175 Å 12.280208 Å 12.391474 Å
30 12.108918 Å 12.334633 Å 12.476160 Å 12.527178 Å

Remark 5.6. Both the numerical and the visual results in this subsection show that the reconstructions from data sets A and B are very similar, which gives further evidence that the bi-gradient method is robust.

5.4. Multi-scale reconstruction

The reconstructions in the previous subsection were conducted separately for each β. Computing them in this way allows us to examine the performance of our method; in a real reconstruction, we do not need to compute the result for every β. We have observed that as β increases from 0 to 1, structures from coarse to fine are captured. We therefore combine these computations into one loop, aiming to obtain a sequence of multi-scale reconstruction results. For a given $K\gt 1$, suppose we are to reconstruct a sequence of volume data ${{F}_{0}},{{F}_{1}},\ldots ,{{F}_{K}}$ from coarse to fine. Then we carry out the following:

  • (i)  
Construct an initial volume data F0 using the bi-gradient method with $\beta =0$. The iteration numbers M and N in the bi-gradient method are taken as 12 for both the first and second stages.
  • (ii)  
    For $k=1,2,\ldots ,K$ we do the following: set $\beta =\frac{\alpha }{1+\alpha }$ with $\alpha ={{2}^{k+1-K}}$ and reconstruct volume data Fk using the bi-gradient method with ${{F}_{k-1}}$ as the initial value. The iteration numbers M and N in the bi-gradient method are taken as 5 for both the first and second stages.

For instance, if we take K = 3 and reconstruct the volume data ${{F}_{0}},{{F}_{1}},{{F}_{2}},{{F}_{3}}$ using the data of the previous subsection, we obtain the results corresponding to $\beta =0,\frac{1}{3},\frac{1}{2},\frac{2}{3}$. The results are very close to those obtained in the previous subsection; hence, they are not presented again.
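The schedule of β values is easy to reproduce; for K = 3 it recovers exactly the four values $\beta =0,\frac{1}{3},\frac{1}{2},\frac{2}{3}$ used in the experiments above.

```python
def multiscale_betas(K):
    """Beta schedule of the multi-scale loop: the initial coarse pass uses
    beta = 0, then beta = alpha / (1 + alpha) with alpha = 2^(k+1-K)."""
    return [0.0] + [2.0**(k + 1 - K) / (1.0 + 2.0**(k + 1 - K))
                    for k in range(1, K + 1)]

print(multiscale_betas(3))   # [0.0, 0.3333..., 0.5, 0.6666...]
```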

6. Conclusions

We have presented a multi-scale bi-gradient flow method for single-particle reconstruction, which enhances both the computational speed and the accuracy of the L2GF method we reported earlier. In the space spanned by the B-spline radial basis functions, a bi-gradient method is combined with a geometric flow under an energy-decreasing constraint. The experimental results have shown that the proposed method yields very desirable results, and its multi-scale property gives the user the freedom to reconstruct the volumetric data at the desired level of detail.

Acknowledgments

This research was supported in part by the NSFC under the grants (11101401, 81173663, 11301520) and the NSFC Fund for Creative Research Groups of China under the grants (11021101, 11321061).

Appendix A.: Derivations of weak form

Now we derive the weak form of the surface diffusion flow (6). Multiplying both sides of (6) by a trial function $\phi \in {{C}^{2}}({{\mathbb{R}}^{3}})$ with compact support, and then using the co-area formula (see [7]) and Green's formula (see [23], p 24), we have

Equation (A.1)

where (see [3])

and ${\bf P}=I-{\bf n}{{{\bf n}}^{T}}$. Hence

Since

and

we have

Therefore

Equation (A.2)

Substituting (A.2) into (A.1), we obtain the weak form of the surface diffusion flow as (7).

Appendix B.: Second-order variation

The weak form of the surface diffusion flow has been written as (see (7))

Equation (B.1)

We know that

By expansion, we have

Then the surface diffusion flow is represented as

where

and

We regard the right-hand side of the surface diffusion flow with a minus sign as the first-order variation ${{G}_{\phi }}(f)$ with respect to ϕ for an unknown energy G(f). Then we compute the second-order variation of G(f) with respect to ψ as

where

Now let us compute $Q_{\psi }^{(0)}(f)$, $Q_{\phi \psi }^{(1)}(f)$, $Q_{\phi \psi }^{(2)}(f)$ and $Q_{\phi \psi }^{(3)}(f)$. It is easy to see that

Using these equations, we can obtain that
