3D Poissonian image deblurring via patch-based tensor logarithmic Schatten-p minimization

In medical and biological image processing, multi-dimensional images are often corrupted by blur and Poisson noise. In this paper, we first propose a new tensor logarithmic Schatten-p (t-log-Sp) low-rank measure and a tensor iteratively reweighted Schatten-p minimization algorithm for minimizing this measure. Furthermore, we adopt this low-rank measure to regularize the non-local tensors formed by similar 3D image patches and develop a patch-based non-local low-rank model. The data fidelity term of the model characterizes the Poisson noise distribution and the blur operator. The optimization model is solved by an alternating minimization technique combined with variable splitting. Experimental results on 3D fluorescence microscope images show that the proposed patch-based tensor logarithmic Schatten-p minimization method outperforms state-of-the-art methods in terms of image evaluation metrics and visual quality.

Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Introduction
Image degradation by blur and Poisson noise is inevitable in electron microscopy [1], astronomical imaging [2], single photon emission computed tomography (SPECT) [3, 4], positron emission tomography (PET) [5], and so on. On one hand, images are convolved with the point spread function (PSF) of the imaging device or blurred by body movement caused by the patient's breathing. On the other hand, due to low photon counts [6], images in x-ray tomography [7], fluorescence microscopy [1], astronomy [2], mammography [8], and tomosynthesis [9] are often affected by Poisson noise.
For deconvolving Poissonian images, a popular method is the Richardson-Lucy (RL) algorithm [10], which computes a Poisson maximum likelihood estimate. The ameliorated RL (ARL) algorithm [11] accelerates the deblurring procedure of the RL algorithm. However, the RL and ARL algorithms may amplify the noise after several iterations. To restore blurry Poissonian images efficiently, various optimization models with regularization terms were developed and solved by efficient algorithms. The most commonly used regularization is total variation (TV) regularization [12-19]. Dey et al [12] enhanced the RL algorithm with TV regularization; Harmany et al [13] solved the TV regularized model by sequential quadratic approximations; Bonettini and Ruggiero [14] combined a Poisson log-likelihood data fidelity term with the TV regularization term and used an alternating extragradient algorithm to solve the model; Figueiredo and Bioucas-Dias [15] solved the model by the alternating direction method of multipliers; Liyan et al [16] proposed a dictionary learning model in addition to TV regularization for Poissonian image restoration. Other regularizations such as wavelet-based regularizations [20-25] and the Hessian Schatten norm regularization [26] were also proposed. However, those regularization techniques are primarily designed for 2D images and cannot be easily extended to 3D images.
Recent approaches for 3D Poissonian image deblurring convert Poisson noise into approximately Gaussian noise through some transformation and then restore the image via denoising tools for Gaussian noise. Dupe et al [27] utilized the Anscombe variance stabilizing transformation (VST) [28], leading to Gaussian noise, and denoised the blurry Gaussian image by a convex optimization model; Azzari et al [29] deconvolved the blurry image by a linear regularized inverse filter and then adopted the VST and block matching 3D (BM3D) [30] or BM4D [31] to remove the Poisson noise. Besides these approaches, methods based on the Poisson unbiased risk estimate (PURE) have also achieved great performance. The PURE-LET method, which characterizes the deconvolution process as a linear combination of elementary functions (LET), was proposed in [32] for 2D images and in [33] for 3D images. Each LET function contains a Wiener filtering and wavelet-domain thresholding step, and the PURE is used to estimate the coefficients of the linear combination.
In this paper, we propose a patch-based approach for 3D Poissonian image deblurring. First, a new tensor low-rank measure called the t-log-Sp low-rank measure is proposed, together with an efficient minimization algorithm with convergence results. Second, exploiting image non-local self-similarity, we use the proposed tensor low-rank measure to regularize the low-rankness of the tensors formed by similar 3D patches extracted from the 3D image. Then we propose a non-local low-rank model with a data fidelity term for Poissonian deblurring and solve it by an alternating minimization algorithm with a proximal term. Lastly, we demonstrate that the proposed method outperforms state-of-the-art methods in removing Poisson noise and deblurring fluorescence microscope images.
The main contributions of this paper are as follows:
• We propose a matrix logarithmic Schatten-p (log-Sp) low-rank measure for 2D images, which reveals the weighting strategy used in the weighted Schatten p-norm minimization [34]. We then extend the log-Sp low-rank measure to a tensor log-Sp (t-log-Sp) low-rank measure for 3D images. We demonstrate in this paper that the t-log-Sp measure is efficient and suitable for 3D image restoration applications such as 3D Poissonian image deblurring.
• For the proposed log-Sp and t-log-Sp measures, we introduce some properties and develop reliable solvers for their minimization problems. In particular, we develop an iteratively reweighted Sp minimization (IRSpM) algorithm for the log-Sp minimization and a tensor IRSpM (t-IRSpM) algorithm for the t-log-Sp minimization. A detailed convergence analysis of each algorithm is provided, showing that any accumulation point generated by the algorithm is a stationary point of the problem.
• We build a new patch-based non-local low-rank model using the proposed t-log-Sp measure for 3D Poissonian image deblurring. This approach achieves state-of-the-art performance for 3D Poissonian image deblurring.
This paper is organized as follows. In section 2, we provide some tensor notations and definitions, then introduce the matrix and tensor logarithmic Schatten-p (log-Sp) low-rank measures and their properties. To solve the matrix and tensor log-Sp minimization problems, in section 3 we propose the matrix and tensor IRSpM algorithms, respectively, along with convergence analyses.
We develop our model for 3D Poissonian image deblurring in section 4. Experimental results on 3D fluorescence microscope images are presented in section 5. Section 6 concludes the paper.

Tensor logarithmic Schatten-p low-rank measure
In this section, we first introduce the definitions and notations of tensors, including the tensor singular value decomposition (t-SVD). Then we propose the t-log-Sp low-rank measure and present its properties.

Preliminaries on tensors
Tensors are represented by bold calligraphic letters, e.g. X; matrices by bold capital letters, e.g. X; vectors by bold lowercase letters, e.g. x; and scalars by lowercase letters, e.g. x. For a three-order tensor X ∈ R^(n1×n2×n3), x_ijk denotes the (i, j, k)th entry of X, X^(k) denotes the kth frontal slice X(:, :, k), and X̄ denotes the discrete Fourier transform (DFT) of X along the third dimension, i.e. X̄ = fft(X, [ ], 3); this also implies X = ifft(X̄, [ ], 3). X̄^(i) denotes the ith frontal slice of X̄. The operator 'unfold' stacks the frontal slices of X vertically into an (n1 n3) × n2 matrix, and its inverse operator 'fold' satisfies fold(unfold(X)) = X. The block diagonal matrix of X̄ is defined as bdiag(X̄) = diag(X̄^(1), X̄^(2), . . ., X̄^(n3)).
As for the block unfolding of X and its inverse, the operator 'bvec' stacks the frontal slices X^(1), X^(2), . . ., X^(n3) vertically into an (n1 n3) × n2 matrix, and the inverse operator 'bvfold' satisfies bvfold(bvec(X)) = X.
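As a concrete illustration of these notations, the following Python sketch (using NumPy; the helper names `unfold`, `fold`, and `bdiag` are our own, not from the paper) builds the DFT-domain tensor X̄ = fft(X, [ ], 3) and its block-diagonal matrix:

```python
import numpy as np

def unfold(X):
    # stack the frontal slices of an n1 x n2 x n3 tensor vertically
    n1, n2, n3 = X.shape
    return X.transpose(2, 0, 1).reshape(n1 * n3, n2)

def fold(M, shape):
    # inverse of unfold
    n1, n2, n3 = shape
    return M.reshape(n3, n1, n2).transpose(1, 2, 0)

def bdiag(Xbar):
    # block-diagonal matrix whose diagonal blocks are the frontal
    # slices of the DFT-domain tensor Xbar
    n1, n2, n3 = Xbar.shape
    D = np.zeros((n1 * n3, n2 * n3), dtype=Xbar.dtype)
    for k in range(n3):
        D[k * n1:(k + 1) * n1, k * n2:(k + 1) * n2] = Xbar[:, :, k]
    return D

X = np.random.default_rng(0).random((4, 3, 5))
Xbar = np.fft.fft(X, axis=2)             # Xbar = fft(X, [], 3)
X_back = np.fft.ifft(Xbar, axis=2).real  # X = ifft(Xbar, [], 3)
```

The round trips fold(unfold(X)) = X and ifft(fft(X)) = X hold exactly (up to floating point).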
Using the tensor notations above, we present the definition of the tensor product (t-product).
In fact, the t-product Z = X * Y can also be computed via the following equivalence under the DFT: Z̄^(i) = X̄^(i) Ȳ^(i), i = 1, 2, . . ., n3; that is, the ith frontal slice of the DFT of the t-product equals the matrix product of the ith frontal slices of the DFTs of X and Y.
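This slice-wise equivalence suggests a direct way to compute the t-product. The sketch below (our own illustration, not the paper's code) multiplies matching frontal slices in the Fourier domain:

```python
import numpy as np

def t_product(X, Y):
    # Z = X * Y via the DFT: for each i, Zbar^(i) = Xbar^(i) @ Ybar^(i)
    Xbar = np.fft.fft(X, axis=2)
    Ybar = np.fft.fft(Y, axis=2)
    Zbar = np.einsum('ijk,jlk->ilk', Xbar, Ybar)
    return np.fft.ifft(Zbar, axis=2).real

def identity_tensor(n, n3):
    # first frontal slice is the n x n identity, the rest are zero
    I = np.zeros((n, n, n3))
    I[:, :, 0] = np.eye(n)
    return I
```

Multiplying by the identity tensor of definition 2.2 returns the tensor unchanged, which serves as a quick sanity check on the implementation.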
Definition 2.2 (identity tensor, [35]). The identity tensor I ∈ R^(n×n×n3) is the tensor whose first frontal slice is the n × n identity matrix and whose other frontal slices are all zeros.
Note that each frontal slice of Ī is the n × n identity matrix and, for an orthogonal tensor Q, each frontal slice of Q̄ is an orthogonal matrix. Next, we present the definition of the t-SVD and several tensor low-rank measures.
Definition 2.6 (tensor tubal rank, [36]). For a 3D tensor X ∈ R^(n1×n2×n3), the tensor tubal rank of X, denoted rank_t(X), is defined as the number of non-zero tubes of S, where S is from the t-SVD X = U * S * V^T. The tensor tubal rank is a tensor low-rank measure based on the t-SVD, which counts the number of non-zero tubes in the t-SVD. In fact, the tensor tubal rank only depends on the first frontal slice of S, that is, rank_t(X) = #{i : S(i, i, 1) ≠ 0}. Since tensor tubal rank minimization is NP-hard, several tensor low-rank measures have been proposed to approximate the tensor tubal rank.
Definition 2.7 (tensor nuclear norm, [37]). Let X = U * S * V^T be the t-SVD of X ∈ R^(n1×n2×n3). Then the tensor nuclear norm is defined as ||X||_* = Σ_i S(i, i, 1). The tensor nuclear norm can also be computed via the frontal slices of X̄, that is, ||X||_* = (1/n3) Σ_{i=1}^{n3} ||X̄^(i)||_*. As the frontal slices X̄^(i) are matrices, the matrix nuclear norm can be replaced by other non-convex surrogates of the matrix rank. For example, the tensor p-shrinkage nuclear norm [38] replaces the nuclear norm by the Schatten-p quasi-norm; the tensor weighted Schatten-p norm [39] uses the weighted Schatten-p norm; and the log-based tensor nuclear norm [40] uses the log-det function.
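Both the tubal rank and the tensor nuclear norm can be evaluated from the SVDs of the DFT-domain frontal slices. A minimal sketch (our own helper names; the 1/n3 scaling follows the slice-wise formula above):

```python
import numpy as np

def slice_singular_values(X):
    # singular values of each DFT-domain frontal slice, row per slice
    Xbar = np.fft.fft(X, axis=2)
    return np.stack([np.linalg.svd(Xbar[:, :, k], compute_uv=False)
                     for k in range(X.shape[2])])   # n3 x min(n1, n2)

def tubal_rank(X, tol=1e-8):
    # number of non-zero singular tubes = max slice rank in the DFT domain
    return int(np.max(np.sum(slice_singular_values(X) > tol, axis=1)))

def tensor_nuclear_norm(X):
    # (1/n3) * sum of nuclear norms of the DFT-domain frontal slices
    return float(slice_singular_values(X).sum() / X.shape[2])
```

For instance, a tensor whose frontal slices are all copies of one rank-2 matrix A has tubal rank 2 and tensor nuclear norm equal to the matrix nuclear norm of A.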

Tensor logarithmic Schatten-p low-rank measure and its properties
Before we propose the tensor log-S p low-rank measure, we first define a new matrix log-S p low-rank measure as follows.
Definition 2.8 (log-Sp). Given X ∈ R^(m×n), the matrix logarithmic Schatten-p (log-Sp) low-rank measure of X is defined as M_log,Sp(X) = Σ_j log(σ_j^p(X) + ε), where ε > 0, 0 < p ⩽ 1, and σ_j(X) represents the jth largest singular value of X.
If p = 1, the log-Sp low-rank measure M_log,Sp(•) reduces to the log-det function [41], which is a non-convex surrogate of the matrix rank. If 0 < p < 1, due to the non-convexity of the ℓp norm, M_log,Sp(•) is also a non-convex relaxation of the matrix rank, and in fact it can achieve a better approximation than the Sp quasi-norm [42] or the log-det function.
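The measure is straightforward to evaluate numerically. The sketch below (parameter values are illustrative, not from the paper) also checks the orthogonal invariance established in proposition 2.10:

```python
import numpy as np

def log_sp(X, p=0.95, eps=1e-2):
    # matrix log-Schatten-p measure: sum_j log(sigma_j(X)^p + eps)
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(np.log(s ** p + eps)))

rng = np.random.default_rng(3)
X = rng.random((6, 4))
U, _ = np.linalg.qr(rng.random((6, 6)))   # random orthogonal matrices
V, _ = np.linalg.qr(rng.random((4, 4)))
```

Since singular values are invariant under orthogonal transforms, log_sp(X) and log_sp(U X V^T) agree; at the zero matrix the measure equals l·log(ε).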
Next, we propose a new tensor low-rank measure called the t-log-Sp low-rank measure, which applies the matrix log-Sp low-rank measure to the frontal slices X̄^(i). The t-log-Sp low-rank measure, also denoted M_log,Sp(•), is defined as follows.
Definition 2.9 (t-log-Sp). Given X ∈ R^(n1×n2×n3), the tensor logarithmic Schatten-p (t-log-Sp) low-rank measure of X is defined as M_log,Sp(X) = (1/n3) Σ_{i=1}^{n3} M_log,Sp(X̄^(i)), where ε > 0 and 0 < p ⩽ 1. When n3 = 1, the t-log-Sp low-rank measure reduces to the matrix log-Sp low-rank measure.
Proposition 2.10 (orthogonal invariance). The following assertions hold: (i) For a given matrix X ∈ R^(m×n), if U ∈ R^(m×m) and V ∈ R^(n×n) are orthogonal matrices, then M_log,Sp(X) = M_log,Sp(UXV^T). (ii) For a given tensor X ∈ R^(n1×n2×n3), if U ∈ R^(n1×n1×n3) and V ∈ R^(n2×n2×n3) are orthogonal tensors, then M_log,Sp(X) = M_log,Sp(U * X * V^T).

Proof. (i) follows since UXV^T has the same singular values as X. (ii) Let Z = U * X * V^T. By equation (1), we have Z̄^(i) = Ū^(i) X̄^(i) (V̄^(i))^T, where Ū^(i) and V̄^(i) are orthogonal matrices. According to (i), M_log,Sp(Z̄^(i)) = M_log,Sp(X̄^(i)) for each i, hence (ii) holds.
As shown in proposition 2.10, both the matrix and tensor log-Sp low-rank measures M_log,Sp(•) satisfy the orthogonal invariance property. This property is useful when we minimize these measures together with another function that also has an orthogonal invariance property. For example, the log-Sp minimization problem discussed in the next section may be reduced, via this orthogonal invariance, to a minimization problem in terms of the singular values only.

Tensor iteratively reweighted S p minimization algorithm for the t-log-S p minimization
For a 3D tensor Y ∈ R^(n1×n2×n3), the t-log-Sp minimization problem is written as
min_X M_log,Sp(X) + (τ/2) ||X − Y||_F^2,    (5)
where τ > 0. By the definition of the t-log-Sp low-rank measure, M_log,Sp(•) is separable in terms of the frontal slices of X̄. Hence solving the t-log-Sp minimization problem (5) is equivalent to solving for each frontal slice X̄^(i) the problem
min_{X̄^(i)} M_log,Sp(X̄^(i)) + (τ/2) ||X̄^(i) − Ȳ^(i)||_F^2.    (6)
In section 3.1 we propose an IRSpM algorithm for the log-Sp minimization problem as in (6), and in section 3.2 we conduct a convergence analysis for the IRSpM algorithm. Then in section 3.3, we summarize the tensor IRSpM (t-IRSpM) algorithm and its convergence analysis for solving the t-log-Sp minimization problem (5).

Iteratively reweighted Sp minimization algorithm for the log-Sp minimization
We consider the log-Sp minimization problem
min_X M_log,Sp(X) + (τ/2) ||X − Y||_F^2,    (7)
where Y ∈ R^(m×n) is the given data, X ∈ R^(m×n) is the unknown to be computed, and τ > 0. Note that X and Y can represent X̄^(i) and Ȳ^(i), respectively. By definition, the log-Sp low-rank measure can be written as M_log,Sp(X) = Σ_j g(σ_j^p(X)), where g : [0, ∞) → R is defined by g(t) = log(t + ε). The function g is monotonically increasing, concave, and continuously differentiable. Moreover, g has a Lipschitz continuous gradient with constant L_g > 0, i.e. |g′(t1) − g′(t2)| ⩽ L_g |t1 − t2| for all t1, t2 ⩾ 0.
To solve the log-Sp minimization (7), we propose an iteratively reweighted Sp minimization (IRSpM) algorithm with the update
X^(k+1) = argmin_X Σ_j w_j^k σ_j^p(X) + (τ/2) ||X − Y||_F^2,    (8)
where w_j^k = g′(σ_j^p(X^k)) = 1/(σ_j^p(X^k) + ε). Before solving equation (8), we recall some notation on the singular value decomposition (SVD). Given a vector x ∈ R^l, let Diag(x) denote the l × l diagonal matrix whose jth diagonal element is x_j. Given a matrix X ∈ R^(m×n), the SVD of X is computed as X = UΣV^T, where U ∈ R^(m×l) and V ∈ R^(n×l) are orthogonal matrices with U^T U = V^T V = I, Σ ∈ R^(l×l) is a diagonal matrix, and l = min{m, n}. In particular, Σ = Diag(σ(X)), where σ(X) = [σ_1(X), σ_2(X), . . ., σ_l(X)]^T and σ_j(X) is the jth largest singular value of X. Since equation (8) can be viewed as a weighted Sp minimization problem, we recall some preliminary results from [34].

Lemma 3.1 ([34]). For the optimization problem
min_δ (1/2)(δ − σ)^2 + w|δ|^p,    (9)
with w ⩾ 0 and 0 < p ⩽ 1, there exists a specific threshold
τ_p^GST(w) = (2w(1 − p))^(1/(2−p)) + wp(2w(1 − p))^((p−1)/(2−p)),
and we have the following conclusions: if |σ| ⩽ τ_p^GST(w), then δ = 0 is a global minimizer; otherwise, the global minimizer is δ = sgn(σ) S_p^GST(|σ|, w), where S_p^GST(|σ|, w) can be obtained by solving S_p^GST(|σ|, w) − |σ| + wp (S_p^GST(|σ|, w))^(p−1) = 0. The generalized soft-thresholding (GST) algorithm proposed in [43] for finding an optimal solution T_p^GST(σ, w) of problem (9) is summarized in algorithm 1.
Algorithm 1. GST: T_p^GST(σ, w).
Input: σ, w, p, J
1: τ_p^GST(w) = (2w(1 − p))^(1/(2−p)) + wp(2w(1 − p))^((p−1)/(2−p))
2: if |σ| ⩽ τ_p^GST(w) then
3:  T_p^GST(σ, w) = 0
4: else
5:  δ^(0) = |σ|
6:  for k = 0, 1, . . ., J do
7:   δ^(k+1) = |σ| − wp(δ^(k))^(p−1)
8:  end for
9:  T_p^GST(σ, w) = sgn(σ) δ^(J)
10: end if
Output: T_p^GST(σ, w)

Theorem 3.2. Given Y = UΣV^T with Σ = Diag(σ(Y)) and non-negative weights satisfying w_1 ⩽ w_2 ⩽ . . . ⩽ w_l, a global optimal solution of the weighted Sp minimization problem min_X Σ_j w_j σ_j^p(X) + (τ/2)||X − Y||_F^2 is given by X = U Diag(γ) V^T with γ_j = T_p^GST(σ_j(Y), w_j/τ). In particular, γ also satisfies the optimality condition of the scalar problem (9) component-wise. For the IRSpM algorithm given in equation (8), it can be easily verified that the weights w_j^k = 1/(σ_j^p(X^k) + ε) are non-decreasing in j. Then a global optimal solution of (8) can be efficiently computed according to theorem 3.2 as
X^(k+1) = U Diag(T_p^GST(σ(Y), w^k/τ)) V^T.    (11)
We summarize the IRSpM algorithm in algorithm 2.
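A compact Python version of the GST solver (a sketch of algorithm 1 under the threshold formula stated in lemma 3.1; the fixed-point step count `J` is a free parameter):

```python
import numpy as np

def gst(sigma, w, p, J=10):
    # generalized soft-thresholding for min_d 0.5*(d - sigma)^2 + w*|d|^p
    tau = (2 * w * (1 - p)) ** (1 / (2 - p)) \
        + w * p * (2 * w * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(sigma) <= tau:
        return 0.0
    d = abs(sigma)
    for _ in range(J):                  # fixed-point iteration
        d = abs(sigma) - w * p * d ** (p - 1)
    return np.sign(sigma) * d
```

For p = 1 this reduces to the soft-thresholding operator with threshold w; for p < 1 the returned value satisfies the first-order condition d − |σ| + wp d^(p−1) = 0 to high accuracy.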
Algorithm 2. IRSpM for the log-Sp minimization (7).
Input: Y and parameter τ
1: Compute the SVD Y = UΣV^T and initialize X^0 = Y, k = 0.
2: while not converged do
3:  Compute the weights w_j^k = 1/(σ_j^p(X^k) + ε).
4:  Update X^(k+1) by (11).
5:  k ← k + 1.
6: end while
Output: X^k
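Putting the pieces together, the following self-contained sketch of the IRSpM iteration (our own implementation; the GST inner solver is repeated so the block runs on its own, and, following remark 3.3, only one SVD of Y is needed since X^0 = Y) decreases the objective of (7):

```python
import numpy as np

def gst(sigma, w, p, J=10):
    # generalized soft-thresholding for min_d 0.5*(d - sigma)^2 + w*d^p
    tau = (2 * w * (1 - p)) ** (1 / (2 - p)) \
        + w * p * (2 * w * (1 - p)) ** ((p - 1) / (2 - p))
    if sigma <= tau:
        return 0.0
    d = sigma
    for _ in range(J):
        d = sigma - w * p * d ** (p - 1)
    return d

def irspm(Y, tau, p=0.95, eps=1e-2, iters=20):
    # min_X sum_j log(sigma_j(X)^p + eps) + (tau/2)*||X - Y||_F^2
    U, sy, Vt = np.linalg.svd(Y, full_matrices=False)
    s = sy.copy()                             # X^0 = Y: reuse Y's SVD
    for _ in range(iters):
        w = 1.0 / (s ** p + eps)              # w_j = g'(sigma_j^p(X^k))
        s = np.array([gst(sy[j], w[j] / tau, p) for j in range(len(sy))])
    return (U * s) @ Vt

def objective(X, Y, tau, p=0.95, eps=1e-2):
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(np.log(s ** p + eps)) + tau / 2 * np.linalg.norm(X - Y) ** 2

rng = np.random.default_rng(4)
Y = rng.random((8, 6))
X = irspm(Y, tau=5.0)
```

By the majorize-minimize argument behind proposition 3.10, the objective at the output never exceeds the objective at the initialization X^0 = Y.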

Convergence analysis of the IRSpM algorithm
We can prove that any accumulation point of the sequence {X^k} generated by algorithm 2 is a stationary point of the objective function of the log-Sp minimization (7). First, we recall some definitions of subdifferentials and some results on computing the subdifferential of singular value functions introduced in [44]. (1) For a given x ∈ dom ∂f := {x ∈ R^d : ∂f(x) ≠ ∅}, the Fréchet subdifferential of f at x, written ∂̂f(x), is the set of all vectors u ∈ R^d which satisfy lim inf_{y→x, y≠x} [f(y) − f(x) − ⟨u, y − x⟩]/||y − x|| ⩾ 0. (2) A function f : R^l → R is absolutely symmetric if f(x) = f(|x_π|) for any permutation π of the entries of x.
A singular value function is a composition f ∘ σ, where f : R^l → R is an absolutely symmetric function and l = min{m, n}.

Lemma 3.7 ([44]). Let f be an absolutely symmetric function. Then the subdifferential of the corresponding singular value function f ∘ σ at a matrix X is given by ∂(f ∘ σ)(X) = {U Diag(u) V^T : u ∈ ∂f(σ(X)), X = U Diag(σ(X)) V^T}.

The log-Sp low-rank measure can be viewed as a singular value function and its subdifferential can be computed by lemma 3.7. However, it is still challenging to find an explicit expression for the subdifferential of the log-Sp low-rank measure due to the non-smoothness of the Sp quasi-norm.
Second, motivated by the class of first-order stationary points for ℓp regularized low-rank approximation problems introduced in [42], we define a class of first-order stationary points for the log-Sp minimization problem (7) using the set O(X) := {(Ũ, Ṽ) : X = Ũ Diag(σ(X)) Ṽ^T}, where σ(X) := [σ_1(X), σ_2(X), . . ., σ_r(X)]^T and r = rank(X). Note that O(X) is the set of all such pairs (Ũ, Ṽ) from the rank-reduced SVD of X.
Definition 3.8. A point X* is a first-order stationary point of problem (7) if condition (12) holds at X*. The next theorem shows that a local minimizer of problem (7) is a first-order stationary point.
Theorem 3.9. Suppose that X* is a local minimizer of problem (7). Then X* is a first-order stationary point of problem (7), that is, (12) holds at X*.
Third, we show some convergence results for the sequence {X^k} generated by the proposed IRSpM algorithm in algorithm 2. The objective function of (7) evaluated at the sequence {X^k} is strictly decreasing, and any accumulation point of {X^k} is a stationary point.

Proposition 3.10. Let Ψ denote the objective function of the log-Sp minimization problem (7). Suppose that {X^k} is a sequence generated by algorithm 2 and µ > 1. Then the descent estimate (13) holds.

Proof. By the descent lemma [45] and the concavity of the function g, we obtain inequality (14), where w_j^k = g′(σ_j^p(X^k)) and l = min{m, n}. Note that X^(k+1) is a minimizer of (8), and thus inequality (15) holds. Substituting (15) into (14) yields (13).
Theorem 3.11. Let Ψ denote the objective function of the log-Sp minimization problem (7). Suppose that {X^k} is a sequence generated by algorithm 2 and µ > 1. Then the following assertions hold.

Proof. (i) It follows from proposition 3.10 that the decreasing sequence {Ψ(X^k)} is bounded above by Ψ(X^0). This yields assertion (ii). (iii) Let X* be an accumulation point of the sequence {X^k} and let γ* ∈ R^l be the vector γ* = σ(X*). Assume that a subsequence {X^(k_i)} of {X^k} converges to X* as i → ∞.
Let r = rank(X*). Then there exists some I_0 > 0 such that γ_j^(k_i+1) > 0 for all j ⩽ r and i > I_0. By lemma 3.1, the corresponding first-order equation holds for all j ⩽ r and i > I_0. We observe that X^(k_i+1) = Ũ^(k_i+1) Diag(γ^(k_i+1)) (Ṽ^(k_i+1))^T, where U_j^(k_i+1) and V_j^(k_i+1) denote the jth columns of U^(k_i+1) and V^(k_i+1), respectively. Upon pre- and post-multiplying this equation by (Ũ^(k_i+1))^T and Ṽ^(k_i+1), we obtain equation (17). Next, it can be easily verified that {Ũ^(k_i+1)} and {Ṽ^(k_i+1)} are bounded. Considering a convergent subsequence if necessary, without loss of generality we assume that Ũ^(k_i+1) → Ũ* and Ṽ^(k_i+1) → Ṽ*. Taking the limit of both sides of equation (17) as i → ∞ and using assertion (ii), we obtain the stationarity condition at X*. Since γ_j* = 0 for all j > r, X* is a stationary point of Ψ.

The tensor IRSpM algorithm and its convergence analysis for the t-log-Sp minimization
As mentioned at the beginning of section 3, solving the t-log-Sp minimization (5) is equivalent to solving a log-Sp minimization (6) for each frontal slice X̄^(i). Hence a tensor IRSpM (t-IRSpM) algorithm can be obtained for the t-log-Sp minimization by applying the IRSpM algorithm to each slice. We summarize the t-IRSpM algorithm in algorithm 3 and its convergence results in theorem 3.13.
Algorithm 3. t-IRSpM for the t-log-Sp minimization (5).
Input: Y and parameter τ
1: Ȳ = fft(Y, [ ], 3)
2: for i = 1, 2, . . ., n3 do
3:  extract the ith frontal slice Ȳ^(i)
4:  initialize X̄^(i),0 = Ȳ^(i)
5:  solve the slice subproblem (6) for X̄^(i) by the IRSpM algorithm (algorithm 2)
6: end for
7: X = ifft(X̄, [ ], 3)
Output: X

Definition 3.12. A point X* is a first-order stationary point of problem (5) if, for i = 1, 2, . . ., n3, each frontal slice X̄*^(i) is a first-order stationary point of the corresponding problem (6).

To present the convergence results of the t-IRSpM algorithm, we denote by {X̄^(i),k} the sequence generated by IRSpM in the fifth line of algorithm 3.

Theorem 3.13. Let Φ denote the objective function of the t-log-Sp minimization problem (5). Suppose that {X^k} is a sequence generated by algorithm 3. Then the assertions of theorem 3.11 hold slice-wise. Proof. Let Φ^(i) denote the objective function of problem (6). Note that Φ(X) = (1/n3) Σ_i Φ^(i)(X̄^(i)). All the assertions immediately follow from theorem 3.11 applied to each frontal slice.

Patch-based approach for 3D Poissonian image deblurring
In this section, we propose a non-local low-rank model for 3D Poissonian image deblurring by exploiting low-rank priors of the non-local similar patch groups extracted from the observed images.

Problem statement
For a 3D image x ∈ R^N, the image degradation model under Poisson noise can be written as y = P(Hx), where x denotes the image before degradation, H ∈ R^(N×N) denotes the matrix representation of convolution with a PSF, P(•) denotes the process in which the image is contaminated by Poisson noise, and y denotes the degraded image. If H is the identity matrix, the model becomes a pure denoising model. In this paper, we consider periodic boundary conditions, so the blurring operator H has a block-circulant structure.
Since the variance of the Poisson noise is proportional to the signal intensity in each pixel, and assuming that the observed values are independent across positions, we can write y_i ∼ Poisson((Hx)_i), where y_i denotes the pixel value of the observed image at position i and x denotes the original clean image. Using the Bayesian framework, Triet et al [46] proposed the following minimization model for 2D Poissonian image deblurring:
min_x ⟨Hx − y log(Hx), 1⟩ + τ ||∇x||_1,    (20)
where 1 denotes the vector whose entries are all ones, the logarithm and the multiplication with y are component-wise operations, and τ > 0 is a parameter. The first term of model (20) is the data fidelity term derived from the log-likelihood function of the Poisson distribution, and the second term is the classical discrete TV regularization [47], defined as the composition of the l_1 norm and the first-order difference operator ∇.
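The Poisson data fidelity can be checked numerically. In the sketch below (our own helper names), the fidelity ⟨Hx − y log(Hx), 1⟩ is evaluated directly on the blurred intensities:

```python
import numpy as np

def poisson_fidelity(hx, y):
    # <Hx - y log(Hx), 1>: negative Poisson log-likelihood up to a
    # constant that does not depend on x
    return float(np.sum(hx - y * np.log(hx)))

rng = np.random.default_rng(5)
x_true = rng.uniform(1.0, 10.0, size=100)      # clean blurred intensities
y = rng.poisson(x_true).astype(float) + 1e-8   # Poisson observation, kept positive
```

Component-wise, t ↦ t − y_i log t is minimized at t = y_i, so the fidelity is smallest when Hx reproduces the observation exactly; this is why it acts as a data-matching term.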
For 3D Poissonian image deblurring, the data fidelity term of model (20) can also be used. However, due to the ill-posedness of the problem, TV regularization-based methods have limitations in preserving image textures, especially for 3D images. Therefore, we propose a non-local low-rank model based on the t-log-Sp low-rank measure.

Non-local low-rank model for 3D images
Non-local self-similarity for 2D images means that for each patch of the image, similar patches can be found elsewhere in the image and grouped to obtain a low-rank matrix. Using this property, non-local low-rank models have been developed for various applications in image restoration [48-54]. For example, weighted nuclear norm minimization [52] has been applied to image denoising [52], image deblurring [53], Rician noise removal [54] and phase retrieval [51]. The non-local self-similarity property also holds for 3D images. The non-local low-rank regularization for 3D images can be imposed by using matrix low-rank measures [55] or tensor low-rank measures [56, 57]. For example, Kronecker-basis-representation (KBR) tensor sparsity regularization [58] has been applied to multispectral image denoising [58], low-dose dynamic cerebral perfusion CT reconstruction [59] and low-dose CT sinogram recovery [60]. In the following, we adopt the t-log-Sp tensor low-rank measure proposed in section 2 and develop a non-local low-rank model for 3D Poissonian image deblurring.
First, we group similar non-local 3D patches, also called cubes, by cube matching and form a non-local similar patch tensor. Given a 3D image x, suppose it can be divided into L overlapping cubes of a fixed size. For each reference cube x_i of the image, a total of n2 non-local self-similar cubes {x_i,j} can be found by cube matching. Here, the cubes are matched using Euclidean distances, and the tensor R_i(x) is generated for the reference cube x_i by stacking the grouped unfolded cubes, in ascending order of Euclidean distance, along the second dimension; see definition 4.1.
Definition 4.1. Given a vectorized 3D image x ∈ R^N and a reference vectorized cube x_i ∈ R^(n1 n3), the non-local similar patch matrix R_i,j ∈ R^(n1 n3 × N) is a binary matrix (whose entries are 1 or 0), i = 1, 2, . . ., L, j = 1, 2, . . ., n2, such that R_i,j x is the jth vectorized cube in the ith non-local similar group, that is, R_i,j x = x_i,j. Let R_i : R^N → R^(n1×n2×n3) be the extraction operator for the ith non-local self-similar tensor, defined by stacking the cubes R_i,1 x, R_i,2 x, . . ., R_i,n2 x along the second dimension. Here, R_i(x) is the constructed tensor for the ith reference cube. This tensor describes the spatial correlation along the first dimension, presents the repeated patterns of similar cubes along the second dimension, and keeps the mode-3 correlation of the 3D image along the third dimension. Note that the order of the modes can be switched. Moreover, R_i(x) should be a low-rank tensor, according to non-local self-similarity, if x is a clean image.
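Cube matching can be sketched as follows (a simplified illustration with hypothetical helper names: the spatial search runs over the first two dimensions and each cube keeps the full third dimension; the grouped cubes are returned as columns rather than as a folded tensor):

```python
import numpy as np

def extract_similar_group(x, i0, j0, c, n2, win):
    # Stack the n2 cubes nearest in Euclidean distance to the reference
    # cube at (i0, j0) as columns; the reference itself comes first.
    ref = x[i0:i0 + c, j0:j0 + c, :].ravel()
    candidates = []
    for i in range(max(0, i0 - win), min(x.shape[0] - c, i0 + win) + 1):
        for j in range(max(0, j0 - win), min(x.shape[1] - c, j0 + win) + 1):
            v = x[i:i + c, j:j + c, :].ravel()
            candidates.append((np.sum((v - ref) ** 2), v))
    candidates.sort(key=lambda t: t[0])     # ascending Euclidean distance
    return np.stack([v for _, v in candidates[:n2]], axis=1)

rng = np.random.default_rng(6)
img = rng.random((16, 16, 4))
G = extract_similar_group(img, 5, 5, 3, n2=8, win=4)
```

The first column is the reference cube itself (distance zero), matching the ascending-distance ordering used in definition 4.1.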
Second, we adopt the t-log-Sp low-rank measure to regularize the low-rank properties of these non-local similar patch tensors. Combining the low-rank tensor regularization using the t-log-Sp low-rank measure defined in (4) with the data fidelity of the Poissonian image deblurring model (20), we obtain a non-local low-rank tensor model (21) whose objective combines the W-weighted Poisson data fidelity ⟨Hx − y log(Hx), 1⟩ with the regularization Σ_{i=1}^{L} η_i M_log,Sp(R_i(x)), where R_i(x) represents the constructed tensor for each reference cube, W is a diagonal matrix whose main diagonal entries count the cubes covering each pixel, ⟨•, •⟩ is the W-weighted inner product, and η_i > 0. This model preserves the structural correlation of the constructed tensors, thus obtaining better denoising results.
Lastly, we use variable splitting to reformulate the model. By introducing relaxation variables, problem (21) can be rewritten as a constrained problem. Then, by relaxing the equalities of the splitting variables, the constrained problem can be relaxed to an unconstrained problem (22), where τ > 0, α > 0 and η_i > 0. We call this model the non-local low-rank tensor model for 3D Poissonian image deblurring.

The full algorithm for 3D Poissonian image deblurring
To solve the proposed model (22) for 3D Poissonian image deblurring, we perform an alternating minimization algorithm with a proximal term as follows.
• Update of L_i: given x = x^k, we update L_i^(k+1) by solving a t-log-Sp minimization subproblem. We solve this subproblem by the t-IRSpM algorithm given in algorithm 3, using L_i^k as the initial solution.
• Update of h: given x = x^k, we update h^(k+1) by minimizing problem (22) with respect to h. This is a least squares problem with a closed-form solution.
• Update of x: given h = h^(k+1) and L_i = L_i^(k+1), the update of the estimated image x^(k+1) at the (k+1)th step is computed by minimizing problem (22) together with a proximal term with parameter β > 0. The update of x has a closed-form solution.
Since the update of h has a closed-form solution, the algorithm can be viewed as alternately updating the variables L_i and x. We call this algorithm the patch-based tensor logarithmic Sp minimization (TLSpM) algorithm and summarize it in algorithm 4; its input is y and the parameters τ, α, η_i, and β. The convergence analysis of this algorithm is presented in the next subsection.

Convergence analysis of the Patch-based TLSpM algorithm
We can prove that any accumulation point of the sequence {(x^k, h^k, {L_i^k})}, where {L_i^k} denotes {L_1^k, L_2^k, . . ., L_L^k}, generated by the patch-based TLSpM algorithm in algorithm 4 is a stationary point of the objective function of the proposed model (22).
For the sake of proving convergence results for algorithm 4, we assume without loss of generality that the t-IRSpM algorithm in line 4 performs q inner iterations. We denote the inner updates of L_i from the initial L_i^(qk) to the output L_i^(q(k+1)), which correspond to the initial L_i^k and the output L_i^(k+1) in line 4.
Proof. (i) By the update of L_i^(q(k+1)) via the t-IRSpM algorithm, the first descent inequality follows from theorem 3.13. By the updates of h^(k+1) and x^(k+1), the corresponding descent inequalities also hold. It follows immediately from these inequalities that equation (26) holds. (ii) Since Φ is bounded below and coercive, assertion (ii) holds. (iii) Summing (26) over k and taking K → ∞, together with (24), yields assertion (iii). (iv) Let (x*, h*, {L_i*}) be an accumulation point of the sequence. According to the t-IRSpM algorithm and theorem 3.13, the slice-wise stationarity conditions hold for s = 1, 2, . . ., n3 and i = 1, 2, . . ., L.

J Lu et al
According to the updates of h^(k+1) and x^(k+1), the corresponding optimality conditions also hold. Taking k ∈ K to infinity and using assertion (iii), we obtain assertion (iv).

Experimental results
In this section, we demonstrate the performance of the patch-based TLSpM algorithm (algorithm 4) for 3D Poissonian image deblurring. We compare it with other Poisson deblurring algorithms, including the RL [10], ARL [11], VST-BM4D [29] and PURE-LET [33] algorithms. We also test KBR denoising [61] for 3D Poissonian image deblurring using our proposed model and algorithmic scheme: in the resulting PoisDebl-KBR algorithm, the model is (22) with M_log,Sp replaced by the KBR measure. The experiments were implemented in MATLAB 2016b on a 64-bit Ubuntu 18.04 system and executed on an eight-core Intel Xeon E5-2640 v3 CPU at 2.6 GHz with 128 GB of RAM. The proposed algorithm was accelerated using parallel computing, as the estimation of each patch tensor can be computed in parallel.

Experiments on fluorescence microscope images
Poisson noise and blur degradation often occur simultaneously in fluorescence microscope images. Fluorescence microscopy is widely used in biological studies to analyze cell and tissue structures. Its resolution is affected by two factors: one is the blur caused by the Abbe diffraction limit, and the other is the noise that strongly depends on the signal. We use 3D fluorescence microscope images for testing. The three test images are 'Spherical-beads' (128 × 128 × 64), 'Micro-tubules' (128 × 128 × 64), and 'Pollen' (256 × 256 × 32). The 10th frontal slice of each original image is shown in figure 1. To simulate blurry Poissonian images, we adopt the procedure in [16]. First, the original image is scaled by Peak/I_max, where I_max is the maximum value of the original image and Peak is the peak value, set to 255. Then the image is convolved with one of three different 3D blur kernels obtained by a microscope PSF generator, namely one 3D Gibson & Lanni blur (G&L) [62] and two different 3D Gaussian blurs (G1 and G2). Lastly, Poisson noise is added to the blurry image.
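The simulation procedure can be sketched as follows (our own code; the PSF is passed as a full-size kernel and the convolution is circular, matching the periodic boundary assumption, and the kernel here is an illustrative delta rather than a microscope PSF):

```python
import numpy as np

def degrade(x, psf, peak=255.0, seed=0):
    # scale to the target peak, blur by circular convolution with the
    # PSF via the FFT, then draw a Poisson realization
    rng = np.random.default_rng(seed)
    xs = x * (peak / x.max())
    blurred = np.real(np.fft.ifftn(np.fft.fftn(xs) * np.fft.fftn(psf)))
    blurred = np.clip(blurred, 0.0, None)   # guard against tiny negatives
    return rng.poisson(blurred).astype(float)

rng = np.random.default_rng(7)
x = rng.uniform(0.2, 1.0, size=(16, 16, 8))
psf = np.zeros_like(x)
psf[0, 0, 0] = 1.0                          # delta PSF: no blur
y = degrade(x, psf, peak=100.0)
```

With a delta PSF the blur step is the identity, so the output is simply a Poisson realization of the scaled image, whose mean matches the scaled intensities.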
For the proposed patch-based TLSpM algorithm, we set the search window to 35 × 35 and the number of non-local patches for each group to 60. The cube size is 7 × 7 × 7 for the G&L blur kernel and 6 × 6 × 14 for the G1 and G2 blur kernels. The parameters are p = 0.95, β = 0.0001 and µ = 1.0001, and the rest are shown in table 1. To achieve better performance, the cube matching R_i is updated for a certain number of iterations and then remains unchanged afterward. In the experiments, the peak signal-to-noise ratio (PSNR) [63] and the structural similarity index measure (SSIM) [64] are used to measure the quality of the restored images. In particular, the PSNR value is defined as PSNR = 10 log_10(Peak^2 / MSE(x*, x)), where x* is the restored image, x is the original image, and MSE denotes the mean squared error; the SSIM value is defined in [64].
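The PSNR computation can be sketched as follows (assuming the standard peak-based definition; the exact normalization in the paper's formula may differ):

```python
import numpy as np

def psnr(x_rest, x_orig, peak=255.0):
    # PSNR = 10 log10(peak^2 / MSE)
    mse = np.mean((x_rest - x_orig) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

x = np.zeros((8, 8, 4))
```

For a uniform error of 1 intensity level against a peak of 255, the PSNR is 20 log10(255) ≈ 48.13 dB, a convenient sanity check.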
The PSNR and SSIM values of the restored images obtained by the different algorithms are shown in table 2, with the best PSNR and SSIM values for each case marked in bold. The proposed patch-based TLSpM algorithm achieves the best numerical values for most of the test cases. For example, for the 'Micro-tubules' image with the G2 blur kernel, the PSNR value of the proposed algorithm exceeds that of the state-of-the-art PURE-LET algorithm by 1.18 dB. The PoisDebl-KBR method, which is a modification of our proposed model, produces very competitive numerical results, with PSNR values on average only 0.17 dB lower than those of our patch-based TLSpM.
To evaluate the visual quality of the restored images obtained by the different algorithms, we compare several selected slices of the restored 3D images in figures 2-4. In figure 2, for the 'Spherical-beads' image with the G&L blur kernel, the proposed patch-based TLSpM algorithm performs best at preserving the spherical structure of the beads and separating distinct beads. In contrast, the RL and ARL algorithms are not able to remove the Poisson noise and restore the shape of the beads; VST-BM4D and PURE-LET fail to separate distinct beads when they are too close to each other; and the PoisDebl-KBR method can separate the beads, but the gaps between the beads are not as clear as with our proposed method, even though the PSNR value of the PoisDebl-KBR method exceeds that of our proposed method by 0.03 dB. In figure 3, the lateral slice of the original 'Micro-tubules' image contains many luminous points. The RL and ARL algorithms cannot recognize the luminous points, while the VST-BM4D, PURE-LET, PoisDebl-KBR and proposed algorithms can identify most of them. In fact, the proposed algorithm restores images with higher accuracy and fewer artifacts than the VST-BM4D, PURE-LET and PoisDebl-KBR algorithms, as shown in the zoomed-in image of figure 3. Lastly, in figure 4, for the 'Pollen' image with the G2 blur kernel, the proposed and PoisDebl-KBR algorithms recover the pattern of the cell wall, while the RL and ARL algorithms fail to remove the noise on the cell wall, and the state-of-the-art VST-BM4D and PURE-LET algorithms produce blurry cell walls without details.
All in all, the proposed patch-based TLSpM algorithm outperforms the competing algorithms in removing Poisson noise and recovering details from blurry images.

Analysis on the parameter p
The denoising performance of the proposed patch-based TLSpM method depends on the parameter p used in the t-log-Sp low-rank measure. We conduct a sensitivity analysis of p using the 'Spherical-beads' image with the G2 blur kernel. Figure 5(a) plots the PSNR value for p ∈ (0, 1) against the number of iterations. We observe that for p ∈ (0, 0.65), the PSNR value decreases as p decreases, while for p ∈ [0.65, 1), the differences in PSNR are not very significant. To analyze p ∈ [0.65, 1) further, we use the PSNR value for p = 0.8 as a reference and compute the difference between the PSNR value for each p and this reference. Figure 5(b) plots the PSNR difference against the number of iterations. With fewer than 100 iterations, the PSNR differences are not significant; with 100-300 iterations, p = 0.8 performs best; with 500 iterations, p = 0.95 performs best. In summary, the proposed method achieves satisfactory performance for p ∈ [0.65, 1). When early stopping is preferred, one may choose p = 0.8; otherwise, one may choose p = 0.95.

Conclusion
In this paper, we first defined a new t-log-Sp low-rank measure for tensors. We then proposed a patch-based non-local low-rank approach, called patch-based TLSpM, for removing blur and Poisson noise. The experimental results show that this algorithm is effective in improving the quality of 3D fluorescence microscope images, and that it is superior to existing methods in terms of visual quality and quantitative quality measures.

Remark 3.3. If we initialize X^0 = Y, the IRSpM algorithm can be simplified and requires only one SVD operation. Suppose Y = UΣV^T is the SVD of Y, where Σ = Diag(σ(Y)). Then the sequence {X^k} generated by the IRSpM algorithm in (11) can be computed by updating only the singular values, keeping U and V fixed.

Figure 5. Sensitivity analysis of the parameter p. (a) Plot of the PSNR value for p ∈ (0, 1) vs the number of iterations; (b) plot of the PSNR difference for p ∈ [0.65, 1) vs the number of iterations.

Table 1. Parameter settings for the patch-based TLSpM algorithm.