
Stability results for backward time-fractional parabolic equations


Published 19 November 2019 © 2019 IOP Publishing Ltd
Citation: Dinh Nho Hào et al 2019 Inverse Problems 35 125006. DOI: 10.1088/1361-6420/ab45d3


Abstract

Optimal order stability estimates of Hölder type for the backward Caputo time-fractional abstract parabolic equations are obtained. This ill-posed problem is regularized by a non-local boundary value problem method with a priori and a posteriori parameter choice rules which guarantee error estimates of Hölder type. Numerical implementations are presented to show the validity of the proposed scheme.


1. Introduction

Let H be a Hilbert space with inner product $\left\langle \cdot,\cdot\right\rangle $ and norm $\|\cdot\|$ , $ A: D(A)\subset H\to H$ be a self-adjoint closed operator on H such that  −A generates a compact contraction semi-group $\{S(t)\}_{t\geqslant 0}$ on H. Assume that A admits an orthonormal eigenbasis $\{\phi_i\}_{i\geqslant 1}$ in H, associated with the eigenvalues $\{\lambda_i\}_{i\geqslant 1}$ such that $0<\lambda_1\leqslant\lambda_2\leqslant\cdots$ and $\lambda_i\to\infty$ as $i\to\infty$ .

For $\gamma\in (0,1)$ , consider the backward time-fractional parabolic equation

$$\frac{\partial^\gamma u(t)}{\partial t^\gamma}+Au(t)=0,\quad 0<t<T,\qquad \|u(T)-f\|\leqslant\varepsilon, \tag{1.1}$$

where

$$\frac{\partial^\gamma u(t)}{\partial t^\gamma}=\frac{1}{\Gamma(1-\gamma)}\int_0^t \frac{u'(s)}{(t-s)^\gamma}\,{\rm d}s \tag{1.2}$$

is the Caputo derivative [7, 9] and $\Gamma(\cdot)$ is Euler's Gamma function. Since the first work [8] devoted to the backward time-fractional diffusion equation, several papers on backward time-fractional parabolic equations have been published: the mollification method [12], the non-local boundary value problem method [16–18], Tikhonov regularization [1, 6, 13–15].
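As a quick sanity check on definition (1.2), the Caputo derivative can be approximated by direct quadrature. The sketch below (Python; the function names are ours, and the midpoint rule is chosen to avoid the integrable singularity at $s=t$) verifies the classical fact that for $u(t)=t$ the Caputo derivative of order $\gamma$ equals $t^{1-\gamma}/\Gamma(2-\gamma)$.

```python
import math

def caputo_derivative(u_prime, t, gamma, n=200_000):
    """Midpoint-rule approximation of the Caputo derivative (1.2):
    (1/Gamma(1-gamma)) * integral_0^t u'(s) (t-s)^(-gamma) ds.
    The midpoint rule avoids evaluating the singular kernel at s = t."""
    h = t / n
    total = sum(u_prime((i + 0.5) * h) * (t - (i + 0.5) * h) ** (-gamma)
                for i in range(n))
    return h * total / math.gamma(1.0 - gamma)

# For u(t) = t, i.e. u'(s) = 1, the exact Caputo derivative of order gamma
# is t**(1 - gamma) / Gamma(2 - gamma); at t = 1, gamma = 1/2 this is 2/sqrt(pi).
approx = caputo_derivative(lambda s: 1.0, t=1.0, gamma=0.5)
exact = 1.0 / math.gamma(1.5)
```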

In this paper, we first establish stability estimates of Hölder type for problem (1.1) and then we apply a non-local boundary value problem method [2] to it. Namely, we regularize the ill-posed problem (1.1) by the non-local boundary value problem

$$\frac{\partial^\gamma v_\alpha(t)}{\partial t^\gamma}+Av_\alpha(t)=0,\quad 0<t<T,\qquad \alpha A^k v_\alpha(0)+v_\alpha(T)=f, \tag{1.3}$$

where $0<\alpha<1$ and $k=0, 1, 2,...$ is fixed. A priori and a posteriori parameter choice rules are suggested which yield error estimates for $u(t)$ of Hölder type for $t\in[0,T]$ .

In [2–5], we used the non-local boundary value problem method with k  =  0 to regularize backward parabolic equations and obtained error estimates of optimal order. However, for backward time-fractional parabolic equations, we note that if we use k  =  0, the order of the error estimate does not exceed ${1}/{2}$ . Therefore, we allow $k=0,1,2,...$ in order to obtain a higher theoretical convergence rate of the regularizing method. The numerical performance of the regularizing schemes for different values of k is also checked.

To compare our results with the previous ones, for $p>0$ let us denote

$$D(A^{\,p})=\Big\{v\in H:\ \sum_{n=1}^\infty\lambda_n^{2p}\left\langle v,\phi_n\right\rangle^2<\infty\Big\}$$

and the norm

$$\|v\|_p=\Big(\sum_{n=1}^\infty\lambda_n^{2p}\left\langle v,\phi_n\right\rangle^2\Big)^{1/2}.$$

In the case $H=L^2(\Omega)$ , where $\Omega$ is a bounded domain in $\mathbb{R}^d$ with sufficiently smooth boundary $\partial\Omega$ , the expression ${{\mathcal A}}$ is of the form:

$${{\mathcal A}}u=-\sum_{i,j=1}^d \frac{\partial}{\partial x_i}\left(a_{ij}(x)\frac{\partial u}{\partial x_j}\right)+c(x)u \tag{1.4}$$

with $a_{ij} = a_{ji} \in C^1(\overline{\Omega})$ , $i,j = 1, \ldots, d$ , $c \in C(\overline{\Omega})$ , $c \geqslant 0$ , and $\sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j \geqslant \nu \sum_{i=1}^d\xi_i^2$ for all $x \in \overline{\Omega}$ , $\xi\in\mathbb{R}^d$ and some $\nu > 0$ . Then the operator A can take the form $Au = {{\mathcal A}}u$ with $D(A)=H^2(\Omega)\cap H^1_0(\Omega)$ ,

and we have [10] $D(A^{\,p})\subset H^{2p}(\Omega), p>0$ , $D(A^{\frac{1}{2}})=H_0^1(\Omega)$ .

For problems similar to (1.1), Liu and Yamamoto [8] used the quasi-reversibility method, Yang and Liu [19] used the Fourier method, Yang and Liu [18] applied the non-local boundary value problem method supposing that $\|u(0)\|_1$ is bounded, while Wang and Liu [12] studied the data regularization method assuming that $\|u(0)\|_p$ is bounded with p  =  1 or p  =  1/2. Furthermore, these authors applied an a priori parameter choice rule in their approaches. Wang, Zhou and Wei [16] and Wei and Wang [17] applied variants of the non-local boundary value problem method to regularizing (1.1): in [16] the authors studied a posteriori parameter choice rules, and in [17] both a priori and a posteriori parameter choice rules are investigated. In this paper we also study the non-local boundary value problem method for both a priori and a posteriori parameter choice rules, assuming that $\|u(0)\|_p$ is bounded for an arbitrary positive constant p . However, the order of our error estimates is better than that of [16], and our strategy for the a posteriori method is much simpler than those of [16] and [17]. Indeed, the order of our error estimates can be greater than ${2}/{3}$ , while that in [12, 16, 18] is not greater than ${1}/{2}$ (since these authors only considered k  =  0). Furthermore, the order of the error estimate in [17] is not greater than ${2}/{3}$ for all p  >  0 for the a priori parameter choice rule and is not greater than ${1}/{2}$ for the a posteriori parameter choice rule (since these authors only considered k  =  1).

This paper is organized as follows: in the next section we present our stability estimate for (1.1). In section 3 we describe our regularization method with the error estimates, the proofs of which are given in section 4. Finally, we present some numerical implementations of the proposed regularizing scheme in section 5.

2. Stability estimate

Denote by $E_{\gamma, \beta}(z)$ the Mittag–Leffler function [7, 9]:

$$E_{\gamma, \beta}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(\gamma k+\beta)},\quad z\in\mathbb{C}. \tag{2.1}$$
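For moderate arguments, definition (2.1) can be evaluated directly by truncating the series; the following sketch (Python; `terms` is an ad hoc truncation level, not from the paper) suffices for sanity checks, though dedicated algorithms are needed for large $|z|$ .

```python
import math

def mittag_leffler(z, gamma, beta=1.0, terms=120):
    """Truncated power series E_{gamma,beta}(z) = sum_k z^k / Gamma(gamma*k + beta).

    A minimal sketch for moderate |z|; for large arguments the alternating
    series cancels catastrophically and other algorithms must be used.
    """
    return sum(z ** k / math.gamma(gamma * k + beta) for k in range(terms))
```

Since $E_{1,1}(z)={\rm e}^z$ , the exponential function is recovered as a special case.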

For $\gamma \in (0,1)$ , $a\in H$ , consider the forward time-fractional parabolic equation:

$$\frac{\partial^\gamma u(t)}{\partial t^\gamma}+Au(t)=0,\quad 0<t<T,\qquad u(0)=a. \tag{2.2}$$

Definition 1. The Caputo derivative $\frac{\partial^\gamma u}{\partial t^\gamma}$ is defined by

$$\frac{\partial^\gamma u(t)}{\partial t^\gamma}=\frac{1}{\Gamma(1-\gamma)}\int_0^t \frac{u'(s)}{(t-s)^\gamma}\,{\rm d}s.$$

Definition 2. Let $a\in H$ . The function $u: [0,T] \to H$ is called a solution to problem (2.2) if $u\in C^1((0,T),H)\cap C([0,T],H)$ , $u(t)\in D(A)$ for all $t\in(0,T)$ and (2.2) holds.

Theorem 1. Problem (2.2) admits a unique solution, which can be represented in the form:

$$u(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle a,\phi_n\right\rangle \phi_n. \tag{2.3}$$

To prove theorem 1, we need the following auxiliary results.

Lemma 1 ([16]). For any $\lambda_n$ satisfying $\lambda_n \geqslant \lambda_1>0$ , there exist positive constants $\overline{C}_1,\overline{C}_2$ depending on $ \gamma, T, \lambda_1$ such that

$$\frac{\overline{C}_1}{\lambda_n}\leqslant E_{\gamma,1}(-\lambda_n T^\gamma)\leqslant\frac{\overline{C}_2}{\lambda_n}.$$

Lemma 2. Let $\gamma\in (0,1)$ , $\lambda>0$ and t  >  0. We have

(a) Equation (2.4)

(b) Equation (2.5)

Part (a) can be found in [10]. The proof of part (b) is straightforward and we omit it.

Now we are in a position to prove theorem 1. First, we verify that $u(t)$ defined by (2.3) is a solution to problem (2.2).

We prove that $u(t)\in D(A)$ for all $t\in(0,T).$ For each t  >  0, using lemma 1, we conclude that there exist positive constants $\overline{C}_3,\overline{C}_4$ depending on $ \gamma, t, \lambda_1$ such that $ \frac{\overline{C}_3}{\lambda_n}\leqslant E_{\gamma,1}(-\lambda_nt^\gamma)\leqslant\frac{\overline{C}_4}{\lambda_n}. $ This implies that $ \left\langle u(t),\phi_n\right\rangle ^2= \left(E_{\gamma,1}(-\lambda_nt^\gamma)\right){}^2\left\langle a,\phi_n\right\rangle ^2\leqslant \left(\frac{\overline{C}_4}{\lambda_n}\right){}^2\left\langle a,\phi_n\right\rangle ^2,~\forall n\in\mathbb{N}^{*}$ with $\mathbb{N}^* = \mathbb{N}\setminus\{0\}$ . Consequently, it follows that $ \sum_{n=1}^\infty\lambda_n^2\left\langle u(t),\phi_n\right\rangle ^2\leqslant \sum_{n=1}^\infty(\overline{C}_4){}^2\left\langle a,\phi_n\right\rangle ^2 =(\overline{C}_4){}^2\|a\|^2<\infty. $ Therefore, $u(t)\in D(A)$ for all $t\in(0,T).$

Since $E_{\gamma, 1}(0)=1$ , we have $u(0)=\sum_{n=1}^\infty E_{\gamma, 1}(0)\left\langle a,\phi_n\right\rangle \phi_n=\sum_{n=1}^\infty \left\langle a,\phi_n\right\rangle \phi_n=a$ . Let

Equation (2.6)

We have

Equation (2.7)

Using lemma 2 and (2.6), we get

This implies that

Equation (2.8)

From (2.7) and (2.8) and the closedness of the operator A, we obtain

Now we prove the continuity of $u(t)$ at t  =  0, i.e.

Equation (2.9)

Indeed, since $a\in H$ , there exist a positive constant M and a positive integer $n_\delta$ such that $ \|a\|^2\,=$ $\sum_{n=1}^\infty\left\langle a,\phi_n\right\rangle ^2 \leqslant M $ and $ \sum_{n=n_\delta}^\infty\left\langle a,\phi_n\right\rangle ^2 \leqslant \delta^2/4. $ Furthermore, since $\lim\limits_{t\rightarrow0}E_{\gamma, 1}(t)\,=$ $E_{\gamma, 1}(0)=1$ , there exists a constant $\delta_1=\delta_1(\delta,M)$ such that $|E_{\gamma, 1}\left(-t\right)-1|\leqslant \frac{\delta}{2(1+M)}, \forall t \leqslant \delta_1.$ If $0 < t \leqslant \left({\delta_1}/{\lambda_{n_\delta}}\right){}^{\frac{1}{\gamma}}$ , then $\lambda_nt^\gamma\leqslant \delta_1, \forall n < n_\delta$ . Consequently,

This implies $ \lim\limits_{t\rightarrow0}\|u(t)-u(0)\|=0. $

To prove the uniqueness of a solution to problem (2.2), we represent $ u(t)=\sum_{n=1}^\infty \left\langle u(t), \phi_n\right\rangle \phi_n:=\sum_{n=1}^\infty u_n(t)\phi_n. $ Plugging this into (2.2), we get $\partial^\gamma u_n/{\partial t^\gamma} + \lambda_n u_n=0, 0<t<T$ , and $u_n(0)=\left\langle a,\phi_n\right\rangle $ . Hence, following theorem 4.3, page 231 in [7], we obtain $ u_n(t)=E_{\gamma,1}(-\lambda_nt^\gamma)u_n(0)=E_{\gamma,1}(-\lambda_nt^\gamma)\left\langle a,\phi_n\right\rangle, $ which yields (2.3).

Remark 1. We note that in the case $1 < \gamma < 2$ , the equation $\partial^\gamma u/\partial t^\gamma + Au = 0$ requires two initial conditions $u(0)$ and $u_t(0)$ (see, e.g. [10]) and is therefore completely different from problem (2.2). We do not consider it in this paper.

Theorem 2 (Stability estimate). Suppose that $u(t)$ is a solution of the equation $\frac{\partial^\gamma u}{\partial t^\gamma}+Au=0$ , $0<t<T$ ,

satisfying $\|u(T)\|\leqslant \varepsilon$ and $\|u(0)\|_p\leqslant E$ for some positive constants $E, p$ . Then for any $0\leqslant q<p$ there exists a constant C such that

$$\|u(t)\|_q\leqslant C\,\varepsilon^{\frac{p-q}{p+1}}E^{\frac{q+1}{p+1}},\quad \forall t\in[0,T]. \tag{2.10}$$

Proof of theorem 2. We have

Using the Hölder inequality, we obtain $ \|u(0)\|_q^2 \leqslant \left(\sum_{n=1}^\infty\lambda_n^{2p} \left\langle u(0), \phi_n\right\rangle ^2\right){}^{\frac{q+1}{p+1}} $ $ \left(\sum_{n=1}^\infty\lambda_n^{-2}\left\langle u(0), \phi_n\right\rangle ^2\right){}^{\frac{p-q}{p+1}}. $ From (2.3), we have $\left\langle u(T), \phi_n\right\rangle =E_{\gamma,1}(-\lambda_nT^\gamma)\left\langle u(0), \phi_n\right\rangle $ . Hence, $\left\langle u(0), \phi_n\right\rangle =\frac{\left\langle u(T), \phi_n\right\rangle }{E_{\gamma,1}(-\lambda_nT^\gamma)}.$ Therefore, $ \|u(0)\|_q^2\leqslant \|u(0)\|_p^{\frac{2(q+1)}{p+1}}\left(\sum_{n=1}^\infty\lambda_n^{-2}\frac{\left\langle u(T), \phi_n\right\rangle ^2} {E_{\gamma,1}^2(-\lambda_nT^\gamma)}\right){}^{\frac{p-q}{p+1}}. $ Using lemma 1, we have

$$\|u(0)\|_q^2\leqslant \|u(0)\|_p^{\frac{2(q+1)}{p+1}}\Big(\overline{C}_1^{-2}\,\|u(T)\|^2\Big)^{\frac{p-q}{p+1}}. \tag{2.11}$$

From (2.11) and the assumptions $\|u(T)\|\leqslant\varepsilon$ and $\|u(0)\|_p\leqslant E$ , there exists a constant C  >  0 such that $\|u(0)\|_q\leqslant C\varepsilon^{\frac{p-q}{p+1}}E^{\frac{q+1}{p+1}}.$

Since $u(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0), \phi_n\right\rangle \phi_n$ and $ 0\leqslant E_{\gamma,1}(-\lambda_n t^\gamma)\leqslant 1$ , we obtain $\|u(t)\|_q\leqslant\|u(0)\|_q$ for all $t\in[0,T]$ .

The theorem is proved. □

Remark 2. In theorem 2, the estimate at t  =  0,

$$\|u(0)\|_q\leqslant C\,\varepsilon^{\frac{p-q}{p+1}}E^{\frac{q+1}{p+1}}, \tag{2.12}$$

is of optimal order. Indeed, setting $v=\sum_{n=1}^\infty\lambda_n^{q}\left\langle u(0), \phi_n\right\rangle \phi_n$ , we have

From lemma 1, we have $\frac{\lambda_n}{\overline{C}_2}\leqslant \frac{1}{E_{\gamma,1}(-\lambda_n T^\gamma)}\leqslant \frac{\lambda_n}{\overline{C}_1}.$ This implies that $ \frac{\lambda_n^{1+q}}{\overline{C}_2} \leqslant \frac{\lambda_n^q}{E_{\gamma,1}(-\lambda_n T^\gamma)}\leqslant \frac{\lambda_n^{1+q}}{\overline{C}_1}.$ Therefore, the condition $\|u(0)\|_{p}^2\leqslant E^2$ is equivalent to the condition

Equation (2.13)

Since $u(t)$ is the solution of problem $\frac{\partial^\gamma u}{\partial t^\gamma}+Au=0$ , we have

Let us formulate this equation as an operator equation

Then B is a continuous linear operator. Furthermore, we can easily check that B is self-adjoint, i.e. B*  =  B. This implies that

Therefore, condition (2.13) is equivalent to $\|(B^*B){}^{-\frac{p-q}{2(q+1)}}v\|^2\leqslant E_1^2.$ With $a\geqslant \|B^*B\|$ we define $\psi: (0,a]\rightarrow \mathbb{R}_+$ by $\psi(\lambda)=\lambda^{(\,p-q)/(q+1)}$ and $\rho(\lambda)=\lambda\psi^{-1}(\lambda)$ . Then $\psi(\lambda)$ is strictly monotonically increasing on $(0,a]$ and $\rho(\lambda)=\lambda^{(\,p+1)/(\,p-q)}$ is convex on $(0,a]$ . Thus, the functions $\psi(\lambda)$ and $\rho(\lambda)$ satisfy assumption 1.1 (p 379) in [11]. Therefore, by theorem 2.1 in [11], under condition (2.13), the estimate of optimal order is

Since $\|v\|=\|u(0)\|_q$ , estimate (2.12) is of optimal order.

Remark 3. In [15] Wang, Wei and Zhou analyzed the optimal error bound for problem (1.1) with any p  >  0, but only for the problem in one-dimensional space and with q  =  0. We obtain a stability estimate of optimal order for the general problem with any p  >  0 and $0\leqslant q<p$ . Furthermore, our proof is simpler than that of [15].

Remark 4. 

  • 1.  
Liu and Yamamoto in [8] and Yang and Liu in [18] considered the case p  =  1 and q  =  0. Wang and Liu in [12] considered the cases p  =  1, q  =  0 and p  =  1/2, q  =  0.
  • 2.  
Our results are valid for any p  >  0 and $0\leqslant q<p$ .
  • 3.  
If p  >  1 and q  =  0, then the Hölder exponent $\frac{p}{p+1}$ in estimate (2.10) is strictly greater than $\frac{1}{2}$ .

Remark 5. Problem (1.1) is well-posed for t  >  0 and ill-posed at t  =  0. In fact, following theorem 2, we have $ u(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0),\phi_n\right\rangle \phi_n = $ $\sum_{n=1}^\infty\frac{E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(T),\phi_n\right\rangle \phi_n}{E_{\gamma,1}(-\lambda_n T^\gamma)}. $ For t  >  0, due to lemma 2, we have $0 < \frac{E_{\gamma,1}(-\lambda_n t^\gamma)}{E_{\gamma,1}(-\lambda_n T^\gamma)} \leqslant \frac{\overline{C}_2}{\overline{C}_1}\left(\frac{T}{t}\right){}^\gamma$ , which shows that problem (1.1) is well-posed for t  >  0. At t  =  0, we have $u(0)=\sum_{n=1}^\infty\frac{\left\langle u(T),\phi_n\right\rangle \phi_n}{E_{\gamma,1}(-\lambda_n T^\gamma)}$ . Due to lemma 1, $\frac{1}{E_{\gamma,1}(-\lambda_n T^\gamma)}$ behaves like $\lambda_n$ as $n\rightarrow\infty$ . Therefore, problem (1.1) is ill-posed at t  =  0.
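This amplification can be checked numerically. For $\gamma=1/2$ one has the classical identity $E_{1/2,1}(-x)={\rm e}^{x^2}\,{\rm erfc}(x)$ for $x\geqslant 0$ ; with $T=1$ and the model eigenvalues $\lambda_n=n^2$ (those of section 5), the factor $1/E_{\gamma,1}(-\lambda_n T^\gamma)$ grows like $\lambda_n$ , asymptotically like $\sqrt{\pi}\,\lambda_n$ (Python sketch; the variable names are ours):

```python
import math

def E_half(x):
    """E_{1/2,1}(-x) = exp(x^2) * erfc(x) for x >= 0 (classical identity)."""
    return math.exp(x * x) * math.erfc(x)

# Amplification factors 1/E_{1/2,1}(-lambda_n T^gamma) for lambda_n = n^2, T = 1.
factors = [1.0 / E_half(n * n) for n in range(1, 6)]
```

The unbounded growth of these factors is exactly the instability mechanism at t  =  0: a high-frequency perturbation of $u(T)$ is multiplied by roughly $\sqrt{\pi}\,\lambda_n$ in $u(0)$ .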

3. Regularization

In this section, we regularize the ill-posed problem (1.1) by the well-posed non-local boundary value problem

Equation (3.1)

where $0<\alpha<1$ and k is fixed, $k=0,1,2,...$ . We shall suggest a priori and a posteriori methods for choosing the regularization parameter $\alpha$ which yield error estimates of Hölder type.

3.1. A priori parameter choice rule

Theorem 3. Problem (3.1) is well-posed. For solutions $u(t)$ of problem (1.1) satisfying

Equation (3.2)

the following statements hold:

  • (i)  
    If 0 < p < k + 1, then with $\alpha=\left(\frac{\varepsilon}{E}\right){}^{\frac{k+1}{p+1}}$ , there exists a constant C1 such that
  • (ii)  
    If $p \geqslant k+1$ , then with $\alpha=\left(\frac{\varepsilon}{E}\right){}^{\frac{k+1}{k+2}}$ , there exists a constant C2 such that

Remark 6. We note that $\frac{p}{p+1}>\frac{2}{3}$ when p   >  2 and $\frac{k+1}{k+2}>\frac{2}{3}$ when k  >  1. Therefore the order of our error estimates is greater than $\frac{2}{3}$ when 2  <  p   <  k  +  1 or $p\geqslant k+1>2$ .

3.2. A posteriori parameter choice rule

Theorem 4. Let $k\in \mathbb{N}$ and $\beta\in (0,1)$ . Suppose that $0<\varepsilon^\beta < \|f\|$ . Choose $\tau>1$ such that $0<\tau\varepsilon^\beta\leqslant\|f\|$ . Then for solutions $u(t)$ of problem (1.1) satisfying (3.2) the following statements hold:

  • (i)  
    If k  >  0 and $\varepsilon$ is sufficiently small, then there exists a unique number $\alpha_\varepsilon>0$ such that
    $$\|v_{\alpha_\varepsilon}(T)-f\|=\tau\varepsilon^{\beta}. \tag{3.3}$$
    Further, there exist constants $C_3, C_4 $ such that, for all $t\in[0,T],$
  • (ii)  
    If k  =  0, then there exists a unique number $\alpha_{\varepsilon}>0$ such that
    Equation (3.4)
    Further, there exists a constant C5 such that

Remark 7. We note that $\frac{p}{p+1}>\frac{2}{3}$ when p   >  2 and $\frac{k}{k+1}>\frac{2}{3}$ when k  >  2. Therefore the order of our error estimates is greater than $\frac{2}{3}$ when 2  <  p   <  k or $p\geqslant k>2$ .

Remark 8. Our results in theorems 3 and 4 are better than those of Wang, Zhou and Wei [16] and Wei and Wang [17].

  • 1.  
    Wang, Zhou and Wei [16] proposed an a posteriori parameter choice rule by solving the equation
    Equation (3.5)
    and got the convergence rate
    Equation (3.6)
    Since $\frac{p}{p+2}<\frac{1}{2}$ for all $p\in (0,2)$ , the order of the error estimate in (3.6) is not greater than ${1}/{2}$ for all p  >  0. The order of our error estimates in theorem 3 and part (i) of theorem 4 is better than that of (3.6).
  • 2.  
    Wei and Wang in [17] proposed an a priori parameter choice rule and got a convergence rate of the form
    Equation (3.7)
    Since $\frac{p}{p+2}<\frac{2}{3}$ for all $p\in (0,4)$ , the order of the error estimate in (3.7) does not exceed $\frac{2}{3}$ for all p   >  0.
  • 3.  
    Wei and Wang in [17] also proposed an a posteriori parameter choice rule by solving the equation (3.5) and got the convergence rate
    Equation (3.8)
    The error estimates in theorem 3 and part (i) of theorem 4 are better than those of (3.7) and (3.8).
  • 4.  
    Our a posteriori methods (3.3) and (3.4) are much simpler than (3.5).
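The comparisons in remarks 6–8 reduce to elementary inequalities between the Hölder exponents; a one-line check (Python):

```python
# Our exponent p/(p+1) versus the exponent p/(p+2) of [12, 16, 18] (remark 8).
for p in [0.5, 1, 2, 3, 5, 10]:
    ours, theirs = p / (p + 1), p / (p + 2)
    assert ours > theirs                  # strictly better for every p > 0 tested
    assert (ours > 2 / 3) == (p > 2)      # p/(p+1) exceeds 2/3 exactly when p > 2
```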

Remark 9. Our results in theorems 3 and 4 are better than those of Al-Jamal [1] and of Wang, Wei and Zhou [14, 15]. Indeed, Al-Jamal applied the Tikhonov regularization method but proved no convergence rate. Wang, Wei and Zhou [15] regularized problem (1.1) by the Tikhonov method, but only for the problem in one-dimensional space and with the hypothesis $\sum_{n=1}^\infty(1+\lambda_n){}^{\,p} \left\langle u(0), \phi_n\right\rangle ^2<E^2$ ; they proposed an a priori parameter choice rule and got a convergence rate of the form

Equation (3.9)

Since $\frac{p}{p+2}<\frac{2}{3}$ for all $p\in (0,4)$ , the order of the error estimate in (3.9) cannot exceed $\frac{2}{3}$ for all p   >  0.

Wang, Wei and Zhou in [15] also proposed an a posteriori parameter choice rule and got a convergence rate of the form

Equation (3.10)

The order of the error estimate (3.10) does not exceed $\frac{1}{2}$ for all p   >  0.

Our error estimates in theorem 3 and part (i) of theorem 4 are better than (3.9) and (3.10).

In [14] Wang, Wei and Zhou generalized their method to multi-dimensional problems and obtained the upper bounds $ \widetilde{C}\varepsilon^{\frac{p}{p+2}}E^{\frac{2}{p+2}}$ if $0<p<4$ and $\widetilde{C}\varepsilon^{\frac{2}{3}}E^{\frac{1}{3}}$ if $p \geqslant 4$ for the a priori method (note that $\frac{p}{p+2}<\frac{2}{3}$ if 0  <  p  <  4), and $\widetilde{C}\varepsilon^{\frac{p}{p+2}}E^{\frac{2}{p+2}}$ if $0<p<2$ and $ \widetilde{C}\varepsilon^{\frac{1}{2}}E^{\frac{1}{2}}$ if $p\geqslant 2$ for the a posteriori method (note that $\frac{p}{p+2}<\frac{1}{2}$ if 0  <  p  <  2), which are weaker than ours.

We also note that for $k \geqslant p$ theorems 3 and 4 give the convergence rate $E^{\frac{1}{p+1}}\varepsilon^{\frac{p}{p+1}}$ which is of optimal order.

3.3. Convergence rate of the regularizing solution to the exact one in norm $ {\|\cdot\|_{q}} $ with 0  <  q  <  p

Theorem 5. Let $k\in \mathbb{N}, k\geqslant q>0$ . If $u(t)$ is a solution of problem (1.1) satisfying (3.2) with p   >  q  >  0 and $v_\alpha(t)$ is the solution of problem (3.1) then with $\alpha=\left(\frac{\varepsilon}{E}\right){}^{\frac{k+1}{p+1}}$ there exist constants $C_6,C_7 $ such that

$\forall t\in[0,T].$

Remark 10. Our method is of optimal order in case $k\geqslant p-q-1$ .

Theorem 6. Suppose that $u(t)$ is a solution of problem (1.1) satisfying (3.2) with p   >  q  >  0. If $\alpha_\varepsilon>0$ satisfying (3.3), then there exist constants $C_8, C_9 $ such that

$\forall t\in[0,T].$

Remark 11. Our method is of optimal order in case $k\geqslant p$ .

4. Proofs of the main results

4.1. Proof of theorem 3

First, we present some auxiliary results.

Lemma 3 (Young's inequality). If $a,b$ are nonnegative numbers and $m,n$ are positive numbers such that $\frac{1}{m}+\frac{1}{n}=1$ , then $ ab \leqslant \frac{a^m}{m}+ \frac{b^n}{n}. $

Lemma 4. Problem (3.1) admits a unique solution

$$v_\alpha(t)=\sum_{n=1}^\infty\frac{E_{\gamma,1}(-\lambda_n t^\gamma)}{\alpha\lambda_n^k+E_{\gamma,1}(-\lambda_n T^\gamma)}\left\langle\,f,\phi_n\right\rangle \phi_n. \tag{4.1}$$

The proof of this lemma is straightforward and we omit it.

Lemma 5. If $v_\alpha(t)$ is the solution of problem (3.1), then there exists a constant $\overline{C}_5$ such that

Proof. By lemma 4, we have $ \|v_\alpha(0)\|^2 =\sum_{n=1}^\infty\left(\frac{\left\langle\,f, \phi_n\right\rangle }{\alpha\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)}\right){}^2. $

For k  >  0, using lemmas 1 and 3, we obtain

Equation (4.2)

Therefore, with $k\in \mathbb{N}$ we have

Equation (4.3)

where $\overline{C}_6=\min\{1,C_1^{\frac{k}{k+1}}\}.$ This implies that $ \|v_\alpha(0)\|\leqslant \frac{1}{\overline{C}_6}\alpha^{-\frac{1}{k+1}}\|f\|. $

Since $v_\alpha(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle v_\alpha(0), \phi_n\right\rangle \phi_n$ and $ 0 \leqslant E_{\gamma,1}(-\lambda_n t^\gamma) \leqslant 1$ , with $\overline{C}_5=\frac{1}{\overline{C}_6}$ we obtain $ \|v_\alpha(t)\|\leqslant \|v_\alpha(0)\|\leqslant \overline{C}_5\alpha^{-\frac{1}{k+1}}\|f\|, \forall t\in[0,T]. $ The lemma is proved. □

Lemma 6. If $\|u(0)\|_p\leqslant E$ holds for some positive constants $p,E>0$ , then there exist constants $\overline{C}_{7}$ and $\overline{C}_8$ such that

Proof. We have

Equation (4.4)

From (4.3) and (4.4), we have

Equation (4.5)

If p   <  k  +  1, using lemmas 1 and 3, we get

From (4.5) and this inequality, we have

Therefore, there exists a constant $\overline{C}_{7}$ such that

If $p\geqslant k+1$ , from (4.5) we obtain

and thus arrive at the second estimate of the lemma. □

Now we are in a position to prove theorem 3.

The well-posedness of problem (3.1) follows from lemmas 4 and 5.

Proof of part (i) of theorem 3.

If p  <  k  +  1, using lemma 6, we have $ \|u(0)-v_\alpha(0)\|^2\leqslant \overline{C}_7\left(\alpha^{\frac{2p}{k+1}}E^2+\alpha^{\frac{-2}{k+1}}\varepsilon^2\right). $ From $ u(t)-v_\alpha(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0)-v_\alpha(0), \phi_n\right\rangle \phi_n$ and $ 0\leqslant E_{\gamma,1}(-\lambda_n t^\gamma)\leqslant 1$ , we get $ \|u(t)-v_\alpha(t)\|^2\leqslant \overline{C}_7\left(\alpha^{\frac{2p}{k+1}}E^2+\alpha^{\frac{-2}{k+1}}\varepsilon^2\right). $ Choosing $\alpha=\left({\varepsilon}/{E}\right){}^{\frac{k+1}{p+1}}$ , we arrive at the conclusion of part (i) of theorem 3.
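The choice of $\alpha$ here is the usual balancing of the two terms in the bound from lemma 6; explicitly:

```latex
\alpha^{\frac{2p}{k+1}}E^{2}=\alpha^{-\frac{2}{k+1}}\varepsilon^{2}
\;\Longleftrightarrow\;
\alpha^{\frac{2(p+1)}{k+1}}=\Bigl(\frac{\varepsilon}{E}\Bigr)^{2}
\;\Longleftrightarrow\;
\alpha=\Bigl(\frac{\varepsilon}{E}\Bigr)^{\frac{k+1}{p+1}},
```

with which each of the two terms equals $\varepsilon^{\frac{2p}{p+1}}E^{\frac{2}{p+1}}$ , giving the Hölder rate $\varepsilon^{\frac{p}{p+1}}E^{\frac{1}{p+1}}$ of part (i).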

Proof of part (ii) of theorem 3.

If $p\geqslant k+1$ , from $u(t)-v_\alpha(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0)-v_\alpha(0), \phi_n\right\rangle \phi_n$ and $ 0\leqslant E_{\gamma,1}(-\lambda_n t^\gamma)\leqslant 1$ , and lemma 6, we have $ \|u(t)-v_\alpha(t)\|^2\leqslant \overline{C}_8\left(\alpha^2 E^2+\alpha^{\frac{-2}{k+1}}\varepsilon^2\right). $ Choosing $\alpha=\left(\frac{\varepsilon}{E}\right){}^{\frac{k+1}{k+2}}$ , we arrive at part (ii) of theorem 3.

4.2. Proof of theorem 4

We need the following auxiliary result.

Lemma 7. Set $\rho (\alpha)=\|v_\alpha(T)-f\|$ and suppose that $f \neq 0$ . Then

  • (a)  
    $\rho $ is a continuous function,
  • (b)  
    $\lim\limits_{\alpha\to 0^{+}}\rho (\alpha)=0,$
  • (c)  
    $\lim\limits_{\alpha\to +\infty}\rho (\alpha)=\|f\|,$
  • (d)  
    $\rho $ is a strictly increasing function.

Proof. 

  • (a)  
    From
    Equation (4.6)
    we directly verify the continuity of $\rho $ and $\rho(\alpha)>0$ for all $\alpha>0$ .
  • (b)  
    Let $\delta$ be an arbitrary positive number. Since $\|f\|^2=\sum_{n=1}^\infty\left\langle\,f,\phi_n\right\rangle ^2$ , there exists a positive integer $n_\delta$ such that $\sum_{n=n_\delta+1}^\infty\left\langle\,f,\phi_n\right\rangle ^2<\frac{\delta^2}{2}$ . For $0<\alpha<\frac{\overline{C}_1\delta}{\sqrt{2}\lambda_{n_\delta}^{k+1}\|f\|}$ , using lemma 1, we have
    This implies that $\lim\limits_{\alpha\to 0^{+}}\rho (\alpha)=0.$
  • (c)  
    From (4.6) we have $\rho(\alpha)\leqslant\|f\|$ and using lemma 1 we get
    Therefore, $ \|f\|\geqslant \rho(\alpha)\geqslant\frac{\|f\|} {1+\frac{\overline{C}_2}{\alpha\lambda_1^{k+1}}} . $ This implies that $\lim\limits_{\alpha\to +\infty}\rho (\alpha)=\|f\|.$
  • (d)  
    For $0<\alpha_1<\alpha_2$ , we have $ \frac{\alpha_1\lambda_n^k}{\alpha_1\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)}< \frac{\alpha_2\lambda_n^k}{\alpha_2\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)}. $ Since $\|f\|>0$ , there exists a positive integer n0 such that $\left\langle\,f, \phi_{n_0}\right\rangle ^2>0$ . Therefore $\rho(\alpha_1)<\rho(\alpha_2)$ , and we conclude that $\rho $ is a strictly increasing function. The lemma is proved. □
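Lemma 7 is exactly what makes the a posteriori rule computable: $\rho$ is continuous, strictly increasing, and sweeps from 0 to $\|f\|$ , so the discrepancy equation $\rho(\alpha)=\tau\varepsilon^\beta$ has a unique root that plain bisection finds. A sketch with synthetic spectral data (all names and the toy data are ours, not from the paper; `E_T[n]` stands for $E_{\gamma,1}(-\lambda_n T^\gamma)$ ):

```python
import math

def rho(alpha, lam, f_coef, E_T, k):
    """Discrepancy ||v_alpha(T) - f|| from (4.6) in spectral form."""
    return math.sqrt(sum((alpha * l**k * fn / (alpha * l**k + e)) ** 2
                         for l, fn, e in zip(lam, f_coef, E_T)))

def solve_discrepancy(target, lam, f_coef, E_T, k, lo=1e-14, hi=1e6, iters=200):
    """Bisection for rho(alpha) = target, 0 < target < ||f||.
    Geometric bisection, since alpha ranges over many decades."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if rho(mid, lam, f_coef, E_T, k) < target:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Toy data: three modes, and a target strictly between 0 and ||f||.
lam, f_coef, E_T = [1.0, 4.0, 9.0], [1.0, 0.5, 0.25], [0.5, 0.2, 0.1]
alpha_star = solve_discrepancy(0.5, lam, f_coef, E_T, k=1)
```

Monotonicity (part (d)) guarantees the bracket never loses the root, and parts (b), (c) guarantee a bracket exists whenever the target lies in $(0,\|f\|)$ .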

Lemma 8. Let $k\in \mathbb{N}$ . If $u(t)$ is a solution of problem (1.1) satisfying (3.2), then there exists a constant $\overline{C}_{9}$ such that

Proof. From (4.5) and using the Hölder inequality, we obtain

After some estimations, we get

Equation (4.7)

Noting that $\|v_{\alpha_\varepsilon}(T)-f\|^2=\sum_{n=1}^{\infty}\left(\frac{\alpha_\varepsilon\lambda_n^k\left\langle\,f,\phi_n\right\rangle } {\alpha_\varepsilon\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)}\right){}^{2}$ , we get

Equation (4.8)

The lemma is proved. □

Now we are in a position to prove theorem 4.

Proof of part (i) of theorem 4.

It follows from lemma 7 that there exists a unique number $\alpha_\varepsilon > 0$ satisfying (3.3).

We have

Equation (4.9)

It follows from (4.9) that

Equation (4.10)

If 0  <  p   <  k, using lemmas 1 and 3, we get

Equation (4.11)

From (4.10) and (4.11), we have

Equation (4.12)

Further, from lemma 8, (3.3) and (4.12), we get

Equation (4.13)

If $p \geqslant k>0$ , from (4.10), we have

Equation (4.14)

Hence, from lemma 8, (3.3) and (4.14), we get

Equation (4.15)

From $u(t)-v_\alpha(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0)-v_\alpha(0), \phi_n\right\rangle \phi_n$ and $ 0 \leqslant E_{\gamma,1}(-\lambda_n t^\gamma)\leqslant 1$ , and (4.13) and (4.15), part (i) of theorem 4 is proved.

Proof of part (ii) of theorem 4.

It follows from lemma 7 that there exists a unique number $\alpha_\varepsilon > 0$ satisfying (3.4).

We have

Equation (4.16)

If $\varepsilon$ is sufficiently small, then

Equation (4.17)

If k  =  0, from lemma 8, (3.4) and (4.17), there exists a constant $\overline{C}_{10}$ such that

Equation (4.18)

From $u(t)-v_\alpha(t)=\sum_{n=1}^\infty E_{\gamma,1}(-\lambda_n t^\gamma)\left\langle u(0)-v_\alpha(0), \phi_n\right\rangle \phi_n$ and $ 0 \leqslant E_{\gamma,1}(-\lambda_n t^\gamma)\leqslant 1$ , and (4.18), there exists a constant C5 such that

Part (ii) of theorem 4 is proved.

4.3. Proof of theorem 5

Since $ u(t)=\sum_{n=1}^\infty E_{\gamma,1}\left(-\lambda_n t^\gamma\right)\left\langle u(0),\phi_n\right\rangle \phi_n, \quad v_\alpha(t)=\sum_{n=1}^\infty\frac{E_{\gamma,1}\left(-\lambda_n t^\gamma\right)\left\langle\,f,\phi_n\right\rangle \phi_n} {\alpha\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)} $ and $0\leqslant E_{\gamma,1}\left(-\lambda_n t^\gamma\right)\leqslant 1$ , we have

Equation (4.19)

With k  >  q, using lemmas 1 and 3, we have

Therefore, with $k\geqslant q$ there exists a constant $\overline{C}_{11}$ such that

Equation (4.20)

From (4.19) and (4.20) there exists a constant $\overline{C}_{12}$ such that

Equation (4.21)

If q  <  p   <  k  +  q  +  1, using lemmas 1 and 3, we have

Therefore, with $q< p\leqslant k+q+1$ , there exists a constant $\overline{C}_{13}$ such that

Equation (4.22)

From (4.21) and (4.22), we conclude that there exists a constant $\overline{C}_{14}$ such that

Equation (4.23)

If p   >  k  +  q  +  1, from (4.21) and lemma 1, after some estimations, we have

Equation (4.24)

From (4.23) and (4.24), we conclude that there exists a constant $\overline{C}_{15}$ such that

Choosing $\alpha=\left({\varepsilon}/{E}\right){}^{\frac{k+1}{p+1}}$ , we arrive at the conclusion of theorem 5.

4.4. Proof of theorem 6

From (4.21), using the Hölder inequality, we obtain

Since $\frac{\alpha_\varepsilon\lambda_n^{k}} {\alpha_\varepsilon\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)}\leqslant 1$ , it follows that

From that, after some calculations, we can conclude that

Equation (4.25)

Note that, with $k\geqslant 1$ , (3.3) gives $\tau^2\varepsilon^{2\beta}=\|v_{\alpha_\varepsilon}(T)-f\|^2=\sum_{n=1}^{\infty}\left(\frac{\alpha_\varepsilon\lambda_n^{k} \left\langle\,f,\phi_n\right\rangle } {\alpha_\varepsilon\lambda_n^k+E_{\gamma,1}\left(-\lambda_n T^\gamma\right)} \right){}^2$ . From (4.25), we conclude that there exist constants $\overline{C}_{16},\overline{C}_{17}$ such that

Equation (4.26)

From (4.12) and (4.14), with $k > q, p>q$ , we have

and we arrive at the conclusion of theorem 6.

5. Numerical implementation

In this section, we give some numerical implementations of our inversion scheme with different k via the regularized scheme (3.1) in the one-dimensional spatial domain $\Omega=[0,l]$ . Using the eigenfunction expansion, we get the representation of the regularized solution for $k=0,1,2,\cdots$

Equation (5.1)

where the eigensystem is $\left(\lambda_n,\phi_n\right)=\left(\frac{n^2\pi^2}{l^2}, \sqrt{\frac{2}{l}}\sin(\sqrt{\lambda_n}x)\right)$ .

For numerical tests, we consider special cases where the solutions can be expressed through the eigensystem explicitly, so that we can check the numerical errors of our scheme by comparing the numerics with the exact solutions. We take the time-fractional derivative order $\gamma = 1/2$ , for which the function $E_{\gamma,1}(x)$ has an integral expression and thus can be computed accurately in (2.3). Of course, one can also consider other values $\gamma\in (0,1)$ , for which $E_{\gamma,1}(x)$ should be computed efficiently by some package; see, for example, www.mathworks.com/matlabcentral/fileexchange/48154-the-mittag-leffler-function.

We test our regularizing scheme for two 1-dimensional examples with smooth and non-smooth initial status, respectively. We implement the regularizing schemes with $k=0,1,2$ and compare their performances on the regularizing solutions.

Example 1. Consider the following problem

Equation (5.2)

The forward problem for $\gamma=1/2$ has the solution

Equation (5.3)

where

Equation (5.4)

For exact input data $u_{1/2}(\cdot, T)$ , the regularizing solutions for $k=0,1,\cdots$ are

Equation (5.5)

Now we generate the final measurement data at T  =  1 with noise in the form

Equation (5.6)

at discrete points $x_i\in [0,\pi]$ , where $\mathrm{rand}(x_i)\in [-1,1]$ , $i=1,\cdots,101$ , are random numbers and $\delta$ is the noise level. Using the above noisy data, we compute the regularizing solution

Equation (5.7)

with the coefficients

Equation (5.8)

with $0=x_1<\cdots<x_{101}=\pi$ the grid points dividing the interval $[0,\pi]$ .
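The discrete pipeline (5.6)–(5.8) can be sketched as follows (Python). The closed form $E_{1/2,1}(-x)={\rm e}^{x^2}{\rm erfc}(x)$ is a classical identity; the single-mode exact datum $u(x,T)=E_{1/2,1}(-T^{1/2})\sin x$ is our own illustrative choice (the paper's example (5.2) is not reproduced here), and all variable names are ours:

```python
import math, random

def E_half(x):
    """E_{1/2,1}(-x) = exp(x^2) * erfc(x), x >= 0 (classical identity for gamma = 1/2)."""
    return math.exp(x * x) * math.erfc(x)

l, T, k, alpha, delta = math.pi, 1.0, 2, 1e-4, 0.01
xs = [i * l / 100 for i in range(101)]          # 0 = x_1 < ... < x_101 = pi

def phi(n, x):
    """Eigenfunctions on (0, pi); the eigenvalues are lambda_n = n^2."""
    return math.sqrt(2 / l) * math.sin(n * x)

# Noisy final data (5.6); the single-mode exact datum is our illustrative assumption.
random.seed(0)
f = [E_half(T ** 0.5) * math.sin(x) + delta * random.uniform(-1, 1) for x in xs]

def coef(n):
    """Trapezoid approximation of <f, phi_n> on the grid, as in (5.8)."""
    vals = [fv * phi(n, x) for fv, x in zip(f, xs)]
    return (l / 100) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def v0(x):
    """Five-term regularized reconstruction (5.7) at t = 0."""
    return sum(coef(n) * phi(n, x) / (alpha * (n * n) ** k + E_half(n * n * T ** 0.5))
               for n in range(1, 6))
```

With these parameters the reconstruction recovers the initial mode $\sin x$ up to the noise-induced perturbation of the spectral coefficients.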

We take the first five terms in (5.7) for our computations, which leads to optimal numerical behavior in our tests. In fact, the number of terms kept in the truncated summation is itself a regularization parameter which should be chosen suitably in terms of $\delta$ . The authors in [17] and [18] considered the cases k  =  1 and k  =  0, respectively, for the regularized scheme (3.1). The order of the error estimate is not greater than ${1}/{2}$ in [18]; in [17] it is not greater than ${2}/{3}$ for all p  >  0 for the a priori parameter choice rule and not greater than ${1}/{2}$ for the a posteriori parameter choice rule.

Now we take $k=2, p=3$ for the numerical tests. In this case, the theoretical convergence order is ${3}/{4}$ by the a priori parameter choice rule (ii) of theorem 3, which is greater than the order ${1}/{2}$ in [18] and ${2}/{3}$ in [17] for the a priori parameter choice rule.

To show the accuracy of numerical results, we compute the approximate L2 error denoted by

and the approximate relative error in L2 norm denoted by

To verify the convergence rate, we use the index $\mathrm{Order}=\log_{2}\big(e_a(u_0,2\delta)/e_a(u_0,\delta)\big)$ .

The numerical errors and convergence orders for example 1 with different $\delta$ are shown in table 1, where the regularization parameter $\alpha$ is chosen by the a priori parameter choice rule (ii). We can see that the numerical error decreases as the noise level becomes smaller, and the convergence order is close to 0.75, which supports our theoretical convergence estimate.

Table 1. Reconstruction results with $k=2, p=3$ .

$\delta$            0.0005  0.001   0.002   0.004   0.008   0.016   0.032
$e_a(u_0,\delta)$   0.0861  0.1437  0.2383  0.3917  0.6346  1.0056  1.5415
$e_r(u_0,\delta)$   0.0086  0.0144  0.0238  0.0392  0.0635  0.1006  0.1542
Order               --      0.7407  0.7297  0.7170  0.6961  0.6641  0.6163
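The Order row of table 1 can be reproduced from the error values at consecutive noise levels. A small check, assuming the index $\mathrm{Order}=\ln\big(e_a(\delta_j)/e_a(\delta_{j-1})\big)/\ln\big(\delta_j/\delta_{j-1}\big)$:

```python
import math

def order(e_prev, e_next, d_prev, d_next):
    """Empirical convergence order between two consecutive noise levels."""
    return math.log(e_next / e_prev) / math.log(d_next / d_prev)

# delta and e_a values taken from table 1
deltas = [0.0005, 0.001, 0.002, 0.004, 0.008, 0.016, 0.032]
errors = [0.0861, 0.1437, 0.2383, 0.3917, 0.6346, 1.0056, 1.5415]

orders = [order(errors[j - 1], errors[j], deltas[j - 1], deltas[j])
          for j in range(1, len(deltas))]
# the values agree with the Order row of table 1 up to rounding of e_a
```

Since each $\delta_j$ doubles the previous one, each entry is simply $\log_2$ of the ratio of successive errors.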

In figure 1, we show the reconstructions obtained from the same noisy data of level $\delta=0.05$ with a fixed regularization parameter choice strategy for $k=0,1,2$ , respectively. From these spatially distributed results at different times t we can see that, except near t  =  0, the numerical results are relatively good. In particular, the reconstructions presented in (c), corresponding to k  =  2, are much better than those for $k=0,1$ shown in (a) and (b), which supports our theoretical result.

Figure 1. Reconstruction results for k  =  0 (a), k  =  1 (b) and k  =  2 (c), respectively.

In figure 2(a), the computational results for $t = 0,0.2,0.5$ from the noisy data given at T  =  1 with $\delta=0.05$ are shown using the explicit expressions, where the regularization parameter is fixed as $\alpha=0.0005$ . It should be noted that the reconstructions for this value of $\alpha$ are better than those shown in figure 1(c), where $\alpha$ is chosen as the optimal value specified in theorem 3(ii). The reason is that the optimal value should be $C^*\left(\frac{\epsilon}{E}\right)^{\frac{k+1}{p+1}}$ for some constant C*  >  0 which may differ from 1; see the proof of lemma 6. In figure 2(b), we show the absolute error between the exact distributions and the regularized ones.

Figure 2. Numerical performances at different times for fixed noise level and regularization parameter for example 1. (a): reconstructions at $t=0,0.2,0.5$ from noisy data. (b): the absolute error between the exact and regularized solutions.

To show the performance of our scheme for recovering the initial state, we present in figure 3 our reconstructions at t  =  0 from final measurement time T  =  1: for a fixed noise level $\delta=0.02$ with different regularization parameters $\alpha=0.02,0.002,0.0002,0.000\,02$ (left), and for varying noise levels $\delta=0.005,0.01,0.05,0.1$ with the regularization parameter fixed as $\alpha=(\frac{\sqrt{2/\pi}\delta}{E}){}^{3/4}$ (right), which comes from the a priori rule (ii). Comparing the results in (a) and (b), we can see that the reconstructions using the regularization parameters from our theoretical strategy are indeed good.
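The a priori choice used for the right panel, $\alpha=\big(\sqrt{2/\pi}\,\delta/E\big)^{(k+1)/(p+1)}$ with k  =  2 and p  =  3, can be sketched as follows. Here E, the a priori bound on the solution, is an assumed input of the illustration:

```python
import math

def alpha_apriori(delta, E, k=2, p=3):
    """A priori regularization parameter
    alpha = (sqrt(2/pi) * delta / E) ** ((k + 1) / (p + 1));
    for k = 2, p = 3 the exponent is 3/4."""
    return (math.sqrt(2.0 / math.pi) * delta / E) ** ((k + 1) / (p + 1))

# alpha grows with the noise level at the rate delta^(3/4)
a1 = alpha_apriori(0.01, E=1.0)
a2 = alpha_apriori(0.02, E=1.0)
```

Doubling $\delta$ multiplies $\alpha$ by exactly $2^{3/4}$, which is the scaling behind the Hölder rate ${3}/{4}$ of theorem 3(ii).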

Figure 3. Reconstructions of the initial value for example 1. (a): reconstructions for different values of $\alpha$ . (b): reconstructions for different noise levels $\delta$ .

Example 2. Consider a backward problem with a non-smooth initial distribution containing all frequencies.

Equation (5.9)

The solution $u_{1/2}(x,t)$ of the forward problem is expressed as a series with infinitely many terms. Since the high-frequency amplitudes in the Fourier expansion are small, we can approximate the solution by a finite number of terms, i.e.

Equation (5.10)

with the coefficients

Equation (5.11)

for all $n=1,2,\cdots$ . We solve the backward problem using the final measurement data given at T  =  1. We first observe that, for this smooth data at T  =  1 with a random noise perturbation, the high-frequency amplitudes are almost zero in the Fourier expansion. Therefore, we can truncate the expansion of the solution to a finite number of terms. The regularizing solution for $\gamma=1/2$ , $k=0,1,2,\cdots$ is

Equation (5.12)

with the coefficients

Equation (5.13)

with $0=x_1<\cdots<x_{101}=\pi$ the grid points dividing the interval $[0,\pi]$ .
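The series solutions in both examples involve the Mittag-Leffler function $E_\gamma(z)=\sum_{k\geqslant 0} z^k/\Gamma(\gamma k+1)$, which for $\gamma=1/2$ has the closed form $E_{1/2}(-x)={\rm e}^{x^2}\,{\rm erfc}(x)$. A minimal power-series evaluation, offered as an illustration rather than the paper's implementation, checked against that closed form:

```python
import math

def mittag_leffler(gamma, z, n_terms=80):
    """Evaluate E_gamma(z) = sum_{k>=0} z^k / Gamma(gamma*k + 1) by
    truncating its power series (adequate for moderate |z|; asymptotic
    expansions are needed for large negative arguments)."""
    return sum(z ** k / math.gamma(gamma * k + 1) for k in range(n_terms))

# for gamma = 1/2, compare with the closed form E_{1/2}(-x) = exp(x^2) * erfc(x)
x = 1.0
series_val = mittag_leffler(0.5, -x)
closed_val = math.exp(x * x) * math.erfc(x)
```

As a further sanity check, $E_1(z)={\rm e}^z$, so the series at $\gamma=1$ must reduce to the ordinary exponential.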

Firstly, it is easy to see that the reconstruction is satisfactory for exact or almost exact input data, even without applying the regularizing scheme. This phenomenon, which reflects the mild ill-posedness of the backward problem, is shown numerically in figure 4: the computational results for $t = 0,0.2,0.5$ from the exact data given at T  =  1, using the explicit expressions (5.12) with regularization parameter $\alpha =0$ , are shown in (a), while the absolute errors between the exact and regularized solutions are shown in (b). These results show that the numerical approximations are quite accurate for exact input data, except at some boundary points. The reconstruction of $u(x,0)$ near $x=\frac{\pi}{2}$ with exact input data is not so good; the reason is that $u(x,0)$ is not smooth at $x=\frac{\pi}{2}$ , while our regularizing scheme has a smoothing effect.

Figure 4. Reconstructions from exact input data without regularization for example 2. (a): reconstructions at $t = 0,0.2,0.5$ from exact data. (b): the absolute error between the exact and regularized solutions.

For this non-smooth initial distribution, we also check the performance of the proposed regularizing scheme with different values $k=0,1,2$ from noisy data with $\delta=0.03$ , with the regularization parameter chosen by our strategy. In figure 5, we present the reconstructions for different k. We can see that the numerical results are not very good at t  =  0, while they are relatively good at the other time points. Similarly to the performance shown for example 1, the results with k  =  2 (see (c)) are more satisfactory. However, our reconstructions are smooth due to the smoothing effect of the term $\alpha A^k$ , $k=0,1,2$ , introduced in our regularizing scheme.

Figure 5. Reconstructions for k  =  0 (a), k  =  1 (b) and k  =  2 (c) for example 2.

In figure 6(a), the inversion results for $t=0,0.05,0.5$ from the noisy data at T  =  1 with $\delta=0.01$ are shown using the explicit expressions (5.12) with regularization parameter $\alpha = 10^{-5}$ , while in figure 6(b) the spatial error distributions in the L2 norm over the time interval $[0,1]$ for k  =  2 are shown, with the distribution at each instant $t_m$ , $m=1,2,\cdots,101$ , defined by the $L^2(0,\pi)$ norm of the difference between the regularized and exact solutions at $t_m$ .
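The error distribution in figure 6(b) can be computed pointwise in time. A sketch of the discrete spatial L2 error at one instant $t_m$, assuming trapezoidal quadrature on the 101-point grid (the perturbed data below is purely illustrative):

```python
import math

def l2_error(u_reg, u_exact, xs):
    """Discrete L2(0, pi) norm of u_reg - u_exact on a uniform grid,
    computed with the composite trapezoidal rule."""
    d2 = [(a - b) ** 2 for a, b in zip(u_reg, u_exact)]
    h = xs[1] - xs[0]
    return math.sqrt(h * (0.5 * d2[0] + sum(d2[1:-1]) + 0.5 * d2[-1]))

xs = [i * math.pi / 100 for i in range(101)]
u_exact = [math.sin(x) for x in xs]
u_reg = [math.sin(x) + 0.01 for x in xs]  # illustrative constant perturbation

err = l2_error(u_reg, u_exact, xs)  # a constant offset c gives c * sqrt(pi)
```

Evaluating this quantity at each of the 101 time instants yields the curve plotted in figure 6(b).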

Figure 6. Reconstruction performances for example 2 with a non-smooth initial distribution. (a): reconstructions at $t=0,0.05,0.5$ from noisy data. (b): error distribution with respect to time in the spatial L2 norm for k  =  2.

Acknowledgments

This research was supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 101.02-2017.318. Part of this work was done during the stay of DNH and NVD at the Vietnam Institute for Advanced Study in Mathematics. Jijun Liu is supported by NSFC (Nos. 91730304, 11531005).
