
An ill-posed parabolic evolution system for dispersive deoxygenation–reaeration in water


Published 17 December 2013 © 2014 IOP Publishing Ltd
Citation: M Azaïez et al 2014 Inverse Problems 30 015002. DOI: 10.1088/0266-5611/30/1/015002


Abstract

We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physically relevant models used by engineers derive from various additions and corrections to enhance the earlier deoxygenation–reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any conditions. In fact, for real-life concerns, measurements on the DO are easy to obtain and to save. By contrast, collecting data on the BOD is a sensitive task and turns out to be a lengthy process. The overall goal is the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying in its own right, but it could also be a mandatory step in other applications such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties on both mathematical and computational grounds. They set up a severe coupling between both equations and they are the cause of the ill-posed data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can search for; it is the central purpose of the paper. Finally, we have performed some computational experiments to assess the capability of the mixed finite element method in recovering the missing data.


1. Introduction

Oxygen is the central element in water quality assessment and is fundamental to the viability of an aquatic habitat. In two- or three-dimensional bodies of water such as lakes, bays, estuaries or even large rivers, modern models of deoxygenation and reaeration are based on parabolic boundary value problems where dispersion has been added to the earlier Streeter–Phelps model (see [25]). The Streeter–Phelps model was first introduced in [21, 1925] and accounts only for reaction and advection phenomena. The study developed there was tuned to the Ohio river, where only the longitudinal abscissa was considered. Its validity seems to be restricted to rivers and channels under the assumption that pollution is instantaneously mixed across the whole cross-section of the river. To remedy this weakness, the 'dispersive' modeling of contaminant transport in stream-waters was proposed by Taylor (see [22]) to take into account the heterogeneity of the pollutant concentration in cross-sections. This correction enhances and broadens the validity of the original advective-reacting model. On the other hand, including dispersion brings about significant difficulties in mathematical analysis and scientific computations, in particular when inverse problems are addressed. Readers may refer to [10, 18, 20] for sophisticated linear and nonlinear contaminant transport models.

The model we are primarily interested in relies upon the indicator b( ·, ·) for the biochemical oxygen demand (BOD) and on the tracer c( ·, ·) for the dissolved oxygen (DO) concentration. The BOD density describes the rate at which oxygen is consumed by the biodegradation of organic matter contained in the water. The DO concentration represents the amount of oxygen housed in the ambient water. Continuous organic spills (wastewater, sewage or drain discharge) into stream waters with elevated BOD deplete the DO. To translate these words into mathematical equations, let us assume that the organic contaminant is discharged into a body of water occupying a domain $\Omega \subset {\mathbb {R}}^k, k=2,3$. Hence, the Streeter–Phelps equations with dispersion read as

Equation (1.1)

$\partial_t b - \mathrm{div}(d\,\nabla b) + r\, b = F \qquad \mbox{in } \Omega \times (0, T),$
$\partial_t c - \mathrm{div}(d\,\nabla c) + r_*\, c + r\, b = G \qquad \mbox{in } \Omega \times (0, T),$
$b(\cdot, 0) = b_0 \qquad \mbox{in } \Omega,$
$c(\cdot, 0) = c_0 \qquad \mbox{in } \Omega.$

The symbols (r( · ), r*( · )) stand for the reaction parameters and d( · ) is the dispersion coefficient. Actually, the dispersion in two or three dimensions is generally anisotropic, so that it is represented by a tensor. Nevertheless, given that the analysis with isotropic dispersion does not differ from the anisotropic case, we assume from now on that d( · ) is a scalar function. All these physical parameters are supposed to be space-varying. The right-hand side F in (1.1), responsible for deoxygenation, describes the pollution spill, while the datum G represents the rate of oxygen supplied by the atmosphere to the river to replenish its oxygen level. The coupling term rb( ·, ·) in the second equation is the oxygen depletion due to an elevated b( ·, ·). This says that the river will suffer from high deoxygenation due to a high level of BOD, which poses serious threats to the marine environment. The initial state (b0, c0) is often defined as (0, cS): at the time origin, the river is not polluted and the DO is at the saturation level cS, which is most often constant. To complete the model, boundary conditions are required. Neumann conditions are natural; the fluxes (d∂nb, d∂nc) are therefore prescribed along the boundary. The related problem then has a triangular structure and may be uncoupled. Indeed, the scalar equation involving the density b( ·, ·) may be solved independently; it is well-posed. Afterwards, we turn to the equation on c( ·, ·), for which the oxygen depletion term acts as a sink term. It is also well-posed and can be solved easily.
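To fix ideas, the following one-dimensional finite-difference sketch (our own illustration with made-up parameter values, not a computation from the paper) integrates this direct, well-posed version of the system with homogeneous Neumann conditions on both unknowns. It exhibits the triangular structure described above: the BOD equation is advanced first and then feeds the DO equation through the depletion term rb.

```python
import numpy as np

# 1D sketch of the direct (well-posed) dispersive Streeter-Phelps system:
#   db/dt - d b'' + r b        = F   (BOD)
#   dc/dt - d c'' + r* c + r b = G   (DO)
# with homogeneous Neumann conditions on both unknowns, implicit Euler in time.
# All parameter values below are illustrative only.
N, L, T, nt = 100, 1.0, 2.0, 200
h, tau = L / (N - 1), T / nt
d, r, rs, cS = 0.151, 0.2, 0.4, 1.0    # dispersion, reactions, DO saturation
x = np.linspace(0.0, L, N)

# Laplacian with homogeneous Neumann conditions (ghost-node reflection)
Lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / h**2
Lap[0, 1] *= 2.0
Lap[-1, -2] *= 2.0

F = np.exp(-((x - 0.3) / 0.05) ** 2)   # localized organic spill
G = rs * cS                            # atmospheric reaeration source
Mb = np.eye(N) / tau + d * Lap + r * np.eye(N)
Mc = np.eye(N) / tau + d * Lap + rs * np.eye(N)

b, c = np.zeros(N), cS * np.ones(N)    # unpolluted initial state (0, cS)
for _ in range(nt):
    b = np.linalg.solve(Mb, b / tau + F)          # BOD equation, solved first
    c = np.linalg.solve(Mc, c / tau + G - r * b)  # DO equation, depleted by r b
```

The run produces the classical oxygen sag: b rises near the spill while c dips below the saturation level cS and stays nonnegative.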

The problem we focus on here is quite different and arises when abundant data are collected on the DO concentration c( ·, ·), while no measurements are available on the BOD density b( ·, ·). The boundary conditions may henceforth be expressed by

Equation (1.2)

$d\,\partial_n c = 0 \qquad \mbox{on } \partial\Omega \times (0, T),$
$c = \varphi \qquad \mbox{on } \partial\Omega \times (0, T),$

where φ stands for the measured DO level on the boundary.

The homogeneous Neumann condition says there is no oxygen supply or loss across the boundary, and the Dirichlet condition means that the DO level is given. Notice that this is not a rare occurrence. In fact, if polluting agents affect estuaries through their banks then measurements of the DO concentration c( ·, ·) can be performed instantaneously along the border. Similar data on the BOD density b( ·, ·) are harder to collect and tedious to obtain. Protecting the environment from the effects of dangerous contaminant discharges urges the people in charge to content themselves with the available information, make appropriate decisions and implement them as quickly as possible. Numerical software is an affordable tool for the reconstruction of the missing data (d∂nb) along the border. This may help hydrological engineers to enhance and increase the amount of information at their disposal so as to deal efficiently with accidental spills.

A major effect caused by the non-standard boundary conditions (1.2) is that the triangular structure of the problem is definitely lost. The depletion term rb( ·, ·) generates a strong coupling between both equations. The second important issue is the deep change in the nature of problem (1.1)–(1.2): its well-posedness is broken. In fact, the ill-posedness of the problem has been illustrated in [3] (see also [4]). Existence and stability fail. Only identifiability has been successfully stated, in the particular case of constant reaction parameters. A more recent work on the steady problem, which one would expect to be well-posed, has brought out some surprising observations: the complications of the mathematical analysis increase tremendously when the reaction parameters become space-dependent (see [5]). Indeed, all our attempts to state existence results for arbitrary kinetic parameters fell short and, to our knowledge, the issue is still open in the steady case. The positive results in [5] prevail only when the gap between the extrema of the function $\sqrt{r_* /r}$ does not exceed two. For a while, we suspected this limitation to apply as well to the identifiability for the unsteady data completion problem (1.1)–(1.2). Fortunately, nothing could be further from the truth. This will be corroborated by the analysis we propose here.

Hence, the purpose is to obtain the only possible positive result, the identifiability for the unsteady dispersive Streeter–Phelps model (1.1)–(1.2). The main tool we use is a uniqueness result by A Pazy proven in [19, chapter 4, theorem 1.2] in an abstract framework of semi-group theory. The preamble to the application of such a powerful result consists of suitable a priori estimates on the resolvent of the spatial operator underlying the time-dependent problem. We shall then carry out a full study of the quasi-steady version of (1.1)–(1.2), where the time derivatives (∂tb, ∂tc) are replaced by (λb, λc) for a positive real number λ. This is the subject of section 2 and turns out to be the most difficult part. We take up the variational methodology introduced in [4]. A saddle-point problem then emerges, whose major feature is the lack of symmetry. A specific abstract framework has been developed in [6, 17] for non-symmetric saddle-point problems. The keys to their analysis are several inf-sup conditions, some of which are far from obvious to establish. This task is brought to a successful conclusion here, for large values of λ. This result allows us to state the required estimates on the resolvent of the quasi-steady problem. Based on these estimates, section 3 explains how Pazy's theorem provides the identifiability. Section 4 is dedicated to a brief description of the mixed finite element method applied to the approximation of the quasi-steady problem. To close, we perform and discuss in section 5 some numerical experiments to solve the steady and unsteady versions of problem (1.1)–(1.2).

Functional notation. The Lebesgue space of square integrable functions over Ω is denoted by L2(Ω), and ( ·, ·) stands for its scalar product. The Sobolev space H1(Ω) consists of all functions that belong to L2(Ω) together with their first-order derivatives. We also denote by $H^1_0(\Omega )$ the subspace of H1(Ω) made of all functions whose traces on ∂Ω vanish. The dual space of $H^1_0(\Omega )$ is H−1(Ω) and the duality pairing is represented by $\langle \cdot ,\cdot \rangle _{H^{-1},H^1_0 }$. We refer to [1] for more details on these functional spaces. Although not necessary, we shall use for convenience the weighted norms

2. The quasi-steady model

We follow the methodology of [4] for the identifiability result for (1.1)–(1.2), applying Pazy's uniqueness theorem to the time-dependent equation. It is therefore mandatory to study the related quasi-steady boundary value problem; the time derivatives (∂tb, ∂tc) are hence replaced by (λb, λc) for a positive real number λ. The generalization we pursue, compared to [4], concerns the kinetic terms of the dispersive Streeter–Phelps model, where the reaction coefficients r( · ), r*( · ) depend on the space variable. Unexpectedly, this is a source of substantial difficulties. We refer to [5], where this intricate issue is first reported. To proceed, let us first write down the quasi-steady problem

Equation (2.1)

$\lambda b - \mathrm{div}(d\,\nabla b) + r\, b = f \qquad \mbox{in } \Omega,$
$\lambda c - \mathrm{div}(d\,\nabla c) + r_*\, c + r\, b = g \qquad \mbox{in } \Omega,$
$d\,\partial_n c = 0, \quad c = 0 \qquad \mbox{on } \partial\Omega.$

This has been studied in [4] when (r, r*) are constants. A suitable functional framework has been followed there, and existence and uniqueness results have been stated. When the reaction coefficients are space-dependent, things turn out to be surprisingly different and much more complicated. As a matter of fact, in the steady case, that is when λ = 0, we are able to show the well-posedness only for a reduced class of parameters r( · ) and r*( · ) (see [5]). The nice feature here is that this limitation turns out not to apply to the quasi-steady case for large values of λ. A direct consequence is that the identifiability result we primarily have in mind holds for the unsteady problem (1.1)–(1.2) without any restriction on the reaction parameters.
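A finite-difference miniature in 1D (our own sketch, with a manufactured solution of our choosing) makes the equation/unknown bookkeeping of the non-standard boundary conditions concrete: b carries N + 2 unknowns (interior and both endpoints, since it obeys no boundary condition), c carries N interior unknowns (its Dirichlet data being homogenized), and the N + N interior equations plus the two discrete Neumann conditions on c yield a square system, which for the large value of λ used here is invertible.

```python
import numpy as np

# 1D quasi-steady sketch on (0,1) with the non-standard conditions:
#   lambda b - d b'' + r b        = f   in (0,1)   (no boundary condition on b)
#   lambda c - d c'' + r* c + r b = g   in (0,1)
#   c = 0 and c' = 0 at x = 0 and x = 1            (both conditions on c)
# Manufactured solution (our choice): c = x^2 (1-x)^2, b = cos(pi x).
N = 40                                  # interior nodes
h = 1.0 / (N + 1)
x = np.linspace(0.0, 1.0, N + 2)        # nodes 0..N+1
xi = x[1:-1]                            # interior nodes
lam, d, r, rs = 50.0, 0.151, 0.2, 0.4   # illustrative parameters

b_ex = np.cos(np.pi * x)
c_ex = x**2 * (1 - x) ** 2
f = lam * b_ex[1:-1] + d * np.pi**2 * np.cos(np.pi * xi) + r * b_ex[1:-1]
c2 = 2 - 12 * xi + 12 * xi**2           # c'' of the manufactured c
g = lam * c_ex[1:-1] - d * c2 + rs * c_ex[1:-1] + r * b_ex[1:-1]

n = 2 * N + 2                           # unknowns: b (N+2) then interior c (N)
K = np.zeros((n, n)); rhs = np.zeros(n)
for i in range(N):                      # interior b-equations at node i+1
    K[i, i:i + 3] += d / h**2 * np.array([-1.0, 2.0, -1.0])
    K[i, i + 1] += lam + r
    rhs[i] = f[i]
for i in range(N):                      # interior c-equations at node i+1
    row = N + i
    for j, w in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < N:                  # c vanishes at both boundary nodes
            K[row, N + 2 + j] += d / h**2 * w
    K[row, N + 2 + i] += lam + rs
    K[row, i + 1] += r                  # coupling term r b
    rhs[row] = g[i]
# two Neumann conditions on c (second-order one-sided, using c_0 = c_{N+1} = 0)
K[2 * N, N + 2], K[2 * N, N + 3] = 4.0, -1.0                    # c'(0) = 0
K[2 * N + 1, N + 2 + N - 1], K[2 * N + 1, N + 2 + N - 2] = 4.0, -1.0  # c'(1) = 0

u = np.linalg.solve(K, rhs)             # square system, solved directly
b_h, c_h = u[:N + 2], u[N + 2:]
u_ex = np.concatenate([b_ex, c_ex[1:-1]])
resid_exact = np.abs(K @ u_ex - rhs).max()  # truncation residual of exact pair
err_b = np.abs(b_h - b_ex).max()
err_c = np.abs(c_h - c_ex[1:-1]).max()
```

The boundary values of b, which carry the sought flux information, are recovered through the coupling with the two extra conditions on c, up to discretization error.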

Prior to the technical developments, we need supplementary regularity assumptions on the physical parameters. The reaction coefficients r( · ) and r*( · ) are piecewise continuous on $\overline{\Omega }$ and d is piecewise regular. Moreover, there exist positive real numbers $\underline{r}$, $\overline{r}$, $\underline{d}$ and $\overline{d}$ such that

$\underline{r} \leqslant r(\cdot ),\; r_*(\cdot ) \leqslant \overline{r}, \qquad \underline{d} \leqslant d(\cdot ) \leqslant \overline{d} \qquad \mbox{in } \Omega.$

2.1. The variational framework

A suitable functional framework should be available to put problem (2.1) in a variational form. We adopt here the one used in [4] and introduced earlier in [7] for the vorticity-stream-function formulation of the Stokes problem. Let us hence consider the non-standard Hilbert space

endowed with the graph norm

To step ahead, some additional notations and definitions may help us to present the technical results we have in mind. We hence define three bilinear forms, for all $ \chi \in {\mathbb {V}}, \varphi \in {\mathbb {V}}, \psi \in H^1_0(\Omega )$

They are all continuous, a( ·, ·) on ${\mathbb {V}}\times {\mathbb {V}}$ and m( ·, ·) and m*( ·, ·) on $H^1_0(\Omega )\times {\mathbb {V}}$.

Now, assume that f and g are given in H−1(Ω) and L2(Ω) respectively. The mixed variational problem may be expressed in terms of the above bilinear forms as: find (b, c) in ${\mathbb {V}}\times H^1_0(\Omega )$ such that

Equation (2.2)

This is a linear saddle-point problem. It is non-symmetric because m( ·, ·) and m*( ·, ·) do not coincide. Indeed, r( · ) and r*( · ) are typically different. Several worthy manuscripts and papers have been dedicated to the abstract linear saddle-point problems. We refer to [9] for a general treatise and to [17] (see also [6]) which deals specifically with the non-symmetric problems.

2.2. Uniqueness

In the steady case, that is λ = 0, only a partial result has been obtained in [5]. The salient fact is that, when we come to the quasi-steady problem (2.2), stating existence and uniqueness results turns out to be possible for arbitrary coefficients (r( · ), r*( · )), provided that λ is sufficiently large. To understand how things operate when λ is large, we propose to look at the proof of the uniqueness.

Lemma 2.1. Problem (2.2) has at most one solution for large values of λ.

Proof. Assume that the data are homogeneous, that is (f, g) = (0, 0). We aim to check that (b, c) = (0, 0) is the unique solution. We proceed by choosing ψ = c and φ = b in (2.2). Subtracting both equations yields

This formula allows us to derive the following bound

Equation (2.3)

for some real number β > 0 that does not depend on λ. Then, going back to the second equation with φ = c, we obtain that

Using the bound (2.3) allows for

Once again, the constant β* is independent of λ. Now, if λ is large enough then c( · ) = 0 and we have straightforwardly b( · ) = 0. The proof is complete. □

2.3. Existence

After the uniqueness, we investigate the existence. The basic tool is the fulfilment of inf-sup conditions on the bilinear forms involved in the variational formulation (see [6, 17]). Following [6, theorem 2.1 and equation (2.13)] we need to check out some inf-sup conditions on the mixed forms m( ·, ·) and m*( ·, ·).

Lemma 2.2. There exists α > 0 such that the following inf-sup conditions hold

The constant α may be chosen independent of λ.

Proof. We follow the proof of [4, lemma 3.2]. Let $\psi \in H^1_0(\Omega )$ be given. Choosing $\varphi =\psi \in \mathbb {V}$, Green's formula yields that

Then, it is not difficult to see that the following equivalence holds true

Accordingly, we derive that

Taking the sup over $\varphi \in \mathbb {V}$ yields the desired result. The various equivalence constants are clearly independent of λ. The proof is complete. □

Things are less simple for the bilinear form a( ·, ·) which must satisfy an inf-sup condition and a positivity property on both null-spaces of the forms m( ·, ·) and m*( ·, ·). They are defined to be

The following statements

are straightforward. Both spaces are closed subspaces in $\mathbb {V}$ and have thus Hilbertian structures, when endowed with $\Vert \cdot \Vert _{{\mathbb {V}}}$.

Remark 2.3. The norm $\Vert \cdot \Vert _{L^2_r(\Omega )}$ (resp. $\Vert \cdot \Vert _{L^2_{r_*}(\Omega )}$) is equivalent to $\Vert \cdot \Vert _{{\mathbb {V}}}$ in both spaces $ \mathcal {N}$ and $ \mathcal {N}_*$. The equivalence constants are obviously functions of λ.

Notice that if f = 0 then problem (2.2) is equivalent to the reduced one: find b in $ \mathcal {N}$ such that

Equation (2.4)

The analysis of this equation may be brought to a conclusion provided the two inf-sup conditions we are looking for hold. Accordingly, the main purpose is thus to bound from below the following inf-sup quantities

An intermediate result may help in doing so. We construct an isomorphism $\mathcal{K}$ between $\mathcal {N}$ and $\mathcal {N}_*$ as follows. Let φ be in $ \mathcal {N}$, and define θ in $H^1_0(\Omega )$ as the unique solution of

Equation (2.5)

Then, set ${\mathcal {K}} \varphi = \theta + \varphi$. Clearly, the function ${\mathcal {K}}\varphi$ belongs to $\mathcal {N}_*$.

Lemma 2.4. The following inequalities hold

The constants $\underline{\sigma }$ and $\overline{\sigma }$ depend only on the reactions (r( · ), r*( · )).

Proof. Let φ be given in $\mathcal {N}$, we set $\chi = \mathcal {K} \varphi$. The difference function θ = (χ − φ) belongs to $H^1_0(\Omega )$ and fulfils equation (2.5) in the form

By multiplying both sides of this equation by θ and applying Green's integration formula we obtain

Now, using the assumption $\underline{r} \leqslant r(\cdot ),\, r_*(\cdot ) \leqslant \overline{r}$, it is readily confirmed that

We thus deduce that

This yields the inequalities of the lemma with $\overline{\sigma } = (\underline{\sigma })^{-1} = \overline{r}/\underline{r} + 1$. The proof is complete. □

This lemma is of great help in the preparation of the harder part: establishing the inf-sup conditions on a( ·, ·). We have a preliminary result.

Lemma 2.5. There exists $\overline{\lambda } > 0$ such that the following inf-sup conditions hold, for any $\lambda \geqslant \overline{\lambda }$,

The constant η depends on $\overline{\lambda }$; it is uniform with respect to λ for $\lambda \geqslant \overline{\lambda }$.

Proof. Let φ be given in $\mathcal {N}$ and set $\chi = \mathcal {K} \varphi$. Then χ lies in $\mathcal {N}_*$ and θ = (χ − φ) satisfies

This equation stems from (2.5). By multiplying by θ and applying Green's formula we obtain

Using Young's inequality $ts \leqslant t^2 + s^2/4$ (a direct consequence of $(t - s/2)^2 \geqslant 0$) results in the following

whence

Assume that $\overline{\lambda }$ is large enough; then there exists a constant $\beta = \beta (\overline{\lambda }) \in\, ]0, 1[$ such that for all $\lambda \geqslant \overline{\lambda }$ there holds

Now, owing to the estimate by lemma 2.4, we deduce that

This provides the first inf-sup condition. The second one is checked following the same arguments. □

Remark 2.6. According to the proof of lemma 2.5, the steady case (λ = 0) suffers from limitations. The proof still works for the class of kinetic coefficients (r, r*) subject to the restriction that the ratio function r*/r stays below 4. Actually, that result has been extended to a broader class of (r, r*) (see [5]).

Unless otherwise stated, from now on $\overline{\lambda }$ stands for the real number determined in lemma 2.5. Next, we have the following results.

Lemma 2.7. The following inf-sup conditions hold, for any $\lambda \geqslant \overline{\lambda }$,

Here, the constant $\eta _\lambda ^{\prime }$ is dependent on λ.

Proof. It follows directly from lemma 2.5 and remark 2.3. □

Remark 2.8. It can be shown that the constant ηλ decays like $\frac{1}{\lambda ^2}$ as λ grows.

Sufficient tools to state and prove the existence for the mixed quasi-steady problem (2.2) are now available. The following is a direct consequence of lemmas 2.2 and 2.7.

Proposition 2.9. Assume that $\lambda \geqslant \overline{\lambda }$. Then, problem (2.2) has a unique solution (b, c) in ${\mathbb {V}}\times H^1_0(\Omega )$. Moreover this solution satisfies

The constant Cλ depends on λ.

2.4. Stability

Deriving a priori estimates on the solution of (2.2) is possible from the previous section. But the main point we want to examine is the behavior of the stability constant with respect to λ. That is the reason why we choose to handle this issue in a separate paragraph.

Proposition 2.10. Assume that $\lambda \geqslant \overline{\lambda }$. Then, we have that

The constant C does not depend on λ.

Proof. The proof is developed in two steps.

  • (i)  
    We start with the case (f = 0). Applying the first inf-sup condition in lemma 2.5 to the reduced problem (2.4), we derive that
    The second equation of problem (2.2), with φ = c, provides that
    This, together with the bound in b, yields in particular that
  • (ii)  
    Extension to the case $(f\not=0)$ relies on the superposition principle. Let us first consider $b_\circ \in H^1_0(\Omega )$, the unique solution of equation
    We have straightforwardly the following bound
    Equation (2.6)
    with some constant C independent of λ. Now, we set $b_* = b - b_\circ$. Then, (b*, c) is the unique solution of (2.2) where (f, g) are changed into $(0, g - rb_\circ )$. We are thus in the context of (i). We derive that
    Calling for estimate (2.6), we obtain the desired stability on (b, c). The proof is complete.

 □

3. Identifiability for the dynamic problem

The central statement we are going to prove is the identifiability for the time-dependent dispersive Streeter–Phelps model (1.1)–(1.2). The tool to establish uniqueness is a suitable theorem worked out by A Pazy in his treatise on semi-groups of linear operators (see [19, chapter 4, theorem 1.2]). Let A be a closed, densely defined operator in a Hilbert space ${\mathbb {H}}$. Consider u in $ {\mathscr {C}}([0,T];{\mathbb {H}})$, a solution to the abstract initial value problem

Equation (3.1)

$u^{\prime }(t) + A u(t) = 0 \quad \mbox{for } t \in (0, T), \qquad u(0) = 0.$

Pazy's uniqueness theorem tells us the following.

Lemma 3.1. Assume that the resolvent ${\mathcal {R}}(\lambda )= (\lambda + A)^{-1}$ satisfies

Then, the trivial function u = 0 is the unique solution to problem (3.1) that belongs to L2(ε, T; D(A)) for all ε > 0.

Remark 3.2. In his theorem, Pazy assumed that the solution u is in ${\mathscr {C}}([0,T]; D(A))$. This hypothesis can be weakened to uL2(ε, T; D(A)) for all ε > 0. The proof provided in [19, pp 100–101] remains unchanged.
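As a toy numerical illustration of the kind of resolvent estimate lemma 3.1 requires (on a symmetric model operator of our own choosing, not the paper's non-symmetric A), one can check that for a discrete Dirichlet Laplacian the resolvent norm equals 1/(λ + μmin) and so, in particular, stays bounded by 1/λ:

```python
import numpy as np

# Toy check (not the paper's operator): for a symmetric positive definite A,
# ||(lambda I + A)^{-1}||_2 = 1/(lambda + mu_min) <= 1/lambda for lambda > 0.
N = 50
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2      # 1D Dirichlet Laplacian
mu_min = np.linalg.eigvalsh(A)[0]               # smallest eigenvalue

norms = {}
for lam in (1.0, 10.0, 100.0, 1000.0):
    R = np.linalg.inv(lam * np.eye(N) + A)      # the resolvent R(lambda)
    norms[lam] = np.linalg.norm(R, 2)
```

For the genuinely non-symmetric operator of the paper, such a uniform bound is precisely what proposition 3.4 below delivers.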

The methodology followed here starts by setting $ \mathbb {H}(\Omega ) = H^{-1}(\Omega )\times L^2(\Omega )$. Then we consider the time-dependent system

Equation (3.2)

The operator A is linear and unbounded with the domain

It is defined by

The domain D(A) is dense in $ \mathbb {H}(\Omega )$ and it is readily checked that the graph of A is closed. A is thus a closed operator with a dense domain. According to lemma 3.1, the analysis of the differential equation (3.2) is based on the resolvent of A. We need then to check that $\mathcal {R}(\lambda )$ is well defined in ${ \mathbb {H}}(\Omega )$, for large enough λ.

Lemma 3.3. Let (f, g) be given in $ \mathbb {H}(\Omega )$. Then, for any $\lambda \geqslant \overline{\lambda }$, problem (2.1) has a unique solution (b, c) that belongs to D(A).

Proof. Let (f, g) be given in $ \mathbb {H}(\Omega )$. First, we call for proposition 2.9. The variational problem (2.2) therefore admits a unique solution (b, c) in ${\mathbb {V}}\times H^1_0(\Omega )$. Going back to the strong formulation (2.1), we see that (div(d ∇c)) actually belongs to L2(Ω). Besides, using Green's formula in the second equation of (2.2) shows that the homogeneous Neumann condition on the boundary (d∂nc = 0) is fulfilled. The proof is complete. □

We now come to the crucial point: the stability of the resolvent in the framework of $ \mathbb {H}(\Omega )$. The following statement holds.

Proposition 3.4. Assume that $\lambda \geqslant \overline{\lambda }$. Then we have that

The constant C does not depend on λ.

Proof. This follows directly from proposition 2.10. □

The last step is the determination of the regularity class in which we intend to reconstruct (b, c) as a solution to the system (1.1)–(1.2). At this stage, we need to indicate the regularity of the source terms and of the initial state on the DO. We assume that F, G are both given in L2(0, T; L2(Ω)) and that b0, c0 ∈ L2(Ω).

Now, the very issue we are concerned with here is the data completion. As a matter of fact, we seek the recovery of the polluting flux ξ through the boundary. Thus, we have

Equation (3.3)

$d\,\partial_n b = \xi \qquad \mbox{on } \partial\Omega \times (0, T).$

The function ξ naturally belongs to L2(0, T; H−1/2(∂Ω)). Consequently, the variational solution b of the boundary value problem composed of the first and third equations of (1.1) together with equation (3.3) belongs to $\mathscr {C}([0,T]; L^2(\Omega )) \cap L^2(0,T; H^1(\Omega ))$ (see [16]). We obtain straightforwardly that div(d ∇b) ∈ L2(0, T; H−1(Ω)). Now, we switch to the tracer c. On account of the triangular structure of system (1.1), we consider the boundary value problem on c obtained by putting together the second partial differential equation, with source data (G − rb) ∈ L2(0, T; L2(Ω)), and the fourth equation of (1.1), to which we add the homogeneous Neumann boundary condition in (1.2). The weak solution c lies naturally in $\mathscr {C}([0,T]; L^2(\Omega )) \cap L^2(0,T; H^1(\Omega ))$. Besides, owing to the parabolic regularity, we deduce that div(d ∇c) ∈ L2(ε, T; L2(Ω)) for all ε > 0 (see, e.g., [16]). We thus conclude that the following regularity actually holds: $(b,c)\in \mathscr {C}([0,T]; {\mathbb {H} }(\Omega )) \cap {L^2}(\varepsilon ,T; D(A))$ for all ε > 0.

Remark 3.5. Notice that if the initial state c0 belongs to H1(Ω) then we have the better regularity div(d ∇c) ∈ L2(0, T; L2(Ω)). This is highly realistic since the saturation state of the lake enjoys some smoothness.

Equipped with suitable mathematical tools, we are able to complete the proof of the uniqueness for the time-dependent pollution problem (1.1)–(1.2). No particular assumptions are required on the kinetic parameters (r( · ), r*( · )) to obtain such a result.

Theorem 3.6. Assume that (F, G) is given in L2(0, T; L2(Ω)2) and that b0, c0 ∈ L2(Ω). Then, problem (1.1)–(1.2) has at most one solution (b, c) that lies in $\mathscr {C}([0,T]; \mathbb {H}(\Omega ))\cap {L^2}(\varepsilon ,T; D(A))$, for all ε > 0.

Proof. Owing to proposition 3.4 and the regularity discussion conducted above, we are in the context of lemma 3.1. This provides the uniqueness. The proof is complete. □

4. A mixed finite element method

Prior to any numerical discussion, we briefly describe the finite element discretization we use in our computations for the quasi-steady problem (2.1) and then for the unsteady model (1.1). The quasi-steady version (2.1) looks like the vorticity/stream-function problem. As soundly remarked in [2], users of linear finite elements should expect some inaccuracy, in particular on the variable b( · ). A relevant remedy to this sort of numerical locking is to introduce a regularization procedure to improve the reliability of the finite element discretization. The one we use is fully assessed in [2].

To alleviate the presentation, we consider that Ω is a two-dimensional domain. Assume that it is polygonal and is the union of a finite number of triangles. Then, we introduce a regular family $(\mathcal{T}_h)_h$ of triangulations of Ω.

  • Ω is the union of all elements of $\mathcal{T}_h$;
  • The intersection of two different elements of $\mathcal{T}_h$, if not empty, is a vertex or a whole edge of both of them;
  • The ratio of the diameter hK of any element K of $\mathcal{T}_h$ to the diameter of its inscribed circle is smaller than a constant σ independent of h.

The mesh-size h is the maximum of the diameters hK. We refer to [8, 11] for the basics of the finite element method.
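The third condition above is the usual shape-regularity requirement. A small helper (our own, with hypothetical naming) computes, for a triangle, the ratio of its diameter hK to the diameter of its inscribed circle via Heron's formula:

```python
import math

# Hypothetical helper (our naming): shape-regularity ratio h_K / rho_K of a
# triangle K, where rho_K is the diameter of the inscribed circle.
def shape_ratio(p0, p1, p2):
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    a, b, c = dist(p1, p2), dist(p0, p2), dist(p0, p1)
    s = 0.5 * (a + b + c)                               # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return max(a, b, c) / (2.0 * area / s)              # h_K over inscribed diameter
```

An equilateral triangle gives the smallest possible ratio, √3, while flat, nearly degenerate triangles send it to infinity; the regularity assumption keeps it below a fixed σ over the whole family of meshes.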

Next, we consider the discrete spaces ${\mathbb {V}}_h \subset {\mathbb {V}}$ and $ {\mathbb {H}}_h\subset H^1_0(\Omega )$, defined by

where $\mathcal{P}_1(K)$ stands for the space of restrictions to K of affine functions on ${\mathbb {R}}^2$. The first discrete problem in the quasi-steady case is constructed from (2.2) by the Ritz–Galerkin method. It reads as: find (bh, ch) in ${\mathbb {V}}_h\times {\mathbb {H}}_h$ fulfilling

Equation (4.1)

where now the mixed bilinear forms mh( ·, ·) and m*, h( ·, ·) differ from the exact ones,

The augmented bilinear form aρ, h( ·, ·) is expressed as

$\mathcal{E}_h$ stands for the set of all edges e of elements of $\mathcal{T}_h$ which are not located in ∂Ω, and he is the length of the edge e. The parameter ρ is a positive real number called the regularization coefficient. Note that the bilinear forms mh( ·, ·) and m*, h( ·, ·) coincide with m( ·, ·) and m*( ·, ·), respectively, on $H^1_0(\Omega ) \times H^1(\Omega )$, and hence on ${\mathbb {H}}_h \times {\mathbb {V}}_h$. The numerical analysis of the regularized mixed finite element method in the linear case is carried out in [5]. The convergence estimates established there are the following

Thus, they predict that the error $\Vert b-b_h\Vert _{\mathbb {V}}$ would be strongly affected by the choice of ρ, while $\Vert c-c_h\Vert _{H^1(\Omega )}$ seems to be less sensitive to ρ. For a suitably chosen ρ, both errors decrease like h when the mesh size gets small.

Remark 4.1. The regularization procedure seems mandatory for linear finite elements; otherwise the computations may lead to erratic solutions, and in particular the density bh( · ) may suffer from undesired oscillations along the boundary. Higher-order finite elements are expected to preserve their good behavior, and users can dispense with any regularization without endangering the accuracy.

Remark 4.2. In order to put the mixed problem in matrix form, we denote by $\boldsymbol{b}$ and $\boldsymbol{c}$ the vectors of unknowns for the BOD density bh and the DO concentration ch. The degrees of freedom in $\boldsymbol{b}$ are the values of bh at all finite element vertices, and those of $\boldsymbol{c}$ are the values of ch at the internal vertices, not located on ∂Ω. Problem (4.1) results in a linear system:

Equation (4.2)

The matrix Aρ is symmetric positive definite, while the matrices M and M* are related to the mixed bilinear forms mh( ·, ·) and m*, h( ·, ·) and are rectangular. Owing to the analysis carried out here, the global matrix is square and invertible. The system can thus be solved directly; we use UMFPACK or the sparse solver incorporated in FreeFem++.
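Since the precise block layout of (4.2) is not reproduced above, here is a generic sketch (with random stand-in blocks of our own, nothing from the paper) of assembling such a non-symmetric global matrix from an SPD block playing the role of Aρ and two different rectangular couplings playing the roles of M and M*, and handing it to a direct solver, in the spirit of the UMFPACK solve mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
nb, nc = 12, 8                          # sizes of the b- and c-vectors (illustrative)
X = rng.standard_normal((nb, nb))
A_rho = X @ X.T + nb * np.eye(nb)       # SPD block standing in for A_rho
M = rng.standard_normal((nb, nc))       # rectangular coupling standing in for M
M_star = rng.standard_normal((nc, nb))  # a *different* coupling standing in for M_*

K = np.block([[A_rho, M],
              [M_star, np.zeros((nc, nc))]])  # non-symmetric global matrix
rhs = rng.standard_normal(nb + nc)
sol = np.linalg.solve(K, rhs)           # direct solve, as UMFPACK does in FreeFem++
residual = np.linalg.norm(K @ sol - rhs)
```

The non-symmetry (M* is not the transpose of M) is exactly what prevents the use of solvers tailored to symmetric saddle-point systems and motivates the direct factorization.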

When it comes to the approximation of the time-dependent problem (1.1), a time marching scheme has to be combined with the stabilized finite element method. Let τ be the time step and T = n*τ. We denote by $(b_h^n, c_h^n)$ the approximation of (b(nτ, ·), c(nτ, ·)). We opt for the first-order implicit Euler time scheme. The problem to solve is thus expressed as follows: find the sequence $((b_h^n)_n,(c_h^n)_n)$ in ${\mathbb {V}}_h\times {\mathbb {H}}_h$ that satisfies the induction

Equation (4.3)

The parameter λ is the inverse of the time step, that is λ = τ−1. The resulting algebraic system has to be solved repeatedly and we choose to use a direct algorithm. Let us point out that, although the time-dependent problem (1.1)–(1.2) is ill-posed, it is only mildly ill-posed, as illustrated in [4]. We refer to [24] for the definition of the degree of ill-posedness. We can therefore hope to solve the discrete problem without any additional regularization beyond the one required for the linear finite elements.
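The identification λ = τ−1 can be seen on a scalar model problem (our own 1D sketch with illustrative coefficients): one implicit Euler step of a reaction-dispersion equation is exactly the quasi-steady problem with λ = 1/τ and a right-hand side augmented by λ times the previous iterate.

```python
import numpy as np

# Scalar 1D model (illustrative coefficients): u_t - d u_xx + r u = F,
# homogeneous Dirichlet ends; one implicit Euler step versus the
# quasi-steady problem with lambda = 1/tau.
N, h, d, r, tau = 50, 1.0 / 51, 0.151, 0.2, 0.01
lam = 1.0 / tau
Lap = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
       - np.diag(np.ones(N - 1), -1)) / h**2
A = d * Lap + r * np.eye(N)             # steady part of the operator
u_prev = np.sin(np.pi * h * np.arange(1, N + 1))
F = np.ones(N)

# implicit Euler step:   (u_new - u_prev)/tau + A u_new = F
u_euler = np.linalg.solve(np.eye(N) / tau + A, u_prev / tau + F)
# quasi-steady problem:  lambda u + A u = F + lambda u_prev
u_quasi = np.linalg.solve(lam * np.eye(N) + A, lam * u_prev + F)
```

Both solves use the same matrix and right-hand side, so each time step of (4.3) inherits the solvability and stability results established for the quasi-steady problem with λ large, i.e. τ small.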

5. Numerical examples

We describe two examples to assess the capability of the regularized mixed finite elements in simulating the solution of the steady system (2.1), where λ = 0. Particular attention is paid to the convergence rates we observe for the finite elements. Computations are carried out by means of the code FreeFem++ (see [14]), where a script is specifically dedicated to the pollution system. Then, we switch to the numerical checking of the unsteady problem (1.1)–(1.2).

5.1. The steady problem

The computational domain here is the square $\big(\big]-\frac{1}{2\pi }, 1-\frac{1}{2\pi }\big[\big)^2$. The dispersion parameter is fixed at d = 0.151 while the reaction parameters r( · ) and r*( · ) are piecewise constant. Each of them takes two possible values, 0.2 or 0.1 for r( · ) and 0.4 or 0.2 for r*( · ). Both coefficients are depicted in figure 1; purple regions correspond to the greater values and the yellow strips to the lower values. We are in the steady case, that is λ = 0.

Figure 1. Reaction parameters r( · ) and r*( · ).

We focus, in the first simulation, on the steady system where the exact solution (b, c) is supplied by

The boundary of the domain is split into two portions. Along the horizontal edges, each of the concentrations b( · ) and c( · ) is subjected to Neumann conditions, while along the vertical walls Dirichlet and Neumann data are both enforced on c( · ), b( · ) being free of any boundary condition. Plots in figure 2 are a superimposition of the isolines of the exact and the discrete solutions computed by the linear finite element method. They are obtained with a regularization parameter ρ = 0.5. They show the reliability of the regularizing tool in providing a good-quality approximation of b( · ).

Figure 2. Isolines of the exact and the regularized computed solutions.

The variation of the relative errors (b − bh, c − ch) versus the mesh size h is depicted in the left panel of figure 3 for the suitably regularized solution (ρ = 0.5). The slopes of the H1-convergence curves for b( · ) and c( · ) are close to 0.91 and 1.03, respectively. Those related to the error in the L2-norm are measured at (1.41, 1.92). The convergence observed on c( · ) is in agreement with the theoretical findings (see [5]). The variations of the error on b( · ) are less coherent. In order to provide a complete insight into the quality of the approximation, we depict the variations of the same errors with respect to the L∞-norm in the right plot of the same figure 3. The convergence rate is evaluated at (0.85, 1.96) for the two tracers (b, c). Notice that so far there are no proven estimates on the maximum norm of the errors.
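Slopes such as those quoted above are commonly extracted by a least-squares fit of log(error) against log(h). The following self-contained Python snippet illustrates the computation on synthetic first- and second-order data (the constants and mesh sizes are arbitrary, not the paper's measurements).

```python
import numpy as np

def convergence_rate(h, err):
    """Least-squares slope of log(err) versus log(h)."""
    slope, _intercept = np.polyfit(np.log(h), np.log(err), 1)
    return slope

# Synthetic data mimicking first-order (H1-like) and second-order
# (L2-like) convergence behaviour.
h = np.array([0.2, 0.1, 0.05, 0.025])
rate_h1 = convergence_rate(h, 0.7 * h)        # expect a slope close to 1
rate_l2 = convergence_rate(h, 0.3 * h ** 2)   # expect a slope close to 2
```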

Figure 3. Convergence rates. The regularization parameter is chosen to be ρ = 0.5.

The issue of selecting the best possible regularization parameter is of course important. Concerned readers may find in the specialized literature many rules for a judicious choice of this parameter. We refer for instance to [12, 13]. This is beyond the scope of the analysis we undertake here. Nevertheless, to figure out what happens for different values of that parameter we propose various plots in figure 4. In the first two rows the solution is under-regularized while in the last row it is over-regularized. We observe that the under-regularized density bh starts to suffer from substantial inaccuracies at the border, in particular in the vicinity of the corners. The quality of ch seems satisfactory. On the contrary, the DO density ch is clearly affected by over-regularization.

Figure 4. Under-regularized solutions with ρ = 0.5 (top) and ρ = 0.05 (middle). Over-regularized solution where ρ = 5 (bottom).

In the second steady test, we consider a more complex geometry. The computational domain looks like Lake Geneva (Lake Léman), which stretches almost 60 km in length and 35 km in width. It has been reconstructed from a satellite picture of the lake (converted into portable gray-map format) shot at the border of France and Switzerland. The dispersion parameter is kept at d = 0.151 while the reaction parameters r( · ) and r*( · ) are represented in the first row of figure 5. The exact solution (b, c) is supplied by

Equation (5.1)

The functions (b, c) can be interpreted as the BOD and DO tracers due to a couple of polluting sources located at two given fixed points $({\boldsymbol{s}}_i)_{i=1,2}$. The functions $(\varrho _i=\varrho _i({\boldsymbol{x}}))_{i=1,2}$ stand for the distance of the current point $ {\boldsymbol{x}}$ to $({\boldsymbol{s}}_i)_{i=1,2}$. We intend to reconstruct the densities b( · ) and c( · ) using finite element computations in the case of two sources located outside the domain, although close to the boundary. The source $ {\boldsymbol{s}}_1$ is situated to the north of the lake while the second source $ {\boldsymbol{s}}_2$ is placed to the south. The boundary part on which both Neumann and Dirichlet conditions are enforced on the DO density c( · ) appears dark in the first plots of figure 5. Along the remaining (clear) portion, a Neumann-type condition is fixed on both BOD and DO densities. Plots in the second row of figure 5 depict the isolines of the computed solutions and may help the reader figure out where $ {\boldsymbol{s}}_1$ and ${\boldsymbol{s}}_2$ are approximately positioned.

Figure 5. Reaction parameters (first row) and the steady computed concentrations (second row). Adapted meshes after five and ten refinements (third and fourth rows).

In this experiment we run computations with a mesh adaptation process. We pursue a double advantage: to calculate a better solution with a number of vertices, or degrees of freedom, fixed beforehand. Efficient mesh generation needs a metric. The construction of that metric uses the Hessian error indicator on the density bh. We refer to [14] for a brief description of this error indicator and of the way the adaptation procedure can be used within Freefem++. The simulations are made with the parameter ρ equal to 0.5. They start from a 'coarse' mesh with 170 triangles and 120 vertices and are stopped after ten refinements. The 'refined' mesh has 5122 triangles and 2753 vertices. The meshes after five and ten refinements are provided in the third row of figure 5. To obtain a better insight, the relative errors are recorded in table 1 when evaluated in the energy, mass and maximum norms. They certify the reliability of the finite element method.
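As a rough illustration of the idea behind a Hessian error indicator (and not of Freefem++'s actual metric construction), the sketch below estimates a curvature-based target mesh size on a uniform grid: for piecewise-linear interpolation the local error scales like h²|H|, so prescribing h ≈ √(tol/|H|) concentrates vertices where the solution bends most. All function names and tolerances are illustrative assumptions.

```python
import numpy as np

def target_mesh_size(u, dx, tol=1e-3, h_min=1e-3, h_max=0.5):
    """Curvature-based target size field from second differences of u."""
    u_xx = (np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)) / dx ** 2
    u_yy = (np.roll(u, -1, axis=1) - 2.0 * u + np.roll(u, 1, axis=1)) / dx ** 2
    curv = np.abs(u_xx) + np.abs(u_yy) + 1e-12    # crude |Hessian| surrogate
    # Equidistributing h^2 * |H| ~ tol gives the local target size below
    return np.clip(np.sqrt(tol / curv), h_min, h_max)

# A sharp bump: the target size should be smallest near its peak.
n = 51
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.01)
h_target = target_mesh_size(u, dx=1.0 / (n - 1))
```

A real adaptation loop would feed such a size field (in metric form) back to the mesh generator, which is what the Freefem++ procedure cited above automates.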

Table 1. Finite element errors in the first (left) and the second (right) computations.

$\begin{array}{llll} {\rm Norm} & H^1 & L^2 & L^\infty\\ (b-b_h) & 0.0825 & 0.0014 & 0.0130\\ (c-c_h) & 0.0433 & 0.0001 & 0.0009 \end{array}$   $\begin{array}{llll} {\rm Norm} & H^1 & L^2 & L^\infty\\ (b-b_h) & 0.0819 & 0.0014 & 0.0076\\ (c-c_h) & 0.0425 & 0.0001 & 0.0007 \end{array}$

The symmetric test corresponds to a permutation of the boundary conditions. Neumann conditions are thus imposed on b( · ) and c( · ) along the dark portion of the boundary. Along the clear part we enforce Cauchy boundary conditions on c( · ). Compared with the previous test, the portion where Cauchy conditions are used on c( · ) is substantially longer. Putting aside the singularities generated by the source terms, specific instabilities may arise along the Cauchy boundary. This boundary is almost twice as long and we aim to achieve the same accuracy. After ten refinement steps, the final mesh has 5371 triangles and 2910 vertices and the quality of the results we obtain is close to that of the symmetric example. This is clearly displayed in table 1. The only noteworthy observation to make after comparing the different adapted meshes in the third and fourth rows of figure 5 is that the refinement seems sensitive to the boundary conditions. It has moved slightly toward the Cauchy part of the boundary. Look at the (bottom-left) arm of the lake to be convinced: it is more densely refined in the second computation, where the Cauchy condition is prescribed on the upper and lower boundaries of that arm.

5.2. The unsteady problem

We begin by illustrating the ability of the time scheme/finite element method to approximate a steady state over a long time. We therefore attempt to capture the steady solution (5.1) of the last test in the previous subsection through the discretized time-dependent model (4.3). The geometry is the same Lake Geneva domain. The boundary conditions are unchanged; Cauchy conditions are imposed on c( · ) on the whole lower part of the boundary and on the left portion of the upper part. This corresponds to the clear part of the boundary represented in the first row of figure 5. The initial condition is set to zero. We use the adapted mesh obtained at the end of the refinement process in the steady calculations. It is represented in the right panel of the fourth row of figure 5 and has 5371 triangles and 2910 vertices. We select the regularization parameter ρ = 0.5 and the time step is fixed at τ = 0.5. We hope to find out whether or not the ill-posedness has a negative effect on the final computations. Does the solution (b( ·, t), c( ·, t)) approximate the steady one when t grows large? The answer is affirmative. The computations are stopped after 100 time steps, when the final instant tF = 50 is reached. We then evaluate the gaps between the exact and the discrete solutions. They are recorded in table 2. The results seem at least as satisfactory as those provided by directly simulating the steady model. Comparison with the results displayed in the left part of table 1 demonstrates a better quality of the BOD density b( · ) computed here. Intermediate dynamics for both densities b( ·, ·) and c( ·, ·) are depicted in figure 6. They show the evolution of the computation towards the steady state. The structure of the isolines of the concentrations b( · ) and c( · ) suggests that the reconstruction of the steady states is less easy in the vicinity of the Cauchy boundary than elsewhere.
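The long-time behaviour exploited here can be illustrated on a toy scalar problem: for db/dt = −r(b − b∞), the implicit Euler iterate converges to the steady value b∞ for any time step τ > 0, which is the mechanism relied upon when marching (4.3) to its steady state. The sketch below (Python, with made-up numbers) monitors the increment between successive iterates as a stopping test.

```python
# Toy scalar analogue of marching a discretized system to steady state.
# For db/dt = -r*(b - b_inf), the implicit Euler update reads
#   b_{n+1} = (b_n + tau*r*b_inf) / (1 + tau*r),
# whose unique fixed point is b_inf, reached for any tau > 0.
def march_to_steady(b0, b_inf, r, tau, tol=1e-10, max_steps=10_000):
    b = b0
    for n in range(1, max_steps + 1):
        b_new = (b + tau * r * b_inf) / (1.0 + tau * r)
        if abs(b_new - b) < tol:       # successive iterates have settled
            return b_new, n
        b = b_new
    return b, max_steps
```

The geometric contraction factor is 1/(1 + τr), so larger time steps reach the steady regime in fewer iterations, at the price of temporal accuracy along the way.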

Figure 6. Convergence of the time-varying solution toward the steady state: densities bh and ch at different times, t = 10, 20, 30, 40 and 50.

Table 2. Accuracy of the time scheme/regularized finite elements method.

$\begin{array}{llll} {\rm Norm} & H^1 & L^2 & L^\infty\\ (b-b_h) & 0.0394 & 0.0027 & 0.0041\\ (c-c_h) & 0.0316 & 0.0002 & 0.0009 \end{array}$

In the second example we deal with the genuinely time-varying model. The exact solution (b, c) is thus time-dependent and is supplied by the expressions

The time step is now equal to τ = 0.01 and a quasi-uniform mesh is used, with 5853 triangles and 3102 vertices. The first run is initiated with the regularization parameter set to ρ = 0.5. This value is the one that provided us with the most accurate numerical simulation in the steady case. The discrete tracers $b_h^{n}$ and $c_h^{n}$ obtained at the final instant tF = nτ = 0.5 (n = 50) are represented in the first plots of figure 7. The isolines of the density c( ·, ·) seem to be nicely shaped. On the contrary, those of b( ·, ·) appear a little misshapen; the spacing between isolines is tight along the portion of the boundary where data are missing (on b( ·, ·)). The relative gap between b(tF, ·) and $b_h^n$ is evaluated at 0.31 in the maximum norm while that between c(tF, ·) and $c_h^n$ is equal to 0.064. We then carried out a second experiment with a higher regularization parameter, namely ρ = 20. The isolines of $b_h^n$ and $c_h^n$ are represented in the second row of figure 7. The structure of the isolines of b( ·, ·) is improved. Indeed, the maximum error on b(tF, ·) decreases to 0.015. On the other hand, the isolines of c( ·, ·) are slightly distorted, though without any substantial damage to the accuracy. Indeed, the gap between c(tF, ·) and $c_h^n$ does not exceed 0.058, which is as good a result as the former one. To summarize, we display in table 3 the maximum norm of the errors at time tF = 0.5 with the regularization parameters ρ = 0.5 and ρ = 20. Things happen as if more regularization is needed in the time-dependent model than in the steady equation. This seems natural, since adding the time derivative terms to the deoxygenation–reaeration system brings some more ill-posedness.

Figure 7. Time-dependent solution: under-regularized solution (top); regularized solution (bottom).

Table 3. Gap between exact and discrete solutions.

$\begin{array}{lll} t\,(\rho ) & 0.5\,(0.5) & 0.5\,(20)\\ (b-b_h) & 0.309 & 0.015\\ (c-c_h) & 0.064 & 0.058 \end{array}$

6. Conclusion

The main problems studied here are the analysis and the discretization of the inverse problem of data completion for the time-dependent dispersive deoxygenation–reaeration model. As illustrated earlier in [4], the most important feature of this problem is the ill-posedness, which is intrinsically linked to the time dependence. As a consequence, uniqueness is the only result that holds true; existence and stability both fail. Surprisingly, the mathematical difficulties originate from the space-varying reaction parameters. This was noticed in [5] for the steady counterpart of the problem. Incidentally, the well-posedness established in that reference for the steady system is in fact restricted to a particular class of reaction coefficients. The main theoretical contribution of this work consists in showing that this limitation does not operate in the unsteady case: the identifiability we prove applies to arbitrary reactions.

The examples shown in the numerical section for the data completion are novel, since we are not aware of any similar computational research on this topic. The model is approximated by putting together a mixed finite element method and an implicit Euler time scheme. The ill-conditioning we expect to inherit from the ill-posedness of the continuous problem is aggravated by the instability of the mixed finite element discretization. To remedy this specific weakness, we adopt the regularizing device recommended in [2]. This allows us to obtain reliable results for the steady and unsteady versions of the deoxygenation–reaeration problem. Needless to say, the computational discussion conducted here is but a modest step on the numerical ground. We still need to sharpen our first observations and deepen the study of some important issues, in particular in connection with the best regularizing strategy to apply. Is it necessary, for instance, to call for a second regularization method to cope with the ill-posedness specifically connected with the time derivatives of the BOD and DO densities? How does the choice of the regularizing parameter(s) influence the quality of the solutions? How can we automatically select the right parameter(s)? Many aspects therefore remain to be carefully investigated and more than one question is to be answered. The calculations achieved in [4] to check the ill-posedness show that our problem can be related to equations of Volterra type. An exposition of the regularization devices that fit this class of problems can be found in [15, 23], for example. Before ending, we emphasize once again that, in the current study, we partially address a numerical subject that might be a milestone for a number of challenges in environmental science and also for simple real-life problems. In summary, there remains a lot to be done; this is a substantial research program which is under consideration.

Acknowledgments

Two anonymous reviewers helped in clarifying some important statements in the paper and in improving substantially its presentation. 'Merci' to them.

10.1088/0266-5611/30/1/015002