
MAP estimators and their consistency in Bayesian nonparametric inverse problems


Published 2 September 2013 © 2013 IOP Publishing Ltd
Citation: M Dashti et al 2013 Inverse Problems 29 095017. DOI 10.1088/0266-5611/29/9/095017


Abstract

We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map $\mathcal {G}$ applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager–Machlup functional defined on the Cameron–Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of $\mathcal {G}(u)$ can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier–Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.


1. Introduction

This paper considers questions from Bayesian statistics in an infinite-dimensional setting, for example in function spaces. We assume our state space to be a general separable Banach space $\bigl (X, \Vert \cdot \Vert _X\bigr )$. While in the finite-dimensional setting, the prior and posterior distribution of such statistical problems can typically be described by densities w.r.t. the Lebesgue measure, such a characterization is no longer possible in the infinite-dimensional spaces we consider here: it can be shown that no analogue of the Lebesgue measure exists in infinite-dimensional spaces. One way to work around this technical problem is to replace the Lebesgue measure with a Gaussian measure on X, i.e. with a Borel probability measure μ0 on X such that all finite-dimensional marginals of μ0 are (possibly degenerate) normal distributions. Using a fixed, centred (mean-zero) Gaussian measure $\mu _0 = \mathcal {N}(0,\mathcal {C}_0)$ as a reference measure, we then assume that the distribution of interest, μ, has a density with respect to μ0:

Equation (1.1): $\frac{\mathrm{d}\mu }{\mathrm{d}\mu _0}(u)\propto \exp \bigl (-\Phi (u)\bigr ).$

Measures μ of this form arise naturally in a number of applications, including the theory of conditioned diffusions [18] and the Bayesian approach to inverse problems [33]. In these settings, there are many applications where $\Phi :X\rightarrow \mathbb {R}$ is a locally Lipschitz continuous function, and it is in this setting that we work.

Our interest is in defining the concept of 'most likely' functions with respect to the measure μ, and in particular the maximum a posteriori (MAP) estimator in the Bayesian context. We will refer to such functions as MAP estimators throughout. We will define the concept precisely and link it to a problem in the calculus of variations, study posterior consistency of the MAP estimator in the Bayesian setting and compute it for a number of illustrative applications.

To motivate the form of MAP estimators considered here, we consider the case where $X=\mathbb {R}^d$ is finite dimensional and the prior μ0 is Gaussian $\mathcal {N}(0,\mathcal {C}_0)$. This prior has density proportional to $\exp (-\frac{1}{2}|\mathcal {C}_0^{-1/2}u|^2)$ with respect to the Lebesgue measure, where | · | denotes the Euclidean norm. The probability density for μ with respect to the Lebesgue measure, given by (1.1), is maximized at minimizers of

Equation (1.2): $I(u)=\frac{1}{2}\Vert u\Vert _E^2+\Phi (u),$

where $\Vert u\Vert _E=|\mathcal {C}_0^{-1/2}u|$. We would like to derive such a result in the infinite-dimensional setting.
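To make the finite-dimensional picture concrete, the following minimal sketch (ours, not from the paper) computes a MAP estimate in $\mathbb {R}^3$ by numerically minimizing (1.2); the forward map, prior covariance, data and noise level are all arbitrary illustrative choices.

```python
# Minimal finite-dimensional illustration (not from the paper): the MAP estimate
# as the minimizer of I(u) = Phi(u) + 0.5*|C0^{-1/2} u|^2.
# The forward map G, prior covariance C0, data y and noise level are illustrative.
import numpy as np
from scipy.optimize import minimize

d = 3
C0 = np.diag([1.0, 0.5, 0.25])                          # assumed prior covariance
C0_inv = np.linalg.inv(C0)
G = lambda u: np.array([u[0] + u[1]**2, np.sin(u[2])])  # toy nonlinear forward map
y = np.array([0.7, 0.2])                                # synthetic data
gamma = 0.1                                             # noise standard deviation

def Phi(u):
    r = y - G(u)
    return 0.5 * np.dot(r, r) / gamma**2

def I(u):
    return Phi(u) + 0.5 * u @ C0_inv @ u

u_map = minimize(I, np.zeros(d)).x
print("MAP estimate:", u_map)
```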

The natural way to talk about MAP estimators in the infinite-dimensional setting is to seek the centre of a small ball with maximal probability and then study the limit of this centre as the radius of the ball shrinks to zero. To this end, let Bδ(z)⊂X be the open ball of radius δ centred at z ∈ X. If there is a functional I, defined on E, which satisfies

Equation (1.3): $\lim _{\delta \rightarrow 0}\frac{\mu \bigl (B^\delta (z_2)\bigr )}{\mu \bigl (B^\delta (z_1)\bigr )}=\exp \bigl (I(z_1)-I(z_2)\bigr ),$

then I is termed the Onsager–Machlup functional [11, 21]. For any fixed z1, the point z2 for which the above limit is maximal is a natural candidate for the MAP estimator of μ and is clearly given by minimizers of the Onsager–Machlup functional. In the finite-dimensional case, it is clear that I given by (1.2) is the Onsager–Machlup functional.

From the theory of infinite-dimensional Gaussian measures [5, 25], it is known that copies of the Gaussian measure μ0 shifted by z are absolutely continuous w.r.t. μ0 itself, if and only if z lies in the Cameron–Martin space $\bigl (E, \langle \cdot , \cdot \rangle _E, \Vert \cdot \Vert _E\bigr )$; furthermore, if the shift direction z is in E, then the shifted measure μz has density

Equation (1.4): $\frac{\mathrm{d}\mu _z}{\mathrm{d}\mu _0}(u)=\exp \bigl (\langle z,u\rangle _E-\tfrac{1}{2}\Vert z\Vert _E^2\bigr ).$

In the finite-dimensional example above, the Cameron–Martin norm of the Gaussian measure μ0 is the norm || · ||E, and it is easy to verify that (1.4) holds for all $z\in \mathbb {R}^d$. In the infinite-dimensional case, it is important to keep in mind that (1.4) only holds for z ∈ E ⊂ X. Similarly, relation (1.3) only holds for z1, z2 ∈ E. In our application, the Cameron–Martin formula (1.4) is used to bound the probability of the shifted ball Bδ(z2) from equation (1.3). (For an exposition of the standard results about small ball probabilities for Gaussian measures, we refer to [5, 25]; see also [24] for related material.) The main technical difficulty that is encountered stems from the fact that the Cameron–Martin space E, while being dense in X, has measure zero with respect to μ0. An example where this problem can be explicitly seen is the case where μ0 is the Wiener measure on L2; in this example, E corresponds to a subset of the Sobolev space H1, which indeed has measure zero w.r.t. the Wiener measure.
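The Cameron–Martin formula (1.4) can be checked directly in finite dimensions; the short script below (ours, not from the paper) compares the ratio of two Gaussian densities with the right-hand side of (1.4), for arbitrary illustrative choices of C0, z and u.

```python
# Finite-dimensional check (illustrative) of the Cameron-Martin density (1.4):
# dmu_z/dmu_0(u) = exp(<z,u>_E - 0.5*||z||_E^2), with <z,u>_E = z^T C0^{-1} u.
import numpy as np
from scipy.stats import multivariate_normal

C0 = np.array([[2.0, 0.3], [0.3, 1.0]])   # assumed covariance
C0_inv = np.linalg.inv(C0)
z = np.array([0.5, -1.0])                 # shift direction
u = np.array([0.2, 0.7])                  # evaluation point

ratio = multivariate_normal(mean=z, cov=C0).pdf(u) / \
        multivariate_normal(mean=np.zeros(2), cov=C0).pdf(u)
cameron_martin = np.exp(z @ C0_inv @ u - 0.5 * z @ C0_inv @ z)
print(ratio, cameron_martin)              # the two numbers agree
```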

Our theoretical results assert that despite these technical complications, the situation from the finite-dimensional example above carries over to the infinite-dimensional case essentially without change. In theorem 3.2, we show that the Onsager–Machlup functional in the infinite-dimensional setting still has the form (1.2), where || · ||E is now the Cameron–Martin norm associated with μ0 (using ||z||E = ∞ for z ∈ X∖E), and in corollary 3.10, we show that the MAP estimators for μ lie in the Cameron–Martin space E and coincide with the minimizers of the Onsager–Machlup functional I.

In the second part of the paper, we consider the inverse problem of estimating an unknown function u in a Banach space X, from a given observation $y \in \mathbb {R}^J$, where

Equation (1.5): $y=G(u)+\zeta ;$

here, $G:X\rightarrow \mathbb {R}^J$ is a possibly nonlinear operator, and ζ is a realization of an $\mathbb {R}^J$-valued centred Gaussian random variable with known covariance Σ. A prior probability measure μ0(du) is put on u, and the distribution of y|u is given by (1.5), with ζ assumed to be independent of u. Under appropriate conditions on μ0 and G, the Bayes theorem is interpreted as giving the following formula for the Radon–Nikodym derivative of the posterior distribution μy on u|y with respect to μ0:

Equation (1.6): $\frac{\mathrm{d}\mu ^y}{\mathrm{d}\mu _0}(u)\propto \exp \bigl (-\Phi (u;y)\bigr ),$

where

Equation (1.7): $\Phi (u;y)=\tfrac{1}{2}\bigl |\Sigma ^{-1/2}\bigl (y-G(u)\bigr )\bigr |^2.$

The derivation of the Bayes formula (1.6) for problems with finite-dimensional data, and ζ in this form, is discussed in [7]. Clearly, then, Bayesian inverse problems with Gaussian priors fall into the class of problems studied in this paper, for potentials Φ given by (1.7) which depend on the observed data y. When the probability measure μ arises from the Bayesian formulation of inverse problems, it is natural to ask whether the MAP estimator is close to the truth underlying the data, in either the small noise or large sample size limits. This is a form of Bayesian posterior consistency, here defined in terms of the MAP estimator only. We will study this question for finite observations of a nonlinear forward model, subject to Gaussian additive noise.

The paper is organized as follows:

  • in section 2, we detail our assumptions on Φ and μ0;
  • in section 3, we give conditions for the existence of an Onsager–Machlup functional I and show that the MAP estimator is well defined as the minimizer of this functional;
  • in section 4, we study the problem of Bayesian posterior consistency by studying the limits of Onsager–Machlup minimizers in the small noise and large sample size limits;
  • in section 5, we study applications arising from data assimilation for the Navier–Stokes equation, as a model for what is done in weather prediction;
  • in section 6, we study applications arising in the theory of conditioned diffusions.

We conclude the introduction with a brief literature review. We first note that MAP estimators are widely used in practice in the infinite-dimensional context [22, 30]. We also note that the functional I in (1.2) resembles a Tikhonov–Phillips regularization of the minimization problem for Φ [12], with the Cameron–Martin norm of the prior determining the regularization. In the theory of classical non-statistical inversion, formulation via the Tikhonov–Phillips regularization leads to an infinite-dimensional optimization problem and has led to deeper understanding and improved algorithms. Our aim is to achieve the same in a probabilistic context. One way of defining a MAP estimator for μ given by (1.1) is to consider the limit of parametric MAP estimators: first discretize the function space using n parameters, and then apply the finite-dimensional argument above to identify an Onsager–Machlup functional on $\mathbb {R}^n$. Passing to the limit n → ∞ in the functional provides a candidate for the limiting Onsager–Machlup functional. This approach is taken in [27, 28, 32] for problems arising in conditioned diffusions. Unfortunately, however, it does not necessarily lead to the correct identification of the Onsager–Machlup functional as defined by (1.3). The reason for this is that the space on which the Onsager–Machlup functional is defined is smoother than the space on which small ball probabilities are defined. Small ball probabilities are needed to properly define the Onsager–Machlup functional in the infinite-dimensional limit. This means that discretization and use of standard numerical analysis limit theorems can, if incorrectly applied, use more regularity than is admissible in identifying the limiting Onsager–Machlup functional. We study the problem directly in the infinite-dimensional setting, without using discretization, leading, we believe, to greater clarity. MAP estimation from the infinite-dimensional perspective has been widely studied for diffusion processes [9] and related stochastic PDEs [34]; see [35] for an overview. Our general setting is similar to that used to study the specific applications arising in [9, 34, 35]. By working with small ball properties of Gaussian measures, and assuming that Φ has natural continuity properties, we are able to derive results in considerable generality. There is a recent related definition of MAP estimators in [19], with application to density estimation in [16]. However, whilst the goal of minimizing I is also identified in [19], the proof in that paper is only valid in finite dimensions since it implicitly assumes that the Cameron–Martin norm is μ0-almost surely (a.s.) finite. In our specific application to fluid mechanics, our analysis demonstrates that widely used variational methods [2] may be interpreted as MAP estimators for an appropriate Bayesian inverse problem and, in particular, that this interpretation, which is understood in the atmospheric sciences community in the finite-dimensional context, is well defined in the limit of infinite spatial resolution.

Posterior consistency in Bayesian nonparametric statistics has a long history [15]. The study of posterior consistency for the Bayesian approach to inverse problems is starting to receive considerable attention. The papers [1, 23] are devoted to obtaining rates of convergence for linear inverse problems with conjugate Gaussian priors, whilst the papers [4, 29] study non-conjugate priors for linear inverse problems. Our analysis of posterior consistency concerns nonlinear problems, and finite data sets, so that multiple solutions are possible. We prove an appropriate weak form of posterior consistency, without rates, building on ideas appearing in [3].

Our form of posterior consistency is weaker than the general form of Bayesian posterior consistency since it does not concern fluctuations in the posterior, simply a point (MAP) estimator. However, we note that for linear Gaussian problems there are examples where the conditions which ensure convergence of the posterior mean (which coincides with the MAP estimator in the linear Gaussian case) also ensure posterior contraction of the entire measure [1, 23].

2. Set-up

Throughout this paper, we assume that $\bigl ( X, \Vert \cdot \Vert _X\bigr )$ is a separable Banach space and that μ0 is a centred Gaussian (probability) measure on X with Cameron–Martin space $\bigl (E, \langle \cdot , \cdot \rangle _E, \Vert \cdot \Vert _E\bigr )$. The measure μ of interest is given by (1.1) and we make the following assumptions concerning the potential Φ.

Assumption 2.1. The function $\Phi :X\rightarrow \mathbb {R}$ satisfies the following conditions.

  • (i)  
    For every ε > 0, there is an $M\in \mathbb {R}$, such that for all u ∈ X, $\Phi (u)\ge M-\varepsilon \Vert u\Vert _X^2.$
  • (ii)  
    Φ is locally bounded from above, i.e. for every r > 0, there exists K = K(r) > 0 such that, for all u ∈ X with ||u||X < r, we have $\Phi (u)\le K.$
  • (iii)  
    Φ is locally Lipschitz continuous, i.e. for every r > 0, there exists L = L(r) > 0, such that for all u1, u2 ∈ X with ||u1||X, ||u2||X < r, we have $|\Phi (u_1)-\Phi (u_2)|\le L\Vert u_1-u_2\Vert _X.$

Assumption 2.1(i) ensures that expression (1.1) for the measure μ is indeed normalizable to give a probability measure; the specific form of the lower bound is designed to ensure that application of the Fernique theorem (see [5] or [25]) proves that the required normalization constant is finite. Assumption 2.1(ii) enables us to obtain explicit bounds from below on small ball probabilities and assumption 2.1(iii) allows us to use continuity to control the Onsager–Machlup functional. Numerous examples satisfying these conditions are given in [18, 33]. Finally, we define a function $I:X\rightarrow \mathbb {R}$ by

Equation (2.1): $I(u)=\Phi (u)+\tfrac{1}{2}\Vert u\Vert _E^2$ for $u\in E$, and $I(u)=+\infty $ otherwise.

We will see in section 3 that I is the Onsager–Machlup functional.
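As a concrete example (our own verification sketch, anticipating the potentials used in section 4), the quadratic potential arising from finite-dimensional Gaussian observations satisfies assumption 2.1 whenever the forward map G is Lipschitz on bounded sets; here L(r) denotes a local Lipschitz constant of G, and K(r), L′(r) are constants depending on r, y and Σ:

```latex
% Sketch (ours): \Phi(u)=\tfrac12|\Sigma^{-1/2}(y-G(u))|^2, with G Lipschitz on bounded sets,
% satisfies assumption 2.1.
\begin{aligned}
\text{(i)}\;&\; \Phi(u)\ \ge\ 0\ \ge\ M-\varepsilon\Vert u\Vert_X^2
   \quad\text{for every }\varepsilon>0,\ \text{with }M=0;\\
\text{(ii)}\;&\; \Vert u\Vert_X<r\ \Rightarrow\ |G(u)|\le |G(0)|+L(r)\,r
   \ \Rightarrow\ \Phi(u)\le K(r);\\
\text{(iii)}\;&\; |\Phi(u_1)-\Phi(u_2)|
   \ \le\ \tfrac12\,\bigl|\Sigma^{-1/2}\bigl(2y-G(u_1)-G(u_2)\bigr)\bigr|\,
   \bigl|\Sigma^{-1/2}\bigl(G(u_1)-G(u_2)\bigr)\bigr|
   \ \le\ L'(r)\,\Vert u_1-u_2\Vert_X
\end{aligned}
```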

Remark 2.2. We close with a brief remark concerning the definition of the Onsager–Machlup functional in the case of a non-centred reference measure $\mu _0=\mathcal {N}(m,\mathcal {C}_0)$. Shifting coordinates by m, it is possible to apply the theory based on the centred Gaussian measure $\mathcal {N}(0,\mathcal {C}_0)$, and then undo the coordinate change. The relevant Onsager–Machlup functional can then be shown to be

$I(u)=\Phi (u)+\tfrac{1}{2}\Vert u-m\Vert _E^2.$

3. MAP estimators and the Onsager–Machlup functional

In this section, we prove two main results. The first, theorem 3.2, establishes that I given by (1.2) is indeed the Onsager–Machlup functional for the measure μ given by (1.1). Then theorem 3.5 and corollary 3.10 show that the MAP estimators, defined precisely in definition 3.1, are characterized by the minimizers of the Onsager–Machlup functional.

For z ∈ X, let Bδ(z)⊂X be the open ball centred at z ∈ X with radius δ in X. Let

$J^\delta (z)=\mu \bigl (B^\delta (z)\bigr )$

be the mass of the ball Bδ(z). We first define the MAP estimator for μ as follows.

Definition 3.1. Let

$z^\delta =\mathop {\rm arg\,max}_{z\in X}J^\delta (z).$

Any point $\tilde{z}\in X$ satisfying $\lim _{\delta \rightarrow 0}(J^\delta (\tilde{z})/J^\delta (z^\delta ))=1$ is a MAP estimator for the measure μ given by (1.1).

We show later on (theorem 3.5) that a strongly convergent subsequence of {zδ}δ > 0 exists, and its limit, that we prove to be in E, is a MAP estimator and also minimizes the Onsager–Machlup functional I. Corollary 3.10 then shows that any MAP estimator $\tilde{z}$ as given in definition 3.1 lives in E as well, and minimizers of I characterize all MAP estimators of μ.

One special case where it is easy to see that the MAP estimator is unique is the case where Φ is linear, but we note that, in general, the MAP estimator cannot be expected to be unique. To achieve uniqueness, stronger conditions on Φ would be required.
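A one-dimensional numerical illustration (ours, not from the paper) of definition 3.1: for a Gaussian prior N(0, 1) and an arbitrary potential Φ satisfying assumption 2.1, the maximizer of the ball mass Jδ(z) approaches the minimizer of I(u) = Φ(u) + u²/2 as δ shrinks, as corollary 3.10 below predicts.

```python
# 1D illustration (ours): the maximizer of the small-ball mass J^delta(z)
# approaches the minimizer of I(u) = Phi(u) + u^2/2 as delta -> 0.
# Phi is an arbitrary smooth choice satisfying assumption 2.1.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

Phi = lambda u: np.cos(2 * u) + 0.5 * u            # illustrative potential
post = lambda u: np.exp(-Phi(u)) * norm.pdf(u)     # unnormalized density of mu w.r.t. Lebesgue

def J(z, delta):
    return quad(post, z - delta, z + delta)[0]     # mass of the ball B^delta(z)

zs = np.linspace(-3.0, 3.0, 601)
I_vals = Phi(zs) + 0.5 * zs**2
print("argmin I:", zs[np.argmin(I_vals)])
for delta in [1.0, 0.3, 0.1, 0.03]:
    Js = [J(z, delta) for z in zs]
    print(delta, "argmax J^delta:", zs[int(np.argmax(Js))])
```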

We first need to show that I is the Onsager–Machlup functional for our problem.

Theorem 3.2. Let assumption 2.1 hold. Then the function I defined by (2.1) is the Onsager–Machlup functional for μ, i.e. for any z1, z2 ∈ E, we have

$\lim _{\delta \rightarrow 0}\frac{J^\delta (z_1)}{J^\delta (z_2)}=\exp \bigl (I(z_2)-I(z_1)\bigr ).$
Proof. Note that Jδ(z) is finite and positive for any zE by assumptions 2.1(i) and (ii) together with the Fernique theorem and the positive mass of all balls in X, centred at points in E, under Gaussian measure [5]. The key estimate in the proof is the following consequence of proposition 3 in section 18 of [25]:

Equation (3.1): $\lim _{\delta \rightarrow 0}\frac{\mu _0\bigl (B^\delta (z)\bigr )}{\mu _0\bigl (B^\delta (0)\bigr )}=\exp \bigl (-\tfrac{1}{2}\Vert z\Vert _E^2\bigr )\quad \mbox{for }z\in E.$

This is the key estimate in the proof since it transfers questions about probability, naturally asked on the space X of full measure under μ0, into statements concerning the Cameron–Martin norm of μ0, which is almost surely infinite under μ0.

We have

By assumption 2.1(iii), for any u, v ∈ X, $|\Phi (u)-\Phi (v)|\le L\Vert u-v\Vert _X,$

where L = L(r) with r > max {||u||X, ||v||X}. Therefore, setting L1 = L(||z1||X + δ) and L2 = L(||z2||X + δ), we can write

Now, by (3.1), we have

with r1(δ) → 1 as δ → 0. Thus,

Equation (3.2)

Similarly, we obtain

with r2(δ) → 1 as δ → 0 and deduce that

Equation (3.3)

Inequalities (3.2) and (3.3) give the desired result. □

We note that similar methods of analysis show the following.

Corollary 3.3. Let the assumptions of theorem 3.2 hold. Then for any z ∈ E,

where Z = ∫Xexp ( − Φ(u)) μ0(du).

Proof. Noting that we consider μ to be a probability measure and hence,

with Z = ∫Xexp ( − Φ(u)) μ0(du), arguing along the lines of the proof of the above theorem gives

with $\hat{L}=L(\Vert z\Vert _X+\delta )$ (where L( · ) is as in assumption 2.1) and r(δ) → 1 as δ → 0. The result then follows by taking $\lim \sup$ and $\lim \inf$ as δ → 0. □

Proposition 3.4. Suppose assumptions 2.1 hold. Then the minimum of $I:E \rightarrow \mathbb {R}$ is attained for some element z* ∈ E.

Proof. The existence of a minimizer of I in E, under the given assumptions, is proved as theorem 5.4 in [33] (and as theorem 2.7 in [7] in the case where Φ is non-negative). □

The rest of this section is devoted to a proof of the result that MAP estimators can be characterized as minimizers of the Onsager–Machlup functional I (theorem 3.5 and corollary 3.10).

Theorem 3.5. Suppose that assumptions 2.1(ii) and (iii) hold. Assume also that there exists an $M\in \mathbb {R}$ such that Φ(u) ⩾ M for any u ∈ X.

  • (i)  
    Let $z^\delta =\mathop {\rm arg\,max}_{z\in X}J^\delta (z)$. There is a $\bar{z}\in E$ and a subsequence of {zδ}δ > 0 which converges to $\bar{z}$ strongly in X.
  • (ii)  
    The limit $\bar{z}$ is a MAP estimator and a minimizer of I.

The proof of this theorem is based on several lemmas. We state and prove these lemmas first and defer the proof of theorem 3.5 to the end of the section where we also state and prove a corollary characterizing the MAP estimators as minimizers of the Onsager–Machlup functional.

Lemma 3.6. Let δ > 0. For any centred Gaussian measure μ0 on a separable Banach space X, we have

where $c=\exp (\frac{a_1}{2}\delta ^2)$ and a1 is a constant independent of z and δ.

Proof. We first show that this is true for a centred Gaussian measure on $\mathbb {R}^n$ with the covariance matrix C = diag[λ1, ..., λn] in basis {e1, ..., en}, where λ1 ⩾ λ2 ⩾ ⋅⋅⋅ ⩾ λn. Let aj = 1/λj, and $|z|^2=z_1^2+\cdots +z_n^2$. Define

Equation (3.4)

and with Bδ(z) the ball of radius δ and centre z in $\mathbb {R}^n$. We have

for any ε < a1, where $\hat{\mu }_0$ is a centred Gaussian measure on $\mathbb {R}^n$ with the covariance matrix diag$[1/\varepsilon ,\,1/(a_2-a_1+\varepsilon ),\,\ldots ,\,1/(a_n-a_1+\varepsilon )]$ (noting that $a_n\ge a_{n-1}\ge \cdots \ge a_1$). By Anderson's inequality for infinite-dimensional spaces (see theorem 2.8.10 of [5]), we have $\hat{\mu }_0(B(z,\delta ))\le \hat{\mu }_0(B(0,\delta ))$ and therefore

and since ε is arbitrarily small, the result follows for the finite-dimensional case.

To show the result for an infinite-dimensional separable Banach space X, we first note that $\lbrace e_j\rbrace _{j=1}^\infty$, the orthogonal basis in the Cameron–Martin space of X for μ0, separates the points in X; therefore, $T:u\rightarrow \lbrace e_j(u)\rbrace _{j=1}^\infty$ is an injective map from X to $\mathbb {R}^\infty$. Let uj = ej(u) and

Then, since μ0 is a Radon measure, for the balls B(0, δ) and B(z, δ), for any ε0 > 0, there exists N large enough such that the cylindrical sets $A_0=P_n^{-1}(P_n(B^\delta (0)))$ and $A_z=P_n^{-1}(P_n(B^\delta (z)))$ satisfy μ0(Bδ(0)▵A0) < ε0 and μ0(Bδ(z)▵Az) < ε0 for n > N [5], where ▵ denotes the symmetric difference. Let zj = (z, ej) and zn = (z1, z2, ..., zn, 0, ...), and, for 0 < ε1 < δ/2, let n > N be large enough that ||z − zn||X ⩽ ε1. With $\alpha =c\,\mathrm{e}^{-\frac{a_1}{2}(\Vert z\Vert _X-\varepsilon _1-\delta )^2}$, we have

Since ε0 and ε1 converge to zero as n → ∞, the result follows. □

Lemma 3.7. Suppose that $\bar{z}\not\in E$, {zδ}δ > 0 ⊂ X and zδ converges weakly to $\bar{z}$ in X as δ → 0. Then for any ε > 0, there exists δ small enough such that

Proof. Let $\mathcal {C}$ be the covariance operator of μ0, and $\lbrace e_j\rbrace _{j\in \mathbb {N}}$ the eigenfunctions of $\mathcal {C}$ scaled with respect to the inner product of E, the Cameron–Martin space of μ0, so that $\lbrace e_j\rbrace _{j\in \mathbb {N}}$ forms an orthonormal basis in E. Let {λj} be the corresponding eigenvalues and aj = 1/λj. Since zδ converges weakly to $\bar{z}$ in X as δ → 0,

Equation (3.5)

and, as $\bar{z}\not\in E$, for any A > 0 there exist N sufficiently large and $\tilde{\delta }>0$ sufficiently small, such that

where xj = ej(z). By (3.5), for $\delta _1<\tilde{\delta }$ small enough, we have $B^{\delta _1}(z^{\delta _1})\subset B^{\tilde{\delta }}(\bar{z})$ and therefore

Equation (3.6)

Let $T_n:X\rightarrow \mathbb {R}^n$ map z to (e1(z), ..., en(z)), and consider $J_{0,n}^\delta (z)$ to be defined as in (3.4). Having (3.6), and choosing δ ⩽ δ1 such that $\mathrm{e}^{-\frac{1}{4}(a_1+\cdots +a_N)\delta ^2}>1/2$, for any n ⩾ N, we can write

As A > 0 was arbitrary, the constant in the last line of the above equation can be made arbitrarily small, by making δ sufficiently small and n sufficiently large. Having this and arguing in a similar way to the final paragraph of the proof of lemma 3.6, the result follows. □

Corollary 3.8. Suppose that $z\not\in E$. Then $\lim _{\delta \rightarrow 0}J^\delta (z)/J^\delta (0)=0.$

Lemma 3.9. Consider {zδ}δ > 0 ⊂ X and suppose that zδ converges weakly and not strongly to 0 in X as δ → 0. Then for any ε > 0, there exists δ small enough such that

Proof. Since zδ converges weakly and not strongly to 0, we have

and therefore, for δ1 small enough, there exists α > 0 such that ||zδ||X > α for any δ < δ1. Let λj, aj and ej, $j\in \mathbb {N}$, be defined as in the proof of lemma 3.7. Since zδ⇀0 as δ → 0,

Equation (3.7)

Also, since for μ0-almost every x ∈ X, $x=\sum _{j\in \mathbb {N}} e_j(x)\hat{e_j}$ and $\lbrace \hat{e}_j=e_j/\sqrt{\lambda _j}\rbrace$ is an orthonormal basis in $X^*_{\mu _0}$ (the closure of X* in $L^2(\mu _0)$) [5], we have

Equation (3.8)

Now, for any A > 0, let N be large enough that $a_N > A^2$. Then, having (3.7) and (3.8), one can choose δ2 < δ1 small enough and N1 > N large enough so that for δ < δ2 and n > N1,

Therefore, letting $J_{0,n}^\delta (z)$ and Tn be defined as in the proof of lemma 3.7, we can write

if δ < δ2 is small enough so that $\mathrm{e}^{A\delta ^2}<2$. Having this and arguing in a similar way to the final paragraph of the proof of lemma 3.6, the result follows. □

Having these preparations in place, we can give the proof of theorem 3.5.

Proof of theorem 3.5. (i) We first show that {zδ} is bounded in X. By assumption 2.1(ii), for any r > 0, there exists K = K(r) > 0 such that $\Phi (u)\le K$

for any u satisfying ||u||X < r; thus, K may be assumed to be a non-decreasing function of r. This implies that

We assume that δ ⩽ 1, and then the inequality above shows that

Equation (3.9)

noting that ε1 is independent of δ.

We can also write

which implies that for any zX and δ > 0,

Equation (3.10)

Now suppose {zδ} is not bounded in X, so that for any R > 0, there exists δR such that $\Vert z^{\delta _R}\Vert _X>R$ (with δR → 0 as R → ∞). By (3.10), (3.9) and the definition of $z^{\delta _R}$, we have

implying that for any δR and corresponding $z^{\delta _R}$

This contradicts the result of lemma 3.6 for δR small enough. Hence, there exist R, δR > 0 such that

Therefore, there exist a $\bar{z}\in X$ and a subsequence of $\lbrace z^\delta \rbrace _{0<\delta <\delta _R}$ which converges weakly in X to $\bar{z}\in X$ as δ → 0.

Now, suppose either

  • (a)  
    there is no strongly convergent subsequence of {zδ} in X, or
  • (b)  
    if there is one, its limit $\bar{z}$ is not in E.

Let UE = {u ∈ E: ||u||E ⩽ 1}. Each of the above situations implies that for any positive $A\in \mathbb {R}$, there is a δ† such that for any δ ⩽ δ†,

Equation (3.11)

We first show that $\bar{z}$ has to be in E. By the definition of zδ, we have (for δ < 1)

Equation (3.12)

Supposing $\bar{z}\not\in E$, in lemma 3.7, we show that for any ε > 0, there exists δ small enough such that

Hence, choosing A in (3.11) such that $\mathrm{e}^{-A^2/2}<\frac{1}{2}\,\mathrm{e}^{K(1)}\,\mathrm{e}^{-M}$, and setting $\varepsilon =\mathrm{e}^{-A^2/2}$, from (3.12), we obtain 1 ⩽ Jδ(zδ)/Jδ(0) < 1, which is a contradiction. We therefore have $\bar{z}\in E$.

Now, knowing that $\bar{z}\in E$, we can show that zδ converges strongly in X. Suppose it does not. Then for $z^\delta -\bar{z}$, the hypotheses of lemma 3.9 are satisfied. Again choosing A in (3.11) such that $\mathrm{e}^{-A^2/2}<\frac{1}{2}\,\mathrm{e}^{K(1)}\,\mathrm{e}^{-M}$, and setting $\varepsilon =\mathrm{e}^{-A^2/2}$, from lemma 3.9 and (3.12), we obtain 1 ⩽ Jδ(zδ)/Jδ(0) < 1, which is a contradiction. Hence, there is a subsequence of {zδ} converging strongly in X to $\bar{z}\in E$.

(ii) Let $z^* = \mathop {\rm arg\,min}_{z\in E}I(z) \in E$; the existence of a minimizer is assured by proposition 3.4. By assumption 2.1(iii), we have

with L1 = L(||zδ||X + δ) and $L_2=L(\Vert \bar{z}\Vert _X+\delta )$. Therefore, since Φ is continuous on X and $z^\delta \rightarrow \bar{z}$ in X,

Suppose {zδ} is not bounded in E, or if it is, it only converges weakly (and not strongly) in E. Then, $\Vert \bar{z}\Vert _E < \liminf _{\delta \rightarrow 0} \Vert z^\delta \Vert _E$, and hence for small enough δ, $\Vert \bar{z}\Vert _E<\Vert z^\delta \Vert _E$. Therefore, for the centered Gaussian measure μ0, since $\Vert z^\delta -\bar{z}\Vert _X\rightarrow 0$, we have

Since by the definition of zδ, $J^\delta (z^\delta )\ge J^\delta (\bar{z})$ and hence

this implies that

Equation (3.13)

In the case where {zδ} converges strongly to $\bar{z}$ in E, by the Cameron–Martin theorem we have

and then by an argument very similar to the proof of theorem 18.3 of [25], one can show that

and (3.13) follows again in a similar way. Therefore, $\bar{z}$ is a MAP estimator of measure μ.

It remains to show that $\bar{z}$ is a minimizer of I. Suppose $\bar{z}$ is not a minimizer of I so that $I(\bar{z})-I(z^*)>0$. Let δ1 be small enough so that in the equation before (3.2), $1<r_1(\delta )< \mathrm{e}^{I(\bar{z})-I(z^*)}$ for any δ < δ1 and therefore,

Equation (3.14)

Let $\alpha =r_1(\delta ) \,\mathrm{e}^{-I(\bar{z})+I(z^*)}$. We have

and this by (3.14) and (3.13) implies that

which is a contradiction, since by the definition of zδ, Jδ(zδ) ⩾ Jδ(z*) for any δ > 0. □

Corollary 3.10. Under the conditions of theorem 3.5, we have the following.

  • (i)  
    Any MAP estimator, given by definition 3.1, minimizes the Onsager–Machlup functional I.
  • (ii)  
    Any z* ∈ E which minimizes the Onsager–Machlup functional I is a MAP estimator for measure μ given by (1.1).

Proof. 

  • (i)  
    Let $\tilde{z}$ be a MAP estimator. By theorem 3.5, we know that {zδ} has a subsequence which strongly converges in X to $\bar{z}$. Let {zα} be the said subsequence. Then by (3.13) one can show that
    By the above equation and since $\tilde{z}$ is a MAP estimator, we can write
    Then corollary 3.8 implies that $\tilde{z}\in E$, and supposing that $\tilde{z}$ is not a minimizer of I would result in a contradiction using an argument similar to that in the last paragraph of the proof of the above theorem.
  • (ii)  
    Note that the assumptions of theorem 3.5 imply those of theorem 3.2. Since $\bar{z}$ is a minimizer of I as well, by theorem 3.2 we have
    Then we can write
    The result follows by definition 3.1.

 □

4. Bayesian inversion and posterior consistency

The structure (1.1), where μ0 is Gaussian, arises in the application of the Bayesian methodology to the solution of inverse problems. In that context, it is interesting to study posterior consistency: the idea that the posterior concentrates near the truth which gave rise to the data, in the small noise or large sample size limits; these two limits are intimately related and indeed there are theorems that quantify this connection for certain linear inverse problems [6].

In this section, we describe the Bayesian approach to nonlinear inverse problems, as outlined in the introduction. We assume that the data are found from the application of G to the truth u† with additional noise:

$y=G(u^\dagger )+\zeta .$

The posterior distribution μy is then of the form (1.6) and in this case, it is convenient to extend the Onsager–Machlup functional I to a mapping from $X \times \mathbb {R}^J$ to $\mathbb {R}$, defined as

$I(u;y)=\tfrac{1}{2}\bigl |\Sigma ^{-1/2}\bigl (y-G(u)\bigr )\bigr |^2+\tfrac{1}{2}\Vert u\Vert _E^2.$

We study the posterior consistency of MAP estimators in both the small noise and large sample size limits. The corresponding results are presented in theorems 4.4 and 4.1, respectively. Specifically, we characterize the sense in which the MAP estimators concentrate on the truth underlying the data in the small noise and large sample size limits.

4.1. Large sample size limit

Let us denote the exact solution by u† and suppose that as data we have the following n random vectors:

$y_j=\mathcal {G}(u^\dagger )+\eta _j,\qquad j=1,\ldots ,n,$

with $y_j\in \mathbb {R}^K$ and $\eta _j\sim \mathcal {N}(0,\mathcal {C}_1)$ independent identically distributed random variables. Thus, in the general setting, we have J = nK, $G(\cdot )=(\mathcal {G}(\cdot ),\ldots ,\mathcal {G}(\cdot ))$ and Σ a block diagonal matrix with $\mathcal {C}_1$ in each block. We have n independent observations, each polluted by $\mathcal{O}(1)$ noise, and we study the limit n → ∞. Corresponding to this set of data and given the prior measure $\mu _0=\mathcal {N}(0,\mathcal {C}_0)$, we have the following formula for the posterior measure on u:

$\frac{\mathrm{d}\mu ^y}{\mathrm{d}\mu _0}(u)\propto \exp \Bigl (-\frac{1}{2}\sum _{j=1}^n|y_j-\mathcal {G}(u)|_{\mathcal {C}_1}^2\Bigr ).$

Here, and in the following, we use the notations $\langle\cdot ,\cdot \rangle_{\mathcal {C}_1}=\langle\mathcal {C}_1^{-1/2}\cdot ,\mathcal {C}_1^{-1/2}\cdot \rangle $ and $|\cdot |_{\mathcal {C}_1}^2=\langle\cdot ,\cdot \rangle _{\mathcal {C}_1}$. By corollary 3.10, MAP estimators for this problem are minimizers of

Equation (4.1): $I_n(u)=\frac{1}{2n}\sum _{j=1}^n|y_j-\mathcal {G}(u)|_{\mathcal {C}_1}^2+\frac{1}{2n}\Vert u\Vert _E^2.$

Our interest is in studying the properties of the limits of minimizers un of In, namely the MAP estimators corresponding to the preceding family of posterior measures. We have the following theorem concerning the behaviour of un when n → ∞.

Theorem 4.1. Assume that $\mathcal {G}:X\rightarrow \mathbb {R}^K$ is Lipschitz on bounded sets and u† ∈ E. For every $n \in \mathbb {N}$, let un ∈ E be a minimizer of In given by (4.1). Then there exist a u* ∈ E and a subsequence of $\lbrace u_n\rbrace _{n \in \mathbb {N}}$ that converges weakly to u* in E, almost surely. For any such u*, we have $\mathcal {G}(u^*)=\mathcal {G}(u^\dagger )$.

We describe some preliminary calculations useful in the proof of this theorem, then give lemma 4.2, also useful in the proof, and finally give the proof itself.

We first observe that, under the assumption that $\mathcal {G}$ is Lipschitz on bounded sets, assumptions 2.1 hold for Φ. We note that

Hence,

Define

We have

Lemma 4.2. Assume that $\mathcal {G}:X\rightarrow \mathbb {R}^K$ is Lipschitz on bounded sets. Then for fixed $n\in \mathbb {N}$ and almost surely, there exists un ∈ E such that $I_n(u_n)=\min _{u\in E}I_n(u).$

Proof. We first observe that, under the assumption that $\mathcal {G}$ is Lipschitz on bounded sets and because for a given n and fixed realizations η1, ..., ηn there exists an r > 0 such that max {|y1|, ..., |yn|} < r, assumptions 2.1 hold for Φ. Since $\mathop{\rm arg\ min}_{u} I_n= \mathop{\rm arg\ min}_{u} J_n$, the result follows by proposition 3.4. □

We may now prove the posterior consistency theorem. From (4.3) onwards, the proof is an adaptation of the proof of theorem 2 of [3]. We note that the assumptions on limiting behaviour of measurement noise in [3] are stronger: property (9) of [3] is not assumed here for our Jn. On the other hand, a frequentist approach is used in [3], while here, since Jn is coming from a Bayesian approach, the norm in the regularization term is stronger (it is related to the Cameron–Martin space of the Gaussian prior). That is why, in our case, the question of what happens if u† lies in X but not in E is relevant; it is answered in corollary 4.3 below.

Proof of theorem 4.1. By the definition of un, we have

Therefore,

Using Young's inequality (see lemma 1.8 of [31], for example) for the last term on the right-hand side, we obtain

Taking expectation and noting that the {ηj} are independent, we obtain

where $K=\mathbb {E}|\mathcal {C}_1^{-1/2}\eta _1|^2$. This implies that

Equation (4.2)

and

Equation (4.3)

  • (1)  
    We first show using (4.3) that there exist u* ∈ E and a subsequence $\lbrace u_{n_k(k)}\rbrace _{k\in \mathbb {N}}$ of {un} such that
    Equation (4.4)
    Let $\lbrace \phi _j\rbrace _{j\in \mathbb {N}}$ be a complete orthonormal system for E. Then,
    Therefore, there exist $\xi _1\in \mathbb {R}$ and a subsequence $\lbrace u_{n_1(k)}\rbrace _{k\in \mathbb {N}}$ of $\lbrace u_n\rbrace _{n\in \mathbb {N}}$ such that $\mathbb {E}\langle u_{n_1(k)},\phi _1 \rangle \rightarrow \xi _1$. Now considering $\mathbb {E}\langle u_{n_1(k)},\phi _2\rangle$ and using the same argument, we conclude that there exist $\xi _2\in \mathbb {R}$ and a subsequence $\lbrace u_{n_2(k)}\rbrace _{k\in \mathbb {N}}$ of $\lbrace u_{n_1(k)}\rbrace _{k\in \mathbb {N}}$ such that $\mathbb {E}\langle u_{n_2(k)},\phi _2 \rangle \rightarrow \xi _2$. Continuing similarly, we can show that there exist $\lbrace \xi _j\rbrace \in \mathbb {R}^\infty$ and $\lbrace u_{n_1(k)}\rbrace _{k\in \mathbb {N}}\supset \lbrace u_{n_2(k)}\rbrace _{k\in \mathbb {N}} \supset \dots \supset \lbrace u_{n_j(k)}\rbrace _{k\in \mathbb {N}}$ such that $\mathbb {E}\langle u_{n_j(k)},\phi _j \rangle \rightarrow \xi _j$ for any $j\in \mathbb {N}$ as k → ∞. Therefore,
    We need to show that $\lbrace \xi _j\rbrace \in \ell ^2(\mathbb {R})$. We have, for any $N\in \mathbb {N}$,
    Therefore, $\lbrace \xi _j\rbrace \in \ell ^2(\mathbb {R})$ and $u^*:=\sum _{j=1}^\infty \xi _j\phi _j \in E$. We can now write for any nonzero v ∈ E,
    Now for any fixed ε > 0, we choose N large enough so that
    and then k large enough so that
    This demonstrates that $\mathbb {E}\langle u_{n_k(k)}-u^*,v\rangle _E\rightarrow 0$ as k → ∞.
  • (2)  
    Now we show the almost sure existence of a convergent subsequence of $\lbrace u_{n_k(k)}\rbrace$. By (4.2), we have $|\mathcal {G}(u_{n_k(k)})-\mathcal {G}(u^\dagger )|_{\mathcal {C}_1}\rightarrow 0$ in probability as k → ∞. Therefore, there exists a subsequence {um(k)} of $\lbrace u_{n_k(k)}\rbrace$ such that
    Now by (4.4), we have 〈um(k)u*, vE → 0 in probability as k → ∞, and hence there exists a subsequence $\lbrace u_{\hat{m}(k)}\rbrace$ of {um(k)} such that $ u_{\hat{m}(k)}$ converges weakly to u* in E almost surely as k → ∞. Since E is compactly embedded in X, this implies that $u_{\hat{m}(k)}\rightarrow u^*$ in X almost surely as k → ∞. The result now follows by continuity of $\mathcal {G}$.

 □

In the case where u† ∈ X (and not necessarily in E), we have the following weaker result.

Corollary 4.3. Suppose that $\mathcal {G}$ and un satisfy the assumptions of theorem 4.1, and that u† ∈ X. Then there exists a subsequence of $\lbrace \mathcal {G}(u_n)\rbrace _{n\in \mathbb {N}}$ converging to $\mathcal {G}(u^\dagger )$ almost surely.

Proof. For any ε > 0, by the density of E in X, there exists v ∈ E such that ||u† − v||X ⩽ ε. Then by the definition of un, we can write

Therefore, dropping $\frac{1}{n}\Vert u_n\Vert _E^2$ on the left-hand side, and using Young's inequality, we obtain

By the local Lipschitz continuity of $\mathcal {G}$, $|\mathcal {G}(u^\dagger )-\mathcal {G}(v)|_{\mathcal {C}_1}^2\le C\varepsilon ^2$, and therefore, taking the expectations and noting the independence of {ηj}, we obtain

implying that

Since the $\lim \inf$ is obviously non-negative and ε was arbitrary, we have $\lim _{n\rightarrow \infty }\mathbb {E}|\mathcal {G}(u^\dagger )-\mathcal {G}(u_n)|_{\mathcal {C}_1}^2=0$. This implies that $|\mathcal {G}(u^\dagger )-\mathcal {G}(u_n)|_{\mathcal {C}_1}\rightarrow 0$ in probability. Therefore, there exists a subsequence of $\lbrace \mathcal {G}(u_n)\rbrace$ which converges to $\mathcal {G}(u^\dagger )$ almost surely. □
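The following toy computation (ours, with an arbitrary two-dimensional forward map, prior and noise covariance) illustrates the statement of theorem 4.1: minimizers of the scaled functional In stabilize as the number of observations n grows, and their images under $\mathcal {G}$ approach $\mathcal {G}(u^\dagger )$.

```python
# Illustrative check (not from the paper) of the large-sample behaviour of
# minimizers of I_n for a toy finite-dimensional problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = lambda u: np.array([u[0]**3 + u[1], u[1]])    # toy nonlinear map, K = 2
u_true = np.array([0.5, -0.3])                    # plays the role of u^dagger
C1 = 0.5 * np.eye(2)                              # observational covariance (assumed)
C1_inv = np.linalg.inv(C1)
C0_inv = np.eye(2)                                # prior (Cameron-Martin) norm (assumed)

def I_n(u, ys):
    n = len(ys)
    misfit = sum((y - G(u)) @ C1_inv @ (y - G(u)) for y in ys)
    return (misfit + u @ C0_inv @ u) / (2 * n)

for n in [1, 10, 100, 1000]:
    ys = [G(u_true) + rng.multivariate_normal(np.zeros(2), C1) for _ in range(n)]
    u_n = minimize(lambda u: I_n(u, ys), np.zeros(2)).x
    print(n, u_n)                                 # u_n approaches a point with G(u_n) ~ G(u_true)
```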

4.2. Small noise limit

Consider the case where, as data, we have the random vector

Equation (4.5)

for $n \in \mathbb {N}$ and with u† again as the true solution and $\eta _j\sim \mathcal {N}(0,\mathcal {C}_1)$, $j\in \mathbb {N}$, Gaussian random vectors in $\mathbb {R}^K$. Thus, in the preceding general setting, we have $G=\mathcal {G}$ and J = K. Rather than having n independent observations, we have an observation noise scaled by small γ = 1/n converging to zero. For these data and given the prior measure μ0 on u, we have the following formula for the posterior measure:

By the result of the previous section, the MAP estimators for the above measure are the minimizers of

Equation (4.6)

Our interest is in studying the properties of the limits of minimizers of In as n → ∞. We have the following almost-sure convergence result.

Theorem 4.4. Assume that $\mathcal {G}:X\rightarrow \mathbb {R}^K$ is Lipschitz on bounded sets, and u† ∈ E. For every $n \in \mathbb {N}$, let un ∈ E be a minimizer of In(u) given by (4.6). Then, there exist a u* ∈ E and a subsequence of $\lbrace u_n\rbrace _{n \in \mathbb {N}}$ that converges weakly to u* in E, almost surely. For any such u*, we have $\mathcal {G}(u^*)=\mathcal {G}(u^\dagger )$.

Proof. The proof is very similar to that of theorem 4.1 and so we only sketch differences. We have

Letting

we hence have arg minuIn = arg minuJn. For this Jn, the result of lemma 4.2 holds true, using an argument similar to the large sample size case. The argument used in the proof of theorem 4.1 carries over as well. Indeed, by the definition of un, we have

Therefore,

Using Young's inequality for the last term on the right-hand side, we obtain

Taking expectation, we obtain

This implies that

Equation (4.7)

and

Equation (4.8)

Having (4.7) and (4.8), and with the same argument as in the proof of theorem 4.1, it follows that there exist a u* ∈ E and a subsequence of {un} that converges weakly to u* in E almost surely, and for any such u*, we have $\mathcal {G}(u^*)=\mathcal {G}(u^\dagger )$. □

As in the large sample size case, here also, if we have u† ∈ X and we do not restrict the true solution to be in the Cameron–Martin space E, one can prove, in a similar way to the argument of the proof of corollary 4.3, the following weaker convergence result.

Corollary 4.5. Suppose that $\mathcal {G}$ and un satisfy the assumptions of theorem 4.4, and that u† ∈ X. Then there exists a subsequence of $\lbrace \mathcal {G}(u_n)\rbrace _{n\in \mathbb {N}}$ converging to $\mathcal {G}(u^\dagger )$ almost surely.
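A small-noise analogue of the earlier toy computation (again ours, with the same illustrative map): for a single observation with noise level γ → 0, minimizers of a Tikhonov-type functional of the form ½γ⁻²|y − G(u)|² + ½‖u‖²_E approach a point whose image under G is G(u†), as in theorem 4.4 and corollary 4.5.

```python
# Small-noise sketch (ours): the MAP objective for a single observation
# y = G(u_true) + gamma*eta, minimized for decreasing gamma.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
G = lambda u: np.array([u[0]**3 + u[1], u[1]])    # same toy map as before
u_true = np.array([0.5, -0.3])

for gamma in [1.0, 0.3, 0.1, 0.03, 0.01]:
    y = G(u_true) + gamma * rng.standard_normal(2)
    I = lambda u, y=y, g=gamma: 0.5 * np.sum((y - G(u))**2) / g**2 + 0.5 * np.sum(u**2)
    u_map = minimize(I, np.zeros(2)).x
    print(gamma, u_map, G(u_map) - G(u_true))     # data misfit shrinks with gamma
```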

5. Applications in fluid mechanics

In this section, we present an application of the methods presented above to filtering and smoothing in fluid dynamics, which is relevant to data assimilation applications in oceanography and meteorology. We link the MAP estimators introduced in this paper to the variational methods used in applications [2], and we demonstrate posterior consistency in this context.

We consider the 2D Navier–Stokes equation on the torus $\mathbb {T}^{2} := [-1,1) \times [-1,1)$ with periodic boundary conditions:

$\partial _tv-\nu \Delta v+(v\cdot \nabla )v+\nabla p=f\quad \mbox{for all }(x,t)\in \mathbb {T}^2\times (0,\infty ),$
$\nabla \cdot v=0\quad \mbox{for all }(x,t)\in \mathbb {T}^2\times (0,\infty ),$
$v(x,0)=u(x),\quad x\in \mathbb {T}^2.$

Here, $v :\mathbb {T}^{2} \times (0, \infty ) \rightarrow \mathbb {R}^{2}$ is a time-dependent vector field representing the velocity, $p :\mathbb {T}^{2} \times (0,\infty ) \rightarrow \mathbb {R}$ is a time-dependent scalar field representing the pressure, $f :\mathbb {T}^{2} \rightarrow \mathbb {R}^{2}$ is a vector field representing the forcing (which we assume to be time independent for simplicity) and ν is the viscosity. We are interested in the inverse problem of determining the initial velocity field u from pointwise measurements of the velocity field at later times. This is a model for the situation in weather forecasting where observations of the atmosphere are used to improve the initial condition used for forecasting. For simplicity, we assume that the initial velocity field is divergence free and integrates to zero over $\mathbb {T}^2$, noting that this property will be preserved in time.

Define

$\mathcal {H}:=\Bigl \lbrace u:\mathbb {T}^2\rightarrow \mathbb {R}^2\mbox{ a trigonometric polynomial}\,:\,\nabla \cdot u=0,\ \int _{\mathbb {T}^2}u(x)\,\mathrm{d}x=0\Bigr \rbrace$

and H as the closure of $\mathcal {H}$ with respect to the $(L^{2}(\mathbb {T}^{2}))^{2}$ norm. We define the map $P:(L^{2}(\mathbb {T}^{2}))^{2} \rightarrow H$ to be the Leray–Helmholtz orthogonal projector (see [31]). Given k = (k1, k2)T, define $k^{\perp } := (k_2, -k_1)^T$. Then an orthonormal basis for H is given by $\psi _k :\mathbb {R}^{2} \rightarrow \mathbb {R}^{2}$, where

for $k \in \mathbb {Z}^{2} \setminus \lbrace 0\rbrace$. Thus, for u ∈ H, we may write

$u=\sum _{k\in \mathbb {Z}^{2}\setminus \lbrace 0\rbrace }u_k\psi _k,$

where, since u is a real-valued function, we have the reality constraint $u_{-k} = - \bar{u}_k$. Using the Fourier decomposition of u, we define the fractional Sobolev spaces

$H^s:=\Bigl \lbrace u\in H\,:\,\sum _{k\in \mathbb {Z}^{2}\setminus \lbrace 0\rbrace }(\pi ^{2}|k|^{2})^{s}|u_k|^{2}<\infty \Bigr \rbrace ,$

with the norm $\Vert u \Vert _s:= \bigl (\sum _k (\pi ^{2}| k |^{2})^{s} | u_k |^{2}\bigr )^{1/2}$, where $s \in \mathbb {R}$. If A = −PΔ, the Stokes operator, then Hs = D(As/2). We assume that fHs for some s > 0.
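The Fourier characterization makes both the norms ‖·‖s and draws from Gaussian measures of the form N(0, A−α) easy to compute; the sketch below (ours, ignoring the vector-valued reality constraint for brevity) uses α = 2, the prior adopted in the numerical experiments later in this section, and illustrates that samples have finite ‖·‖s norm only for s < 1.

```python
# Spectral sketch (ours) of a draw from N(0, A^{-2}) and of the Sobolev norm ||u||_s.
# Coefficients are drawn independently with variance (pi^2 |k|^2)^{-2}; the reality
# constraint and vector structure are ignored for brevity.
import numpy as np

rng = np.random.default_rng(2)
K = 64                                                    # wavenumber cutoff (illustrative)
kx, ky = np.meshgrid(np.arange(-K, K + 1), np.arange(-K, K + 1), indexing="ij")
k2 = (kx**2 + ky**2).astype(float)
k2[K, K] = np.inf                                         # exclude k = 0
mask = k2 < np.inf

u_k = rng.standard_normal(k2.shape) / (np.pi**2 * k2)     # std dev = lambda_k^{1/2}

def sobolev_norm(u_k, s):
    return np.sqrt(np.sum((np.pi**2 * k2[mask])**s * u_k[mask]**2))

for s in [0.0, 0.5, 0.9, 1.0]:
    # norms for s < 1 stay O(1); the s = 1 norm grows (slowly) as the cutoff K increases
    print(s, sobolev_norm(u_k, s))
```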

Let $t_\ell = \ell h$, for ℓ = 0, ..., L, and define $v_\ell \in \mathbb {R}^M$ to be the set of pointwise values of the velocity field given by $\lbrace v(x_m,t_\ell )\rbrace _{m \in \mathbb {M}}$, where $\mathbb {M}$ is some finite set of points in $\mathbb {T}^2$ with cardinality M/2. Note that each v depends on u and we may define $\mathcal {G}_\ell :H \rightarrow \mathbb {R}^{M}$ by $\mathcal {G}_\ell (u)=v_\ell$. We let {η}ℓ ∈ {1, ..., L} be a set of random variables in $\mathbb {R}^{M}$ which perturb the points {v}ℓ ∈ {1, ..., L} to generate the observations {y}ℓ ∈ {1, ..., L} in $\mathbb {R}^{M}$ given by

$y_\ell =\mathcal {G}_\ell (u)+\gamma \eta _\ell ,\qquad \ell =1,\ldots ,L.$

We let $y=\lbrace y_\ell \rbrace _{\ell =1}^L$, the accumulated data up to time T = Lh, with similar notation for η, and define $\mathcal {G}:H \rightarrow \mathbb {R}^{ML}$ by $\mathcal {G}(u)= (\mathcal {G}_1(u), \ldots ,\mathcal {G}_L(u))$. We now solve the inverse problem of finding u from $y=\mathcal {G}(u)+\gamma \eta$. We assume that the prior distribution on u is a Gaussian $\mu _0=N(0,\mathcal {C}_0)$, with the property that μ0(H) = 1 and that the observational noise {η}ℓ ∈ {1, ..., L} is i.i.d. in $\mathbb {R}^M$, independent of u, with η1 distributed according to a Gaussian measure N(0, I). If we define

$\Phi (u;y):=\frac{1}{2\gamma ^2}\sum _{\ell =1}^{L}|y_\ell -\mathcal {G}_\ell (u)|^2,$

then under the preceding assumptions, the Bayesian inverse problem for the posterior measure μy for u|y is well defined and is Lipschitz in y with respect to the Hellinger metric (see [7]). The Onsager–Machlup functional in this case is given by

$I_{\rm NS}(u)=\frac{1}{2\gamma ^2}\sum _{\ell =1}^{L}|y_\ell -\mathcal {G}_\ell (u)|^2+\frac{1}{2}\Vert \mathcal {C}_0^{-1/2}u\Vert ^2.$

We are in the setting of subsection 4.2, with γ = 1/n and K = ML. In the applied literature, approaches to assimilating data into mathematical models based on minimizing INS are known as variational methods, and sometimes as 4DVAR [2].
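For orientation, the structure of the resulting optimization problem can be written down generically; the following schematic (ours, not the code used for figure 1) assumes an abstract forward solver solve_ns(u, t) returning the model velocities at the observation points at time t, and an operator apply_C0_inv_sqrt implementing $\mathcal {C}_0^{-1/2}$; both are assumptions, not supplied here.

```python
# Schematic (ours) of the 4DVAR-type objective I_NS: data misfit plus
# Cameron-Martin penalty.  The forward solver and prior operator are abstract.
import numpy as np

def I_NS(u, y, obs_times, gamma, apply_C0_inv_sqrt, solve_ns):
    """Onsager-Machlup functional for the Navier-Stokes smoothing problem (sketch)."""
    misfit = sum(np.sum((y[l] - solve_ns(u, t))**2) for l, t in enumerate(obs_times))
    prior = np.sum(apply_C0_inv_sqrt(u)**2)          # ||C0^{-1/2} u||^2
    return misfit / (2 * gamma**2) + prior / 2
```

In practice one would pass I_NS, together with its adjoint-based gradient, to a quasi-Newton optimizer; the experiments reported below minimize exactly this kind of functional.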

We now describe numerical experiments concerned with studying posterior consistency in the case γ → 0. We let $\mathcal {C}_0 = A^{-2}$, noting that if u ∼ μ0, then u ∈ Hs almost surely for all s < 1; in particular, u ∈ H. Thus μ0(H) = 1 as required. The forcing f is taken to be $f = \nabla ^{\perp }\Psi$, where Ψ = cos (πk · x) and $\nabla ^{\perp } = J\nabla$ with J the canonical skew-symmetric matrix and k = (5, 5). The dimension of the attractor is determined by the viscosity parameter ν. For the particular forcing used, there is an explicit steady state for all ν > 0, and for ν ⩾ 0.035, this solution is stable (see [26], chapter 2, for details). As ν decreases, the flow becomes increasingly complex, and we focus subsequent studies of the inverse problem on the mildly chaotic regime which arises for ν = 0.01. We use a time-step of δt = 0.005. The data are generated by computing a true signal solving the Navier–Stokes equation at the desired value of ν, and then adding Gaussian random noise to it at each observation time. Furthermore, we let h = 4δt = 0.02 and take L = 10, so that T = 0.2. We take $M = 32^2$ spatial observations at each observation time. The observations are made at the gridpoints; thus the observations include all numerically resolved, and hence observable, wavenumbers in the system. Since the noise is added in spectral space in practice, for convenience we define $\sigma = \gamma / \sqrt{M}$ and present the results in terms of σ. The same grid is used for computing the reference solution and the MAP estimator.

Figure 1 illustrates the posterior consistency which arises as the observational noise strength γ → 0. The three curves shown quantify (i) the relative error of the MAP estimator u* compared with the truth, u†, (ii) the relative error of $\mathcal {G}(u^*)$ compared with $\mathcal {G}(u^{\dagger })$ and (iii) the relative error of $\mathcal {G}(u^*)$ with respect to the observations y. The figure clearly illustrates theorem 4.4, via the dashed curve for (ii), and indeed shows that the MAP estimator itself is converging to the true initial condition, via the solid curve, as γ → 0. Recall that the observations approach the true value of the initial condition, mapped forward under $\mathcal {G}$, as γ → 0, and note that the dashed and dash–dotted curves show that the image of the MAP estimator under the forward operator $\mathcal {G}$, $\mathcal {G}(u^*)$, is closer to $\mathcal {G}(u^\dagger )$ than y, asymptotically as γ → 0.


Figure 1. Illustration of posterior consistency in the fluid mechanics application. The three curves given are the relative error of the MAP estimator u* in reproducing the truth, u† (solid), the relative error of the map $\mathcal {G}(u^*)$ in reproducing $\mathcal {G}(u^{\dagger })$ (dashed) and the relative error of $\mathcal {G}(u^*)$ with respect to the observations y (dash–dotted).


6. Applications in conditioned diffusions

In this section, we consider the MAP estimator for conditioned diffusions, including bridge diffusions and an application to filtering/smoothing. We identify the Onsager–Machlup functional governing the MAP estimator in three different cases. We demonstrate numerically that this functional may have more than one minimizer. Furthermore, we illustrate the results of the consistency theory in section 4 using numerical experiments. Subsection 6.1 concerns the unconditioned case, and includes the assumptions made throughout. Subsections 6.2 and 6.3 describe bridge diffusions and the filtering/smoothing problem, respectively. Finally, subsection 6.4 is devoted to numerical experiments for an example in filtering/smoothing.

6.1. Unconditioned case

For simplicity, we restrict ourselves to scalar processes with additive noise, taking the form

Equation (6.1): $\mathrm{d}u(t)=f(u(t))\,\mathrm{d}t+\sigma \,\mathrm{d}W(t),\qquad u(0)=u^-,\quad t\in [0,T].$

If we let ν denote the measure on $X := C([0,T];\mathbb {R})$ generated by the stochastic differential equation (SDE) given in (6.1), and ν0 the same measure obtained in the case f ≡ 0, then the Girsanov theorem states that ν ≪ ν0 with density

$\frac{\mathrm{d}\nu }{\mathrm{d}\nu _0}(u)\propto \exp \Bigl (\frac{1}{\sigma ^2}\int _0^Tf(u(t))\,\mathrm{d}u(t)-\frac{1}{2\sigma ^2}\int _0^Tf(u(t))^2\,\mathrm{d}t\Bigr ).$

If we choose an $F:\mathbb {R}\rightarrow \mathbb {R}$ with F'(u) = f(u), then an application of Itô's formula gives

$F(u(T))-F(u(0))=\int _0^Tf(u(t))\,\mathrm{d}u(t)+\frac{\sigma ^2}{2}\int _0^Tf^{\prime }(u(t))\,\mathrm{d}t,$

and using this expression to remove the stochastic integral, we obtain

Equation (6.2)

Thus, the measure ν has a density with respect to the Gaussian measure ν0 and (6.2) takes the form (1.1) with μ = ν and μ0 = ν0; we have

$\frac{\mathrm{d}\nu }{\mathrm{d}\nu _0}(u)\propto \exp \bigl (-\Phi _1(u)\bigr ),$

where $\Phi _1:X \rightarrow \mathbb {R}$ is defined by

Equation (6.3): $\Phi _1(u)=\int _0^T\Psi (u(t))\,\mathrm{d}t-\frac{1}{\sigma ^2}F(u(T)),$

and

$\Psi (v)=\frac{1}{2\sigma ^2}\bigl (f(v)^2+\sigma ^2f^{\prime }(v)\bigr ).$

We make the following assumption concerning the vector field f driving the SDE.

Assumption 6.1. The function f = F' in (6.1) satisfies the following conditions.

  • (1)  
    $F \in C^2(\mathbb {R},\mathbb {R})$.
  • (2)  
    There is $M \in \mathbb {R}$ such that Ψ(u) ⩾ M for all $u \in \mathbb {R}$ and F(u) ⩽ M for all $u \in \mathbb {R}$.

Under these assumptions, we see that Φ1 given by (6.3) satisfies assumptions 2.1 and, indeed, the slightly stronger assumptions made in theorem 3.5. Let H1[0, T] denote the space of absolutely continuous functions on [0, T] with square-integrable derivatives. Then the Cameron–Martin space E1 for ν0 is

$E_1=\bigl \lbrace u\in H^1[0,T]\,:\,u(0)=0\bigr \rbrace$

and the Cameron–Martin norm is given by

$\Vert u\Vert _{E_1}=\frac{1}{\sigma }\Vert u\Vert _{H^1},$

where

$\Vert u\Vert _{H^1}^2=\int _0^T|\dot{u}(t)|^2\,\mathrm{d}t.$

The mean of ν0 is the constant function m(t) ≡ u− and so, using remark 2.2, we see that the Onsager–Machlup functional for the unconditioned diffusion (6.1) is thus $I_1:E_1^{\prime } \rightarrow \mathbb {R}$ given by

$I_1(u)=\frac{1}{2\sigma ^2}\int _0^T|\dot{u}(t)|^2\,\mathrm{d}t+\Phi _1(u).$

Together, theorems 3.2 and 3.5 tell us that this functional attains its minimum over $E_1^{\prime }$ defined by

$E_1^{\prime }=\bigl \lbrace u\in H^1[0,T]\,:\,u(0)=u^-\bigr \rbrace .$

Furthermore, such minimizers define MAP estimators for the unconditioned diffusion (6.1), i.e. the most likely paths of the diffusion.

We note that the regularity of minimizers for I1 implies that the MAP estimator is C2, whilst sample paths of the SDE (6.1) are not even differentiable. This is because the MAP estimator defines the centre of a tube in X which contains the most likely paths. The centre itself is a smoother function than the paths. This is a generic feature of MAP estimators for measures defined via density with respect to a Gaussian in infinite dimensions.

6.2. Bridge diffusions

In this subsection, we study the probability measure generated by solutions of (6.1), conditioned to hit u+ at time T so that u(T) = u+, and denote this measure μ. Let μ0 denote the Brownian bridge measure obtained in the case f ≡ 0. By applying the approach to determining bridge diffusion measures in [17], we obtain, from (6.2), the expression

Equation (6.4)

Since u+ is fixed, we now define $\Phi _2:X \rightarrow \mathbb {R}$ by

$\Phi _2(u)=\int _0^T\Psi (u(t))\,\mathrm{d}t,$

and then (6.4) takes again the form (1.1). The Cameron–Martin space for the (zero mean) Brownian bridge is

$E_2=\bigl \lbrace u\in H^1[0,T]\,:\,u(0)=u(T)=0\bigr \rbrace$

and the Cameron–Martin norm is again $\sigma ^{-1}\Vert \cdot \Vert _{H^1}$. The Onsager–Machlup functional for the bridge diffusion is thus $I_2:E_2^{\prime } \rightarrow \mathbb {R},$ given by

$I_2(u)=\frac{1}{2\sigma ^2}\int _0^T|\dot{u}(t)-\dot{m}(t)|^2\,\mathrm{d}t+\Phi _2(u),$

where m, given by $m(t) = \frac{T-t}{T} u^- + \frac{t}{T} u^+$ for all t ∈ [0, T], is the mean of μ0 and

$E_2^{\prime }=\bigl \lbrace u\in H^1[0,T]\,:\,u(0)=u^-,\ u(T)=u^+\bigr \rbrace .$

The MAP estimators for μ are found by minimizing I2 over $E_2^{\prime }$.

6.3. Filtering and smoothing

We now consider conditioning the measure ν on observations of the process u at discrete time points. Assume that we observe $y \in \mathbb {R}^J$ given by

Equation (6.5): $y_j=u(t_j)+\eta _j,\qquad j=1,\ldots ,J,$

where 0 < t1 < ⋅⋅⋅ < tJ < T and the ηj are independent identically distributed random variables with ηj ∼ N(0, γ2). Let $\mathbb {Q}_0(\mathrm{d}y)$ denote the $\mathbb {R}^J$-valued Gaussian measure N(0, γ2I) and let $\mathbb {Q}(\mathrm{d}y|u)$ denote the $\mathbb {R}^J$-valued Gaussian measure $N(\mathcal {G}u,\gamma ^2 I)$, where $\mathcal {G}:X \rightarrow \mathbb {R}^J$ is defined by

$\mathcal {G}u=\bigl (u(t_1),\ldots ,u(t_J)\bigr ).$

Recall ν0 and ν from the unconditioned case and define the measures $\mathbb {P}_0$ and $\mathbb {P}$ on $X\times \mathbb {R}^J$ as follows. The measure $\mathbb {P}_0(\mathrm{d}u,\mathrm{d}y)=\nu _0(\mathrm{d}u)\mathbb {Q}_0(\mathrm{d}y)$ is defined to be an independent product of ν0 and $\mathbb {Q}_0$, whilst $\mathbb {P}(\mathrm{d}u,\mathrm{d}y)=\nu (\mathrm{d}u)\mathbb {Q}(\mathrm{d}y|u)$. Then,

with the constant of proportionality depending only on y. Clearly, by continuity,

and hence,

Applying the conditioning lemma 5.3 in [17] then gives

Thus, we define

$\Phi _3(u)=\Phi _1(u)+\frac{1}{2\gamma ^2}\sum _{j=1}^J|y_j-u(t_j)|^2.$

The Cameron–Martin space is again E1 and the Onsager–Machlup functional is thus $I_3:E_1^{\prime } \rightarrow \mathbb {R}$, given by

Equation (6.6): $I_3(u)=\frac{1}{2\sigma ^2}\int _0^T|\dot{u}(t)|^2\,\mathrm{d}t+\Phi _3(u).$

The MAP estimator for this set-up is, again, found by minimizing the Onsager–Machlup functional I3.

The only difference between the potentials Φ1 and Φ3, and thus between the functionals I1 for the unconditioned case and I3 for the case with discrete observations, is the presence of the term $\frac{1}{2\gamma ^2}\sum _{j=1}^J |y_j-u(t_j)|^2$. In the Euler–Lagrange equations describing the minima of I3, this term leads to Dirac distributions at the observation points t1, ..., tJ, and it transpires that, as a consequence, minimizers of I3 have jumps in their first derivatives at t1, ..., tJ. This effect can be clearly seen in the local minima of I3 shown in figure 2.


Figure 2. Illustration of the problem of local minima of I for the smoothing problem with a small number of observations. The process u(t) starts at u(0) = −1 and moves in a double-well potential with stable equilibrium points at −1 and +1. Two observations of the process are indicated by the two black circles. The curves correspond to four different local minima of the functional I3 for this situation.


6.4. Numerical experiments

In this section, we perform three numerical experiments related to the MAP estimator for the filtering/smoothing problem presented in section 6.3.

For the experiments, we generate a random 'signal' by numerically solving the SDE (6.1), using the Euler–Maruyama method, for a double-well potential F given by

with diffusion constant σ = 1 and initial value u(0) = −1. From the resulting solution u(t), we generate random observations y1, ..., yJ using (6.5). Then we implement the Onsager–Machlup functional I3 from equation (6.6) and use numerical minimization, employing the Broyden–Fletcher–Goldfarb–Shanno method (see [13]; we use the implementation found in the GNU scientific library [14]), to find the minima of I3. The same grid is used for numerically solving the SDE and for approximating the values of I3.
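A compressed sketch (ours, not the authors' implementation) of this pipeline is given below; it uses SciPy's BFGS in place of the GSL routine, takes the potential Φ1 in the form reconstructed in section 6.1, and the drift, the grid and all parameter values are illustrative rather than those behind figures 2–4. Different starting points of the minimization may, as discussed below, converge to different local minima.

```python
# Sketch (ours) of the smoothing experiment: Euler-Maruyama simulation of (6.1),
# synthetic observations (6.5), grid discretization of I_3, BFGS minimization.
# All choices below (drift, grid, parameters, observation pattern) are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, N, sigma, gamma = 1.0, 100, 1.0, 0.1
dt = T / N

f  = lambda u: u - u**3                      # illustrative drift: stable points at -1, +1
df = lambda u: 1.0 - 3.0 * u**2
F  = lambda u: 0.5 * u**2 - 0.25 * u**4      # an antiderivative of f

# Euler-Maruyama sample path of (6.1) started at u(0) = -1
u_true = np.empty(N + 1)
u_true[0] = -1.0
for k in range(N):
    u_true[k + 1] = u_true[k] + f(u_true[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# observations y_j = u(t_j) + eta_j at a few interior grid points, cf. (6.5)
obs_idx = np.array([20, 40, 60, 80])
y = u_true[obs_idx] + gamma * rng.standard_normal(obs_idx.size)

def I3(path):
    """Grid discretization of I_3: kinetic term + Phi_1 (as reconstructed in 6.1) + data misfit."""
    u = np.concatenate(([-1.0], path))       # enforce u(0) = -1
    kinetic = np.sum(np.diff(u)**2) / (2 * sigma**2 * dt)
    phi1 = dt * np.sum(f(u[:-1])**2 + sigma**2 * df(u[:-1])) / (2 * sigma**2) - F(u[-1]) / sigma**2
    data = np.sum((y - u[obs_idx])**2) / (2 * gamma**2)
    return kinetic + phi1 + data

# different starting points may converge to different local minima (see figure 2)
res = minimize(I3, np.zeros(N), method="BFGS")
u_map = np.concatenate(([-1.0], res.x))
print("I3 at minimizer:", res.fun)
```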

The first experiment concerns the problem of local minima of I3. For a small number of observations, we find multiple local minima; the minimization procedure can converge to different local minima, depending on the starting point of the optimization. This effect makes it difficult to find the MAP estimator, which is the global minimum of I3, numerically. The problem is illustrated in figure 2, which shows four different local minima for the case of J = 2 observations. In the presence of local minima, some care is needed when numerically computing the MAP estimator. For example, one could start the minimization procedure with a collection of different starting points and take the best of the resulting local minima as the result. One would expect this problem to become less pronounced as the number of observations increases, since the observations will 'pull' the MAP estimator towards the correct solution, thus reducing the number of local minima. This effect is confirmed by experiments: for larger numbers of observations, our experiments found only one local minimum.

The second experiment concerns the posterior consistency of the MAP estimator in the small noise limit. Here, we use a fixed number J of observations of a fixed path of (6.1), but let the variance γ2 of the observational noise ηj converge to 0. Noting that the exact path of the SDE, denoted by u† in (4.5), has the regularity of a Brownian motion and therefore the observed path is not contained in the Cameron–Martin space E3, we are in the situation described in corollary 4.5. Our experiments indicate that we have $\mathcal {G}(u_\gamma ) \rightarrow \mathcal {G}(u^\dagger )$ as γ↓0, where uγ denotes the MAP estimator corresponding to observational variance γ2, confirming the result of corollary 4.5. As discussed above, for small values of γ, one would expect the minimum of I3 to be unique, and indeed, experiments where different starting points of the optimization procedure were tried did not find different minima for small γ. The result of a simulation with J = 5 is shown in figure 3.


Figure 3. Illustration of posterior consistency for the smoothing problem in the small-noise limit. The marked points correspond to the maximum-norm distance between the true signal u† and the MAP estimator uγ with J = 5 evenly spaced observations. The map $\mathcal {G}(u) = (u(t_1), \ldots , u(t_J))$ is the projection of the path onto the observation points. The solid line is a fitted curve of the form cγ.


Finally, we perform an experiment to illustrate posterior consistency in the large sample size limit: for this experiment, we still use one fixed path u† of the SDE (6.1). Then, for different values of J, we generate observations y1, ..., yJ using (6.5) at equidistantly spaced times t1, ..., tJ, for fixed γ = 1, and then determine the L2 distance of the resulting MAP estimate uJ to the exact path u†. As discussed above, for large values of J, one would expect the minimum of I3 to be unique, and indeed, experiments where different starting points of the optimization procedure were tried did not find different minima for large J. The situation considered here is not covered by the theoretical results from section 4, but the results of the numerical experiment, shown in figure 4, indicate that posterior consistency still holds.


Figure 4. Illustration of posterior consistency for the smoothing problem in the large sample size limit. The marked points correspond to the supremum-norm distance between the true signal u† and the MAP estimator uJ with J evenly spaced observations. The solid line is a fitted curve of the form cJ−α; the exponent α = 1/4 was found numerically.
