Paper | Open access

Codifference can detect ergodicity breaking and non-Gaussianity

Jakub Ślęzak, Ralf Metzler and Marcin Magdziarz

Published 6 May 2019 © 2019 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Jakub Ślęzak et al 2019 New J. Phys. 21 053008. DOI: 10.1088/1367-2630/ab13f3


Abstract

We show that the codifference is a useful tool in studying the ergodicity breaking and non-Gaussianity properties of stochastic time series. While the codifference is a measure of dependence that was previously studied mainly in the context of stable processes, we here extend its range of applicability to random-parameter and diffusing-diffusivity models which are important in contemporary physics, biology and financial engineering. We prove that the codifference detects forms of dependence and ergodicity breaking which are not visible from analysing the covariance and correlation functions. We also discuss a related measure of dispersion, which is a nonlinear analogue of the mean squared displacement.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

1.1. Statistical measures in modelling of diffusion

The analysis of stochastic systems has three important and partially distinct aspects: models, properties and estimation. These roughly correspond to the physical, mathematical and statistical aspects of research. Modelling is concerned with explaining the nature of a system according to the underlying theory (e.g. 'the particle undergoes Brownian motion, because it rapidly exchanges momenta with the molecules of liquid'). The analysis of statistical properties (also called 'measures') relates these models to observable quantities ('Brownian motion has a linear mean squared displacement'). By using suitable estimators we link these properties to the experimental data ('the mean squared displacement can be efficiently estimated by an arithmetic average over squared displacements').

This work is motivated by our conviction that the available choice of statistical measures is too small for contemporary needs, as the scope and number of models have increased considerably [1]. The classical models based on the Langevin equation [2], the generalised Langevin equation [3, 4], as well as short- [5] and long- [6, 7] memory random walks were complemented by motions on fractals [8], motions in complex energy landscapes [9], random walks in random environments [10, 11], random walks with correlated steps and waiting times [12–16] and Lévy walks [17], spatially heterogeneous diffusion processes [18], diffusing-diffusivity [19] and more. Distinguishing between different models from this wide class is of course crucially dependent on the physical understanding of the system, but this requirement does not lessen the importance of empirical verification based on various measures and corresponding estimators. From an experimental point of view the large range of different stochastic processes is called for by ever more detailed insights garnered in highly complex environments such as living biological cells or membranes, for instance, by single particle tracking of individual sub-micron tracers or even fluorescently labelled single molecules [20–22].

Traditionally, in the study of diffusion phenomena, the three most basic and popular statistical measures are the mean as a measure of location, the mean squared displacement (MSD) as a measure of dispersion, and the covariance as a measure of dependence,

$$\mu_X(t) := \mathbb{E}[X_t],\qquad \delta_X^2(t) := \mathbb{E}\!\left[\left(X_t-\mathbb{E}[X_t]\right)^2\right],\qquad r_X(t) := \mathbb{E}[X_{s+t}X_s]-\mathbb{E}[X_{s+t}]\,\mathbb{E}[X_s]. \qquad (1.1)$$

Other, alternative choices of measures could be, for example: the median for the location [23], entropy [24] or quantile ranges [23] for the dispersion, the rank correlation [23, 25] or the mutual information [26] for the dependence.

The covariance as defined above should not depend on the choice of s, which is true for stationary processes (the term 'non-ageing' is also in use). We will assume stationarity whenever we study memory. In practical applications this condition is fulfilled by many types of confined motions or by the increments of free diffusions. The more general non-stationary case will only be mentioned briefly in equations (3.2) and (3.4). Many of the arguments presented here could be extended to non-stationary models, but this would require a case-by-case study. Conversely, measures of dispersion and location are interesting mostly for non-stationary (ageing) processes, since otherwise they are constant; the cases discussed here fit into that category.

The present range of typically employed measures which can be effectively used for studying diffusion is indeed quite limited, and the need for a wider range of methods has been acknowledged for many years. Various papers proposed, e.g. studying higher order moments and ratios of moments [27], the running maximum [28], p-variation [29], or time averages and ensemble averages of time averages [30]. A prominent example of the last kind of measure is, e.g. the ergodicity breaking parameter [18, 31–33]. Recently also single-trajectory power spectral methods were proposed [34, 35]. These techniques are steadily gaining recognition, but the range of their application is often still narrow. Moreover, a large part of this important research is limited to studying properties 'not very different' from second-order ones. For example, any power function ${x}^{\alpha }$ for $\alpha \gt 1$ behaves similarly to x2 (i.e. it is an increasing, convex function) and parameters based on it are usually not far from the classical ones. They all strongly emphasise the tails of the distribution: any change of the distribution at large values of the observations has a larger influence than a change at small values. This connection is very helpful in making comparisons, but an important part of the total information is lost; it could be extracted using more distinct measures.

1.2. Overview of the codifference

Our main subject of interest, the codifference, is an example of a measure not based on moments. It was initially proposed as a tool to measure the dependence of α-stable processes, for which the second moment is infinite [37–42]. However, in many systems the divergence of the second moment is not an expected physical property, which limits the range of possible applications of stable processes. It was already noticed, e.g. in [43], that the codifference may be useful for models both with and without finite second moment. In our present work we study the applications of the codifference to a class of models based on Gaussian distributions, which we call conditionally Gaussian processes; as we will demonstrate, many useful and widely used models fit into this category.

The definition of the codifference which we will use is as follows: for any stationary process $X$ it is given by the formula

$$\tau_X^\theta(t) := \frac{1}{\theta^2}\,\ln\frac{\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta(X_{s+t}-X_s)}\right]}{\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta X_{s+t}}\right]\mathbb{E}\!\left[\mathrm{e}^{-\mathrm{i}\theta X_s}\right]}. \qquad (1.2)$$

The sample codifference is introduced in a standard way, by replacing the three ensemble averages ${\mathbb{E}}[{\boldsymbol{\cdot }}]$ in the above expression by arithmetic averages $\tfrac{1}{n}{\sum }_{j=1}^{n}({\boldsymbol{\cdot }})$. Similarly, one can consider a time-averaged codifference. For all symmetric distributions the considered averages should be real-valued, so in most of the practical applications one can average over $\cos (\theta ({\boldsymbol{\cdot }}))$ instead of $\exp ({\rm{i}}\theta ({\boldsymbol{\cdot }}));$ this was used for the Monte Carlo simulations which will be presented further on.
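In practice this estimator takes only a few lines of code. The following minimal Python sketch (our illustration; the trajectory array layout and all names are assumptions, not part of the original analysis) pools arithmetic means over an ensemble of stationary trajectories and over the admissible starting points s, and uses cosines as described above:

```python
import numpy as np

def sample_codifference(x, theta, lag):
    """Sample estimate of tau_X^theta(lag) following definition (1.2).

    x     : array of shape (n_trajectories, n_times) with stationary rows;
            ensemble averages are replaced by arithmetic means, pooled over
            all starting points s as well (a time-averaged variant).
    theta : Fourier variable of the codifference.
    lag   : time lag t in units of the sampling step.
    """
    xs = x[:, : x.shape[1] - lag]   # X_s for all admissible s
    xt = x[:, lag:]                 # X_{s+t}
    # for symmetric distributions the averages are real, so cosines suffice
    num = np.mean(np.cos(theta * (xt - xs)))
    den = np.mean(np.cos(theta * xt)) * np.mean(np.cos(theta * xs))
    return np.log(num / den) / theta**2
```

Note that for lag = 0 the numerator equals 1 and the estimate is positive, in agreement with property (a) below.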

Note that the so-called generalised codifference has ${X}_{s+t}$ and Xs multiplied by ${\theta }_{1}$ and ${\theta }_{2}$ respectively and contains even more information [37]. In the context of the models we consider this additional flexibility does not seem meaningful, so the cost of complicating our formulae would be unjustified.

By contrast, the basic formula for the codifference in the classical book of Samorodnitsky and Taqqu [37] is similar to ours, but with $\theta =1$. In the mathematical study of stable processes this is sufficient, but in broader physical applications introducing an arbitrary dimensional constant equal to unity is not desirable. In our definition the codifference has the unit of X2 due to the introduction of $1/{\theta }^{2}$. This factor makes the codifference comparable to the covariance and allows us to show them in the same plots. When this is not important the factor $1/{\theta }^{2}$ can be omitted. There exists an even more simplified object, the dynamical functional [44], which is just the numerator minus the denominator from (1.2) with $\theta =1;$ it is used to study ergodicity breaking [30, 45].

Unlike moments such as the covariance, the codifference depends on sines and cosines of $\theta {X}_{s+t}$ and $\theta {X}_{s}$. Expanding these functions into Taylor series around zero up to the first two terms and using the fact that for a stationary process ${\mathbb{E}}\left[{X}_{t}\right]=\mathrm{const}.$ shows that the codifference agrees with the covariance for distributions concentrated around the origin. The most essential difference is that the codifference mainly measures the dependence determined by the bulk of the probability density, in contrast to the covariance, which puts much larger emphasis on the tails. This is caused by the cancellation of highly oscillatory terms in the tails of the PDF, as stated by the Riemann–Lebesgue lemma, in contrast to the huge influence of the tails in the covariance caused by the quadratic factor in the probabilistic integral ${\mathbb{E}}[{X}_{s}{X}_{s+t}]$.

Because of the presence of two highly nonlinear transformations: sine/cosine and logarithm, definition (1.2) may initially not seem very intuitive. It becomes more natural if we interpret it as a conveniently transformed Fourier transform of the distribution (that is, the probabilistic characteristic function). In the full, multidimensional form, the characteristic function contains all information about the dependence. Moreover for Gaussian variables it has the very simple form $\exp (-{\left(\theta \sigma \right)}^{2}/2)$, so it seems reasonable to use it as a dependence measure for models related to the Gaussian distribution. Still, it is not obvious that the codifference behaves as we would require from a memory function. Fortunately, simple arguments show that this is the case.

  • (a)  
    When ${X}_{s+t}={X}_{s}$ (the case of total positive dependence) the codifference is a positive constant ${\tau }_{X}^{\theta }(t)={\tau }_{X}^{\theta }(0)\gt 0$. If the values ${X}_{s+t}$ and Xs become independent, the codifference converges to 0. Both facts are immediate consequences of the definition together with
    $$\tau_X^\theta(t) = \tau_X^\theta(0) = -\frac{1}{\theta^2}\ln\left|\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta X_s}\right]\right|^2 \gt 0 \qquad (1.3)$$
    and, for ${X}_{s+t}$ independent of Xs,
    $$\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta(X_{s+t}-X_s)}\right] = \mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta X_{s+t}}\right]\mathbb{E}\!\left[\mathrm{e}^{-\mathrm{i}\theta X_s}\right],\quad\text{so that } \tau_X^\theta(t)=0. \qquad (1.4)$$
  • (b)  
    If the process is a sum of independent components ${X}_{t}={Y}_{t}+{Z}_{t}$ then the respective codifferences are additive
    $$\tau_X^\theta(t) = \tau_Y^\theta(t) + \tau_Z^\theta(t). \qquad (1.5)$$
    This property is important in common applications, where the observed process is usually at least to some degree disturbed by noise, which can most often be assumed to be additive and independent of the basic motion.
  • (c)  
    If ${\mathbb{E}}[{X}_{t}^{2}]\lt \infty $, the covariance can be viewed as a limit of the codifference,
    $$\lim_{\theta\to 0}\tau_X^\theta(t) = r_X(t), \qquad (1.6)$$
    which stems from expanding the complex exponents in definition (1.2) into a Taylor series up to the second term and noting that we obtain the logarithm of the expression ${\left(1+{\theta }^{2}{r}_{X}(t)+o({\theta }^{2})\right)}^{{\theta }^{-2}}$. It is then justified to treat the codifference as a generalisation of the covariance.
  • (d)  
    For a Gaussian process the codifference equals the covariance for any θ
    $$\tau_X^\theta(t) = r_X(t), \qquad (1.7)$$
    which follows immediately from a short calculation, see equation (3.6). Therefore comparing the codifference and the covariance can be used to measure non-Gaussianity.

One intuitive property that the codifference lacks is symmetry. Consider two variables: if we fix the first one and negate the second (x ↦ −x), we expect the strength of the dependence to stay the same and only its sign to change. This is the case for the covariance, but not for the codifference, which is nonlinear by design. Even in the borderline case ${X}_{s+t}=-{X}_{s}$ there is no guarantee that ${\tau }_{X}^{\theta }(t)\lt 0$; counterexamples can be given even for the otherwise well-behaved class of processes considered later. It is actually possible to remove this sometimes inconvenient property by introducing the symmetrised codifference

$$\widetilde{\tau}_X^{\,\theta}(t) := \frac{1}{2\theta^2}\,\ln\frac{\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta(X_{s+t}-X_s)}\right]}{\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta(X_{s+t}+X_s)}\right]}, \qquad (1.8)$$

which for all symmetric distributions changes sign with respect to reflection, ${X}_{s+t}\mapsto -{X}_{s+t}$. This quantity can be useful if one wants to compare the strength of positive and negative dependencies, but there is a cost: the symmetrised codifference is 'linear enough' to ignore many types of nonlinear ergodicity breaking, similarly to the covariance, see equation (3.12). For this reason further on we will use the non-symmetrised codifference and study systems with a positive type of dependence, at least in some suitable limit, such as $t\to \infty $.

Note that if the codifference is a generalisation of the covariance, one should reasonably expect that there exists a generalisation of the MSD defined in a similar spirit. Indeed, let us consider the formula

$$\zeta_X^\theta(t) := -\frac{2}{\theta^2}\,\ln\mathbb{E}\!\left[\mathrm{e}^{\mathrm{i}\theta X_t}\right]. \qquad (1.9)$$

This quantity may seem trivial, because studying the distribution in Fourier space is a classical method of basic probability theory. But, the distinguishing part of this definition is that the result is treated primarily as a function of time and it is conveniently transformed, so that it can be interpreted as a measure of dispersion with the same unit as X2. Up to a rescaling it can be considered a cumulant generating function calculated at imaginary argument, but such a quantity does not seem to have an established name in the literature, so we will call it by the straightforward term 'log characteristic function', in short LCF. It is clear that in analogy to the features of the codifference, the LCF measures mainly the spread of the bulk of the probability and is much less influenced by the distribution's tails than the MSD. As before, the first factor, here $2/{\theta }^{2}$, is optional and only needed when one wants to compare the LCF to the MSD.
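An analogous sample estimator of the LCF can be sketched as follows (again an illustrative fragment under the same conventions as before; for symmetric X_t the characteristic function is real, so cosines suffice):

```python
import numpy as np

def sample_lcf(x, theta):
    """Sample estimate of the LCF zeta_X^theta(t) at every time point.

    x : array of shape (n_trajectories, n_times); each column holds X_t
        over the ensemble.  Returns one LCF value per time point.  The
        estimate degrades when E[cos(theta X_t)] becomes comparable to
        the Monte Carlo error, i.e. for theta*X_t >> 1 (cf. figure 3).
    """
    phi = np.mean(np.cos(theta * x), axis=0)   # estimated E[cos(theta X_t)]
    return -2.0 / theta**2 * np.log(phi)       # the transform from (1.9)
```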

The LCF is indeed a reasonable measure of dispersion, as shown by the following properties.

  • (a)  
    For independent ${Y}_{t},{Z}_{t}$ and ${X}_{t}={Y}_{t}+{Z}_{t}$,
    $$\zeta_X^\theta(t) = \zeta_Y^\theta(t) + \zeta_Z^\theta(t). \qquad (1.10)$$
  • (b)  
    For any Gaussian process the LCF equals the MSD,
    $$\zeta_X^\theta(t) = \delta_X^2(t). \qquad (1.11)$$
  • (c)  
    As we stretch the probability density of ${X}_{t}$, the LCF diverges, that is,
    $$\lim_{c\to\infty}\zeta_{cX}^\theta(t) = \infty. \qquad (1.12)$$

The first two facts are analogues of the corresponding properties of the codifference which allow one to trace the influence of the noise and detect non-Gaussianity. The point (c) is just the Riemann–Lebesgue lemma in disguise: it corresponds to the intuition that the rescaled process should have a larger spread. It should be mentioned that in general the LCF can be negative or complex valued, which is highly undesirable. However, for the considered models, which are based on internal Gaussian dynamics, this will never be the case, as proved in proposition 2.

Decomposing any process with independent increments into a sum of its jumps shows that in this case ${\zeta }_{X}^{\theta }(t)$ is a linear function. In particular, this holds for Lévy flights [37]. It also holds for continuous time random walks with exponential waiting times [5], for which

$$\zeta_X^\theta(t) = \frac{2}{\theta^2}\left(1-\mathbb{E}[\cos(\theta J)]\right)\frac{t}{\mathbb{E}[T]}, \qquad (1.13)$$

where J is one jump and T is one waiting time of diffusion $X$. The dependence on T is the same as for the MSD,

$$\delta_X^2(t) = \mathbb{E}[J^2]\,\frac{t}{\mathbb{E}[T]}, \qquad (1.14)$$

only the prefactor determined by the distribution of J changes.
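This linear law is easy to verify numerically. The sketch below (our illustration with arbitrary parameters; the Laplace jump distribution is an example choice with E[cos(θJ)] = 1/(1+θ²)) simulates the walk by drawing Poisson jump counts and compares the estimated LCF with the linear prediction of (1.13):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mean_T, n = 0.5, 0.5, 100_000    # Fourier scale, E[T], ensemble size

for t in (1.0, 2.0, 4.0, 8.0):
    counts = rng.poisson(t / mean_T, size=n)   # number of jumps up to time t
    # X_t is the sum of `counts` i.i.d. jumps; J is standard Laplace here
    x_t = np.fromiter((rng.laplace(size=c).sum() for c in counts), float, n)
    lcf = -2 / theta**2 * np.log(np.mean(np.cos(theta * x_t)))
    pred = 2 / theta**2 * (1 - 1 / (1 + theta**2)) * t / mean_T  # equation (1.13)
    print(f"t = {t:4.1f}:  estimated LCF {lcf:7.3f},  linear law {pred:7.3f}")
```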

The LCF can also be used for finite- or infinite-variance models which are 'anomalous' in some sense. A basic example is fractional Lévy stable motion ${L}_{\alpha }^{H}$ [46]. It is stable and self-similar which implies that

$$\zeta_{L_\alpha^H}^\theta(t) = C_\theta\, t^{\alpha H} \qquad (1.15)$$

for some constant Cθ, which depends on the chosen normalisation. This formula agrees with the intuition that a measure of the spread in this case should behave like a power law. Somewhat surprisingly, the situation is different for continuous time random walks with power-law waiting times, which are used to model subdiffusion. Such processes after rescaling converge to subordinated Brownian motion $B({S}_{\alpha }(t))$, for which the LCF can be calculated directly, using the well-known properties of the inverse α-stable subordinator Sα [47],

$$\zeta_{B(S_\alpha)}^\theta(t) = -\frac{2}{\theta^2}\,\ln E_\alpha\!\left(-\frac{\theta^2}{2}\,t^\alpha\right), \qquad (1.16)$$

where Eα is the Mittag-Leffler function [48]. This function approaches infinity like a logarithm; the exact asymptotic is shown in equation (2.3). The difference between these two models of anomalous diffusion is that ${L}_{\alpha }^{H}$ is self-similar, so its PDF spreads in a uniform manner, whereas for $B({S}_{\alpha })$ the bulk is much more constrained than the tails.

After this brief discussion of the general properties of the codifference and related notions, we will study its behaviour in more detail for models based on random parameters of motion and for models based on a random and time-varying diffusion coefficient. The next section 2 provides a general physical overview and concrete examples useful for modelling. The third and last section 3 is dedicated to presenting mathematical results and calculation techniques. The paper is written such that, if the reader prefers, the physical and mathematical sections 2 and 3 can be read independently.

2. Modelling

2.1. Gaussian diffusion governed by random parameters

One of the core concepts behind ergodicity and ergodicity breaking is the idea of looking at the information contained in a single trajectory. We speak about ergodicity if the information that can possibly be gained by analysing one, sufficiently long, series of observations is the same as that obtained by analysing all possible trajectories in the ensemble [49]. Conversely, if this amount of information is smaller, we speak about ergodicity breaking. In other words, there is some information contained in a given trajectory, and using only a single trajectory we miss the information contained in the rest. This is sometimes also rephrased as confinement in the phase space, but this language must be used carefully, as the said space has a subtle structure.

From a different perspective, modelling based on the information content often leads to an intuitive description, because the differences between trajectories often stem from differences between diffusing particles and differences between their local surroundings. Both may occur, e.g. in biological systems. The latter case requires the additional assumption that the inhomogeneity present in the surroundings varies on the length scale of the mean distance between trajectories, but does not vary much at the scale of the trajectories themselves. That is, distinct trajectories have distinct surroundings, but each particle is sufficiently localised so that the state of the medium around it does not change significantly. This is reasonable for example when the particles are trapped or the measurement time is sufficiently short; compare, e.g. the absolute spread of the traced particles in [33].

In any case, this information can be parametrised, which leads to the so-called hierarchical or multilevel modelling [51], which in the context of physics is also called 'superstatistics' (a short term for 'superposition of statistics') [52]. Deterministic parameters of the basic model become random on an additional statistical layer.

2.1.1. Random diffusion coefficient

For diffusion the simplest example of a hierarchical model is motion with a random diffusion coefficient, the situation when different trajectories depict movements with varying average mobilities. A typical model of such observations is grey Brownian motion [53–55]

$$B_{2H,\beta}(t) := \sqrt{D_\beta}\,B_H(t). \qquad (2.1)$$

Here BH is fractional Brownian motion [56] and the diffusion coefficient Dβ is an independent random variable with the so-called M-Wright distribution with parameter β [57]. The moments of grey Brownian motion are the same as those of fractional Brownian motion up to a multiplicative constant, therefore the MSD still grows as ${t}^{2H}$ and the process models anomalous diffusion. Nevertheless, a straightforward calculation yields that the LCF can be expressed using the Mittag-Leffler function,

$$\zeta_{B_{2H,\beta}}^\theta(t) = -\frac{2}{\theta^2}\,\ln E_\beta\!\left(-\frac{\theta^2}{2}\,t^{2H}\right), \qquad (2.2)$$

which also yields

$$\zeta_{B_{2H,\beta}}^\theta(t) = \frac{4H}{\theta^2}\,\ln t + \frac{2}{\theta^2}\,\ln\!\left(\frac{\theta^2\,\Gamma(1-\beta)}{2}\right) + o(1),\qquad t\to\infty. \qquad (2.3)$$

Here the asymptotic '$+o(1)$' is pointwise, which is stronger than the asymptotic proportionality '∼'; in the sense of '∼' the term $4H/{\theta }^{2}\mathrm{ln}t$ dominates, and the logarithmic behaviour clearly distinguishes the LCF from the power-law MSD at long times. This crossover behaviour can be used to distinguish grey Brownian motion from fractional Brownian motion (the case $\beta =1$ [53]) and the diffusing-diffusivity model (equations (2.27) and (2.28)). The very slow logarithmic increase of the LCF is not surprising: the diffusion coefficient is random but fixed along each trajectory, and it constrains the relaxation of the probability density; this is detected by the LCF, but ignored by the MSD. For a more general result see proposition 7(d).

Grey Brownian motion models free, unconfined movements and is therefore not stationary. Still, the codifference can be used for its increments ${\rm{\Delta }}{B}_{2H,\beta }(t):= {B}_{2H,\beta }(t+{\rm{\Delta }}t)-{B}_{2H,\beta }(t)$. The calculation is again not hard and yields

Equation (2.4)

The covariance decays to zero like a power law ${t}^{2H-1}$, but the function above decays to the non-zero constant

Equation (2.5)

This means that there is some degree of dependence left even at $t=\infty $ which the covariance does not detect, but the codifference does. Indeed, it can be interpreted as a joint dependency on the trajectory-wise fixed but random diffusion coefficient Dβ.

The above simple example shows that the codifference does not directly detect non-ergodicity; it rather detects dependence. The notion of mixing is useful to describe this idea. It is a property which states that the future evolution of the process after a long delay becomes independent of its past values. Formally speaking, the process is mixing when any statistic calculated in a finite time interval starting at s and any other statistic calculated in an interval starting at $s+t$ become independent as $t\to \infty $ [58]. Therefore, analysing the codifference, which measures the dependence between $\exp (-{\rm{i}}\theta {X}_{s})$ and $\exp ({\rm{i}}\theta {X}_{s+t})$, allows one to exclude mixing, i.e. to indicate the presence of a non-vanishing dependence. The latter means that the motion is constrained in phase space, which in turn implies ergodicity breaking.

Thus, for a very large class of systems one does not need to study time-averages to detect non-ergodicity. It is sufficient to find a proper memory function which will indicate non-mixing. As we demonstrate, the covariance fails in this role for the considered models, but the codifference works.

These detecting capabilities of the codifference work under quite general circumstances. If we observe any ensemble of mixing, zero-mean Gaussian trajectories, the covariance will converge to zero. This happens because for a Gaussian process mixing is equivalent to the decay of the covariance [58, 59], and a mixture of decaying covariance functions also decays. But the ensemble of trajectories as a whole will not be ergodic, and this will not be detected by the covariance. Let ${ \mathcal C }$ be some parametrisation of this mixture, and let $D={\mathbb{E}}[{X}_{t}^{2}| { \mathcal C }]$ be the resulting, possibly random, conditional variance. We call it D because if the data $X$ correspond to the velocity or increments of displacements, it will be proportional to the diffusion coefficient. Under these assumptions the codifference converges to the constant

$$\tau_X^\theta(\infty) = \frac{1}{\theta^2}\,\ln\frac{\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D}\right]}{\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D/2}\right]^2}, \qquad (2.6)$$

as proven in proposition 5. This quantity is related to the coefficient of variation defined as the standard deviation divided by the mean [23]. Denoting it by $\mathrm{CV}[X]$, the formula above can be expressed as ${\theta }^{-2}\mathrm{ln}(\mathrm{CV}{[\exp (-{\theta }^{2}D/2)]}^{2}+1)$ which is an increasing function of $\mathrm{CV}[\exp (-{\theta }^{2}D/2)]$ and asymptotically quadratic for small $\mathrm{CV}$. The coefficient of variation is a measure of dispersion, hence so is ${\tau }_{X}^{\theta }(\infty )$ which reflects the randomness of D. This behaviour is also equivalent to detecting a residual dependence and the resulting non-mixing/non-ergodicity.
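The reformulation rests on the elementary identity $\mathrm{CV}{[Y]}^{2}+1={\mathbb{E}}[{Y}^{2}]/{\mathbb{E}}{[Y]}^{2}$ applied to $Y=\exp (-{\theta }^{2}D/2)$, which is easy to confirm numerically (a sketch of ours; the gamma law of D is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0
D = rng.gamma(shape=2.0, scale=0.5, size=1_000_000)  # any positive law of D works

w = np.exp(-theta**2 * D / 2)
tau_inf = np.log(np.mean(w**2) / np.mean(w) ** 2) / theta**2   # limit value (2.6)
cv = np.std(w) / np.mean(w)                                    # coefficient of variation
print(tau_inf, np.log(cv**2 + 1) / theta**2)  # both expressions coincide
```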

Outside of the useful limit $t=\infty $ not much can be said about the properties of the codifference in such a wide and general class. The situation changes if we consider a more specific model. The idea behind grey Brownian motion and many works about superstatistics [52] is that the trajectories differ mainly by the diffusion coefficient, other properties are not significantly distinct. A simple model of such a system can be written as

$$X_t = \sqrt{D}\,Y_t. \qquad (2.7)$$

We assume that the process $Y$ describes the joint form of dependence common for all trajectories. We consider a Gaussian $Y$, which for grey Brownian motion would be fractional Brownian motion. Another reasonable choice would be, e.g. a solution of the Langevin equation. In this case, as long as $Y$ is stationary (i.e. for free diffusion we consider increments or the velocity process), the covariance is

$$r_X(t) = \mathbb{E}[D]\,r_Y(t), \qquad (2.8)$$

of course as long as ${\mathbb{E}}[D]\lt \infty $. If the process $Y$ has sufficiently short memory, so that ${r}_{Y}(t)\approx 0$ in the considered time scale, then also ${r}_{X}(t)\approx 0$. The covariance does not detect the additional dependence introduced by the random D.

At the same time the codifference can be expressed as a function of the covariance of $Y$, precisely as

$$\tau_X^\theta(t) = \frac{1}{\theta^2}\,\ln\frac{\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D\left(1-r_Y(t)\right)}\right]}{\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D/2}\right]^2}, \qquad (2.9)$$

for any D, regardless of whether ${\mathbb{E}}[D]\lt \infty $ holds. It clearly converges to the constant (2.6) as ${r}_{Y}(t)\to 0$ and detects the additional nonlinear dependence.

For a general, possibly non-stationary $Y$ with ${\mathbb{E}}[{Y}_{t}^{2}]={\delta }_{Y}^{2}(t)$, the representation of the LCF is

$$\zeta_X^\theta(t) = -\frac{2}{\theta^2}\,\ln\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D\,\delta_Y^2(t)/2}\right]. \qquad (2.10)$$

Given some model of D these formulae can be made completely explicit, examples are given in table 1. The first example is the gamma distribution $D\mathop{=}\limits^{d}{ \mathcal G }(\alpha ,\beta )$ in which the coefficient α describes the power-law behaviour of the PDF near 0 and β is the rate of exponential decay of the tails (the specific case ${ \mathcal G }(1,\beta )$ is the exponential distribution); it models common types of experiments in which the distribution of diffusion coefficients resembles a bump concentrated around some finite constant and high values of D become exponentially less probable. This case is also illustrated in figure 1.
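The setting of figure 1 can be reproduced with a short Monte Carlo experiment. In the sketch below (our illustration; the grid, sample size and seed are arbitrary) stationary Gaussian trajectories with covariance $\cos (t)\exp (-t)$ are generated by a Cholesky factorisation, multiplied by $\sqrt{D}$ with $D\mathop{=}\limits^{d}{ \mathcal G }(1,1)$, and the estimated codifference is compared with the gamma row of table 1:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 51)                  # time grid
dt = np.abs(t[:, None] - t[None, :])
cov = np.cos(dt) * np.exp(-dt)                 # r_Y, a valid stationary covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(t)))

n, theta = 100_000, 1.0
Y = rng.standard_normal((n, len(t))) @ L.T     # stationary Gaussian paths
X = np.sqrt(rng.exponential(1.0, (n, 1))) * Y  # D ~ G(1, 1), one value per path

num = np.mean(np.cos(theta * (X - X[:, :1])), axis=0)
den = np.mean(np.cos(theta * X), axis=0) * np.mean(np.cos(theta * X[:, 0]))
tau_est = np.log(num / den) / theta**2

r_Y = np.cos(t) * np.exp(-t)                   # gamma row of table 1, alpha = beta = 1
tau_th = np.log((1 + theta**2 / 2) ** 2 / (1 + theta**2 * (1 - r_Y))) / theta**2
print(np.max(np.abs(tau_est - tau_th)))        # small Monte Carlo discrepancy
```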

Figure 1.

Figure 1. Codifference τ and covariance r of the process ${X}_{t}=\sqrt{D}{Y}_{t}$ with $D\mathop{=}\limits^{d}{ \mathcal G }(1,1)$ and ${r}_{Y}(t)=\cos (t)\exp (-t)$, as given by table 1. Various properties of the codifference are visible: for $\theta \to 0$ it converges to the covariance; the codifference and the covariance increase and decay in the same intervals; at $t=0$ the codifference is smaller than the covariance; as $t\to \infty $ the codifference converges to a θ-dependent value ${\tau }_{X}^{\theta }(\infty )$ which is a functional of the law of D; the type of asymptotic of ${r}_{X}(t)$ and ${\tau }_{X}^{\theta }(t)-{\tau }_{X}^{\theta }(\infty )$ is the same (here: exponential decay). The derivations are presented in proposition 6.


Table 1.  Formulae for the codifference and the LCF corresponding to common models of D: gamma, one-sided stable, Gaussian and uniform.

Law of D | Codifference ${\tau }_{X}^{\theta }(t)$ | LCF ${\zeta }_{X}^{\theta }(t)$
${ \mathcal G }(\alpha ,\beta )$ | $\frac{\alpha }{{\theta }^{2}}\mathrm{ln}\frac{{\left(1+{\theta }^{2}/(2\beta )\right)}^{2}}{1+{\theta }^{2}\left(1-{r}_{Y}(t)\right)/\beta }$ | $\frac{2\alpha }{{\theta }^{2}}\mathrm{ln}\left(\frac{{\theta }^{2}}{2\beta }{\delta }_{Y}^{2}(t)+1\right)$
${ \mathcal S }(\alpha ,c)$ | ${c}^{\alpha }{\theta }^{2\alpha -2}\left({2}^{1-\alpha }-{\left(1-{r}_{Y}(t)\right)}^{\alpha }\right)$ | ${2}^{1-\alpha }{c}^{\alpha }{\theta }^{2\alpha -2}{\left({\delta }_{Y}^{2}(t)\right)}^{\alpha }$
${ \mathcal N }(\mu ,{\sigma }^{2})$ | $\mu {r}_{Y}(t)+\frac{{\left(\theta \sigma \right)}^{2}}{2}{\left(1-{r}_{Y}(t)\right)}^{2}-\frac{{\left(\theta \sigma \right)}^{2}}{4}$ | $\mu {\delta }_{Y}^{2}(t)-\frac{{\left(\theta \sigma \right)}^{2}}{4}{\left({\delta }_{Y}^{2}(t)\right)}^{2}$
${ \mathcal U }(a,b)$ | ${{ar}}_{Y}(t)+\frac{1}{{\theta }^{2}}\mathrm{ln}\left(\frac{{\theta }^{2}(b-a)}{4\left(1-{r}_{Y}(t)\right)}\frac{1-{{\rm{e}}}^{-{\theta }^{2}(b-a)\left(1-{r}_{Y}(t)\right)}}{{\left(1-{{\rm{e}}}^{-{\theta }^{2}(b-a)/2}\right)}^{2}}\right)$ | $a{\delta }_{Y}^{2}(t)-\frac{2}{{\theta }^{2}}\mathrm{ln}\left(\frac{2\left(1-{{\rm{e}}}^{-{\theta }^{2}(b-a){\delta }_{Y}^{2}(t)/2}\right)}{{\theta }^{2}{\delta }_{Y}^{2}(t)(b-a)}\right)$

Diffusion coefficients with a heavy-tailed distribution result in a motion that itself exhibits heavy tails of the PDF, a phenomenon actively investigated in transport, finance, turbulence and many other systems [6, 60, 61]. A classical model of this case is the one-sided α-stable law ${ \mathcal S }(\alpha ,c)$, determined by its Laplace transform $\exp \left(-{\left({cs}\right)}^{\alpha }\right)$. The resulting type of process was thoroughly studied in the literature concerned with stable distributions [37]. This process is called sub-Gaussian, which is arguably a confusing term. In this case the process $X$ has no second moment, therefore attempts to estimate its covariance lead to a diverging result. This is visible in the formulae for the codifference and the LCF, which diverge as $\theta \to 0$. But for any $\theta \gt 0$ the codifference and the LCF are finite and can be estimated in a standard way, and, if one wishes, the covariance and the MSD of $Y$ can be reconstructed from the result.
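This can be made concrete for α = 1/2, where the one-sided stable law is the Lévy distribution and can be sampled as D = c/(2Z²) with standard normal Z; the rest of the sketch below (parameters included) is our illustration. The codifference estimate is stable and matches the ${ \mathcal S }(\alpha ,c)$ row of table 1, while the sample second moment never converges:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, c, theta, rho, n = 0.5, 1.0, 1.0, 0.4, 1_000_000

# Laplace transform exp(-(c s)^(1/2)) corresponds to D = c/(2 Z^2), Z ~ N(0,1)
D = c / (2 * rng.standard_normal(n) ** 2)

# a Gaussian pair (Y_s, Y_{s+t}) with correlation rho at one chosen lag
ys = rng.standard_normal(n)
yt = rho * ys + np.sqrt(1 - rho**2) * rng.standard_normal(n)
xs, xt = np.sqrt(D) * ys, np.sqrt(D) * yt       # X = sqrt(D) Y, infinite variance

num = np.mean(np.cos(theta * (xt - xs)))
den = np.mean(np.cos(theta * xt)) * np.mean(np.cos(theta * xs))
tau_est = np.log(num / den) / theta**2
tau_th = c**alpha * theta ** (2 * alpha - 2) * (2 ** (1 - alpha) - (1 - rho) ** alpha)
print(tau_est, tau_th)   # close to each other despite E[X^2] = infinity
print(np.mean(xs * xt))  # the sample 'covariance', by contrast, is erratic
```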

For a distribution concentrated around its mean value one can use Gaussian ${ \mathcal N }(\mu ,{\sigma }^{2})$ or uniform ${ \mathcal U }(a,b)$ distributions, however the former is only a valid model for $\sigma \ll \mu $, when the probability that $D\lt 0$ can be neglected.

Even if the precise model of D is not known, quite a lot can be said about the behaviour of the codifference. In proposition 6 we show that:

  • (a)  
    the codifference is a monotonic function of the covariance: if one increases, so does the other, and the same holds for decreases;
  • (b)  
    if ${\mathbb{E}}[D]\lt \infty $ the codifference is smaller than the covariance for strong positive correlation, but larger for weak or negative correlations;
  • (c)  
    the approach to the value ${\tau }_{X}^{\theta }(\infty )$ has the same asymptotic as the decay of the covariance
    $$\tau_X^\theta(t)-\tau_X^\theta(\infty) \sim \frac{\mathbb{E}\!\left[D\,\mathrm{e}^{-\theta^2 D}\right]}{\mathbb{E}\!\left[\mathrm{e}^{-\theta^2 D}\right]}\, r_Y(t),\qquad t\to\infty, \qquad (2.11)$$

assuming ${r}_{Y}(t)\to 0$, which is a typical case.

These are all desirable properties: the memory structure of the internal process $Y$ is reflected in a straightforward manner by the codifference. For small values of the covariance their relation is even linear, as stated in (c), and the proportionality constant is finite for any distribution of D, due to the truncating factor $\exp (-{\theta }^{2}D)$.

Another property is that the codifference depends additively on D. Precisely speaking, if we decompose $D=D^{\prime} +D^{\prime\prime} $ for some independent $D^{\prime} $ and $D^{\prime\prime} $, the codifference decomposes accordingly,

$$\tau_X^\theta(t) = \tau_{X'}^\theta(t) + \tau_{X''}^\theta(t), \qquad (2.12)$$

where $X^{\prime} $ and $X^{\prime\prime} $ are processes with diffusion coefficients $D^{\prime} $ and $D^{\prime\prime} $ respectively. Therefore subtracting the codifferences estimated from different samples may be used to analyse different sources of diffusivity. The derivation is given in proposition 6.

Analogous features can also be checked for the LCF (proposition 7), which can also be decomposed for $D=D^{\prime} +D^{\prime\prime} $ and is a monotonic function of the MSD, but is always smaller than the MSD, therefore detecting the additional constraints of the motion introduced by a random D.

At the end of the discussion about random diffusion coefficients we note that the behaviour of the codifference near $t=0$ can also give valuable information. In proposition 8 we prove that in the typical case when ${\mathbb{E}}[D]\lt \infty $ its asymptotic reflects that of the covariance. However, if ${\mathbb{E}}[D]=\infty $ and D has power-law tails, corresponding to the presence of high-volatility trajectories, the asymptotic of the codifference acquires an additional power law. Since for Gaussian processes the behaviour of the covariance near $t=0$ is determined by their fractal dimension ([62], chapter 8.8), the same is true for the codifference, which can also be applied to processes with no moments.

2.1.2. Random memory decay rate

Another interesting type of model is an ensemble of particles for which the time dependence varies from trajectory to trajectory. The simplest model of a time-varying dependency is the exponential decay $\exp (-t{\rm{\Lambda }})$, which is the covariance of the Ornstein–Uhlenbeck process [63]. It models many kinds of linear relaxation disturbed by additive noise. It was also studied as a model of the additive measurement noise itself [64, 65]. In the hierarchical model the decay rate Λ may be random. The covariance of the resulting mixture of Ornstein–Uhlenbeck type trajectories was studied in [66] in the context of a randomly parametrised Langevin equation.

The coefficient Λ has a different physical interpretation depending on the details of the studied phenomenon. For the velocity of a Brownian particle it is proportional to the friction coefficient and its randomness is related to local changes of the viscosity and/or different shapes of the diffusing particles [67]; in this system the fluctuation-dissipation relation also links the scaling to the temperature. For trapped particles Λ is proportional to the stiffness of the confining harmonic potential (the prominent example being optical tweezers [21, 68]), therefore the randomness of Λ is equivalent to an ensemble of traps with varying sizes, which are proportional to ${{\rm{\Lambda }}}^{-1}$.

Another case worth mentioning is that of viscoelastic anomalous diffusion [69], for which the velocity (or the increments) has the power-law dependence $\propto {t}^{2H-1}$. This function can be expressed as $\exp (-\mathrm{ln}(t)(1-2H))$. Therefore it is enough to replace $t$ with $\mathrm{ln}t$, and the results further on also follow for an ensemble of power-law memory trajectories characterised by the random parameter $(1-2H)$. It is worth noting that the variability of the Hurst index H seems to be more of a rule than an exception in biological systems [70–72].

We do not want to make the discussion overly technical, so below we will analyse only the case of deterministic scaling and random decay rate, ${r}_{X}(t| { \mathcal C })={\sigma }^{2}\exp (-t{\rm{\Lambda }})$. Results for more general ${Df}({\rm{\Lambda }})\exp (-t{\rm{\Lambda }})$ are presented in propositions 10, 11 and 12, which prove that the randomness of the scaling is not essential for most of the properties discussed below. We also note that sometimes one can remove the random scaling and normalise the trajectories using the estimate of scaling obtained from the Birkhoff ergodic theorem [58],

Equation (2.13)

However, this procedure requires having access to sufficiently long trajectories.

A particular property of ensembles with fixed scaling is that any marginal distribution is Gaussian, i.e. all variables ${X}_{t}$ have Gaussian distribution with variance ${\sigma }^{2}$. But the codifference can be found to be

$$\tau_X^\theta(t) = \frac{1}{\theta^2}\,\ln\mathbb{E}\!\left[\exp\!\left(\theta^2\sigma^2\,\mathrm{e}^{-t\Lambda}\right)\right], \qquad (2.14)$$

and because it does not equal the covariance, the process as a whole is not Gaussian. The codifference indicates the presence of subtle non-Gaussianity of the memory structure. This formula can also be used to derive useful bounds between the codifference and the covariance, see proposition 9.

Expanding the exponent in (2.14) into a Taylor series leads to

$$\tau_X^\theta(t) = \frac{1}{\theta^2}\,\ln\left(\sum_{k=0}^{\infty}\frac{\left(\theta^2\sigma^2\right)^k}{k!}\,\mathbb{E}\!\left[\mathrm{e}^{-kt\Lambda}\right]\right). \qquad (2.15)$$

Note that ${\sigma }^{2}{\mathbb{E}}[{{\rm{e}}}^{-{kt}{\rm{\Lambda }}}]={r}_{X}({kt})$, so the result is a type of average over the values ${r}_{X}({kt})$. When the distribution of Λ is not sufficiently concentrated near 0 and the covariance decays fast (strictly speaking is rapidly varying [73, 74]), the term $k=1$ dominates the $t\to \infty $ asymptotic. This is the case, e.g. for the one-sided stable variable ${\rm{\Lambda }}\mathop{=}\limits^{d}{ \mathcal S }(\alpha ,c)$ for which

$$\tau_X^\theta(t) \sim \sigma^2\,\mathrm{e}^{-(ct)^\alpha} = r_X(t),\qquad t\to\infty, \qquad (2.16)$$

that is, we observe a stretched exponential type of dependence.

When Λ is more concentrated around 0 the situation differs. A basic example would again be the gamma distribution ${\rm{\Lambda }}\mathop{=}\limits^{d}{ \mathcal G }(\alpha ,\beta )$, for which

$$\tau_X^\theta(t) = \frac{1}{\theta^2}\,\ln\left(\sum_{k=0}^{\infty}\frac{\left(\theta^2\sigma^2\right)^k}{k!}\left(1+\frac{kt}{\beta}\right)^{-\alpha}\right). \qquad (2.17)$$

When $\alpha =1$ (i.e. Λ has an exponential distribution) the above can also be written using the incomplete gamma function. For any α all terms in the sum decay like ${t}^{-\alpha }$ and they are comparable. Because of this, the codifference also decays with the same power law, but the proportionality constant is non-trivial,

$$\tau_X^\theta(t) \sim \frac{\beta^\alpha}{\theta^2}\left(\sum_{k=1}^{\infty}\frac{\left(\theta^2\sigma^2\right)^k}{k!\,k^\alpha}\right)t^{-\alpha},\qquad t\to\infty. \qquad (2.18)$$

It is not surprising that this behaviour is not specific to the gamma distribution and can be observed for any Λ with a power-law PDF near ${0}^{+}$, see proposition 10. Similarly, if the PDF of Λ decays fast near ${0}^{+}$, the codifference also decays fast. All these properties are analogous to those of the covariance [66], so here they can be used interchangeably or simultaneously, as a means to obtain stronger statistical verification.

They are also similar in that neither detects the non-ergodicity, more precisely the non-mixing, of this system. As was already demonstrated for the covariance, this is a common occurrence resulting from its linearity. The codifference fails because it measures only a reduced form of mixing. For the process to be mixing, any two sets of multiple disjoint measurements must become asymptotically independent, i.e. the vectors $[{X}_{{s}_{1}},{X}_{{s}_{2}},\,\ldots ,\,{X}_{{s}_{n}}]$ and $[{X}_{{s}_{1}+t},{X}_{{s}_{2}+t},\,\ldots ,\,{X}_{{s}_{n}+t}]$ have to become independent as $t\to \infty $. The codifference (and for that matter also the covariance) measures only the dependence between two values Xs and ${X}_{s+t}$.

For a process with a random decay rate these are asymptotically independent and the one-point distributions are relaxing. Therefore, in order to detect non-ergodicity, we need to analyse the dependence between at least three values. A practical choice is to use four values divided into two pairs $[{X}_{s},{X}_{s+{\rm{\Delta }}t}]$ and $[{X}_{s+t},{X}_{s+{\rm{\Delta }}t+t}]$. The values in the first pair are correlated as ${{\rm{e}}}^{-{\rm{\Delta }}t{\rm{\Lambda }}}$ trajectory-wise, analogously for the values of the second pair. This property of both pairs is fixed and random, i.e. it is a constant of motion which can be detected. Probably the simplest method to achieve this is to calculate increments

$$\Delta X_t := X_{t+\Delta t} - X_t \qquad (2.19)$$

and study the codifference of those. A short calculation given in proposition 11 shows that this method indeed works and

Equation (2.20)

The result depends on Λ in a complex manner, but it can be easily estimated numerically. We can also use the fact that for small ${\rm{\Delta }}t$ the conditional covariance of increments is

Equation (2.21)

and normalise the process, ${\rm{\Delta }}{\widetilde{X}}_{t}:= {\rm{\Delta }}{X}_{t}/\sqrt{{\rm{\Delta }}t}$. The result then simplifies and becomes independent of ${\rm{\Delta }}t$,

Equation (2.22)

We stress here that this method cannot be applied using the covariance, which, calculated from increments, decays to 0 and does not detect this specific memory structure. Its decay is even quicker than for the original process and proportional to the power law decay ${t}^{-\alpha -2}$ [66]. Intuitively speaking, the decay rate is quicker by a factor ${t}^{-2}$, because the scale of ${\rm{\Delta }}X$ depends on Λ as ${{\rm{\Lambda }}}^{2}$ and the trajectories with stronger correlation have smaller amplitude and add less to the average. This property has its analogy for the codifference, for which ${\tau }_{{\rm{\Delta }}X}^{\theta }(t)-{\tau }_{{\rm{\Delta }}X}^{\theta }(\infty )$ also decays like ${t}^{-\alpha -2}$ (see proposition 12 for a more general result). This time the faster decay rate actually helps in detecting ergodicity breaking, making the limit ${\tau }_{{\rm{\Delta }}X}^{\theta }(\infty )$ visible even at short times. The numerical illustration of the discussed behaviour is shown in figure 2.
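The behaviour shown in figure 2 can be explored with a simple simulation. The sketch below is our illustration with σ² = 1 and arbitrary sample sizes (the exact scaling used in the figure is not restated here), propagating each trajectory with the exact conditional Ornstein–Uhlenbeck update:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, theta = 100_000, 64, 1.5          # trajectories, time points, theta
lam = rng.gamma(1.5, 2.0, size=n)       # Lambda ~ G(3/2, 1/2): shape 3/2, rate 1/2

# exact stationary Ornstein-Uhlenbeck update conditionally on Lambda (step 1)
phi = np.exp(-lam)
x = np.empty((n, m))
x[:, 0] = rng.standard_normal(n)
for k in range(1, m):
    x[:, k] = phi * x[:, k - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(n)

dx = np.diff(x, axis=1)                 # increments Delta X_t with Delta t = 1
for lag in (1, 2, 4, 8, 16):
    a, b = dx[:, :-lag], dx[:, lag:]
    num = np.mean(np.cos(theta * (b - a)))
    den = np.mean(np.cos(theta * b)) * np.mean(np.cos(theta * a))
    tau = np.log(num / den) / theta**2
    print(f"lag {lag:2d}:  codifference {tau:+.4f},  covariance {np.mean(a*b):+.4f}")
# the covariance of increments decays to 0, while the codifference levels off
# at a positive constant, revealing the broken ergodicity (cf. figure 2)
```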

Figure 2.

Figure 2. Codifference τ and covariance r estimated from the process with random decay rate ${\rm{\Lambda }}\mathop{=}\limits^{d}{ \mathcal G }(3/2,1/2)$ and ${\rm{\Delta }}t=1$. In the presented domain the covariance ${r}_{{\rm{\Delta }}X}$ was negative, so we plotted its negated value. One can observe the predicted power-law decays $\propto {t}^{-3/2}$ and $\propto {t}^{-3/2-2}$ (equation (3.62)); the codifference of increments detects the non-ergodicity by converging to a constant ${\tau }_{{\rm{\Delta }}X}^{1.5}(\infty )\approx 0.105$, which perfectly fits equation (2.20). The value $\theta =1.5$ was chosen to best illustrate the interesting properties; for smaller θ the codifference ${\tau }_{X}^{\theta }$ becomes closer to the covariance ${r}_{X}$, for larger θ the codifference ${\tau }_{{\rm{\Delta }}X}^{\theta }$ converges faster to the $t\to \infty $ limit. To present smooth curves in the whole range we used a large sample of ${10}^{7}$ trajectories; the general shape of the presented functions is already visible for samples of around ${10}^{4}$; a significant difference between ${r}_{{\rm{\Delta }}X}$ and ${\tau }_{{\rm{\Delta }}X}$ is observed using even a few hundred trajectories. Examples for smaller sample sizes are presented in figure A1.


2.2. Diffusing-diffusivity

In the preceding sections we considered models which were non-Gaussian and non-ergodic. For non-Gaussian but ergodic models the codifference can also be a useful measure of dependence. In particular we show that it can be successfully used to analyse diffusing-diffusivity models. We now assume that the increments of ${X}_{t}$ are Brownian fluctuations, but rescaled by a time-dependent random diffusivity ${D}_{t}$,

$$\mathrm{d}X_t = \sqrt{D_t}\,\mathrm{d}B_t. \qquad (2.23)$$

This is a generalisation of the random parameter model, for which ${D}_{t}=\mathrm{const}.$ Because we modified the dynamical equation by replacing the previously constant parameter with a stochastic process, models of this class are sometimes called 'doubly stochastic' [75]. Before their application in physics, they were extensively used in financial engineering, where it is natural to assume that parameters of the market, such as the volatility, vary in time. In 1985 Cox et al [76] proposed a model of interest rates (now commonly named CIR), which describes a non-negative stochastic process with a linear mean-reverting property. In 2012 Chubynsky and Slater independently proposed a special case of the CIR process as a model of non-Gaussian diffusion [19, 77]. This paved the way to a wider range of models based on a fluctuating diffusivity coefficient with short-time memory [78–82]. The evolution of the diffusion coefficient in the CIR model is defined by the stochastic equation

$$\mathrm{d}D_t = a\left(b-D_t\right)\mathrm{d}t + \sigma\sqrt{D_t}\,\mathrm{d}B_t, \qquad (2.24)$$

where $a\gt 0$ describes the speed of return to the mean $b\gt 0$, and $\sigma \gt 0$ regulates the amplitude of the fluctuations. In this equation, as ${D}_{t}\to 0$ the term $a(b-{D}_{t}){\rm{d}}t\approx {ab}{\rm{d}}t\gt 0$ starts to dominate the fluctuations, whose mean-squared amplitude is ${\mathbb{E}}\left[{\left(\sqrt{{D}_{t}}{\rm{d}}{B}_{t}\right)}^{2}\right]={D}_{t}{\rm{d}}t$; consequently ${\rm{d}}{D}_{t}\gt 0$, which keeps the process positive. We assume that the system evolved for a long time before the start of the measurement and has reached the stationary gamma distribution ${D}_{0}\mathop{=}\limits^{d}{ \mathcal G }(2{ab}/{\sigma }^{2},2a/{\sigma }^{2})$ [83]. Because of the non-Gaussianity the LCF should differ from the MSD. Conditioning on ${D}_{t}$, it can be expressed by the formula

$$\zeta_X^\theta(t) = -\frac{2}{\theta^2}\,\ln\mathbb{E}\!\left[\exp\!\left(-\frac{\theta^2}{2}\int_0^t D_s\,\mathrm{d}s\right)\right]. \qquad (2.25)$$

Expanding the above in powers of ${\theta }^{2}$ shows that again ${\zeta }_{X}^{\theta }(t)\to {\delta }_{X}^{2}(t)$ as $\theta \to 0$.

The average in (2.25) appears in the calculation of the expected price of zero-coupon bond and was calculated in the initial paper of Cox et al [76], who derived the differential equation which it fulfils and then solved it; a more general result is also available in [83]. The calculation was performed for the case when D0 is fixed and deterministic, however their result can be easily extended for stationary D by averaging over the equilibrium ${ \mathcal G }(2{ab}/{\sigma }^{2},2a/{\sigma }^{2})$ distribution of D0. Then the formula for the LCF reads

Equation (2.26)

with ${\gamma }_{\theta }=\sqrt{{a}^{2}+{\left(\theta \sigma \right)}^{2}}$. From that a brief calculation proves that the motion is Fickian for long times

$$\zeta_X^\theta(t) \sim \frac{2ab}{\theta^2\sigma^2}\left(\gamma_\theta-a\right)t,\qquad t\to\infty, \qquad (2.27)$$

and also for short time, albeit with a diffusion scale agreeing with the MSD

$$\zeta_X^\theta(t) \sim b\,t = \delta_X^2(t),\qquad t\to 0^+, \qquad (2.28)$$

which should come as no surprise. For an illustration of these formulae see figure 3, where we present results of Monte Carlo simulations compared to the theoretical predictions. See also the crossover behaviour of the MSD in the random diffusivity model in [81].
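A simulation in the spirit of figure 3 can be sketched as follows (the full-truncation Euler scheme and all names are our choices; the original only states that an Euler scheme with ${\rm{\Delta }}t={10}^{-3}$ was used):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, sigma = 0.5, 1.0, 1.0
n, dt, steps, theta = 20_000, 1e-3, 5_000, 2.0    # time horizon t = 5

# stationary start: D_0 ~ G(2ab/sigma^2, 2a/sigma^2), here simply Exp(1)
D = rng.gamma(2 * a * b / sigma**2, sigma**2 / (2 * a), size=n)
x = np.zeros(n)
for k in range(1, steps + 1):
    x += np.sqrt(D * dt) * rng.standard_normal(n)  # dX = sqrt(D) dB
    D += a * (b - D) * dt + sigma * np.sqrt(D * dt) * rng.standard_normal(n)
    D = np.maximum(D, 0.0)                         # full truncation keeps D >= 0
    if k % 1000 == 0:
        t = k * dt
        msd = np.mean(x**2)
        lcf = -2 / theta**2 * np.log(np.mean(np.cos(theta * x)))
        print(f"t = {t:3.1f}:  MSD {msd:5.3f} (~ b t),  LCF {lcf:5.3f}")
# the MSD follows the single law b*t, while the LCF bends towards a smaller
# long-time slope, cf. the dotted lines (2.27) in figure 3
```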

Figure 3.

Figure 3. MSD and LCF for the diffusing-diffusivity CIR model defined by (2.23) and (2.24) with $a=1/2,b=1,\sigma =1$. Solid lines are functions estimated from $2\times {10}^{4}$ trajectories simulated using the Euler scheme with ${\rm{\Delta }}t={10}^{-3};$ dashed lines are the analytical predictions given by (2.26); dotted lines are the long-time linear limits (2.27). It is clearly observed that the MSD exhibits a single linear law whereas the LCF switches between two linear laws at $t\to {0}^{+}$ and $t\to \infty $. Also note that for large θ and $t$ the estimation becomes unstable. It is caused by ${\mathbb{E}}[\cos (\theta {X}_{t})]$ becoming comparable in amplitude to the estimation uncertainty; for this reason one should be careful using the codifference and the LCF in the range $\theta {X}_{t}\gg 1$.


If we want to analyse the codifference of the CIR model, we need to study the memory of the velocity ${V}_{t}=\sqrt{{D}_{t}}{\rm{d}}{B}_{t}/{\rm{d}}t$. But the white noise ${\rm{d}}{B}_{t}/{\rm{d}}t$ is not well-defined in the classical sense. It can be interpreted as a distribution, which leads to a corresponding redefinition of the covariance, the familiar Dirac delta. The codifference is, however, nonlinear, and this approach fails. The solution is to consider only well-defined velocity processes ${V}_{t}=\sqrt{{D}_{t}}{Y}_{t}$, with ${Y}_{t}$ being some classical process which models the velocity undisturbed by the fluctuations of the diffusivity. The behaviour of the white noise can be recovered if we consider $t$ large enough that ${r}_{Y}(t)=0$ holds strictly or approximately. It is natural to assume that ${Y}_{t}$ is Gaussian, while choosing the model of ${D}_{t}$ is more subtle.

The CIR process for ${ab}\in {\mathbb{N}}$ can be proved to be a sum of squared independent Ornstein–Uhlenbeck processes, which follows directly from writing the stochastic differential equation of such a sum [83]. Thus, a natural generalisation is to consider ${D}_{t}$ to be the square of a Gaussian process [80, 81]. We will assume that the velocity can be decomposed as

$$V_t = \sigma\,|Z_t|\,Y_t, \qquad (2.29)$$

where both ${Z}_{t}$ and ${Y}_{t}$ are Gaussian with variance one. In this model we have ample freedom in describing a wide range of memory types, because any covariances rZ and ${r}_{Y}$ can be used. By choosing ${r}_{Y}$ we model the internal dynamics; if ${r}_{Y}(t)=0$ in the considered time scale we arrive back at (2.23). By choosing rZ we model the memory structure of ${D}_{t}$: exponential, power-law, oscillating, etc. The one-dimensional distributions are more rigid, as we limit ourselves to ${D}_{t}$ having the PDF of a squared Gaussian, that is the ${\chi }_{1}^{2}$ distribution (a special case of the gamma distribution). A rather technical derivation (proposition 13) then shows that the exact form of the codifference is

Equation (2.30)

where

This formula looks complicated, but is composed only of elementary functions. It is illustrated in figure 4, where we plot the codifference ${\tau }_{V}^{\theta }$ as a function of rZ and ${r}_{Y}$ for four different values of θ. Having calculated the codifference for at least two θ, one can solve the system of equations resulting from (2.30) and obtain ${r}_{Z},{r}_{Y}$. This procedure may be considered simpler than using the covariance, which requires calculating the average of $| {Z}_{s}{Z}_{s+t}| $, given by a hard-to-evaluate integral. The covariance rV can also be obtained by taking the limit $\theta \to 0$ of the codifference.
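The detection capability at a single lag can be illustrated directly (a sketch of ours; bivariate Gaussian pairs stand in for $({Z}_{s},{Z}_{s+t})$ and $({Y}_{s},{Y}_{s+t})$, and the exact reference value in the comments follows from our own conditioning argument, not from a formula quoted in the text):

```python
import numpy as np

rng = np.random.default_rng(6)
n, theta, sig = 1_000_000, 1.0, 1.0

def gauss_pair(rho):
    u = rng.standard_normal(n)
    return u, rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def codiff(vs, vt):
    num = np.mean(np.cos(theta * (vt - vs)))
    den = np.mean(np.cos(theta * vt)) * np.mean(np.cos(theta * vs))
    return np.log(num / den) / theta**2

q = (theta * sig) ** 2
for r_Z in (0.8, -0.8, 0.0):
    zs, zt = gauss_pair(r_Z)      # diffusivity layer, D_t = Z_t^2
    ys, yt = gauss_pair(0.0)      # internal process with r_Y(t) = 0
    tau = codiff(sig * np.abs(zs) * ys, sig * np.abs(zt) * yt)
    # exact value for r_Y = 0 (our derivation via conditional Gaussianity):
    tau_exact = -np.log(1 - q**2 * r_Z**2 / (1 + q) ** 2) / (2 * theta**2)
    print(f"r_Z = {r_Z:+.1f}:  codifference {tau:+.5f}  (exact {tau_exact:+.5f})")
# with r_Y = 0 the covariance of V vanishes identically, yet the codifference
# is nonzero for r_Z != 0 and symmetric under r_Z -> -r_Z, cf. figure 4
```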

Figure 4.

Figure 4. Codifference ${\tau }_{V}^{\theta }$ as a function of rZ and ${r}_{Y}$ as given by equation (2.30). White isolines are drawn at levels $\{\ldots ,\,-2/14,-1/14,0,1/14,2/14,\,\ldots \}$. The dependence on rZ is symmetric, which can be seen directly from the definition ${V}_{t}=\sigma | {Z}_{t}| {Y}_{t}$. For larger θ the codifference varies less and the influence of the positive dependence of ${D}_{t}$ becomes dominating (the isolines become more concave). For a given ${\tau }_{V}^{\theta }$ the covariances rZ and ${r}_{Y}$ can be determined by looking for the crossing points of the corresponding isolines for at least 2 different values of θ.


More importantly, when ${r}_{Y}(t)=0$ the codifference is clearly non-zero, so it detects the dependence introduced by ${D}_{t}={Z}_{t}^{2}$. Its asymptotic for small ${r}_{Z}(t)$ (e.g. at long times) in this case is the simple relation

$$\tau_V^\theta(t) \approx \frac{\theta^2\sigma^4}{2\left(1+\theta^2\sigma^2\right)^2}\,r_Z(t)^2,\qquad r_Z(t)\to 0. \qquad (2.31)$$

Thus the codifference detects the memory structure of the time-varying diffusion coefficient ${D}_{t}={Z}_{t}^{2}$ even in the regime ${r}_{Y}(t)=0$, in which the covariance ${r}_{V}(t)$ is zero and does not contain any important information. This is also true when ${r}_{Z}(t)=0$ but ${r}_{Y}(t)\ne 0$; this time the codifference is asymptotically proportional to ${r}_{Y}(t)$, and the proportionality constant depends only on the one-dimensional distributions of D; the exact form of the dynamics does not matter, see proposition 14.

For some systems different models of ${D}_{t}$ may be more suitable. When ${D}_{t}$ is strongly concentrated around its mean value a possible choice is a simple Gaussian centred around some b, ${V}_{t}=(\sigma {Z}_{t}+b){Y}_{t}$. This model permits the unphysical situation when ${D}_{t}\lt 0$, but when $\sigma \ll b$ the probability of this event is negligible. In this case an elementary formula for the codifference also can be given (see (3.78)) and again even for ${r}_{Y}(t)=0$ the internal dependence of ${D}_{t}$ is still detected, this time with asymptotic

Equation (2.32)

2.3. Discussion

The aim of this work was to provide the theoretical background for using the codifference as a dependence measure suited to the study of various non-Gaussian and ergodicity breaking models. This goal was achieved in a few steps. First, we proved that the codifference has the intuitive properties one would expect from a reasonable memory function, such as additivity, positivity in the case of complete dependence and vanishing in the case of independence. Second, we showed that it can be calculated using fairly straightforward methods for typical random-parameter and diffusing-diffusivity models, which represents a significant extension of the previously established results for stable and infinitely divisible processes. Finally, we analysed how the codifference detects forms of dependence and ergodicity breaking which cannot be easily studied using solely covariance-based methods.

We also showed one example of non-detected ergodicity breaking, the case of a Langevin equation with a random return rate. In this case we offer an easy fix: the codifference works well for the increments of this process. We note that within this paper we did not analyse ergodicity breaking caused by ageing. In principle, the codifference should work, but the analytical analysis will be challenging for many of these phenomena.

In addition to the codifference, we also discussed a related quantity, the logarithm of the characteristic function (LCF), which was interpreted as a measure of dispersion. Our contribution is an extension of the Fourier methods and a distinct view based on ideas previously developed only for heavy tailed α-stable distributions. The codifference is also very closely related to the theory of the dynamical functional, which was already successfully used for real data, and should be considered a part of the same framework.

The cost of using this technique is that linearity is a powerful analytical tool, especially for complicated models, and a significant part of this strength is lost when using the codifference. The more involved defining formula may also make the resulting expressions more complicated (e.g. see table 1). However, the codifference is a direct application of the characteristic function, which does not seem to be commonly acknowledged, even though Fourier-based techniques by themselves are widely used by the scientific community. Thus, it has the advantage of offering a wide choice of established analytical methods and estimation techniques. In some cases (e.g. (2.30)) the codifference has a simpler form than the covariance.

We believe that the most important example considered here was also the simplest: deterministic motion with its scale (the diffusion coefficient) varying from trajectory to trajectory. The observed asymptotic behaviour of the codifference contains a lot of useful information and lays the foundation for possible future applications in more complex and realistic models, some of which we discussed. At the same time we stress that even this initial, highly simplified model is commonly used, especially for biophysical systems.

We are confident that the obtained results are interesting in their own right, but we also promote their additional value by indicating the limitations of the methodology based on the MSD and the covariance. Both are, without a doubt, essential parts of the scientific language related to diffusion and complex phenomena, but their limitations are becoming more and more evident, as contemporary research starts to concentrate around non-Gaussian systems with complicated memory structure; the change is stimulated by increasing experimental evidence. These complex and nonlinear phenomena require new complex and nonlinear methods.

3. Derivations

3.1. Basic definitions and properties

All processes considered in this work can be labelled as 'conditionally Gaussian'. In practical applications these processes are Gaussian locally, in the temporal or spatial sense. The formal definition is more general.

Definition 1. We call a process conditionally Gaussian when any of its finite-dimensional distributions is a Gaussian distribution under some conditioning by σ-algebra ${ \mathcal C }$. That is, any finite dimensional distribution ${\boldsymbol{X}}:= [{X}_{{t}_{1}},\,\ldots ,\,{X}_{{t}_{n}}]$ can be written as

$$\boldsymbol{X} \stackrel{d}{=} A\,\boldsymbol{Y} + \boldsymbol{\mu}, \qquad (3.1)$$

where A and ${\boldsymbol{\mu }}$ are a ${ \mathcal C }$-measurable $n\times n$ random matrix and an n-dimensional random vector. Both may depend on ${t}_{1},\,\ldots ,\,{t}_{n}$. The vector ${\boldsymbol{Y}}$ has i.i.d. ${ \mathcal N }(0,1)$ components and is independent of A and ${\boldsymbol{\mu }}$.

If ${\boldsymbol{\mu }}=0$ for all ${t}_{1},\,\ldots ,\,{t}_{n}$ we call the process conditionally centred Gaussian; further on we will consider only this class. Similarly, we call a process conditionally stationary Gaussian if the joint distribution of A and ${\boldsymbol{\mu }}$ does not change under the time translation ${t}_{1},\,\ldots ,\,{t}_{n}\mapsto {t}_{1}+t,\,\ldots ,\,{t}_{n}+t$.
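
As a concrete illustration of definition 1 (a minimal numerical sketch of ours, not taken from the derivations; all parameter choices are assumptions): an ensemble ${X}_{t}=\sqrt{D}{Y}_{t}$ with a stationary Gaussian $Y$ and a random amplitude D drawn once per trajectory is conditionally centred and conditionally stationary Gaussian, with ${ \mathcal C }$ generated by D.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_ensemble(n_traj, n_steps, dt, lam=1.0):
    """Exact simulation of stationary unit-variance Ornstein-Uhlenbeck
    paths with covariance exp(-lam * t)."""
    y = np.empty((n_traj, n_steps))
    y[:, 0] = rng.standard_normal(n_traj)   # stationary initial condition
    a = np.exp(-lam * dt)                   # one-step autoregression factor
    s = np.sqrt(1.0 - a**2)                 # innovation scale
    for k in range(1, n_steps):
        y[:, k] = a * y[:, k - 1] + s * rng.standard_normal(n_traj)
    return y

# conditionally centred, conditionally stationary Gaussian ensemble:
# X_t = sqrt(D) * Y_t, with the conditioning sigma-algebra generated by D
D = rng.exponential(scale=1.0, size=(5000, 1))   # one amplitude per trajectory
X = np.sqrt(D) * ou_ensemble(5000, 200, dt=0.05)
```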

Proposition 1. The distribution of a conditionally Gaussian process is completely determined by the knowledge of ${ \mathcal C }$, the conditional mean and the conditional covariance

${\mu }_{X}(t| { \mathcal C }):= {\mathbb{E}}[{X}_{t}| { \mathcal C }],\qquad {r}_{X}(s,t| { \mathcal C }):= {\mathbb{E}}[({X}_{s}-{\mu }_{X}(s| { \mathcal C }))({X}_{t}-{\mu }_{X}(t| { \mathcal C }))| { \mathcal C }].$ (3.2)

The process is conditionally centred if and only if ${\mu }_{X}(t| { \mathcal C })=0$. The process is conditionally stationary if and only if ${\mu }_{X}(t| { \mathcal C })=\mathrm{const}.$ and ${r}_{X}(s,t| { \mathcal C })$ is a function of t − s, denoted ${r}_{X}(t-s| { \mathcal C })$.

Proof. This is a direct consequence of the equality

${\mathbb{P}}({\boldsymbol{X}}\in B)={\mathbb{E}}\left[{\mathbb{P}}({\boldsymbol{X}}\in B| { \mathcal C })\right].$ (3.3)

The conditional probability on the right is a Gaussian integral and thus a function of ${\mu }_{X}(t| { \mathcal C })$ and ${r}_{X}(s,t| { \mathcal C })$ alone. The representations of conditionally centred and conditionally stationary processes are simply reflections of the analogous representations for Gaussian processes. □

Definition 2. We define the codifference function as

${\tau }_{X}^{\theta }(s,t):= \frac{1}{{\theta }^{2}}\mathrm{ln}\frac{{\mathbb{E}}[{{\rm{e}}}^{{\rm{i}}\theta ({X}_{s}-{X}_{t})}]}{{\mathbb{E}}[{{\rm{e}}}^{{\rm{i}}\theta {X}_{s}}]\,{\mathbb{E}}[{{\rm{e}}}^{-{\rm{i}}\theta {X}_{t}}]}.$ (3.4)

For a stationary process it is a function of t − s alone, which we denote by ${\tau }_{X}^{\theta }(t)$, in analogy with the covariance; see also equation (1.2).

Additionally, we define the log characteristic function (LCF) as

${\zeta }_{X}^{\theta }(t):= -\frac{2}{{\theta }^{2}}\mathrm{ln}\,{\mathbb{E}}[{{\rm{e}}}^{{\rm{i}}\theta {X}_{t}}].$ (3.5)

All expected values in the above definitions are finite, but they may be complex-valued and the denominator may vanish. This is, however, not the case for the class of processes considered herein.

Proposition 2. For any conditionally centred Gaussian process the codifference and the LCF are well-defined real-valued functions.

Proof. The Gaussian characteristic function centred at 0 is positive, and a mixture of positive functions is positive. Therefore all expected values in definition 2 are real numbers greater than 0 and less than or equal to 1, so their logarithms are real. □
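
Definition 2 translates directly into ensemble estimators built from empirical characteristic functions. Below is a minimal sketch of ours (the function names, θ = 1 and the reference time s = 0 are assumed conventions; for the LCF we assume the normalisation under which ζ reduces to the MSD for centred Gaussian processes):

```python
import numpy as np

def codifference(x, theta=1.0):
    """Empirical codifference tau^theta(t) of a stationary ensemble.

    x : array of shape (n_traj, n_steps), one trajectory per row.
    The characteristic functions are estimated by ensemble averages,
    with s = 0 taken as the reference time (stationarity assumed).
    """
    num = np.mean(np.exp(1j * theta * (x[:, :1] - x)), axis=0)
    den = (np.mean(np.exp(1j * theta * x[:, :1]))
           * np.mean(np.exp(-1j * theta * x), axis=0))
    return np.real(np.log(num / den)) / theta**2

def lcf(x, theta=1.0):
    """Empirical log characteristic function zeta^theta(t)."""
    return -2.0 * np.real(np.log(np.mean(np.exp(1j * theta * x), axis=0))) / theta**2
```

For conditionally centred Gaussian data the imaginary parts vanish up to statistical noise, in agreement with proposition 2; taking the real part discards that noise.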

We also note that for conditionally centred Gaussian processes a reduced formula for the codifference is available,

${\tau }_{X}^{\theta }(s,t)=\frac{1}{{\theta }^{2}}\mathrm{ln}\frac{{\mathbb{E}}\left[{{\rm{e}}}^{-{\theta }^{2}({r}_{X}(s,s| { \mathcal C })+{r}_{X}(t,t| { \mathcal C })-2{r}_{X}(s,t| { \mathcal C }))/2}\right]}{{\mathbb{E}}\left[{{\rm{e}}}^{-{\theta }^{2}{r}_{X}(s,s| { \mathcal C })/2}\right]\,{\mathbb{E}}\left[{{\rm{e}}}^{-{\theta }^{2}{r}_{X}(t,t| { \mathcal C })/2}\right]},$ (3.6)

which is very useful for calculations. For non-centred process the additional term

Equation (3.7)

appears. Here all averages are finite, but they may in general be complex-valued; moreover, in particular cases the averages in the denominator can be 0. This strongly suggests that the codifference should be used carefully in this case (the same applies to the LCF).

Additionally, representation (3.6) yields another desirable property of the codifference:

Proposition 3. For a conditionally centred Gaussian process with positive conditional covariance ${r}_{X}(s,t| { \mathcal C })$ the codifference ${\tau }_{X}^{\theta }(s,t)$ is also positive; a negative conditional covariance implies a negative codifference.

If the support of ${r}_{X}(s,t| { \mathcal C })$ extends over both the positive and the negative half-axis, the sign of the codifference may vary, but it is worth noting that, with ${r}_{X}(t,t| { \mathcal C })$ and ${r}_{X}(s,s| { \mathcal C })$ fixed, it depends monotonically on ${r}_{X}(s,t| { \mathcal C })$; hence if the conditional covariance is smaller in the sense of stochastic dominance, the codifference is also smaller.

Now, a simple fact follows solely from the expansion $\mathrm{ln}(x)=x-1+o(x-1)$ as $x\to 1$.

Proposition 4. For any stationary process $X$ with asymptotically independent values

Equation (3.8)

Proof. We assume that ${X}_{s+t}$ and Xs are asymptotically independent as $t\to \infty $ (note that this property is not sufficient to imply that $X$ is mixing). Therefore

${\mathbb{E}}\left[{{\rm{e}}}^{{\rm{i}}\theta ({X}_{s}-{X}_{s+t})}\right]\to {\mathbb{E}}\left[{{\rm{e}}}^{{\rm{i}}\theta {X}_{s}}\right]{\mathbb{E}}\left[{{\rm{e}}}^{-{\rm{i}}\theta {X}_{s+t}}\right],\qquad t\to \infty ,$ (3.9)

and the ratio of expected values under the logarithm converges to 1 so we can use the expansion $\mathrm{ln}(x)\approx x-1$. □

This simple fact is a prototype for the later results, which describe cases in which the nonlinear logarithmic function can be removed whenever the process can be decomposed as a transformation of weakly dependent variables.
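
A quick numerical check of this prototype (our sketch; the lag and parameter values are arbitrary choices): for a single ergodic Gaussian Ornstein–Uhlenbeck trajectory, a time-averaged codifference estimate reproduces the covariance ${{\rm{e}}}^{-t}$, since for a deterministic conditional covariance the reduced formula (3.6) collapses to ${\tau }_{X}^{\theta }={r}_{X}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# one long stationary OU trajectory: unit variance, covariance exp(-t)
dt, n = 0.05, 200_000
a, s = np.exp(-dt), np.sqrt(1.0 - np.exp(-2.0 * dt))
y = np.empty(n)
y[0] = rng.standard_normal()
for k in range(1, n):
    y[k] = a * y[k - 1] + s * rng.standard_normal()

theta, lag = 1.0, 20                      # lag * dt = 1
num = np.mean(np.exp(1j * theta * (y[:-lag] - y[lag:])))
den = (np.mean(np.exp(1j * theta * y[:-lag]))
       * np.mean(np.exp(-1j * theta * y[lag:])))
tau = np.real(np.log(num / den)) / theta**2
print(tau, np.exp(-lag * dt))             # both close to r(1) = exp(-1)
```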

If the process $X$ does not have asymptotically independent values the nonlinearity cannot be removed as $t\to \infty $; but if $X$ is an ensemble of such processes (i.e. the conditioned process is mixing), it can be shown that the codifference converges to a positive constant which depends nonlinearly on the law of D.

3.2. Random parameter models

Proposition 5. If the process $X$ is an ensemble of mixing stationary centred Gaussian processes, then, denoting $D={\mathbb{E}}\left[{X}_{t}^{2}| { \mathcal C }\right]$,

$\mathop{\mathrm{lim}}\limits_{t\to \infty }{\tau }_{X}^{\theta }(t)=\frac{1}{{\theta }^{2}}\mathrm{ln}\frac{{\mathbb{E}}\left[{{\rm{e}}}^{-{\theta }^{2}D}\right]}{{\mathbb{E}}{\left[{{\rm{e}}}^{-{\theta }^{2}D/2}\right]}^{2}}\geqslant 0,$ (3.10)

with equality only for deterministic D.

Proof. The calculation is simple. Because ${r}_{X}(t| { \mathcal C })\leqslant D$ almost surely, the random variable ${{\rm{e}}}^{{\theta }^{2}({r}_{X}(t| { \mathcal C })-D)}$ is positive and bounded by 1 for every $t$. We can therefore commute the limit with the logarithm and the averaging, obtaining

Equation (3.11)

The non-negativity of the above stems from Jensen's inequality applied to the function $x\mapsto {x}^{2}$ and the variable ${{\rm{e}}}^{-{\theta }^{2}D/2}$. □

Remark. A similar calculation repeated for the symmetrised codifference (1.8) shows that it does not exhibit this behaviour. Under the same assumptions

Equation (3.12)

i.e. it cannot detect this form of residual dependence and ergodicity breaking.
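
The limiting constant of proposition 5 can be evaluated by Monte Carlo directly from the distribution of D, without simulating trajectories. A sketch of ours, assuming the limit takes the form (3.10) obtained in the proof, with θ = 1 and an exponential D of mean 1, for which the exact value is ln(9/8):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.0
D = rng.exponential(scale=1.0, size=1_000_000)   # random diffusivities

# limiting codifference of an ensemble of mixing Gaussian processes:
# (1/theta^2) * ln( E[exp(-theta^2 D)] / E[exp(-theta^2 D / 2)]^2 )
tau_inf = (np.log(np.mean(np.exp(-theta**2 * D)))
           - 2.0 * np.log(np.mean(np.exp(-theta**2 * D / 2.0)))) / theta**2
print(tau_inf)          # Monte Carlo estimate, approximately 0.118
print(np.log(9 / 8))    # exact value for exponential D with mean 1
```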

Proposition 6. Let the process $X$ have the form

Equation (3.13)

where $Y$ is a stationary Gaussian process, ${\mathbb{E}}[{Y}_{t}^{2}]=1$, and $D\gt 0$ is a random variable independent of $Y$. Then the codifference has the form

${\tau }_{X}^{\theta }(t)=\frac{1}{{\theta }^{2}}\mathrm{ln}\frac{{\mathbb{E}}\left[{{\rm{e}}}^{-{\theta }^{2}D(1-{r}_{Y}(t))}\right]}{{\mathbb{E}}{\left[{{\rm{e}}}^{-{\theta }^{2}D/2}\right]}^{2}}.$ (3.14)

  • (a)  
    It is additive with respect to D, that is if $D=D^{\prime} +D^{\prime\prime} $ for independent $D^{\prime} $ and $D^{\prime\prime} $, then
    Equation (3.15)
    where ${X}_{t}^{{\prime} }=\sqrt{D^{\prime} }{Y}_{t}$ and ${X}_{t}^{{\prime\prime} }=\sqrt{D^{\prime\prime} }{Y}_{t}$.
  • (b)  
    It is an increasing function of the covariance ${r}_{Y}(t)$; the codifference is smaller than ${r}_{X}(t)$ for ${r}_{Y}(t)$ close to 1 and larger than ${r}_{X}(t)$ when the latter is close to 0. If ${\mathbb{E}}[D]\lt \infty $ the difference ${\tau }_{X}^{\theta }(t)-{r}_{X}(t)$ decreases as a function of ${r}_{Y}(t)$.
  • (c)  
    For any mixing $Y$ the difference ${\tau }_{X}^{\theta }(t)-{\tau }_{X}^{\theta }(\infty )$ exhibits the same type of asymptotic as the covariance ${r}_{Y}(t)$, that is
    Equation (3.16)

Proof. Let us start by writing the conditional covariance,

Equation (3.17)

which implies that

Equation (3.18)

If we substitute $D=D^{\prime} +D^{\prime\prime} $, both the numerator and the denominator factorise into products of independent random variables. The formula

Equation (3.19)

follows.

In point (b) the monotonic dependence is a consequence of the fact that only the numerator of the fraction in (3.14) depends on ${r}_{Y}(t)$. It is the Laplace transform of the variable D evaluated at the point ${\theta }^{2}\left(1-{r}_{Y}(t)\right)$; it decreases as the argument increases, so it is an increasing function of ${r}_{Y}(t)$, and this dependence is continuous. When ${r}_{Y}(t)=1$, which is always the case for $t=0$, formula (3.14) simplifies and we can apply Jensen's inequality,

Equation (3.20)

For ${r}_{Y}(t)$ close to 0 we can use proposition 5 to determine that the codifference is positive. For the last property listed in (b), let us write the difference ${\tau }_{X}^{\theta }(t)-{r}_{X}^{}(t)$ as a function of $r={r}_{Y}(t)$,

Equation (3.21)

Using the dominated convergence theorem, the derivative of the numerator exists and determines the sign of $f^{\prime} $. Denoting ${F}_{r}:= {\theta }^{2}(D-{\mathbb{E}}[D])(1-r)$ we have

Equation (3.22)

where we used the fact that ${\mathbb{E}}[{F}_{r}]=0$ and $x({{\rm{e}}}^{-x}-1)\leqslant 0$.

For (c) consider ${\tau }_{X}^{\theta }(t)-{\tau }_{X}^{\theta }(\infty )$ and use the expansion $\mathrm{ln}(x)\approx x-1$

Equation (3.23)

Now we can rearrange the right side of the above equation and get

Equation (3.24)
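
For distributions of D with an explicit Laplace transform, formula (3.14) can be evaluated in closed form. A sketch of ours for gamma-distributed D (shape k and scale s, so that ${\mathbb{E}}[{{\rm{e}}}^{-{uD}}]={(1+{su})}^{-k}$; all parameter values are assumptions) with an exponentially decorrelating Y:

```python
import numpy as np

def tau_gamma(t, theta=1.0, k=2.0, s=0.5, lam=1.0):
    """Codifference of X = sqrt(D) * Y for gamma-distributed D, using the
    Laplace transform E[exp(-u D)] = (1 + s*u)**(-k) and r_Y(t) = exp(-lam*t)."""
    r_Y = np.exp(-lam * t)
    laplace = lambda u: (1.0 + s * u) ** (-k)
    return (np.log(laplace(theta**2 * (1.0 - r_Y)))
            - 2.0 * np.log(laplace(theta**2 / 2.0))) / theta**2

t = np.linspace(0.0, 10.0, 201)
tau = tau_gamma(t)
r_X = 2.0 * 0.5 * np.exp(-t)   # covariance E[D] * r_Y(t), here E[D] = k*s = 1
# tau starts below r_X (tau(0) ~ 0.89 < 1), crosses above it, and levels
# off at a positive plateau (~0.08) while the covariance decays to zero
```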

The analogues of (a) and (b) also hold for the LCF; the derivation is very similar to that of proposition 6, so we only state the result.

Proposition 7. Let the process $X$ have the form

Equation (3.25)

where $Y$ is a centred Gaussian process and $D\gt 0$ is a random variable independent of $Y$.

Then the LCF has the form

Equation (3.26)

and:

  • (a)  
    if ${\mathbb{E}}[D]\lt \infty $ then
    Equation (3.27)
  • (b)  
    It is additive with respect to D, that is if $D=D^{\prime} +D^{\prime\prime} $ for independent $D^{\prime} $ and $D^{\prime\prime} $, then
    Equation (3.28)
    where ${X}_{t}^{{\prime} }=\sqrt{D^{\prime} }{Y}_{t}$ and ${X}_{t}^{{\prime\prime} }=\sqrt{D^{\prime\prime} }{Y}_{t}$.
  • (c)  
    It is an increasing function of the MSD ${\delta }_{Y}^{2}(t)$.
  • (d)  
    For ${\mathbb{E}}[D]\lt \infty $ the difference ${\delta }_{X}^{2}(t)-{\zeta }_{X}^{\theta }(t)$ is non-negative and increases as ${\delta }_{X}^{2}(t)$ increases.
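As an illustration of points (c) and (d) (our sketch; the LCF convention (3.5), the Brownian choice of Y and the gamma distribution of D are all assumptions), the LCF of $X=\sqrt{D}{B}_{t}$ grows only logarithmically while the MSD grows linearly:

```python
import numpy as np

theta, k, s = 1.0, 2.0, 0.5   # gamma-distributed D with E[D] = k*s = 1

def zeta(t):
    """LCF of X = sqrt(D) * B_t for Brownian B with MSD 2t, evaluated
    through the gamma Laplace transform E[exp(-u D)] = (1 + s*u)**(-k)."""
    msd_Y = 2.0 * t
    return (2.0 * k / theta**2) * np.log(1.0 + s * theta**2 * msd_Y / 2.0)

t = np.linspace(0.0, 50.0, 501)
msd_X = k * s * 2.0 * t       # ordinary MSD of X
# zeta(t) <= msd_X(t) for all t, and the gap msd_X - zeta grows together
# with msd_X, in line with point (d): the linear law is replaced by a
# logarithmic one, exposing the non-Gaussianity of X
```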

The asymptotic of the codifference near $t=0$ depends on the tail behaviour of pD and can be used to study it. The following result makes this statement precise.

Proposition 8. If the stationary Gaussian process $Y$ is mean-square continuous and ${X}_{t}=\sqrt{D}{Y}_{t}$, then

  • (a)  
    for ${\mathbb{E}}[D]\lt \infty $
    Equation (3.29)
    and
    Equation (3.30)
  • (b)  
    If
    Equation (3.31)
    for some slowly varying function L, then
    Equation (3.32)

Proof. For a mean-square continuous $Y$ the covariance ${r}_{Y}$ is a continuous function. The codifference is also continuous and $\mathrm{ln}(x)\approx x-1$ implies that

Equation (3.33)

Because

Equation (3.34)

The derivation for ${\zeta }_{X}^{\theta }$ is similar. For point (b) we write the asymptotic of ${\tau }_{X}^{\theta }(0)-{\tau }_{X}^{\theta }(t)$ as the integral

Equation (3.35)

and simplify the ratio under investigation

Equation (3.36)

Now let us turn our attention from a random D to the class of processes for which the shape of the covariance function varies from trajectory to trajectory:

Proposition 9. For a mixture of stationary Gaussian processes with fixed non-random scale $D={\sigma }^{2}$

Equation (3.37)

The above formula also implies that

Equation (3.38)

Proof. The assumption of a fixed variance means that ${\mathbb{E}}[{X}_{t}^{2}| { \mathcal C }]={\sigma }^{2}$ for some deterministic ${\sigma }^{2}$. Using the conditional expectation it follows that

Equation (3.39)

Now the left inequality is just Jensen's inequality applied to the function $\mathrm{ln}$. The right inequality follows from two approximations: the first is $\mathrm{ln}x\leqslant x-1$, the second is $\exp (x)\leqslant {L}^{-1}\sinh (L)x+\cosh (L)$ for $-L\leqslant x\leqslant L$. □

For an exponentially decaying conditional covariance stronger results are available:

Proposition 10. For a mixture of stationary centred Gaussian processes with conditional covariance ${r}_{X}(t| {\rm{\Lambda }},D)=D{{\rm{e}}}^{-t{\rm{\Lambda }}}$, with Λ and D independent, we observe the following asymptotic properties.

  • (a)  
    Power law behaviour: if ${p}_{{\rm{\Lambda }}}(\lambda )\sim L(\lambda ){\lambda }^{\alpha -1},\lambda \to {0}^{+}$ for slowly varying L, then
    Equation (3.40)
    where the constant ${C}_{\alpha ,\theta }$ is
    Equation (3.41)
  • (b)  
    Quick decay behaviour: if ${p}_{{\rm{\Lambda }}}(\lambda )\in { \mathcal O }({\lambda }^{\infty }),\lambda \to {0}^{+}$ then
    Equation (3.42)
  • (c)  
    Truncation: if ${\rm{\Lambda }}={\lambda }_{0}+\widetilde{{\rm{\Lambda }}}$ for deterministic ${\lambda }_{0}\gt 0$ then
    Equation (3.43)
    where $\widetilde{X}$ is a solution of the Langevin equation with viscosity $\widetilde{{\rm{\Lambda }}}$ and the same D.

Proof. For (a) we first apply the expansion $\mathrm{ln}(x)\approx x-1$ to ${\tau }_{X}^{\theta }(t)-{\tau }_{X}^{\theta }(\infty )$

Equation (3.44)

Therefore

Equation (3.45)

Note that the inner sum consists of positive terms, so the commutation of expectation and sum is justified.

Now, knowing the asymptotic ${p}_{{\rm{\Lambda }}}(\lambda )\sim L(\lambda ){\lambda }^{\alpha -1},\lambda \to {0}^{+}$, we can apply the Tauberian theorem

Equation (3.46)

The sum (3.45) consists of positive terms, so let us study its asymptotic

Equation (3.47)

where the commutation of taking the limit and the sum is justified by the inequality

Equation (3.48)

The right term converges with respect to $t$ and is therefore bounded, so the left term is uniformly bounded with respect to k and we can use the dominated convergence theorem.

Note that the resulting sum is also bounded with respect to α,

Equation (3.49)

This concludes the derivation of (a). Now let us prove (b). We fix an integer $N\gt 0$ and make the estimate

Equation (3.50)

to obtain

Equation (3.51)

The limit follows because it is a convergent sum of positive terms.

In order to prove the last point (c), notice that ${{\rm{e}}}^{-t{\lambda }_{0}}\lt 1$ and ${{\rm{e}}}^{t{\lambda }_{0}}\gt 1$, so $x\mapsto {x}^{{{\rm{e}}}^{-t{\lambda }_{0}}}$ is a concave function and ${{\rm{e}}}^{-{\theta }^{2}D{{\rm{e}}}^{t{\lambda }_{0}}}\leqslant {{\rm{e}}}^{-{\theta }^{2}D}$. Therefore

Equation (3.52)
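
The power-law decay in point (a) can be checked by Monte Carlo over (D, Λ) alone. Assuming the conditional covariance $D{{\rm{e}}}^{-t{\rm{\Lambda }}}$ of proposition 10, the reduced formula (3.6) gives the exact expression ${\tau }_{X}^{\theta }(t)-{\tau }_{X}^{\theta }(\infty )={\theta }^{-2}\mathrm{ln}({\mathbb{E}}[{{\rm{e}}}^{-{\theta }^{2}D(1-{{\rm{e}}}^{-t{\rm{\Lambda }}})}]/{\mathbb{E}}[{{\rm{e}}}^{-{\theta }^{2}D}])$. A sketch of ours with gamma-distributed Λ, for which ${p}_{{\rm{\Lambda }}}(\lambda )\sim {\lambda }^{\alpha -1}$ as $\lambda \to {0}^{+}$:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, alpha, n = 1.0, 0.7, 2_000_000
D = rng.exponential(1.0, n)        # random amplitudes
Lam = rng.gamma(alpha, 1.0, n)     # p_Lam(l) ~ l**(alpha-1) near 0

def tau_diff(t):
    """tau(t) - tau(inf) for conditional covariance D * exp(-t * Lam)."""
    num = np.mean(np.exp(-theta**2 * D * (1.0 - np.exp(-t * Lam))))
    return np.log(num / np.mean(np.exp(-theta**2 * D))) / theta**2

for t in (10.0, 100.0, 1000.0):
    print(t, tau_diff(t))
# successive values shrink roughly by 10**(-alpha) per decade, the
# predicted power law t**(-alpha); Monte Carlo noise grows at large t
```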

In the next proposition we will study the properties of the increment process

${\rm{\Delta }}{X}_{t}:= {X}_{t+{\rm{\Delta }}t}-{X}_{t},$ (3.53)

and use it to detect non-ergodicity.

Proposition 11. Considering the same process as in proposition 10, the codifference of its increments ${\rm{\Delta }}{X}_{t}$ converges to a constant

Equation (3.54)

which equals 0 only when both D and Λ are deterministic. After suitable rescaling ${\rm{\Delta }}{\widetilde{X}}_{t}:= {\rm{\Delta }}{X}_{t}/\sqrt{{\rm{\Delta }}t}$ the limit becomes independent of ${\rm{\Delta }}t$,

Equation (3.55)

Proof. The reasoning is similar to the one shown in the proof of proposition 6 (b). The increment process ${\rm{\Delta }}{X}_{t}$ is a stationary process, which is conditionally Gaussian. We can calculate its conditional variance

Equation (3.56)

and the variance of the difference

Equation (3.57)

The limit of the codifference is

Equation (3.58)

Applying Jensen's inequality to the variable ${{\rm{e}}}^{2{\theta }^{2}D{{\rm{e}}}^{-{\rm{\Delta }}t{\rm{\Lambda }}}}$ and the function $x\mapsto {x}^{2}$ yields the inequality.

For the rescaled process it is straightforward to calculate that

Equation (3.59)
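
Proposition 11 can also be observed numerically by simulating the superstatistical Ornstein–Uhlenbeck ensemble exactly and estimating the codifference of the increments. A minimal sketch of ours (all parameter values are assumptions): with both D and Λ random, the increment codifference settles on a positive plateau while the increment covariance vanishes.

```python
import numpy as np

rng = np.random.default_rng(4)
n_traj, n_steps, dt, theta = 50_000, 80, 0.5, 1.0

D = rng.exponential(1.0, (n_traj, 1))     # random amplitude
Lam = rng.exponential(1.0, (n_traj, 1))   # random return rate

# exact OU update per trajectory: conditional covariance D * exp(-t * Lam)
a = np.exp(-Lam * dt)
x = np.sqrt(D) * rng.standard_normal((n_traj, 1))
path = np.empty((n_traj, n_steps))
path[:, :1] = x
for k in range(1, n_steps):
    x = a * x + np.sqrt(D * (1.0 - a**2)) * rng.standard_normal((n_traj, 1))
    path[:, k:k+1] = x

dx = np.diff(path, axis=1)                # increment process Delta X
num = np.mean(np.exp(1j * theta * (dx[:, :1] - dx)), axis=0)
den = (np.mean(np.exp(1j * theta * dx[:, :1]))
       * np.mean(np.exp(-1j * theta * dx), axis=0))
tau = np.real(np.log(num / den)) / theta**2
cov = np.mean(dx[:, :1] * dx, axis=0) - np.mean(dx[:, :1]) * np.mean(dx, axis=0)
print(tau[-1], cov[-1])   # positive plateau versus a covariance near zero
```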

The last class of covariance functions considered is ${Df}({\rm{\Lambda }})\exp (-t{\rm{\Lambda }})$. The increment process from proposition 11 fits this class with $f({\rm{\Lambda }})=1-\exp (-{\rm{\Delta }}t{\rm{\Lambda }})$; higher-order increments and other similar transformations correspond to more complex f, but their behaviour at ${0}^{+}$ can easily be traced. Note that the proposition below is not a straightforward generalisation of proposition 10: the statements and the methods of derivation are similar, but the assumptions do not coincide, because the introduction of the scaling $f({\rm{\Lambda }})$ with a power law at 0 comes at the cost of the strong requirement that the tails of D decay quickly, ${\mathbb{E}}[\exp ({\theta }^{2}D)]\lt \infty $:

Proposition 12. Let us consider the stationary, conditionally Gaussian process characterised by the conditional covariance

Equation (3.60)

Now, let us assume that D and Λ are independent, ${\mathbb{E}}\left[{{\rm{e}}}^{{\theta }^{2}D}\right]\lt \infty $ and the PDF of Λ has the form

Equation (3.61)

for slowly varying function L. Then, for this class of processes

Equation (3.62)

Proof. We start from the formula

Equation (3.63)

which has the asymptotic

Equation (3.64)

We thus need to study the tail behaviour of

Equation (3.65)

We will analyse it using a bottom-up approach, starting from the long-time asymptotic of the conditional expected value ${\mathbb{E}}[{\boldsymbol{\cdot }}| D]$ for a single term,

Equation (3.66)

Now take $\delta \gt 0$ such that $f(\lambda )\lt 1$ for all $0\leqslant \lambda \lt \delta $ and $\epsilon \gt 0$ such that $L(1/t)\gt {t}^{-\epsilon }$ for sufficiently large $t$

Equation (3.67)

where we additionally used the inequality ${x}^{k}{{\rm{e}}}^{-x}\leqslant {k}^{k}{{\rm{e}}}^{-k}$. Now, for the left term above observe that

Equation (3.68)

so it is bounded with respect to $t$ by some constant, call it c1,

Equation (3.69)

And for the right term, Stirling's formula shows that

Equation (3.70)

Moreover, a straightforward calculation yields

Equation (3.71)

so the whole series behaves like ${k}^{-1-\alpha -\gamma }$ and is summable.

Therefore we have shown that the dominated convergence theorem can be applied to the series (3.65) multiplied by ${t}^{\alpha +\gamma }/L(1/t)$. According to (3.66) the term $k=1$ converges to ${\mathbb{E}}[D]{\rm{\Gamma }}(\alpha +\gamma )$ and all terms $k\gt 1$ decay like ${t}^{-k\gamma }$. Only the first term remains in the limit $t\to \infty $ and

Equation (3.72)

Remark. Propositions 10 and 12 above can be generalised by replacing $t$ by $g(t)$ in the formula for the covariance; the only requirement is that $g(t)\to \infty $ as $t\to \infty $. This allows one to consider more general types of dependence, e.g. the power law ${t}^{-2H}$ corresponds to ${\rm{\Lambda }}=2H$ and $g(t)=\mathrm{ln}(t)$.

3.3. Diffusing diffusivity

Proposition 13. Let us assume that $Y$ and Z are centred stationary Gaussian processes. Without loss of generality we assume ${\mathbb{E}}[{Y}_{t}^{2}]={\mathbb{E}}[{Z}_{t}^{2}]=1$. Let $X$ be given by

  • (a)  
    Equation (3.73)
  • (b)  
    Equation (3.74)

with deterministic $\sigma ,d\gt 0$. Then the codifference of $X$ is given by elementary formulae, stated at the end of the corresponding derivations in equations (3.78) and (3.82).

Proof. We begin by conditioning on Z; the averages then become Gaussian averages rescaled by the values of Z. Next we calculate the denominator of the codifference

Equation (3.75)

The last equality corresponds to calculating a Gaussian integral, which can also be interpreted as the Laplace transform of the ${\chi }^{2}(1)$ distribution. The numerator is more complicated,

Equation (3.76)

The above expectation can be calculated if we decompose $[{Z}_{s+t},{Z}_{s}]\mathop{=}\limits^{d}[{A}_{+}+{A}_{-},{A}_{+}-{A}_{-}]$, where ${A}_{+},{A}_{-}$ are independent Gaussian variables with variances ${\mathbb{E}}[{A}_{\pm }^{2}]=(1\pm {r}_{Z}(t))/2$. After this substitution the exponent in (3.76) factorises into

Equation (3.77)

Both terms are Gaussian integrals which can easily be evaluated. Taking them together and calculating the logarithm we obtain

Equation (3.78)

For (b) the denominator is simple and yields ${\left(1+{\left(\theta \sigma \right)}^{2}\right)}^{-1}$. The numerator can be expressed as

Equation (3.79)

Using the formula for the two-dimensional density of $[{Z}_{s+t},{Z}_{s}]$, the term under the logarithm in the formula for the codifference can be expressed as an integral of the function

Equation (3.80)

where

Equation (3.81)

The integration of (3.80) over ${{\mathbb{R}}}^{2}$ can be reduced to an integration over ${{\mathbb{R}}}_{+}^{2}$

Equation (3.82)

Now, the codifference is ${\tau }_{X}^{\theta }(t)={\theta }^{-2}\mathrm{ln}I$.□

When ${r}_{Y}(t)=0$ the above formulae simplify significantly and simple asymptotics can be derived by direct computation, see equations (2.31) and (2.32). The case ${r}_{Z}(t)=0$ also leads to a simplification and can be considered in a more general setting.
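
The codifference of the diffusing-diffusivity models can also be estimated by direct Monte Carlo simulation. A sketch of ours for case (a), ${X}_{t}=\sigma {Z}_{t}{Y}_{t}$, with both Y and Z taken as unit-variance Ornstein–Uhlenbeck processes (all rates and sizes are assumed parameter choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def ou(n_traj, n_steps, dt, lam, rng):
    """Stationary unit-variance OU ensemble with covariance exp(-lam * t)."""
    a, s = np.exp(-lam * dt), np.sqrt(1.0 - np.exp(-2.0 * lam * dt))
    y = np.empty((n_traj, n_steps))
    y[:, 0] = rng.standard_normal(n_traj)
    for k in range(1, n_steps):
        y[:, k] = a * y[:, k - 1] + s * rng.standard_normal(n_traj)
    return y

n_traj, n_steps, dt, sigma, theta = 20_000, 120, 0.1, 1.0, 1.0
Y = ou(n_traj, n_steps, dt, lam=1.0, rng=rng)
Z = ou(n_traj, n_steps, dt, lam=0.2, rng=rng)   # slowly varying diffusivity
X = sigma * Z * Y                               # case (a) of proposition 13

num = np.mean(np.exp(1j * theta * (X[:, :1] - X)), axis=0)
den = (np.mean(np.exp(1j * theta * X[:, :1]))
       * np.mean(np.exp(-1j * theta * X), axis=0))
tau = np.real(np.log(num / den)) / theta**2
t = dt * np.arange(n_steps)
r_X = sigma**2 * np.exp(-t) * np.exp(-0.2 * t)  # covariance r_Y(t) * r_Z(t)
# tau(0) = ln(1 + (theta*sigma)^2) ~ 0.69 < r_X(0) = 1: the two measures
# separate already at t = 0, reflecting the non-Gaussianity of X
```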

Proposition 14. If ${Y}_{t}$ is a stationary Gaussian process with ${\mathbb{E}}[{Y}_{t}^{2}]=1$, and for large enough $t$ the values ${D}_{s}$ and ${D}_{s+t}$ are i.i.d. and independent of $Y$, then for ${X}_{t}=\sqrt{{D}_{t}}{Y}_{t}$

Equation (3.83)

where D has the same distribution as ${D}_{s}$ or ${D}_{s+t}$.

Proof. We take $t$ large enough that we can represent the values of $X$ as ${X}_{s}=\sqrt{{D}_{1}}{Y}_{s}$ and ${X}_{s+t}=\sqrt{{D}_{2}}{Y}_{s+t}$ for i.i.d. ${D}_{1}$ and ${D}_{2}$. Conditioning on ${D}_{1},{D}_{2}$, the codifference can be expressed as

Equation (3.84)

Now we consider the numerator in the above, divide it by ${r}_{Y}(t)$ and, using dominated convergence as in previous propositions,

Equation (3.85)

The result follows. □

Acknowledgments

We acknowledge funding from the Polish National Science Centre, HARMONIA 8 grant no. UMO-2016/22/M/ST1/00233, and from Deutsche Forschungsgemeinschaft, grants ME1535/6-1 and ME1535/7-1. RM was supported by an Alexander von Humboldt Polish Honorary Research Scholarship from the Foundation for Polish Science (Fundacja na rzecz Nauki Polskiej). We acknowledge the support of Deutsche Forschungsgemeinschaft (German Research Foundation) and the Open Access Publication Fund of Potsdam University.

Appendix. Sample size dependence of codifference and covariance

In supplement to figure 2 we show in figure A1 that even for smaller sample sizes, such as ${10}^{4}$, ${10}^{3}$ and 500, significant differences between the covariance and the codifference of increments are visible.

Figure A1. Estimated codifference τ and covariance r for sample sizes (from left to right) ${10}^{4}$, ${10}^{3}$ and 500.
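
This sample-size dependence can be reproduced with the ensemble estimator sketched in section 3.1. A minimal version of ours, using the plain superstatistical OU process with exponential D rather than its increments (parameter choices are assumptions); the plateau ln(9/8) ≈ 0.118 of proposition 5 emerges from the noise already at moderate sample sizes:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps, theta = 0.1, 100, 1.0

def sample_tau(n_traj):
    """Codifference estimated from n_traj superstatistical OU trajectories
    (random amplitude D, fixed unit return rate)."""
    D = rng.exponential(1.0, (n_traj, 1))
    a, s = np.exp(-dt), np.sqrt(1.0 - np.exp(-2.0 * dt))
    x = np.empty((n_traj, n_steps))
    x[:, 0] = rng.standard_normal(n_traj)
    for k in range(1, n_steps):
        x[:, k] = a * x[:, k - 1] + s * rng.standard_normal(n_traj)
    x *= np.sqrt(D)
    num = np.mean(np.exp(1j * theta * (x[:, :1] - x)), axis=0)
    den = (np.mean(np.exp(1j * theta * x[:, :1]))
           * np.mean(np.exp(-1j * theta * x), axis=0))
    return np.real(np.log(num / den)) / theta**2

for n in (500, 1_000, 10_000):
    print(n, sample_tau(n)[-1])   # plateau near ln(9/8) ~ 0.118, with
                                  # scatter shrinking roughly like 1/sqrt(n)
```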

Footnotes

  • Strictly speaking, these are sufficiently regular functionals acting on the space of observations. In quantum mechanics each such linear functional corresponds to an observable. In statistical mechanics a similar rôle is fulfilled by ${\mathbb{E}}[f(X)]$ for bounded continuous functions f. In statistics linearity is usually not required and various measures have the form $g({\mathbb{E}}[f(X)])$. This is the case also in the present work.

  • This similarity is what causes the 'strong anomalous diffusion' property, for which the power-law dependency ${\mathbb{E}}| {X}_{t}{| }^{q}\propto {t}^{{qv}(q)}$ is observed for non-constant function v [36].

  • Even for classical Brownian motion it is the infinite-dimensional Wiener space [50].

  • The remaining class of processes, which are ergodic but non-mixing, is complicated, and such processes do not seem to appear in applications. For a mathematically constructed example of such a process and a discussion see [59].
