Brownian motion, dynamical randomness and irreversibility

A relationship giving the entropy production as the difference between a time-reversed entropy per unit time and the standard one is applied to stochastic processes of diffusion of Brownian particles between two reservoirs at different concentrations. The entropy production in the nonequilibrium steady state is interpreted in terms of a time asymmetry in the dynamical randomness between the forward and backward paths of the diffusion process.


Introduction
The Brownian motion of small particles suspended in liquids was discovered under a microscope in 1827 by the botanist Robert Brown [1]. By the end of the 19th century and the beginning of the 20th century, it became clear that this irregular movement is caused by the incessant collisions of the Brownian particles with the molecules composing the fluid [1,2]. In a famous series of papers published in 1905 and in the following years, Einstein developed a molecular kinetic theory of Brownian motion showing how the diffusion coefficient of a Brownian particle depends on its radius as well as on the temperature and viscosity of the surrounding fluid [3]. Moreover, Einstein proposed the measurement of the diffusion coefficient D from the direct observation of the random walk of the Brownian particle according to the formula

⟨(Δx)²⟩ = 2 D t,   (1)

which he deduced from the diffusion equation

∂n/∂t = D ∇²n   (2)

for the density n of Brownian particles. His work was the basis for Perrin's experiments of 1908, which demonstrated the reality of atoms and molecules [4]. 1908 is also the year when Langevin proposed a Newtonian equation for the Brownian particle by introducing a fluctuating force due to the fluid molecules [5,6]. These years mark the beginning of the theory of stochastic processes in mathematics, physics, chemistry and biology.

One of the main features of Brownian motion and other stochastic processes is their property of dynamical randomness, according to which their trajectories present a temporal disorder. Dynamical randomness can be characterized in much the same way as Einstein himself developed his theory of Brownian motion. Indeed, one of his main ideas was to connect the probability P(α) of a random configuration or phase α of the microsystem to the thermodynamic entropy S(α) according to Boltzmann's formula

P(α) ∼ exp[S(α)/k_B],   (3)

where k_B = 1.38 × 10⁻²³ J K⁻¹ is Boltzmann's constant. The probability P(α) is here defined at a given time.
This idea was extended to dynamical randomness by Shannon [7], Onsager and Machlup [8,9], Kolmogorov and Sinai [10]-[12], and others by considering the multiple-time probability that the system successively visits a sequence of states ω_0 ω_1 ω_2 ··· ω_{n−1} at regular time intervals t_k = kτ (k = 0, 1, 2, ..., n − 1). The states are defined by a partition P of the phase space of the system into cells ω. The above sequence means that the system visits the state or cell ω_k at time t_k = kτ: α(t_k) ∈ ω_k. Typically, the multiple-time probability decays exponentially with the number n of time intervals τ as

P(ω_0 ω_1 ··· ω_{n−1}) ∼ exp[−n τ h(P)].   (4)

The rate of decay is the so-called entropy per unit time of the process with respect to the partition or coarse graining P and is defined as [10]-[12]

h(P) ≡ lim_{n→∞} −(1/nτ) Σ_{ω_0 ω_1 ··· ω_{n−1}} P(ω_0 ω_1 ··· ω_{n−1}) ln P(ω_0 ω_1 ··· ω_{n−1}).   (5)
This quantity is the rate at which the memory of a computer is occupied during the optimal recording of the trajectory of a stochastic process with a resolution into cells ω [13]. In this sense, the entropy per unit time is a rate of production of information and it characterizes the randomness generated by the stochastic process during its time evolution. Many random processes are concerned with these considerations and, especially, Brownian motion. If the cells of the partition are of size ε, the entropy per unit time defines an ε-entropy, h(ε) = h(P_ε), a concept introduced by Shannon and Kolmogorov [14]-[18]. For a Brownian motion of diffusion coefficient D, the ε-entropy per unit time scales as

h(ε) ∼ D/ε².   (6)

The ε-entropy increases as the resolution ε decreases, meaning that Brownian motion has dynamical randomness on arbitrarily small scales. The behaviour (6) has been experimentally confirmed [19]. Many other stochastic processes have a positive entropy per unit time. Furthermore, spatially extended stochastic systems may have a positive entropy per unit time and unit volume [13,14]. The dynamical randomness of stochastic processes is incompatible with an underlying microdynamics ruled by deterministic Newton's equations, unless this deterministic dynamics itself has a positive entropy per unit time. One known mechanism is the sensitivity to initial conditions which arises in dynamical systems having positive Lyapunov exponents. In this case, the entropy per unit time reaches a supremum as the resolution decreases (ε → 0), which defines the Kolmogorov-Sinai entropy per unit time:

h_KS ≡ Sup_P h(P),   (7)

which is independent of the coarse graining P [10]-[12]. In 1976, Pesin proved that, in closed and finite dynamical systems, the Kolmogorov-Sinai entropy is equal to the sum of positive Lyapunov exponents, h_KS = Σ_{λ_i>0} λ_i [20,21]. Deterministic systems with dynamical randomness are called chaotic [13].
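As a concrete illustration of the entropy per unit time (5), the following minimal sketch (our own, not part of the original analysis; the two-state Markov chain and its parameters are hypothetical) compares the analytic entropy per time step, h = −Σ_i π_i Σ_j P_ij ln P_ij, with an estimate obtained from the block probabilities of a long sampled path, exactly as in definition (5) with τ = 1.

```python
import numpy as np
from collections import Counter

# Hypothetical two-state Markov chain (not from the paper): transition matrix P.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])   # stationary distribution, solves pi P = pi

# Analytic entropy per time step: h = -sum_i pi_i sum_j P_ij ln P_ij.
h_exact = -sum(pi[i] * P[i, j] * np.log(P[i, j])
               for i in range(2) for j in range(2))

# Empirical estimate: block entropies H(n) of a sampled path; for a Markov
# chain the difference H(n) - H(n-1) already converges to h at n = 2.
rng = np.random.default_rng(0)
n_steps = 200_000
u = rng.random(n_steps)
path = np.empty(n_steps, dtype=np.int64)
path[0] = 0
for t in range(1, n_steps):
    s = path[t - 1]
    path[t] = 1 - s if u[t] < P[s, 1 - s] else s   # switch with prob P[s, 1-s]

def block_entropy(seq, n):
    counts = Counter(tuple(seq[k:k + n]) for k in range(len(seq) - n + 1))
    total = sum(counts.values())
    p = np.array(list(counts.values()), dtype=float) / total
    return -np.sum(p * np.log(p))

h_est = block_entropy(path, 3) - block_entropy(path, 2)
print(h_exact, h_est)   # the two values should agree within sampling error
```

For a genuine Brownian path the analogous estimate would grow as D/ε² when the cell size ε is refined, which is the scaling (6).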
The Lyapunov exponents and Kolmogorov-Sinai entropy have been studied in deterministic models of Brownian motion [13,14,22]. These quantities are related by the following inequality:

h_KS ≤ Σ_{λ_i>0} λ_i,   (8)

showing that the dynamical instability provides room for very large dynamical randomness in systems of interacting particles [19]. However, there exist systems where the standard definition of Lyapunov exponents does not apply, such as quantum systems as well as deterministic systems with stochastic boundary conditions ruling a flux of particles in and out of the system. In these cases, the characterization of dynamical randomness is no longer possible in terms of the usual Lyapunov exponents, which should be replaced by the more powerful concept of entropy per unit time [23,24]. It should be emphasized that the entropy per unit time is not directly related to the thermodynamic entropy. They clearly differ by their physical units. Moreover, the entropy per unit time also differs from the entropy production since the former can be positive at equilibrium when the latter vanishes. However, recent advances in dynamical systems theory have shown that the transport coefficients as well as other irreversible properties such as entropy production can be related to the difference between two large-deviation quantities characterizing the underlying dynamics and, in particular, its dynamical randomness. A series of fundamental relationships of this type has recently been discovered in:

• the escape-rate theory [25]-[29];
• the theory of the hydrodynamic modes of diffusion [30]-[32];
• the fluctuation theorem [33]-[42]; and
• the theory of the large-deviation dynamical properties of the nonequilibrium steady state [43].
An example of such relationships is given by the escape-rate formula, which gives the diffusion coefficient in terms of the difference between the sum of positive Lyapunov exponents and the Kolmogorov-Sinai entropy for open systems with escape of trajectories [26,27]:

D = lim_{L→∞} (L/π)² [Σ_{λ_i>0} λ_i − h_KS]_L.   (9)

For diffusion, escape occurs when the Brownian particle reaches the boundary of some domain. If the domain is contained between two parallel planes separated by the distance L, the escape rate is given by γ ≃ D(π/L)² [25]-[27]. On the other hand, the escape rate is equal to the slight difference between the dynamical instability characterized by the sum of positive Lyapunov exponents, Σ_{λ_i>0} λ_i, and the dynamical randomness h_KS. This difference comes from the nonequilibrium condition imposed at the boundaries [13].
A very similar relationship has been derived in the theory of the hydrodynamic modes of diffusion in dynamical systems on a spatially periodic lattice such as the periodic Lorentz gases [30]-[32]. The diffusive modes are constructed as the Liouvillian eigenstates of the Frobenius-Perron operator associated with some Pollicott-Ruelle resonances [44]-[47]. Since the system is spatially periodic, these eigenstates as well as the resonances depend on a wavenumber k introduced by spatial Fourier analysis [13,48,49]. The Pollicott-Ruelle resonance of the diffusive modes is given by Van Hove's formula [50]

s_k = lim_{t→∞} (1/t) ln ⟨exp[i k·(r_t − r_0)]⟩ = −D k² + O(k⁴),   (10)

where r is the position of the Brownian particle. Recent work has shown that the diffusive modes are singular with fractal properties [30]-[32] and the entropy production has been derived from the underlying deterministic dynamics, thanks to the construction of these modes [51]-[55]. In two-dimensional Lorentz gas models of diffusion, the Hausdorff dimension D_H is given by the root of the following equation [30]-[32]:

D k² = λ(D_H) − h_KS(D_H),   (11)

where λ(D_H) and h_KS(D_H) are the positive Lyapunov exponent and the Kolmogorov-Sinai entropy calculated with respect to the Hausdorff invariant measure. We notice the similarity with equation (9), where the wavenumber is given by k = π/L and where the sum contains a single positive Lyapunov exponent for a two-dimensional Lorentz gas. In equations (9) and (11), both sides vanish with the wavenumber and the similarity holds term by term.
Further relationships of the same kind have been obtained for the entropy production of nonequilibrium steady states on the basis of the fluctuating quantity [38]-[42]

Z(t) ≡ ln [P(ω_0 ω_1 ··· ω_{n−1}) / P(ω_{n−1} ··· ω_1 ω_0)],  with t = nτ,   (12)

which involves the ratio of the multiple-time probabilities of a path ω_0 ω_1 ··· ω_{n−1} and its time reversal ω_{n−1} ··· ω_1 ω_0. This quantity characterizes the lack of detailed balance out of equilibrium and its mean value gives the entropy production

d_iS/dt = lim_{t→∞} (k_B/t) ⟨Z(t)⟩ ≥ 0.   (13)

The probability that Z(t) takes the given value ζt decays exponentially at the rate R(ζ) such that

P[Z(t) ≃ ζt] ∼ exp[−R(ζ) t].   (14)

The decay rate R(ζ) obeys the fluctuation theorem [33]-[42]

R(−ζ) = R(ζ) + ζ.   (15)

Here again, an irreversible property such as ζ is given as the difference between two large-deviation properties representing decay rates of multiple-time probabilities. The fluctuation theorem concerns, in particular, processes of reaction and diffusion [40]-[42]. Finally, we can introduce a concept of time-reversed entropy per unit time as [43]

h^R(P) ≡ lim_{n→∞} −(1/nτ) Σ_{ω_0 ω_1 ··· ω_{n−1}} P(ω_0 ω_1 ··· ω_{n−1}) ln P(ω_{n−1} ··· ω_1 ω_0),   (16)

which is similar to the standard entropy (5) except that the logarithm contains the probability of the backward path. This quantity allows us to relate the entropy per unit time (5) to the entropy production according to

(1/k_B) d_iS/dt = h^R(P) − h(P) ≥ 0   (17)

for τ → 0, as shown in [43], where this result is proved for Markovian processes. Equation (17) has a structure which is again similar to equations (9), (11) and (15). Here, it is the irreversible property of entropy production which is given in terms of the difference between two characteristic quantities of dynamical randomness. The time-reversed entropy per unit time h^R characterizes the dynamical randomness of the paths followed backward in time but averaged with respect to the forward time evolution. Equation (17) thus shows that the entropy production in a nonequilibrium steady state is due to a lack of time-reversal symmetry in the dynamical randomness of the paths followed either backward or forward in time.
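The mean growth rate of the quantity Z(t), whose average yields the entropy production, can be evaluated in closed form for stationary Markov chains. The following sketch (our own hypothetical example, not from the paper) shows that it is positive for a chain with a biased cycle and vanishes when detailed balance holds.

```python
import numpy as np

def entropy_production_per_step(P):
    """Mean growth per step of Z = ln[P(forward path)/P(reversed path)] for a
    stationary Markov chain: sum_ij pi_i P_ij ln[(pi_i P_ij)/(pi_j P_ji)]."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    ep = 0.0
    for i in range(len(P)):
        for j in range(len(P)):
            if P[i, j] > 0 and P[j, i] > 0:
                ep += pi[i] * P[i, j] * np.log((pi[i] * P[i, j])
                                               / (pi[j] * P[j, i]))
    return ep

# Hypothetical biased three-state cycle: 0 -> 1 -> 2 -> 0 is favoured,
# so forward paths are more probable than their time reversals.
P_biased = np.array([[0.1, 0.6, 0.3],
                     [0.3, 0.1, 0.6],
                     [0.6, 0.3, 0.1]])
# Any two-state chain satisfies detailed balance, mimicking equilibrium.
P_db = np.array([[0.7, 0.3],
                 [0.6, 0.4]])

ep_biased = entropy_production_per_step(P_biased)   # = 0.3 ln 2 by symmetry
ep_db = entropy_production_per_step(P_db)           # vanishes
print(ep_biased, ep_db)
```

The positive value for the biased cycle and the vanishing value under detailed balance illustrate how the mean of Z separates nonequilibrium steady states from equilibrium ones.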
We notice that equation (17) gives a finite entropy production under the condition that P(ω_{n−1} ··· ω_1 ω_0) ≠ 0 if P(ω_0 ω_1 ··· ω_{n−1}) ≠ 0. This condition is satisfied even for far-from-equilibrium systems.

Figure 1. Schematic representation of the diffusion process of Brownian particles in a pipe composed of L cells {X_i}_{i=1}^{L} of volume ΔV between two particle reservoirs, A and B.
Our purpose in the present paper is to apply the relationship (17) to diffusion in an open nonequilibrium system, where particles diffuse on a finite chain between two reservoirs at different concentrations. The number of particles in each part of the chain is finite and fluctuates. The resulting random process can be characterized by both the entropy per unit time h(P) and the time-reversed entropy per unit time h^R(P). We would like to show explicitly that the difference between these two quantities gives the well-known entropy production of diffusion.
The plan of the paper is as follows. In section 2, the general scheme we consider is set up. The entropies per unit time are calculated in section 3, and we show that equation (17) gives the entropy production of diffusion. Conclusions are drawn in section 4.

The master equation
We suppose that diffusion takes place in a finite pipe between two reservoirs of particles. We adopt a stochastic description in terms of the master equation by Nicolis and co-workers [56]-[62]. The pipe is divided by regularly spaced planes separated by a distance Δx. The domains at the intersection of the pipe with two next-neighbour planes are cells of volume ΔV. The whole pipe is composed of L cells X_i, so that the total volume of the pipe is V = L ΔV and its length L_x = L Δx. The ith cell is supposed to contain N_i diffusing particles. These numbers are the basic random variables of the stochastic process. The local density of particles is given by n_i = N_i/ΔV and forms a random field in the continuum limit. The system is depicted in figure 1.
The particles randomly jump from cell to cell independently of each other, which induces the process of diffusion. At the left-hand side of the cell X_1, there is a reservoir A with a density A/ΔV of particles. Similarly, there is a reservoir B with a different density B/ΔV at the right-hand side of the cell X_L. The process of random jumps along the chain can be schematically depicted as

A ⇌ X_1 ⇌ X_2 ⇌ ··· ⇌ X_L ⇌ B,   (18)

where we denote by +i the elementary step when a particle jumps from X_i to X_{i+1} and by −i the reversed step.
The whole random process is ruled by the following master equation for the probability P(N_1, N_2, ..., N_L; t) to find the given numbers of particles in the cells:

d/dt P(N_1, ..., N_L; t) = κ A [P(N_1 − 1, N_2, ..., N_L; t) − P(N_1, ..., N_L; t)]
 + κ [(N_1 + 1) P(N_1 + 1, N_2, ..., N_L; t) − N_1 P(N_1, ..., N_L; t)]
 + κ Σ_{i=1}^{L−1} [(N_i + 1) P(..., N_i + 1, N_{i+1} − 1, ...; t) − N_i P(..., N_i, N_{i+1}, ...; t)]
 + κ Σ_{i=1}^{L−1} [(N_{i+1} + 1) P(..., N_i − 1, N_{i+1} + 1, ...; t) − N_{i+1} P(..., N_i, N_{i+1}, ...; t)]
 + κ B [P(N_1, ..., N_L − 1; t) − P(N_1, ..., N_L; t)]
 + κ [(N_L + 1) P(N_1, ..., N_L + 1; t) − N_L P(N_1, ..., N_L; t)].   (19)

The transition rates are given by

W_{+0}(N) = κ A,   (20)
W_{−0}(N) = κ N_1,   (21)
W_{+i}(N) = κ N_i   (i = 1, 2, ..., L − 1),   (22)
W_{−i}(N) = κ N_{i+1}   (i = 1, 2, ..., L − 1),   (23)
W_{+L}(N) = κ N_L,  W_{−L}(N) = κ B.   (24)

If we introduce the integer numbers ν_ρ of the jumps by 0 or ±1 of the numbers N = (N_1, N_2, ..., N_L) of particles in the cells, the master equation (19) can be rewritten in the compact form:

d/dt P(N; t) = Σ_ρ [W_ρ(N − ν_ρ) P(N − ν_ρ; t) − W_ρ(N) P(N; t)],   (25)

which is useful for the following.

The stochastic process
The master equation (25) rules a stochastic process for which the path or history of the system is a succession of numbers {N_l} and random jumps {ρ_l} occurring at random times {t_l}:

N_0 → N_1 → N_2 → ··· → N_m, with the jumps ρ_1, ρ_2, ..., ρ_m at the successive times t_1 < t_2 < ··· < t_m.   (26)

The successive numbers and jump times are related by

N_l = N_{l−1} + ν_{ρ_l},   (27)

with l = 1, 2, ..., m. If the system is in the state N, the next step ρ occurs with the probability

P(ρ|N) = W_ρ(N) / Σ_{ρ'} W_{ρ'}(N),   (28)

which fixes the jumps ν_ρ. The corresponding waiting time T = T_l between two consecutive jumps is a continuous random variable which has the exponential probability density

p(T) = W(N) exp[−W(N) T],  with W(N) ≡ Σ_ρ W_ρ(N).   (29)

This stochastic process can be simulated by Gillespie's algorithm [63,64].
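A minimal implementation of Gillespie's algorithm for this chain can be sketched as follows (our own code; the parameter values L = 5, A = 50, B = 10 and κ = 1 are arbitrary choices). The time-weighted averages of the occupations should approach the linear profile between the reservoir values obtained in the next section.

```python
import numpy as np

# Gillespie simulation of the cell chain between reservoirs A and B.
rng = np.random.default_rng(42)
L, A, B, kappa = 5, 50.0, 10.0, 1.0

N = np.full(L, 30.0)          # initial occupations (arbitrary)
t, T_end = 0.0, 500.0
t_sum = 0.0
N_avg = np.zeros(L)           # time-weighted averages of the N_i

while t < T_end:
    # rates: A <-> X_1, hops on each bond in both directions, X_L <-> B
    rates = np.concatenate((
        [kappa * A, kappa * N[0]],            # entry from A, exit to A
        kappa * N[:-1], kappa * N[1:],        # X_i -> X_{i+1}, X_{i+1} -> X_i
        [kappa * N[-1], kappa * B]))          # exit to B, entry from B
    W = rates.sum()
    dt = rng.exponential(1.0 / W)             # exponential waiting time
    N_avg += N * dt                           # accumulate time average
    t_sum += dt
    t += dt
    rho = rng.choice(len(rates), p=rates / W) # next elementary step
    if rho == 0:
        N[0] += 1
    elif rho == 1:
        N[0] -= 1
    elif rho < 2 + (L - 1):                   # hop i -> i+1
        i = rho - 2; N[i] -= 1; N[i + 1] += 1
    elif rho < 2 + 2 * (L - 1):               # hop i+1 -> i
        i = rho - 2 - (L - 1); N[i] += 1; N[i + 1] -= 1
    elif rho == 2 + 2 * (L - 1):
        N[-1] -= 1
    else:
        N[-1] += 1

N_avg /= t_sum
profile = A + (B - A) * np.arange(1, L + 1) / (L + 1)  # linear mean profile
print(N_avg, profile)   # time averages should approach the linear profile
```

Because every occupation-dependent rate vanishes at N_i = 0, the occupations can never become negative, which is the discrete analogue of the positivity of the density.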
Since the path (26) is specified by the numbers {N_l} and the jumps {ρ_l}, the states used to describe the process should be defined as the transitions ω = (N →^ρ N' = N + ν_ρ). If the process is stroboscopically observed with the sampling time τ, the probability of the path (26) specified by the states of the system at the times t_k = kτ is given by

P(ω_0 ω_1 ··· ω_{n−1}) = P(N_0) Π_{k=0}^{n−2} P(N_k →^{ρ_{k+1}} N_{k+1}),   (30)

where ρ ∈ {∅, ±0, ±1, ±2, ..., ±L}. Here, we adopt the convention that ρ = ∅ in the case where no reactive event occurs in the time interval τ. The numbers appearing in the multiple-time probability (30) are defined by N_k = N(kτ) with k = 0, 1, 2, ..., n. The conditional probabilities are given in terms of the transition rates as

P(N →^ρ N + ν_ρ) = τ W_ρ(N) + O(τ²)  for ρ ≠ ∅,
P(N →^∅ N) = 1 − τ Σ_ρ W_ρ(N) + O(τ²).   (31)

Nonequilibrium steady state
The stationary solution of the master equation (24) is given by a product of Poisson distributions as

P_st(N_1, ..., N_L) = Π_{i=1}^{L} exp(−⟨N_i⟩_st) ⟨N_i⟩_st^{N_i} / N_i!,   (32)

with the mean values

⟨N_i⟩_st = A + (B − A) i/(L + 1),   (33)

corresponding to a linear profile of concentration between both reservoirs with a gradient of concentration ∇n = (B − A)/[ΔV Δx (L + 1)]. Therefore, the mean flux of particles moving from A to B at the boundary between the cells X_i and X_{i+1} takes the value:

J = κ (⟨N_i⟩_st − ⟨N_{i+1}⟩_st) = κ (A − B)/(L + 1).   (34)

The flux of particles is given by the surface integral of the current density j as J = j ΔV/Δx. Fick's law, j = −D ∇n, is thus satisfied if the rate constant is related to the diffusion coefficient by

κ = D/(Δx)².   (35)

The fluctuations of the occupation numbers in the different cells are statistically independent of each other in the nonequilibrium steady state because

⟨δN_i δN_j⟩_st = ⟨N_i⟩_st δ_ij,   (36)

with δN_i = N_i − ⟨N_i⟩_st. Accordingly, the spatial correlation function of the density fluctuations is given by

⟨δn(r) δn(r')⟩_st = n_st(r) δ³(r − r').   (37)

At equilibrium, the mean occupation is the same for all the cells, ⟨N_i⟩_eq = A = B, so that the density or concentration is uniform and the mean current (34) vanishes.
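The stationarity of the Poisson product (32) with the linear mean profile (33) can be verified numerically. The check below (our own, with L = 2 cells, arbitrary small reservoir values and a truncated state space) evaluates dP/dt from the master equation at this distribution; the residual is limited only by the negligible Poisson tail beyond the truncation.

```python
import numpy as np

# Numerical stationarity check of the product-Poisson steady state, L = 2.
L_cells, A, B, kappa, N_max = 2, 3.0, 1.0, 1.0, 25
mu = [A + (B - A) * i / (L_cells + 1) for i in (1, 2)]   # linear means: 7/3, 5/3

def poisson_pmf(m, mean):
    p = np.empty(m + 1)
    p[0] = np.exp(-mean)
    for n in range(1, m + 1):
        p[n] = p[n - 1] * mean / n
    return p

P = np.outer(poisson_pmf(N_max, mu[0]), poisson_pmf(N_max, mu[1]))

# jump channels rho: (rate as a function of (N1, N2), jump vector nu_rho)
channels = [
    (lambda N1, N2: kappa * A,  (+1, 0)),    # A -> X_1
    (lambda N1, N2: kappa * N1, (-1, 0)),    # X_1 -> A
    (lambda N1, N2: kappa * N1, (-1, +1)),   # X_1 -> X_2
    (lambda N1, N2: kappa * N2, (+1, -1)),   # X_2 -> X_1
    (lambda N1, N2: kappa * N2, (0, -1)),    # X_2 -> B
    (lambda N1, N2: kappa * B,  (0, +1)),    # B -> X_2
]

dPdt = np.zeros_like(P)
for (w, (dn1, dn2)) in channels:
    for N1 in range(N_max + 1):
        for N2 in range(N_max + 1):
            M1, M2 = N1 - dn1, N2 - dn2      # source state of the gain term
            gain = (w(M1, M2) * P[M1, M2]
                    if 0 <= M1 <= N_max and 0 <= M2 <= N_max else 0.0)
            dPdt[N1, N2] += gain - w(N1, N2) * P[N1, N2]

print(np.abs(dPdt).max())   # ~ 0 up to the Poisson tail beyond N_max
```

Each entry of dPdt implements the gain-minus-loss structure of the compact master equation, so a vanishing residual confirms that the product of Poisson distributions with linearly interpolated means is indeed the nonequilibrium steady state.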

The fluctuating diffusion equation
If the mean occupation numbers of the cells are large, ⟨N_i⟩_st ≫ 1, the master equation (19) can be replaced by the Fokker-Planck equation for the probability density p(N_1, N_2, ..., N_L):

∂p/∂t = −Σ_{i=1}^{L} ∂/∂N_i (F_i p) + (1/2) Σ_{i,j=1}^{L} ∂²/(∂N_i ∂N_j) (Q_{ij} p),   (38)

with the drift coefficients

F_i = κ (N_{i−1} − 2 N_i + N_{i+1})   (39)

and the diffusion matrix

Q_{ii} = κ (N_{i−1} + 2 N_i + N_{i+1}),   (40)
Q_{i,i±1} = −κ (N_i + N_{i±1}),  Q_{ij} = 0 for |i − j| ≥ 2,   (41)

where N_0 ≡ A and N_{L+1} ≡ B. This Fokker-Planck equation corresponds to the Itô stochastic differential equations

dN_i/dt = κ (N_{i−1} − 2 N_i + N_{i+1}) + g_i(t)   (42)

with the Gaussian white noises

⟨g_i(t)⟩ = 0,  ⟨g_i(t) g_j(t')⟩ = Q_{ij} δ(t − t').   (43)

We can now take the continuum limit to get a stochastic partial differential equation for the density field. A noise field g(r, t) is identified from the noises g_i(t) of the cells. In order to determine the correlation function of this random field, we take a test function f(r; r') and evaluate the correlations ∫dr ∫dr' f(r; r') ⟨g(r, t) g(r', t')⟩. Replacing Q_ij by its expression (40) and (41) and taking the continuum limit Δx → 0, we find a complicated expression which can be simplified if we introduce the random fields η(r, t) such that

g(r, t) = −∇·η(r, t).   (48)

In the continuum limit, the Itô stochastic differential equations (42) become the fluctuating diffusion equation

∂n/∂t = ∇·[D ∇n − η(r, t)],   (49)

with the fluctuating current η given by the Gaussian random fields:

⟨η_a(r, t)⟩ = 0,  ⟨η_a(r, t) η_b(r', t')⟩ = 2 D n(r, t) δ_{ab} δ³(r − r') δ(t − t').   (50)

At the mesoscopic level of description, the equation therefore has the form

∂n/∂t = −∇·(J + η),   (51)

where the current can be derived according to

J = −Γ ∇(δF/δn)   (52)

from the free energy functional

F[n] = k_B T ∫ n [ln(n/n̄) − 1] dr   (53)

and with the Onsager coefficient

Γ = μ n,   (54)

where

μ = D/(k_B T)   (55)

is the mobility coefficient proportional to the diffusion coefficient by Einstein's relation. The equilibrium state which is a solution of the Fokker-Planck equation is given in terms of the free energy as

p_eq ∼ exp(−ΔF/k_B T),  with ΔF = F − F_eq.   (56)

The main point we want to emphasize is that the process of diffusion we consider involves a large but finite number of Brownian particles diffusing in the pipe. These Brownian particles are independent of each other and each follows a Brownian path for the lapse of time between their entrance at one reservoir and their exit at one or the other of the reservoirs. Accordingly, the local density n(r, t) in the neighbourhood of a point r in the pipe is a random variable described by the fluctuating diffusion equation (49). The original diffusion equation (2) is recovered on very large spatial scales where the fluctuations are negligible.
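A minimal simulation of the Itô equations (42) can be written in the chemical-Langevin form, in which each jump channel carries its own white noise of variance equal to its rate; this reproduces the covariance matrix Q_ij and, in the steady regime, the linear mean profile with Poisson-like variances. The sketch below is our own, with assumed parameter values, integrated by the Euler-Maruyama scheme.

```python
import numpy as np

# Euler-Maruyama integration of the chemical-Langevin form of the Ito
# equations: each directed jump along a bond contributes drift W dt and
# noise of variance W dt, which reproduces the covariance matrix Q_ij.
rng = np.random.default_rng(7)
L, A, B, kappa = 4, 200.0, 100.0, 1.0
dt, n_steps = 0.01, 40_000

N = np.full(L, 150.0)
samples = []
for step in range(n_steps):
    Np = np.concatenate(([A], N, [B]))           # pad with reservoir values
    drift = kappa * (Np[:-2] - 2 * N + Np[2:])   # discrete-diffusion drift
    # one noise per bond and per jump direction (L + 1 bonds in total)
    w_right = kappa * Np[:-1]    # jumps i -> i+1 (including A -> X_1)
    w_left = kappa * Np[1:]      # jumps i+1 -> i (including B -> X_L)
    xi_r = rng.standard_normal(L + 1) * np.sqrt(np.maximum(w_right, 0.0) * dt)
    xi_l = rng.standard_normal(L + 1) * np.sqrt(np.maximum(w_left, 0.0) * dt)
    # net noise on cell i: gains minus losses from its two adjacent bonds
    noise = (xi_r[:-1] - xi_r[1:]) + (xi_l[1:] - xi_l[:-1])
    N = N + drift * dt + noise
    if step >= n_steps // 4:                     # discard the transient
        samples.append(N.copy())

samples = np.array(samples)
mean, var = samples.mean(axis=0), samples.var(axis=0)
profile = A + (B - A) * np.arange(1, L + 1) / (L + 1)
print(mean, profile, var)   # means ~ linear profile; variances ~ means
```

The per-channel noises make the construction of the correlated Gaussian noises g_i trivial: summing the independent bond noises with the correct signs automatically produces the tridiagonal covariance of the occupation fluctuations.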
We notice that the diffusion equation (2) also holds as the Fokker-Planck equation associated with the Langevin stochastic equation for the motion of a single Brownian particle. In this case, the density n(r, t) in equation (2) should be replaced by the probability density p(r, t) that the Brownian particle is located at the position r at the time t. Both stochastic processes are related, but the Fokker-Planck equation (2) rules a stochastic process with a single particle, whereas the Fokker-Planck equation (38) (or the master equation (19)) rules a stochastic process with many particles. Therefore, we should expect the dynamical randomness to be larger for the latter than for the former. The dynamical randomness of the former is characterized by the ε-entropy per unit time given by equation (6). Let us now proceed with the characterization of the latter.

Entropy production and dynamical randomness
In this section, our aim is to show that the entropy production can be given by equation (17) as the difference between the time-reversed and standard entropies per unit time, which characterize dynamical randomness forward and backward in time. For this purpose, we first obtain the entropy production from the time evolution ruled by the master equation. Thereafter, we define and calculate the entropies per unit time for the stochastic process of diffusion. Finally, we verify equation (17).

The entropy production
The entropy associated with the probability distribution P(N; t) at time t can be defined by

S(t) = Σ_N P(N; t) S_0(N) − k_B Σ_N P(N; t) ln P(N; t),   (58)

where S_0(N) is the entropy due to the disorder in the degrees of freedom other than the numbers N themselves. The second term of equation (58) is the contribution to the entropy due to the disorder in the probability distribution of the numbers N of particles in the cells. It is known [61,62] that the time variation of the entropy (58) evolving according to the master equation (25) has two contributions,

dS/dt = d_eS/dt + d_iS/dt,   (59)

which are the entropy flux d_eS/dt and the entropy production

d_iS/dt = (k_B/2) Σ_ρ Σ_N J_ρ(N; t) A_ρ(N; t) ≥ 0.   (60)

The latter is expressed in terms of the particle fluxes

J_ρ(N; t) ≡ W_ρ(N − ν_ρ) P(N − ν_ρ; t) − W_{−ρ}(N) P(N; t)   (61)

and the affinities

A_ρ(N; t) ≡ ln {W_ρ(N − ν_ρ) P(N − ν_ρ; t) / [W_{−ρ}(N) P(N; t)]}   (62)

associated with the elementary steps ρ = ±0, ±1, ±2, ..., ±L [40]. For the master equation (24), the mean fluxes are given by

J_{±0} = ±κ [A − ⟨N_1⟩],  J_{±i} = ±κ [⟨N_i⟩ − ⟨N_{i+1}⟩],   (63)
J_{±L} = ±κ [⟨N_L⟩ − B],  with ⟨N_i⟩ = Σ_N N_i P(..., N_i, N_{i+1}, ...; t),   (64)

and the affinities by

A_{+0} = ln (A/⟨N_1⟩),  A_{+i} = ln (⟨N_i⟩/⟨N_{i+1}⟩),  A_{+L} = ln (⟨N_L⟩/B).   (65)

In the nonequilibrium steady state (32), the entropy production d_iS/dt = k_B Σ_{ρ>0} J_ρ A_ρ is therefore equal to

d_iS/dt|_st = κ k_B [(A − B)/(L + 1)] ln (A/B).   (66)

In the macroscopic limit, we recover the well-known entropy production of the diffusion equation (2) as

d_iS/dt = k_B ∫ D [(∇n)²/n] dr.   (67)

With the profile (33), the density varies linearly between the reservoir densities n_A = A/ΔV and n_B = B/ΔV,

n(x) = n_A + (n_B − n_A) x/L_x,   (68)

and we find that

d_iS/dt = k_B D (ΔV/Δx) [(n_A − n_B)/L_x] ln (n_A/n_B),   (69)

which coincides with the expression (66) for L ≫ 1 because D = κ (Δx)². This entropy production is depicted in figure 2 as a function of the concentration A/ΔV of the left-hand side reservoir for a fixed concentration of the right-hand side reservoir. The entropy production vanishes at the equilibrium A = B.
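The agreement between the master-equation entropy production and the macroscopic integral k_B ∫ D (∇n)²/n dr can be checked by direct quadrature. The sketch below (our own, with k_B = Δx = ΔV = 1 so that D = κ) evaluates the integral on the linear profile and compares it with κ(A − B)/(L + 1) ln(A/B).

```python
import numpy as np

# Quadrature check: the macroscopic entropy-production integral, evaluated
# on the linear density profile, reproduces the steady-state value.
kappa, A, B, L = 1.0, 20.0, 5.0, 200   # units: k_B = Delta x = Delta V = 1
D = kappa * 1.0 ** 2                   # D = kappa (Delta x)^2

# linear density profile from n_A = A to n_B = B, spanning the pipe and
# its two reservoir contacts (L + 1 cell widths in these units)
x = np.linspace(0.0, L + 1, 200_001)
n = A + (B - A) * x / (L + 1)
grad_n = np.gradient(n, x)             # constant gradient for a linear profile

f = D * grad_n ** 2 / n                # entropy-production density / k_B
ep_macro = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoid rule

ep_exact = kappa * (A - B) / (L + 1) * np.log(A / B)
print(ep_macro, ep_exact)
```

The two numbers agree to quadrature accuracy: for a linear profile, the integral of (∇n)²/n telescopes exactly into the logarithm of the ratio of the reservoir densities.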

The entropies per unit time
The dynamical randomness of the stochastic process ruled by a master equation such as equation (25) is characterized by an entropy per unit time (5), which depends on the sampling time τ [13,14]. For the master equation (25), this τ-entropy per unit time is given by

h(τ) = Σ_N P_st(N) Σ_ρ W_ρ(N) ln [e/(τ W_ρ(N))].   (70)

On the other hand, the time-reversed τ-entropy per unit time is [43]

h^R(τ) = Σ_N P_st(N) Σ_ρ W_ρ(N) ln [e/(τ W_{−ρ}(N + ν_ρ))].   (71)

Their difference no longer contains the terms in ln(e/τ):

h^R(τ) − h(τ) = Σ_N Σ_ρ P_st(N) W_ρ(N) ln [W_ρ(N)/W_{−ρ}(N + ν_ρ)].   (72)

The next terms in the difference are transformed by using ν_{−ρ} = −ν_ρ and replacing N by N − ν_ρ in the sums over N to get

h^R(τ) − h(τ) = (1/2) Σ_N Σ_ρ [P_st(N − ν_ρ) W_ρ(N − ν_ρ) − P_st(N) W_{−ρ}(N)] ln [W_ρ(N − ν_ρ)/W_{−ρ}(N)].   (73)

In a nonequilibrium steady state, the probabilities are constant in time: dP_st(N)/dt = 0. As a consequence of the master equation (25), we find that

Σ_N Σ_ρ P_st(N) W_ρ(N) ln [P_st(N)/P_st(N + ν_ρ)] = 0,   (74)

so that the logarithm in equation (73) can be completed into the ratio of the probability currents:

h^R(τ) − h(τ) = (1/2) Σ_N Σ_ρ [P_st(N − ν_ρ) W_ρ(N − ν_ρ) − P_st(N) W_{−ρ}(N)] ln {P_st(N − ν_ρ) W_ρ(N − ν_ρ)/[P_st(N) W_{−ρ}(N)]}.   (75)

In the limit τ → 0, the right-hand side is thus equal to the expression (60)-(62) for the entropy production, whereupon we obtain the relationship (17) [43]:

d_iS/dt|_st = k_B lim_{τ→0} [h^R(τ) − h(τ)] ≥ 0.   (76)

According to the master equation (24), the entropies are explicitly given for the nonequilibrium steady state (32) by

h(τ) = ⟨W⟩_st ln [e/(κτ)] − Q,   (77)
h^R(τ) = ⟨W⟩_st ln [e/(κτ)] − Q^R,   (78)

where ⟨W⟩_st = Σ_N P_st(N) Σ_ρ W_ρ(N) is the mean total jump rate, with the constants

Q = Σ_N P_st(N) Σ_ρ W_ρ(N) ln [W_ρ(N)/κ],   (79)
Q^R = Σ_N P_st(N) Σ_ρ W_ρ(N) ln [W_{−ρ}(N + ν_ρ)/κ],   (80)

which are depicted in figure 3 as a function of the concentration A/ΔV of the left-hand side reservoir for a fixed concentration of the right-hand side reservoir. We observe that these constants and, consequently, the τ-entropies per unit time can take very large values. This is the manifestation of the important dynamical randomness due to the many particles in the system. Indeed, at equilibrium A = B, we obtain Q = 2κA ln A + 2κA ⟨ln(N + 1)⟩_eq L.
For A ≫ 1, we have that ⟨ln(N + 1)⟩_eq ≃ ln A, and the τ-entropy per unit time can be estimated as

h(τ) ≃ 2 κ A (L + 1) ln [e/(κ τ A)].   (81)

If we use equation (35), A = n̄ ΔV and A(L + 1) ≃ N_tot, we find at equilibrium that

h(τ) ≃ 2 N_tot (D/Δx²) ln [e Δx²/(D τ n̄ ΔV)].   (82)

This entropy per unit time should be compared with the Kolmogorov-Sinai entropy per unit time obtained for a fluid of N_tot hard spheres of mass m and diameter d at temperature T and density n [65]:

h_KS ≃ 4 N_tot n d² √(π k_B T/m) ln [3.9/(π n d³)].   (83)
Since the inequality (8) must be satisfied and because the size Δx of the cells cannot be smaller than the mean free path, we find that the sampling time τ must be larger than the minimum value

τ_min ≃ e (Δx²/D) (1/(n̄ ΔV)) exp [−h_KS Δx²/(2 N_tot D)],   (84)

otherwise the stochastic process may generate more dynamical randomness than the chaos of the underlying microdynamics.
Since the term diverging as ln(1/τ) is the same for both entropies (77) and (78), the entropy production is given by the difference

d_iS/dt|_st = k_B (Q − Q^R).   (85)

The difference k_B(Q − Q^R) is depicted as dots in figure 2, where we see the nice agreement with the entropy production (69). Even if the constants Q and Q^R and the dynamical entropies h and h^R may take very large values, they differ precisely by the entropy production of the nonequilibrium steady state.
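The difference between the constants Q and Q^R can also be evaluated by Monte-Carlo sampling of the Poisson steady state: it equals the average of Σ_ρ W_ρ(N) ln[W_ρ(N)/W_{−ρ}(N + ν_ρ)], which should reproduce the entropy production κ(A − B)/(L + 1) ln(A/B). The check below is ours (k_B = 1, arbitrary parameters).

```python
import numpy as np

# Monte-Carlo check that the time-asymmetry of the dynamical randomness,
# averaged over the Poisson steady state, equals the entropy production.
rng = np.random.default_rng(1)
L, A, B, kappa = 3, 20.0, 5.0, 1.0
mu = A + (B - A) * np.arange(1, L + 1) / (L + 1)   # linear Poisson means

n_samples = 400_000
N = rng.poisson(mu, size=(n_samples, L)).astype(float)
# pad with the reservoir values so that bond j joins node j and node j+1
Np = np.hstack([np.full((n_samples, 1), A), N, np.full((n_samples, 1), B)])

diff = np.zeros(n_samples)
for j in range(L + 1):
    f, b = Np[:, j], Np[:, j + 1]
    # reverse rate of a jump is evaluated AFTER the jump; for a reservoir
    # node the rate stays the constant reservoir value A or B
    rev_r = b + 1.0 if j < L else np.full(n_samples, B)   # after right jump
    rev_l = f + 1.0 if j > 0 else np.full(n_samples, A)   # after left jump
    mask = f > 0       # right jumps: forward rate kappa*f
    diff[mask] += kappa * f[mask] * np.log(f[mask] / rev_r[mask])
    mask = b > 0       # left jumps: forward rate kappa*b
    diff[mask] += kappa * b[mask] * np.log(b[mask] / rev_l[mask])

ep_mc = diff.mean()
ep_exact = kappa * (A - B) / (L + 1) * np.log(A / B)
print(ep_mc, ep_exact)
```

The Poisson identity ⟨N f(N)⟩ = ⟨N⟩ ⟨f(N + 1)⟩ makes the cell-by-cell contributions telescope, so the average reduces to the flux times ln(A/B), exactly the steady-state entropy production.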

Conclusions
In the present paper, we have shown that the entropy production of a diffusion process can be related to the difference between two quantities characterizing dynamical randomness either forward or backward in time. The first is the standard entropy per unit time which, in the case of the diffusion process studied here, is a τ-entropy per unit time as previously introduced [13,14]. This quantity measures the amount of information which is generated by the stochasticity of the underlying microscopic dynamics. For stochastic processes with continuous random variables such as the waiting times between the random jumps, the τ-entropy per unit time increases as ln(1/τ) for τ → 0, meaning that the paths generate randomness on arbitrarily small time intervals. This behaviour is typical of stochastic processes ruled by a Pauli-type master equation such as the diffusion process studied here [13,14]. The τ-entropy per unit time takes a very large and positive value even at thermodynamic equilibrium, so that it cannot be directly related to the entropy production, which vanishes at equilibrium. The τ-entropy per unit time simply characterizes the dynamical randomness (or stochasticity) of the underlying process, which is here a Gillespie stochastic process [63,64]. On the other hand, a concept of time-reversed entropy per unit time has very recently been introduced, which now allows the connection to the entropy production in nonequilibrium steady states [43]. This new quantity characterizes the randomness backward in time among the most probable paths of the forward process. For a stochastic process ruled by a Pauli-type master equation, the time-reversed entropy per unit time also depends on the sampling time τ and increases as τ → 0. However, the difference between both dynamical entropies has a well-defined limit as τ → 0, which precisely gives the entropy production, as shown here in detail for diffusion.
This analysis validates equation (17) in the case of stochastic diffusion. As explained in section 1, this formula belongs to the same family of large-deviation formulas as the escape-rate formula (9), the formula (11) for the Hausdorff dimension of the diffusive modes and the fluctuation theorem (15). The formula (17) for the entropy production shows that, in nonequilibrium steady states, the forward time evolution has less randomness than the backward time evolution. This asymmetry has its origin in the breaking of the time-reversal symmetry by the stochastic boundary conditions imposed by the nonequilibrium constraints at the border of the system [13,66]. Indeed, the particles enter the chain from the reservoirs without special statistical correlations, whereas they exit the chain with very fine statistical correlations due to their interaction inside the chain. Therefore, the nonequilibrium steady state is described by an invariant probability measure which is not time-reversal symmetric. This explains why the time-reversed dynamical entropy differs from the standard one. The remarkable result is that this difference is precisely equal to the entropy production, which can therefore be interpreted as this time asymmetry in the dynamical randomness. The formula (17) shows that the nonequilibrium constraints select paths or histories having some time asymmetry, in the sense that some paths may have a higher probability than their time reversal in nonequilibrium steady states. This selection introduces biases in the probabilities and can influence the evolution of systems capable of memory such as biological systems. In this regard, the formula (17) can help to understand better the constructive role of the arrow of time in nonequilibrium systems.