
From single-particle stochastic kinetics to macroscopic reaction rates: fastest first-passage time of N random walkers

Denis S Grebenkov, Ralf Metzler and Gleb Oshanin

Published 2 October 2020 © 2020 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Denis S Grebenkov et al 2020 New J. Phys. 22 103004. DOI: 10.1088/1367-2630/abb1de


Abstract

We consider the first-passage problem for N identical independent particles that are initially released uniformly in a finite domain Ω and then diffuse toward a reactive area Γ, which can be part of the outer boundary of Ω or a reaction centre in the interior of Ω. For both cases of perfect and partial reactions, we obtain the explicit formulas for the first two moments of the fastest first-passage time (fFPT), i.e., the time when the first out of the N particles reacts with Γ. Moreover, we investigate the full probability density of the fFPT. We discuss the significant role of the initial condition in the scaling of the average fFPT with the particle number N, namely, a much stronger dependence (1/N and 1/N² for partially and perfectly reactive targets, respectively), in contrast to the well known inverse-logarithmic behaviour found when all particles are released from the same fixed point. We combine analytic solutions with scaling arguments and stochastic simulations to rationalise our results, which open new perspectives for studying the relevance of multiple searchers in various situations of molecular reactions, in particular, in living cells.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Physical kinetics studies the dynamics of molecular chemical reactions in terms of the time dependence of the reactant concentrations [1]. If only one reactant is present with concentration [A] (unimolecular reaction), the associated first-order rate equation is typically written in the form d[A]/dt = −k[A] with the reaction rate k. The resulting dynamics is exponential, $[A] = [A]_0\,\exp(-kt)$, with initial concentration $[A]_0$ and half-life time $t_{1/2} = \ln(2)/k$. A physical derivation of chemical reaction rates in terms of the diffusivity of the molecular reactants was given in the seminal 1916 paper by von Smoluchowski [2] for immediate coagulation. For instance, the Smoluchowski rate of a particle with diffusion coefficient D to hit an immobile spherical target of radius a reads $k_{\mathrm{S}} = 4\pi D a [A]_0$. A more general approach to calculate the reaction rate, combining reaction and diffusion limitation, was formulated by Collins and Kimball [3]. Different diffusion mechanisms can hereby lead to an effective renormalisation of the target size in these theories. Thus, when DNA-binding proteins search for their target site on a DNA chain, the combination of three-dimensional diffusion with one-dimensional diffusion along the DNA causes the target size a to be replaced by an 'effective sliding length'. In the 'facilitated diffusion picture' this quantity is based on the typical distance covered by the protein during sliding while it intermittently binds to the DNA [4–7]. Such predictions and their generalisations can by now be measured routinely in single-molecule assays [8–14].
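As a simple numerical illustration of the Smoluchowski rate $k_{\mathrm{S}} = 4\pi D a [A]_0$, the short Python sketch below evaluates it for a set of assumed, order-of-magnitude parameters (a protein-like diffusivity, a nanometre-sized target and a nanomolar concentration); these values are illustrative assumptions and are not taken from the works cited above.

```python
import math

# Smoluchowski rate k_S = 4*pi*D*a*[A]_0 for a diffusion-limited reaction.
# All parameter values below are illustrative assumptions.
D = 10e-12                        # diffusion coefficient, m^2/s (= 10 um^2/s)
a = 5e-9                          # target radius, m (= 5 nm)
A0 = 10e-9 * 6.022e23 * 1e3       # 10 nM converted to particles per m^3

k_S = 4.0 * math.pi * D * a * A0  # rate of hitting the target, per second
print(f"k_S = {k_S:.2f} s^-1")
```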

In chemical rate-based formulations of reaction kinetics such as those by Smoluchowski or Collins and Kimball it is tacitly assumed that the reactants are present at sufficiently high abundance, thus effecting smooth concentration levels. Moreover, in a 'well-stirred' reaction container spatial coordinates can be neglected. By now the understanding is that such theories provide an adequate description of the chemical kinetics for most systems based on the interplay of molecular reaction and diffusion—upon some refinements and generalisations incorporating various physical factors missed in the original works [15–24]. Exceptions are provided by several particular reaction schemes in which, under some rather restrictive constraints, so-called fluctuation-induced behaviour emerges, see, e.g., [25–34] and references therein. One particular question concerns the possible concentration-dependence of the effective reaction rates [15, 35–38]. Indeed, the original Smoluchowski approach and many of its generalisations are only plausible for sufficiently low albeit finite concentrations. At higher concentrations diffusive transport as the rate-controlling factor becomes less important, and concurrently the diffusion coefficients themselves acquire a dependence on the concentrations of reactants and products.

Conversely, in a shift of general interest towards understanding the kinetic behaviour of reactions in diverse biochemical and biophysical systems, low concentrations are increasingly considered to be a salient feature. Indeed, many intracellular processes of signalling, regulation, infection, immune reactions, and metabolism as well as transmitter release in neurons occur upon the arrival of one or few biomolecules to small specific regions [39, 40]. The well-stirredness assumption based on simple diffusion models leads to the conclusion that DNA-binding proteins rapidly homogenise in the cell. Experiments show, however, that even in relatively small bacterial cells such transcription factors are localised around their encoding gene and further inhomogeneity is caused by the nucleoid state [41]. Moreover, genes that are controlled by a specific transcription factor tend to locate next to the gene encoding this transcription factor [42, 43]. This is consistent with explicit models for intracellular gene regulation [44, 45]. Such gene regulatory signals often rely on nanomolar concentrations [46, 47], similar to molecular concentration levels of autoinducer molecules controlling the state of cell colonies in quorum sensing [48–52]. This causes strong fluctuations of regulation events [44, 53–55].

The nanomolar concentration range of relevant molecular species renders the concept of reaction rates rather ill-defined. Due to the lack of a sufficiently large number of molecules, reaction times become strongly defocused and one cannot describe the system in terms of a single time scale associated with a reaction, but rather in terms of the full distribution of random times to a reaction event [56–58, 68–71]. Indeed, the shortest relevant time scale of the associated first-passage of molecules to their reaction target in such situations is 'geometry-controlled' [57, 70] in terms of 'direct paths' from the molecules' initial positions to the target. This shortest characteristic time scale is also the most probable in the reaction time distribution [56, 57]. The longest time scale, typically orders of magnitude longer than the most probable time scale, corresponds to 'indirect' trajectories that on their path hit the boundary of the confined volume and thus lose any signature of their original position [57, 70].

Most approaches determining reaction rates or full probability densities of reaction times rely on a single-particle scenario. Yet, despite the small concentrations we alluded to above, typically a given number of particles are searching for a common target in parallel, for instance, several transcription factors seeking to bind to a specific binding site on the cellular DNA. For particles searching for a common target in parallel two relevant questions can be asked: (i) how much faster is the search process when more than one searcher is present and (ii) which searcher comes first, for instance, when we think of two different species of transcription factors competing for the same binding site. We could also think of an entire colony of cells, for instance, the many thousands of cells in a bacterial colony [72, 73], competing with each other for which cell is able to react first to a common environmental challenge. On a different level such competitive scenarios come into play when sperm cells race towards the egg cell. Indeed, such 'particle number' effects on the reaction kinetics of few-molecule diffusion-limited reactions have recently been addressed [74–76]. Generally, there is a range of systems in molecular and cellular biology for which the number of species involved lies in some intermediate range—much more than a few, but still much less than a macroscopic number (i.e., of the order of Avogadro's number $N_A \approx 6\times 10^{23}$) [77]. In particular, neuronal connections often occur on a dendritic spine, where calcium is present in large amounts and arrival of the fastest of the calcium ions can trigger a transduction [78]. In another instance, the post-synaptic current is generated when the first receptor of a neurotransmitter is activated, a process which is mediated by the release and transport of several thousands of neurotransmitters from the pre-synaptic terminal. In immunology, which hosts a variety of such examples, a gene mechanism responsible for the selection and expression of a specific membrane receptor on B-cells relies on many such receptors, which bind directly to recognise a molecular unit of a pathogen.

A mathematical model in which N Brownian particles start from the same position simultaneously and search for an immobile small target was first analysed by Weiss et al almost 40 years ago [79]. Various additional aspects of this problem were subsequently considered [80–91]. Specifically, it was already shown by Weiss et al that the mean first arrival time of the fastest of the N Brownian searchers to the target is inversely proportional to ln N as N → ∞. This means that the change of the 'efficiency' of a reaction is quite modest as compared to the quite large investment of deploying a larger number of searchers. It was thus concluded in [79] that the speedup due to many searchers is a comparatively minor effect: namely, to reduce the reaction time scale in a noticeable way, one needs a very large number of searchers. For instance, to have a reduction by just a factor of 10, more than 20 000 searchers need to be deployed, an expensive number for many biological scenarios. Indeed, it was recently argued in the context of the tens of millions of sperm cells competing for a single egg in higher mammals that when N such searchers are launched from the same initial location to find a given target, the decisive time scale is the arrival time of the first searcher, that is, the shortest time [74–76].

In this paper, we address the reaction dynamics between an immobile small target site and N searchers diffusing inside a bounded domain Ω of finite volume |Ω|. A major part of our analysis pertains to a very general geometry of the system: the bounded domain may have an arbitrary shape, with the only constraint being that the boundary is smooth and thus has a finite area |∂Ω|. The target Γ with an area |Γ| and a characteristic extent R can be placed either on the boundary or in the bulk. For illustrative purposes, we will use some simple domains.

We take here a broader perspective beyond the mean shortest time associated with the arrival of the fastest among N Brownian searchers. Namely, we analyse the full distribution function of the time of the first reaction event. This comprises three different relevant aspects. First, it is already known from the single-searcher scenario that the full distribution of reaction times spans several distinct characteristic time scales [56–58, 70]: it ranges from the above-mentioned most probable time, over a crossover time scale from a hump-like region to a plateau-like regime in which all values of the first-passage times are nearly equally probable, up to the mean first-passage time to the reaction event, followed by an exponential decay. When N searchers operate in parallel, we establish the full probability density of the fastest first-passage time (fFPT).

Second, compared to previous works we include imperfect reactions characterised by a finite intrinsic reaction constant κ (with κ = ∞ corresponding to a perfect reaction). Namely, for a chemical reaction to be successful, it is not sufficient for the diffusing molecule just to arrive at the reaction centre, but a reaction activation barrier needs to be overcome [3, 18, 59–67]. On top of this many biomolecules present a specific binding area, further reducing the probability of immediate reaction on encounter. As a consequence repeated collisions with the target are required, leading to repeated excursions in the volume. The resulting further defocusing of the reaction times, analysed in detail for the case N = 1 in [56–58], was partially studied in [84, 85] for searchers starting from the same location. Here we highlight some additional features.

The third and the most important difference to most previous works is that we consider the scenario in which the searching particles initially are placed at distinct random positions. The characteristic properties are then obtained by averaging over these fixed initial positions, which are supposed to be uniformly spread across the confining domain. Recall that for a target placed away from a boundary, in the thermodynamic limit when both the number of particles N and the volume |Ω| of the confining domain Ω tend to infinity while the concentration N/|Ω| is kept finite (neither very small nor too large), the Smoluchowski approach (see appendix A) provides an exact solution in the case of a perfect reaction [92–96]. As shown in [97, 98] this even holds for the case of imperfect reactions when the finite reactivity is modelled in terms of a stochastic Poisson gating process. In this sense it can be argued that our analysis provides a connection between single-molecule reaction kinetics and standard chemical kinetics based on effective reaction rates even in the case when targets are located on the boundary. We also stress the conceptual difference in dealing with the uniform initial distribution as compared to a fixed starting position for all searchers (see also [88] for a related discussion). In both cases, the randomness of reaction times follows from random realisations of the particle trajectories. However, an additional source of randomness comes into play due to the random initial placement of each particle within the confining domain. This creates an intriguing new aspect to the problem, and we analyse its impact on the reaction kinetics by studying the typical behaviour and the averaged behaviour, and by quantifying fluctuations around the averaged behaviour. A uniform initial condition leads to a drastically different behaviour as compared to the case of a fixed starting point considered previously in [81–85]. In particular, as first noticed in [79] and later explored in [86–88], in a finite domain the contribution to the effective rate due to a diffusive search for a target decreases in proportion to 1/N², while the contribution due to a penetration through a barrier against reaction vanishes as 1/N in the limit N → ∞. This means that (i) placing N searchers at distinct positions gives a substantial increase in the reaction efficiency, which can be orders of magnitude larger as compared to the case when all N searchers start from the same point, (ii) as compared to a standard Collins–Kimball relation, which is valid in the thermodynamic limit and in which both contributions have the same 1/N-dependence on the number of searchers, here the contribution due to a diffusive search acquires an additional power of the concentration of searchers (which is a strong concentration effect), and hence, (iii) in the limit N ≫ 1 reactions become inevitably controlled by chemistry (kinetically-controlled reactions) rather than by diffusion (diffusion-controlled reactions). Overall, the analysis developed here provides a comprehensive insight into the binding kinetics of the extremes of first-passage phenomena in finite systems with multiple diffusing reactants competing for a reactive target.

The paper is organised as follows. In section 2 we discuss the implications of fixed versus uniform starting positions. Section 3 contains our main theoretical results, including a brief overview of general properties for bounded domains (section 3.1), approximations for the mean first-reaction time (section 3.2) and the variance (section 3.3), the long-time behaviour of the volume-averaged probability density of the reaction time (section 3.4), the fluctuations between individual realisations (section 3.5), the rationalisation of their apparent divergence (section 3.6), and the typical reaction time density (section 3.7). Section 4 is devoted to the discussion of these results and their implications for chemical physics and biological systems. Details of derivations and some additional analyses are presented in the appendices.

2. Fixed versus randomly distributed starting point

Before proceeding to the results we discuss the uniform initial condition analysed in this paper. In the conventional macroscopic description of chemical kinetics, the reaction rate is computed by averaging the probability density $\rho \left(t\vert {\mathbf{x}}_{0}\right)$ of the first-passage time (FPT) to the target from the starting point x0,

Equation (1)

$$J(t) = \int_{\Omega} \mathrm{d}\mathbf{x}_0\, c_0(\mathbf{x}_0)\, \rho(t\vert\mathbf{x}_0),$$

where ${c}_{0}\left({\mathbf{x}}_{0}\right)$ is the initial concentration profile of particles. As we assume a uniform initial concentration in the bounded confining domain Ω with finite volume |Ω|, the reaction rate is simply proportional to the volume-averaged probability density, $J\left(t\right)\propto \overline{\rho \left(t\right)}$, with

Equation (2)

$$\overline{\rho(t)} = \frac{1}{\vert\Omega\vert}\int_{\Omega} \mathrm{d}\mathbf{x}_0\, \rho(t\vert\mathbf{x}_0).$$

As a consequence, the reaction rate J(t) incorporates two intertwined sources of randomness: a random choice of the initial position x0 and a random trajectory from x0 to the target. Even though both affect the FPT distribution, their respective roles are in fact not that well understood, thus deserving a more specific investigation.

A volume-averaged description as entering equation (2) is justified whenever the number of diffusing particles is macroscopically large so that one can actually speak about concentrations. In many of the biologically relevant settings discussed above, however, the number of diffusing particles can be small or moderately large (say, a few tens or a few thousand). One may therefore question whether the above macroscopic approximation is still applicable. In particular, what is the role of stochastic fluctuations between different realisations of the starting points? This question becomes particularly relevant in the short-time limit. In fact, if there is a single searcher, the density $\rho \left(t\vert {\mathbf{x}}_{0}\right)$ decays exponentially fast at short times, as $\mathrm{e}^{-c/t}$, where c is proportional to the squared distance to the target from x0. In contrast, the volume average in equation (2) superimposes all $\rho \left(t\vert {\mathbf{x}}_{0}\right)$, and the contributions from those particles that started infinitely close to the target are dominant. As a result, $\overline{\rho \left(t\right)}$ exhibits not an exponential but a power-law behaviour as t → 0, as we detail below. One can conclude that $\overline{\rho \left(t\right)}$ is therefore not representative of the actual behaviour for a typical fixed starting point. But what happens for N particles, in particular, in the large-N limit? In other words, we aim at uncovering the role of stochastic fluctuations related to random initial positions of the particles as we change N.

In what follows we investigate the gradual transition from the single-particle description, in which stochastic fluctuations are significant, to the macroscopic description. We consider N independent identical particles undergoing Brownian dynamics with diffusion coefficient D that search in parallel for a common target (figure 1). The FPT τ1 for a single particle, started from x1, is characterised by its survival probability $S\left(t\vert {\mathbf{x}}_{1}\right)={\mathbb{P}}_{{\mathbf{x}}_{1}}\left\{{\tau }_{1}{ >}t\right\}$, from which the probability density is obtained by differentiation,

Equation (3)

$$\rho(t\vert\mathbf{x}_1) = -\frac{\partial S(t\vert\mathbf{x}_1)}{\partial t}.$$

The independence of the diffusing particles immediately implies that the FPT ${\mathcal{T}}_{N}=\mathrm{min}\left\{{\tau }_{1},\dots ,{\tau }_{N}\right\}$ of the fastest particle among N, the so-called fFPT, follows from the N-particle survival probability

Equation (4)

$$S_N(t\vert\mathbf{x}_1,\ldots,\mathbf{x}_N) = \mathbb{P}\{\mathcal{T}_N > t\} = \prod_{i=1}^{N} S(t\vert\mathbf{x}_i).$$

The knowledge of this survival probability for a single particle therefore fully determines the one for multiple non-interacting particles searching in parallel, and the problem of characterising the fFPT may look trivial at first thought.
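As a minimal numerical illustration of this construction, the following Python sketch simulates N independent Brownian particles on an interval with absorbing endpoints, released at independent uniform positions, and records the fFPT as the minimum of the individual FPTs. The Euler time step and the sample sizes are assumptions of this sketch, chosen only to keep it short and readable; it is not the simulation scheme used for the figures reported below.

```python
import numpy as np

rng = np.random.default_rng(1)

def fastest_fpt(N, D=1.0, L=1.0, dt=1e-5):
    """Crude Euler-scheme estimate of the fFPT of N independent Brownian
    particles on (0, L) with absorbing endpoints and uniform starting points."""
    x = rng.uniform(0.0, L, size=N)       # independent uniform initial positions
    t = 0.0
    while np.all((x > 0.0) & (x < L)):    # stop when the first particle is absorbed
        x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
        t += dt
    return t

# rough Monte Carlo estimates of the mean fFPT (slow but illustrative)
for N in (1, 4, 16):
    samples = [fastest_fpt(N) for _ in range(200)]
    print(f"N = {N:2d}   estimated mean fFPT = {np.mean(samples):.4f}")
```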

Figure 1.

Figure 1. Arbitrary bounded domain Ω with a smooth reflecting boundary ∂Ω (in grey) and a target (in red) located on the boundary (similarly, the target can be located in the bulk). N searchers (shown by small blue circles) are initially randomly (uniformly) distributed throughout the volume. (a) N = 10, (b) N = 100.


However, an exact explicit form of the survival probability S(t|x) is known for a very limited number of simple settings such as, for instance, the FPT to the endpoints of an interval or to the boundary of a sphere [99]. In turn, only the asymptotic behaviour or some approximate forms are known for most practically relevant cases such as the narrow escape problem [100–109] (see also the review [110]). Moreover, even if S(t|x) is known explicitly, finding the moments of the fFPT remains a challenging and quite involved problem. While most former studies of these extreme FPTs focused on the case when all particles start from the same fixed point (i.e., x1 = ... = xN) [74, 76, 79, 83–85], we here explore a different direction, namely, the case when the particles start from independent and uniformly distributed points. In other words, we aim at uncovering the role of stochastic fluctuations related to random initial positions of the particles. Qualitatively, as N increases, the particular random realisation of the starting points is expected to become irrelevant, and the macroscopic description should become increasingly accurate. Here, we investigate the transition from the single-particle setting to the macroscopic limit and address the practically important question of when and how such a description becomes applicable.

3. Theory

To develop the theoretical framework we consider a given realisation of starting points x1, ..., xN. Then the probability density function (PDF) of the fFPT to a target domain Γ follows from equation (4) in the form

Equation (5)

$$\rho_N(t\vert\mathbf{x}_1,\ldots,\mathbf{x}_N) = -\frac{\partial}{\partial t}S_N(t\vert\mathbf{x}_1,\ldots,\mathbf{x}_N) = -\frac{\partial}{\partial t}\prod_{i=1}^{N} S(t\vert\mathbf{x}_i).$$

If (x1, ..., xN) are independent and uniformly distributed points in the bounded confining domain Ω, the volume-averaged density reads

Equation (6)

$$\overline{\rho_N(t)} = -\frac{\mathrm{d}}{\mathrm{d}t}\left[\overline{S(t)}\right]^{N} = N\,\overline{\rho(t)}\,\left[\overline{S(t)}\right]^{N-1},$$

where

Equation (7)

$$\overline{S(t)} = \frac{1}{\vert\Omega\vert}\int_{\Omega} \mathrm{d}\mathbf{x}\, S(t\vert\mathbf{x})$$

is the volume-averaged survival probability.

3.1. Summary of general single-particle properties

For a bounded domain Ω of an arbitrary shape with a smooth boundary ∂Ω the survival probability admits in the most general case the spectral expansion [99, 111, 112]

Equation (8)

$$S(t\vert\mathbf{x}) = \sum_{n=1}^{\infty} \mathrm{e}^{-D\lambda_n t}\, u_n(\mathbf{x}) \int_{\Omega} \mathrm{d}\mathbf{x}'\, u_n^{*}(\mathbf{x}'),$$

where the asterisk denotes the complex conjugate, λn are the non-negative eigenvalues sorted in ascending order, and un(x) are the L²(Ω)-normalised eigenfunctions of the Laplace operator, −Δun = λnun, subject to appropriate boundary conditions. The eigenvalues and eigenfunctions encode all the information about the geometry of the domain, the location of the target and its size [113]. We consider three typical situations (in increasing order of generality):

  • (a)  
    a fully reactive boundary described by the Dirichlet boundary condition (un)|∂Ω = 0 (here, Γ = ∂Ω);
  • (b)  
    a partially reactive boundary described by the Robin boundary condition ${\left(D{\partial }_{\mathbf{n}}{u}_{n}+\kappa {u}_{n}\right)}_{\vert \partial {\Omega}}=0$, where κ is the reactivity and ∂n is the normal derivative oriented outwards of the domain (here, Γ = ∂Ω); and
  • (c)  
    a partially reactive target Γ on the otherwise reflecting boundary ∂Ω\Γ, described by the mixed boundary condition ${\left(D{\partial }_{\mathbf{n}}{u}_{n}+\kappa {u}_{n}\right)}_{\vert {\Gamma}}=0$ and ${\left({\partial }_{\mathbf{n}}{u}_{n}\right)}_{\vert \partial {\Omega}{\backslash}{\Gamma}}=0$. We emphasise that the last situation includes two distinct cases: a target located on the reflecting boundary and a target located in the bulk (figure 1). In the latter case, the boundary of the confining domain Ω has two disjoint components: the outer reflecting boundary and the inner reactive target. Even though these two locations of the target are usually distinguished in physical literature, their mathematical description is the same. Note that the target region Γ does not need to be connected, i.e., one can consider multiple targets.

The volume-averaged survival probability then reads

Equation (9)

$$\overline{S(t)} = \sum_{n=1}^{\infty} c_n\, \mathrm{e}^{-D\lambda_n t},$$

where

Equation (10)

$$c_n = \frac{1}{\vert\Omega\vert}\left\vert\int_{\Omega} \mathrm{d}\mathbf{x}\, u_n(\mathbf{x})\right\vert^{2}$$

Similar spectral expansions hold for the PDF and its volume average,

Equation (11)

$$\rho(t\vert\mathbf{x}) = D\sum_{n=1}^{\infty} \lambda_n\, \mathrm{e}^{-D\lambda_n t}\, u_n(\mathbf{x}) \int_{\Omega} \mathrm{d}\mathbf{x}'\, u_n^{*}(\mathbf{x}')$$

and

Equation (12)

$$\overline{\rho(t)} = D\sum_{n=1}^{\infty} \lambda_n\, c_n\, \mathrm{e}^{-D\lambda_n t}.$$

From these expansions one can easily derive the volume-averaged moments of the FPT ${\mathcal{T}}_{1}$,

Equation (13)

$$\overline{\langle \mathcal{T}_1^{m}\rangle} = \int_0^{\infty} \mathrm{d}t\; t^{m}\, \overline{\rho(t)} = \frac{m!}{D^{m}} \sum_{n=1}^{\infty} \frac{c_n}{\lambda_n^{m}}.$$

Throughout the paper we distinguish the volume average over random initial points (denoted by the overline) and the ensemble average over random trajectories (denoted by angular brackets).

At long times the survival probability and the PDF decay exponentially fast, with the decay time 1/(Dλ1) determined by the smallest eigenvalue λ1. The volume average does not affect this decay. In contrast, the short-time behaviour is drastically different for fixed-point and volume-averaged quantities. In fact, the volume-averaged survival probability behaves as

Equation (14a)

$$\overline{S(t)} \simeq 1 - \frac{2\vert\Gamma\vert\sqrt{Dt}}{\sqrt{\pi}\,\vert\Omega\vert} \qquad (\kappa = \infty),$$

Equation (14b)

$$\overline{S(t)} \simeq 1 - \frac{\kappa\vert\Gamma\vert\, t}{\vert\Omega\vert} \qquad (\kappa < \infty),$$

which follows from the short-time expansion of the heat content [114–117] (see also appendix B for the next-order correction). Qualitative arguments behind this asymptotic result are simple. In the diffusion-limited regime (κ = ∞) only those particles in a thin layer of width $\sqrt{Dt}$ near the reactive boundary Γ can reach this boundary and thus disappear within a short time. The relative fraction of such particles is $\sqrt{Dt}\enspace \vert {\Gamma}\vert /\vert {\Omega}\vert $, where |Γ| is the surface area of Γ and |Ω| is the volume of the domain Ω. In contrast, in the reaction-limited regime, the decay of $\overline{S\left(t\right)}$ is limited by κ and is thus proportional to t. An immediate consequence of equation (14) is

Equation (15)

$$\overline{\rho(t)} \simeq \frac{\vert\Gamma\vert\sqrt{D}}{\sqrt{\pi}\,\vert\Omega\vert\,\sqrt{t}} \quad (\kappa = \infty), \qquad \overline{\rho(t)} \simeq \frac{\kappa\vert\Gamma\vert}{\vert\Omega\vert} \quad (\kappa < \infty).$$

In contrast, $S\left(t\vert {\mathbf{x}}_{0}\right)$ and $\rho \left(t\vert {\mathbf{x}}_{0}\right)$ exhibit a fast exponential decay as t → 0,

Equation (16a)

Equation (16b)

with the constants A and α, see [70, 76, 85, 118–120].

As shown in [88], the above short-time asymptotic behaviour of the survival probability for a single particle determines the leading behaviour of the moments of the fFPT and its volume-averaged probability density $\overline{{\rho }_{N}\left(t\right)}$ in the limit N → ∞. In particular, for a uniform distribution of the starting points, $\overline{{\rho }_{N}\left(t\right)}$ is close to the Weibull density

Equation (17)

$$\overline{\rho_N(t)} \approx N B k\, t^{k-1}\, \mathrm{e}^{-N B t^{k}},$$

where k = 1/2 and $B=2\vert {\Gamma}\vert \sqrt{D}/\left[\sqrt{\pi }\vert {\Omega}\vert \right]$ for a perfectly reactive target, while k = 1 and B = |Γ|κ/|Ω| for a partially reactive target.
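To make these single-particle formulas concrete, the following Python sketch evaluates the volume-averaged survival probability for an interval (0, L) with two perfectly absorbing endpoints, for which the spectral data are elementary (λn = (πn/L)² and cn = 8/(π²n²) for odd n, cn = 0 for even n, consistent with the value c1 = 8/π² quoted in sections 3.5 and 3.6), and compares it at short times with the leading behaviour 1 − 2|Γ|(Dt)^{1/2}/(π^{1/2}|Ω|). The truncation order and the time points are assumptions of the sketch.

```python
import numpy as np

L, D = 1.0, 1.0
B = 2.0 * (2.0 / L) * np.sqrt(D) / np.sqrt(np.pi)   # B = 2|Gamma|sqrt(D)/(sqrt(pi)|Omega|); |Gamma|/|Omega| = 2/L

n = np.arange(1, 10000, 2)                 # odd spectral indices
lam = (np.pi * n / L) ** 2                 # Laplacian eigenvalues
c = 8.0 / (np.pi ** 2 * n ** 2)            # coefficients c_n of equation (10) for the interval

def S_bar(t):
    """Volume-averaged survival probability, equation (9), for the interval."""
    t = np.atleast_1d(t)
    return np.exp(-D * lam[None, :] * t[:, None]) @ c

for ti in (1e-6, 1e-5, 1e-4, 1e-3):
    spectral = S_bar(ti)[0]
    short_time = 1.0 - B * np.sqrt(ti)
    print(f"t = {ti:.0e}   spectral S_bar = {spectral:.6f}   1 - B*sqrt(t) = {short_time:.6f}")
```

The same spectral sums give $\overline{\rho(t)}$ and hence, via equation (6), the volume-averaged fFPT density, which can then be compared directly with the Weibull form (17).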

3.2. Volume-averaged mean fFPT

We now turn to the many-particle case and obtain a simple approximation for the volume-averaged mean fFPT

Equation (18)

$$\overline{\langle\mathcal{T}_N\rangle} = \int_0^{\infty} \mathrm{d}t\; t\,\overline{\rho_N(t)} = \int_0^{\infty} \mathrm{d}t\,\left[\overline{S(t)}\right]^{N},$$

where we used equation (6). We first split this integral into two parts,

Equation (19)

$$\overline{\langle\mathcal{T}_N\rangle} = \int_0^{T} \mathrm{d}t\,\left[\overline{S(t)}\right]^{N} + \int_T^{\infty} \mathrm{d}t\,\left[\overline{S(t)}\right]^{N}.$$

The first integral will be evaluated explicitly by using the short-time asymptotic formula (14). Under an appropriate choice of T, the contribution of the second term is greatly attenuated for large N and can be ignored. The 'threshold time' T can be chosen to minimise the error of such an approximation but we will see that the final result does not depend on T, as expected.

3.2.1. Perfect reactivity

For a perfectly reactive target (κ = ∞) and a finite domain we use equation (14a) to get

Equation (20)

where $z=\frac{2\vert {\Gamma}\vert \sqrt{DT}}{\sqrt{\pi }\vert {\Omega}\vert }$ is a number between 0 and 1 that is set by our choice of T. When

Equation (21)

the correction term $(1-z)^{N+1}$ can be neglected, and one gets

Equation (22)

$$\overline{\langle\mathcal{T}_N\rangle} \simeq \frac{\pi\vert\Omega\vert^{2}}{2D\vert\Gamma\vert^{2}}\, \frac{1}{(N+1)(N+2)}.$$

This approximate expression indeed does not depend on T, as it should. Figure 2(a) illustrates the remarkable accuracy of this approximation in the case of an interval (0, L) with absorbing endpoints (here, |Γ|/|Ω| = 2/L). This high accuracy results from the fact that the asymptotic relation (14) is exponentially accurate for the interval,

Equation (23)

see the exact representation (C6b). For this reason, we kept the form 1/[(N + 1)(N + 2)] in equation (22) because its replacement by the leading term 1/N² would yield a correction O(N⁻³).
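The N-dependence can also be probed directly from equation (18): the sketch below (Python) integrates $[\overline{S(t)}]^N$ numerically for the interval with absorbing endpoints and prints $N^{2}\,\overline{\langle\mathcal{T}_N\rangle}$, which approaches a constant for large N, consistent with the 1/[(N + 1)(N + 2)] form above. The integration grid and the truncation of the spectral sum are assumptions of the sketch.

```python
import numpy as np

L, D = 1.0, 1.0
n = np.arange(1, 5000, 2)                     # odd spectral indices
lam = (np.pi * n / L) ** 2
c = 8.0 / (np.pi ** 2 * n ** 2)

t = np.logspace(-9, 1, 3000)                  # log-spaced time grid (assumption)
S_bar = np.exp(-D * lam[None, :] * t[:, None]) @ c   # volume-averaged survival probability

for N in (10, 100, 1000):
    integrand = S_bar ** N                    # equation (18): mean fFPT = integral of S_bar^N
    mean_ffpt = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))   # trapezoidal rule
    print(f"N = {N:5d}   mean fFPT = {mean_ffpt:.3e}   N^2 * mean fFPT = {N**2 * mean_ffpt:.4f}")
```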

Figure 2.

Figure 2. Relative error of our approximation of the volume-averaged mean fFPT for: (a) an interval (0, L) with absorbing endpoints, (b) an absorbing sphere of radius L, and (c) an absorbing spherical target of radius R = 0.1L surrounded by a reflecting sphere of radius L, with L = 1 and D = 1. The solid line shows the relative error between $\overline{\langle {\mathcal{T}}_{N}\rangle }$ computed by integrating numerically equation (18) and from our approximation (22) for the interval, and (24) for spheres. The dashed line indicates the relative error for the case when only the leading term is kept in equation (24). For the interval the approximation is remarkably accurate because the next-order correction to the survival probability is exponentially small. For a small absorbing target, the asymptotic regime is established at much larger N. Note that $\overline{S\left(t\right)}$ was computed here via the spectral expansion (C21) truncated after 10 000 terms to ensure accurate integration at short times.


In general, however, the next-order term in the short-time behaviour of the survival probability is of order O(t), which deteriorates the exponential accuracy of equation (22). In appendix B, we compute the O(N⁻³) correction term to this formula:

Equation (24)

where $\mathcal{R}$ is the mean curvature radius of the target defined in equation (B2) (note that $\mathcal{R}$ can be negative, see below). The leading, 1/N²-term was first mentioned in the seminal paper [79] and later re-discovered in [86, 87]. Moreover, a transition from an intermediate 1/N scaling to the ultimate 1/N² behaviour was analysed in [86]. A mathematical derivation of the leading 1/N²-term for a rather general class of diffusion processes was recently reported in [88]. Figure 2(b) shows the relative error of our approximation for diffusion inside a ball of radius L toward its absorbing boundary (i.e., Γ = ∂Ω and $\mathcal{R}=L$). As expected, the accuracy is lower at small N but still remains excellent. Finally, figure 2(c) presents the relative error for the case of a perfectly absorbing spherical target of radius R = 0.1L surrounded by a reflecting sphere of radius L (here, the mean curvature radius is negative: $\mathcal{R}=-R$). In contrast to previous examples, the asymptotic regime is established at much larger N, in agreement with the condition (21). In fact, here z ∝ |Γ| ∝ (R/L)² is 100 times smaller than in the case of an absorbing sphere, i.e., one needs a hundred-fold increase of N to get a comparable accuracy.

How can one rationalise the unexpected 1/N² scaling? As the starting points of the searchers are spread in the domain, the particle closest to the target has more chances to reach it first. As discussed in appendix D, the average distance $\overline{\delta }$ between the closest particle and the target is of the order of 1/N (and not $1/N^{1/d}$, as one might expect from a standard estimate of the average inter-particle distance in d dimensions). Moreover, if N ≫ 1, there are many particles that start from a comparable distance. Even if the closest particle diffuses far away from the target, there is a high chance that one of the other particles that started at a distance of order 1/N will hit the target. In other words, this problem resembles the diffusion of a single particle on the interval $\left(0,\overline{\delta }\right)$ with absorption at 0 (on the boundary) and reflection at $\overline{\delta }$, for which the mean FPT is ${\overline{\delta }}^{2}/\left(2D\right)\propto 1/{N}^{2}$. We conclude that the scaling 1/N² arises from the fact that many particles can start close to the target. Moreover, the generic properties of the heat kernel ensure that this scaling still holds for non-uniform initial distributions which do not exclude particles from a close vicinity of the target (see appendix D). As we will discuss in section 4, the decay $\overline{\langle {\mathcal{T}}_{N}\rangle }\propto {N}^{-2}$ is much faster than and drastically different from the scaling law $\langle {\mathcal{T}}_{N}\rangle \propto 1/\mathrm{ln}\enspace N$ obtained earlier in the case of a fixed starting point [79].
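The estimate $\overline{\delta}\sim\vert\Omega\vert/(N\vert\Gamma\vert)\propto 1/N$ is easy to verify numerically. The Python sketch below draws N points uniformly (in volume) between a small spherical target of radius R = 0.1L and an outer sphere of radius L (the geometry of figure 2(c)) and shows that N times the mean distance of the closest point to the target surface approaches a constant close to |Ω|/|Γ| for large N; the sample sizes are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
L, R = 1.0, 0.1                               # outer radius and target radius (as in figure 2(c))
ratio = (L**3 - R**3) / (3.0 * R**2)          # |Omega| / |Gamma| for this geometry

def nearest_gap(N):
    """Distance from the target surface r = R to the closest of N points
    drawn uniformly (in volume) in the shell R < r < L."""
    r = L * rng.uniform(0.0, 1.0, size=2 * N) ** (1.0 / 3.0)   # uniform in the ball of radius L
    r = r[r > R][:N]                                           # discard the rare points inside the target
    return np.min(r) - R

for N in (10**3, 10**4, 10**5):
    gaps = np.array([nearest_gap(N) for _ in range(200)])
    print(f"N = {N:6d}   mean gap = {gaps.mean():.2e}   "
          f"N * mean gap = {N * gaps.mean():.1f}   (|Omega|/|Gamma| = {ratio:.1f})")
```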

3.2.2. Partial reactivity

For a partially reactive target (κ < ∞), equation (14b) entails a different scaling of the volume-averaged mean fFPT with N,

Equation (25)

i.e. the contribution to the volume-averaged mean fFPT in the regime of a reaction control vanishes only as a first inverse power of N. The derivation of the correction term is presented in appendix B. The leading 1/N-term was recently obtained in [88].

Figure 3 illustrates the quality of this approximation for several values of κ for an interval of length L with a partially reactive endpoint (in which case Γ = {0} and thus |Ω|/|Γ| = L). The accuracy is lower than in the former case of absorbing boundaries, because the error of the approximation, O(N⁻¹), decreases more slowly than that of equation (24). In addition, we plot the relative error in the case of a spherical partially reactive target of radius R = 0.1L surrounded by a reflecting sphere of radius L. In this setting, the coefficient in front of the correction term, $\sqrt{\kappa \vert {\Omega}\vert /\left(D\vert {\Gamma}\vert \right)}\approx 5.8$, is not small, so that this correction term turns out to deteriorate the quality of the approximation at small N, as can be seen from the dashed line. In turn, when N is large, the correction term improves the accuracy, as expected.

Figure 3.

Figure 3. Absolute value of the relative error of the approximation (25) for the volume-averaged mean fFPT. (a) The case of an interval (0, L) with a partially reactive endpoint at x = L and a reflecting endpoint at x = 0, with L = 1, D = 1, and the three values of κL/D indicated in the plot. Lines represent the ratio between $\overline{\langle {\mathcal{T}}_{N}\rangle }$ computed from numerical evaluation of equation (18) and from the approximation (25). We checked numerically that the relative error of the approximation decays as O(N⁻¹) (thin black line). (b) The case of a spherical partially reactive target of radius R = 0.1L surrounded by a reflecting sphere of radius L, with κ = 1. The dashed line represents the relative error obtained by using only the leading term in (25).


Since the ratio N/|Ω| in (25) can be interpreted as a mean concentration [A] of diffusing particles, our result is consistent with standard chemical kinetics, in which the reaction rate scales linearly with [A]. Note, however, that we also found the next-order term, which behaves as $\sqrt{\left[A\right]}$. In contrast, expression (24) vanishes as 1/N² and hence is proportional to the squared concentration of particles. This is very different from the Smoluchowski rate, which is linear in the concentration, because the latter appears as a limiting form when both the volume and the number of particles grow to infinity, while their ratio is kept fixed and is sufficiently small (see appendix A). It means that, in contrast to a standard Collins–Kimball relation, in which both rate-controlling factors enter with the same power of the concentration, for bounded domains in the limit N → ∞ the reaction inevitably enters into a regime of kinetic control, while the controlling factor of diffusion becomes subdominant and defines only some finite corrections. In a way, this situation is similar to a transition to kinetic control upon reduction of the size of the escape window, which was predicted recently for the so-called narrow escape problem [58, 109].

3.3. Variance of the fFPT

As the fFPT ${\mathcal{T}}_{N}$ includes two sources of randomness (from Brownian trajectories and from random initial positions), there are at least two ways of characterising the fluctuations of the ${\mathcal{T}}_{N}$. In the first way, one may use the variance defined as

Equation (26)

where the second term is just the square of the mean volume-averaged fFPT. From a physical point of view, it is more convenient to consider the volume-averaged conditional variance

Equation (27)

In other words, one first evaluates the variance of ${\mathcal{T}}_{N}$ for a fixed set of initial positions x1, ..., xN, and then averages this conditional variance over these positions distributed uniformly in Ω.

To find both variances, we first evaluate the second moment of ${\mathcal{T}}_{N}$ with respect to the probability density ρN(t|x1, ..., xN),

Equation (28)

$$\langle\mathcal{T}_N^{2}\rangle = \int_0^{\infty} \mathrm{d}t\; t^{2}\, \rho_N(t\vert\mathbf{x}_1,\ldots,\mathbf{x}_N) = 2\int_0^{\infty} \mathrm{d}t\; t \prod_{i=1}^{N} S(t\vert\mathbf{x}_i),$$

and then average it over the starting points to find

Equation (29)

$$\overline{\langle\mathcal{T}_N^{2}\rangle} = 2\int_0^{\infty} \mathrm{d}t\; t\, \left[\overline{S(t)}\right]^{N}.$$

To get the variance in (26), it is sufficient to subtract the squared mean $\overline{\langle {\mathcal{T}}_{N}\rangle }$ studied in section 3.2. In turn, for the conditional variance in (27), one needs to subtract the squared first moment $\langle {\mathcal{T}}_{N}\rangle $, averaged over the starting points,

Equation (30)

In order to further simplify this expression we use the spectral expansions (8) and (11) along with the orthonormality of Laplacian eigenfunctions to derive the identities

Equation (31a)

Equation (31b)

Equation (31c)

In particular, the first identity implies

Equation (32)

which is one half of $\overline{\langle {\mathcal{T}}_{1}^{2}\rangle }$, compare equation (13).

Using the identity (31a) we complete the above computation,

Equation (33)

from which the volume-averaged conditional variance of the fFPT follows in the form

Equation (34)

To proceed, we substitute again $\overline{S\left(t\right)}$ by its short-time approximation (14). For a perfectly absorbing boundary (κ = ∞) we get from equation (29),

Equation (35)

where T is the cut-off time and $B=2\vert {\Gamma}\vert \sqrt{D}/\left[\sqrt{\pi }\vert {\Omega}\vert \right]$ (note that exponentially small corrections due to ${\left(1-B\sqrt{T}\right)}^{N}$ are neglected in equation (35)). Substituting equations (35) and (24) into equation (26), we get

Equation (36)

Similarly, we get

Equation (37)

Combining these results, we get the volume-averaged conditional variance of the fFPT to leading order in the limit N → ∞:

Equation (38)

Dividing this variance by the squared volume-averaged mean value of the fFPT produces

Equation (39)

Similarly, one can also obtain the squared coefficient of variation of ${\langle {\mathcal{T}}_{N}\rangle }_{{\mathbf{x}}_{1},\dots ,{\mathbf{x}}_{N}}$ with respect to the random variables x1, ..., xN,

Equation (40)

The following point is to be emphasised. In statistical analysis of the first-passage phenomena in bounded domains [68, 89, 90], the coefficient of variation of the PDF is a meaningful characteristic which probes its 'effective' broadness. Typically, in situations when this parameter is much less than unity, one deals with a narrow distribution and the actual behaviour is well-captured by its first moment—the mean FPT, which sets a unique time scale. Only in this case the FPTs are 'focused', i.e., concentrated around the mean value. Conversely, when the coefficient of variation is of order unity or even exceeds it, the PDF, despite the fact that it possesses moments of arbitrary order, exhibits in some aspects a behaviour reminiscent of so-called broad distributions, like heavy-tailed distributions which do not possess all moments. In this case, fluctuations around the mean value are comparable to the mean value itself and hence, it is most likely that the values of FPTs observed in two realisations of the process will be disproportionately different. In the case at hand, we notice that the coefficient of variation of the volume-averaged fFPT, equation (39), is of order unity, which implies that here the fluctuations of this quantity around its mean value are of the order of the value itself. More strikingly, the fluctuations of the fFPT corresponding to randomly distributed initial positions exceed the mean value of the fFPT, see equation (40), which signifies that randomness in the initial positions of the searchers has a very pronounced effect on the spread of fFPTs. Therefore, we conclude that for both volume-averaged and 'bare' fFPTs no unique time scale exists and their actual behaviour cannot be fully characterised by their mean values.

3.4. Long-time behaviour

Now we consider the volume-averaged probability density for N particles. As usual, the long-time limit presents the simplest setting for bounded domains due to the exponential decay of the survival probability and the PDF, as well as their volume averages. In particular one gets from equation (9)

Equation (41)

$$\overline{\rho_N(t)} \simeq N D\lambda_1\, c_1^{N}\, \mathrm{e}^{-N D\lambda_1 t},$$

that is, the characteristic decay time 1/(Dλ1) for a single particle gets simply reduced by the factor N. This steeper exponential decay shifts the PDF to smaller times. The decay time is a natural timescale of the diffusive exploration of the whole domain. For a single particle, this timescale is usually close to the mean FPT. Surprisingly enough, this is not true when multiple particles search for perfect targets starting from distinct random locations. As evidenced by our result (24), here the mean fFPT scales as 1/N², i.e., it is much smaller than 1/(Dλ1N). Therefore, the speedup of the reaction kinetics due to a deployment of N searchers starting at distinct random positions appears to be really striking, as compared to the much more modest logarithmic increase in the efficiency predicted in the case when all of them start from the same point.

It is also instructive to compare the exponential decay in (41) to the asymptotic Weibull density (17). For partially reactive targets, k = 1 and the functional form of (17) generally agrees with the result (41), except for the coefficients. In contrast, for the case of perfectly reactive targets, k = 1/2, and (17) exhibits a stretched-exponential decay with time. The difference between these two asymptotic behaviours stems from the order in which the limits are taken: the Weibull density (17) was derived for a fixed t as N → ∞ [88], whereas expression (41) corresponds to the limit t → ∞ with fixed N.

3.5. Fluctuations between individual realisations

When the starting point x0 is random, the survival probability $S\left(t\vert {\mathbf{x}}_{0}\right)$ and the PDF $\rho \left(t\vert {\mathbf{x}}_{0}\right)$ for each fixed t can be considered as random variables themselves. This circumstance was emphasised in the insightful paper [122] studying the survival of an immobile target in the presence of diffusive traps. Moments of the survival probability regarded as a random variable, and the difference between the typical and mean behaviour, were analysed in [123–125] for the problem of survival of a diffusive particle in the presence of immobilised traps. The volume-averaged PDF, $\overline{\rho \left(t\right)}$, is simply the mean of $\rho \left(t\vert {\mathbf{x}}_{0}\right)$. In order to characterise fluctuations it is instructive to compute the variance of $\rho \left(t\vert {\mathbf{x}}_{0}\right)$.

3.5.1. Single particle

We characterise fluctuations by the squared coefficient of variation

Equation (42)

$$\gamma(t) = \frac{\overline{\rho^{2}(t)} - \left[\overline{\rho(t)}\right]^{2}}{\left[\overline{\rho(t)}\right]^{2}} = \frac{\overline{\rho^{2}(t)}}{\left[\overline{\rho(t)}\right]^{2}} - 1,$$

where the second moment can be evaluated using equation (31c). At long times, we get

Equation (43)

$$\gamma(t) \simeq \gamma(\infty) = \frac{1}{c_1} - 1 \qquad (t\to\infty),$$

where c1 is defined by equation (10). For instance, for an interval (0, L) with absorbing endpoints, c1 = 8/π² and thus γ(∞) = π²/8 − 1 ≈ 0.23 is of order unity.

For short times, equation (15) implies $\overline{{\rho }^{2}\left(t\right)}\simeq \left(B/{2}^{7/2}\right){t}^{-3/2}$ with $B=2\sqrt{D}\vert {\Gamma}\vert /\left[\sqrt{\pi }\vert {\Omega}\vert \right]$, and thus

Equation (44)

$$\gamma(t) \simeq \frac{1}{2^{3/2}\,B\sqrt{t}} + O(1) \qquad (t\to 0).$$

In other words, the coefficient of variation diverges in this limit, meaning that the volume-averaged density is no longer representative. This is confirmed by figure 5(a), see below.

3.5.2. Multiple particles

This illustrative computation can be extended to the case of N particles. We get

Equation (45)

Using the identities (31), we find that the squared coefficient of variation obeys

Equation (46)

At long times, the ratios entering this expression approach 1/c1 so that we get

Equation (47)

The Hölder inequality, applied to the first (positive) eigenfunction u1, implies that

Equation (48)

As a consequence, γN(t) exponentially diverges with N for large t. In section 3.6 below we explain this paradoxical blow-up of fluctuations.

At short times, we find

Equation (49)

(note that one can also get correction terms O(1)/N from the first expression). As the number of particles increases, there are two effects. On the one hand, the first (divergent) term is progressively attenuated. In other words, for any fixed t, taking N large enough we diminish γN(t) and hence get a narrower distribution of the random variable ρ = ρN(t|x1, ..., xN) (here we disregard the fact that ρN is itself a probability density and consider it as a random variable due to the randomness of x1, ..., xN). On the other hand, the second term increases and rapidly reaches a constant contribution 1/2 to the squared coefficient of variation. Even though the second term comes with the negative sign, it cannot render the coefficient of variation negative: indeed, the short-time asymptotic relation (49) is only valid for t small enough such that the first term in (49) provides the dominant contribution.

Figure 4 shows an excellent agreement between the exact expression for γN(t), its short-time asymptotic behaviour and Monte Carlo simulation results for an interval (0, L). Note that we replaced the O(1) term in equation (44) by −1 because for an interval, the short-time asymptotic formulas (15) are exponentially accurate, and the only correction −1 comes from the definition of the coefficient of variation.

Figure 4.

Figure 4. Squared coefficient of variation γN(t) for an interval (0, L) with absorbing endpoints, with L = 1 and D = 1, and N = 1 (a), N = 5 (b), and N = 10 (c). The thick line represents the exact solution (46) (computed by truncating spectral expansions at n = 1000), the dashed line shows the short-time asymptotic (49). Filled circles show the results of Monte Carlo simulations with M = 105 trials.


Figure 5 illustrates the PDFs ρN(t|x1, ..., xN) for 100 random combinations of the starting points x1, ..., xN chosen uniformly and independently on the interval (0, L). Fluctuations of the PDFs ρN(t|x1, ..., xN) around their volume-averaged mean $\overline{{\rho }_{N}\left(t\right)}$ are present and significant for all N ranging from 1 to 1000. The broadness of these fluctuations can be seen even better in figure 6, which illustrates 10 000 random realisations of the starting points. This is a rather counter-intuitive observation, as a sort of convergence to the volume-averaged PDF is expected as N increases. Note that the asymptotic Weibull density (17) accurately describes the volume-averaged mean $\overline{{\rho }_{N}\left(t\right)}$ already for N = 100.

Figure 5.

Figure 5. Probability density of the FPT for an interval (0, L) with perfectly absorbing endpoints (κ = ∞), L = 1 and D = 1. The thin grey lines represent 100 realisations of the PDFs ρN(t|x1, ..., xN) with uniformly and independently chosen starting points x1, ..., xN. The thick yellow line shows the volume-averaged PDF $\overline{{\rho }_{N}\left(t\right)}$ from equation (6) with $\overline{\rho \left(t\right)}$ and $\overline{S\left(t\right)}$ given by (C4a) and (C6b), whereas red filled circles represent the asymptotic Weibull density (17). In turn, filled blue triangles show the typical PDF ρ1,typ(t) from equation (58), computed by numerical integration over the starting point. (a) N = 1, (b) N = 10, (c) N = 100, and (d) N = 1000. The vertical dashed line shows the volume-averaged mean fFPT from equation (22). Coloured bars at the top present ten 10%-quantiles based on $\overline{{S}_{N}\left(t\right)}$.

Figure 6.

Figure 6. The same as figure 5 but with 10 000 realisations of the PDFs ρN(t|x1, ..., xN).


We emphasise that the PDF is rapidly shifted towards smaller times as N increases. Intuitively, one could expect that this shift is controlled by the long-time exponential decay and its timescale $T_1/N$ (with the single-particle decay time $T_1 = 1/(D\lambda_1)$), see section 3.4. However, the appropriate time scale here is the volume-averaged mean fFPT, $\overline{\langle {\mathcal{T}}_{N}\rangle }$, which decays much faster, as 1/N², see the vertical dashed lines. We stress again that the decay time here is not related to the mean fastest FPT.

The general shape of the PDFs ρN(t|x1, ..., xN) is significantly different from the volume average $\overline{{\rho }_{N}\left(t\right)}$ in figure 5. Thus for a single realisation the starting points are a finite distance away from the target, which effects the exponential cutoff at very short times and the emergence of the peak (the most probable FPT). At longer times we see the exponential shoulder with time scale $T_1/N$. In figure 5(a) individual realisations feature a slight bend at intermediate times, between the most probable FPT and the exponential shoulder; however, this feature is lost with increasing N. In this one-dimensional setting the pronounced plateau in the FPT density uncovered in [57] is not present; a further separation of the most probable time and the longest time scale $T_1/N$ would build up for decreasing reactivity. The volume-averaged PDF $\overline{{\rho }_{N}\left(t\right)}$, in strong contrast, includes particles starting arbitrarily close to the target, and here we do not see the initial exponential suppression. Instead $\overline{{\rho }_{N}\left(t\right)}$ decays monotonically, and the only relevant scale remaining is $\overline{\langle {\mathcal{T}}_{N}\rangle }$. Nevertheless the FPT density remains broad, as quantified by the 10%-quantiles in figure 5 and our results for the coefficient of variation.
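The realisation-to-realisation spread visible in figures 5 and 6 is easy to reproduce from the explicit spectral expansion for the interval. The Python sketch below evaluates ρN(t|x1, ..., xN) for a number of random draws of the uniform starting points, using the standard Fourier series for the single-particle survival probability and FPT density on (0, L) with absorbing endpoints; the number of realisations, spectral terms and time points are assumptions of the sketch, and matplotlib is used only for a quick plot in the spirit of figure 5.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
L, D, N = 1.0, 1.0, 10
n = np.arange(1, 2001, 2)                      # odd spectral indices
lam = D * (np.pi * n / L) ** 2

def S(t, x0):
    """Single-particle survival probability on (0, L) with absorbing endpoints."""
    return np.sum((4.0 / (np.pi * n)) * np.sin(np.pi * n * x0 / L)
                  * np.exp(-lam * t[:, None]), axis=1)

def rho(t, x0):
    """Single-particle FPT density, rho(t|x0) = -dS(t|x0)/dt."""
    return np.sum((4.0 / (np.pi * n)) * lam * np.sin(np.pi * n * x0 / L)
                  * np.exp(-lam * t[:, None]), axis=1)

t = np.logspace(-4, 0, 300)
for _ in range(20):                            # a few random realisations of the starting points
    x = rng.uniform(0.0, L, size=N)
    S_all = np.array([S(t, xi) for xi in x])
    rho_all = np.array([rho(t, xi) for xi in x])
    rho_N = np.sum(rho_all / S_all, axis=0) * np.prod(S_all, axis=0)   # equation (5)
    plt.loglog(t, rho_N, color="grey", lw=0.5)

plt.xlabel("t")
plt.ylabel("rho_N(t | x_1, ..., x_N)")
plt.show()
```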

Note that the small q-quantiles can be approximated by using again the short-time asymptotic formula (14). In fact, writing

Equation (50)

which is valid for small t and thus small q, one gets easily

Equation (51)

where $B=2\sqrt{D}\vert {\Gamma}\vert /\left[\sqrt{\pi }\vert {\Omega}\vert \right]$. As can be seen in figure 5, the relative locations of the quantiles hardly change for N = 10, 100, and 1000 but are just shifted to shorter times, as expected. This approximate relation is indeed quite accurate. Here, the scaling is again as 1/N², which is a direct consequence of the leading term $O\left(\sqrt{t}\right)$ in the short-time expansion of the volume-averaged survival probability.
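A minimal consistency check of this quantile estimate is sketched below in Python: inverting $[1-B\sqrt{t_q}]^{N}=1-q$, which is one way to read the small-t criterion above, gives $t_q\approx[1-(1-q)^{1/N}]^{2}/B^{2}\approx q^{2}/(B^{2}N^{2})$ for small q, and this is compared with a direct numerical inversion of $[\overline{S(t)}]^{N}$ for the interval with absorbing endpoints. This is an illustration under the stated small-q assumption, not a reproduction of the exact formula (51).

```python
import numpy as np

L, D = 1.0, 1.0
B = 2.0 * (2.0 / L) * np.sqrt(D) / np.sqrt(np.pi)    # B for the interval; |Gamma|/|Omega| = 2/L

n = np.arange(1, 4000, 2)
lam = D * (np.pi * n / L) ** 2
c = 8.0 / (np.pi ** 2 * n ** 2)

t = np.logspace(-9, 0, 4000)
S_bar = np.exp(-lam[None, :] * t[:, None]) @ c       # volume-averaged survival probability
N, q = 100, 0.1
cdf = 1.0 - S_bar ** N                               # cumulative distribution of the fFPT

t_numerical = t[np.searchsorted(cdf, q)]             # numerical q-quantile
t_short_time = (1.0 - (1.0 - q) ** (1.0 / N)) ** 2 / B ** 2
print(f"q = {q}:  numerical t_q = {t_numerical:.3e},  short-time estimate = {t_short_time:.3e}")
```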

3.6. Paradoxical divergence and its rationalisation

The constant c1 from equation (10) determines the behaviour of the coefficient of variation in the long-time limit. This constant is defined to be independent of the size of the domain, i.e., any dilation of the domain does not change c1. For instance, one has c1 = 8/π² ≈ 0.81 for an interval with absorbing endpoints (of any length) and c1 = 6/π² ≈ 0.61 for an absorbing sphere (of any radius). According to inequality (48), this coefficient cannot exceed 1, and it is actually equal to 1 only when the eigenfunction u1 is constant, in other words, for a reflecting boundary without any reaction. As a consequence, the coefficient of variation γN at long times exhibits an exponential divergence with N. This observation appears paradoxical. We here resolve this counter-intuitive behaviour.

First, we recall that taking the limit N → ∞ while keeping the volume |Ω| fixed corresponds to a diverging concentration, which is evidently unphysical. Moreover, in typical applications, the diffusing particles search for a small target contained within a much larger confining domain with reflecting boundary. In this setting, the ground eigenfunction u1(x) is close to a constant so that c1 is close to 1. In other words, if the target is fixed but the confining domain grows, c1 tends to 1, and the double limit N → ∞ and |Ω| → ∞ with a fixed concentration N/|Ω| leads to a finite value of γN and thus mends this paradoxical divergence. To clarify these issues, it is therefore important to investigate its behaviour in the small-target limit. We first analyse an exactly solvable setting and then briefly discuss the general case.

We consider the case of a spherical target of radius R surrounded by a larger reflecting sphere of radius L. In appendix C.4, we provide the exact formula (C22) for the coefficients cn, which depend on the solutions αn of equation (C20). For small R the smallest solution α0 of this equation is expected to be small. Denoting ε = R/L ≪ 1, we consider separately the cases of infinite and finite reactivity.

  • (a)  
    For infinite reactivity (κ = ∞) one substitutes α0 ≈ a1ε + a2ε² + O(ε³) into equation (C20) and expands it into a Taylor series to determine the expansion coefficients ai, from which we get
    Equation (52)
    As a consequence, the coefficient of variation at long times can be approximated as
    Equation (53)
    where [A] = N/|Ω| is the concentration of the diffusing particles.
  • (b)  
    In case of a finite reactivity (κ < ∞) more terms are needed. Substituting α0 ≈ a1ε + a2ε² + a3ε³ + a4ε⁴ + O(ε⁵) into equation (C20), we find
    Equation (54)
    and thus
    Equation (55)
    To ensure the validity of this asymptotic analysis, the first correction term in equation (54) has to be small, which is realised when the inequality κR² ≪ DL holds. This inequality is evidently valid in the low concentration limit when the coefficient of variation vanishes, or when diffusion is very fast. Moreover, it holds in case of a sufficiently low reactivity κ; in this case, evidently, before a reaction eventually takes place there are multiple encounters with the target interspersed with bulk excursions, such that the initial spatial heterogeneity due to the random distribution of particles gets smeared out, rendering γN(∞) small. In contrast, a larger domain favours a higher degree of spatial heterogeneity and thus increases γN. Note also that, in principle, one can relax this restrictive relation between the parameters and use equations (C22) and (47) directly. This will permit us to compute c1 and hence γN(∞) for any values of the parameters.

In the above setting, the target was a small ball located in the bulk and surrounded by a large reflecting sphere. If the target is located on the outer sphere (e.g., a small circular hole), one has to consider a mixed Dirichlet–Neumann boundary condition on the sphere, for which there is no exact solution for the survival probability. In this case, one can either resort to the approximate solution in [58], or rely on the asymptotic analysis in the narrow escape limit. In the latter case, Ward and Keller showed for a perfectly reactive target that

Equation (56)

where the coefficient E1 was expressed as an integral of the first-order correction to the ground eigenfunction [121]. This coefficient can also be expressed in terms of the surface Neumann Green function. When the confining domain Ω is a ball, the explicit form of this Green function was given in [105]. As a consequence, one can access the coefficient E1 and thus the asymptotic behaviour of c1 in the narrow escape limit. Moreover, this computation is valid for multiple small targets located on the sphere. Importantly, the asymptotic relation (56) remains valid even for nonspherical three-dimensional domains, even though the computation of E1 is much harder. We emphasise the distinct scaling of 1 − c1 with ε: it is linear for a target on the boundary, and quadratic for a target in the bulk, see equations (52) and (56). In two dimensions, the approach of c1 to 1 is much slower:

Equation (57)

where 2εL is the length of the target region, see [104, 121] for details. An extension of these asymptotic results to partially reactive targets located on the boundary presents an interesting perspective.

3.7. Typical PDF

Following reference [122], we aim at determining the typical PDF of the fFPT

Equation (58)

where τ is a timescale to render the expression in the logarithm dimensionless. For this purpose, we need to compute

Equation (59)

where

Equation (60)

As x1, ..., xN are independent, uniformly distributed starting points, the sum in equation (59) contains N identical terms. In turn, the first term, U(t), is more involved, as the logarithm couples different particles. Using the following representation for the logarithm

Equation (61)

we get

Equation (62)

where

Equation (63)

3.7.1. Long-time behaviour

The long-time behaviour is easy to determine. As ρ(t|x) = −∂S(t|x)/∂t, the ratio ρ(t|x)/S(t|x) is a positive function that increases monotonically from 0 at t = 0 to Dλ1 as t → ∞. As a consequence, equation (62) implies

Equation (64)

Moreover, we have

Equation (65)

where

Equation (66)

and we recall that u1(x) can be defined to be positive (factors |Ω|±1/2 were included to get dimensionless quantities in logarithms). In this limit, one also has $\rho \left(t\vert {\mathbf{x}}_{0}\right)/S\left(t\vert {\mathbf{x}}_{0}\right)\simeq D{\lambda }_{1}$ so that

Equation (67)

In contrast, the analysis of the short-time behaviour of the typical PDF is much more difficult and remains an open problem for future research.

4. Discussion

We studied the fFPT distribution for diffusion of N Brownian particles in a finite domain Ω with smooth boundary ∂Ω. The particles react with a surface region Γ of perfect or partial reactivity contained in ∂Ω. We obtained the fFPT moments and probability density in the case of a uniform initial condition. In the previously considered case of a common, fixed initial position, the mean fFPT was shown to exhibit the very slow 1/ln N scaling [74, 76, 79, 83–85]. In strong contrast, when the N particles are released random-uniformly in the domain Ω, the mean fFPT shows the much faster decay 1/N² for a perfectly reactive target and 1/N for a target with finite reactivity. In the former case, this scaling also differs from that of the decay time, which scales as 1/N. We provided a rationale for this strong dependence on the initial condition by scaling arguments and supported it by direct numerical computations.

While the scenario of a fixed initial condition bears relevance for the example of the released sperm cells considered in [7476], spread-out initial conditions are relevant in other biological systems. Thus, in biofilms autoinducer molecules are released on the cell surfaces all over the colony [72, 73]. In intracellular gene regulation, so-called global regulatory proteins are distributed approximately evenly throughout the cell volume [43]. Other regulatory proteins may abound around their encoding gene [41], and genes that are controlled by a specific transcription factor tend to locate next to the gene encoding this transcription factor [42, 43]. These examples demonstrate the need for a theory that considers spread-out initial conditions as considered here.

In many biological systems, searching entities such as sperm cells or regulatory proteins are produced at the expense of chemical and energy resources. At the same time, a large number of searchers guarantees a high degree of redundancy, which is of particular importance for the case of sperm cells in singular, crucial events such as reproduction. In contrast, for constantly ongoing processes, such as gene regulation, the biochemical resources would quickly be drained if excessively many molecules had to be produced. In this sense, and given the above scenarios in which the molecules abound in the vicinity of their target, the 1/N and 1/N² scaling provides a new perspective on why relatively few molecules may already be sufficient to effect comparatively speedy regulation.

While in our discussion we contrasted a fixed initial position with the scenario of random-uniform initial positions, figure 5 indicates that the value of the mean fFPT provides the correct scale even for individual random realisations of the starting points. In this sense, one may roughly distinguish two classes of initial conditions: (i) the particles are allowed to start near their target (distributed in the entire domain or concentrated in a subdomain around the target), or (ii) the initial particle position is characterised by a minimal distance to the target. The former class corresponds to our analysis here (see also [88]), the latter class is described by the results in previous works [74, 76, 83–85].

In summary, we believe that our results for the statistics of the fFPT for spread-out initial particle positions provide a fresh perspective on the field of many-particle search for an immobile target in a finite domain and its applications. It should be of interest to further develop this approach both from a mathematical point of view and with regard to concrete applications.

Acknowledgments

DSG thanks MJ Ward for fruitful discussions and acknowledges partial financial support from the Alexander von Humboldt Foundation through a Bessel Research Award. RM acknowledges funding from the German Science Foundation (DFG, Grant ME 1535/7-1) as well as support from the Foundation for Polish Science (Fundacja na rzecz Nauki Polskiej, FNP) within an Alexander von Humboldt Polish Honorary Research Scholarship. GO is grateful to B Meerson for helpful discussions. The authors also thank the anonymous referee for bringing our attention to reference [88], which was published after the submission of our manuscript and provided complementary views on the fFPT. We acknowledge the support of the German Research Foundation (DFG) and the Open Access Publication Fund of Potsdam University.

Appendix A.: Smoluchowski limit

In this appendix we recall the behaviour in the conventional thermodynamic limit when both N and |Ω| tend to infinity with the concentration [A] = N/|Ω| being fixed. In this limit, one gets

where

Equation (A1)

For the survival probability of a spherical target of radius R one gets (see, e.g., [120])

Equation (A2)

In the limit t → ∞, the dominant behaviour of F(t) is provided by the first term, which yields the Collins–Kimball relation

Equation (A3)

where K = 4πR²κ is the intrinsic reaction constant.

In the limit of an infinitely large intrinsic reactivity (κ → ∞), one retrieves from equation (A2) the classical Smoluchowski result

Equation (A4)
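For orientation, the interplay between reaction control and diffusion control behind equations (A3) and (A4) can be illustrated numerically. The sketch below (our addition) uses the standard steady-state interpolation 1/k = 1/kS + 1/K, with the Smoluchowski rate kS = 4πDR and the intrinsic constant K = 4πR²κ defined above; this textbook relation is quoted here only as an assumption for the illustration and is not derived in this appendix.

```python
# Minimal numerical sketch (our addition): steady-state Collins-Kimball rate constant
# from the standard interpolation 1/k = 1/k_S + 1/K, with the Smoluchowski rate
# k_S = 4*pi*D*R and the intrinsic constant K = 4*pi*R**2*kappa defined after (A3).
import numpy as np

def collins_kimball_rate(D, R, kappa):
    """Steady-state reaction rate constant for a partially reactive sphere of radius R."""
    k_S = 4.0 * np.pi * D * R           # diffusion-limited (Smoluchowski) rate
    K = 4.0 * np.pi * R**2 * kappa      # intrinsic (reaction-limited) rate constant
    return k_S * K / (k_S + K)

D, R = 1.0, 0.1                          # arbitrary units for illustration
for kappa in [0.1, 1.0, 10.0, 1e3, 1e6]:
    k = collins_kimball_rate(D, R, kappa)
    print(f"kappa = {kappa:10.1f}   k = {k:.5f}   k/k_S = {k / (4.0*np.pi*D*R):.5f}")
```

As κ grows, the printed ratio k/kS approaches unity, in line with the Smoluchowski limit of equation (A4).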

Appendix B.: Improved formula for the mean fFPT

Our computation of the volume-averaged mean fFPT relied on the leading term of the short-time expansion of the survival probability. We can improve this computation by accounting for the next-order correction.

B.1. Perfect reactivity

For the perfect reactivity, one has [114, 115]

Equation (B1)

where $B=\left(2/\sqrt{\pi }\right)\sqrt{D}\frac{\vert {\Gamma}\vert }{\vert {\Omega}\vert }$ and

Equation (B2)

where H(s) is the mean curvature of the boundary at the point s. Substituting this expression into the first integral in equation (19), we get

where we changed the integration variable as $z=1-B\sqrt{t}+Ct$, with ${z}_{0}=1-B\sqrt{T}+CT$ and

For large N, the main contribution comes from the vicinity of z = 1 so that one can expand the function f(z) into a Taylor series around this point

and we used that f(1) = 0. Integrating term by term, one gets

with f'(1) = −1/B² and f''(1) = 6C/B⁴. As a consequence, we get

Equation (B3)

which takes the form of equation (24).
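The term-by-term integration used above rests on the elementary moments ∫₀¹ z^N (z − 1)^k dz = (−1)^k k! N!/(N + k + 1)!. The following sketch (our addition, with an arbitrary smooth test function g(z) obeying g(1) = 0 in place of the specific f(z) of this appendix) compares the resulting large-N expansion with direct numerical quadrature.

```python
# Minimal sketch (our addition): the large-N, term-by-term integration of appendix B,
# checked on an arbitrary smooth test function g(z) with g(1) = 0 (mimicking f(1) = 0):
#   int_0^1 z**N g(z) dz ≈ g(1)/(N+1) - g'(1)/((N+1)(N+2)) + g''(1)/((N+1)(N+2)(N+3)),
# which follows from int_0^1 z**N (z-1)**k dz = (-1)**k k! N!/(N+k+1)!.
import numpy as np
from scipy.integrate import quad

def g(z):     # test function, g(1) = 0
    return np.log(z) + (1.0 - z)**2

def g1(z):    # first derivative
    return 1.0/z - 2.0*(1.0 - z)

def g2(z):    # second derivative
    return -1.0/z**2 + 2.0

for N in [10, 100, 1000]:
    exact, _ = quad(lambda z: z**N * g(z), 0.0, 1.0)
    approx = (g(1.0)/(N + 1)
              - g1(1.0)/((N + 1)*(N + 2))
              + g2(1.0)/((N + 1)*(N + 2)*(N + 3)))
    print(f"N = {N:5d}   quadrature = {exact:+.6e}   expansion = {approx:+.6e}")
```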

B.2. Partial reactivity

For partial reactivity, one has [116]

Equation (B4)

where

Equation (B5)

Substituting this expression into the first integral in equation (19), we get

where we changed the integration variable as $z=1-Bt+C{t}^{3/2}$, with ${z}_{0}=1-BT+C{T}^{3/2}$, $f\left(z\right)=1/\left(B-3C\sqrt{t\left(z\right)}/2\right)$, and t(z) is the inverse of the above function z(t). Even though f(z) can be written explicitly in terms of the root of a cubic polynomial, this is not needed, as we only use its expansion around z = 1, which reads

Integrating term by term, one gets

Equation (B6)

which takes the form of equation (25).

Appendix C.: Explicit solutions

In this Appendix, we summarise the well-known explicit solutions for two simple settings that we used for illustrations: an interval and a sphere.

C.1. Interval with absorbing endpoints

For the interval (0, L) with absorbing endpoints at 0 and L, the survival probability admits two equivalent explicit expansions

Equation (C1a)

Equation (C1b)

The first relation is numerically more convenient at small t, whereas the second one is more convenient at large t. We also have

Equation (C2a)

Equation (C2b)

Note also that the Laplace-transformed probability density reads

Equation (C3)

from which one can easily compute positive-order moments.

The volume-averaged probability density is then

Equation (C4a)

Equation (C4b)

and the Laplace transform of this density reads

Equation (C5)

where $z=L\sqrt{p/D}$. One also has

Equation (C6a)

Equation (C6b)

For an interval (0, L), one has λn = π²n²/L², ${u}_{n}\left(x\right)=\sqrt{2/L}\enspace \mathrm{sin}\left(\pi nx/L\right)$, and ${c}_{n}=2{\left(1-{\left(-1\right)}^{n}\right)}^{2}/\left({\pi }^{2}{n}^{2}\right)$.

We also easily get

Equation (C7)

and

Equation (C8)
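For illustration, the sketch below (our addition) evaluates the volume-averaged survival probability through the spectral series $\overline{S\left(t\right)}={\sum }_{n}{c}_{n}\enspace {\mathrm{e}}^{-D{\lambda }_{n}t}$ with the λn and cn listed above (our reading of equation (C1b)), and then estimates the volume-averaged mean fFPT as the time integral of ${\left[\overline{S\left(t\right)}\right]}^{N}$, assuming, as in the main text, that the N particles are released independently and uniformly. The product N² × (mean fFPT) approaches a constant, in line with the 1/N² scaling for a perfectly reactive target.

```python
# Minimal sketch (our addition): volume-averaged survival probability of the interval
# (0, L) with two absorbing endpoints, summed as S_bar(t) = sum_n c_n exp(-D*lambda_n*t)
# with lambda_n = pi**2 n**2 / L**2 and c_n = 2*(1-(-1)**n)**2/(pi**2 n**2), and the
# mean fFPT for N independent, uniformly released particles, taken as the time integral
# of S_bar(t)**N. The product N**2 * (mean fFPT) should approach a constant.
import numpy as np
from scipy.integrate import trapezoid

D, L = 1.0, 1.0
n = np.arange(1, 20001)                                   # many terms to resolve small t
lam = np.pi**2 * n**2 / L**2
c = 2.0 * (1.0 - (-1.0)**n)**2 / (np.pi**2 * n**2)        # nonzero for odd n only

def S_bar(t):
    """Volume-averaged survival probability via the spectral series of appendix C.1."""
    return np.sum(c * np.exp(-D * lam * t))

t = np.logspace(-9, 1, 4000)                              # log grid resolves small times
Sbar = np.array([S_bar(ti) for ti in t])

for N in [4, 16, 64, 256]:
    mean_fFPT = trapezoid(Sbar**N, t) + t[0]              # strip [0, t_min] adds ~ t_min
    print(f"N = {N:4d}   mean fFPT ≈ {mean_fFPT:.3e}   N**2 * mean ≈ {N**2 * mean_fFPT:.3f}")
```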

C.2. Interval with a partially reactive endpoint

We also consider the interval (0, L) with partially reactive endpoint at L and reflecting endpoint at 0, for which the Laplacian eigenvalues and eigenfunctions are (see, e.g., [126])

Equation (C9)

with

Equation (C10)

and αn are the positive zeros of the equation αn sin αn = h cos αn, with h = κL/D. For each n = 1, 2, ..., αn belongs to the interval [π(n − 1), π(n − 1/2)), which facilitates their numerical computation; a short numerical sketch is given at the end of this subsection. Note that this problem is equivalent to diffusion on the interval (−L, L), both of whose endpoints are partially reactive. The spectral expansion of the survival probability is [126]

Equation (C11)

Its volume average reads

Equation (C12)

The probability density ρ(t|x0) and its volume average $\overline{\rho \left(t\right)}$ follow by taking the time derivative.
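A short numerical sketch (our addition) shows how the bracketing intervals quoted above can be combined with a standard root finder to compute the αn; inside each bracket the eigenvalue equation αn sin αn = h cos αn is rewritten as α tan α − h = 0, which changes sign exactly once.

```python
# Minimal sketch (our addition): numerical computation of the roots alpha_n of
# alpha*sin(alpha) = h*cos(alpha), i.e. alpha*tan(alpha) = h, using the bracketing
# intervals [pi*(n-1), pi*(n-1/2)) quoted above.
import numpy as np
from scipy.optimize import brentq

def alpha_roots(h, n_max, eps=1e-8):
    """First n_max positive roots alpha_n for a given h = kappa*L/D."""
    f = lambda x: x * np.tan(x) - h
    roots = []
    for n in range(1, n_max + 1):
        a = np.pi * (n - 1) + eps      # left edge of the bracket (f < 0 there)
        b = np.pi * (n - 0.5) - eps    # just below the asymptote of tan (f > 0 there)
        roots.append(brentq(f, a, b))
    return np.array(roots)

h = 5.0                                # example value of h = kappa*L/D
alphas = alpha_roots(h, 5)
print(alphas)
print(alphas * np.sin(alphas) - h * np.cos(alphas))   # residuals, should be ~ 0
```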

C.3. Absorbing sphere

We also consider the first-passage problem to the absorbing boundary of a ball of radius L, for which the survival probability is also explicitly known,

Equation (C13a)

Equation (C13b)

As a consequence, the PDF reads

Equation (C14)

We also get

Equation (C15a)

Equation (C15b)

and

Equation (C16a)

Equation (C16b)

We also easily get

Equation (C17)

and

Equation (C18)

where ζ(z) is the Riemann zeta function with ζ(3) ≈ 1.2021. Here we used the following expression for the first eigenfunction: ${u}_{1}\left(\mathbf{x}\right)={\beta }_{1}\frac{\mathrm{sin}\left(\pi \vert \mathbf{x}\vert /L\right)}{\pi \vert \mathbf{x}\vert /L}$, where ${\beta }_{1}=\sqrt{\pi /\left(2{L}^{3}\right)}$ is the normalisation factor. Note also that c1 = 6/π².
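These constants are easy to verify numerically. The sketch below (our addition) checks the normalisation of u1 and the value c1 = 6/π² by radial quadrature, with c1 understood as ${\left({\int }_{{\Omega}}\mathrm{d}\mathbf{x}\enspace {u}_{1}\left(\mathbf{x}\right)\right)}^{2}/\vert {\Omega}\vert $, consistent with the interval coefficients of appendix C.1.

```python
# Minimal sketch (our addition): radial quadrature check of the constants quoted above
# for the absorbing ball of radius L. Here u1(r) = beta1 * sin(pi*r/L)/(pi*r/L) and
# c1 is computed as (integral of u1 over the ball)**2 / |Omega|.
import numpy as np
from scipy.integrate import quad

L = 1.0
beta1 = np.sqrt(np.pi / (2.0 * L**3))
u1 = lambda r: beta1 * np.sinc(r / L)        # np.sinc(x) = sin(pi*x)/(pi*x)

norm, _ = quad(lambda r: 4.0 * np.pi * r**2 * u1(r)**2, 0.0, L)
integral, _ = quad(lambda r: 4.0 * np.pi * r**2 * u1(r), 0.0, L)
volume = 4.0 * np.pi * L**3 / 3.0

print(f"normalisation of u1 : {norm:.6f}   (expected 1)")
print(f"c1                  : {integral**2 / volume:.6f}   (expected 6/pi**2 = {6.0/np.pi**2:.6f})")
```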

C.4. Spherical target surrounded by a reflecting sphere

We also consider a more elaborate situation of a spherical target of radius R, which is surrounded by a larger reflecting concentric sphere of radius L. The related FPT distribution for a single particle was studied in [57]. In particular, the exact spectral expansion for the survival probability was provided:

Equation (C19)

where

μ = D/(κ(L − R)), and αn (with n = 1, 2, ...) are the positive solutions of the equation

Equation (C20)

One easily gets $H\left(t\vert {\mathbf{x}}_{0}\right)$ by taking the time derivative.

The volume-averaged survival probability is obtained by integration over x0,

Equation (C21)

with

Equation (C22)

Appendix D.: The closest starting point

In this appendix, we generalise and rationalise the scaling relation $\overline{\langle {\mathcal{T}}_{N}\rangle }\propto 1/{N}^{2}$ in terms of the closest particle to the target. Let us start with an example of the interval (0, L) with the absorbing endpoint 0 and reflecting endpoint L. For N independent particles randomly placed on the interval at positions x1, ..., xN, the distance to the target at 0 is δ = min{x1, ..., xN}. This is a random variable characterised by the simple law: $\mathbb{P}\left\{\delta { >}x\right\}={\left(\mathbb{P}\left\{{x}_{1}{ >}x\right\}\right)}^{N}={\left(1-x/L\right)}^{N}$, where we assumed the uniform distribution for xi. As a consequence, the mean shortest distance is

Equation (D1)

Similarly, the second-, third-, etc, closest particles are located on average at the distances 2L/(N + 1), 3L/(N + 1), etc. This is expected: if one had to place N particles at equal distances between any two neighbours (and from the endpoints), one would get their positions at kL/(N + 1), with k = 1, 2, ..., N. If N is large, many particles start at a distance of the order L/N from the target, and there is a high chance that one of these particles reaches the target within a time of the order of ${\overline{\delta }}^{2}/D$, in agreement with equation (22), up to a numerical prefactor. This order-statistics argument is illustrated by the short simulation sketch below.
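A minimal Monte Carlo check (our addition) samples N uniform starting points on (0, L) and compares the mean of the smallest distance with L/(N + 1) from equation (D1).

```python
# Minimal Monte Carlo sketch (our addition): mean distance of the closest of N uniformly
# placed particles to the absorbing endpoint 0 of the interval (0, L), compared with the
# order-statistics result delta_bar = L/(N + 1) of equation (D1).
import numpy as np

rng = np.random.default_rng(1)
L, trials = 1.0, 10000

for N in [1, 10, 100, 1000]:
    x = rng.uniform(0.0, L, size=(trials, N))
    delta_bar = x.min(axis=1).mean()
    print(f"N = {N:5d}   MC mean delta = {delta_bar:.5f}   L/(N+1) = {L/(N + 1):.5f}")
```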

The above argument can be easily extended to arbitrary bounded domains with a target having a smooth boundary Γ. Here, we define again the shortest distance to the target as δ = min{|x1 − Γ|, ..., |xN − Γ|}. This random variable is again characterised by the law $\mathbb{P}\left\{\delta { >}x\right\}={\left(\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert { >}x\right\}\right)}^{N}$ due to the independence of the starting points. Even though the probability law for the distance from a single particle is now complicated, its asymptotic behaviour remains simple. Indeed, for small x, $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}$ is the probability of locating x1 within a thin boundary layer of width x near the target Γ. For smooth Γ and small x, the volume of this layer can be estimated as x|Γ|, where |Γ| is the surface area of Γ. As a consequence, $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}\simeq x\vert {\Gamma}\vert /\vert {\Omega}\vert $ as x → 0. The mean shortest distance is then

$\overline{\delta }={\int }_{0}^{L}\mathrm{d}x\enspace {\left(1-\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}\right)}^{N}$, where L is the maximal distance from the target: $L={\mathrm{max}}_{\mathbf{x}\in {\Omega}}\enspace \vert \mathbf{x}-{\Gamma}\vert$. For large N, the main contribution comes from x close to 0, for which $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}$ is small. Substituting the above approximation in this region, one gets

Equation (D2)

where the integral was truncated at L' = |Ω|/|Γ| to ensure that the integrand remains positive. A similar computation shows that the mean distances to the target from the next-closest points are of the same order. If $\overline{\delta }$ is small compared to the radius of curvature of the target, diffusion in the lateral direction does not matter, and the problem is again reduced to diffusion of a single particle on the interval $\left(0,\overline{\delta }\right)$, for which the mean FPT is ${\overline{\delta }}^{2}/\left(2D\right)$. This rationalises the scaling relation $\overline{\langle {\mathcal{T}}_{N}\rangle }\propto 1/{N}^{2}$ that we derived in equation (22); a minimal Monte Carlo check of the above estimate is sketched below.
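Similarly, a minimal Monte Carlo sketch (our addition) checks the thin-layer estimate behind equation (D2) for a spherical target of radius R inside a reflecting sphere of radius L: points are sampled uniformly in the shell R < |x| < L, and the mean closest distance to the target is compared with |Ω|/(|Γ|(N + 1)); the agreement improves with growing N, as the boundary layer becomes thin compared to R.

```python
# Minimal Monte Carlo sketch (our addition): closest distance to a spherical target of
# radius R from N points uniform in the shell R < |x| < L, compared with the thin-layer
# estimate delta_bar ≈ |Omega| / (|Gamma| (N + 1)) discussed around equation (D2).
import numpy as np

rng = np.random.default_rng(2)
R, L, trials = 1.0, 2.0, 5000
volume = 4.0 * np.pi * (L**3 - R**3) / 3.0      # |Omega|
area = 4.0 * np.pi * R**2                       # |Gamma|

def sample_radii(n_points):
    """Radii of points uniformly distributed in the shell R < r < L (inverse CDF)."""
    u = rng.uniform(size=n_points)
    return (R**3 + u * (L**3 - R**3)) ** (1.0 / 3.0)

for N in [10, 100, 1000]:
    delta_bar = np.mean([sample_radii(N).min() - R for _ in range(trials)])
    estimate = volume / (area * (N + 1))
    print(f"N = {N:5d}   MC mean delta = {delta_bar:.5f}   |Omega|/(|Gamma|(N+1)) = {estimate:.5f}")
```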

An important consequence of this reasoning is that the scaling 1/N2 holds far beyond the considered case of the uniform distribution of starting points. First, one can realise that only the behaviour of the distribution near the boundary of the target matters in the limit of large N; indeed, the integrand function ${\left(1-\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}\right)}^{N}$ becomes exponentially small when $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}$ is not small. Second, even in the vicinity of the target, the distribution does not need to be uniform. For instance, if $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}\simeq {\left(x/A\right)}^{\alpha }$ with some scale A > 0 and any exponent α > 0, the integral in equation (D2) yields $\overline{\delta }\simeq A/\left(\alpha N+1\right)$, i.e., the scaling 1/N is just affected by a prefactor. This result confirms the generality of the scaling relation (22). This conclusion can also be obtained on a more rigorous basis by considering the heat kernel short-time asymptotic behaviour. In fact, the asymptotic relation (14) for the volume-averaged survival probability holds in a much more general setting, in particular, after the average with a smooth nonuniform distribution of the starting point [114, 116, 117]. We do not explore this direction in the paper.

Only if the distribution of the starting points excludes points near the target does the volume-averaged mean fFPT scale differently. A standard example is the Dirac distribution that fixes the starting point, which was studied previously and yielded the $\overline{\langle {\mathcal{T}}_{N}\rangle }\propto 1/\mathrm{ln}\enspace N$ behaviour [79] (see also section 1 for more references). A similar behaviour is expected for any distribution that excludes points from some δ0-vicinity of the target, i.e., if there exists δ0 > 0 such that $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }{\delta }_{0}\right\}=0$. Moreover, this strict exclusion condition can be relaxed. For instance, if $\mathbb{P}\left\{\vert {\mathbf{x}}_{1}-{\Gamma}\vert {< }x\right\}\simeq {\mathrm{e}}^{-A/x}$ as x → 0 (with some scale A > 0), the integral in equation (D2) yields again the 1/ln N behaviour for large N. A more systematic analysis of this problem, including the determination of a sharp exclusion condition, presents an interesting problem for future research.
