Optimal Self-Organization

We present computational and analytical results indicating that systems of driven entities with repulsive interactions tend to reach an optimal state associated with minimal interaction and minimal dissipation. Using concepts from non-equilibrium thermodynamics and game-theoretical ideas, we generalize this finding to an even wider class of self-organizing systems which have the ability to reach a state of maximal overall ``success''. This principle is expected to be relevant for driven systems in physics like sheared granular media, but it is also applicable to biological, social, and economic systems, for which only a limited number of quantitative principles are available so far.


Introduction
Extremal principles are fundamental to our interpretation of phenomena in nature. One of the best known examples is the second law of thermodynamics [1][2][3], which governs most physical and chemical systems and states the continuous increase of entropy ("disorder") in closed systems. Most systems in our natural environment, however, are open, which is true for driven physical systems, but even more so for biological, economic, and social systems. As a consequence, these systems are usually characterized by self-organized structures, which calls for principles that apply on time scales shorter than or comparable to the life spans of these systems. For example, it is known that in growth and aggregation processes it is usually the most unstable mode that determines the finally evolving structure. Hence, there exists an extremum principle of fastest propagation, which is applicable to such apparently different phenomena as crystal growth on the one hand [37] and pattern formation in bacterial colonies on the other hand [38].
Recent simulations point to the possible existence of additional optimality principles in certain kinds of driven multi-particle or multi-agent systems. As examples we mention 1. lane formation in pedestrian crowds [33] (see Fig. 1), which appears to be similar to the size segregation in sheared granular media [34], 2. the self-organization of coherent motion in a mixture of cars and trucks [35] (see Fig. 2), and 3. the evolution of trail systems [36] (see Fig. 3).
In these systems, the respective interacting entities (pedestrians, driver-vehicle units, or particles) have to coordinate with each other in order to reach a system state which is "favourable" to them. It was conjectured [36,35] that the resulting system states are optimal in some sense, but there are many open questions to be addressed: 1. Is there really a quantity which is optimized by the self-organizing system in the course of time?
2. If yes, which quantity is it? Is there a systematic way to derive it?
3. What are the conditions for the existence of such a quantity?
4. Is there any systematic connection between self-organization and optimization?
5. If optimal self-organized systems exist at all, are they exceptional or quite common?

An Example: Lane Formation
To illustrate the non-trivial aspects of optimal self-organization, let us consider the dynamics of pedestrian crowds. We begin with a system of oppositely moving pedestrians in a corridor, for which lane formation has been observed in empirical studies [47]. Readers who prefer an example from physics may instead imagine a long vertical column of a viscous fluid with light (rising) and heavy (sinking) particles of equal size (where we assume that the absolute density difference of the particles with respect to the fluid is the same). In fact, we encourage the reader to carry out this new experiment.
By x_α(t) we denote the position of pedestrian or particle α at time t, by v_α(t) = dx_α(t)/dt its velocity, by v^0 its equilibrium speed in the absence of (interparticle) interactions, and by e_α its "desired" or "preferred" direction of motion. Then, the equation mimicking pedestrian or describing particle motion reads (in the overdamped limit)

dx_α(t)/dt = v_α(t) = v^0 e_α + Σ_{β(≠α)} f_αβ(t).

Here, f_αβ represents repulsive interactions between pedestrians or particles α and β, which are assumed to decrease monotonically with their distance d_αβ(t) = ‖x_α(t) − x_β(t)‖. We will not specify the exact form of f_αβ, since it turns out to be quite irrelevant for the kind of phenomena we want to describe here. The forces can even be chosen in a velocity-dependent way [47,48].
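The overdamped equation of motion is easy to integrate numerically. The following sketch is our own minimal illustration, not the paper's simulation: the exponential force law, all parameter values, and the two-pedestrian head-on setup are illustrative assumptions. It shows two oppositely moving pedestrians with a small lateral offset sidestepping each other:

```python
import numpy as np

def step(x, e, v0=1.0, A=2.0, B=0.5, dt=0.01):
    """One Euler step of dx_a/dt = v0*e_a + sum_b f_ab with a
    monotonically decreasing repulsion f_ab = A*exp(-d_ab/B)*(x_a-x_b)/d_ab
    (force law and parameters are illustrative assumptions)."""
    N = len(x)
    v = v0 * e.copy()
    for a in range(N):
        for b in range(N):
            if a == b:
                continue
            diff = x[a] - x[b]
            d = np.linalg.norm(diff)
            v[a] += A * np.exp(-d / B) * diff / d  # repulsion exerted by b on a
    return x + dt * v

# two pedestrians walking head-on along the x-axis, slightly offset in y
x = np.array([[0.0, 0.05], [2.0, -0.05]])
e = np.array([[1.0, 0.0], [-1.0, 0.0]])
for _ in range(400):
    x = step(x, e)
```

After the integration the lateral separation has grown (the y-component of the repulsion always points away from the other pedestrian), so the two evade each other instead of blocking.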
In cases of two opposite desired directions of motion, simulations of the above model reproduce the formation of lanes of uniform walking directions observed for pedestrians (see Fig. 1), while there are no stable self-organized states in cases of four different desired directions of motion (e.g., at intersections).¹ It is clear that lane formation will maximize the average velocity in the respective desired walking direction and, therefore, the quantity

E = ⟨v_α(t) · e_α⟩_{α,t} / v^0,

which is a measure of the "efficiency" or "success" of motion. (Here, ⟨·⟩_{α,t} denotes the average over the pedestrians and over time.) Moreover, since v_α · e_α = v^0 + Σ_{β(≠α)} f_αβ · e_α, optimization of efficiency immediately implies that the system minimizes the quantity

−⟨ Σ_{β(≠α)} f_αβ(t) · e_α ⟩_{α,t},

i.e., the average interaction intensity opposite to the respective desired direction of motion.
Hence, even without the use of difficult mathematics we could show that the system minimizes the interaction intensity of the pedestrians (or particles) if it shows segregation into lanes of uniform directions of motion. Note, however, that lane formation is not a trivial effect, but eventually arises only due to the smaller relative velocity and interaction rate that pedestrians with the same walking direction have (see Section 3). In more detail, the mechanism of lane formation can be understood as follows: Pedestrians moving in a mixed crowd or moving against the stream will have frequent and strong interactions. In each interaction, the encountering pedestrians move a little aside in order to pass each other. This sidewards movement tends to separate oppositely moving pedestrians. Moreover, once the pedestrians move in uniform lanes, they will have very rare and weak interactions. Hence, the tendency to break up existing lanes is negligible. Furthermore, the most stable configuration corresponds to a state with a minimal interaction rate. Therefore, lane formation and a minimal interaction rate are two sides of the same coin. Nevertheless, lane formation does not occur in all driven repulsive systems. There are certain conditions for it, which we will work out later on (see Section 6).

The Macroscopic Equation
To give an analytical description of the macroscopic dynamics of the considered system, we will set up continuum equations for the pedestrian densities. By indices a and b, we will distinguish different (sub-)populations defined by the different desired walking directions. For the mathematical description of lane formation, it is sufficient to focus on the one-dimensional dynamics perpendicular to the desired walking directions. (Imagine a projection of the pedestrian dynamics onto a cross section of the walkway.) The distribution of the N_a pedestrians of population a over the locations x of this one-dimensional space will be represented by the densities ρ_a(x,t) ≥ 0.
Assuming conservation of the number of pedestrians in each population a, where I denotes the spatial extension of the system, we obtain the so-called continuity equations [1]

∂ρ_a(x,t)/∂t + ∂[ρ_a(x,t) V_a(x,t)]/∂x = 0. (5)

Here, V_a(x,t) is the average velocity of pedestrians of population a perpendicular to their desired walking direction. In the following, we will give a rough estimate of this velocity: It will be proportional to the frequency ν_a of interactions that a pedestrian of population a encounters with other pedestrians. Also, it will be proportional to the average amount Δx that a pedestrian moves aside when evading another pedestrian. Finally, it will be proportional to the difference of the probabilities p_+ and p_− to move in the positive or negative x-direction, respectively. In summary, we have the relation

V_a(x,t) = c(x,t) ν_a(x,t) Δx [p_+(x,t) − p_−(x,t)]. (6)

With a prefactor c ≤ 1 like c(x,t) = [1 − Σ_a ρ_a(x,t)/ρ_max], one can take into account that the motion is slowed down in crowded areas. It also limits the local density to the maximum density ρ_max.
The interaction rate of pedestrians belonging to (sub-)population a with others is

ν_a(x,t) = Σ_b C_ab ρ_b(x,t), (7)

where C_ba (= C_ab) > C_aa for b ≠ a because of the higher average relative velocity between oppositely moving pedestrians. (Although this inequality is all we need to know for the following discussion, we mention that an explicit expression for C_ab is derived in [49]. Herein, D is the so-called "total cross section", which corresponds to the effective diameter of a pedestrian. The factor χ reflects the increase of the interaction rate with growing density due to the finite space requirements of the pedestrians. An approximate formula is χ = 1/[1 − Σ_a ρ_a/ρ_max]^κ, where κ ≥ 1 is a suitable constant and ρ_max is the maximum pedestrian density. Furthermore, θ_a is the velocity variance of pedestrians belonging to (sub-)population a. Finally, v_a > √θ_a and v_b > √θ_b represent the average velocities of subpopulations a and b in their desired walking directions.) We will assume that the probability of moving by Δx in the positive (or negative) x-direction, when evading a pedestrian, is inversely proportional to the interaction rate at position x_+ = x + Δx (or x_− = x − Δx, respectively):

p_±(x,t) = [1/ν_a(x_±,t)] / [1/ν_a(x_+,t) + 1/ν_a(x_−,t)]. (8)

A first-order Taylor expansion of the numerator and the denominator gives the following approximate relation for the difference of these probabilities:

p_+(x,t) − p_−(x,t) ≈ −[Δx/ν_a(x,t)] ∂ν_a(x,t)/∂x (9)

(which, strictly speaking, is restricted to cases of small gradients ∂ν_a/∂x, as in the linear regime around the homogeneous solution). Hence, with (6) and (7), we finally obtain the following formula for the average velocity of motion perpendicular to the desired walking direction:

V_a(x,t) = −c(x,t) (Δx)² Σ_b C_ab ∂ρ_b(x,t)/∂x. (10)

Defining S_ab = −(Δx)² C_ab and generalizing to an arbitrary number A of (sub-)populations gives

V_a(x,t) = c(x,t) Σ_{b=1}^{A} S_ab ∂ρ_b(x,t)/∂x. (11)

We may rewrite this in terms of a gradient,

V_a(x,t) = c(x,t) ∂S_a(x,t)/∂x, (12)

of the linear density-dependent function

S_a(x,t) = S_a^0 + Σ_b S_ab ρ_b(x,t), (13)

where the constants S_a^0 do not matter at all. Formula (13) can be interpreted as a linear approximation of a more general function S_a(x,t) of the densities.
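The first-order relation (9) for p_+ − p_− is easy to verify numerically. The following sanity check is our own; the weakly varying interaction rate ν(x) = 2 + 0.01x and the step Δx = 0.1 are arbitrary illustrative choices:

```python
def p_diff_exact(nu, x, dx):
    """Exact p_+ - p_- when the evasion probabilities are inversely
    proportional to the interaction rate at x + dx and x - dx."""
    w_plus, w_minus = 1.0 / nu(x + dx), 1.0 / nu(x - dx)
    return (w_plus - w_minus) / (w_plus + w_minus)

nu = lambda x: 2.0 + 0.01 * x      # small gradient: dnu/dx = 0.01
dx = 0.1
exact = p_diff_exact(nu, 0.0, dx)
approx = -(dx / nu(0.0)) * 0.01    # first-order Taylor relation
```

For this small gradient the two expressions agree to many digits, confirming that the approximation is good in the linear regime around the homogeneous solution.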
Notice that, for higher-dimensional spaces, relation (12) becomes a potential condition. Later on, we will see that this is one of the conditions which must be fulfilled for optimal self-organization. Since it is not satisfied for pedestrian crowds with four different desired walking directions (at intersections), we can now understand why, in this case, there exist no stable, optimal self-organized patterns of motion. Nevertheless, collective patterns of motion like "rotary traffic" can form temporarily [47].

Self-Optimization
In the following, we will prove that, under certain conditions, the function

S(t) = Σ_a S_a^0 N_a + (1/2) Σ_{a,b} ∫ dx S_ab ρ_a(x,t) ρ_b(x,t) (14)

is a Lyapunov function which monotonically increases in the course of time. Notice that S(t) can be viewed as being analogous to a thermodynamic non-equilibrium potential [53], allowing the determination of the characteristic quantities S_ab by functional derivatives. For example, if S_ba = S_ab, we have

S_a(x,t) = δS(t)/δρ_a(x,t). (15)

By deriving (14) with respect to t, using (13), and properly interchanging indices a and b, one can eventually obtain

dS(t)/dt = (1/2) Σ_{a,b} ∫ dx (S_ab + S_ba) ρ_b(x,t) ∂ρ_a(x,t)/∂t. (16)

If S_ab is antisymmetric (i.e. S_ba = −S_ab), S is obviously an invariant of motion. However, in the following we will focus on the case S_ba = S_ab of symmetric interactions, which applies to our pedestrian, granular, and trail formation examples. Inserting (5) and (12) into (16), and applying (13), we get

dS(t)/dt = −Σ_a ∫ dx S_a(x,t) ∂/∂x [ρ_a(x,t) c(x,t) ∂S_a(x,t)/∂x]. (17)

Making use of partial integration (for spatially periodic systems), we finally arrive at

dS(t)/dt = Σ_a ∫ dx ρ_a(x,t) c(x,t) [∂S_a(x,t)/∂x]² ≥ 0. (18)

This result establishes self-optimization for symmetric interactions and can easily be transferred to discrete spaces (see Section 6 and Fig. 4) and to higher-dimensional spaces. In the case of slightly asymmetric interactions, small non-linear contributions to (13), or small diffusion, relation (18) will still be a good approximation, i.e. the system will behave close to optimally. This is exemplified by heterogeneous freeway traffic [35]. Later on, we will see that more or less symmetric interactions are very natural for the kind of self-organizing systems we are considering here (see Section 7).
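The monotonic increase of S(t) can be checked numerically with a discrete-space version of the dynamics. The discretization below (central fluxes on a periodic lattice, explicit Euler steps, c = 1) is our own sketch with illustrative parameters, not the paper's scheme; the interaction matrix is symmetric and repulsive, so the flow relaxes toward the homogeneous state while S(t) grows:

```python
import numpy as np

# illustrative symmetric, repulsive interaction coefficients S_ab
S = np.array([[-1.0, -0.5],
              [-0.5, -1.0]])
I, h, dt = 20, 1.0, 0.001

i = np.arange(I)
rho = np.stack([1.0 + 0.3 * np.sin(2 * np.pi * i / I),   # population 1
                1.0 + 0.3 * np.cos(2 * np.pi * i / I)])  # population 2

def total_success(rho):
    """Discrete analogue of S(t) = (1/2) sum_ab int dx S_ab rho_a rho_b."""
    return 0.5 * h * np.sum(rho * (S @ rho))

history = [total_success(rho)]
for _ in range(2000):
    Sa = S @ rho                                  # success fields S_a(x)
    # flux at interface i+1/2: mean density times gradient of success
    rho_face = 0.5 * (rho + np.roll(rho, -1, axis=1))
    J = rho_face * (np.roll(Sa, -1, axis=1) - Sa) / h
    rho = rho - dt * (J - np.roll(J, 1, axis=1)) / h   # continuity equation
    history.append(total_success(rho))
```

Summation by parts on the lattice gives exactly the discrete analogue of (18), so S(t) is non-decreasing up to the (tiny) Euler time-stepping error.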
If we interpret the function dS(t)/dt as a measure of the dissipation per unit time in the system, equation (18) immediately implies that the system approaches a stationary state of minimal dissipation, since S(t) is bounded for any finite system (which means dS(t)/dt → 0 in the limit of large times t). This may be viewed as a generalization of the related Onsager principle of minimal dissipation of entropy [51,52], dating back to Lord Rayleigh [50]. In contrast to the non-linear, far-from-equilibrium systems considered above, the Onsager principle applies only to linearly treatable, close-to-equilibrium systems, which usually tend to approach a homogeneous state.
According to (18), the stationary solution ρ_a^st(x) is characterized by

ρ_a^st(x) ∂S_a^st(x)/∂x = 0 (19)

for all a, which is fulfilled by homogeneous or by step-wise constant solutions. Hence, the stationary solution can be non-homogeneous, but nevertheless satisfies minimal dissipation, which is quite interesting. We would not have recognized this, had we derived the relations (19) for the stationary solution directly from the continuity equation (5) with (12) and (13).

The Relation with Game Theory
Physicists have recently shown considerable interest in applications of methods from statistical physics and nonlinear dynamics to biological [10][11][12][13][14][15][16][17][18][19][20][21], economic [22][23][24][25][26][27][28][29], and social [29-32, 6-9, 42] systems. Hence, it is worth stressing that the above model can be applied to such kinds of systems if we give a more general interpretation to it. First of all, instead of particles or pedestrians, we may have other kinds of entities, which we again need to subdivide into uniform (sub-)populations according to their behaviors. Second, the space may be abstract rather than real, for example, a behavioral space or an opinion spectrum [8,31]. The same applies to motion, which may correspond to a change of behavior or opinion. (In the case of trail formation, a point x in space even corresponds to a path connecting two places, namely a pedestrian source with one of the destinations.) Having again a look at relation (12), it makes sense to interpret the function S_a(x,t) as the "(expected) success" per unit time for an entity of population a at location x, since it is plausible that the entities move in the direction of the greatest increase of success (which is the direction of the gradient). Furthermore, one can give a more concrete meaning to the coefficients S_ab: If an entity of kind a interacts with entities of kind b at a rate ν_ab and the associated outcome of the interaction can be quantified by some "payoff" P_ab, we have the relation S_ab = ν_ab P_ab. Positive payoffs P_ab belong to attractive or, more generally, profitable interactions between populations a and b, while negative ones correspond to repulsive or competitive interactions. We think it is quite surprising that, based on competitive interactions, there can be self-organized and even optimal system states at all (see Section 7).
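To make the game-theoretical reading concrete, here is a small sketch (all numbers are our own illustrative choices, not taken from the paper): interaction rates ν_ab and payoffs P_ab combine into the success coefficients S_ab = ν_ab P_ab, from which the expected success of an entity follows via the local densities as in (13):

```python
# illustrative interaction rates: frequent encounters between the two
# populations, rarer encounters within each population
nu = {("a", "a"): 0.5, ("a", "b"): 2.0,
      ("b", "a"): 2.0, ("b", "b"): 0.5}
# competitive interactions: every encounter carries a negative payoff
P = {pair: -1.0 for pair in nu}

# success coefficients S_ab = nu_ab * P_ab
S_coef = {pair: nu[pair] * P[pair] for pair in nu}

def expected_success(pop, dens):
    """Expected success per unit time of an entity of population `pop`,
    S_pop = sum_b S_pop,b * rho_b (constants S^0 omitted)."""
    return sum(S_coef[(pop, b)] * dens[b] for b in dens)
```

Repulsive (negative-payoff) coefficients like these are exactly the setting of the pedestrian example; attractive interactions would simply correspond to positive payoffs.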
Notice that, despite the mentioned relations with game theory, our model differs from the conventional game dynamical equations [6][7][8] in several respects: 1. We have a topology (as in the game of life [44,45]), but define abstract games for interactive motion in space with the possibility of local agglomeration at a fixed number of entities in each population.
2. The payoff does not depend on the variable that the individual entities can change (i.e. the spatial coordinate x).
3. Individuals can only improve their success by redistributing themselves in space.
4. The increase of success is not proportional to the difference with respect to the global average of success, but to the local change of success in a population.
Moreover, below we will establish a new connection between self-optimization and self-organization (see Section 7), which we consider to be important.

Self-Organization
Our generalized model allows us to describe all kinds of different combinations of attractive or profitable and repulsive or competitive interactions within and among the different populations. It is, therefore, desirable to know the exact conditions under which the corresponding system forms a self-organized, i.e. non-homogeneous, state. In order to derive these, we will carry out a linear stability analysis around the homogeneous stationary solution ρ_a^hom = N_a/I, where I again denotes the spatial extension of the system. For simplicity, we will restrict ourselves to the case of two (sub-)populations a, b ∈ {1, 2}.
We start with the continuity equation

∂ρ_a(x,t)/∂t = −∂/∂x [ρ_a(x,t) c(x,t) ∂S_a(x,t)/∂x] + D_a ∂²ρ_a(x,t)/∂x², (20)

where we have introduced an additional diffusion term with diffusion coefficient D_a on the right-hand side, since we will discuss the effect of fluctuations later on. Next, we write down the corresponding linearized partial differential equations for the deviations δρ_a(x,t) = ρ_a(x,t) − ρ_a^hom:

∂δρ_a(x,t)/∂t = −ρ_a^hom c Σ_b S_ab ∂²δρ_b(x,t)/∂x² + D_a ∂²δρ_a(x,t)/∂x². (21)

Inserting into these equations the ansatz

δρ_a(x,t) = δρ_a^0 exp(ikx + λt), (22)

where k has the meaning of a wave number, leads to the following linear eigenvalue problem with eigenvalue λ:

λ δρ_a^0 = k² Σ_b (ρ_a^hom c S_ab − D_a δ_ab) δρ_b^0 (23)

(with δ_ab denoting the Kronecker delta). The linear system of equations can be solved for the two eigenvalues

λ_{1,2}(k) = (k²/2) { L_1 + L_2 ± [(L_1 − L_2)² + 4 ρ_1^hom ρ_2^hom c² S_12 S_21]^{1/2} }, where L_a = ρ_a^hom c S_aa − D_a. (24)

In order for the homogeneous solution to be stable, the real parts of both eigenvalues need to be negative. This requires

L_1 + L_2 < 0 and L_1 L_2 > ρ_1^hom ρ_2^hom c² S_12 S_21. (25)

Notice that, in the case of an unstable eigenvalue, it is the mode with the largest wave number (i.e. with the shortest wave length) that grows fastest. This somewhat unrealistic behavior is a consequence of the simplifications made, namely the linear approximation underlying relation (9). In reality, the spatial extension of the entities will introduce a natural cutoff for the wave lengths. One may take this into account in equation (20) by additional spatial derivatives of higher (e.g. fourth) order. However, one can easily circumvent these problems by setting up a discrete version of the model (where the spatial discretization should be chosen in accordance with Δx).
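The stability analysis of the homogeneous state reduces to the eigenvalues of a 2×2 growth matrix. The sketch below encodes this matrix as M_ab = k²(c ρ_a S_ab − D_a δ_ab), which is our reconstruction of the linearization described above; the parameter values are illustrative choices:

```python
import numpy as np

def growth_matrix(S, rho, D, c=1.0, k=1.0):
    """Growth matrix M_ab = k^2 * (c * rho_a * S_ab - D_a * delta_ab)
    for perturbations around the homogeneous state (our reconstruction)."""
    S = np.asarray(S, dtype=float)
    rho = np.asarray(rho, dtype=float)
    return k ** 2 * (c * rho[:, None] * S - np.diag(D))

def homogeneous_stable(S, rho, D, c=1.0):
    """True if all perturbation modes decay (max real eigenvalue < 0)."""
    eig = np.linalg.eigvals(growth_matrix(S, rho, D, c))
    return bool(eig.real.max() < 0.0)

# weak symmetric cross-repulsion, S12*S21 <= S11*S22: homogeneous state stable
stable_case = homogeneous_stable(
    [[-1.0, -0.5], [-0.5, -1.0]], rho=[1.0, 1.0], D=[0.0, 0.0])
# strong symmetric cross-repulsion, S12*S21 > S11*S22: segregation instability
segregating_case = not homogeneous_stable(
    [[-1.0, -2.0], [-2.0, -1.0]], rho=[1.0, 1.0], D=[0.0, 0.0])
```

The two examples reproduce the trace/determinant criterion: stability requires both a negative trace and a determinant exceeding the cross-interaction term.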
The results of Figures 4 and 5, for example, were obtained with the following discrete analogue of the model defined by Eqs. (5), (12), and (13). We assumed a periodic lattice with I lattice sites x ∈ {1, ..., I} and two populations a ∈ {1, 2} with a total of N = N_1 + N_2 ≫ I entities. (In the figures we used I = 40 and N_1 = N_2 = 200.) At time t = 0, our simulations started with a random initial distribution of the entities. Then, we repeatedly applied the following update steps to determine the distribution of entities at time t + 1: 1. Calculate the successes

S_a(x,t) = S_a^0 + Σ_b S_ab n_x^b(t)/I, (26)

where n_x^b(t) = ρ_b(x,t)I represents the number of entities of population b at site x.
2. For each entity α, determine a random number η_α that is uniformly distributed in the interval [0, S_max], with a large constant S_max (S_max = 20 in the figures).
3. Move entity α belonging to population a from site x to site x + 1, if

η_α < c(x,t) [S_a(x+1,t) − S_a(x,t)], (27)

but to site x − 1, if

η_α < c(x,t) [S_a(x−1,t) − S_a(x,t)]. (28)

(Figures 4 and 5 are for c(x,t) = 1.) In cases where we assumed errors in the estimation of the expected success, we replaced S_a(x,t) by S_a(x,t) + ξ_α(t), where ξ_α(t) is determined according to some probability distribution.
We applied a random sequential update rule, which is most reasonable [46]. However, a parallel update, which defines a simple cellular automaton [54,55], yields qualitatively the same results (even nicer-looking ones, because it does not show the fluctuations caused by a random update).
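A minimal implementation of the update steps above might look as follows. This is our own simplified sketch: the coefficient values, the fixed random seed, the number of sweeps, and the right-before-left trial order are illustrative assumptions, and c(x,t) = 1 throughout. Because a move is accepted only when the gain in success exceeds the positive threshold η_α, and the within-population coefficients are positive here, every accepted move strictly raises the overall success:

```python
import random

I, SMAX = 40, 20.0                       # lattice sites, threshold scale
# illustrative coefficients: attractive within, repulsive between populations
S_COEF = {("a", "a"): 2.0, ("a", "b"): -2.0,
          ("b", "a"): -2.0, ("b", "b"): 2.0}

def success(pop, site, counts):
    """Expected success S_pop(site) = sum_b S_pop,b * n_b(site) / I."""
    return sum(S_COEF[(pop, b)] * counts[b][site] for b in ("a", "b")) / I

def sweep(entities, counts, rng):
    """One random sequential update sweep over all entities."""
    order = list(range(len(entities)))
    rng.shuffle(order)
    for idx in order:
        pop, x = entities[idx]
        eta = rng.uniform(0.0, SMAX)
        for move in (+1, -1):             # try right, then left (a choice)
            y = (x + move) % I            # periodic lattice
            if success(pop, y, counts) - success(pop, x, counts) > eta:
                counts[pop][x] -= 1
                counts[pop][y] += 1
                entities[idx] = (pop, y)
                break

def total_success(counts):
    return sum(counts[p][x] * success(p, x, counts)
               for p in ("a", "b") for x in range(I))

rng = random.Random(42)
entities = [(p, rng.randrange(I)) for p in ("a", "b") for _ in range(200)]
counts = {p: [0] * I for p in ("a", "b")}
for p, x in entities:
    counts[p][x] += 1

S0 = total_success(counts)
for _ in range(300):
    sweep(entities, counts, rng)
S1 = total_success(counts)
```

With these coefficients the populations segregate into separate clusters while the overall success never decreases, in line with the self-optimization result (18).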

Results and Discussion
Let us first discuss the case D_a ≈ 0 of negligible diffusion. For this case, the relations (25) imply that the homogeneous solution of the model (cf. Figure 4a) is unstable if

ρ_1^hom S_11 + ρ_2^hom S_22 > 0 (29)

or

S_12 S_21 > S_11 S_22. (30)

In other words, if one of the conditions (29) or (30) is fulfilled, the stable stationary solution is a self-organized, non-homogeneous state which, according to (19), corresponds to complete segregation (or aggregation). If the populations interact in a symmetric way, the underlying self-organization process is related to self-optimization (see (18)). Condition (29) is satisfied by systems in which the interactions within the (sub-)populations are attractive or profitable. Such systems always show some form of agglomeration (see Figs. 4c and d).
In the following, we focus on the more common and much more interesting cases where (29) is not valid. This will allow us to answer the question why self-organizing systems of the considered type tend to reach an optimal state. The answer is: "because they tend to be symmetric!" This can be seen as follows: Introducing the symmetric and antisymmetric parts

S_+ = (S_12 + S_21)/2 and S_− = (S_12 − S_21)/2, (31)

we have

S_12 S_21 = (S_+)² − (S_−)², (32)

and condition (30) becomes

(S_+)² > S_11 S_22 + (S_−)². (33)

Since (S_−)² grows with the degree of asymmetry, asymmetric interactions work against self-organization. Hence, there is a tight connection between self-organization and self-optimization in the considered kinds of systems: If there is self-organization, it is likely to come with optimality, at least approximately. For this reason, one may speak of a "self-organized system with optimality" or, more compactly, of "self-organized optimality" (although there is no immediate relation with "self-organized criticality" [56]). However, the more precise term is probably "optimal self-organization".² A good example is uni-directional multi-lane traffic of cars and trucks [35], which develops a self-organized, coherent state only in a small density range, which is also characterized by minimal interactions (minimal lane-changing and interaction rates). From the above, we conclude that, only in this density range, the interactions of cars and trucks become sufficiently symmetric to give rise to self-organization and self-optimization. This conclusion is quite reasonable, since, according to the assumed traffic model, the interactions of cars and trucks are more symmetric when their average velocities are similar.
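The equivalence of the two forms of the instability condition, S_12 S_21 > S_11 S_22 and its decomposition into symmetric and antisymmetric parts, can be verified directly. The example values below are our own illustrative choices:

```python
def condition_30(S11, S22, S12, S21):
    """Instability condition in its original form: S12*S21 > S11*S22."""
    return S12 * S21 > S11 * S22

def condition_30_decomposed(S11, S22, S12, S21):
    """Same condition via symmetric/antisymmetric parts of S12, S21:
    (S+)^2 > S11*S22 + (S-)^2, showing asymmetry works against it."""
    s_plus = 0.5 * (S12 + S21)
    s_minus = 0.5 * (S12 - S21)
    return s_plus ** 2 > S11 * S22 + s_minus ** 2

# fully symmetric cross-repulsion stronger than self-repulsion: unstable
example = condition_30(-1.0, -1.0, -2.0, -2.0)
```

Because S_12 S_21 = (S_+)² − (S_−)² identically, the two predicates always agree; keeping S_+ fixed and increasing the asymmetry S_− can only switch the condition from fulfilled to violated, never the other way round.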
In our pedestrian example, we find optimal self-organization, since the symmetry condition is satisfied exactly, and condition (30) is also fulfilled (see Eq. (7) and below). Note, however, that there are many other examples of segregation in the natural and social sciences [34,57-59,29]. We point out that the spatial regions occupied by one population need not be connected (cf. Figures 4b to 4d), and that the corresponding configuration may correspond to a relative optimum, as in Figures 4c and 4d. If S_aa < 0 for all a, the distributions ρ_a^st(x) tend to be flat, as in the case of lane formation by repulsive pedestrian interactions (Figure 4b). Instead, we have agglomeration (local clustering) if S_aa > 0 for all a (Figures 4c and 4d). Figure 4c describes the segregation of populations with repulsive interactions (e.g. "ghetto formation"). The case of Figure 4d allows us to understand the conjectured optimality of trail systems which, based on attractive interactions, result from a bundling of trails ending at different destinations [36].
Finally, we discuss the influence of fluctuations. Including the effect of diffusion, the conditions for self-organization become

ρ_1^hom c S_11 + ρ_2^hom c S_22 > D_1 + D_2 (35)

or

(ρ_1^hom c S_11 − D_1)(ρ_2^hom c S_22 − D_2) < ρ_1^hom ρ_2^hom c² S_12 S_21. (36)

Hence, large diffusion coefficients will produce homogeneous equilibrium states, and the principle (18) of self-optimization is not valid anymore (see Figure 4e).³ However, small diffusion can further self-organization. Not only will the system be able to escape relative optima and eventually reach the global optimum (cf. Figures 4f and 5), although the interactions are short-ranged (see Eq. (12)). According to (36), small diffusion can also reduce the stability of the homogeneous stationary solution, which is quite surprising.

Summary and Outlook
Our investigations were motivated by the question why many self-organized systems seem to optimize certain macroscopic quantities, for which we have given a number of realistic examples. In order to explain this, we have derived macroscopic equations for lane formation in systems of oppositely moving driven particles or pedestrians. We could show that, in cases of repulsive interactions, the system tends to minimize the interaction rate and the intensity of interactions. The applicability of the model, however, could be extended to cases of attractive interactions and, using game-theoretical ideas, also to competitive or profitable interactions in many kinds of non-physical systems like biological, economic, or social ones.
After having shown that many driven systems can be represented as a game between interacting (sub-)populations, we have constructed a functional for such systems, which is related to thermodynamic non-equilibrium potentials and can be interpreted as the overall (expected) success. In cases of symmetric interactions among the populations, this function increases monotonically in the course of time, meaning that the overall success of these systems is optimized. In other words, as the individual entities are trying to maximize their own success, these systems tend to reach a state with the highest global success, which is not trivial at all. Since the form of the increasing function is reminiscent of a generalized thermodynamic dissipation function, one can also say that the system approaches a state of minimal dissipation (which may be considered as a generalized Onsager principle). This principle of "minimal waste of energy" may be particularly interesting for biology, where it can be conjectured that organisms use energy very efficiently. For example, it is known that pedestrians tend to move at the speed which is least energy-consuming [60]. We think it is worth pointing out that there are quite a number of living systems (for which we have given some realistic examples) to which existing methods and notions from thermodynamics can be successfully applied, if they are generalized in a suitable way. It would surely be interesting to look for other systems for which similar results or principles can be found.
Here, we obtained that the precondition for self-organization with self-optimization is symmetric interactions. However, we could give arguments indicating that the considered systems, if they self-organize at all, are also (more or less) symmetric and, hence, behave (close to) optimally. Therefore, the phenomenon of "self-organized optimality" or "optimal self-organization", as we call it, is expected to be quite common in nature. Already for two symmetrically interacting populations, one can classify more than ten different situations (depending on whether S_11, S_22, and S_12 = S_21 are smaller or greater than zero, and whether conditions (29) or (30) are fulfilled or not). This includes quite surprising cases such as repulsive interactions within each population combined with stronger attractive interactions between them (e.g. S_11 = S_22 = −1, S_12 = S_21 = 2), which leads to agglomeration analogous to Figure 4d rather than to homogeneously distributed, mixed populations as in Figure 4a. Apart from the results displayed in Figure 4, there are also cases where one population agglomerates, but the other one is distributed homogeneously (if we choose S_11 different from S_22). In systems with diffusion, the variety of different cases covered by the above approach is even greater. In particular, we have observed that, while large noise generally destroys self-organized solutions, small noise can further self-organization in the considered systems, which is surprising.
Finally, we point out that our results are relevant for practical applications. For example, when optimizing multi-agent systems (like the coordination of vehicle or air traffic, or the usage of CPU time in computer networks), it is desirable to apply control strategies that are insensitive to system failures (like the temporary breakdown of a control center). Hence, it would be favourable if the system optimized its state by means of the interactions in the system. For this, one needs to implement a suitable type of interactions (namely, symmetric ones), which can be reached by technical means (intelligent communication devices determining the proper actions of the interacting entities). Notice that this kind of multi-agent optimization is decentralized and, therefore, much more robust than classical, centralized control approaches.

Figure 2: Simulation results for a uni-directional two-lane freeway used by two different vehicle types, cars and trucks. We assume that fast vehicles overtake slow ones, if possible and safe. In addition, cars tend to drive with their respective "optimal (safe) velocity" (red squares), which depends on the local vehicle density (the inverse distance to the next vehicle ahead). The same applies to trucks, but with a considerably smaller optimal velocity (blue squares). (For details see Ref. [35].) According to our simulations, traffic flow is stable up to densities of about 24 vehicles per kilometer and lane, while stop-and-go traffic develops at higher densities. a Between about 22.5 and 24 vehicles per kilometer and lane, the difference (green line) between the average velocities of cars (red line) and trucks (blue line) becomes almost zero, which is a consequence of the breakdown of the lane changing rate (see b). Hence, a coherent state of motion appears only in a small density range which, at the same time, is characterized by a minimal interaction rate and a minimal overtaking rate. Hence, self-organization (coherent motion) and optimality (minimal interaction) are directly related in this example.
Figure 3: Schematic representation of a human trail system (solid lines) evolving between four entry points and destinations (full circles) on an initially homogeneous ground [36]. When the frequency of trail usage is small, the direct way system (consisting of the four ways along the edges and the two diagonal connections) is too long to be maintained in competition with the regeneration of the vegetation. Here, by bundling of trails, the frequency of usage becomes large enough to support the depicted trail system. It corresponds to the optimal compromise between the diagonal ways and the ways along the edges, supplying maximum walking comfort at a minimal detour of 22% for everyone. In this example, it is the discomfort of walking multiplied by the length of the individual ways that is minimized.

Figure 1: Formation of lanes of uniform walking directions in crowds of oppositely moving pedestrians. Red circles represent pedestrians walking up the street, blue ones move downwards. The simulation assumed periodic boundary conditions, but we could also use walls on both sides or randomly feed pedestrians into the upper and lower boundaries of the simulation area, without destroying the effect of lane formation. (To see this, you may check out the Java applets supplied on this internet site: http://www.theo2.physik.uni-stuttgart.de/helbing.html)

Figure 4: Illustration of various forms of self-optimization for two different populations: a homogeneous distribution in space, b segregation of populations without agglomeration, c repulsive agglomeration, d attractive agglomeration. Cases b to d are examples of "optimal self-organization" or "self-organized optimality", since the finally evolving optimal states are self-organized, non-homogeneous states. In contrast to the results displayed in a to d, in e and f we have additionally introduced fluctuations corresponding to errors in the estimation of the success S_a(x,t). Large fluctuations destroy the tendency of self-optimization and produce a homogeneous distribution of entities (see e), whereas small fluctuations help to escape relative ("local") optima, leading to a continuation of the agglomeration process until the absolute ("global") optimum is reached (see Figure 5). The above figures show the numbers n_x^1 and n_x = (n_x^1 + n_x^2) of entities as a function of the lattice site x at time t = 4000 (in f: t = 40000) and the evolution of the overall success S(t) as a function of time t. The fluctuations around the monotonic increase of S(t) in a to d are caused by the fluctuations η_α (see Eqs. (27), (28)) and the random sequential update.

Figure 5: Illustration of the temporal evolution of the distribution of the number of entities over the lattice sites related to the simulation displayed in Figure 4f. During the first few thousand time steps (not displayed), the small fluctuations play a subdominant role, and the entities agglomerate around several lattice sites due to the assumed attractive interactions among the entities, similar to Figure 4d. The clusters resulting in this first dynamic stage are more distant from each other than the range of interaction. Hence, their merging at later times is mainly due to fluctuations. The fluctuations of the individual entities around the centers of the clusters, which originate from errors in the estimation of success, can sum up and cause a slow variation of the centers of the clusters themselves (see above). In this way, initially distant clusters can accidentally come close to each other in the course of time, and merge. This is related to "evolutionary jumps" in the overall expected success (see Figure 4f) (which may be compared to "synergy effects" connected with the fusion of companies).