Complex networks: when random walk dynamics equals synchronization

Synchrony prevalently emerges from the interactions of coupled dynamical units. For simple systems such as networks of phase oscillators, the asymptotic synchronization process is assumed to be equivalent to a Markov process that models standard diffusion or random walks on the same network topology. In this paper, we analytically derive the conditions for such equivalence for networks of pulse-coupled oscillators, which serve as models for neurons and pacemaker cells interacting by exchanging electric pulses, or for fireflies interacting via light flashes. We find that the pulse synchronization process is less simple, but that there are classes of, for instance, network topologies that ensure equivalence. In particular, local dynamical operators are required to be doubly stochastic. These results provide a natural link between stochastic processes and deterministic synchronization on networks. Tools for analyzing diffusion (or, more generally, Markov processes) may now be transferred to pin down features of synchronization in networks of pulse-coupled units such as neural circuits.

periodic orbits with a positive volume basin of attraction), speed limits to synchronization that are determined by network topology [11], as well as other non-standard dynamics in symmetric systems [12, 13].
In this paper, we analyze the conditions under which the asymptotic synchronization process of pulse-coupled oscillator networks is mathematically equivalent to diffusion (random walks) on the same graphs. In particular, we specify the conditions required for stable synchronization and derive the stability operators that determine the resynchronization dynamics in response to instantaneous perturbations away from synchrony. Networks of leaky integrate-and-fire (LIF) oscillators, which model spiking neural circuits or populations of flashing fireflies and are used for self-organized slot synchronization in distributed communication networks, play a key role. In particular, we reveal that (re-)synchronization of certain LIF networks is equivalent to the relaxation of diffusion processes on the same underlying graph, whereas networks of other pulse-coupled (non-LIF) units, in general, do not exhibit this equivalence. Intriguingly, this is very distinct from networks of phase-coupled oscillators, which, with suitably normalized coupling matrices, exhibit such equivalence independent of the exact type of nonlinearity.
This paper is organized as follows: section 2 summarizes key properties of random walks on graphs. Section 3 presents the phase representation of pulse-coupled oscillators and the derivation of the stability operators S. In section 4, we derive the conditions that networks W must fulfil to yield a degenerate doubly stochastic S, while in section 5 we present three example network classes that comply with these conditions. Section 6 briefly discusses how the results relate to phase-coupled oscillator networks and continuous-time random walks and section 7 summarizes all the results.

Random walks on a graph
Consider particles jumping randomly in discrete time on a graph of N vertices. The vertices here may represent the nodes of a real physical network or abstract states of a complex stochastic system. Accordingly, the 'particles' may be physical particles, agents, viruses or, abstractly, the state of a system. If one particle is at node j ∈ {1, . . . , N} at some time t_{n−1}, n ∈ Z, the probability of it being at some other node i in the next time step t_n is given by the transition matrix element M_ij. We assume a Markov process, such that these transition probabilities are constant in time and do not depend on the history of the path the particle took to get to node j. For any given j, the total probability of going from j to some state i ∈ {1, . . . , N} equals unity, i.e. ∑_{i=1}^N M_ij = 1 for all j. The dynamics is then given by p(t_n) = M p(t_{n−1}) = M^n p(t_0), where p = (p_1, p_2, . . . , p_N)^T collects the probabilities p_i of a particle being in state i ∈ {1, . . . , N} and p(t_0) is the initial state. Note that here we follow the convention common in theoretical physics of multiplying transition matrices by state vectors from the right. We assume that the walk is aperiodic, meaning that there is no common divisor larger than 1 that divides the lengths of all cycles of the graph, and that the underlying graph is strongly connected, i.e. there is a path from every node j to every other node i ∈ {1, . . . , N}. The latter is equivalent to the graph having an irreducible adjacency matrix A, an N × N square matrix with A_ij = 1 if there is a directed connection from j to i and A_ij = 0 otherwise.

Such a process on a graph belongs to the class of finite, ergodic, time-homogeneous Markov chains, and its properties are well studied in the literature [15]. In particular, independent of the initial condition, the fraction of times a particle visits each node converges to a stationary distribution, the (unique) invariant distribution π with π = Mπ. This distribution is approached asymptotically exponentially, ‖p − π‖ ∼ e^{−n/τ_rel}, with relaxation time τ_rel. By the Perron-Frobenius theorem (see, e.g., [16]), M has N eigenvalues, of which the maximal one, λ_1, is equal to unity and belongs to the translation-invariant (right) eigenvector v_1 = (1, . . . , 1)^T. All other λ_i, i ∈ {2, . . . , N}, have absolute value |λ_i| < 1. Hence, for n → ∞, M^n is dominated by the largest non-trivial eigenvalue modulus |λ_m| := max_i {|λ_i| : |λ_i| < 1}, and we identify τ_rel = −1/log(|λ_m|).
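As a concrete sketch of these spectral quantities, the following snippet computes the invariant distribution π, the second largest eigenvalue modulus |λ_m| and the relaxation time τ_rel = −1/log|λ_m| for a small hypothetical transition matrix (the 3-node example is ours, chosen for illustration only):

```python
import numpy as np

# Hypothetical 3-node example: a column-stochastic transition matrix M
# (columns sum to 1, matching the convention p(t_n) = M p(t_{n-1})).
M = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Invariant distribution: eigenvector of M for eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi /= pi.sum()

# Relaxation time from the second largest eigenvalue modulus |lambda_m|.
lam = np.sort(np.abs(eigvals))[-2]
tau_rel = -1.0 / np.log(lam)

# Iterating p -> M p approaches pi exponentially with rate 1/tau_rel.
p = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    p = M @ p
assert np.allclose(p, pi, atol=1e-10)
```

Since this example matrix is symmetric, it is in fact doubly stochastic and π is the uniform distribution, anticipating the situation of interest below.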

Synchronization of pulse-coupled oscillator networks
Another important class of dynamical processes in complex networks is that of synchronization of the constituent dynamical units (see, e.g., [17] and references therein). In particular, nodes that interact via discrete pulses are of importance in fields such as neuroscience, behavioral biology or mobile communication technology, where the nodes and pulses can be such different entities as neurons and action potentials, fireflies and light blinks, or mobile phones and time slots for data transmission, respectively [1][2][3].
The possibly simplest collective dynamical state in such systems is that of global synchrony, where all oscillators emit pulses in perfect unison. Synchrony can be beneficial (e.g. in information transmission or feature binding in sensory processing [18]) or detrimental (as in epilepsy) for the function of networks, so understanding how its stability depends on the properties of the oscillators and couplings is of great interest.
To be specific, we thus consider synchrony in networks of pulse-coupled oscillators j ∈ {1, . . . , N} whose states are given by potentials V_j(t) that evolve according to dV_j/dt = g_j(V_j), where g_j > 0 is a smooth function that determines the internal dynamics of oscillator j. Whenever V_i reaches a threshold value θ at time t, a pulse is emitted and V_i is reset to a potential V_i(t^+) := V_R. The time t = t_{i,k} indicates the kth emission of a pulse by i. After a transmission delay τ_ji, the pulse is received by oscillator j, inducing a potential jump V_j((t_{i,k} + τ_ji)^+) = V_j(t_{i,k} + τ_ji) + W_ji. As a consequence, the potential trajectories V_j(t) are smooth except at the event times of pulse emission and pulse reception. The condition g_j(V_j) > 0 guarantees that in isolation (W_ji = 0 for all i, j) the units are periodic oscillators with finite periods T_{0,j}.
The dynamics of the network is equivalently described by time-like phase variables φ_j that satisfy the simple linear dynamics dφ_j/dt = 1/T_{0,j} at all but the event times. There is a one-to-one mapping of phases to potentials via U_j(φ_j) := Ṽ_j(φ_j T_{0,j}), with φ_j ∈ (−∞, 1]. Here Ṽ_j(t) is the solution of the uncoupled (W_ij ≡ 0) oscillator j with initial condition Ṽ_j(0) = 0. We require the rise function U to fulfil conditions (4)(i)-(iv), in particular monotonicity, U' > 0, and definite curvature, U'' < 0 or U'' > 0.

Figure 1. Intuitive explanation of the synchronizing effect of inhibitory coupling for concave U. The sketch gives an intuitive explanation of why synchronization is stable for purely inhibitory interactions if U'' < 0, while it is unstable for purely excitatory interactions: in the first case an original phase difference |φ_1 − φ_2| (difference between vertical thin black lines) is decreased (black dashed lines) due to the concave shape of U, whereas in the latter case it is increased (gray dashed lines), hence pushing an initial phase difference to larger values. For convex U (U'' > 0), the same argument holds for pulses of the opposite sign (stable if the pulse is excitatory and unstable otherwise).
Condition (4)(iii a) (respectively (4)(iii b)) ensures that incoming pulses with phase-retarding (respectively phase-advancing) impact decrease the phase difference between receiving oscillators and thus support synchrony [14], cf figure 1 and section 3.1. At the phase threshold, φ_j(t) = 1, a pulse is emitted and the phase is reset, φ_j(t^+) = 0. The pulse reception at i at time t + τ_ij induces a phase jump to the new phase H_{W_ij}(φ_i(t + τ_ij)), where H_W(φ) := U^{−1}(U(φ) + W). In general, the size of the phase jump H_{W_ij}(φ_j) − φ_j depends nonlinearly on φ_j, and the interaction can be excitatory (W_ij > 0) or inhibitory (W_ij < 0). For simplicity of presentation, we assume in the following identical oscillators, g_j ≡ g, with natural period T_0 ≡ T_{0,j}, identical delays, τ_ij = τ < T_0 for all i, j, and a strongly connected network graph, and we exclude self-couplings (W_ii = 0).
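The contraction of phase differences can be checked numerically. Assuming the standard transfer function H_W(φ) = U^{−1}(U(φ) + W) and a hypothetical concave rise function (both concrete choices below are ours, for illustration only), an inhibitory pulse shrinks an existing phase difference:

```python
import numpy as np

# A hypothetical concave rise function U (U' > 0, U'' < 0) and its inverse,
# here U(phi) = (1 - exp(-b*phi)) / (1 - exp(-b)), so that U(0) = 0, U(1) = 1.
b = 3.0

def U(phi):
    return (1.0 - np.exp(-b * phi)) / (1.0 - np.exp(-b))

def U_inv(u):
    return -np.log(1.0 - u * (1.0 - np.exp(-b))) / b

# Pulse-coupling transfer: a pulse of weight W shifts the phase to
# H_W(phi) = U^{-1}(U(phi) + W).
def H(W, phi):
    return U_inv(U(phi) + W)

# For inhibition (W < 0) and concave U, an initial phase difference shrinks:
phi1, phi2, W = 0.3, 0.5, -0.1
d_before = phi2 - phi1
d_after = H(W, phi2) - H(W, phi1)
assert 0 < d_after < d_before
```

The same computation with W > 0 yields d_after > d_before, matching the instability of excitation for concave U sketched in figure 1.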

Conditions for linear synchronization dynamics
The difference between the phases of two oscillators, Δ_ij(t) := |φ_i(t) − φ_j(t)| measured circularly, quantifies their degree of synchrony. The synchronous state, Δ_ij(t) ≡ 0 for all i, j, t, of a network of N pulse-coupled identical oscillators as introduced in the last section exists iff [14] the total input is node independent, ∑_{j=1}^N W_ij = w for all i ∈ {1, . . . , N} (condition (6)), where the sum runs over all presynaptic neurons j. Stability analysis [19] shows that for potential functions U(φ) that comply with conditions (4)(i)-(iv), synchrony is asymptotically stable if the perturbation to pulse times is smaller than the delay τ and if either (7)(i) all couplings are non-positive, W_ij ≤ 0 (for U'' < 0), or (7)(ii) all couplings are non-negative, W_ij ≥ 0 (for U'' > 0). Intuitively speaking, these conditions ensure that an existing phase difference between oscillators will decrease, cf figure 1.
In the case of strictly non-negative coupling (equation (7)(ii)), the possible phase values are bounded below by 0, because 0 is the reset value, there is no negative input in the system, and U(φ) is monotonically increasing; for strictly non-positive coupling, in contrast, the phases can become arbitrarily negative, depending on the inhibitory input from other oscillators. The constraint that the perturbation size be smaller than τ, on the other hand, ensures that all pulses are emitted before any pulse is received. Thus, all oscillators still emit exactly one pulse per period.
Since the synchronous state is characterized by zero phase difference Δ_ij for all pairs i, j, all phases can be described by the same periodic dynamics, φ_i(t) ≡ φ_0(t) for all i ∈ {1, . . . , N}.
To assess the impact of small perturbations δ(t_0) ≡ φ(t_0) − φ_0(t_0), delivered at some time t_0 ∈ (τ, T − τ) after the last common spike emission at t = 0 and the last common spike reception at t = τ, and well before the next threshold crossing, the first return time to threshold of oscillator i, T_i^{(0)}, is determined and linearized (for a detailed derivation see appendix A and [14]), yielding the first-order stability operator S with entries built from the quantities p_{i,n}, n ∈ {1, . . . , k_i}, where k_i is the in-degree of oscillator i. S, in general, depends on the rank order, i.e. the temporal order of threshold crossings of the oscillators resulting from the applied perturbation, implying not only that each initial condition δ(t_0) induces a different S, but also that each iteration can change the rank order and lead to a new operator S. Hence, in general, there is not one particular S(W) for each network realization W that determines the growth or decay of a perturbation, but a whole ensemble S(W, δ) (the multioperator problem [14]).
A class of pulse-coupled units with special relevance to theoretical neuroscience is the leaky integrate-and-fire (LIF) class, with g(V) = I − γV and hence T_0 = γ^{−1} log(I/(I − γθ)). For this oscillator type, equation (5) assumes a particularly simple form. In particular, the p_{i,n} become linear functions of the summed coupling weights of the pulses received so far,
and S(W) is hence directly proportional to W. As we demonstrate in appendix B, it is indeed uniquely the LIF class that generically yields exactly one S(W, δ) = S(W), independent of the rank order of the perturbation δ, for any particular network realization W (cf also [11] for an earlier discussion of the multioperator problem). In the following, we therefore confine the analysis to this special oscillator class. The asymptotic evolution of a small perturbation δ is then dominated by the spectrum of S, where, without loss of generality, we shifted the time axis such that t_0 = 0. S has one unique eigenvalue equal to unity, belonging to the time-translation invariant mode (cf appendix B, (B.6)). If the synchronous state is stable, all other eigenvalues have absolute value smaller than 1 (cf appendix B, (B.7)). Hence, the second largest eigenvalue modulus, |λ_m| := max_i {|λ_i| : |λ_i| < 1}, determines the asymptotic behavior of the dynamics (cf appendix B, [11]). Note, however, that for non-normal matrices S, i.e. S S^T ≠ S^T S, the eigenvectors are not orthogonal and hence a perturbation can, in principle, grow transiently (see, e.g., [20]).
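The last remark can be made concrete with a toy non-normal matrix (purely illustrative and not a stability operator of the type derived above): although all relevant eigenvalue moduli lie below one, a perturbation is transiently amplified before the asymptotic decay sets in.

```python
import numpy as np

# A hypothetical non-normal matrix with spectral radius < 1 can still
# amplify a perturbation transiently before it decays.
S = np.array([[0.5, 10.0],
              [0.0,  0.5]])
assert not np.allclose(S @ S.T, S.T @ S)          # non-normal
assert np.all(np.abs(np.linalg.eigvals(S)) < 1)   # asymptotically stable

delta = np.array([0.0, 1.0])
norms = [np.linalg.norm(delta)]
for _ in range(20):
    delta = S @ delta
    norms.append(np.linalg.norm(delta))

assert max(norms) > norms[0]      # transient growth ...
assert norms[-1] < norms[0]       # ... followed by asymptotic decay
```

Here the perturbation norm first grows by roughly an order of magnitude before the eigenvalue 0.5 eventually forces exponential decay.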

Equivalence of spike synchronization and particle diffusion
What are the conditions for the oscillator synchronization dynamics to be equivalent to the relaxation of diffusion processes on the same graph? The linear operator S is a row-stochastic matrix by construction, cf (10).

If S is, moreover, doubly stochastic, it can at the same time be interpreted as the transition matrix of a time-homogeneous Markov chain, as introduced in section 2, on the same underlying network graph, meaning that the identical dynamical system describes both the asymptotic synchronization dynamics of the pulse-coupled oscillator network and the relaxation of particle diffusion. S is doubly stochastic if (20): (i) S_ij ≥ 0 for all i, j, (ii) ∑_{i=1}^N S_ij = 1 for all j, and (iii) ∑_{j=1}^N S_ij = 1 for all i. Conditions (20)(i) and (iii) are always fulfilled given (10), (7) and (B.5), while (20)(ii) is not satisfied in general.
In particular, the relaxation time constant τ rel of a particle diffusion system specified by (19) that was perturbed from equilibrium will be identical to the synchronization time constant τ sync of the oscillator network (cf figures 2(a)-(c)), given that (20) is fulfilled.
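A minimal numerical sketch of this shared time constant: the Birkhoff-style construction below (a convex combination of permutation matrices plus a uniform component; parameters are our own illustrative choices, not from the paper) produces a doubly stochastic matrix whose single spectrum governs both processes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Birkhoff-style construction: a convex combination of permutation matrices
# (plus a uniform component ensuring irreducibility) is doubly stochastic.
S = 0.5 * np.ones((N, N)) / N
for w in 0.5 * rng.dirichlet(np.ones(8)):
    S += w * np.eye(N)[rng.permutation(N)]

assert np.allclose(S.sum(axis=0), 1.0)   # column sums: condition (20)(ii)
assert np.allclose(S.sum(axis=1), 1.0)   # row sums: stochastic by construction

# One spectrum governs both processes: tau_sync = tau_rel = -1/log|lambda_m|.
lam = np.sort(np.abs(np.linalg.eigvals(S)))[-2]
tau = -1.0 / np.log(lam)
assert 0 < lam <= 0.5 and tau > 0
```

The uniform component bounds the second largest eigenvalue modulus by 0.5 here, so both relaxation and resynchronization decay at least that fast.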

Network classes
We will now discuss three examples of systems that equivalently describe both relaxation and resynchronization, and compare analytical results to simulations.

Fully connected networks with identical weights
We denote the number of connections per neuron by k = N − 1 and assume all coupling matrix entries to be identical, W_ij = w/k for i ≠ j. The stability matrix S then has identical off-diagonal entries S_1 and diagonal entries S_0 = 1 − (N − 1)S_1 (cf (21)). Its eigenvalues are the simple root λ_1 = 1 and the (N − 1)-fold degenerate root λ = S_0 − S_1, and thus τ_a2a = −1/log|S_0 − S_1| (cf figure 3).
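For a matrix with constant off-diagonal entries S_1 and diagonal entries S_0 = 1 − (N − 1)S_1, this degenerate spectrum is easy to verify numerically (the parameter values below are illustrative):

```python
import numpy as np

N, S0 = 8, 0.7                      # illustrative size and diagonal entry
S1 = (1.0 - S0) / (N - 1)           # off-diagonal entry, so row sums are 1
S = S0 * np.eye(N) + S1 * (np.ones((N, N)) - np.eye(N))

lams = np.sort(np.linalg.eigvals(S).real)
# Simple root 1 and an (N-1)-fold degenerate root S0 - S1:
assert np.isclose(lams[-1], 1.0)
assert np.allclose(lams[:-1], S0 - S1)

tau_a2a = -1.0 / np.log(abs(S0 - S1))
```

This follows from writing S = (S_0 − S_1) I + S_1 J with J the all-ones matrix, whose eigenvalues are N (once) and 0 (N − 1 times).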

Sparse coupled random networks with identical weights
The second most straightforward network type is the symmetric sparsely coupled random network ensemble with identical coupling strengths for all connections. The adjacency matrices A are Erdős-Rényi G(N, E) random graphs [21] with a fixed number of nodes N and a fixed number E of randomly assigned edges, with the additional constraint that each node has k_i = k < N − 1 incoming and outgoing edges, hence E = Nk (graphs regular in both in- and out-degree). The elements S_ij = S_ji are given by (21). Random matrix theory (RMT) can then be applied to obtain the expected eigenvalue distribution for N → ∞ [11, 22] and, in particular, an estimate for the second largest eigenvalue modulus of S, |λ_m|_s ≈ 2σ√N (cf [11]), where σ² is the variance of the elements of S. Exemplary spectra together with the large-N prediction are shown in figure 4(a). The result for asymmetric sparse matrices follows analogously, given that the associated adjacency matrix A fulfils the same degree constraints; however [11, 23], |λ_m|_a^RMT = |λ_m|_s^RMT / 2. This implies for the asymptotic synchronization time scales τ_sync^s = −1/log|λ_m|_s > τ_sync^a = −1/log|λ_m|_a.
As an important consequence of this analytic prediction, symmetric networks synchronize/relax more slowly than asymmetric ones.

Figure 4. Panels (a) and (b) show exemplary eigenvalue spectra for the asymmetric and symmetric cases, respectively; solid lines depict the RMT predictions, where we applied Girko's circular law in (a) [23] and Wigner's semicircle law in (b) [22] to the sparse {0, S_1}-ensemble with S_1 given by (21). For small N, k the eigenvalue density is low and the N → ∞ limit of the RMT predictions does not hold very well (cf (a), (b), N = 64). This leads to an underestimation of τ_sync for small N and k (cf figure 4). Other parameters are as specified in figure 3.
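The factor of two between the symmetric and asymmetric spectral radii can be checked on dense Gaussian ensembles, which share the relevant edge scaling with the sparse {0, S_1} ensemble (a proxy for, not a reproduction of, the paper's ensemble):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 400, 0.01

# Asymmetric i.i.d. entries: Girko's circular law, radius ~ sigma*sqrt(N).
A = rng.normal(0.0, sigma, (N, N))
r_asym = np.abs(np.linalg.eigvals(A)).max()

# Symmetric entries: Wigner's semicircle, radius ~ 2*sigma*sqrt(N).
B = rng.normal(0.0, sigma, (N, N))
Bs = (B + B.T) / np.sqrt(2.0)       # keeps off-diagonal entry variance at sigma^2
r_sym = np.abs(np.linalg.eigvals(Bs)).max()

# The symmetric spectral radius is roughly twice the asymmetric one, so
# tau = -1/log(r) is larger: symmetric networks synchronize/relax more slowly.
assert 1.6 < r_sym / r_asym < 2.4
```

At N = 400 the measured radii are close to the predicted σ√N ≈ 0.2 and 2σ√N ≈ 0.4; as stated above, the agreement degrades for small N, k.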

Networks with randomly distributed weights
Given (B.1) and (20), the weights W_ij can be chosen arbitrarily such that (7)(i) or (ii) is fulfilled. If they are i.i.d. random variables drawn from a probability distribution with finite variance, the results of sections 5.1 and 5.2 generalize to heterogeneously coupled systems, and the synchronization time τ_sync can be estimated from (empirical) RMT, cf figures 6(a) and (b). One possible way to generate fully connected, doubly stochastic, heterogeneously coupled asymmetric networks was presented by Diaconis and Sturmfels in [25]; it is outlined in appendix C.3 and can be extended to symmetric networks as described in appendix C.4. The Diaconis-Sturmfels algorithm (DSA) creates networks with exponentially distributed entries [25]. After normalization to ensure condition (6), the corresponding stability matrix S has diagonal entries S_0 (cf (21)) and off-diagonal entries following an exponential distribution

Figure 6. Eigenvalue spectra of coupling matrices of fully connected networks with randomly distributed weights. Panels (a) and (b) show exemplary eigenvalue spectra for the asymmetric and symmetric cases of fully connected networks with randomly distributed weights, respectively; solid lines depict the RMT predictions [23] and Wigner's semi-circle law [22] for the exponentially distributed weight ensemble.

with mean E[S_ij] = γw/N and variance (E[S_ij])², as characteristic of an exponential distribution. Given conditions (4), the second largest eigenvalue modulus of a network realization can be estimated by RMT to be |λ_m|_a ≈ σ√N, both for fully coupled asymmetric networks and for networks with sparsened connectivity characterized by the fill ratio k/N ∈ [0, 1). For symmetric networks, the second largest eigenvalue estimate again yields twice this value, |λ_m|_s ≈ 2σ√N, demonstrating that also for networks with heterogeneous weights the asymmetric networks synchronize (or relax) faster. Figure 6 shows examples of a fully coupled network with a coupling matrix generated with the DSA such that the resulting W is symmetric (figure 6(a)) or asymmetric (figure 6(b)). The full lines give Wigner's semi-circle prediction in figure 6(a) and Girko's circular law prediction in figure 6(b) for exponentially distributed random variables (cf (28)), respectively.

Comparison of relaxation dynamics
Two main conclusions can be drawn from the network class examples in section 5: (i) symmetric connectivity leads to slower synchronization/relaxation dynamics than asymmetric connectivity; (ii) sparse networks with fixed in-degree k have finite synchronization time in the N , |w| → ∞ limit, whereas networks where k scales with network size synchronize infinitely fast in this limit.
The latter point was extensively discussed in [11] for sparse random networks with identical weights, where τ_sync with a fixed k approaches a finite positive value (cf (23), (25)), while for fully coupled networks with k = N − 1, or sparsely coupled networks where k scales linearly with N, τ_sync → 0 [11]. Here we find that the same result holds for networks with an arbitrary continuous distribution of entries with finite and fixed variance, such as the example presented in section 5.3, assuming that RMT is applicable: let the coupling matrix have entries W_ij (W_ii = 0 for all i) such that (20) holds and such that they follow a distribution P(W_ij) with finite mean µ_W = w/N and raw second moment µ²_W := ∑_{j=1}^N W_ij²/N. The elements of the shifted stability matrix S̃ := S − S_0 I, with identity matrix I, are then S̃_ij = γ W_ij, with mean µ = γw/N and raw second moment µ₂ = γ²µ²_W, while the diagonal entries of S are S_ii = S_0 as given in (21). It follows that lim_{|w|→∞} S_0 = 0. In the case when the number of connections k scales with the network size, k/N ∈ [0, 1) fixed (k/N = (N − 1)/N for global coupling), the second largest eigenvalue modulus, and hence the relaxation time, vanish in this limit. For sparsely connected networks with a fixed k, however, τ_sync/rel stays finite.
Here, W_ij quantifies the transition rate from node j to node i, with W_ij ≥ 0 for all i, j. The solution to the Kolmogorov equation is of the form P(t) = H_t P(0), where H_t = e^{Lt} determines the dynamics and the steady-state distribution π = H_t π of the system. H_t is column-stochastic with largest eigenvalue λ_1 = 1 and all other eigenvalues |λ_i| < 1, i ∈ {2, . . . , N}. In particular, the relaxation time τ_rel after a perturbation of the Markov chain from the steady state is again given by the eigenvalue second largest in magnitude, |λ_m|. Consider, on the other hand, a strongly connected network of N phase-coupled Kuramoto-type oscillators whose phases φ_i(t) change according to dφ_i/dt = ω_0 + ∑_{j=1}^N W_ij f(φ_j − φ_i), where ω_0 is the natural frequency of the oscillators i ∈ {1, . . . , N}, W_ij ≥ 0 is the coupling strength from j to i, and f is a coupling function with f(0) = 0 and f'(0) > 0. To assess the local stability of the fully synchronous state, the system is linearized [26], yielding dδ/dt = L̃ δ, where the Laplacian L̃ now reads L̃_ij = f'(0) W_ij for i ≠ j and L̃_ii = −f'(0) ∑_j W_ij. The resynchronization dynamics can again be expressed by a stroboscopic map, δ(T) = S_T δ(0), with row-stochastic S_t = e^{L̃t}, where the eigenvalues of S_t are given by Λ_1 = 1 and |Λ_i| < 1, i ∈ {2, . . . , N} [26]. For the resynchronization time, this yields τ_sync ∼ −1/log|Λ_m|, with |Λ_m| := max_i {|Λ_i| : |Λ_i| < 1}. Hence, if H_t = S_t is a doubly stochastic matrix, the relaxation dynamics of continuous-time random walks is mathematically equivalent to the local resynchronization dynamics of coupled phase oscillators. This, in particular, implies that the time scales of resynchronization τ_sync and relaxation τ_rel are identical. We remark that for such continuous-time systems the equivalence is much simpler and, moreover, holds under weaker conditions: it can hold independent of the specific nonlinear interactions between the oscillators and even when the summed weights at each node are inhomogeneous, i.e. node dependent, in contrast to pulse-coupled systems, where both the summed weights and the nonlinear interactions need to fulfil specific additional constraints.
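A compact numerical illustration of the continuous-time case (the rate matrix below is a hypothetical example of ours): for symmetric W the propagator e^{Lt} is doubly stochastic, so a single spectrum governs both the diffusion and the linearized phase dynamics.

```python
import numpy as np

# A hypothetical symmetric rate matrix W (zero diagonal, non-negative rates).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])

# Generator of the continuous-time walk: L_ij = W_ij (i != j),
# L_jj = -sum_i W_ij, so every column of L sums to zero.
L = W - np.diag(W.sum(axis=0))

# H_t = exp(L t) via eigendecomposition (L is symmetric here).
t = 0.7
vals, vecs = np.linalg.eigh(L)
H = vecs @ np.diag(np.exp(vals * t)) @ vecs.T

# Symmetric W => H_t doubly stochastic: the same operator propagates the
# diffusion P(t) = H_t P(0) and the linearized phase perturbations.
assert np.allclose(H.sum(axis=0), 1.0)
assert np.allclose(H.sum(axis=1), 1.0)
assert np.all(H >= 0)

# Relaxation time from the spectral gap of L.
gap = -np.sort(vals)[-2]
tau = 1.0 / gap
```

For this particular W the non-trivial eigenvalues of L are −2 and −3, giving a shared relaxation/resynchronization time of τ = 0.5.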

Discussion
In summary, we have demonstrated under which conditions and how two seemingly unrelated network phenomena, the resynchronization dynamics in networks of pulse-coupled units and particle diffusion on a graph, can be described by the same dynamical equation. For the class of LIF units, if (a) all W_ij ≤ 0 (or all W_ij ≥ 0) while the potential function U of the neurons fulfils U' > 0 and U'' < 0 (or U'' > 0, respectively), and (b) the coupling matrix W is doubly stochastic, the asymptotic dynamics of these two processes are mathematically identical.
A necessary condition for the local stability of global synchrony in a network of identical oscillators is that all eigenvalues of the linear stability operator S have an absolute value of at most unity. It is uniquely the LIF oscillator that leads to one single S (cf appendix B) and hence determines local stability for all possible small perturbations.
The linear stability matrix S then, and only then, inherits the symmetry properties of the coupling matrix W and is thus also doubly stochastic if W is. S is then both row- and column-normalized to one and, being entrywise non-negative, can equivalently be viewed as the transition matrix of a particle diffusing on the same underlying network graph. This implies the equality of the time scale τ_sync of the asymptotic resynchronization process and the time constant τ_rel of relaxation to the equilibrium distribution of a particle diffusing on that graph.
We presented several network class examples for which such systems can be constructed and gave explicit expressions for the expected τ sync and τ rel , which are directly related to the second largest eigenvalue of S, i.e. |λ m |. For coupling matrices with elements drawn from a random distribution, we made use of Wigner's semi-circle law and Girko's circular law to estimate |λ m | [22,23]. As is to be expected, the circle law predictions improve with increasing network size N .
We showed, moreover, that in symmetrically coupled networks resynchronization/ relaxation takes longer than in asymmetrically coupled networks if in both cases matrix elements are drawn from the same statistical distribution. This is a direct consequence of the nature of the eigenvalue spectra for symmetrical and asymmetrical random matrices [27]. In this context, it is interesting to note that recent studies of small world spectra [28] revealed that small world synchronization time scales are almost independent of whether the networks are directed or undirected for rewiring probabilities of up to about q = 0.1, whereas in the regime q = 1 (random networks) our above results point to a pronounced difference between synchronization times for directed and undirected graphs.
Sparse networks in which the number of edges scales with the network size have infinitely fast synchronization time in the limit of infinite coupling strength and network size, while τ sync remains finite if the number of edges is fixed and independent of N [11]. Here, we generalized that result to networks with arbitrary continuous weight distributions with finite variance. Our more general results can now be used to find topologies and weight distributions that ensure optimal, that is, fastest synchronization time by maximizing the gap 1 − |λ m |. In this line, Donetti et al [29] discussed how a large spectral gap favors synchronizability of phase-coupled oscillators, as well as fast diffusion on random graphs, and presented a more general mathematical analysis of topologies that maximize this gap. In [30], on the other hand, the effect that symmetry versus asymmetry has on the synchronizability and critical coupling strength of phase-coupled oscillators was analyzed; however, the authors do not comment on synchronization speed.
We emphasize that the results presented here can be extended to phase-locked patterns in pulse-coupled oscillator networks. As was shown in [31], periodic patterns due to heterogeneity in the inputs w_i = ∑_{j=1}^N W_ij can be stably sustained in pulse-coupled oscillator networks with instantaneous pulse transmission (τ = 0). In [32], the analysis was extended to networks with delayed interactions, and the results for global synchrony presented here generalize to this case: patterns that have a near-synchronous orbit with a total phase spread smaller than or equal to the delay τ are stable to small perturbations. In [33], it was shown that it is, moreover, possible to predefine a pattern and find all possible coupling matrices producing that given pattern, depending on system parameters (e.g. weights, delays or oscillator types). This approach could be extended to find coupling matrices with distributed w_i (which are then no longer row- or column-normalized) or non-identical oscillators that still yield doubly stochastic associated stability operators S.
Finally, because the LIF oscillator is one of the most commonly used neuron models in computational neuroscience, these generalizations and, in particular, the equivalence to diffusion processes raise hope that some of the mathematical tools for analyzing diffusion (or, more generally, Markov processes) may now be transferred to help understand the features of synchronization in neuronal networks.

(A.1) and (A.2). The first line sets the initial condition of oscillator i between the emission of a pulse with its corresponding reset and the reception of pulses after a delay τ. The most advanced presynaptic neuron of i is at phase Δ_{i,1} at time 0, and its pulse will hence arrive at i after a period τ − Δ_{i,1}. At that time the phase φ_i is retarded (advanced) if the pulse is inhibitory (excitatory) by an amount specified by equation (5). The next event of pulse reception will analogously take place at τ − Δ_{i,2}, and so forth, until all k_i pulses have been received at t = τ − Δ_{i,k_i} and i is at phase β_{i,k_i} (penultimate line). The time to the next threshold crossing, T_i^{(0)}, is then given by the current time τ − Δ_{i,k_i} plus the distance of φ_i = β_{i,k_i} to threshold, i.e. 1 − β_{i,k_i} (last line).
The last line of table A.1 together with equation (8) then gives the first return time to threshold. Expansion of the nested nonlinear expression β_{i,k_i} in the D_{i,n} yields, to first order (cf [14]), a linear map, δ ↦ S δ. Equations (A.6) and (A.9) imply that S is row-normalized to 1, ∑_{j=1}^N S_ij = 1. From (B.5), it moreover follows directly that W and S − f(0) I, where I is the identity matrix, have the same underlying strongly connected graph. Hence, with (7), S is a non-negative, row-stochastic, irreducible matrix, such that the Perron-Frobenius theorem [16] yields: (i) S has spectral radius λ_1 = 1, (ii) λ_1 is unique, and (iii) the corresponding right eigenvector is v_1 = (1, . . . , 1)^T.

C.1. Symmetric sparse random matrices with homogeneous weights
The aim of this section is to create sparse random 0/1 matrices A with randomly positioned unit entries at off-diagonal positions and zeros elsewhere, such that for each row i ∈ {1, . . . , N} and each column j ∈ {1, . . . , N} we have ∑_j A_ij = k and ∑_i A_ij = k. To achieve this we start with A = O, the matrix with all elements equal to 0. The first row, i = 1, is filled with k 1's at random positions j ≠ i. The first column, j = 1, is then set to A_m1 = A_1m for all m ∈ {2, . . . , N}. For all m ∈ {2, . . . , N} we proceed in the same way with the all-zero minors A^(m) = {A_ij}_{i,j ∈ {m,...,N}}, except that the probabilities of adding a 1-entry at A_mj are weighted according to the presence of 1's in the jth column of the full N × N matrix A. In the end, every row/column of matrix A is normalized, i.e. A^norm_ij = A_ij / k. If k > N/2 we proceed as before, but distribute zeros in a matrix that initially has all elements equal to 1.

C.2. Asymmetric sparse random matrices with homogeneous weights
Proceed as in appendix C.1, but without the copy step in the nth iteration, i.e. without setting A_mn = A_nm for all m ∈ {1, . . . , N}.
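As an alternative to the filling schemes of appendices C.1 and C.2 (not the algorithm described there), a 0/1 matrix with zero diagonal and all row and column sums equal to k can also be generated by superposing k random permutation matrices and rejecting draws that produce self-connections or double edges; this rejection-sampling sketch is practical for moderate N and k:

```python
import numpy as np

def regular_adjacency(N, k, rng):
    """0/1 matrix with zero diagonal and every row/column sum equal to k,
    built as a superposition of k non-overlapping derangement permutations.
    A rejection-sampling sketch, not the weighted scheme of appendix C.2."""
    while True:
        A = np.zeros((N, N), dtype=int)
        for _ in range(k):
            p = rng.permutation(N)
            A[np.arange(N), p] += 1
        if A.diagonal().max() == 0 and A.max() == 1:   # no self- or double edges
            return A

rng = np.random.default_rng(2)
N, k = 20, 3
A = regular_adjacency(N, k, rng)
S = A / k                                  # doubly stochastic normalization
assert np.allclose(S.sum(axis=0), 1.0) and np.allclose(S.sum(axis=1), 1.0)
```

The acceptance probability decays roughly like e^{-k(k+1)/2}, so for large k the sequential scheme of appendices C.1 and C.2 is preferable.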

C.3. All-to-all asymmetric random matrices with heterogeneous weights
To obtain doubly stochastic, fully occupied random matrices, we follow a scheme introduced in [25]. Let the initial matrix be A = (1/(N − 1))(J − I), with J the all-ones matrix and I the identity. At every iteration, randomly sample four matrix elements A_ij, A_in, A_mj, A_mn with distinct rows i ≠ m and distinct columns j ≠ n (additionally, we impose that diagonal elements are never changed from 0).
Let B = [B(1), B(2)] be the interval of values r for which all four sampled elements remain non-negative, and set r = B(1) + (B(2) − B(1)) · rand, where rand ∈ [0, 1] is a uniform random variable; then update A_ij → A_ij + r, A_mn → A_mn + r, A_in → A_in − r and A_mj → A_mj − r. (C.1) The final distribution of matrix elements is exponential [25], and the algorithm in our implementation stops when the empirical distribution passes statistical testing for the exponential distribution P(x) = N exp[−N x] (the Anderson-Darling test, cf [34]). Finally, A^norm_ij = A_ij.
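One elementary DSA move, in our reading of (C.1) (add r to A_ij and A_mn, subtract r from A_in and A_mj, with r bounded so that all entries stay non-negative), preserves all row and column sums exactly; a minimal sketch:

```python
import numpy as np

def dsa_step(A, rng):
    """One Diaconis-Sturmfels move on a doubly stochastic matrix A:
    add r to A[i,j], A[m,n] and subtract r from A[i,n], A[m,j], with r
    drawn so all four entries stay non-negative. Diagonal entries are
    kept at zero by resampling (our reading of appendix C.3)."""
    N = A.shape[0]
    while True:
        i, m = rng.choice(N, size=2, replace=False)
        j, n = rng.choice(N, size=2, replace=False)
        if i != j and i != n and m != j and m != n:   # never touch the diagonal
            break
    lo, hi = -min(A[i, j], A[m, n]), min(A[i, n], A[m, j])
    r = lo + (hi - lo) * rng.random()
    A[i, j] += r; A[m, n] += r
    A[i, n] -= r; A[m, j] -= r

rng = np.random.default_rng(3)
N = 5
A = (np.ones((N, N)) - np.eye(N)) / (N - 1)   # initial doubly stochastic matrix
for _ in range(2000):
    dsa_step(A, rng)

assert np.allclose(A.sum(axis=0), 1.0) and np.allclose(A.sum(axis=1), 1.0)
assert A.min() >= -1e-12 and np.allclose(np.diag(A), 0.0)
```

Each move shifts mass around a 2 × 2 minor, so row and column sums are invariants of the chain; a full implementation would add the Anderson-Darling stopping test described above.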

C.4. All-to-all symmetric random matrices with heterogeneous weights
Proceed as in appendix C.3, with the additional operations r = B(1) + (B(2) − B(1)) · rand, A_ij → A_ij + r and A_ji → A_ji + r, A_mj → A_mj − r and A_jm → A_jm − r, A_mn → A_mn + r and A_nm → A_nm + r. (C.2)