
Generating directed networks with prescribed Laplacian spectra


Published 23 November 2020 © 2020 The Author(s). Published by IOP Publishing Ltd
Citation: Sara Nicoletti et al 2021 J. Phys. Complex. 2 015004. DOI: 10.1088/2632-072X/abbd35


Abstract

Complex real-world phenomena are often modeled as dynamical systems on networks. In many cases of interest, the spectrum of the underlying graph Laplacian sets the system stability and ultimately shapes the matter or information flow. This motivates devising suitable strategies, with rigorous mathematical foundation, to generate Laplacians that possess prescribed spectra. In this paper, we show that a weighted Laplacian can be constructed so as to exactly realize a desired complex spectrum. The method constitutes a non-trivial generalization of existing recipes, which assume the spectra to be real. Applications of the proposed technique to (i) a network of Stuart–Landau oscillators and (ii) the Kuramoto model are discussed. Synchronization can be enforced by assuming a properly engineered, signed and weighted adjacency matrix to rule the pattern of pairwise interactions.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Complex networks play a role of paramount importance for a wide range of problems, of cross-disciplinary breadth. In several cases of interest, networks define the skeleton of pairwise interaction between coupled populations, families of homogeneous constituents anchored to a given node of the collection. The nature of the existing paired relationships between mutually entangled populations is encoded in the weights of the links that bridge adjacent nodes. Cooperative and competitive interactions can, in principle, be accommodated by allowing the weights to take positive or negative values. Local and remote nonlinear couplings shape the ensuing dynamics, and possibly steer the system towards a stationary stable equilibrium which is compatible with the assigned initial condition. The stability of the fixed points can be analyzed by studying the dynamical system in its linearized version. For reaction-diffusion systems defined on networks, the stability of the inspected equilibrium is ultimately dictated by the spectrum of the discrete Laplacian matrix [1–3]. The eigenvalues of the Laplacian define, in fact, the support of the dispersion relation, the curve that sets the rate for the exponential growth of the imposed perturbation. More specifically, external disturbances can be, in general, decomposed on the basis formed by the eigenvectors of the Laplacian operator. Each eigenvector defines an independent mode, which senses the web of intricate paths made accessible across the network: the perturbation can eventually develop, or, alternatively, fade away, along the selected direction, depending on the corresponding entry of the dispersion relation, as fixed by the associated eigenvalue. Stability is an attribute of paramount importance as it relates to resilience, the ability of a given system to oppose external perturbations that would take it away from the existing equilibrium. Similarly, synchronization, a widespread phenomenon in distributed systems, can be enforced by properly adjusting the spectrum of the matrix which encodes the intertwined pairings. Based on the above, it is therefore essential to devise suitably tailored recipes for generating networks which display a prescribed Laplacian spectrum, compatible with the stability constraint [4, 5].

The problem of recovering a network from a set of assigned eigenvalues has been tackled in the literature from both an algorithmic [6–8] and a formal [9–11] standpoint. In [10], a procedure is discussed to generate an undirected and weighted graph from its spectrum. The result extends beyond the well-known theorem of Botti and Merris [12], which states that the reconstruction of non-weighted graphs is, in general, impossible since almost all (non-weighted) trees share their spectrum with another non-isomorphic tree. In [13], a method is proposed to obtain a, directed or undirected, graph whose eigenvalues are constrained to match specific bounds, which ultimately reflect the node degrees, as well as the associated weights. In [14], a mathematically rigorous strategy is instead developed to yield weighted graphs which exactly realize any desired spectrum. As discussed in [14], the method translates into an efficient approach to control the dynamics of various archetypal physical systems via suitably designed Laplacian spectra. The results are, however, limited to undirected Laplacians, characterized by a real spectrum. The purpose of this paper is to expand beyond these lines, by proposing and testing a procedure which enables one to recover a signed Laplacian operator which displays a prescribed complex spectrum. Signed Laplacians are often used in the literature for applications which relate to social contagion, cluster synchronization or repulsive–attractive interactions [19, 20]. In engineering, they are often employed in modeling microgrid dynamics [21].

The paper is organized as follows. The first section is devoted to illustrating the devised method, focusing on the mathematical aspects. We then turn to discussing the implementation of the scheme and introducing the sparsification algorithms that are run to cut unessential links. In the subsequent section, we elaborate on the conditions that are to be met to generate a positively weighted network. This discussion is carried out with reference to a specific setting. Then, we apply the newly introduced technique to the study of an ensemble made of coupled Stuart–Landau oscillators [2224] and to (a simplified version of) the Kuramoto model [27, 28]. Finally, in the last section, we sum up the contributions and provide concluding remarks.

2. A recipe to obtain a Laplacian with assigned complex eigenvalues

Consider a network made of Ω nodes and denote by A the (weighted) adjacency matrix, where structural information is encoded. More precisely, the element Aij is different from zero when a directed link exists from j to i. The entries of the matrix A are real numbers and their signs reflect the specificity of the interaction at play: negative signs stand for inhibitory (or antagonistic) couplings, while positive entries point to excitatory (or cooperative) interaction. From the adjacency matrix, one can define its associated Laplacian operator. This is the matrix L, whose elements are $L_{ij}=A_{ij}-k_{i}\delta_{ij}$, where $k_{i}={\sum}_{j}A_{ij}$ represents the natural extension of the concept of (incoming) connectivity to the case of a weighted network and δij denotes the Kronecker delta.
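As a minimal numerical illustration of this definition (the toy matrix and variable names are ours, not taken from the paper):

```python
import numpy as np

# Toy weighted, signed, directed adjacency matrix (illustrative values only):
# A[i, j] != 0 means a directed link from node j to node i.
A = np.array([[ 0.0,  1.2, -0.5],
              [ 0.3,  0.0,  0.7],
              [-0.4,  0.9,  0.0]])

k = A.sum(axis=1)        # incoming weighted connectivity, k_i = sum_j A_ij
L = A - np.diag(k)       # Laplacian: L_ij = A_ij - k_i * delta_ij

print(L.sum(axis=1))     # each row sums to zero by construction
```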

We shall here discuss a procedure to generate a Laplacian matrix which displays a prescribed set of eigenvalues. As anticipated above, we will focus in particular on directed Laplacians, which yield, in general, complex spectra. Concretely, we begin by introducing a collection of Ω = 2N + 1 complex quantities, defined as follows (see the footnote at the end of the paper)

Equation (1)

The first 2N elements come in complex conjugate pairs and we set in particular ${{\Lambda}}_{i}={{\Lambda}}_{i+N}^{{\ast}}$, ∀ i = 1, ..., N, where (⋅)* stands for complex conjugation. The aim of this section is to develop a rigorous procedure to construct a directed (and weighted) graph G with 2N + 1 vertices, whose associated Laplacian has {Λi} as eigenvalues. Recall that {Λi} contains the null element, since the latter is, by definition, an eigenvalue of any Laplacian.
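For illustration, a conjugate-symmetric target spectrum of this form can be assembled as in the following sketch (numerical values and names are our own, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # Omega = 2N + 1 = 11 nodes

alpha = -rng.uniform(0.5, 2.0, N)      # real parts, here chosen negative
beta = rng.uniform(-1.0, 1.0, N)       # imaginary parts

Lam = np.concatenate([alpha + 1j * beta,   # Lambda_1 ... Lambda_N
                      alpha - 1j * beta,   # Lambda_{N+1} ... Lambda_{2N}: conjugates
                      [0.0]])              # Lambda_{2N+1} = 0, the null eigenvalue
```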

The procedure that we are going to detail in what follows exploits the eigenvalue decomposition of the Laplacian matrix. To this end we will seek to introduce a proper eigenvector basis such that

Equation (2)

is a Laplacian. In equation (2), D is a diagonal matrix where the sought Laplacian eigenvalues are stored. More specifically, $D_{ii}={\Lambda}_{i-1}$ for i = 2, ..., 2N + 1 and $D_{11}={\Lambda}_{2N+1}=0$. The problem is hence traced back to constructing V, whose columns are the right eigenvectors of L. We also recall that the rows of the inverse matrix V−1 are the left eigenvectors of L. As outlined in [15], Laplacian (right and left) eigenvectors should satisfy a set of conditions (a numerical sketch of a basis fulfilling them is given after the list below):

  • (a)  
    The columns of V, which refer to complex conjugate eigenvalues, must be complex conjugate too.
  • (b)  
    The same condition holds for the rows of V−1.
  • (c)  
    Moreover, the columns of V (resp. the rows of V−1) corresponding to eigenvalues different from 0 should sum up to zero.
  • (d)  
    Finally, the right eigenvector relative to the null eigenvalue should be uniform (i.e. display identical components).
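The specific block-structured V proposed in the paper is given in equation (3) below; as a quick numerical illustration of conditions (a)–(d), a generic admissible basis can nevertheless be assembled as follows (our own sketch, continuing from the spectrum Lam, the size N and the generator rng defined in the previous snippet):

```python
# D stores the prescribed eigenvalues: D_11 = 0, D_ii = Lambda_{i-1} otherwise.
D = np.diag(np.concatenate([[0.0], Lam[:-1]]))

Omega = 2 * N + 1
V = np.zeros((Omega, Omega), dtype=complex)
V[:, 0] = 1.0                            # condition (d): uniform right eigenvector for 0

for j in range(N):
    col = rng.normal(size=Omega) + 1j * rng.normal(size=Omega)
    col -= col.mean()                    # condition (c): columns summing to zero
    V[:, 1 + j] = col
    V[:, 1 + N + j] = col.conj()         # condition (a): conjugate-paired columns

# Conditions (b) and (c) for the rows of V^{-1} then follow automatically
# from V^{-1} V = I, since the first column of V is the all-ones vector.
W = np.linalg.inv(V)
```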

In light of the above, we put forward for V the following structure:

Equation (3)

where c is an arbitrary constant, i stands for the imaginary unit, and U is an invertible N × N matrix with real entries. Further, the vector u = (1, ..., 1)T has dimension N × 1 and the vector v is defined as

Equation (4)

The first column of V is hence a uniform vector, corresponding to the eigenvector associated with the null eigenvalue. By construction, every other column sums up to zero, that is, (4) holds. In the following, we will write $D_{jj}={\alpha}_{j}+\mathrm{i}{\beta}_{j}$, which, in turn, implies $D_{j+N,j+N}={\alpha}_{j}-\mathrm{i}{\beta}_{j}$, for j = 2, ..., N + 1. Here, αj and βj are real quantities and respectively denote the real and imaginary parts of the jth eigenvalue. To proceed further, one needs to determine the inverse of V.

To achieve this goal we begin by considering a generic matrix W, which satisfies the general constraints that are in place for V−1. In formulae:

Equation (5)

where

Equation (6)

Note that the jth and (N + j)th rows of W, for j = 1, ..., N + 1, are complex conjugated, as required. Moreover, summing all the elements of each row (but the first) yields zero, a condition that the inverse of V should meet, as anticipated above. Building on these premises, we shall here determine the unknown S, w and d so as to match the identity WV = I, where I stands for the (2N + 1) × (2N + 1) identity matrix. This implies, in turn, that W = V−1, due to the uniqueness of the inverse matrix.

A straightforward manipulation yields the following conditions for, respectively, d and w

Equation (7)

where use has been made of the identity uT u = N and where

Equation (8)

The quantity d is completely specified by the first of equations (7) and solely depends on c and N, the size of the system. By making use of the identities (4) and (6), one can progress in the analysis of the second and third conditions (7) to eventually get:

Equation (9)

Equation (10)

where:

Equation (11)

Equation (12)

It is, therefore, immediate to conclude that

Equation (13)

Equation (14)

The analysis can be pushed further to relate S to matrices U and E. The calculation, detailed in appendix A, yields

Equation (15)

The expression for S* can be immediately obtained by taking the complex conjugate of the above equation. In conclusion, the matrix W defined in (5) is the inverse matrix of V, provided that d and S are respectively assigned as specified above.

Clearly, matrix L defined in (2) has the desired spectrum (1). We should, however, prove that L is a Laplacian. This amounts to showing that L is a real matrix, whose columns sum up to zero. The proof is given hereafter.

Proposition. Matrix L is real.

From equations (3) and (5), one can readily compute the elements of L via matrix products and, taking advantage of the block structures of V and W, prove that L is real. $\mathfrak{R}\left(\cdot \right)$ is introduced to represent the real part of (⋅). The generic element Lst can be written as:

Equation (16)

Equation (17)

due to the diagonal structure of the matrix D and recalling that D11 = 0. By making use of the specific form of V and W, one gets:

Equation (18)

Equation (19)

Equation (20)

since $\alpha \beta +{\alpha }^{{\ast}}{\beta }^{{\ast}}=2\mathfrak{R}\left(\alpha \beta \right)$, for any complex numbers α and β. One can thus conclude that L, as generated by the above procedure, is real.

Proposition. Each column of L sums up to zero.

Because of the diagonal structure of D:

Equation (21)

Then:

Equation (22)

since (i) D11 = 0 and (ii) the components of all eigenvectors corresponding to non-null eigenvalues sum up to zero. Notice that this result can also be proven by observing that the uniform vector $d\mathbf{1}$ (the vector with all entries equal to d) is the left eigenvector corresponding to the null eigenvalue, that is

Equation (23)

From (23), it follows that ∑i Lij = 0, for every j.

Proposition.  L is balanced.

We can also show that ∑j Lij = 0 i.e. that the sum of all the elements of any given row i returns zero. According to (2), the first column of V is the right eigenvector corresponding to eigenvalue 0, namely

Equation (24)

This implies, in turn, ∑j Lij = 0 ∀ i, which ends the proof. The Laplacian is hence balanced, as the sums on the rows and on the (corresponding) columns return the same result.
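Continuing the numerical sketch introduced after conditions (a)–(d) above (again our own illustration, not the paper's closed-form construction), the three propositions can be verified directly:

```python
L = V @ D @ W                            # L = V D V^{-1}, equation (2)

assert np.allclose(L.imag, 0.0)          # L is real
L = L.real

assert np.allclose(L.sum(axis=0), 0.0)   # each column sums to zero
assert np.allclose(L.sum(axis=1), 0.0)   # each row sums to zero: L is balanced

# The prescribed spectrum is recovered (up to ordering and round-off).
assert np.allclose(np.sort_complex(np.linalg.eigvals(L)),
                   np.sort_complex(np.diag(D)))
```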

From the generated Laplacian operator, one can readily calculate the adjacency matrix of the underlying network. In general, for any assigned spectrum, the recovered adjacency matrix is fully connected, meaning that there exist links connecting each pair of nodes. Notice that links are weighted and signed. The weights can be very small and, as such, have a modest impact on the spectrum of the associated Laplacian. This motivates the implementation of a dedicated sparsification procedure, which seeks to remove links that are unessential in terms of their impact on the ensuing Laplacian spectrum. The next section is devoted to elaborating along these lines.

3. Examples and sparsification

In this section we discuss a sparsification procedure, which aims at a posteriori simplifying the structure of the recovered network [16–18]. To this end, we begin by generating a network following the strategy outlined in the preceding section, which yields an assigned spectrum for the associated Laplacian. The Laplacian spectrum that we seek to recover consists of Ω = 2N + 1 complex entries, the eigenvalues, which are here confined in the left portion of the complex plane, by setting $\mathfrak{R}\left({{\Lambda}}_{j}\right)={\alpha }_{j}{< }0$ for j ⩾ 2, see blue crosses in figure 1(a). This choice is somewhat arbitrary, and ultimately amounts to enforcing stability in a linear system of the form:

Equation (25)

where xi is the ith entry of the Ω-dimensional state vector x. In the final part of the paper, we will turn to considering more complex scenarios where the stability of the examined dynamics is also influenced by local reaction terms.


Figure 1. The recipe discussed in the main text is applied to generate a network of Ω = 2N + 1 = 101 nodes, whose associated Laplacian displays the spectrum depicted (blue stars) in panel (a). The spectrum of the Laplacian obtained from the sparsified network is shown with red circles. We here follow the first sparsification recipe as illustrated in the main body of the paper. More specifically, we trim unessential links, chosen among those that bear very modest weights, while confining the Laplacian spectrum in a bounded domain, located in the negative portion of the complex plane. Here, σ is 0.01 and δ = 0.5. In panel (b), the sparsity pattern of the adjacency matrix obtained upon application of the sparsification algorithm is shown. The entries of the adjacency matrices, before and after the sparsification are respectively plotted, with an appropriate color-code, in panels (c) and (d). The main structure of the network is preserved upon application of the devised sparsification protocol.


The network that we obtain by following the scheme outlined in the preceding sections, which yields a Laplacian with the prescribed spectrum, is in general fully connected. In other words, a weighted link exists between any pair of nodes. The weights of the links can be, in principle, very small and, as such, bear a modest imprint on the ensuing Laplacian spectrum. Motivated by this observation, we perform an a posteriori sparsification of the obtained network: this aims at identifying and then removing the subset of links that appear to have a modest impact on the eigenvalues of the associated Laplacian.

The first sparsification procedure that we have considered aims at removing unessential links while confining the spectrum of the Laplacian operator within a bounded region of the complex plane. More precisely, we focus on the links which display a weight in the range (−σ, σ), where σ is a small, arbitrarily chosen, cut-off. All links whose weight is smaller than σ in absolute value are selected, in random order. Each selected link is tentatively removed and the modified Laplacian spectrum computed. Denote by ${\tilde {{\Lambda}}}_{j}$, for j = 2, ..., 2N + 1, the Laplacian eigenvalues obtained upon removal of the link. The change to the network arrangement becomes permanent if $\vert {\min }_{j}\left[\mathfrak{R}\left({\tilde {{\Lambda}}}_{j}\right)\right]-{\min }_{j}\left[\mathfrak{R}\left({{\Lambda}}_{j}\right)\right]\vert {< }\delta $ and $\vert {\max }_{j}\left[\mathfrak{I}\left({\tilde {{\Lambda}}}_{j}\right)\right]-{\max }_{j}\left[\mathfrak{I}\left({{\Lambda}}_{j}\right)\right]\vert {< }\delta $, for j = 2, ..., N. Here $\mathfrak{I}\left(\cdot \right)$ stands for the imaginary part of (⋅) and δ is an arbitrary threshold which quantifies the amount of perturbation that is deemed acceptable for the problem at hand. As a further condition, we check that $\mathfrak{R}\left({\tilde {{\Lambda}}}_{j}\right){< }0$ for j = 2, ..., 2N + 1, which, in turn, corresponds to preserving the stability of the linear system (25). Clearly, the order of extraction of the links, which are candidates for trimming, matters. Different realizations of the procedure of progressive sparsification illustrated above might hence result in distinct final outcomes. In figure 1(a), the eigenvalues obtained after the sparsification algorithm are plotted (red circles) for one choice of the cutoff δ. The sparsity pattern of the adjacency matrix obtained at the end of the above procedure is displayed in panel (b) of figure 1. In panels (c) and (d) of figure 1 we plot, with an appropriate color code, the entries of the adjacency matrices, before and after the sparsification. Only weights which are significantly different from zero (see annexed colorbars) are displayed. As appreciated by visual inspection, the skeleton of the network is not altered by the applied sparsification. To monitor how the eigenvalues get redistributed within the bounded domain to which they belong, we introduce the following indicators:

Equation (26)

Equation (27)

The quantity Ix measures the dispersion along the imaginary axis, by weighting the squared distance of each eigenvalue from the horizontal axis. Conversely, Iy reflects the scattering of the eigenvalues about their mean, in the direction of the real axis. In figure 2, Ix and Iy , normalized to their respective values obtained before application of the sparsification algorithm, are shown versus N, an indicator of the size of the generated networks. The sparsification procedure shrinks the eigenvalues in the x-direction, while the opposite tendency is observed for the distribution along the y-direction.
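A possible implementation of the first pruning loop described above is sketched below (function and variable names are ours; this is not the authors' reference code):

```python
import numpy as np

def sparsify_bounded(A, sigma, delta, rng):
    """First sparsification recipe: tentatively remove weak links and keep each
    removal only if the spectrum stays inside the prescribed bounded domain."""
    A = A.copy()
    lam0 = np.linalg.eigvals(A - np.diag(A.sum(axis=1)))
    ref_re = np.min(lam0.real)           # reference bounds of the original spectrum
    ref_im = np.max(lam0.imag)

    weak = [(i, j) for i in range(A.shape[0]) for j in range(A.shape[1])
            if i != j and A[i, j] != 0 and abs(A[i, j]) < sigma]

    for idx in rng.permutation(len(weak)):       # links examined in random order
        i, j = weak[idx]
        w = A[i, j]
        A[i, j] = 0.0                            # tentative removal of link j -> i
        lam = np.linalg.eigvals(A - np.diag(A.sum(axis=1)))
        re = np.sort(lam.real)
        ok = (abs(np.min(lam.real) - ref_re) < delta and
              abs(np.max(lam.imag) - ref_im) < delta and
              re[-1] < 1e-10 and re[-2] < 0.0)   # only the null mode sits on the axis
        if not ok:
            A[i, j] = w                          # restore the link: move rejected
    return A

# Usage sketch, with the thresholds quoted in figure 1:
# A_sparse = sparsify_bounded(A, sigma=0.01, delta=0.5, rng=np.random.default_rng(1))
```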


Figure 2. Panel (a): Ix, as measured at the end of the sparsification and normalized to the corresponding value before the sparsification, is plotted versus N, the size of the explored network. Panel (b): Iy, calculated after the sparsification and normalized to the corresponding value before the sparsification, is depicted versus N. In both cases, blue symbols are computed as the average over 15 different realizations of the generated network (with the same given spectrum). For each value of N, the real and imaginary components of the eigenvalues are random numbers, drawn from a normal distribution. The solid line is a guide for the eye. Here, δ = 0.5 and σ = 0.01.


The second sparsification method implements a more stringent constraint. Just like before, we select the links with weights in the range (−σ, σ), where σ acts as a small threshold. Unlike the former case, we now eliminate the selected link only if the change produced on each of the non-trivial eigenvalues is smaller than δ in modulus, namely if $\vert {{\Lambda}}_{j}-{\tilde {{\Lambda}}}_{j}\vert {< }\delta $, for j = 2, ..., 2N + 1. In figure 3, the eigenvalues obtained after the sparsification algorithm are plotted (red circles) for two choices of the cutoff δ. The number of links that can be effectively removed grows with δ, the size of the allowed perturbation, as clearly demonstrated in figure 4.
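For the second recipe only the acceptance test changes; a rough sketch (our own, pairing each perturbed eigenvalue with its nearest original neighbour, which approximates the exact bookkeeping) reads:

```python
import numpy as np

def accept_local(lam_old, lam_new, delta):
    """Accept the removal only if every eigenvalue moved by less than delta in
    modulus. Eigenvalues are paired greedily with their nearest neighbour."""
    remaining = list(lam_new)
    for lam in lam_old:
        k = int(np.argmin(np.abs(np.array(remaining) - lam)))
        if abs(remaining[k] - lam) >= delta:
            return False
        remaining.pop(k)
    return True
```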


Figure 3. Effect of the second method of sparsification on a network made of Ω = 2N + 1 = 101 nodes. The original spectrum is plotted with (blue) stars. The modified one with (red) circles. Panel (a) refers to δ = 0.07 while panel (b) to δ = 0.2. Here, σ = 0.01.


Figure 4. The percentage of links that are cut against the allowed perturbation δ. Blue symbols refer to five independent realizations, for each value of δ. The black line goes through the average values, computed from the collection of independent runs, at fixed δ. Here, N = 50 (Ω = 2N + 1 = 101) and σ = 0.01.


As mentioned above, this latter method implements a local, hence stringent, constraint. Allowing only modest re-arrangements of individual eigenvalues yields a spectral distribution in the complex plane which is, by construction, close to the original one. This method is better suited when the room for maneuver is limited, due to the specific constraints that the reactive model implements. On the other hand, the number of pruned links can be scanty when only small local perturbations are allowed for. A global re-organization of the operator's spectrum, which entails a significant reduction in the associated number of active links, might instead be accomplished when the eigenvalues are merely constrained to fall within a compact domain in the complex plane, as in the former sparsification procedure.

Summing up, we have developed and tested a procedure to generate a network which returns an associated Laplacian matrix with a prescribed complex spectrum. The weighted network obtained following the above procedure is, in general, fully connected. Dedicated sparsification strategies can be applied to remove the links which carry a small weight, and bear a modest imprint on the ensuing Laplacian spectrum. In the following, we will consider a specific setting of the aforementioned generation scheme, which makes it possible for the Laplacian elements to be computed analytically.

4. Focusing on the special case U = qI

In the previous sections we described a general method to generate a Laplacian matrix which displays a designated spectrum. The method assumes a generic matrix U, which can be randomly assigned. In the following, we will focus on the specific case where U is proportional to the identity matrix and progress with the analytic characterization of the obtained Laplacian. As we shall argue in the following, working in this framework allows us to derive a set of closed conditions for constraining the weights of the underlying network to strictly positive values. To proceed in this direction we set:

Equation (28)

where I stands for the identity matrix and q is a scalar.

A straightforward calculation returns the following expression for matrix S:

Equation (29)

while w is a uniform vector with identical entries equal to Na + ((N − 1)ab)i and the quantities d, a, b are specified by

Equation (30)

Equation (31)

Equation (32)

From equations (16) and (18) one can obtain a closed expression for each element of the Laplacian, as a function of the eigenvalues. The interested reader can find the detailed computations in appendix B. In the following, the final formulae are reported.

The diagonal elements satisfy:

Equation (33)

Equation (34)

Equation (35)

where use has been made of the identity $\mathfrak{R}\left({D}_{kk}\right)={\alpha }_{k}$. The first row and column are given by:

Equation (36)

Equation (37)

Equation (38)

Equation (39)

while the remaining elements are:

Equation (40)

Equation (41)

Equation (42)

Equation (43)

Equation (44)

Equation (45)

4.1. Controlling the sign of non-diagonal Laplacian entries

The Laplacian matrix obtained with the procedure illustrated above has both positive and negative entries. Signed Laplacians are often used in consensus problems, where negative weights model antagonistic interactions. In other contexts, e.g. when the Laplacian stems from diffusive interactions, the non-diagonal entries are constrained to positive values. In the following, we will provide a set of necessary conditions for the assigned spectrum to eventually yield a Laplacian with positive off-diagonal elements, Lij > 0 for i ≠ j. Clearly, Lii < 0, as summing on the rows should return zero. The underlying network, therefore, displays positive weights, as its adjacency matrix is basically obtained from the Laplacian matrix by replacing the diagonal elements with zeros.

Further, we will set $\mathfrak{R}\left({{\Lambda}}_{j}\right)={\alpha }_{j}{< }0$ for j ⩾ 2, an assumption which corresponds to dealing with a stable linear system of the type given in equation (25). This requirement immediately yields L11 < 0, as it follows from relation (33). We will also operate in the setting analyzed above, i.e. assuming U = qI. The obtained expressions for the Laplacian elements allow us to recast the sought conditions on their signs as:

Equation (46)

where the inequalities hold for t = 2, ..., N + 1.

The above conditions can be simplified, after some algebraic manipulations, to return

Equation (47)

and

Equation (48)

The interested reader can access the detailed steps in appendix C.

Assume that the assigned Laplacian spectrum matches the above conditions, while having αk < 0 for k ⩾ 2. Then the Laplacian matrix obtained with the above procedure, with U = qI, displays positive non-diagonal entries.

Let us explore the consequences of conditions (47) and (48). To this end, introduce $\bar{\alpha }$, the average of the real parts of the non-trivial Laplacian eigenvalues, i.e. $\bar{\alpha }=\left({\sum }_{k}{\alpha }_{k}\right)/\left(2N\right)$. A straightforward analysis allows us to conclude that the generated Laplacian returns positive non-diagonal elements if the assigned non-trivial eigenvalues (i.e. Λk, with k > 1) fall in a bounded rectangular domain of the complex plane. More specifically, the rectangular region is symmetric with respect to the horizontal (real) axis, and extends along the vertical direction (imaginary axis) by ${\pm}2N\bar{\alpha }/\left(4{N}^{2}-1\right)$. The rectangle is completed by two vertical sides, positioned at $\frac{4{N}^{2}}{4{N}^{2}-1}\bar{\alpha }$  and  $\frac{2N}{2N+1}\bar{\alpha }$. Working at fixed N, the larger $\vert \bar{\alpha }\vert $, the more extended the rectangle along the vertical direction. Conversely, when making N larger, the rectangle shrinks in the horizontal direction and eventually becomes degenerate as N → ∞. In other words, for large values of N, eigenvalues should align on a vertical segment positioned at $\bar{\alpha }$, and whose extension increases linearly with $\vert \bar{\alpha }\vert $ (while decreasing with N). This is shown in figure 5, where three different spectra are depicted (for different choices of $\bar{\alpha }$) which yield a Laplacian with positive off-diagonal elements.
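As a quick numerical companion (our own sketch, simply transcribing the bounds quoted above; the helper name is hypothetical), one can test whether a candidate spectrum falls inside this admissible rectangle:

```python
import numpy as np

def in_positive_weight_box(Lam, N):
    """Check whether the non-trivial eigenvalues fall inside the rectangle that,
    for U = qI, guarantees positive off-diagonal Laplacian entries (bounds as
    quoted in the main text)."""
    nz = Lam[np.abs(Lam) > 1e-12]                   # discard the null eigenvalue
    alpha_bar = nz.real.mean()                      # average real part (negative)
    re_lo = 4 * N**2 / (4 * N**2 - 1) * alpha_bar   # leftmost admissible real part
    re_hi = 2 * N / (2 * N + 1) * alpha_bar         # rightmost admissible real part
    im_max = 2 * N * abs(alpha_bar) / (4 * N**2 - 1)
    return bool(np.all((nz.real >= re_lo) & (nz.real <= re_hi) &
                       (np.abs(nz.imag) <= im_max)))
```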


Figure 5. Examples of discrete spectra (red symbols) which yield a Laplacian with positive non-diagonal entries. The rectangular boxes identify the region where the eigenvalues should fall for the ensuing network to display positive weights, and are traced according to the conditions derived in the main body of the paper. Here, Ω = 2N + 1 = 21 (a), Ω = 2N + 1 = 41 (b) and Ω = 2N + 1 = 81 (c).


In the following section, we will apply the method of Laplacian generation here discussed to the study of two prototypical examples of dynamical systems on networks.

5. Selected applications

In this section we consider two different models of interacting oscillators, defined on a network. In both cases, the coupling between individual oscillators is implemented via a discrete Laplacian operator, which reflects the specific network arrangement. It will be shown that a suitable network arrangement can be a priori established, building on the procedure illustrated above, so as to make the inspected systems stable against external perturbations.

5.1. Coupled Stuart–Landau oscillators

Consider an ensemble made of 2N + 1 nonlinear oscillators and denote by Wi their associated complex amplitude. We assume the oscillators to be mutually coupled via a diffusive-like interaction which is mathematically modeled by a discrete Laplacian operator. Each oscillator obeys a complex Stuart–Landau equation [25]. The dynamics of the system can be cast in the form:

Equation (49)

where c1 and c2 are real parameters. The index j runs from 1 to 2N + 1, the total number of oscillators. Here, K is a suitable parameter setting the coupling strength. Without loss of generality, in what follows it is assumed that K = 1. $L_{ij}=A_{ij}-k_{i}\delta_{ij}$ is the Laplacian, Aij is the generic entry of the directed and weighted adjacency matrix A and ki = ∑j Aij.

The system admits a homogeneous limit cycle solution in the form ${W}_{LC}\left(t\right)={\mathrm{e}}^{-\mathrm{i}{c}_{2}t}$. To characterize the stability of the cycle [26], one can introduce a non homogeneous perturbation in polar coordinates as:

Equation (50)

By linearizing around the limit cycle solution (ρi (t) = 0, θi (t) = 0), one gets:

Equation (51)

To proceed further, expand the perturbations ρj and θj on the Laplacian eigenvectors v(α), that is

Equation (52)

By inserting this expansion in (51) and using the relation

Equation (53)

for α = 1, ..., 2N + 1, we obtain a condition formally equivalent to the expression of the continuous dispersion relation

Equation (54)

If ${\lambda }_{\mathfrak{R}}=\mathfrak{R}\left({\lambda }_{\mathrm{max}}\right)$ is positive for some Λ(α), the perturbation grows exponentially in time, and the initial homogeneous state turns out to be unstable. Conversely, if ${\lambda }_{\mathfrak{R}}=\mathfrak{R}\left({\lambda }_{\mathrm{max}}\right){< }0$, for every Λ(α), the perturbation gets re-absorbed and the system converges back to the fully synchronized state. The condition ${\lambda }_{\mathfrak{R}}{< }0$ can be further processed analytically, as discussed in [15]. In particular, it can be shown that the latter condition is fulfilled, if the Laplacian eigenvalues fall in a specific portion of the parameter plane, which reflects the choice made for the reaction parameters c1, c2 and K. The region of interest is the one enclosed between the two solid lines, displayed in figure 6(a) for the specific selection of the parameters. The blue symbols depicted in figure 6(a) are randomly generated so as to fall in the region of the complex plane where stability holds. They represent the spectrum of the Laplacian that we seek to recover following the method illustrated above. In figure 6(b), ${\lambda }_{\mathfrak{R}}$ is plotted against $-{{\Lambda}}_{\mathfrak{R}}=-\mathfrak{R}\left({\Lambda}\right)$, confirming the stability of the homogeneous solution.
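The paper's equations (49)–(54) are not reproduced here. Assuming the standard Stuart–Landau (complex Ginzburg–Landau) form $\dot{W}_{j}=W_{j}-(1+\mathrm{i}c_{2})\vert W_{j}\vert^{2}W_{j}+K(1+\mathrm{i}c_{1})\sum_{k}L_{jk}W_{k}$, which is our own assumption for illustration (it is consistent with the limit cycle $W_{LC}=\mathrm{e}^{-\mathrm{i}c_{2}t}$ quoted above, but is not copied from the paper), the growth rate entering the dispersion relation can be evaluated, for each Laplacian eigenvalue, from a 2 × 2 Jacobian:

```python
import numpy as np

def growth_rate(Lam_alpha, c1, c2, K=1.0):
    """Largest real part of the linear growth rate associated with one Laplacian
    eigenvalue Lam_alpha, for the assumed Stuart-Landau normal form (see text)."""
    J = np.array([[-2.0 + K * Lam_alpha,           -K * c1 * Lam_alpha],
                  [-2.0 * c2 + K * c1 * Lam_alpha,  K * Lam_alpha]], dtype=complex)
    return np.linalg.eigvals(J).real.max()

# The synchronized solution is stable if the rate is negative for every
# non-trivial Laplacian eigenvalue, e.g. with the parameters of figure 6:
# stable = all(growth_rate(l, c1=3.0, c2=2.4224) < 0 for l in Lam_nontrivial)
```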


Figure 6. Panel (a): the system of coupled Stuart–Landau oscillators is stable if the eigenvalues of the Laplacian operator fall in the portion of the complex plane comprised between the two solid lines. Crosses (in blue) represent the (randomly assigned) eigenvalues of the Laplacian that we aim at obtaining by means of the procedure introduced in this paper. Circles (in red) refer to the spectrum obtained after the sparsification (see main body of the paper). In the sparsification procedure we assumed σ = 0.01 and δ = 5. The percentage of pruned links is around 35% of the total. Panel (b): ${\lambda }_{\mathfrak{R}}$ is plotted against $-{{\Lambda}}_{\mathfrak{R}}=-\mathfrak{R}\left({\Lambda}\right)$. This is an alternative way to show that the system is stable with the prescribed Laplacian spectrum. The solid line stands for the dispersion relation obtained in the continuum limit, when the discrete Laplacian is replaced by a standard differential operator. Panel (c): the time evolution of the real components $\mathfrak{R}\left({W}_{j}\right)$ is shown, with an appropriate color code. Here, the weighted network which specifies the coupling between the nodes is obtained from the sparsified Laplacian. Parameters: c1 = 3, c2 = 2.4224 and K = 1.


We now proceed by generating a Laplacian matrix, which is constructed so as to yield the spectrum depicted in figure 6(a), see crosses (in blue). Circles (in red) refer to the spectrum obtained after the sparsification. In this case we operated with the more stringent sparsification protocol, the second of those introduced above. The latter appears better suited to the case at hand because of the shape of the domain where the eigenvalues should eventually fall to enforce stability. As an additional condition to the algorithm, we accepted only the sparsification moves which prevent the eigenvalues from invading the region of the complex plane associated with instability.

The sparsified adjacency matrix A defines the network of interactions between coupled oscillators, as per equation (49). We hence integrate numerically the governing equations, assuming the initial state to be a perturbation of the homogeneous synchronized equilibrium. As expected, the perturbation fades away and the system regains its unperturbed, fully synchronized, equilibrium.

5.2. Coupled Kuramoto oscillators

As a second example, we set out to study the Kuramoto model. Consider a system made of 2N + 1 oscillators, denote by θi the phase of the ith oscillator, and by ωi its natural frequency. The oscillators evolve as dictated by the following system of 2N + 1 coupled differential equations:

Equation (55)

Here, Aij stands for the entries of the adjacency matrix A which sets the interactions between pairs of oscillators. The matrix is, in principle, weighted, and may display positive and negative entries, reflecting the specific interaction (excitatory or inhibitory) at play.

As an additional assumption, we will here focus on the simplified setting where ωi = ω, ∀ i. We can then introduce the new variable ψi = θi − ωt, and write the governing equations in the equivalent form:

Equation (56)

A homogeneous solution always exists with ψi = Ψ ∀ i, for any constant Ψ ∈ [0, 2π), as can be immediately checked by substitution. To assess the stability of the solution, one sets ψi = Ψ + δi, and expands (56) at the leading order in the δi. In this way, one gets:

Equation (57)

where $L_{ij}=A_{ij}-k_{i}\delta_{ij}$ is the Laplacian operator which stems from Aij. The stability of the simplified Kuramoto model here considered is controlled by a linear system of the type introduced in (25), with the obvious replacement of xi with δi. The system is hence stable if the (non-trivial) eigenvalues of the Laplacian operator display negative real parts. Our aim, here, is to generate a Laplacian (and therefore a matrix of pairwise, weighted connections among oscillators) which warrants the stability of the system. To this end we assign the eigenvalues (which appear in conjugate pairs) to belong to the negative portion of the complex plane, see (blue) crosses in figure 7(a). The null eigenvalue is clearly included in the spectrum. Running the procedure discussed in the first part of the paper, we obtain the corresponding Laplacian and compute the associated adjacency matrix. The first of the two sparsification procedures is then applied to remove unessential links, yielding the spectrum of the associated Laplacian depicted with (red) circles in figure 7(a). The Kuramoto model (56) is then integrated numerically by assuming the recovered expression for A. As predicted, the system is stable against external perturbations, as one can clearly appreciate by inspection of figure 7(b).
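A minimal integration sketch follows (our own; it assumes the standard Kuramoto coupling $\dot{\psi}_{i}=\sum_{j}A_{ij}\,\sin(\psi_{j}-\psi_{i})$, which is consistent with the linearization (57) but is not copied from equation (56) itself):

```python
import numpy as np

def kuramoto_step(psi, A, dt):
    """One explicit Euler step of psi_i' = sum_j A_ij sin(psi_j - psi_i)."""
    dpsi = (A * np.sin(psi[None, :] - psi[:, None])).sum(axis=1)
    return psi + dt * dpsi

# Usage sketch: perturb the homogeneous state Psi = 0 and integrate.
# rng = np.random.default_rng(2)
# psi = 0.1 * rng.normal(size=A.shape[0])
# for _ in range(20000):
#     psi = kuramoto_step(psi, A, dt=1e-3)
# With a stable (sparsified) Laplacian spectrum, psi relaxes to a uniform value.
```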


Figure 7. Panel (a): crosses (in blue) represent the spectrum of the Laplacian operator that we seek to recover. The eigenvalues are distributed in the left portion of the complex plane to ensure stability of the inspected Kuramoto model. Circles (in red) stand for the spectrum of the Laplacian after the sparsification procedure. We here apply the first sparsification algorithm as discussed in the main body of the paper. More specifically, unessential links with modest weights are removed, while confining the Laplacian spectrum in a bounded domain, located in the negative portion of the complex plane. Here, σ = 0.01 and δ = 0.2. The percentage of trimmed links is around 9% of the total. Panel (b): θi vs time. Here, the adjacency matrix employed in the numerical integration is the one obtained after the sparsification. As an initial condition, we perturb the homogeneous solution (with Ψ = 0) by a random heterogeneous amount. After a transient, the perturbation gets absorbed and the oscillators evolve in unison. Here Ω = 2N + 1 = 51.


6. Conclusions

Studying the dynamics of an ensemble made of interacting units on a network is central to a wide range of applications. In many cases of interest, individual units evolve under the influence of homologous constituents, the interaction stemming in general from binary exchanges. Distinct fundamental units are assigned to different nodes of the collection, paired via physical or virtual links. For a relevant subclass of problems, the stability of the ensuing equilibrium can be traced back to the spectrum of the Laplacian operator, computed from the adjacency matrix which defines the network arrangement. Symmetric networks yield Laplacian operators with a real spectrum, while directionality in the couplings reflects in a complex spectrum. Methods exist which allow one to generate a symmetric network, hence Laplacian, with a prescribed real spectrum. Starting from these premises, we have proposed and tested a novel procedure to generate a (signed and directed) Laplacian which returns an a priori assigned (complex) spectrum. A special case has also been considered, which enables one to recover closed analytical expressions for the entries of the sought Laplacian. Working in this setting, we can elaborate on the conditions that have to be matched for the ensuing Laplacian to solely display positive non-diagonal elements. Dedicated sparsification procedures are also discussed, which help remove links that are unessential in terms of their impact on the associated spectrum. The algorithm for Laplacian generation has been successfully tested with reference to two prototypical examples of coupled oscillators. Taken together, our work explores possible strategies for network generation with the emphasis placed on dynamical, rather than structural, features. The dynamics is indirectly modulated by the spectrum of the Laplacian operator, which here constrains the generation algorithm.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary information files).

Appendix A.: On the explicit expression of S

Given two arbitrary constants α and β, the following identity holds

Equation (A1)

for any matrix E. If E is the matrix defined in (8) one gets:

Equation (A2)

and then

Equation (A3)

If $\beta =-\frac{\alpha }{1+\alpha N}$, from (A3) we get

Equation (A4)

The above result can be used to derive the structure of matrix S, as reported in the main body of the paper. To this end, we begin by rewriting the matrices A and B as

Equation (A5)

Equation (A6)

From (A4) we obtain

Equation (A7)

and then, after some calculations, we get

In conclusion,

because of

Equation (A8)

which proves the results.

Appendix B.: About the computation of Lij

The aim of this section is to detail the computations needed to explicitly obtain the entries of the Laplacian matrix L under the assumption U = qI, thus starting from equations (16) and (18).

Let us begin by computing the diagonal elements of L, namely Lii for i = 1, ..., N. For i = 1, one gets:

Equation (B1)

From (4) and (6) we obtain for k = 2, ..., N + 1:

Equation (B2)

Equation (B3)

Then

Equation (B4)

Equation (B5)

Equation (B6)

where use has been made of the identity $\mathfrak{R}\left({D}_{kk}\right)={\alpha }_{k}$.

We can proceed in analogy for the other diagonal elements of the Laplacian matrix. In particular, for s = 2, ..., N + 1, we have that

Equation (B7)

and then

Equation (B8)

Equation (B9)

Equation (B10)

Equation (B11)

Conversely, for s = N + 2, ..., 2N + 1, we have

Equation (B12)

and consequently:

Equation (B13)

Equation (B14)

Let us proceed now with the other elements of the first row of L. For t = 2, ..., N + 1 we can write

Equation (B15)

Equation (B16)

Equation (B17)

Equation (B18)

Equation (B19)

and for t = N + 2, ..., 2N + 1 we get

Equation (B20)

Equation (B21)

Equation (B22)

Equation (B23)

For s = 2, ..., N + 1 and t = N + 2, ..., 2N + 1 with t ≠ s + N, we get:

Equation (B24)

Equation (B25)

Equation (B26)

Equation (B27)

Equation (B28)

while, for s = N + 2, ..., 2N + 1 and t = 2, ..., N + 2 with t ≠ s − N, the following expression holds:

Equation (B29)

Equation (B30)

Equation (B31)

For s = 2, ..., N + 1

Equation (B32)

Equation (B33)

Equation (B34)

Equation (B35)

Equation (B36)

while, for s = N + 2, ..., 2N + 1, we obtain:

Equation (B37)

Equation (B38)

Equation (B39)

Equation (B40)

For t = 2, ..., N + 1

Equation (B41)

Equation (B42)

Equation (B43)

and for t = N + 2, ..., 2N + 1

Equation (B44)

Equation (B45)

Equation (B46)

For t, s = 2, ..., N + 1 and s ≠ t

Equation (B47)

Equation (B48)

Equation (B49)

and, finally, for s, t = N + 2, ..., 2N + 1 and s ≠ t, one gets:

Equation (B50)

Equation (B51)

Equation (B52)

Summing up, we have here provided closed-form analytical expressions for all the entries of the Laplacian matrix, as a function of the assigned spectrum.

Appendix C.: Positiveness of L

The aim of this section is to work out the algebraic steps needed to rewrite (46) in the simpler form given by (47) and (48).

Let us thus rewrite (46)

where the inequalities hold for t = 2, ..., N + 1. Then the second and the third conditions of system (46) can be matched simultaneously provided that:

Equation (C1)

Remark that:

Equation (C2)

holds for arbitrary values of αt < 0 and N. Hence, system (46) simplifies as follows:

Equation (C3)

for t = 2, ..., N + 1.

Let us now focus on the conditions for βt . We assume that the following condition holds:

Equation (C4)

and we set to explore its consequences. Equation (C4) yields:

Equation (C5)

For N > 1, condition (C3) therefore maps into the following equivalent system:

Equation (C6)

Remark that:

Equation (C7)

due to the inequality

Equation (C8)

which holds for each N > 1. Hence, system (C6) takes the form:

Equation (C9)

Then, (C9) has no solutions, under the working hypothesis that we have put forward in deriving it. In fact:

Equation (C10)

that is

Equation (C11)

and summing on every t we get

Equation (C12)

which, in turn, implies $\frac{4{N}^{2}}{4{N}^{2}-1}{< }1$, a condition that is obviously never met. We now go back to revise ansatz (C4), and consider the alternative scenario:

Equation (C13)

Then, (C3) becomes

Equation (C14)

Following a path analogous to the one discussed above, we get:

Equation (C15)

and

Equation (C16)

Footnotes

  • We shall assume all the eigenvalues but 0 to be complex numbers. Let us remark that the method developed readily adapts to the case where real eigenvalues are also present. In this case, the eigenvectors associated with real eigenvalues are generated according to the prescriptions of [14]. Eigenvectors linked to complex eigenvalues are instead assigned following the procedure outlined here.
