
Escaping the avalanche collapse in self-similar multiplexes


Published 22 May 2015 © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: M Ángeles Serrano et al 2015 New J. Phys. 17 053033. DOI: 10.1088/1367-2630/17/5/053033


Abstract

We deduce and discuss the implications of self-similarity for the robustness to failure of multiplexes, depending on interlayer degree correlations. First, we define self-similarity of multiplexes and illustrate the concept in practice using the configuration model ensemble. Circumscribing robustness to survival of the mutually percolated state, we find a new explanation, based on self-similarity, both for the observed fragility of interconnected systems of networks and for their robustness to failure when interlayer degree correlations are present. Extending the self-similarity arguments, we show that interlayer degree correlations can completely change the global connectivity properties of self-similar multiplexes, so that they can even recover a zero percolation threshold and a continuous transition in the thermodynamic limit, thus qualitatively exhibiting the ordinary percolation properties of noninteracting networks. We confirm these results with numerical simulations.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Self-similarity is defined in a wide sense as the property of some systems to be, either exactly or statistically, similar to a part of themselves. This property is found in certain geometric objects that are intrinsically embedded in metric spaces, so that distance in the metric space gives a natural standard of measurement to uncover similar patterns at different observation scales [1]. In complex networks, the definition of self-similarity is not obvious since many networks are not explicitly embedded in any physical geometry and the only available metric is the one induced by the collection of shortest path lengths between nodes. This metric has, in fact, been used to measure the fractal and self-similar properties of complex networks [2, 3]. However, the small-world property typically found in real complex networks strongly limits the range of scales where such properties can be observed.

In the absence of a natural geometry, the main problem in the definition of self-similarity stems from the fact that there is, a priori, no way to decide what is the 'part' of the system that should be compared to (and look like) the 'whole'. In this sense, self-similarity is not an intrinsic property of the system but is directly related to the specific procedure used to identify the appropriate subsystem. In previous work on single networks, self-similarity was properly defined on the basis of a nested hierarchy of subgraphs and proved for general classes of models. These include random scale-free models with and without underlying metric spaces and models of growing networks [4, 5]. Interestingly, metric network models are able to provide a plausible explanation for key topological properties observed in real networks [6–8], including scale-free degree distributions, high levels of clustering, the small-world property, and self-similarity.

Self-similarity has important implications in the global structure of networks and, in particular, in their vulnerability to failures of their constituents. For instance, self-similarity alone—independently of the divergence of the second moment of the degree distribution—explains the absence of a percolation threshold in random scale-free networks, with a proof that avoids the usual locally tree-like and other limiting assumptions [5]. Moreover, the same proof applies to ensembles of graphs with highly non-trivial topologies as long as they belong to the same self-similarity class. In [5], the absence of a percolation threshold was also proved and numerically confirmed in ensembles of random networks embedded in metric spaces with strong clustering and in ensembles of growing networks with bounded topological fluctuations.

In this work, we extend the concept of self-similarity to multiplexes—defined as networks of nodes interconnected by different classes of links, each class named a layer [9]. Out of the many different self-similar ensembles in single networks, we choose for simplicity the configuration model and generalize it to multiplexes in order to state explicitly the definition and significance of self-similarity in such structures. In particular, we study the implications of self-similarity for the robustness to failure of multiplexes with and without interlayer degree correlations. Circumscribing robustness to survival of the mutually percolated state [10, 11], we find a new explanation based on self-similarity both for the observed fragility of uncorrelated scale-free systems of networks [10, 12] and for their robustness to failure when correlations are present [13, 14]. We find that interlayer degree correlations can completely change the global connectivity properties of self-similar scale-free multiplexes, which can recover a zero percolation threshold and a continuous transition in the thermodynamic limit, thus qualitatively exhibiting the ordinary percolation properties of single scale-free networks.

The paper is organized as follows. In section 2, we review the definition of self-similarity in single-layered networks and extend it to multiplexes. In section 3, we discuss the self-similarity properties of the canonical configuration model generalized to multiplexes, both with and without interlayer degree correlations. In section 4, we use this model to deduce and discuss the implications of self-similarity on mutual percolation and check our predictions against numerical simulations. Finally, we conclude in section 5.

2. Self-similar ensembles

In this section, we first review our findings on this topic in the case of single networks and then extend them to the case of multiplexes.

2.1. One-layered self-similar ensembles

Let $\mathcal{G}(\{\alpha \})$ be an ensemble of sparse graphs in the thermodynamic limit, where $\{\alpha \}$ is the set of model parameters. For example, in the case of the Erdős–Rényi model [15, 16] the set $\{\alpha \}$ is just the average degree $\langle k\rangle $. Consider a transformation rule T that for each graph $G\in \mathcal{G}(\{\alpha \})$ selects one of G's subgraphs. Denote the ensemble of these subgraphs by ${{\mathcal{G}}_{T}}(\{\alpha \})$. The ensemble $\mathcal{G}(\{\alpha \})$ is called self-similar with respect to T if the transformed ensemble is the same as the original one except for some transformation of the model parameters, that is

$$\mathcal{G}_T(\{\alpha\}) = \mathcal{G}(\{\alpha_T\}) \qquad (1)$$

where $\{{{\alpha }_{T}}\}$ are the ensemble parameters after the filtering process. This definition does not assume anything about the transformation rule T and, in fact, the same ensemble can be self-similar under different rules. As a simple example, consider the Erdős–Rényi model with $N\gg 1$ nodes and connection probability among pairs of nodes $p=\langle k\rangle /N$. Now consider the transformation rule that selects $N_T$ nodes uniformly at random out of the original N nodes, along with their connections. It is easy to see that such a subgraph belongs to the Erdős–Rényi ensemble but with an average degree

$$\langle k\rangle_T = \frac{N_T}{N}\,\langle k\rangle \qquad (2)$$

Note that the average degree of subgraphs generated with this procedure is smaller than the average degree of the original network.
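As a quick numerical illustration of equation (2) (not part of the original paper), the following Python sketch samples a random subset of nodes from an Erdős–Rényi graph and compares the average degree of the induced subgraph with the prediction $(N_T/N)\langle k\rangle$; the sizes, seed, and use of networkx are arbitrary choices.

import random
import networkx as nx

N, k_avg = 20000, 6.0                     # original size and target average degree
G = nx.gnp_random_graph(N, k_avg / N, seed=1)

N_T = N // 4                              # keep a quarter of the nodes, chosen uniformly at random
nodes_T = random.sample(list(G.nodes()), N_T)
G_T = G.subgraph(nodes_T)

k_T = 2 * G_T.number_of_edges() / N_T     # measured average degree of the subgraph
print(k_T, (N_T / N) * k_avg)             # both values should be close to 1.5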

In the ensembles studied in [4, 5]—including the standard configuration model with scale-free degree distributions and zero clustering, scale-free networks with finite clustering and metric structure, and non-equilibrium networks, like generic growing network models—the only model parameter that changes after the transformation is the average degree of the subgraph, ${{\langle k\rangle }_{T}}$. Typically, this average is a monotonic function of the ratio between the size of the original network N and the size of the subgraph $N_T$, that is

$$\langle k\rangle_T = f\!\left(\frac{N}{N_T}\right) \qquad (3)$$

In this case, the sign of its derivative determines the class of self-similarity of the model and, in turn, the structural properties of the entire network. For instance, when f(x) is a monotonic increasing function, any graph of the ensemble contains subgraphs with an arbitrarily large average degree within the subgraph. This is the case of the configuration model with a scale-free degree distribution with exponent $2\lt \gamma \lt 3$ and, remarkably, of many real-world networked systems [4]. This simple property, together with the fact that these subgraphs belong to the same ensemble, implies a zero percolation threshold in the thermodynamic limit [5], even if $\gamma \gg 3$. Remarkably, this is a consequence of self-similarity alone and not of the divergence of the second moment of the degree distribution. The proof in [5] represents a powerful alternative to typical techniques applied to the study of percolation in complex networks, since it avoids the usual locally tree-like and other limiting assumptions.

2.2. Self-similar multiplexes

Formally, self-similarity of random multiplexes can be defined as for single networks. As in equation (1), let $\mathcal{M}(\{\alpha \})$ be a multiplex ensemble of sparse graphs in the thermodynamic limit, where $\{\alpha \}$ is the set of model parameters, now including sets of model parameters for each layer. Consider a transformation rule T that for each multiplex $M\in \mathcal{M}(\{\alpha \})$ selects one of M's subgraphs. This transformation rule selects nodes in the multiplex according to specific conditions imposed on each layer. Denote the ensemble of subgraphs by ${{\mathcal{M}}_{T}}(\{\alpha \})$. The ensemble $\mathcal{M}(\{\alpha \})$ is called self-similar with respect to the transformation rule T if the transformed multiplex ensemble is the same as the original one except for some transformation of the model parameters, that is

$$\mathcal{M}_T(\{\alpha\}) = \mathcal{M}(\{\alpha_T\}) \qquad (4)$$

To gain insight into the nature and consequences of self-similarity in multiplexes, hereafter we focus on the soft version of the configuration model, the simplest self-similar ensemble with a non-trivial degree distribution [5]. Nevertheless, the generalization to other ensembles is straightforward.

3. The soft configuration model

The configuration model is defined as the maximally random ensemble of graphs with a given degree sequence, that is, a predefined degree assigned to each single node of the network [17–19]. The soft configuration model (SCM) is very similar to the original one except that, in this case, nodes are given their expected degrees and not their actual degrees [20–23]. This makes the model more appropriate to deal with the structural topological correlations that are unavoidable when the degree distribution is broad [24, 25].

In the particular case of scale-free networks, graphs are generated by assigning to each of the N nodes a hidden variable κ drawn from a power-law probability density $\rho (\kappa )=(\gamma -1)\kappa _{0}^{\gamma -1}{{\kappa }^{-\gamma }}$, $\kappa \geqslant {{\kappa }_{0}}$. Nodes with expected degrees κ and $\kappa ^{\prime} $ are then connected with probability $r(\kappa ,{{\kappa }^{\prime }})\equiv r(\mu \kappa {{\kappa }^{\prime }})$, where $r(x)\leqslant 1$ is an arbitrary function with $r(0)=0$ and $r^{\prime} (0)\ne 0$. The constant μ fixes the average degree $\langle k\rangle $ through the relation

$$\mu = \frac{\langle k\rangle}{r'(0)\,N\,\langle \kappa \rangle^{2}} \qquad (5)$$

With this choice, it is easy to see that the average degree of a node with hidden variable κ is proportional to κ, so that the degree distribution also scales as a power law with exponent γ [24]. When the function r(x) is chosen to be

$$r(x) = \frac{x}{1+x} \qquad (6)$$

the model produces maximally random graphs with a given expected degree sequence [26–28]. Random graphs with arbitrary structural correlations can be generated as well by choosing the appropriate connection probability r(x) [24]. Hereafter, we use the maximally random ensemble with the connection probability given in equation (6). In the thermodynamic limit, this particular ensemble has only two free parameters, the exponent of the degree distribution γ and the average degree $\langle k\rangle $. Notice that ${{\kappa }_{0}}$ is a dummy parameter that can be absorbed into the definition of the hidden variable κ, so that it can be set to unity at any moment. However, it is useful to keep it during the transformation rule that we apply below. Unlike the regular configuration model (where the actual degrees are fixed a priori), nodes in the canonical configuration model can end up having zero degree and, therefore, the average degree $\langle k\rangle $ can take any positive value, even below 1.
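For readers who want to experiment with the ensemble, here is a minimal Python sketch of an SCM generator. The function name is ours, and the normalization of μ uses the sparse-limit relation of equation (5), with $r'(0)=1$ for the choice of equation (6); it is an illustrative implementation, not the authors' code.

import numpy as np

def sample_scm(N, gamma, k_avg, kappa0=1.0, seed=None):
    """Sample one graph from the soft configuration model: hidden variables follow
    rho(kappa) = (gamma-1) kappa0^(gamma-1) kappa^(-gamma) and pairs connect with
    probability r(mu*kappa*kappa') = x/(1+x), equation (6). Assumes gamma > 2."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling of the Pareto density rho(kappa)
    kappa = kappa0 * (1.0 - rng.random(N)) ** (-1.0 / (gamma - 1.0))
    # Sparse-limit normalization mu = <k> / (r'(0) N <kappa>^2), with <kappa> of the Pareto law
    kappa_mean = kappa0 * (gamma - 1.0) / (gamma - 2.0)
    mu = k_avg / (N * kappa_mean ** 2)
    edges = []
    for i in range(N - 1):
        x = mu * kappa[i] * kappa[i + 1:]
        p = x / (1.0 + x)                       # connection probability, equation (6)
        hits = np.nonzero(rng.random(N - 1 - i) < p)[0]
        edges.extend((i, int(i + 1 + j)) for j in hits)
    return kappa, edges

# The realized average degree should be close to (slightly below) the target k_avg
kappa, edges = sample_scm(N=10000, gamma=2.8, k_avg=4.0, seed=0)
print(2 * len(edges) / len(kappa))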

As already discussed, ensemble self-similarity is always tied to a particular prescription to extract subgraphs out of a given graph. In the case of ensembles of scale-free networks, the natural transformation rule selects subgraphs by removing all nodes with degrees lower than a given threshold value (see figure 1). In the case of the SCM, the transformation rule T removes nodes with hidden variable κ below an arbitrary threshold ${{\kappa }_{T}}\gt {{\kappa }_{0}}$. In [4, 5], we proved that the ensemble of subgraphs so obtained is the same as the original one but with a transformed average degree

$$\langle k\rangle_T = \langle k\rangle \left(\frac{\kappa_T}{\kappa_0}\right)^{3-\gamma} \qquad (7)$$

This simple result provides important insights into how hubs are organized within the network. We first notice that, by varying the threshold ${{\kappa }_{T}}$ continuously, we obtain a nested sequence of subgraphs. When $\gamma \gt 3$, ${{\langle k\rangle }_{T}}$ is a monotonic decreasing function of ${{\kappa }_{T}}$. This implies that subgraphs made of high degree nodes are very sparsely connected among themselves. Thus, even if the original graph is globally connected, connectivity between two hubs is always mediated by chains of low degree nodes. When $\gamma \lt 3$, ${{\langle k\rangle }_{T}}$ is a monotonic increasing function of ${{\kappa }_{T}}$. In turn, this implies that, in the thermodynamic limit, any graph always contains subgraphs made of hubs with arbitrarily high connectivity, even if the average degree of the original graph $\langle k\rangle $ is arbitrarily small. This implies that such graphs always have a giant connected component and, so, the original network has a zero percolation threshold [5].
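To make the scaling of equation (7) concrete, a short worked example with illustrative numbers (not taken from the paper):

$$\gamma=2.5:\quad \langle k\rangle_T=\langle k\rangle\left(\frac{\kappa_T}{\kappa_0}\right)^{1/2}\;\Longrightarrow\;\kappa_T=100\,\kappa_0\ \text{gives}\ \langle k\rangle_T=10\,\langle k\rangle,$$
$$\gamma=3.5:\quad \langle k\rangle_T=\langle k\rangle\left(\frac{\kappa_T}{\kappa_0}\right)^{-1/2}\;\Longrightarrow\;\kappa_T=100\,\kappa_0\ \text{gives}\ \langle k\rangle_T=\langle k\rangle/10.$$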

3.1. Generalization of the SCM for multiplexes: self-similarity properties

In this paper, we restrict our analysis to self-similar multiplexes with two layers. Generalizations to more than two layers or other ensembles are again straightforward. In the two-layered SCM, each node is characterized by two hidden variables, ${{\kappa }_{a}}$ and ${{\kappa }_{b}}$, distributed according to

$$\rho(\kappa_a,\kappa_b) = \frac{1}{\kappa_{a0}\,\kappa_{b0}}\,\hat{\rho}\!\left(\frac{\kappa_a}{\kappa_{a0}},\frac{\kappa_b}{\kappa_{b0}}\right) \qquad (8)$$

with ${{\kappa }_{a}}\geqslant {{\kappa }_{a0}}$, ${{\kappa }_{b}}\geqslant {{\kappa }_{b0}}$, and $\int _{1}^{\infty }\int _{1}^{\infty }\hat{\rho }(x,y){\rm d}x{\rm d}y=1$. In this way, $\langle {{\kappa }_{a}}\rangle $ and $\langle {{\kappa }_{b}}\rangle $ are proportional to parameters ${{\kappa }_{a0}}$ and ${{\kappa }_{b0}}$ so that they can be set to unity at any moment. In each layer, pairs of nodes connect with connection probabilities ${{r}_{a}}({{\mu }_{a}}{{\kappa }_{a}}\kappa _{a}^{\prime })$ and ${{r}_{b}}({{\mu }_{b}}{{\kappa }_{b}}\kappa _{b}^{\prime })$, where parameters ${{\mu }_{a}}$ and ${{\mu }_{b}}$ read

$$\mu_a = \frac{\langle k_a\rangle}{r_a'(0)\,N\,\langle \kappa_a \rangle^{2}},\qquad \mu_b = \frac{\langle k_b\rangle}{r_b'(0)\,N\,\langle \kappa_b \rangle^{2}} \qquad (9)$$

Notice that the only relation between the two layers comes from the joint distribution $\rho ({{\kappa }_{a}},{{\kappa }_{b}})$, which may encode interlayer degree correlations.

As for the transformation rule T, analogously to the case of single networks, given a multiplex generated from this ensemble, we remove nodes in the multiplex such that their hidden variables ${{\kappa }_{a}}$ and ${{\kappa }_{b}}$ in each layer are below certain threshold values ${{\kappa }_{aT}}$ and ${{\kappa }_{bT}}$. Next, we analyze under which conditions the multiplex SCM is self-similar.

3.1.1. Self-similar scale-free multiplexes with uncorrelated interlayer degrees

When ${{\kappa }_{a}}$ and ${{\kappa }_{b}}$ are uncorrelated variables, the joint degree distribution corresponds to the factorization of the degree distributions of each layer, so that self-similar ensembles of subgraphs can only be achieved if the one-layer degree distributions are scale-free, that is

$$\hat{\rho}(x,y) = (\gamma_a-1)(\gamma_b-1)\,x^{-\gamma_a}\,y^{-\gamma_b} \qquad (10)$$

Thus $\hat{\rho }(x,y)$ is the factorization of two homogeneous functions of degrees $-{{\gamma }_{a}}$ and $-{{\gamma }_{b}}$, which gives a bi-dimensional homogeneous function of degree $-\alpha =-({{\gamma }_{a}}+{{\gamma }_{b}})$. After the transformation, the remaining nodes in the subgraph are distributed according to the same scale-free distributions once we replace ${{\kappa }_{a0}}\to {{\kappa }_{aT}}$ and ${{\kappa }_{b0}}\to {{\kappa }_{bT}}$. The number of nodes that remain in the subgraph is

$$N_T = N\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{1-\gamma_a}\left(\frac{\kappa_{bT}}{\kappa_{b0}}\right)^{1-\gamma_b} \qquad (11)$$

The transformation does not change either the hidden variables of filtered nodes or their connection probability, which implies that parameters ${{\mu }_{a}}$ and ${{\mu }_{b}}$ remain invariant in the subgraph. Therefore, by combining equations (9) and (11), we conclude that the transformed ensemble is self-similar with re-scaled average degrees

$$\langle k_a\rangle_T = \langle k_a\rangle\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{3-\gamma_a}\left(\frac{\kappa_{bT}}{\kappa_{b0}}\right)^{1-\gamma_b} \qquad (12)$$

and

$$\langle k_b\rangle_T = \langle k_b\rangle\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{1-\gamma_a}\left(\frac{\kappa_{bT}}{\kappa_{b0}}\right)^{3-\gamma_b} \qquad (13)$$

Notice that in multiplexes with uncorrelated degrees the two thresholds, ${{\kappa }_{aT}}$ and ${{\kappa }_{bT}}$, are completely independent.
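For completeness, a minimal derivation sketch of equation (12), assuming the sparse-limit normalization of equations (5) and (9) (equation (13) follows by exchanging the roles of the two layers):

$$\langle k_a\rangle_T=\mu_a\, r_a'(0)\, N_T\, \langle\kappa_a\rangle_T^{2}=\langle k_a\rangle\,\frac{N_T}{N}\left(\frac{\langle\kappa_a\rangle_T}{\langle\kappa_a\rangle}\right)^{2}=\langle k_a\rangle\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{3-\gamma_a}\left(\frac{\kappa_{bT}}{\kappa_{b0}}\right)^{1-\gamma_b},$$

where we used $\langle\kappa_a\rangle_T=(\kappa_{aT}/\kappa_{a0})\langle\kappa_a\rangle$ for the truncated Pareto density of the surviving nodes and $N_T/N$ from equation (11).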

3.1.2. Self-similar scale-free multiplexes with correlated degrees

In multiplexes with correlated degrees, self-similarity is achieved when the joint distribution $\hat{\rho }(x,y)$ is a bi-dimensional homogeneous function of degree $-\alpha$, that is

$$\hat{\rho}(\lambda x,\lambda y) = \lambda^{-\alpha}\,\hat{\rho}(x,y) \qquad (14)$$

When the degrees in each layer are correlated, this condition enforces a relation between the two thresholds, i.e. ${{\kappa }_{aT}}/{{\kappa }_{a0}}={{\kappa }_{bT}}/{{\kappa }_{b0}}$, which are therefore no longer independent 3 . Using the homogeneity property, equation (14), it is easy to check that the number of nodes within a subgraph with ${{\kappa }_{a}}\gt {{\kappa }_{aT}}$ and simultaneously ${{\kappa }_{b}}\gt {{\kappa }_{bT}}={{\kappa }_{b0}}{{\kappa }_{aT}}/{{\kappa }_{a0}}$ is

$$N_T = N\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{2-\alpha} \qquad (15)$$

As in the case of uncorrelated multiplexes, the transformation does not change either the hidden variables of filtered nodes or their connection probability, which implies that parameters ${{\mu }_{a}}$ and ${{\mu }_{b}}$ remain invariant in the subgraph. Then, by combining equations (9) and (15) we conclude that the ensemble is self-similar with re-scaled average degrees in each layer

$$\langle k_a\rangle_T = \langle k_a\rangle\left(\frac{\kappa_{aT}}{\kappa_{a0}}\right)^{4-\alpha},\qquad \langle k_b\rangle_T = \langle k_b\rangle\left(\frac{\kappa_{bT}}{\kappa_{b0}}\right)^{4-\alpha} \qquad (16)$$
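A brief sketch of how the homogeneity property, equation (14), leads to equations (15) and (16): writing $\lambda=\kappa_{aT}/\kappa_{a0}=\kappa_{bT}/\kappa_{b0}$ and changing variables $x=\lambda u$, $y=\lambda v$,

$$N_T=N\int_{\lambda}^{\infty}\!\!\int_{\lambda}^{\infty}\hat{\rho}(x,y)\,{\rm d}x\,{\rm d}y=N\lambda^{2}\int_{1}^{\infty}\!\!\int_{1}^{\infty}\hat{\rho}(\lambda u,\lambda v)\,{\rm d}u\,{\rm d}v=N\lambda^{2-\alpha}.$$

The same change of variables shows that the rescaled hidden variables of the surviving nodes follow the original $\hat{\rho}$, so that $\langle\kappa_a\rangle_T=\lambda\langle\kappa_a\rangle$ and $\langle\kappa_b\rangle_T=\lambda\langle\kappa_b\rangle$, and equation (16) follows as in the uncorrelated case.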

4. Robustness of mutually percolated states in self-similar scale-free multiplexes

As mentioned in the introduction, the percolation properties of systems of networks can be radically different from those of single networks, depending on the patterns of connectivity between layers [10, 13, 14, 29]. We shall show that self-similarity can explain several of the previous results on the robustness of systems of networks and can predict new behaviors in a large class of self-similar multiplexes. Notice that the results presented here are qualitatively valid in multiplex ensembles beyond the SCM, provided they have similar self-similarity properties.

In multiplexes, the percolated state can be defined according to different criteria. Here, we assume that nodes in each layer mutually depend on nodes in other layers and that only the nodes that belong to the giant mutually connected component remain functional. The giant mutually connected component (MCC) of a multiplex network is defined as the largest set of nodes that are mutually connected by at least one path in each layer traversing only nodes in the MCC [10, 11].

For single networks, perturbations in the form of a random failure of a fraction $1-p$ of the nodes typically produce a critical phase transition at a specific value $p_{c}$, such that below $p_{c}$ the network is fragmented into small components. In multiplexes with an MCC, perturbations can propagate back and forth between the layers, so that even small initial failures can produce avalanches of damage leading to a discontinuous collapse of the MCC [10]. Site percolation on random multiplexes has indeed been shown to undergo a discontinuous hybrid transition at some finite value of the fraction of removed nodes, where the size of the MCC drops abruptly to zero, as in a first-order transition, while critical behavior is only observed above the transition, as in a second-order one [10, 12]. Thus, perturbations are amplified by the interaction between the layers, and systems of networks are said to be more fragile than single networks. The presence of interlayer degree correlations can, however, reverse the situation [30]. Interdependent networks with mutually dependent nodes having identical degrees are statistically more robust than randomly coupled networks with the same degree distribution. Besides, when $\gamma \lt 3$, they disintegrate via a second-order phase transition—in the same way as noninteracting networks—and are thus very resilient against random failures [13]. More structured systems of correlated interconnected networks [14] or networks with overlaps [31] have been proved to be robust to failure as well. In [31], the authors consider link overlap (links existing in both layers simultaneously) as the source of correlations. In this case, overlapping links form a single network, with the well-known percolation properties of complex networks. Notice, however, that while overlap induces interlayer degree correlations, the opposite is not true in general.

Next, we assess the resilience of MCCs to random failures in scale-free multiplexes on the basis of their self-similarity properties and check our predictions numerically. Before that, we note that the average degree $\langle k\rangle $ in the SCM ensemble defined in section 3 is equivalent to the site percolation probability p and can then be used as the control parameter in robustness studies. Indeed, when a random fraction $1-p$ of the nodes is removed from a given graph of the ensemble, the hidden variables κ of the remaining nodes are distributed as in the original graph and the connection probability among them remains unchanged. However, the number of nodes in the subgraph is pN. Since μ remains unchanged, equation (5) implies that this ensemble is self-similar under a random removal of nodes with a modified average degree ${{\langle k\rangle }_{T}}=p\langle k\rangle $. This means that, in the thermodynamic limit, removing a random fraction $1-p$ of the nodes of a network with average degree $\langle k\rangle $ is equivalent to generating a graph of the same ensemble but with an average degree $p\langle k\rangle $. Because of this equivalence, hereafter we use $\langle k\rangle $ as the control parameter of the percolation properties of the ensemble.
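A minimal numerical check of this equivalence (illustrative only; it uses networkx and a Barabási–Albert graph as a stand-in sparse network, since the argument only requires that surviving nodes keep, on average, a fraction p of their neighbors):

import random
import networkx as nx

G = nx.barabasi_albert_graph(50000, 2, seed=7)      # any sparse graph will do
k_avg = 2 * G.number_of_edges() / G.number_of_nodes()

p = 0.4                                             # fraction of nodes kept
kept = random.sample(list(G.nodes()), int(p * G.number_of_nodes()))
G_p = G.subgraph(kept)
k_avg_p = 2 * G_p.number_of_edges() / G_p.number_of_nodes()

print(k_avg_p, p * k_avg)                           # the two values should agree closely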

4.1. Fragility of uncorrelated scale-free multiplexes explained by self-similarity

Single scale-free self-similar networks in the thermodynamic limit with $\gamma \lt 3$ always contain subgraphs made of hubs with arbitrarily high connectivity, even if the average degree of the original graph $\langle k\rangle $ is arbitrarily small, which implies that such graphs always have a giant connected component and, so, a zero percolation threshold [5]. This makes such structures robust to random failures. In the case of uncorrelated multiplexes, the question is whether it is still possible to find a continuous set of nested subgraphs such that the average degrees within the subgraphs increase in both layers simultaneously. In that case, the multiplex would be robust to random failures, being able to maintain an MCC despite perturbations.

To have a nested ensemble of subgraphs, ${{\kappa }_{bT}}$ must be either constant or a monotonic increasing function of ${{\kappa }_{aT}}$ (or vice versa). Let ${{\kappa }_{bT}}=g({{\kappa }_{aT}})$ be such a function. Then, the condition for equations (12) and (13) to be simultaneously monotonic increasing functions of ${{\kappa }_{aT}}$ can be obtained by imposing a positive derivative of equations (12) and (13) with respect to ${{\kappa }_{aT}}$, that is

$$\frac{\gamma_a-1}{3-\gamma_b} \lt \frac{\kappa_{aT}\,g'(\kappa_{aT})}{g(\kappa_{aT})} \lt \frac{3-\gamma_a}{\gamma_b-1} \qquad (17)$$

However, these inequalities can only hold if the lower bound is smaller than the upper bound, which is equivalent to the inequality $\alpha ={{\gamma }_{a}}+{{\gamma }_{b}}\lt 4$. This is clearly not possible in scale-free sparse graphs with ${{\gamma }_{a}}$ and ${{\gamma }_{b}}$ in the range $(2,3)$, implying that, while it is possible to have a sequence of subgraphs with increasing average degree in one of the layers (if one of the inequalities is satisfied), the same sequence of subgraphs necessarily has a decreasing average degree in the other layer.
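A short sketch of where the bounds in equation (17) come from: taking logarithmic derivatives of equations (12) and (13) with respect to $\kappa_{aT}$ and requiring both to be positive gives

$$\frac{3-\gamma_a}{\kappa_{aT}}-(\gamma_b-1)\frac{g'(\kappa_{aT})}{g(\kappa_{aT})}>0,\qquad \frac{1-\gamma_a}{\kappa_{aT}}+(3-\gamma_b)\frac{g'(\kappa_{aT})}{g(\kappa_{aT})}>0,$$

which, for $2<\gamma_a,\gamma_b<3$, rearrange into the two bounds of equation (17). The lower bound is smaller than the upper bound if and only if $(\gamma_a-1)(\gamma_b-1)<(3-\gamma_a)(3-\gamma_b)$, that is, $\gamma_a+\gamma_b<4$.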

This result explains the fragility of scale-free systems of networks first reported in [10]. In single scale-free networks, global connectivity is mainly provided by the interconnection of high degree nodes, which is the main explanation for their robustness. In uncorrelated scale-free multiplexes, the situation is different. Our self-similarity argument starts by selecting a subgraph of high degree nodes in layer A, and thus an almost fully connected subgraph that contains the majority of the nodes of the giant component of layer A. However, as our previous result shows, the average degree in layer B of the subgraph induced by the subgraph in A is smaller than the original average degree of layer B and, thus, its giant component in B—which is the candidate set to contain the MCC of the mutually percolated multiplex—is also reduced. We could now select a subgraph of the subgraph in layer B such that its average degree is high enough to contain its layer B giant component. However, the average degree of the induced sub-subgraph in layer A will decrease below its original value, and so will its giant component. This process can be iterated ad infinitum and, at each iteration, the size of the potential subgraph containing an MCC is reduced. We thus conclude that the MCC cannot be sustained by high degree nodes alone and must rely on the connectivity of low degree nodes. This makes scale-free multiplexes always more fragile than more homogeneous networks with the same average degree.

4.2. Robustness of correlated scale-free multiplexes explained by self-similarity

The picture changes completely when the degrees in each layer are positively correlated. In the case of sparse scale-free self-similar multiplexes with uncorrelated degrees in the two layers, $\alpha ={{\gamma }_{a}}+{{\gamma }_{b}}\gt 4$ so that the conditions for a stable MCC are not fulfilled. However, when ${{\kappa }_{a}}$ and ${{\kappa }_{b}}$ are positively correlated, it is possible to find ensembles with $3\lt \alpha \lt 4$. As an example, consider the joint distribution

$$\hat{\rho}(x,y) = \gamma(\gamma-1)\,2^{\gamma-1}\,(x+y)^{-\gamma-1},\qquad x,y\geqslant 1 \qquad (18)$$

Its marginal distribution is $\hat{\rho }(x)=(\gamma -1){{2}^{\gamma -1}}{{(1+x)}^{-\gamma }}$ 4 . From here, the conditional average is $\langle x|y\rangle =(y+\gamma )/(\gamma -1)$, so that the correlation between x and y increases when $\gamma \to 2$. The joint distribution, equation (18), is a homogeneous function with $\alpha =\gamma +1$. Therefore, according to equations (16), when $\gamma \lt 3$ the ensemble has self-similar subgraphs with increasing average degree in both layers simultaneously. This readily implies that the ensemble always possesses an MCC, so that its percolation threshold is zero in the thermodynamic limit. Besides, the 'transition' is continuous, in the sense that the relative size of the MCC approaches zero monotonically as $p\to 0$. This generalizes the result found in [13] for networks with identical degrees in both layers and makes an important step forward, as it quantifies the precise level of correlations (and so the value of α) that is needed to go from a hybrid discontinuous transition to a continuous one. We should also note that, as opposed to the result in [13], our derivation is exclusively based on the property of self-similarity and, thus, also applies to ensembles that are not locally tree-like, such as networks embedded in metric spaces [4, 5].
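As a consistency check of these properties, directly from equation (18):

$$\int_{1}^{\infty}\hat{\rho}(x,y)\,{\rm d}y=(\gamma-1)2^{\gamma-1}(1+x)^{-\gamma},\qquad \langle y|x\rangle=\int_{1}^{\infty}y\,\frac{\hat{\rho}(x,y)}{\hat{\rho}(x)}\,{\rm d}y=\frac{x+\gamma}{\gamma-1},$$
$$\hat{\rho}(\lambda x,\lambda y)=\lambda^{-(\gamma+1)}\hat{\rho}(x,y)\quad\Longrightarrow\quad\alpha=\gamma+1,$$

so that $4-\alpha=3-\gamma>0$ for $\gamma<3$ and both re-scaled average degrees in equation (16) grow under the transformation.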

4.3. Numerical simulations

To check numerically the predicted percolation properties of self-similar scale-free multiplexes, we generated two-layered multiplexes using the canonical configuration model. In all cases, $N=5\times {{10}^{5}}$ and $\langle {{k}_{{\rm min} }}\rangle =2$. For uncorrelated scale-free multiplexes, we used the joint probability distribution of equation (10), while correlations were implemented according to equation (18). In practice, for each node we first draw a hidden degree in one of the layers according to the marginal probability density $\hat{\rho }(x)$. The hidden degree in the other layer is then generated from the conditional probability density $\hat{\rho }(y|x)=\hat{\rho }(x,y)/\hat{\rho }(x)$ with the previously generated value of x. Once hidden degrees in both layers have been assigned, each pair of nodes is evaluated and connected in each layer with the probability given by equation (6). Finally, to compute mutually connected components, we implemented an efficient algorithm based on [32], which keeps track of all the MCCs present in a multiplex, not only the giant one. The algorithm represents each layer of the multiplex by the dynamic connectivity structure defined in [33]. This structure allows for maintaining information about network components and their sizes while updating the network by deletion or insertion of edges. The algorithm works in two phases: first, we find the MCCs of the initial multiplex and, second, we calculate the size of the giant MCC for all values of the parameter p.
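In case it helps to reproduce this sampling step, here is a minimal Python sketch of the inverse-transform sampling just described, using the closed-form cumulative distributions that follow from equation (18); the function name and the sanity check at the end are ours, not from the paper.

import numpy as np

def sample_correlated_hidden_degrees(N, gamma, seed=None):
    """Sample N pairs (x, y) from the correlated joint density of equation (18),
    hat_rho(x, y) ~ (x + y)^(-gamma-1) for x, y >= 1: first x from the marginal,
    then y from the conditional density given x, both by inverse transform."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(N)   # uniform in (0, 1]
    v = 1.0 - rng.random(N)
    # Marginal CDF: F(x) = 1 - ((1 + x)/2)^(1 - gamma)  =>  x = 2 u^(-1/(gamma-1)) - 1
    x = 2.0 * u ** (-1.0 / (gamma - 1.0)) - 1.0
    # Conditional CDF: G(y|x) = 1 - ((x + y)/(1 + x))^(-gamma)  =>  y = (1 + x) v^(-1/gamma) - x
    y = (1.0 + x) * v ** (-1.0 / gamma) - x
    return x, y

# Sanity check against the quoted conditional average <y|x> = (x + gamma)/(gamma - 1)
gamma = 2.8
x, y = sample_correlated_hidden_degrees(10**6, gamma, seed=1)
mask = (x > 2.0) & (x < 2.1)
print(y[mask].mean(), (x[mask].mean() + gamma) / (gamma - 1.0))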

To compute all the MCCs of the initial multiplex, we identify the connected components of each layer separately and, if needed, reconnect all separate components by adding a minimum number of ad hoc edges. Thus, after this step, every layer is a single connected component and the multiplex a single MCC. Next, we sequentially delete all ad hoc edges. Each single removal creates two separated components in the given layer. We then check all possible node pairs where each node in the pair belongs to a different component and remove, in all other layers, the edges connecting them. Whenever any removed edge breaks a connected component into two, we continue with the removal of all edges that connect disconnected components in all other layers. Finally, when all ad hoc edges are removed, all layers consist of connected components corresponding to MCCs. In the second phase, we generate a random sequence defining the order of node removals. Removal of each node is accomplished by removing all its adjacent edges from all layers. Every edge is removed in the same way as the ad hoc edges in the first phase of the algorithm. Similarly to the first phase, after removing the node, all layers consist of connected components corresponding to MCCs. The size of the largest component is output as the size of the largest MCC for the corresponding value of p.
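For a quick, unoptimized reference, the following Python sketch computes the mutually connected components of a two-layer multiplex by iteratively splitting candidate node sets with the connected components of each layer. It is a much slower alternative to the dynamic-connectivity algorithm of [32, 33] described above, intended only to make the definition of the MCC operational; the function name is ours.

import networkx as nx

def all_mccs(layer_a, layer_b):
    """Return the mutually connected components of a two-layer multiplex given as
    two networkx graphs on the same node set. A block is accepted once it is
    connected in both layers simultaneously; otherwise it is split and re-examined."""
    nodes = set(layer_a) & set(layer_b)
    blocks, done = ([nodes] if nodes else []), []
    while blocks:
        block = blocks.pop()
        for layer in (layer_a, layer_b):
            comps = list(nx.connected_components(layer.subgraph(block)))
            if len(comps) > 1:
                blocks.extend(comps)   # split by this layer's components and retry
                break
        else:
            done.append(block)         # connected in both layers: a genuine MCC
    return done

# The giant MCC is simply the largest block:
# giant = max(all_mccs(layer_a, layer_b), key=len)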


Figure 1. Illustration of a self-similar ensemble of graphs embedded into a metric space, a circle of radius $R\sim N$, under a transformation that removes nodes with degrees below a certain threshold [4]. In this visualization, each node is given a radial coordinate inversely proportional to its degree so that we obtain the desired subgraph by removing all nodes outside the blue dashed circle.


In figure 2, we show the average degrees in the subgraphs and the size of the largest connected component in each layer and of the MCC as a function of the filtering thresholds ${{\kappa }_{aT}}$ and ${{\kappa }_{bT}}$. In all cases, networks are scale-free with $\gamma =2.8$. In uncorrelated multiplexes, the average degrees of the subgraphs cannot increase simultaneously as the thresholds increase. This is shown in figure 2(a) for ${{\kappa }_{aT}}={{\kappa }_{bT}}$ and in figure 2(c) for ${{\kappa }_{bT}}={{\kappa }_{b0}}=1$. As clearly seen in the figures, the only possibilities are that the average degrees decrease simultaneously (when ${{\kappa }_{aT}}={{\kappa }_{bT}}$) or that the average degree of one of the layers increases while the other decreases (when ${{\kappa }_{bT}}$ is constant). This induces the fragility of the MCC which, as shown in figures 2(b) and (d), reduces its size abruptly at some relatively small value of the threshold. Interlayer degree correlations completely change the picture. In figures 2(e) and (f), we show the average degrees in the subgraphs and the size of the different components for ${{\kappa }_{aT}}={{\kappa }_{bT}}$ in a canonical configuration model multiplex ensemble with the joint degree distribution given by equation (18). In this case, it is possible to produce sequences of subgraphs with increasing average degrees in both layers simultaneously, so that the MCC becomes very robust. Finally, the inset in figure 2(f) shows the relative size of the MCC (relative to the number of nodes remaining after the filtering process), which approaches 1 for large values of the thresholds, indicating that, as predicted, such a self-similar multiplex contains a small but macroscopic subgraph that is completely connected in both layers simultaneously.


Figure 2. Average degrees (left column) and size of the largest connected components (right column) as a function of the filtering parameters ${{\kappa }_{aT}}$ and ${{\kappa }_{bT}}$. Panels (a) and (b) show results for a multiplex network with uncorrelated degrees where ${{\kappa }_{aT}}={{\kappa }_{bT}}$. Panels (c) and (d) show results for a multiplex network with uncorrelated degrees where ${{\kappa }_{bT}}$ is fixed to the minimum value ${{\kappa }_{b0}}$. Panels (e) and (f) show results for a multiplex network with correlated degrees where ${{\kappa }_{aT}}={{\kappa }_{bT}}$. Solid lines correspond to the analytical results given by equations (12), (13) and (16). In all cases, the multiplex network is composed of two layers with $N=5\times {{10}^{5}}$ nodes, $\gamma =2.8$, and $\langle {{k}_{{\rm min} }}\rangle =2$, and we evaluated the absolute size of the largest connected components $S_{aT}$ and $S_{bT}$ in the individual layers A and B, the size of the MCC $S_{T}$, and the size of the network $N_{T}$ after applying the corresponding transformation.


To get further insight into the percolation properties of self-similar multiplexes, we adopt the conventional percolation criterion of measuring the breakdown of the largest MCC. We computed the relative size of the largest MCC versus the fraction of nodes p remaining in the multiplex for different values of the power-law exponent γ. Results are shown in figure 3(a) for multiplexes with uncorrelated degrees and in figure 3(b) for correlated ones. For all values of γ, the transition between the mutually percolated and the fragmented states is discontinuous in the uncorrelated case, while it is continuous and approaching zero in the correlated case. This can be corroborated by the scaling of the susceptibility with the system size, where the susceptibility χ is defined as 5

$$\chi = \frac{\langle S^{2}\rangle - \langle S\rangle^{2}}{\langle S\rangle} \qquad (19)$$

Here S is the size of the largest MCC at a given value of p and averages are taken over a large number of complete random sequences of node removals. This quantity is able to distinguish between discontinuous, continuous, and hybrid phase transitions. In continuous phase transitions, χ shows a clear peak close to the critical point that diverges as the system size increases. Instead, in discontinuous transitions, χ shows a discontinuity at the critical point but no dependence on the system size. In the case of hybrid phase transitions, χ shows a diverging peak approaching the critical point from one side, a discontinuity, and then a size-independent behavior on the other side. According to these criteria, figure 3(c) indicates that the transition is hybrid in multiplexes with uncorrelated degrees, whereas figure 3(d) indicates that χ has a continuous divergence, with a peak whose position approaches zero in the thermodynamic limit as a power law and whose height also diverges as a power law. This is clearly visible in figure 4, where we show the behavior of the position and height of the peak, $p_{{\rm max}}$ and ${{\chi }_{{\rm max} }}$, for different values of γ. These results clearly corroborate our theoretical prediction of a zero percolation threshold, but with critical fluctuations when $p\to 0$.
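A small Python helper for estimating equation (19) from simulation output may be useful; it assumes one has recorded, for each random removal sequence, the size S of the largest MCC on a common grid of p values (the synthetic Poisson data below only exercises the function and carries no physics).

import numpy as np

def susceptibility(S_runs):
    """Estimate chi(p) = (<S^2> - <S>^2) / <S>, equation (19), from a matrix of
    largest-MCC sizes with shape (number of runs, number of p values)."""
    S = np.asarray(S_runs, dtype=float)
    mean, second = S.mean(axis=0), (S ** 2).mean(axis=0)
    return (second - mean ** 2) / mean

# Example with synthetic data: 10^4 runs on a grid of 50 p values
rng = np.random.default_rng(0)
S_runs = rng.poisson(lam=np.linspace(1, 500, 50), size=(10**4, 50))
chi = susceptibility(S_runs)
p_grid = np.linspace(0.02, 1.0, 50)
print(p_grid[np.argmax(chi)], chi.max())   # position and height of the peak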


Figure 3. Comparison between the percolation properties of scale-free multiplexes with uncorrelated (left column) and correlated (right column) degrees. Panels (a) and (b) show the relative size of the largest mutually connected component versus the fraction p of nodes remaining undamaged. In both cases $N=5\times {{10}^{5}}$ and $\gamma =2.2$, 2.5, and 2.8. Each curve corresponds to one complete random sequence of node removals. Panels (c) and (d) show the susceptibility χ as a function of the site occupation probability p for scale-free multiplexes with $\gamma =2.8$ and different sizes. The different curves $\chi (p)$ are computed from ${{10}^{4}}$ complete random sequences of node removals. In all cases, multiplexes are composed of two layers and $\langle {{k}_{{\rm min} }}\rangle =2$.


Figure 4. Size dependence of the position and height of the peak of the susceptibility equation (19) for $\gamma =2.5$ and 2.8. Dashed lines are power law fits to the data.


5. Conclusions

Self-similarity is a widespread property of network models and has also been observed in many real-world networks [4]. Beyond its mathematical beauty, this property has important implications for the structural properties of networks. The power of the concept was illustrated in single-layered networks by the proof of a zero percolation threshold for a general class of self-similar networks, which required only the self-similarity property with respect to a hierarchy of nested subgraphs whose average degrees grow with their depth in the hierarchy [5], without the need for the usual limiting assumptions.

In this paper, we have extended the concept to multiplexes and illustrated its importance by assessing the robustness of scale-free multiplexes in terms of their self-similarity properties. To state the definition and relevance of self-similarity in a clear and explicit way, we have focused on the SCM ensemble. However, we should stress that the results presented here are qualitatively valid in other multiplex ensembles with similar features, that is, with similar self-similarity properties, degree distributions, and interlayer degree correlations. Interestingly, both the observed fragility of scale-free multiplexes and the robustness to failure of correlated systems of networks can be explained and predicted based only on their self-similarity characteristics. In particular, we have found that scale-free multiplexes can recover a zero percolation threshold and a continuous transition in the thermodynamic limit, and so the ordinary percolation properties of single scale-free networks. Self-similarity may also have important implications for other critical phenomena taking place in multiplex structures whenever the critical point is a function of the connectivity of the system.

Acknowledgments

This work was supported by a James S McDonnell Foundation Scholar Award in Complex Systems; the European Commission LASAGNE project no. 318132 (STREP); the ICREA Academia prize, funded by the Generalitat de Catalunya; the MINECO project no. FIS2013-47282-C2-1-P; the Generalitat de Catalunya grant no. 2014SGR608; APVV (project APVV-0760-11); and the Ramón y Cajal program of MINECO.

Footnotes

  3. For the two thresholds to be independent and the ensemble self-similar, one would need a scaling relation of the type $\hat{\rho }(ax,by)={{a}^{-\alpha }}{{b}^{-\beta }}\hat{\rho }(x,y)$ for any a and b. However, the only function in ${{\mathbb{R}}^{2}}$ that satisfies this condition is the factorization of two power laws, which corresponds to the case of a multiplex without degree correlations.

  4. Notice that a homogeneous distribution in two dimensions does not imply that its marginal is also a homogeneous function.

  5. Notice that this is not the standard definition of the susceptibility, which has N in the denominator instead of $\langle S\rangle $. However, both definitions have the same critical properties and diverge as power laws at the critical point of a continuous phase transition. From a numerical point of view, our definition has proven more useful in heterogeneous networks, which is why we adopt it here. Nevertheless, the relation between their critical exponents can be found in [34].
