On the information-theoretic formulation of network participation

The participation coefficient is a widely used metric of the diversity of a node's connections with respect to a modular partition of a network. An information-theoretic formulation of this concept of connection diversity, referred to here as participation entropy, has been introduced as the Shannon entropy of the distribution of module labels across a node's connected neighbors. While diversity metrics have been studied theoretically in other literatures, including to index species diversity in ecology, many of these results have not previously been applied to networks. Here we show that the participation coefficient is a first-order approximation to participation entropy and use the desirable additive properties of entropy to develop new metrics of connection diversity with respect to multiple labelings of nodes in a network, as joint and conditional participation entropies. The information-theoretic formalism developed here allows new and more subtle types of nodal connection patterns in complex networks to be studied.

Many real-world networks exhibit modular structure, in which nodes form densely interconnected modules with relatively sparse connectivity between modules. Such modularity is observed in social networks, food webs, metabolic networks, protein-protein interaction networks, air-traffic networks, and brain networks [1]. Within such modular networks, individual nodes can vary substantially in their degree of within- versus across-module connectivity. These differences can provide important insights into a node's functional role within a network, such as facilitating local information processing (consistent with strong within-module connectivity) versus distributed, integrative communication (consistent with strong across-module connectivity).
To measure the extent to which a given node's connections are distributed within or across modules, Guimerà and Amaral introduced the participation coefficient [1,2]. It has been used widely to analyze networks across domains, including the Internet, metabolic, air-transportation, protein-interaction, and neural networks [3,4]. For example, the participation coefficient of nodes in macroscopic brain networks has been used to distinguish levels of consciousness caused by brain injury [5], and it has been applied to scientific publication citation networks to identify emerging research directions [6]. This concept of nodal connection diversity across modules was also formulated as a Shannon entropy by Rubinov and Sporns [7]. Quantifying diversity is a general problem studied across many fields, with a prominent application to species diversity in ecology, where the Shannon entropy and Gini-Simpson index (the measure underlying the participation coefficient [7]) formulations have been used for decades, among a host of alternative indices [8,9]. Mathematical relationships between different formulations of diversity indices have been uncovered. For example, the Gini-Simpson index and Shannon entropy have each been shown to be special cases of 'generalised entropies' [10-12]. Zhang and Grabchak [13] have further shown that the Gini-Simpson index can be expressed as a first-order Taylor approximation to the Shannon entropy formulation of diversity.
Despite the wide variety of diversity indices used in ecology, the participation coefficient has remained the dominant measure of node participation in network theory since it was introduced in 2005 [1,2]. Here we connect the problem of quantifying nodal connection diversity in networks with a large existing literature on diversity indices, and in particular explain the relationship between the participation coefficient and the corresponding Shannon entropy-based formulation of connection diversity [7], which we call 'participation entropy' here. We argue that participation entropy is a better-motivated measure of node participation diversity, primarily due to its additive behaviour with respect to chaining probability distributions, an operation that arises naturally when the nodes in a network have labels in multiple module sets. Taking advantage of this behavior, we define novel measures of connection diversity, 'joint' and 'conditional' participation entropy, for quantifying more nuanced types of connection patterns in complex networks.

Participation Coefficient and Participation Entropy
We consider a binary, undirected network partitioned into M non-overlapping modules, with each node labeled as belonging to a module from the set M = {m_1, m_2, ..., m_M}. Note that this modular partition is most commonly obtained as the result of a community-detection algorithm operating on the network [14], but could in general represent any assignment of categorical labels to nodes in a network. Given M, the participation coefficient, P_i, of node i is defined as

P_i = 1 - \sum_{j=1}^{M} (\kappa_{ij} / k_i)^2,     (1)

where \kappa_{ij} is the number of edges between node i and nodes in module m_j, and k_i is the degree of node i (the total number of connections made to all other nodes in the network) [1,2]. For simplicity, we focus on undirected networks here, but note that this formulation extends straightforwardly to weighted networks (substituting \kappa_{ij} and k_i for weighted versions that sum edge weights) and directed networks (e.g., by defining \kappa_{ij} and k_i as counting connections outward from, or arriving at, node i, as per the out-degree or in-degree). Equation (1) exhibits the desired behavior of a connection-diversity metric, taking a minimal value for a node with connections entirely within a single module (P_i = 0) and a maximal value for a node that connects equally across all M modules (P_i = 1 - 1/M).

a. A probabilistic formulation. An alternative interpretation of Eq. (1) can be obtained by identifying \kappa_{ij} / k_i as the probability, p_i(m_j), that a randomly selected connected neighbor of node i is assigned to module m_j. An example is depicted in Fig. 1a, which shows the connected neighbors of node i across each of three modules, M = {m_1, m_2, m_3}. This, or any other pattern of connectivity, can be represented as a probability distribution, {p_i(m)}_{m \in M}, plotted for this simple example in Fig. 1b. In this probabilistic formulation, P_i can be expressed as a function of p_i(m) by rewriting Eq. (1) as

P_i = 1 - \sum_{m \in M} p_i(m)^2.

This formulation allows us to see clearly that the participation coefficient is an implementation of the Gini-Simpson index of diversity [15], as observed previously [7]. This is an important measure used in many other contexts, including quantifying biodiversity [8,9]. Following the interpretation that motivated Simpson's original formulation [15], P_i can be interpreted as the probability that two randomly selected nodes connected to node i (sampled with replacement) lie in different modules.
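As a concrete illustration of the probabilistic formulation, the participation coefficient can be computed directly from the module labels of a node's connected neighbors. The following is a minimal Python sketch (the function name and example labels are illustrative, not from the original), using the Fig. 1 example of 3, 4, and 2 edges into modules m_1, m_2, and m_3:

```python
from collections import Counter

def participation_coefficient(neighbor_modules):
    """P_i = 1 - sum_j (kappa_ij / k_i)^2, computed from the module
    labels of node i's connected neighbors."""
    k_i = len(neighbor_modules)  # degree of node i
    return 1.0 - sum((kappa / k_i) ** 2
                     for kappa in Counter(neighbor_modules).values())

# Fig. 1 example: 3 edges to m1, 4 to m2, 2 to m3
P_i = participation_coefficient(["m1"] * 3 + ["m2"] * 4 + ["m3"] * 2)
```

Here P_i = 1 - (9 + 16 + 4)/81 = 52/81, a moderately diverse connection pattern.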
b. Participation entropy. The Shannon entropy [16,17] of p_i(m) is a natural measure of the connection diversity of node i across the label set, M:

E_i(M) = - \sum_{m \in M} p_i(m) \log p_i(m).     (2)

We term E_i(M) the 'participation entropy' of node i; it measures the uncertainty (or average surprise) in the module labels (from M) of its connected neighbors. This matches a previous formulation of nodal connection diversity introduced by Rubinov and Sporns [7] (named the 'diversity coefficient' in its implementation in the Brain Connectivity Toolbox [18]). Participation entropy exhibits the same desired qualitative behavior as the participation coefficient, P_i: E_i = 0 is minimal when all connected neighbors of node i are in the same module (minimum uncertainty about the module label of node i's neighbors), and E_i = log M is maximal when connected neighbors are equally distributed across all of the modules (maximum uncertainty about the module label of node i's neighbors). Note that both E_i and P_i may be normalised by dividing by their maximum value for a given number of modules, M, if desired (as 'normalized connection diversity' [7], which has the effect of setting the range to the unit interval). Compared to P_i, quantifying connection diversity as an entropy, E_i, is the unique formulation that satisfies three key advantageous properties. First, it is continuous with respect to changes in p_i(m). Second, it increases monotonically with the number of modules, M, when p_i(m) = 1/M for all m. Third, and most importantly, E_i can be chained consistently across multiple labeling sets for nodes [16,19], opening new ways of quantifying and interpreting nodal connection patterns in networks, as we develop later. As per the original formulation of P_i, it also generalizes straightforwardly to weighted and directed networks.
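Participation entropy admits an equally direct computation from neighbor labels. A minimal Python sketch (names are illustrative), again using the Fig. 1 example, including the optional normalisation by the maximum value log M:

```python
import math
from collections import Counter

def participation_entropy(neighbor_modules):
    """E_i(M) = -sum_m p_i(m) log p_i(m): Shannon entropy of the
    module-label distribution of node i's connected neighbors."""
    k_i = len(neighbor_modules)
    return -sum((c / k_i) * math.log(c / k_i)
                for c in Counter(neighbor_modules).values())

labels = ["m1"] * 3 + ["m2"] * 4 + ["m3"] * 2  # the Fig. 1 example
E_i = participation_entropy(labels)
E_i_norm = E_i / math.log(3)  # normalised by the maximum, log M, for M = 3
```

Empty-count modules contribute nothing, consistent with the 0 log 0 -> 0 convention.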
c. Connecting the two formulations. The mathematical relationship between the Gini-Simpson index and Shannon entropy is well known [10-12] and has been demonstrated in the context of species-diversity indices [13]. But the connection has not been reported for the corresponding measures of nodal connection diversity in networks, P and E. The relationship can be seen through the series expansion of participation entropy via the logarithm in Eq. (2), using -\log p = \sum_{n=1}^{\infty} (1 - p)^n / n:

E_i(M) = \sum_{m \in M} p_i(m) \sum_{n=1}^{\infty} \frac{(1 - p_i(m))^n}{n}.     (3)

This quantity converges for 0 < p_i(m) \le 1, and we take 0 \log 0 \to 0 by convention, so there is no contribution from any p_i(m) = 0. Limiting the expansion to the leading term, n = 1, yields

E_i(M) \approx \sum_{m \in M} p_i(m) (1 - p_i(m)) = 1 - \sum_{m \in M} p_i(m)^2 = P_i.     (4)

We thus recapitulate the participation coefficient as a first-order approximation to participation entropy (as per the Gini-Simpson index and Shannon entropy [13]).
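The series expansion can be verified numerically: truncating at n = 1 recovers the Gini-Simpson form exactly, and the partial sums converge to the Shannon entropy as more terms are added. A small Python check (function and variable names are illustrative):

```python
import math

def entropy_series(p, n_max):
    """Truncated expansion: sum_m p(m) * sum_{n=1}^{n_max} (1 - p(m))^n / n."""
    return sum(pm * sum((1.0 - pm) ** n / n for n in range(1, n_max + 1))
               for pm in p if pm > 0)

p = [3 / 9, 4 / 9, 2 / 9]                      # the Fig. 1 distribution
P_i = 1.0 - sum(pm ** 2 for pm in p)           # Gini-Simpson / participation coefficient
E_i = -sum(pm * math.log(pm) for pm in p)      # Shannon / participation entropy

first_order = entropy_series(p, 1)    # equals P_i exactly
converged = entropy_series(p, 500)    # approaches E_i as n_max grows
```

The geometric factor (1 - p)^n guarantees rapid convergence unless some p_i(m) is very small, in which case many terms are needed.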
To investigate the discrepancy between E_i and its first-order approximation, P_i, we sampled from possible distributions p_i(m) for M = 2, ..., 5, and plotted the resulting accessible regions of P_i-E_i space in Fig. 2. Our numerical results match analytic expressions for these regions, derived for the underlying measures on p_i(m) by Vajda and Zvárová [12]. We find that P_i varies monotonically with E_i for M = 2, but for M > 2, allowed values of P_i and E_i are constrained to specific regions of the space. This accessible region expands with the addition of each new module; Fig. 2 annotates the additional accessible region with each increment of M. The results indicate that there can be a substantial discrepancy between an analysis using P_i versus E_i, with greater potential for differences at moderate-to-high values of P_i and with increasing M. Published results using P_i to quantify nodal diversity (or to extract a list of 'high-participation nodes' [4,20]) may thus differ when E_i is used instead of its first-order approximation, P_i.
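A sampling procedure of the kind used for Fig. 2 can be sketched as follows (a minimal Python version under our own assumptions: distributions drawn uniformly on the simplex via normalised exponentials; names and sample counts are illustrative):

```python
import math
import random

def sample_P_E(M, n_samples=5000, seed=1):
    """Sample module-label distributions p uniformly on the M-simplex and
    return the corresponding (P_i, E_i) pairs."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_samples):
        w = [rng.expovariate(1.0) for _ in range(M)]  # normalised exponentials
        total = sum(w)
        p = [x / total for x in w]
        P = 1.0 - sum(pm ** 2 for pm in p)
        E = -sum(pm * math.log(pm) for pm in p if pm > 0)
        pairs.append((P, E))
    return pairs

pairs = sample_P_E(3)  # for M = 3, points fill a 2D accessible region
```

Every sampled pair respects the bounds 0 <= P_i <= 1 - 1/M and 0 <= E_i <= log M; plotting the pairs for M = 2, ..., 5 traces out the nested regions of Fig. 2.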

Joint and conditional participation entropy
A major advantage of formulating E_i as an entropy is the ability to capture more subtle types of connection-pattern diversity in networks. Here we demonstrate this capability by developing entropy-based network participation measures for the case in which each node is annotated with multiple labels. Specifically, we consider L different module sets, M_1, M_2, ..., M_L, that each define a labeling of network nodes. In a social network, this could correspond to individuals being labeled by both gender, M_g, and friendship group, M_f. Or, in a brain network, it could correspond to brain regions being labeled by both their hemisphere, M_h (left or right), and their functional network module, M_f (e.g., auditory, visual, association, etc.). There is no clear way of extending P_i to such a setting, but it can be incorporated naturally into the information-theoretic formulation of E_i.
Extending participation entropy with respect to any single labeling of nodes, M, we now consider the diversity of connections involving node i across multiple label sets jointly. Writing the L sets as M = (M_1, M_2, ..., M_L), and a combination of labels from M for a given node as m = (m^(1), m^(2), ..., m^(L)), we define the joint probability distribution p_i(m) for the connected neighbors of node i. We can then define the joint participation entropy of node i as

E_i(M) = - \sum_{m} p_i(m) \log p_i(m).     (5)

This tells us the total diversity of connections across these multiple module sets, M.
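Computationally, the joint participation entropy is the ordinary participation entropy applied to label tuples. A minimal Python sketch (the hemisphere/functional-module example labels are hypothetical, chosen to echo the brain-network example above):

```python
import math
from collections import Counter

def joint_participation_entropy(neighbor_label_tuples):
    """Entropy of the joint distribution of label combinations
    m = (m^(1), ..., m^(L)) over node i's connected neighbors."""
    k_i = len(neighbor_label_tuples)
    return -sum((c / k_i) * math.log(c / k_i)
                for c in Counter(neighbor_label_tuples).values())

# Hypothetical neighbors labeled by (hemisphere, functional module)
neighbors = [("L", "visual"), ("L", "visual"), ("L", "auditory"),
             ("R", "visual"), ("R", "auditory"), ("R", "auditory")]
E_joint = joint_participation_entropy(neighbors)
```

Because tuples are hashable, the same counting machinery works for any number of label sets L.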
Similarly, we can define the conditional participation entropy as the entropy of modular assignments m from sets M of the connected neighbors of node i, given knowledge of the modular assignments n from other sets N:

E_i(M | N) = - \sum_{m, n} p_i(m, n) \log p_i(m | n).     (6)

This quantifies the remaining uncertainty in the distribution of connections across the modules of sets M, given that we already know their distribution across the sets N. The joint participation entropy and the conditional participation entropy, E_i(M | N), are related via the chain rule for entropies [17]:

E_i(M, N) = E_i(N) + E_i(M | N),     (7)

which means that we can consistently decompose and recompose the diversity of connections over multiple module sets, regardless of the order in which we chain our knowledge of the module labelings. This property is unique to the information-theoretic formulation [19].
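The conditional participation entropy can be computed directly from the definition, with the chain rule serving as a consistency check. A Python sketch (the (m, n) label pairs are illustrative):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of (possibly tuple-valued) labels."""
    k = len(labels)
    return -sum((c / k) * math.log(c / k) for c in Counter(labels).values())

def conditional_participation_entropy(pairs):
    """E_i(M|N) = -sum_{m,n} p_i(m,n) log p_i(m|n), from the definition,
    where p_i(m|n) = count(m, n) / count(n)."""
    k = len(pairs)
    joint = Counter(pairs)
    n_marginal = Counter(n for _, n in pairs)
    return -sum((c / k) * math.log(c / n_marginal[n])
                for (_, n), c in joint.items())

# Illustrative (m, n) label pairs for node i's connected neighbors
pairs = [("m1", "n1"), ("m1", "n1"), ("m2", "n1"),
         ("m2", "n2"), ("m3", "n2"), ("m3", "n2")]
E_cond = conditional_participation_entropy(pairs)

# Chain-rule consistency: E(M, N) = E(N) + E(M|N)
chain_ok = math.isclose(entropy(pairs),
                        entropy([n for _, n in pairs]) + E_cond)
```

Here the direct computation of E_i(M|N) agrees with the chain-rule decomposition, as it must for any labeling.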
To illustrate the calculation of conditional participation entropy, we show some illustrative examples in Fig. 3 for the simple case of two node labelings: M = {m_1, m_2, m_3} and S = {s_1, s_2, s_3}. The three cases shown in Fig. 3 correspond to distinct types of connection patterns of node i with respect to M and S. In Fig. 3a, the labels assigned to node i's connected neighbors are redundant with respect to M and S. That is, for a given connected neighbor, knowledge of the label s leaves no uncertainty about the label m (and vice versa), resulting in the symmetric p(m_i | s_j) matrix shown in Fig. 3b. For this case, the conditional participation entropy of node i vanishes: E_i(M | S) = 0. For the connection pattern shown in Fig. 3c, the labelings m and s are statistically independent. That is, for a given connected neighbor, knowledge of the label s does not reduce our uncertainty about the label m, as reflected in the p(m_i | s_j) matrix in Fig. 3d. In this case, E_i(M | S) = E_i(M). In general, a node's connection pattern will involve non-trivial statistical dependencies between the combinations of labels. Such a case is shown in Fig. 3e, where knowledge of the label s reduces our uncertainty about m. For example, as depicted in Fig. 3f, if we learn the label s of a neighbor, our uncertainty about its label, m, is reduced: in the case shown, the distribution {p(m_1), p(m_2), p(m_3)} sharpens from {0.25, 0.5, 0.25} to {0, 0.5, 0.5}. As such, 0 < E_i(M | S) < E_i(M) here. The conditional participation entropy thus provides a new way to quantify a node's connection diversity across multiple labelings of network nodes. For example, in a structural brain network in which brain areas (nodes) are annotated both by a functional annotation, M_f (e.g., visual, auditory, motor, etc.), and by their hemisphere, M_h (left or right), E(M_h | M_f) could be used to highlight nodes whose diversity of connectivity between left and right hemispheres depends on which functional module they connect to.
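The two limiting cases can be checked numerically. This Python sketch (label names are placeholders standing in for the symbolic labels of Fig. 3) confirms that the conditional participation entropy vanishes for redundant labelings and equals E_i(M) for statistically independent ones:

```python
import math
from collections import Counter

def entropy(labels):
    k = len(labels)
    return -sum((c / k) * math.log(c / k) for c in Counter(labels).values())

def cond_entropy(pairs):
    """E(M|S) via the chain rule: E(M, S) - E(S)."""
    return entropy(pairs) - entropy([s for _, s in pairs])

# Redundant labelings (as in Fig. 3a): s determines m exactly
redundant = [("m1", "s1"), ("m2", "s2"), ("m2", "s2"), ("m3", "s3")]

# Independent labelings (as in Fig. 3c): all (m, s) combinations equally represented
independent = [(m, s) for m in ("m1", "m2", "m3") for s in ("s1", "s2", "s3")]

E_redundant = cond_entropy(redundant)      # 0: no residual uncertainty about m
E_independent = cond_entropy(independent)  # equals E(M): s tells us nothing about m
E_M = entropy([m for m, _ in independent])
```

Intermediate, non-trivial dependencies (as in Fig. 3e) yield values strictly between these two extremes.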

Conclusion
We have introduced an information-theoretic formulation of nodal connection diversity in complex networks, incorporating results from the broader literature on quantitative diversity indices and building on a prior introduction of the Shannon-entropy formulation of the participation coefficient [7]. Quantifying connection diversity as the average uncertainty in the module label of a connected neighboring node, termed participation entropy, E_i, has mathematically favourable properties over the more commonly used participation coefficient. Using a probabilistic formulation of the two measures, we show that the participation coefficient is a first-order approximation to the participation entropy (as per the relationship between the underlying measures of diversity [13]). Using the additivity of participation entropy with respect to chaining probability distributions over multiple module sets, we introduce new ways of measuring connection diversity for cases in which nodes are labeled from multiple label sets, defining joint and conditional participation entropy.
Future work may build on the theoretical foundations laid here, including applying the new measures to data. This will require developing statistical-significance tests against appropriate null distributions. For example, an analysis of the conditional participation entropy of a node, E_i(M | N) (i.e., the diversity of connectivity across the modules M given the labeling N), requires comparison to an appropriate null hypothesis. One choice of null hypothesis is that node i connects randomly with respect to M while preserving the distribution of connections over N (which could be sampled from numerically). Future work could also explore alternative probabilistic formulations of connection diversity that may differently account for module size [20,21]. In summary, the new theory introduced here enables practical new ways of understanding and quantifying more subtle types of nodal connection patterns in complex networks.

FIG. 1. A probabilistic formulation of a node's connection diversity with respect to a set of labels. a, The connected neighbors of a given node, i, which span three labeled modules: m_1 (3 edges), m_2 (4 edges), and m_3 (2 edges). b, This pattern can be represented as a probability distribution, p_i(m), that captures the probability of node i's connected neighbors being in each of the modules. The participation coefficient, P_i, and participation entropy, E_i, are then computed from p_i(m).

FIG. 2. Constraints on the relationship between P_i and E_i for networks containing M = 2, ..., 5 modules. The shaded region for each M indicates the additional allowed region, beyond that accessible for lower values of M.

FIG. 3. Participation entropy can be extended to the case where each node is labeled according to multiple module sets. Here we illustrate the case in which each node is labeled by two different module sets: M = {m_1, m_2, m_3} and S = {s_1, s_2, s_3}. Three cases are illustrated: a, b, labelings M and S are redundant; c, d, labelings M and S are statistically independent; and e, f, labelings M and S exhibit non-trivial dependence. a, c, e show the pattern of connectivity from a target node, i, to a set of nodes labeled by M and S. b, d, f show the conditional probability matrices, p(m_i | s_j), for connected neighbors of node i under both labelings.