
The dual problems of coordination and anti-coordination on random bipartite graphs


Published 10 November 2021 © 2021 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: Matthew I Jones et al 2021 New J. Phys. 23 113018. DOI: 10.1088/1367-2630/ac3319


Abstract

In some scenarios ('anti-coordination games'), individuals are better off choosing different actions than their neighbors, while in other scenarios ('coordination games') it is beneficial for individuals to choose the same strategy as their neighbors. Although these two classes of games create different incentives and population dynamics, it is largely unknown which collective outcome, anti-coordination or coordination, is easier to achieve. To address this issue, we focus on the distributed graph coloring problem on bipartite graphs. We show that with only two strategies, anti-coordination games (two-colorings) and coordination games (uniform colorings) are dual problems that are equally difficult to solve. To prove this, we construct an isomorphism between the Markov chains arising from the corresponding anti-coordination and coordination games under certain specific individual stochastic decision-making rules. Our results provide novel insights into solving collective action problems on networks.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

An n-coloring of a graph is a labeling of the vertices of the graph with n different colors such that for each pair of vertices connected by an edge, the vertices have different labels. Finding n-colorings is a classic graph theoretic problem. However, in recent years, graph colorings have also been adopted into the field of collective dynamics to study networked coordination games [1, 2].
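As a concrete illustration, checking whether a given labeling is a proper coloring is mechanical. The following minimal Python sketch (our own illustrative helper, not part of the original analysis) stores the graph as an adjacency dict:

```python
def is_proper_coloring(graph, coloring):
    """Return True if no edge joins two vertices with the same label."""
    return all(coloring[u] != coloring[v]
               for u, nbrs in graph.items() for v in nbrs)

# A 4-cycle is bipartite, so it admits a proper two-coloring.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_proper_coloring(square, {0: 'A', 1: 'B', 2: 'A', 3: 'B'}))  # True
print(is_proper_coloring(square, {0: 'A', 1: 'A', 2: 'B', 3: 'B'}))  # False
```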

For the purposes of this paper, collective action games fall into two broad categories: games where individuals coordinate to pick the same strategies (referred to as coordination games) [3, 4, 14], and games where individuals coordinate to pick different strategies (referred to as anti-coordination games) [5–7, 12, 13]. Coordination games can often be resolved if the players are allowed to communicate, but asymmetries in anti-coordination games can make cooperation difficult and highly dependent on network structure [6]. In general, these differing incentives lead to vastly different population dynamics, but in this paper we will see that under certain circumstances the two classes of games can be viewed as duals of one another.

There is a rich history of playing games and modeling interactions on graphs as a way to examine the effects of our social structure [1, 6, 12–18]. For example, studying the incentives and frameworks that shape player behavior has proven particularly useful for those interested in fostering certain kinds of behavior, such as cooperation, by allowing punishment or partner choice, among other mechanisms [27–30]. Also, many social coordination problems, such as time tabling and radio frequency assignment, can be phrased as graph coloring problems [8, 9]. However, unlike in the purely graph theoretic context, these social problems come with the additional complication that individuals may not have complete knowledge of the population structure. A graph coloring problem in which each vertex has to choose its color using only local information (the colors of its neighbors) introduces new complications to the classic problem, and stochastic behavior is often needed to successfully find an n-coloring of the graph [10, 11]. Distributed graph coloring problems can be considered one kind of anti-coordination game, where individuals play games with their neighbors and try to choose different strategies, or colors. Solving the graph coloring problem is equivalent to finding the social optimum.

This framework can also be used to study opinion dynamics in structured populations. The voter model, like our coordination game, is a classic example of individuals in a networked population playing a coordination game: they use myopic update rules in an attempt to reach consensus with those around them using only limited local information [31–33]. Conversely, the anti-coordination game appears in the context of contrarians or 'hipsters' who make choices specifically to distinguish themselves from those around them [34, 35].

In this work, we consider the simple case of a connected, bipartite graph which always admits exactly two two-colorings. For an omniscient observer that can view the entire graph and dictate colors to vertices, finding one of these two-colorings is a trivial matter. However, things become more difficult when there is no central decision-maker, and instead each vertex represents an individual who must choose her own color with no information except the colors of her neighbors [11]. This new game, which uses local information instead of global information, has an interesting consequence: finding a two-coloring of the graph, which models an anti-coordination game, is equivalent to getting all individuals in the graph to choose the same color, which is a coordination game.

Thus, in the context of bipartite graphs, anti-coordination games and coordination games are dual problems, and a whole new class of coordination games where everyone wants to opt for the same strategy can also be modeled as a graph coloring problem. We show this by defining two Markov chains [19, 20] on the space of colored graphs, one where individuals are playing the anti-coordination game and one where individuals are playing the coordination game, and showing that they are isomorphic.

2. Theoretical results

2.1. A natural bijection for update rules for two-colorings and uniform colorings

In this paper, the individuals located at each vertex will operate using a simple set of update rules. These rules can incorporate random behavior, but the update decisions depend only on the color of an individual's neighbors. Consider the relationship between update rules for anti-coordination and coordination games. We will see that any update rule for an individual playing an anti-coordination game can be adapted to an update rule for playing a coordination game and vice versa. At its most basic, an anti-coordination rule aims to minimize the number of neighbors with the same color, and the goal of a coordination rule is to maximize the number of neighbors with the same color. Therefore, we can turn an anti-coordination update rule into a coordination update rule just by picking the opposite color every time.

Suppose we have an individual vertex with a neighbors playing color A and b neighbors playing color B, as in figure 1(a). Given an anti-coordination rule in which the central individual selects color A with probability p(a, b) and color B with probability 1 − p(a, b), we can construct the corresponding coordination rule as follows: choose A with probability 1 − p(a, b) and B with probability p(a, b).

Figure 1. A simple case to demonstrate the bijection of update rules with two color choices. Making an anti-coordination decision in (a) will have the same outcome as making a coordination decision in (b), since all the colors of the neighbors have changed to the other color. If an individual would have chosen blue in (a) to match with as few neighbors as possible, that would correspond to choosing blue in (b), where the goal is to match with as many neighbors as possible.


Consider an update rule (anti-coordination or coordination) with a function p(a, b) that gives the probability of choosing color A. If we switch the colors of all neighbors, the probability of choosing A becomes p(b, a), because now b neighbors are playing A and a neighbors are playing B. There is a natural restriction to impose on the possible update rules: if we switch the color of every neighbor, moving from figure 1(a) to figure 1(b), the probabilities of the central vertex choosing color A, p(a, b), and color B, 1 − p(a, b), should switch as well. Setting the probability of choosing A equal to the probability of choosing B after switching all the neighbors' colors gives the following complementary condition:

$p(b,a) = 1 - p(a,b)$    (1)

For any anti-coordination update rule, a vertex with a color A neighbors and b color B neighbors will choose A with some probability p(a, b). If we switch the colors of all the neighbors, the vertex will choose A with probability p(b, a) = 1 − p(a, b), which is exactly the probability of a coordination player choosing A. Therefore, an anti-coordination algorithm can be converted into its dual algorithm for a coordination game by temporarily switching the colors of all the neighbors, applying the anti-coordination update rule, and switching the neighbors' colors back. As an example, an anti-coordination update rule applied to figure 1(a) will have the same behavior as a coordination update rule applied to figure 1(b).

The same process can be used to convert a coordination algorithm to an anti-coordination algorithm.
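In code, this conversion is a one-line wrapper. The sketch below is our own illustration (the function names are hypothetical, and the proportional rule is our own example, not one of the named rules studied later):

```python
def dual_rule(p):
    # Given an anti-coordination rule p(a, b) -- the probability of choosing
    # color A with a neighbors playing A and b playing B -- the dual
    # coordination rule complements the probability, which by the
    # complementary condition (1) equals p(b, a).
    return lambda a, b: 1 - p(a, b)

p_anti = lambda a, b: b / (a + b)     # example rule; it satisfies equation (1)
p_coord = dual_rule(p_anti)
print(p_anti(1, 3))                   # 0.75: anti-coordinator favors minority color A
print(p_coord(1, 3))                  # 0.25: P(A); coordinator favors majority color B
print(p_coord(1, 3) == p_anti(3, 1))  # True: 1 - p(a, b) = p(b, a)
```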

To put the above individual choice function p(a, b) in context, it is worthwhile to introduce a few intuitive anti-coordination update rules. The first, called randomness-first, makes a uniformly random color choice with probability r and otherwise, with probability 1 − r, chooses a color that minimizes color conflicts. This update rule can be expressed as:

$p(a,b)=\begin{cases}1-\frac{r}{2} & \text{if } a<b\\ \frac{1}{2} & \text{if } a=b\\ \frac{r}{2} & \text{if } a>b\end{cases}$    (2)

Under the second update rule, called memory-0, individuals first attempt to choose any color that eliminates all color conflicts. If that is not possible, the individual chooses randomly with probability r and otherwise with probability 1 − r chooses the color minimizing conflicts with neighbors. In our terms, this algorithm is

$p(a,b)=\begin{cases}1 & \text{if } a=0 \text{ and } b>0\\ 0 & \text{if } b=0 \text{ and } a>0\\ 1-\frac{r}{2} & \text{if } 0<a<b\\ \frac{1}{2} & \text{if } 0<a=b\\ \frac{r}{2} & \text{if } 0<b<a\end{cases}$    (3)
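For concreteness, both memoryless rules can be written out directly. The following Python sketch (our own reconstruction from the descriptions above; the function names are ours) returns the probability of choosing color A and satisfies the complementary condition (1):

```python
def randomness_first(a, b, r):
    # With probability r choose uniformly at random (r/2 per color);
    # otherwise pick the color minimizing conflicts (ties split evenly).
    if a < b:
        return 1 - r / 2
    if a > b:
        return r / 2
    return 0.5

def memory_0(a, b, r):
    # First try to eliminate all conflicts outright...
    if a == 0 and b > 0:
        return 1.0
    if b == 0 and a > 0:
        return 0.0
    # ...otherwise fall back on randomness-first behavior.
    return randomness_first(a, b, r)

# Sanity check of equation (1): p(b, a) = 1 - p(a, b).
for a, b in [(1, 3), (2, 2), (0, 4), (5, 1)]:
    assert abs(memory_0(b, a, 0.1) - (1 - memory_0(a, b, 0.1))) < 1e-12
```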

The third main update rule, called memory-1, is like memory-0 except that the agent only makes a random choice if no neighbors have changed color in the last round of updates. Since this is not a memoryless update rule, it has no corresponding p(a, b) function, and the following proof would need to be modified to cover update rules with finite memory, chiefly by enlarging the state space of the Markov chains to include the recent history of colorings of the graph. While we do not work through the details of proving that a finite-memory update rule also satisfies the isomorphism, we do show results of computer simulations in section 3 demonstrating that the duality of coordination and anti-coordination still holds.

This is only a small selection of all possible update rules. Any function that satisfies equation (1) and returns values between 0 and 1 could serve as an update rule, although many would be very ineffective. The three update rules described above are all intuitively reasonable and simple to express, which made them excellent candidates for study in prior work on network graph coloring problems [11]. However, there are other natural update rules that we do not explicitly describe here; for example, an individual may wish to choose each color with probability proportional to the number of neighbors playing that color.

In what follows, we demonstrate that an anti-coordination update rule is exactly as effective at finding a two-coloring as the corresponding coordination update rule is at finding a uniform color for the whole bipartite network.

2.2. Two Markov chains

For a connected, bipartite graph G of size N, let col(G) be the set of all possible labelings of the graph G. Note that here we refer to all ways of labeling the vertices of G with either color A or color B, not just two-colorings in which no neighbors share the same color.

The system updates as follows: the graph is initialized by randomly assigning each vertex a color. An update order is created that describes the order in which the vertices update their colors. The update order is represented as a list of the numbers 1 through N, i.e. a permutation of N elements. The set of all permutations of N elements, called the symmetric group on N elements, is denoted SN. The vertices continually update their colors in this order, one at a time, until the desired coloring (either a two-coloring or a uniform coloring) is found.

Now we can define our Markov chains. Let {Xi } be a Markov chain using an anti-coordination update rule, and let {Yi } be the Markov chain using the associated coordination update rule, as described above. The state space Ω of both chains is the set of ordered triples (G*, σ, m) where G* ∈ col(G), σ ∈ SN, and m ∈ {1, 2, ..., N}. Here G* represents the colors of the vertices of the graph at some time i, σ is the order in which the vertices update, and m is the current position in the update cycle.

The state space is quite large, but for each state, there are exactly two states to which the Markov chains can move with non-zero probability, shown in figure 2.

Figure 2. A demonstration of the possible transitions in both Markov chains. The next vertex to update is marked by a gold ring. Transitioning from (a) to (b) minimizes matches with neighbors' colors and is more likely in the anti-coordination Markov chain, while transitioning from (a) to (c) maximizes matches with neighbors and is more likely in the coordination Markov chain.


To begin, we initialize both Markov chains (anti-coordination and coordination) by sampling from the uniform distribution Π over Ω, so each starting coloring is equally likely.

Without loss of generality, let Xj = Yj = (G*, σ, m). Here, σ(m) is the vertex that is about to update. Let ${G}_{A}^{\ast }$ be the colored graph that is the same as G* except possibly σ(m) which has color A, and ${G}_{B}^{\ast }$ the same but for color B. In each step of the Markov chains, σ(m) selects one of two colors and the position in the update cycle increases by one, resetting to 1 if necessary. The update order σ remains unchanged. Thus, if σ(m) has a color A neighbors and b color B neighbors,

$P\left({X}_{i+1}=({G}_{A}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{X}_{i}=({G}^{\ast },\sigma ,m)\right)=p(a,b)$    (4)

$P\left({X}_{i+1}=({G}_{B}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{X}_{i}=({G}^{\ast },\sigma ,m)\right)=1-p(a,b)$    (5)

$P\left({Y}_{i+1}=({G}_{A}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{Y}_{i}=({G}^{\ast },\sigma ,m)\right)=1-p(a,b)$    (6)

$P\left({Y}_{i+1}=({G}_{B}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{Y}_{i}=({G}^{\ast },\sigma ,m)\right)=p(a,b)$    (7)
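A single transition of either chain can be sketched as follows (our own illustrative code, not the authors' implementation; `sigma` is stored as a list of vertices and p(a, b) is the anti-coordination choice function):

```python
import random

def step(graph, coloring, sigma, m, p, coordination=False):
    # One transition implementing equations (4)-(7): the vertex sigma(m)
    # redraws its color and the position m advances cyclically.
    v = sigma[m - 1]                                 # the vertex sigma(m)
    a = sum(coloring[u] == 'A' for u in graph[v])    # neighbors playing A
    b = sum(coloring[u] == 'B' for u in graph[v])    # neighbors playing B
    p_A = (1 - p(a, b)) if coordination else p(a, b)
    coloring[v] = 'A' if random.random() < p_A else 'B'
    return coloring, sigma, m % len(sigma) + 1       # resets to 1 after N
```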

2.3. A Markov chain isomorphism

For bipartite graphs, we claim that these Markov chains {Xi } and {Yi } are isomorphic. First, because G is a connected, bipartite graph, the vertices can be partitioned into two groups: in a two-coloring, all vertices in the same group share a color, and vertices in different groups have different colors. Let S be the set of vertices of one of these groups. Because we are working with two-colorings of bipartite graphs, we can define a function ϕ : col(G) → col(G) that switches the color of every vertex in S, and define ψS : Ω → Ω as the extension of ϕ in the natural way, acting on the coloring G* and leaving σ and m unchanged. We claim that ψS is a Markov chain isomorphism between {Xi } and {Yi }. This requires proving that two conditions hold. First, ψS must be bijective. Second, ψS must commute with the transition matrices of {Xi } and {Yi }, i.e. the probability of Xi moving from x to y must equal the probability of Yi moving from ψS (x) to ψS (y). More formally, for all x, y ∈ Ω,

$P\left({X}_{i+1}=y\,\middle|\,{X}_{i}=x\right)=P\left({Y}_{i+1}={\psi }_{S}(y)\,\middle|\,{Y}_{i}={\psi }_{S}(x)\right)$    (8)

If equation (8) holds, the two Markov chains are equivalent in that after relabelling the states in Ω (according to ψS ), the Markov chains are identical.

2.4. Proof of isomorphism

That ψS is bijective is straightforward: switching the colors of the vertices in S twice restores the original coloring, so ϕ is its own inverse. Hence ϕ is both one-to-one and onto, and therefore ψS is as well.
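This can be seen at a glance in code (a minimal sketch under our own naming, with a coloring stored as a dict): applying ϕ twice restores the original coloring, so ϕ is an involution.

```python
def phi(coloring, S):
    # Switch the color of every vertex in S; leave the rest unchanged.
    flip = {'A': 'B', 'B': 'A'}
    return {v: flip[c] if v in S else c for v, c in coloring.items()}

coloring = {0: 'A', 1: 'A', 2: 'B', 3: 'B'}
S = {0, 2}
assert phi(phi(coloring, S), S) == coloring  # involution, hence bijective
```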

Now we will prove equation (8). Since we are considering Markov chains moving from x to y (or ψS (x) to ψS (y)), let x = (G*, σ, m). Let a and b be the number of color A and color B neighbors of σ(m) in G*, respectively.

We begin by conditioning on Xi = x = (G*, σ, m). Equations (4) and (5) give the only two possible states of Xi+1 and their transition probabilities:

$P\left({X}_{i+1}=({G}_{A}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{X}_{i}=x\right)=p(a,b)$    (9)

$P\left({X}_{i+1}=({G}_{B}^{\ast },\sigma ,m\ \text{mod}(N)+1)\,\middle|\,{X}_{i}=x\right)=1-p(a,b)$    (10)

Once again, ${G}_{A}^{\ast }$ and ${G}_{B}^{\ast }$ are the same as G* except σ(m) which has color A or B, respectively.

Now we consider Yi+1 given that Yi = ψ(x) = ψ((G*, σ, m)) = (ϕ(G*), σ, m). σ(m) is the next vertex to update, and either it is in the subset S or it is not. These two cases must be handled separately.

2.5. Case 1: σ(m) ∈ S

If σ(m) ∈ S, none of σ(m)'s neighbors are in S, so σ(m) still has a color A neighbors and b color B neighbors. Because we are now in the coordination Markov chain {Yi }, σ(m) chooses its color according to equations (6) and (7).

With probability p(a, b), σ(m) chooses color B. Because σ(m) ∈ S, ϕ(G*) becomes $\phi ({G}_{A}^{\ast })$ when σ(m) chooses B. Thus, ${Y}_{i+1}=(\phi ({G}_{A}^{\ast }),\sigma ,m\enspace \text{mod}(N)+1)={\psi }_{S}(({G}_{A}^{\ast },\sigma ,m\enspace \text{mod}(N)+1))$.

With probability 1 − p(a, b), σ(m) chooses color A, and ${Y}_{i+1}=(\phi ({G}_{B}^{\ast }),\sigma ,m\enspace \text{mod}(N)+1)={\psi }_{S}(({G}_{B}^{\ast },\sigma ,m\enspace \text{mod}(N)+1))$.

Thus, when σ(m) ∈ S, equation (8) holds (figure 3).

Figure 3. An example on a small bipartite graph showing that ψS commutes with the transition matrices, when σ(m) ∈ S. Color A is blue and color B is red. The top row shows the transition in the anti-coordination Markov chain, and the bottom is the transition in the coordination Markov chain. In both chains, this particular transition occurs with probability p(1, 2).


Figure 4. An example showing that ψS commutes with the transition matrices when σ(m) ∉ S. Color A is blue and color B is red. The top is the anti-coordination Markov chain, and the bottom is the coordination Markov chain. This time, the transition occurs with probability p(2, 1).


2.6. Case 2: σ(m) ∉ S

If σ(m) ∉ S, then all of its neighbors are. So in ϕ(G*), σ(m) has b color A neighbors and a color B neighbors.

With probability 1 − p(b, a) = p(a, b), σ(m) chooses color A, and ${Y}_{i+1}=(\phi ({G}_{A}^{\ast }),\sigma ,m\enspace \text{mod}(N)+1)$.

With probability p(b, a) = 1 − p(a, b), σ(m) chooses color B, and ${Y}_{i+1}=(\phi ({G}_{B}^{\ast }),\sigma ,m\enspace \text{mod}(N)+1)$.

So equation (8) holds when σ(m) ∉ S (figure 4). Therefore, ψ is a Markov chain isomorphism.

2.7. Equivalence of the two-coloring and uniform coloring problems

Now we are prepared to state and defend the main claim of this work: when using local information, the anti-coordination and coordination problems are equivalent. Any result regarding the efficacy of an update rule p(a, b) for an anti-coordination game can also be applied to a coordination game, and vice versa.

Since the initial distribution Π is the uniform distribution and ψS is bijective, ψS (Π) = Π and both Markov chains begin from the same distribution. Furthermore, because ψS switches the colors of the set S, any state Xi whose coloring is a valid two-coloring maps to a state ψS (Xi ) whose coloring is uniform. For all times i, applying equation (8) i times tells us that moving the anti-coordination chain from a state X0 to a state Xi happens with the same probability as moving the coordination chain from Y0 = ψS (X0) to Yi = ψS (Xi ). Because Π is the uniform distribution, for all x ∈ Ω and for all times i:

$P({X}_{i}=x)=P({Y}_{i}={\psi }_{S}(x))$    (11)

Critically, this says that the probability of solving the anti-coordination problem in i steps is the same as that of solving the coordination problem in i steps, for all i. Moreover, because the two processes are linked at each step, finer statistics also coincide; for example, the expected number of player color changes is the same.
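The step-by-step linkage can also be verified numerically. The sketch below (our own construction, reusing the hypothetical `phi` and `memory_0` helpers from earlier sketches) drives both chains with a single shared uniform draw per update, reflecting the draw when the updating vertex lies outside S; up to a measure-zero tie, the coordination trajectory then remains the pointwise image of the anti-coordination trajectory under ψS.

```python
import random

def coupled_step(graph, S, x, y, v, u, p):
    # Mirrored update of both chains at vertex v from one uniform draw u.
    a = sum(x[w] == 'A' for w in graph[v])
    b = sum(x[w] == 'B' for w in graph[v])
    x[v] = 'A' if u < p(a, b) else 'B'        # anti-coordination chain
    a2 = sum(y[w] == 'A' for w in graph[v])
    b2 = sum(y[w] == 'B' for w in graph[v])
    u2 = u if v in S else 1 - u               # reflect the draw off S
    y[v] = 'B' if u2 < p(a2, b2) else 'A'     # dual coordination chain

rng = random.Random(1)
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # bipartite 6-cycle
S = {0, 2, 4}
x = {v: rng.choice('AB') for v in cycle6}     # X_0 sampled uniformly
y = phi(x, S)                                 # Y_0 = psi_S(X_0)
for i in range(600):
    coupled_step(cycle6, S, x, y, i % 6, rng.random(),
                 lambda a, b: memory_0(a, b, 0.1))
    assert y == phi(x, S)                     # trajectories stay mirrored
```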

This result also holds for any update rule with finite memory. Any stochastic process whose transition probabilities depend on only a finite number of previous states can be re-expressed as a Markov chain by defining the new state space to be tuples of elements of the old state space, and the same construction works here for any update rule that considers a fixed number of previous update steps.

3. Simulation results

This result has been confirmed by a variety of simulations. First, we take a broad approach: we create a large number of different networks and populate each with individuals playing a particular anti-coordination update rule. We then repeatedly attempt to find a two-coloring of the network, collecting data on the probability of finding a two-coloring, the number of update cycles needed, and the number of players updated. Next, using the same network with individuals playing the associated coordination update rule, we repeatedly search for a uniform coloring, collecting the same metrics. After repeating this on all the networks, we have a large data set that, if anti-coordination and coordination games are equivalent, should consist of two samples from the same probability distribution.

This is indeed what we see when applying the two-sample Kolmogorov–Smirnov test to data collected from 1000 different networks. For all three metrics (probability of solving the network, update cycles, and updated players), the K–S statistic is below 0.015 with a p-value greater than 0.999. This strongly suggests that the samples are drawn from the same distribution and that the two problems are equivalent.
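The comparison can be reproduced in miniature. The sketch below (our own toy construction on a 6-cycle, reusing the hypothetical `memory_0` rule from section 2; the paper's 1000 networks are not reproduced here) collects solving times for both problems and compares them with SciPy's two-sample K–S test:

```python
import random
from scipy.stats import ks_2samp

def solve_time(graph, r, coordination, rng, max_updates=5000):
    # Number of single-vertex updates until the target coloring appears:
    # a proper two-coloring (anti-coordination) or a monochromatic
    # coloring (coordination).
    col = {v: rng.choice('AB') for v in graph}
    order = list(graph)
    rng.shuffle(order)
    for t in range(max_updates):
        v = order[t % len(order)]
        a = sum(col[u] == 'A' for u in graph[v])
        b = sum(col[u] == 'B' for u in graph[v])
        p_A = memory_0(a, b, r)
        col[v] = 'A' if rng.random() < (1 - p_A if coordination else p_A) else 'B'
        if coordination:
            done = len(set(col.values())) == 1
        else:
            done = all(col[u] != col[w] for u in graph for w in graph[u])
        if done:
            return t + 1
    return max_updates

rng = random.Random(42)
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
anti = [solve_time(cycle6, 0.1, False, rng) for _ in range(500)]
coord = [solve_time(cycle6, 0.1, True, rng) for _ in range(500)]
print(ks_2samp(anti, coord))  # expect a small statistic and a large p-value
```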

We can also examine the moment-to-moment behavior of each system more closely by counting the number of color conflicts in the network at every time step, averaged over multiple runs. A color conflict is an edge whose ends have the same color (in the case of an anti-coordination game) or different colors (in the case of a coordination game). Previous work [11] dealt mainly with three update rules: randomness-first, memory-0, and memory-1. In figure 5, we see the results of many simulations on the same graph with these three update rules. The x axis is log-scaled to clearly show both the short-term and long-term behavior.
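For reference, such a conflict count can be computed as follows (an illustrative helper of our own; the adjacency dict stores each edge in both directions, hence the division by two):

```python
def count_conflicts(graph, coloring, coordination=False):
    same = sum(coloring[u] == coloring[v]
               for u, nbrs in graph.items() for v in nbrs) // 2
    total = sum(len(nbrs) for nbrs in graph.values()) // 2
    # An anti-coordination conflict is a monochromatic edge; a coordination
    # conflict is a bichromatic edge.
    return total - same if coordination else same
```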

Figure 5. Plots showing the time evolution of the number of color conflicts using three reasonable update rules: (a) randomness-first, (b) memory-0, and (c) memory-1. Crucially, the anti-coordination and coordination variants of the same update rule have the same behavior in all three plots. Curves are the average of 1000 simulations for each update rule. For randomness-first, the random behavior probability was 0.5. For memory-0 and memory-1, the random probability was 0.1.


Although the proof given above does not strictly apply to the memory-1 update rule, it can be modified to work for any update rule that gives its agents finite memory by enlarging the state space to ordered tuples of network colorings. In figure 5(c), we see that finding uniform colorings and two-colorings are equally difficult on random bipartite graphs.

These simulations confirm that the behavior when searching for a two-coloring is the same as when searching for a uniform coloring, regardless of the specific update rule.

4. Discussion & conclusion

Studying the collective behavior of individuals in a large group has long been an important research area in statistical physics and related fields. The question of 'collective action', the tendency of individuals in a group to forgo short-term selfish behavior in favor of long-term group benefit, has been extensively discussed and examined. Of particular interest is classifying the environmental factors that foster cooperation within a group, particularly in the case of public goods games and the prisoner's dilemma [23]. There is a plethora of studies that use networks to model a group's social structure, and the exact topology of a network can have a profound impact on the cooperation inside a group [21, 24–26]. Additionally, empirical research uses human trials to examine how people behave, rationally or irrationally, when actually playing public goods games with others [22].

Our results add to the study of collective action: the games considered here approximate public goods games in that individuals sometimes need to take selfless actions (choosing colors that increase their own color conflicts) in pursuit of the long-term goal of group success (finding a two-coloring or uniform coloring) [11]. Our present work shows that these two fundamentally different games behave in the same way on random bipartite networks.

Our finding is counterintuitive, but it is important to remember that it applies in a relatively narrow range of scenarios. A bipartite structure is unlikely in most social networks, which means anti-coordination and coordination are equivalent problems only in the small selection of populations that happen to be bipartite, with an initial coloring sampled uniformly from all possible colorings. However, bipartite networks do occur widely in real systems with two different types of individuals, such as media producers and consumers [36, 37], or in a sexual contact network that only considers heterosexual connections [38]. Moreover, the duality has no parallel for n-colorings with n > 2.

Acknowledgments

FF is grateful for the generous financial support by the NIH COBRE Program (Grant No. 1P20GM130454), the Bill & Melinda Gates Foundation (Award No. OPP1217336) and the Neukom CompX Faculty Grant.

Data availability statement

No new data were created or analysed in this study.
