Private list sharing leads to cooperation and the emergence of central hubs in an ABM

We introduce an agent-based model (ABM) framework to investigate how an alternative to the classic image score and gossip mechanisms can support the emergence of cooperation in a repeated prisoner's dilemma game among agents employing mixed strategies. We question the universality of image scores, arguing that they cannot be considered an objective property of the observed agents but rather a subjective property of each observer. From this assumption, we develop a private list mechanism for opponent selection and gossip sharing within the simulated population. The results show that the private list mechanism is able to foster the emergence of cooperation, and that different levels of list usage correspond to different levels of cooperation in the system. Finally, we observe interesting emergent topological properties, with networks characterised by one 'super-hub' connected to every other node, suggesting that centralized entities emerge to support cooperation.


Introduction
Cooperation, where individuals assist others at a personal expense, stands in stark contrast to defection, where individuals act selfishly. Understanding the emergence of cooperation through natural selection has been a biological objective since Darwin's era. In the context of repeated games, optimal strategies like 'tit for tat', demonstrated famously by Axelrod, promote reciprocating cooperation and defection [1]. It has been shown that even a single element adopting this strategy in an otherwise finite, all-defective population can create circumstances conducive to the emergence of cooperation [2]. However, the exact mechanisms promoting the emergence of cooperation are still being researched.
Two of these are direct and indirect reciprocity. They represent fundamental mechanisms through which cooperation can be fostered and supported in social systems. Direct reciprocity is a social interaction model where individuals respond to actions directed at them by reciprocating in kind. Essentially, it is the principle of 'I scratch your back, you scratch mine.' If one individual helps another, they expect to be helped in return in the future [3, 4].
Indirect reciprocity refers to a phenomenon where an individual's action towards others influences the actions that third parties will reciprocate towards them. In other words, if Alice helps Bob, then Charlie, who has seen this action, is more likely to help Alice in the future. This system promotes cooperative behavior, as good deeds are 'repaid' not by the direct recipient of the action, but by others in the group [5, 6].
Indirect reciprocity models often rely on 'image scores' [7]. Image scoring is a method of quantifying an individual's reputation based on their past actions, thus promoting indirect reciprocity. Image scores serve as a form of social credit that informs others about an individual's history of cooperation or defection [8]. A higher image score, generally achieved by consistently engaging in cooperative behavior, encourages others to reciprocate with cooperation. Conversely, a low image score can signal a history of non-cooperative behavior and discourage others from cooperating with the individual in question. Therefore, image scoring acts as a crucial mechanism regulating social behavior and promoting cooperation in groups [2].
However, the traditional approach to image scoring raises questions about the universality of this information. If the image score is used as a proxy for various characteristics within a population subset, it may be more fitting to consider it a property of the observer rather than of the agent observed.
Another fundamental mechanism supporting cooperation is gossip. Gossip functions as a powerful tool for promoting cooperation in social groups by influencing reputation and trust. By disseminating information about individuals' past behaviors, gossip indirectly incentivizes cooperative actions. Individuals tend to behave cooperatively when they know their actions can be shared amongst the group, promoting a positive reputation. Additionally, gossip allows group members to identify and preferentially interact with cooperators while avoiding defectors, thus reinforcing cooperative norms and discouraging selfish behavior [9, 10]. It has been suggested by empirical studies that gossip is more powerful when positive: it is better to know the cooperators rather than ostracize defectors [11].
Starting from our objection to the nature of the image score and the tendency of positive gossip to outperform negative gossip in fostering cooperation, our work has been directed towards exploring the role of a subjective image score and gossip mechanism.
To explore these issues we relied on the prisoner's dilemma (PD), a well-established game theory model demonstrating the intricacy of decision-making in situations involving cooperation and competition [1, 12–16]. When played repeatedly, the PD allows players to adapt their strategies based on past outcomes, opening the door to the utilization of social information, like gossip, to influence others' behavior. Gossip, serving as a potent means of information transmission, has been evidenced to play a key role in fostering cooperation in social networks [17].
In this study, we present an agent-based model (ABM) framework for simulating a repeated PD game with an opponent selection mechanism simulating a 'subjective' image score, and a tool to share this score with others, simulating gossip. ABM is a powerful computational approach that enables the creation and analysis of individual units, or 'agents', following specified rules within a model. ABMs are used to model complex systems following a bottom-up approach. Their application is widespread, ranging from economic and financial systems [18–22] to ecological ones [23–26]. Despite limitations dictated by the specificity intrinsic to ABMs, which can hinder the replicability of studies [27], one of the key advantages of ABM is the ease with which complex behaviours, such as gossip, can be implemented in simulations. In fact, among the first uses of this computational methodology was the exploration of the complexity of cooperation [28].
The purpose of this study is to examine the effect of a 'subjective' image score and gossip on the evolution of cooperation, trust, and reputation among a population of interacting agents. In addition, we attempt to define a sufficient condition for the spread of cooperation in our simulated social system by excluding explicit ostracism. To represent the fact that social group members rarely employ fixed strategies, we assumed the agents had mixed strategies and allowed them to improve their strategies using a genetic algorithm. Each agent has a 'subjective' image score of every other agent they have interacted with, which is used for opponent selection. These 'subjective' image scores are recorded in private lists, and each agent can select its opponent either randomly or from the private list of potential partners. If certain conditions are met, these lists can be shared with others, simulating gossip. We analysed how the use of private lists for partner selection affected the emergence of cooperation in the system and how this impacted the system's topological properties.
Our present framework sets the stage to further diminish the uncertainties associated with exogenous factors in the emergence of cooperation. This can be done by introducing agents empowered by reinforcement learning (RL) algorithms, similar to those already employed in other contexts [29, 30], which will enable a more adaptive and dynamic decision-making process. Reinforcement learning agents will simulate the evolution of strategies in changing environments. By operating within dynamic graphs that reflect the fluid social networks of human societies, RL agents will offer insights into the evolution of cooperation under conditions that closely mirror the evolution of human cooperation through time.

Private image score approach
In the realm of studying cooperation dynamics, the concept of an image score has been pivotal, with significant contributions from researchers like Nowak and Sigmund [8]. Nowak's approach to a public image score has been foundational, offering profound insights into how cooperative behaviors can be understood and modeled. The public image score, potentially representing a proxy for various universally available information, allows for a straightforward identification of cooperators within a system. Such an approach is undoubtedly sound and has yielded substantial results in the field.
However, in our study, we venture to explore a different aspect of image scoring: a private, subjective one. We posit that while public image scores are useful, their objective nature could lead to a scenario where everyone is capable of identifying cooperators. This universal recognition, in turn, might enable individuals to 'play' strategically by hunting known cooperators in order to exploit them by defecting. Our approach, therefore, introduces the concept of a private image score, which is subjective and unique to each observer. We believe that this approach adds a further layer of complexity and realism to the model, as it mirrors the nuanced nature of personal perceptions and judgments in real-world interactions. By adopting this subjective stance, our model aims to capture the variability and individuality of interactions, reflecting the diversity of strategies and outcomes that occur in natural cooperative scenarios.

Model and Methods
In this section, we delve into the details of our model and the methodologies employed in its development. Our aim is to provide a comprehensive understanding of the operational mechanics of the model and the tools used in its construction. A visualization of these mechanisms can be found in figure 1(a). To facilitate ease of reading and reference, we have compiled a glossary of all the acronyms used in this section, which can be found in table 1. We will detail the specific processes that govern agent interactions, the overall dynamics of the simulation and the tools used.
In our ABM we consider a population of N agents who play a repeated prisoner's dilemma game (RPDG) among each other for T time steps; agent i uses a mixed strategy, deciding to cooperate with probability given by its strategy score, SC_i, and to defect with probability 1 − SC_i, with 0 ⩽ SC_i ⩽ 1 and 0 ⩽ i ⩽ N. The model has been built in the Python programming language; the network visualizations have been created with the help of the software Gephi and the Python package NetworkX.
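The mixed-strategy rule above can be sketched in a few lines of Python (the function and variable names are ours, for illustration, not from the paper's code):

```python
import random

def play_round(sc_i, sc_j, rng):
    """One PD round under mixed strategies: each agent cooperates
    with probability equal to its own strategy score SC."""
    move_i = rng.random() < sc_i  # True means cooperate
    move_j = rng.random() < sc_j
    return move_i, move_j
```

A strategy score of 1.0 always cooperates and 0.0 always defects, with intermediate values behaving stochastically.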

Opponent selection
At each time step of the simulation, each agent chooses an opponent with whom to play the PDG. The choice is either random among all agents, with an equal probability of picking any agent in the population, or random among the agents present in the cooperators' list, with the probability of picking an agent weighted by the cooperation scores of the agents in the list. As shown in figure 1(b), an agent decides whether to use the completely random approach or the cooperators' list approach, selecting the latter with probability equal to the cooperators' list usage percentage, CL, of the simulation, and the former with probability 1 − CL, with 0 ⩽ CL ⩽ 1. In practice, we used values of CL in incremental steps of 0.1 from 0.0 (never using the list mechanism) to 0.9 (using the list mechanism to choose the opponent 90% of the time). We exclude the case CL = 1.0 since its results are trivial: agents would never choose their opponents randomly and therefore would never populate their lists beyond the initialization period.
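The selection rule can be sketched as follows, assuming (our convention, not the paper's) that each private list is a dictionary mapping opponent ids to cooperation scores:

```python
import random

def select_opponent(agent_id, population, coop_list, cl, rng):
    """With probability CL pick from the private cooperators' list,
    weighted by cooperation score; otherwise pick uniformly at random
    from the rest of the population."""
    candidates = [a for a in population if a != agent_id]
    if coop_list and rng.random() < cl:
        partners, scores = zip(*coop_list.items())
        if sum(scores) > 0:  # scores may all be zero, since they are floored at 0
            return rng.choices(partners, weights=scores, k=1)[0]
    return rng.choice(candidates)
```

When the list is empty (e.g. during the initialization period) the agent necessarily falls back to uniform random selection.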

Payoff matrix
The payoff matrix for the game is as follows:
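As a hedged sketch only: the text elsewhere states explicitly just that mutual cooperation pays 3; assuming the canonical PD values T = 5, R = 3, P = 1, S = 0 (all values other than R are our assumption), the matrix can be encoded as:

```python
# Assumed canonical PD payoffs: T=5, R=3, P=1, S=0.
# Only the mutual-cooperation payoff of 3 is confirmed by the text.
PAYOFF = {
    (True, True): (3, 3),    # both cooperate: reward R
    (True, False): (0, 5),   # i cooperates, j defects: sucker S / temptation T
    (False, True): (5, 0),
    (False, False): (1, 1),  # both defect: punishment P
}
```

These values satisfy the standard PD ordering T > R > P > S and 2R > T + S, so mutual cooperation beats alternating exploitation.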

Private Cooperators List Creation
After each game, both players record on a private list whether the opponent cooperated; if an agent cooperates, its 'cooperation score', S, on the list increases by 1, and if it defects the score decreases by 1. However, cooperation scores cannot be negative, meaning that defectors are not overly punished for their behaviour and that there is no defectors' list. This condition, for agent i with M elements in its list, is expressed as follows: S_i^j ⩾ 0, where 0 ⩽ i ⩽ N and 0 ⩽ j ⩽ M.
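A minimal sketch of this update rule, again assuming the list is a dictionary from opponent id to cooperation score; since there is no defectors' list, we assume an agent is only added on a first cooperation:

```python
def update_list(coop_list, opponent, cooperated):
    """+1 on cooperation, -1 on defection, floored at zero;
    unknown defectors are simply not tracked."""
    if cooperated:
        coop_list[opponent] = coop_list.get(opponent, 0) + 1
    elif opponent in coop_list:
        coop_list[opponent] = max(0, coop_list[opponent] - 1)
    return coop_list
```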

List Sharing
At the start of the simulation, a 50 time-step initialization period allows each agent to develop its own private list of cooperators. During this phase, agents interact and adjust their lists based on the outcomes of these interactions. Once the initialization period concludes, agents begin to share their lists under specific conditions. An agent will share its list with another if that agent's cooperation score exceeds the average score of all agents on the list. Mathematically, this condition for agent i sharing its list with agent k is expressed as:

S_i^k > (1/M_i) Σ_{j=1}^{M_i} S_i^j,    (1)

where S_i^k is the cooperation score of agent k in the list of agent i, and M_i is the total number of agents in agent i's list.
When an agent meets this sharing criterion, it receives a filtered list containing only those agents whose scores also surpass the list's average cooperation score. For example, if agent A has several agents on its list meeting this criterion, agent A will share this filtered list with all of them. As shown in figure 2, Agent B, who meets criterion (1), will receive the list of all the agents in Agent A's list satisfying the criterion, except for itself. If any of these agents are already present on Agent B's list, their cooperation score is increased by 1, similar to a scenario where an actual cooperative interaction occurred. Furthermore, if an agent is not present, it is added with a cooperation score of 1 (see for example Agent E in figure 2). The process is repeated for all agents meeting condition (1). This mechanism of selective list sharing expands each agent's network of known cooperators, contributing to the evolving dynamics of cooperative behavior in the simulation.
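The sharing step can be sketched as below, assuming `lists` maps each agent id to its private list (a dict of scores); the function name and data layout are illustrative:

```python
def share_lists(lists, i):
    """Agent i shares a filtered version of its list with every agent
    whose score in that list strictly exceeds the list average; each
    receiver gains +1 toward every shared agent (new ones start at 1)."""
    my = lists[i]
    if not my:
        return
    avg = sum(my.values()) / len(my)
    eligible = [k for k, s in my.items() if s > avg]  # criterion (1)
    for k in eligible:                  # each eligible agent receives the list
        for other in eligible:
            if other == k or other == i:  # never share an agent with itself
                continue
            lists[k][other] = lists[k].get(other, 0) + 1
```

This mirrors the Agent A/Agent B example: above-average agents receive the above-average portion of the list, minus themselves.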

Genetic Algorithm
Every t time steps, the 10 lowest-performing agents in terms of payoff have their SC replaced with that of the 10 best-performing agents, plus or minus a value, v, chosen randomly from a uniform distribution between −0.05 and 0.05.
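A hedged sketch of this selection step; clamping SC to [0, 1] is our assumption, since SC is defined as a probability:

```python
import random

def genetic_step(strategies, payoffs, rng, n_replace=10):
    """Replace the SC of the n_replace lowest-payoff agents with the SC
    of the n_replace highest-payoff agents, each perturbed by a uniform
    mutation v in [-0.05, 0.05]."""
    ranked = sorted(range(len(payoffs)), key=lambda a: payoffs[a])
    worst, best = ranked[:n_replace], ranked[-n_replace:]
    for w, b in zip(worst, best):
        mutated = strategies[b] + rng.uniform(-0.05, 0.05)
        strategies[w] = min(1.0, max(0.0, mutated))  # keep SC a valid probability
    return strategies
```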

Initialization
We set N = 100 and T = 2000, and initialize agents with a strategy score SC_i = 0.5 ± u, where u is a random value from a uniform distribution between −0.2 and 0.2. Every t = 5 time steps, the genetic algorithm substitutes the SC of the 10 lowest-performing agents in terms of payoff with that of the best 10, plus or minus a mutation. We performed the simulation for values of CL = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]. For each value of CL the simulation was run 50 times, and the results for each CL value are averages over these 50 runs, each of which uses a different seed for random number generation.

Network creation
At time steps t = [1, 50, 500, 1000, 2000] the network formed by the agents and their private lists is visualized. Each agent is a node with an out-degree equal to the number of agents present in its list, and an in-degree equal to the number of times it appears in someone else's list. In practice, for each agent/node a weighted outgoing link is created towards the agents/nodes present in its private cooperators' list, thus shaping the network. The weight is equal to the relative cooperation score of the corresponding element of the agent/node's private list.
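The network construction can be sketched as a plain edge list, which could then be handed to NetworkX for visualization; normalizing by the list total is our reading of 'relative cooperation score':

```python
def build_network(lists):
    """Turn the private cooperators' lists into a weighted, directed edge
    list: one edge (i, j, w) per entry j in agent i's list, where w is j's
    cooperation score divided by the total score of i's list."""
    edges = []
    for i, coop_list in lists.items():
        total = sum(coop_list.values())
        for j, score in coop_list.items():
            if total > 0:
                edges.append((i, j, score / total))
    return edges
```

An agent's in-degree is then the number of edges pointing at it, i.e. the number of private lists it appears in.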

Results
We performed simulations with different values of the cooperators' list usage percentage, CL, and observed how it impacted the emergence of cooperation in the system. The results reported are averaged over 50 independent runs of the simulation, the number of agents was equal to 100, and each simulation ran for 2000 cycles (time steps).

Impact of list use on cooperation scores
When CL = 0.0, as expected from game theory, the mean strategy score of the whole population decreases, meaning that, on average, the agents prefer to defect.
When the list mechanism is present (CL ≠ 0.0), instead, the average strategy score increases, leading to a more collaborative world. Figures 3 and 4 show different visual representations of this phenomenon. In particular, we observed that an increase of CL corresponds to an increase of the mean cooperation value (MCV); these increments are more pronounced for lower values of CL, while for higher values they tend to plateau. The values reached are shown in table 2, where we report the MCV at the end of the simulations for each value of CL.
To address the optimal duration for our simulations, we conducted additional simulations up to 5000 cycles, albeit with a reduced number of runs (10 instead of 50) due to computational constraints. Figure 3 presents the MCV across these extended cycles. It reveals that MCV stabilizes after 2000 cycles across all configurations, except in the absence of list use (CL = 0%), where divergence aligns with theoretical expectations. This plateau in MCV confirms that 2000 cycles allows one to capture the essential patterns of cooperation without unnecessary computational expenditure.
Another phenomenon we observed is a reduction of the standard deviation over time for higher values of CL. This is true for CL = [0.6, 0.7, 0.8, 0.9]. A qualitative analysis of figure 4 shows that the highest reduction of standard deviation is reached for CL = 0.7, 0.8, 0.9; for CL = 0.6 the reduction is less pronounced, and for other values it either fluctuates or increases.
Finally, a noteworthy phenomenon is that the average payoff per play, intended as the payoff obtained at the end of the simulation normalized by the number of times an agent played, follows the same trend, as we can see in figure 4.

Impact of list use on topology
An interesting topological phenomenon emerged from the list use. We displayed the network formed by the agents, where each agent is a node of the network and a weighted, directed arc between two agents represents the presence of one of the two in the other's cooperators' list (the weight being the cooperation score assigned in the list). We can observe the emergence of a central hub topology in the vast majority of the simulation instances. These central hubs are characterised by having all the other nodes connected to them (in-degree = 99) and a payoff a few orders of magnitude greater than those of the other top-performing agents. To address the emergence of central hubs in our simulations and their implications, we have expanded our analysis to draw parallels with central authorities in societal structures. This analogy helps in understanding the role of central hubs in facilitating cooperation and their potential emergence in larger, more complex networks. Our ongoing work involves improving the computational efficiency of our model to explore these dynamics with a larger population of agents.
In figure 5 we can see how the hubs (nodes with a green border) reach payoffs greatly superior to those of the rest of the network. This can be explained as an effect of them playing more often, which is a consequence of being in the list of every other agent and thus being chosen more often as a partner. However, it is worthwhile to note that while this effect emerges for all CL values, the hub is increasingly richer the higher the value of CL; only for CL = 0.7 is the average agent also better off, with a higher payoff per play, which decreases for CL = 0.8 and CL = 0.9, as seen in figure 4.
In figure 6 we show the network evolution at selected times for a single run. The colour of each node represents its in-degree centrality, with darker green indicating a higher degree relative to the network. This helps identify highly connected nodes. The darkness of the arcs represents the weight of the connections between nodes. The emergence of the central hub is clear.

Discussion
Gossip has been widely studied as a crucial element for the emergence of cooperation in complex societies [31]. Our research aimed to explore the possibility of cooperation in the absence of direct ostracism and without the use of an objective image score. In fact, we made very few assumptions about the system in which we observed the emergence of cooperation. A subjective image score is present, created by the interactions of agents who use a private list mechanism to record whether another player cooperated in the RPDG; the key difference from the 'image score' traditionally proposed by Nowak and Sigmund [8] is that in our research it is a property of the observer, not an intrinsic attribute of the agent observed. We showed that the private list mechanism is generally able to foster cooperation; however, list use needs to be quite frequent to obtain the best results. High values of CL seem to provide the best results in terms of the global cooperation levels reached, even though we observe many 'defector' outliers for very high values of CL (0.8–0.9). We theorize that over-reliance on the list mechanism can limit exposure to new, potentially more cooperative agents. This could result in over-dependence on a static group of known agents, potentially missing out on more cooperative opportunities. Therefore, while valuing established relationships is beneficial, the integration of new agents is equally crucial for optimal cooperation. Moreover, our approach was also innovative in being agnostic with regard to the network over which the agents interacted. We were able to reliably observe the emergence of a graph characterized by 'super nodes' with very high centrality. All the agents of the system are connected to this central hub. This phenomenon bears a clear resemblance to the rise of centralized entities, such as governments, in developed societies. It would be interesting to explore the possibility that such a phenomenon is essential to the emergence of cooperation in developed societies.
In conclusion, our findings entail the following results:
• The private list mechanism, simulating our new 'subjective' approach to the concept of image score, paired with list sharing, in our view reproducing gossip, is able to support the emergence of cooperation. We showed that this is true even when using mixed strategies in an RPDG and without explicit image scores or ostracism. Simply recording and sharing a private list of cooperators is sufficient for the diffusion of cooperation. Moreover, we showed how this phenomenon is correlated with the rate of list use.
• In all the simulations we found that the networks formed have one central hub (in a few cases two) with a disproportionate amount of payoff at the end of the simulation, and to which every other node is connected. Clearly, the high payoff is a consequence of the fact that everyone is connected to these central hubs, giving them more chances to be chosen as players at each iteration, but the reason for their emergence and their significance should be further explored in future research. Moreover, it is interesting to note that the hubs accumulate higher payoffs the higher the value of CL, but around CL = 0.7 the average agent is also better off, with a higher payoff per play (figure 4), hinting that an optimal configuration of list use exists that guarantees a 'fairer' payoff distribution. This hub structure resembles centralized-hierarchical societies where a 'super agent' (e.g. a government) is connected to everyone. This is in accordance with recent research [32] supporting the idea of spontaneously emerging cooperative hierarchical societies. The relevance of gossip in this phenomenon will be further addressed in future work.
To further explore the implications of central hubs and their resemblance to centralized entities in societies, our future research will delve into the conditions and dynamics that lead to the formation of such hubs. This includes investigating scenarios with varying population sizes and the potential emergence of multiple hubs. The insights gained from these studies could offer valuable contributions to understanding the development of cooperative structures in complex societies. The observed higher payoffs for hubs in our model can be attributed to their increased frequency of being chosen as an opponent. As these hubs are included in the cooperators' lists of all other agents, they naturally have a higher chance of being selected for gameplay. This increased opportunity to play allows hubs to accumulate higher payoffs. Additionally, when these hubs cooperate (which they tend to do on average), their likelihood of being chosen by other agents further increases, creating a positive feedback loop. This phenomenon within our model draws a parallel with the roles of central entities in real societies.
We believe the central hubs in our model can metaphorically represent entities like governments or central banks in real-world cooperative societies. Such entities often play a pivotal role, necessitating interaction from various societal members, similar to how agents in our model must engage with the central hub. This interaction can be likened to participating in a PD game, where consistent cooperation or defection with the hub significantly influences an agent's overall standing and success within the network. For example, in societal terms, individuals and organisations frequently interact with governmental bodies, adhering to regulations and fulfilling obligations like tax payments, which mirrors the frequent interactions with the hub in our model. Those who align with the hub's 'rules' (cooperators) benefit in the long run, while those who do not (defectors) find themselves at a disadvantage, as their likelihood of being selected for positive interactions diminishes.
Finally, to build upon our findings and to refine the predictive power of our model under varying external influences, subsequent research will incorporate reinforcement learning. By enabling agents to adapt through a system of rewards and punishments, reinforcement learning can simulate a learning process that mirrors the natural selection and social dynamics of human societies. Future simulations will deploy RL agents within dynamically evolving graphs, allowing for an in-depth examination of cooperation strategies as they evolve in real-time and in more accurate settings. Such approaches could significantly reduce the impact of exogenous uncertainties and provide a clearer understanding of the essential mechanisms that drive the development of cooperative behaviors in complex societies.

Figure 1 .
Figure 1. Model Visualizations. (a) Model visualization of a complete cycle. This panel illustrates the sequence of events occurring during a single time step in the model. Initially, each agent selects an opponent and engages in a prisoner's dilemma (PD) game. Following the game, agents record the outcome: if their opponent cooperated, they add them to their private list, or if the opponent is already on the list and did not cooperate, they reduce the opponent's cooperator score by one unit. Subsequently, agents share their lists with others included in their own list. The cycle concludes with the application of a genetic algorithm, which identifies agents with the highest payoffs. Agents with lower payoffs are then replaced by copies of those with the best performance, simulating natural selection and strategy evolution. (b) Opponent selection mechanism visualization. This panel illustrates how agents select their opponents in each time step. With a probability of 1 − CL (the probability to use the cooperators' list), an agent selects an opponent randomly, treating each potential opponent equally. Conversely, with a probability of CL, the agent selects from its list of known cooperators. In this case, the selection is weighted based on the cooperator score of each agent on the list, meaning agents with higher cooperator scores are more likely to be chosen.

Figure 2 .
Figure 2. Visualization of List Sharing Mechanism. This diagram illustrates how Agent A shares its list with Agent B. Agents meeting the criterion reported in (1) are highlighted with red circles. Agent B, being among these agents, receives the list. Consequently, Agent B updates its list: the cooperator score of Agent C is increased by 1, and Agent E is newly added with an initial cooperator score of 1. Agent D remains unchanged in Agent B's list as it does not satisfy the sharing criterion.

Figure 3 .
Figure 3. MCV through time for different values of CL. (a) Mean Cooperation Value (MCV) trends across simulations for 2000 cycles. This graph presents the MCV for each simulation, averaged over 50 runs, with each line representing a different value of the probability to use the cooperators' list (CL). At lower CL values, the graph shows that cooperation tends to emerge with difficulty, indicating a struggle for cooperative behavior to take hold. Conversely, as CL values increase, the simulations consistently lead to more cooperative states, highlighting a direct correlation between higher CL values and the propensity for cooperative behavior within the simulation. (b) Mean Cooperation Value (MCV) trends across simulations for 5000 cycles. For each simulation, the MCV has been averaged over 10 runs.

Figure 4 .
Figure 4. Effects on Cooperation of List Use. (a) Box plot of MCV for different percentages of CL. In this boxplot, the median of the data is indicated by a red line. The top and bottom edges of the box represent the 75th and 25th percentiles, respectively. Whiskers extend from each end of the box to the furthest data points within 1.5 times the interquartile range (IQR) from the upper and lower quartiles. Data points lying beyond this range are categorized as outliers and depicted as small circles. Notably, the whiskers show a decreasing spread, particularly at a CL value of 0.7, which indicates a narrower distribution range (and a correspondingly lower standard deviation). Additionally, at this CL value, outliers tend to be closer to the box, as opposed to those at higher CL values (0.8 and 0.9), where outliers are more dispersed. (b) Box plot of payoff per play for different percentages of CL. We present the average payoff per play for each value of CL, normalized to account for variations in the number of plays by an agent. These values represent the final payoffs at the end of the simulation, averaged over 50 runs for each CL value. By normalizing the payoff over the number of plays, we ensure that the data reflects the efficiency of strategies, rather than merely the frequency of play. We can observe that payoffs saturate from CL = 0.7 to a value approximately equal to 2.5. Notice that this value is very high since the maximum cooperation payoff is 3 in the game.

Figure 5 .
Figure 5. Top Performing Nodes. For CL = 0.7, the top 5 nodes in terms of payoff at the end of the simulation are shown for each of the 50 runs. The size represents the degree of the node, with more connected nodes being larger. The colour represents the payoff, with red indicating nodes with the highest payoffs and blue lower payoffs. Nodes with green borders are connected to all other nodes (in-degree = 99).

Figure 6 .
Figure 6. Network Evolution Over Time. This figure illustrates the evolution of the network at specific time intervals for a selected run with CL = 0.7. Node colors indicate in-degree centrality, with darker green shades representing higher relative degrees, thereby identifying highly connected nodes. The darkness of the arcs corresponds to the weight of connections between nodes, illustrating the strength of these connections. The emergence of a central hub within the network is clearly depicted, demonstrating its increasing connectivity over time. The size of each node represents its payoff.

Figure 8 .
Figure 8. Matrix Visualizations for different CL Values.

Table 1 .
Definitions of all the acronyms.

Table 2 .
Values of MCV for different values of CL.