Claudio Delrieux & Javier Legris (eds.)
Computer Modeling of
Scientific Reasoning
Third International Workshop
Computational Models of Scientific
Reasoning and Applications
Buenos Aires, Argentina
September 14-15, 2003
Proceedings
UNIVERSIDAD NACIONAL DEL SUR
Departamento de Ingeniería Eléctrica y Computadoras
Two Evolutionary Games: Collective Action
and Prisoner’s Dilemma
Jakson Alves de Aquino
Federal University of Minas Gerais,
Belo Horizonte, Brazil
Abstract. This paper presents two agent-based models that simulate the evolution of cooperation in the prisoner's dilemma game and in a kind of n-person prisoner's dilemma, here called the collective action game. The models were built as computer applications, and the agents, instead of being rational, have propensities to feel emotions. The interplay between the agents' personalities and the situations in which they find themselves determines which emotions arise and thus partially determines their behavior.
1 Game Theory, Rationality and the Problem of Cooperation
Sociologists and political scientists use statistics to analyze data, but for a long time their attempts to use mathematics to formalize social theories were not successful. Only in the last few decades has a branch of theoretical research in the social sciences, game theory, begun to build formal explanations of social phenomena. However, the social world is too complicated to be easily translated into mathematics.
To be able to build formal explanations, game theorists have made some
simplifying assumptions about the behavior of human beings. The two most important of these assumptions are that human beings are strictly rational and that they have complete information about the game being played.
The real world is rarely as simple as game theory describes, and this lack of realism frequently makes the games difficult to interpret; that is, we often cannot say that the way a game works closely resembles what happens in the real social world. In spite of these limitations, some games remain sound even under the simplifying assumptions, and game theory is an excellent way of conducting thought experiments, shedding light on important social puzzles. One of them is the problem of cooperation.
An isolated human being is poorly prepared to survive and is certainly unable to provide a wealthy life for himself. Individuals can achieve a better standard of living by cooperating with others. What game theory has shown is that, in many circumstances, cooperation is not a rational thing to do. If the result of cooperation is a good that will be available to everyone in a community, regardless of who cooperated in its production, then the best thing to do is not to incur the cost of cooperating and to participate only in the consumption of the good. However, this is the best thing to do for everyone. Thus, the good will not be produced at all, and we have a problem of collective action. Imagining a very small group of only two people, the argument can be formalized with a simple game: the prisoner's dilemma.
In the prisoner’s dilemma, two individuals must choose one of two possible
actions: cooperate or defect. Both players make their decisions simultaneously, and,
hence, they do not know whether the other will cooperate or not. Below is an example of a payoff structure for this game:
                      Player 1
                   C          D
Player 2    C     3,3        0,5
            D     5,0        1,1

(In each cell, the first payoff is Player 2's and the second is Player 1's.)
If both cooperate, both are rewarded with 3 units of something. If both defect, both receive only 1 unit. Nevertheless, the worst scenario is to cooperate alone and receive nothing. In any case, regardless of the other individual's action, the larger payoff is earned if the individual does not cooperate.
Cooperation is not rational in the prisoner's dilemma game, but we can see that people in real life do cooperate. Hence, we must reach the obvious conclusion that the game must be modified if we want something more similar to what really happens in society.
It is rational to cooperate if some conditions are satisfied: the same two individuals will play the game many times, they do not know how many times they will play the game, and they consider future payoffs valuable enough. However, in real life people also meet strangers and cooperate with them even knowing that they will never meet again. A common explanation for this fact is that real people follow social norms prescribing cooperation. Nevertheless, following norms is not always rational and, as Edling argues, building a game that simulates the encounters of many different people is not an easy task for traditional game theory: "Modeling true heterogeneity means adding a new equation for each individual. Even with moderately large social systems, this quickly becomes cumbersome. This approach to modeling processes is therefore best left for macroprocesses and for the analysis of aggregate data" (Edling, 2002).
The complexity of combining many functions in a single calculus grows exponentially as the number of individuals and interactions among them increases. Instead of attempting this big calculation, we can use a computer to simulate many interactions among individuals, each encounter involving only simple calculations. In this approach, we simulate social phenomena from the bottom up: we model the behavior of individuals, but the result can be interpreted as a social outcome. One advantage of computer simulation over traditional game theory is the ease with which we can model learning and evolutionary processes.
Using this approach to the problem of cooperation in society, Robert Axelrod (1997) built a computer simulation of the sustainability of social norms and the consequent evolution of cooperation. In Axelrod's model, cooperation evolves if, in addition to a norm of punishing defectors, there is a second-order norm (what he calls a metanorm) of also punishing individuals who do not follow the norm of punishing non-cooperators. Following Axelrod, I built a version of his norms and metanorms games, modifying, however, some features of the game to make it a little more realistic.
In the next sections, I present and discuss the two models I developed.
2 Two Evolutionary Games
Some scholars consider the problem of collective action to be a kind of n-person prisoner's dilemma. In fact, we can imagine the payoffs in a way that resembles the payoff structure of the prisoner's dilemma. In both games, on average, individuals would be better off if all cooperated, but each one would be better off defecting. However, differences exist and are important. We can imagine a world where people form a multitude of cooperative pairs or very small groups but never engage in collective action that requires the cooperation of all or almost all individuals. Here the two situations are modeled separately: the Prisoner's Dilemma Game models a situation where many agents play the prisoner's dilemma game, and the Collective Action Game models a situation where agents of a virtual society must decide whether or not to contribute to the production of a collective good that will be available to everyone. Of the two games, the Collective Action Game is the most similar to Axelrod's Norms and Metanorms Games.
The two games presented here follow an evolutionary approach: agents are born in a torus world, accumulate wealth while playing the games repeatedly, reproduce, and die. The richer an agent is when it reaches the age of reproduction, the larger its number of offspring. The models do not assume that agents are rational and follow rational strategies: they have personalities. Since emotions were translated into numbers, the agents can genetically inherit the ability to feel emotions. The interplay between their personalities and the situations in which they find themselves determines which emotions arise and, thus, partially determines their behavior.
The games consist of repeated cycles of agent activation. In each cycle, every agent is activated only once, and the activation sequence changes from cycle to cycle. When activated, the agent tries to move to an adjacent cell; if the cell is empty, the agent moves to it.
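To make the activation scheme concrete, here is a minimal sketch in Python of the cycle loop, under the assumption that the world is stored as a dictionary mapping occupied cells to agents; the class and function names (Agent, neighbors, run_cycle) are illustrative and not taken from the original software.

import random

SIZE = 20  # the world is a SIZE x SIZE torus (illustrative value)

class Agent:
    def __init__(self, agent_id, x, y):
        self.id = agent_id
        self.x, self.y = x, y
        self.age = 0
        self.wealth = 0.0

def neighbors(x, y):
    """The eight cells adjacent to (x, y) on the torus."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def run_cycle(agents, occupied):
    """One cycle: every agent is activated once, in a freshly shuffled order."""
    order = list(agents)
    random.shuffle(order)           # a different activation sequence each cycle
    for agent in order:
        target = random.choice(neighbors(agent.x, agent.y))
        if target not in occupied:  # move only if the chosen cell is empty
            del occupied[(agent.x, agent.y)]
            agent.x, agent.y = target
            occupied[target] = agent
        # otherwise the agent stays and (in the full models) interacts
        agent.age += 1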
In the first population, agents are created with an initial age that varies between 1 and the average age of reproduction. Afterwards, agents are born with age 0. Throughout the whole run of the game, agents are also created with predetermined ages of reproduction, drawn from a normal distribution whose mean and standard deviation can be chosen. The age of reproduction is at most one standard deviation away from the mean. At each cycle, the age of every agent increases by one unit. When an agent reaches its age of reproduction, it reproduces and dies. New agents appear in cells adjacent to their parents; if all adjacent cells are occupied, they are born in a randomly chosen place. The reproduction of agents happens at the end of each cycle.
The number of offspring an agent has is a result of its wealth when it reaches the age of reproduction. Initial wealth is zero. Agents whose wealth is 0.5 standard deviations or more below the average do not reproduce. Agents whose wealth is less than 0.5 standard deviations away from the average generate 1 clone. Finally, agents whose wealth is 0.5 standard deviations or more above the average generate 2 clones. At run time, the number of offspring is continually adjusted to keep the number of agents near the value chosen at the beginning of the game.
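A minimal sketch of this end-of-cycle reproduction step, reusing neighbors() from the sketch above; it assumes that the wealth statistics are computed over the agents reproducing in the same cycle and leaves gene inheritance to a make_child callback, both of which are my reading of the text rather than documented details.

import random
import statistics

def offspring_count(wealth, mean_w, std_w):
    """0.5 std below the mean or worse: 0 clones; within 0.5 std: 1; 0.5 std above or better: 2."""
    if std_w == 0:
        return 1
    z = (wealth - mean_w) / std_w
    return 0 if z <= -0.5 else (2 if z >= 0.5 else 1)

def reproduce(parents, occupied, world_cells, make_child):
    """End-of-cycle step for agents that reached their age of reproduction.
    make_child(parent, x, y) is expected to build a newborn with mutated genes."""
    if not parents:
        return
    wealths = [a.wealth for a in parents]
    mean_w, std_w = statistics.mean(wealths), statistics.pstdev(wealths)
    for parent in parents:
        for _ in range(offspring_count(parent.wealth, mean_w, std_w)):
            free = [c for c in neighbors(parent.x, parent.y) if c not in occupied]
            if not free:  # all adjacent cells occupied: place the newborn at random
                free = [c for c in world_cells if c not in occupied]
            if free:
                x, y = random.choice(free)
                occupied[(x, y)] = make_child(parent, x, y)
        del occupied[(parent.x, parent.y)]  # the parent dies after reproducing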
Agents inherit from their parents a genetic code, that is, a collection of genes, each gene being a sequence of ten binary digits (0 or 1). The agents have the following genes: boldness, vengefulness, gratefulness, and remembrance. For the creation of the first population, the probability of each bit being the digit 1 can be set before running the game. When agents are born, each bit of each gene has a probability of 0.02 of suffering mutation.
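The paper does not describe the implementation of the genetic code, but a minimal sketch consistent with the description (ten-bit genes, a per-bit mutation probability of 0.02) might look as follows; reading a gene's value as its number of 1 bits, giving a value between 0 and 10, is an assumption suggested by the boldness scale used later.

import random

GENE_LENGTH = 10
MUTATION_RATE = 0.02  # per-bit probability of flipping at birth
GENE_NAMES = ("boldness", "vengefulness", "gratefulness", "remembrance")

def new_gene(p_one=0.5):
    """A ten-bit gene; p_one is the initial probability of each bit being 1."""
    return [1 if random.random() < p_one else 0 for _ in range(GENE_LENGTH)]

def mutate(gene):
    """Each bit flips independently with probability MUTATION_RATE."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in gene]

def gene_value(gene):
    """Assumed decoding: the gene value (0..10) is the number of 1 bits."""
    return sum(gene)

# A newborn inherits a mutated copy of each parental gene.
parent_genome = {name: new_gene() for name in GENE_NAMES}
child_genome = {name: mutate(gene) for name, gene in parent_genome.items()}
print({name: gene_value(g) for name, g in child_genome.items()})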
3 The Collective Action Game
The Collective Action Game is very similar to Axelrod's model. However, some differences are important to note. In Axelrod's model there are only 20 agents, and there is no lattice world where the agents live. The probability of an agent being seen defecting or not punishing a defector is drawn from a uniform distribution between 0 and 1, that is, on average it is 0.5 (Axelrod, 1997). In my model, the number of agents can be as few as 20 or as many as 1,000, agents live in a lattice world, and only agents who are near enough to the acting agent can see it. While agents are moving randomly around the world, there is no difference between having a lattice world and drawing the probability of interaction from some kind of distribution. However, as we will see in the Prisoner's Dilemma Game, modeling a lattice world can be very useful because it allows the development of models with an evolving probability of interaction among agents. Other differences are that my model does not have simultaneous reproduction (agents can "live" for more than 4 cycles) and the number of cycles to be executed is not limited to 100 generations, as in Axelrod's models.
In the Collective Action Game, if the game is being played without norms, the agents have only boldness. If norms (and metanorms) are included, the agents also have vengefulness. In any case, the agent's boldness alone determines the probability of cooperating or defecting: the computer generates a random number between 0 and 1 and, if this number is bigger than the agent's boldness divided by 10, the agent cooperates.
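In other words, the probability of defecting is simply boldness divided by 10. A minimal sketch of the decision, assuming boldness has already been decoded to a value between 0 and 10:

import random

def cooperates(boldness):
    """Cooperate if a uniform draw exceeds boldness/10; defect otherwise."""
    return random.random() > boldness / 10.0

# An agent with boldness 3 cooperates with probability 0.7.
print(sum(cooperates(3) for _ in range(10_000)) / 10_000)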
When an agent is activated, it tries to move to an adjacent cell and then can either cooperate in the production of a collective good or refuse to cooperate. If it cooperates, its current wealth decreases by 3 units and society's wealth increases by 6 units. At the end of the cycle, when all agents have made their decision to either contribute or free ride, the accumulated wealth of society is distributed almost equally among all agents (some noise is added, and each agent receives between 0.95 and 1.05 times the accumulated wealth divided by the number of agents).
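A minimal sketch of one cycle of contribution and redistribution; drawing the noise factor uniformly from [0.95, 1.05] is an assumption, since the exact distribution of the noise is not stated:

import random

COST = 3      # wealth the cooperator gives up
BENEFIT = 6   # what each act of cooperation adds to society's wealth

def collective_action_cycle(agents, decide):
    """agents: list of objects with a .wealth attribute;
    decide: function mapping an agent to True (cooperate) or False (defect)."""
    pot = 0.0
    for agent in agents:
        if decide(agent):
            agent.wealth -= COST
            pot += BENEFIT
    share = pot / len(agents)
    for agent in agents:
        # almost equal distribution: each share is perturbed by +/- 5% noise
        agent.wealth += share * random.uniform(0.95, 1.05)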
I ran 36 simulations of the Collective Action Game. Each run had 9,000 cycles, and the following combinations of parameters were tested: norms (played or not played), metanorms (played or not played), age of reproduction (mean 25 and std. dev. 5, or mean 100 and std. dev. 10), dimensions of the world (10 x 10, 20 x 20, or 50 x 50), and number of agents (12, 35, 40, 140, 950). In all simulations, the initial average boldness and vengefulness were 5. Without norms and metanorms, as expected, cooperation did not evolve. The average proportion of cooperation in the last 20 cycles was 0.0013. Below is the graph of one of the runs:
Fig. 1. First 2000 cycles of evolution of boldness and proportion of cooperation in a simulation of
Collective Action Game without norms (initial number of agents, 140; mean age of reproduction, 25;
world dimensions, 20 x 20).
If the game includes norms, when an agent does not cooperate, its neighbors who are up to two cells away can observe the defection. After seeing a defection, the neighbor must decide either to follow the norm of punishing the defector or to do nothing. The probability of an agent being vengeful depends entirely on its genetically inherited propensity to feel anger at non-cooperators. Again, the computer generates a random number and, if it is big enough, the agent punishes the non-cooperator. The punishment decreases the wealth of both agents, punisher and punished, by 2 and 5 units, respectively. In the simulations that included norms but not metanorms, cooperation evolved in 5 runs and did not evolve in 7, as can be seen in the table below:
Table 1. Average Proportion of Cooperation in the Last 20 Cycles

P. of Coop.    Frequency
0.00           6
0.18           1
1.00           5
I ran the five simulations that yielded the evolution of cooperation again, now with 50,000 cycles, and the three with the biggest and most populated worlds collapsed, as shown in the table below:
Table 2. Cycle when Cooperation Began Collapsing in Collective Action with Norms in Three Simulations

Dimensions of World    N. of Agents    Cycle
10 x 10                40              25,000
20 x 20                140             20,000
50 x 50                950             15,000
As can be seen, the bigger the world, the more difficult it is to sustain cooperation using norms alone. The two simulations that sustained cooperation until cycle 50,000 had only 12 agents. As many authors have pointed out, including Olson (1965), cooperation is easier in small groups because each individual receives a bigger share of its own contribution. In the Collective Action Game, the collapse of cooperation has a typical pattern. As can be seen in the figure below, in the beginning cooperation increases, but as agents with high vengefulness are selected out, defection is no longer punished and cooperation collapses.
Fig. 2. Evolution of boldness, vengefulness, and proportion of cooperation in a simulation of the Collective Action Game with norms (world dimensions, 20 x 20; initial number of agents, 140; mean age of reproduction, 25).
If the game includes metanorms, the neighboring vigilant agents can punish another agent who has decided not to punish a non-cooperator. The process is the same as for norms: again, the punishment decreases the wealth of both agents. There are no "meta-metanorms", and nothing happens to agents that do not follow the norm of punishing non-punishers.
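A minimal sketch of the norm and metanorm enforcement step described above; it assumes that "up to two cells of distance" means a Chebyshev distance of at most 2 on the torus, that the punishment probability is vengefulness divided by 10, and that a metanorm punisher must itself see the non-punishing witness. These readings are mine, not details given in the text.

import random

PUNISH_COST, PUNISH_DAMAGE = 2, 5  # cost to the punisher, damage to the punished

def sees(observer, actor, size):
    """True if observer is within two cells of actor on the torus (Chebyshev distance)."""
    dx = min(abs(observer.x - actor.x), size - abs(observer.x - actor.x))
    dy = min(abs(observer.y - actor.y), size - abs(observer.y - actor.y))
    return max(dx, dy) <= 2

def punish(punisher, punished):
    punisher.wealth -= PUNISH_COST
    punished.wealth -= PUNISH_DAMAGE

def enforce(defector, agents, size, metanorms=False):
    """Neighbors may punish a defector; with metanorms, non-punishers may be punished too."""
    for witness in agents:
        if witness is defector or not sees(witness, defector, size):
            continue
        if random.random() < witness.vengefulness / 10.0:
            punish(witness, defector)
        elif metanorms:
            # the witness chose not to punish; other vigilant neighbors may punish it
            for meta in agents:
                if meta in (witness, defector) or not sees(meta, witness, size):
                    continue
                if random.random() < meta.vengefulness / 10.0:
                    punish(meta, witness)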
Since the mean age of reproduction was 25 in the only case where cooperation did not evolve with metanorms, I ran again the other five simulations that included metanorms and had the same age of reproduction, now with 50,000 cycles. All five simulations sustained cooperation throughout the 50,000 cycles. The real problem was population density, because it determines the probability of an agent being seen defecting. I ran some simulations with different world dimensions and different initial numbers of agents. With low density, cooperation did not evolve; with intermediate density, cooperation began to evolve but collapsed, as typically happened in the norms game; with high density, cooperation was sustainable. Since cooperation is easier in small groups, what is low density in a large society can already be high density in a small population. That is, cooperation can begin to evolve in small groups with norms alone, but the development of metanorms would be necessary for the appearance of a large cooperative society.
Most Collective Action Game results are equivalent to those of Axelrod's Norms Game and Metanorms Game: in Axelrod's Norms Game cooperation can evolve if the conditions are highly favorable, and in Axelrod's Metanorms Game the evolution of cooperation fails only when the conditions are highly unfavorable. One difference is that, since Axelrod's model does not have something that could be called "population density", in his models only the initial values of boldness and vengefulness are important in determining the most likely outcome. Another difference is that, although cooperation initially increased in my Collective Action Game with norms, it was not sustainable, collapsing as soon as the average vengefulness fell below a critical level.
4 The Prisoner’s Dilemma Game
In the Prisoner's Dilemma Game, there is no collective good that is produced and distributed. Each agent receives its payoff immediately after playing one shot of the game with another agent. In the simplest Prisoner's Dilemma Game, an activated agent randomly chooses one of its eight adjacent cells and decides to move to it. If the cell is occupied, instead of moving, it plays the prisoner's dilemma with the occupant of the cell. If it is the first interaction between the two agents, the probability of cooperating or defecting is determined solely by the agents' boldness levels. The payoff an agent receives is added to its wealth. Below are the values assigned to the different payoffs:
T = 5 (temptation to defect)
R = 3 (reward for mutual cooperation)
P = 0 (punishment for mutual defection)
S = -3 (“Sucker’s” payoff)
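A minimal payoff function for one shot of the game using these values; the convention that the first argument is the focal agent's move is only an illustrative choice:

T, R, P, S = 5, 3, 0, -3  # temptation, reward, punishment, sucker's payoff

def payoff(my_move, other_move):
    """Payoff to the focal agent; moves are 'C' (cooperate) or 'D' (defect)."""
    table = {('C', 'C'): R, ('C', 'D'): S,
             ('D', 'C'): T, ('D', 'D'): P}
    return table[(my_move, other_move)]

# Example: a lone cooperator earns S = -3 while the defector earns T = 5.
print(payoff('C', 'D'), payoff('D', 'C'))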
Since there is no norm prescribing that everyone must punish defectors, each agent must defend itself and punish any agent that takes advantage of its generosity. However, the prisoner's dilemma has such a structure that this vengeance is possible only if the two agents interact again in the future. The once exploited agent has to remember the other agent and be vengeful by the only means it has: not cooperating the next time they play the prisoner's dilemma. Thus, in the Prisoner's Dilemma Game, agents must have a cognitive ability that was unnecessary for the development of cooperation in the Collective Action Game: the capacity to recognize individuals and to remember past interactions with them. But this cognitive ability is not yet enough. Since an agent will cooperate all the time only if its genetically inherited propensity to be bold is zero, even a predominantly cooperative agent will sometimes defect. The second agent, being vengeful, will then have its propensity to defect increased. The first agent will become more vengeful and less cooperative because of the second agent's increased level of defection, and so on. To counterbalance this deleterious effect of vengefulness, the agents in the Prisoner's Dilemma Game were equipped with another emotion: gratefulness.
Each agent now has a unique identification number (ID). An agent can memorize up to ten IDs and, for each ID, up to ten results of the Prisoner's Dilemma Game, including the payoff earned and the date of the game. When an agent plays the Prisoner's Dilemma with an eleventh different agent, it can memorize this new player and the result of the game by forgetting the agent with whom its most recent game is the oldest. Thus, each agent has its own propensity to be bold, vengeful, or grateful (or the three things at the same time) according to the history of its past interactions with the other player and to its ability to be sensitive to many or to few past interactions. An agent can be highly rancorous, remembering each past defection and thus increasing its own propensity to defect in its next interaction with that other agent, or it can have a very short memory, with only the last interaction triggering its vengeful behavior. The same short or long memory determines how many past cooperations of the other agent will increase its propensity to cooperate in the next game with that agent.
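The memory described above is essentially a bounded map from partner IDs to a bounded list of game records. A minimal sketch (names are illustrative; the eviction rule drops the partner whose most recent game is the oldest, as stated in the text):

from collections import deque

MAX_PARTNERS = 10   # at most ten different IDs remembered
MAX_RECORDS = 10    # at most ten game results per ID

class Memory:
    def __init__(self):
        self.records = {}  # partner_id -> deque of (cycle, payoff)

    def remember(self, partner_id, cycle, payoff):
        if partner_id not in self.records and len(self.records) == MAX_PARTNERS:
            # forget the partner whose most recent game is the oldest
            oldest = min(self.records, key=lambda pid: self.records[pid][-1][0])
            del self.records[oldest]
        history = self.records.setdefault(partner_id, deque(maxlen=MAX_RECORDS))
        history.append((cycle, payoff))

    def recall(self, partner_id, remembrance):
        """The last `remembrance` payoffs received when playing with partner_id."""
        if remembrance <= 0:
            return []
        history = self.records.get(partner_id, deque())
        return [payoff for _, payoff in list(history)[-remembrance:]]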
Before deciding what to do, an agent scans its memory, searching for the ID of the other player. If the other's ID is found, the agent recalls the past payoffs received after playing with that agent. The number of past games recalled depends on the agent's genetically inherited remembrance capacity. For example, if the agent has a remembrance level of 10 and the two agents have interacted at least ten times in the past, all ten interactions will be remembered; if its remembrance level is 5, it will remember the last five interactions, provided that they have already played the game at least five times. If the agent remembers past cooperation by the other agent, it can be grateful for this; if what it finds in its memory are past defections, it can act vengefully in the current interaction, defecting instead of cooperating. The boldness level is also sensitive to the past: it increases if there are Ts among the remembered payoffs. Thus, if two agents have already played the game in the past, the probability of defecting is no longer simply proportional to the boldness level. It is the result of a set of "emotions" triggered by remembered events.
Consider the following variables:
b: genetically determined boldness propensity;
v: genetically determined vengefulness propensity;
g: genetically determined gratefulness propensity;
T: number of Ts received among the remembered payoffs;
R: number of Rs among the remembered payoffs;
P: number of Ps among the remembered payoffs;
S: number of Ss received among the remembered payoffs;
n: number of games remembered.
First, if the two agents have already played the prisoner's dilemma and the agent remembers that the other was a sucker at least once in the past, the boldness increases:
b' = b + (1 − b) ⋅ T / n        (1)
Then, the probability of not cooperating is determined according to the following
formula:
p = b' + [(1 − b') ⋅ v ⋅ (S + P) − b' ⋅ g ⋅ (R + T)] / n        (2)
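As a check on the formulas, here is a minimal sketch of the decision step; it assumes that b, v, and g are the gene values divided by 10, so that they lie between 0 and 1, and that out-of-range probabilities are clamped, neither of which is stated explicitly in the text:

import random

T, R, P, S = 5, 3, 0, -3

def defection_probability(b, v, g, remembered):
    """Probability of not cooperating, given remembered payoffs from this partner."""
    n = len(remembered)
    if n == 0:
        return b  # first interaction: boldness alone decides
    t = remembered.count(T)   # times the other was a sucker
    r = remembered.count(R)
    p_cnt = remembered.count(P)
    s = remembered.count(S)   # times this agent was the sucker
    b_prime = b + (1 - b) * t / n                        # equation (1)
    prob = b_prime + ((1 - b_prime) * v * (s + p_cnt)
                      - b_prime * g * (r + t)) / n       # equation (2)
    return min(1.0, max(0.0, prob))  # clamping is an assumption, not stated in the paper

def plays_defect(b, v, g, remembered):
    return random.random() < defection_probability(b, v, g, remembered)

# Example: an agent that was exploited twice and cooperated mutually once.
print(defection_probability(b=0.3, v=0.8, g=0.5, remembered=[S, S, R]))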
I ran 48 simulations of the Prisoner's Dilemma Game. Each run had 9,000 cycles, and the following combinations of parameters were tested: trust game (played or not played), movement (random or towards the best neighbor), age of reproduction (mean 25 and std. dev. 5, or mean 100 and std. dev. 10), dimensions of the world (10 x 10, 20 x 20, and 50 x 50), and number of agents (12, 35, 40, 140, and 950). In all simulations, the initial average boldness, vengefulness, gratefulness, and remembrance were 5. Without the trust game and with agents moving randomly, even with remembrance and gratefulness, cooperation did not evolve, as can be seen in the figure below:
Fig. 3. Evolution of boldness, vengefulness, gratefulness, remembrance, and proportion of mutual
cooperation in a simulation of Prisoner’s Dilemma Game with agents moving randomly and without
Trust Game (dimensions of world, 10 x 10; initial number of agents, 40; mean age of reproduction,
25).
In the simulations that yielded the results above, playing the game was an obligation: if the randomly chosen cell was occupied, the two agents had no option but to play the game. As an attempt to create a situation more favorable to the evolution of cooperation, the agents were relieved from this obligation, and the prisoner's dilemma game became a Trust Game.
In the Trust Game, if the chosen destination cell is not empty, the agent evaluates the occupant of the cell. If the occupant was predominantly a cooperator in the past, the agent offers to play the Prisoner's Dilemma. The second agent evaluates the first too and, if the first also was a good player in the past, the offer to play the game is accepted. An agent is considered a good neighbor if the average payoff obtained with it is either above or equal to the value of P (mutual defection).
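A minimal sketch of this evaluation, reusing the Memory sketch above; treating an unknown partner as a good neighbor (so that first encounters remain possible) is my assumption, not something stated in the paper:

P = 0  # payoff for mutual defection

def is_good_neighbor(memory, partner_id):
    """Good neighbor: average remembered payoff with this partner is at least P."""
    history = memory.records.get(partner_id)
    if not history:
        return True  # assumption: strangers are given the benefit of the doubt
    payoffs = [payoff for _, payoff in history]
    return sum(payoffs) / len(payoffs) >= P

def trust_game_played(agent_a, agent_b):
    """The game is played only if each agent considers the other a good neighbor."""
    return (is_good_neighbor(agent_a.memory, agent_b.id)
            and is_good_neighbor(agent_b.memory, agent_a.id))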
The results were unexpected: the trust game did not change the outcomes significantly. I considered the possibility that the trust game was not enough to yield cooperation alone, but that it could at least delay the collapse and accelerate the evolution of cooperation. To test this possibility, I measured the average proportion of cooperation yielded up to cycle 1,000 in the 48 simulations. Again, the changes in the results were not significant:
Table 3. Average Proportion of Mutual Cooperation in the First 1,000 Cycles

Kind of Game          Mean    N     Std. Deviation
Without Trust Game    0.34    24    0.25
With Trust Game       0.31    24    0.26
Total                 0.32    48    0.25
In the prisoner's dilemma, if two agents do not play the game many times, it is not rational to cooperate. Our agents are not rational, but those agents with the "right" emotions (high vengefulness and high boldness) become wealthier and reproduce more than the other agents. Cooperation did not evolve because the probability of two agents playing the game many times during their lives was not high enough. One obvious way of raising this probability is to make the agents live longer in a smaller world. Another way is to give the agent the chance of choosing where to move. This second option was implemented by giving agents the ability to move towards the best neighbor.
In the real world, people prefer to join acquaintances rather than unfamiliar persons to solve problems because, in general, friends are more trustworthy than strangers. In this game, if the agent is moving towards the best neighbor and there is at least one good neighbor, the chosen cell will be the one occupied by the best neighbor. When agents are moving towards the best neighbors, the probability of encountering another agent in the next move and playing the Prisoner's Dilemma is not uniform throughout the game. We can see the agents clustering in one or more groups, because when two nice players find each other they "become friends" and do not move until one of them dies. This would not be easy to program if the model were not agent-based (that is, a lattice world populated with virtual agents). Cooperation finally evolved. The figure below shows the evolution of cooperation:
Fig. 4. Evolution of boldness, vengefulness, gratefulness, remembrance, and proportion of mutual
cooperation in a simulation of Prisoner’s Dilemma Game with movement towards the best neighbor
and without Trust Game (world dimensions, 20 x 20; initial number of agents, 140; mean age of
reproduction, 25).
The story, however, is not finished yet. In two of the 48 simulations (both with 12 agents), cooperation did not evolve; one of them included the Trust Game, the other did not. I ran a new set of 40 simulations with the following common parameters: 30,000 cycles; agents move towards the best neighbor; world dimensions, 10 x 10; number of agents, 12; mean age of reproduction, 25. Half included the trust game, and in half the agents also had the ability to avoid the worst neighbor. In this case, if there is no good neighbor and there is a neighbor that predominantly defected in the past, the agent moves in the opposite direction.
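A minimal sketch of the movement choice with both options, reusing is_good_neighbor and P from the Trust Game sketch; how the best neighbor, the "opposite direction", and ties are resolved is not specified in the paper, so those details are illustrative:

import random

def choose_destination(agent, neighbors_by_cell, avoid_worst=False):
    """neighbors_by_cell maps each of the eight adjacent cells (x, y) to its occupant or None."""
    def avg_payoff(other):
        history = agent.memory.records.get(other.id, [])
        payoffs = [p for _, p in history]
        return sum(payoffs) / len(payoffs) if payoffs else 0.0

    occupied = {cell: occ for cell, occ in neighbors_by_cell.items() if occ is not None}
    good = {cell: occ for cell, occ in occupied.items()
            if is_good_neighbor(agent.memory, occ.id)}
    if good:
        # towards the best neighbor: the good neighbor with the highest average payoff
        return max(good, key=lambda cell: avg_payoff(good[cell]))
    if avoid_worst:
        bad = {cell: occ for cell, occ in occupied.items() if avg_payoff(occ) < P}
        if bad:
            # away from the worst neighbor: pick the empty cell farthest from it
            worst = min(bad, key=lambda cell: avg_payoff(bad[cell]))
            empty = [cell for cell, occ in neighbors_by_cell.items() if occ is None]
            if empty:
                return max(empty, key=lambda cell: abs(cell[0] - worst[0]) + abs(cell[1] - worst[1]))
    # otherwise move at random, as in the basic game
    return random.choice(list(neighbors_by_cell))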
I considered that cooperation had evolved when the game yielded a proportion of mutual cooperation of 0.90 or more among all interactions between agents in the last 20 cycles. Cooperation evolved in 35 of the 40 simulations:
Table 4. Number of Simulations in Which Cooperation Evolved, out of 40 Simulations with Movement Towards the Best Neighbor

Kind of Game                                       N     Cooperation Evolved
Without either Trust Game or Avoiding the Worst    10    9
Trust Game Only                                    10    8
Trust Game and Avoiding the Worst                  10    8
Avoiding the Worst Only                            10    10
Total                                              40    35
In one of the simulations that included the option of avoiding the worst neighbor, cooperation first collapsed and then recovered.
5 Conclusion
Robert Axelrod indicates three different ways of interpreting an evolutionary game:
“The evolutionary principle itself can be thought of as the consequence of any of
three different mechanisms. It could be that the more effective individuals are more
likely to survive and reproduce. This is true in biological systems and in some
economic and political systems. A second interpretation is that the players learn by
trial and error, keeping effective strategies and altering ones that turn out poorly. A
third interpretation, and the one most congenial to the study of norms, is that the
players observe each other, and those with poor performance tend to imitate the
strategies of those they see doing better” (Axelrod, 1997).
Although I agree with him that the cultural interpretation is probably the most adequate for the evolution of social phenomena, I suggest that the biological one also deserves to be analyzed. One feature that distinguishes us from other animals is our "emotional brain" (Ledoux, 1999). As Elster argues, some emotions are found universally, in every people around the world, although to be conscious of them one must name them, and different cultures do not have names for the same set of emotions.
Frequently, it is necessary to make decisions with information that is far from complete. In these situations, it is impossible to act rationally. If, in order to decide what to do, the individual always had to depend on rational calculations, sometimes he would simply do nothing. The agent in the Collective Action Game, for instance, does not know the probability of being seen defecting, does not know whether it will be punished or not, and does not calculate how much wealthier or poorer it will be after making its decision to cooperate or defect. It is simply guided by its emotions, which reflect the "wise" decisions made in the past by its ancestors. The two models indicate that cooperation could have evolved among individuals that have almost no information about the world and, thus, cannot make rational choices. In such circumstances, emotions, instead of rationality, can guide behavior. Individuals that experience the right emotions reproduce more than others with propensities to feel the wrong emotions.
In the two models, the possibility of the evolution of emotions is taken as given, and the agents' three propensities for feeling emotions (vengefulness, boldness, and gratefulness) were chosen by observing actual human emotions that undoubtedly are important in the maintenance of norms. However, since I suggested a biological interpretation of the game, I must note that the necessary ontogenetic conditions for the evolution of these emotions were not discussed. We do not know what the process of natural selection had to work on, and agent-based models cannot help in this discussion. One way of choosing the set of emotions that most likely were experienced by our ancestors, millions of years ago, would be to try to reconstruct the way our ancestors used to cooperate. Nevertheless, we do not know exactly what the environment of our ancestors was like, and the fossil record obviously is not very informative about how their minds used to work. One solution to this problem could be the analysis of data on the patterns of cooperation among apes and hunter-gatherer humans. The common characteristics found would probably have been present in our last common ancestor with the great apes, and these characteristics would be an adequate starting point for modeling the evolution of cooperation among human beings. Jonathan Turner used this kind of analysis in his discussion of the origins of human emotions (Turner, 2000).
In any case, it is very difficult to choose the right emotions to include in the models because, as Elster points out, there is no consensus about what exactly the human emotions are: "... the basic emotions identified by various writers vary enormously. There is not a single emotion that is found in all of fourteen lists of purportedly basic emotions accumulated by Andrew Ortony, Gerald Clore, and Allan Collins" (Elster, 1989).
In the Collective Action Game, the "wealth of society" is simply distributed equally among all agents. In the real world, someone or some institution must coordinate the distribution of common resources or wealth, and that distribution is itself another collective action problem. Moreover, it is rarely egalitarian. On the contrary, in the real world the distribution of wealth is the result of a bargaining process, with some people being more successful than others. Another unrealistic assumption of the two games is that all agents have the same preference structure and that in all interactions the possible payoffs are the same. The two games could be more heterogeneous in these respects.
Two emotions, vengefulness and boldness, are present in both games, the Prisoner's Dilemma Game and the Collective Action Game, and the software developed allows both games to be played simultaneously. This possibility can be explored in future work, but first the assumption that the same emotions play important roles in the two situations must be carefully analyzed. It is worth noting that the real situations modeled by the Collective Action Game can be more impersonal than the situations modeled by the Prisoner's Dilemma Game.
Bibliography
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Axelrod, R. (1997). The complexity of cooperation: agent-based models of competition and
collaboration. Princeton: Princeton University Press.
Bonabeau, E. (2002). Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences of the USA, 99 (Supp. 3), May 14, p. 7280-7287.
Castelfranchi, C. (1998). Through the minds of the agents. Journal of Artificial Societies and
Social Simulation, vol. 1, n. 1.
Dawkins, R. (1979). O gene egoísta. Belo Horizonte: Itatiaia.
Edling, C. R. (2002). Mathematics in sociology. Annual Review of Sociology, 28, p. 197-220.
Elster, J. (1989). The cement of society: a study of social order. Cambridge: Cambridge
University Press.
Elster, J. (1999). Alchemies of the mind: rationality and the emotions. Cambridge: Cambridge
University Press.
Epstein, J. M. (1998). Zones of cooperation in demographic prisoner's dilemma. Complexity, vol. 4, n. 2, p. 36-48.
Kollock, P. (1993). “An eye for an eye leaves everyone blind”: cooperation and accounting
systems. American Sociological Review. 58, December, p. 768-786.
Ledoux, J. (1999). O cérebro emocional. Rio de Janeiro: Objetiva.
Macy, M. W., Willer, R. (2002). From factors to actors: computational sociology and agent-based modeling. Annual Review of Sociology, 28, p. 143-166.
Olson, M. (1965). The logic of collective action: public goods and the theory of groups.
Cambridge: Harvard University Press.
Staller, A., Petta, P. (2001). Introducing emotions into the Computational Study of Social
Norms: A First Evaluation. Journal of Artificial Societies and Social Simulation,
vol. 4, n. 1.
Taylor, M. (1987). The possibility of cooperation. Cambridge: Cambridge University Press.
Turner, J.H. (2000). On the origins of human emotions: a sociological inquiry into the
evolution of human affect. Stanford: Stanford University Press.