
A Short Introduction to Game Theory

Heiko Hotz

Contents

  • 1 Introduction
    • 1.1 Game Theory – What is it?
    • 1.2 Game Theory – Where is it applied?
  • 2 Definitions
    • 2.1 Normal Form Games
    • 2.2 Extensive Form Games
    • 2.3 Nash Equilibrium
      • 2.3.1 Best Response
      • 2.3.2 Localizing a Nash Equilibrium in a Payoff-matrix
    • 2.4 Mixed Strategies
  • 3 Games
    • 3.1 Prisoner’s Dilemma (PD)
      • 3.1.1 Other Interesting Two-person Games
    • 3.2 The Ultimatum Game
    • 3.3 Public Good Game
    • 3.4 Rock, Paper, Scissors
  • 4 Evolutionary Game Theory
    • 4.1 Why EGT?
    • 4.2 Evolutionary Stable Strategies
      • 4.2.1 ESS and Nash Equilibrium
      • 4.2.2 The Hawk-Dove Game
      • 4.2.3 ESS of the Hawk-Dove Game
    • 4.3 The Replicator Dynamics
    • 4.4 ESS and Replicator Dynamics
  • 5 Applications
    • 5.1 Evolution of cooperation
    • 5.2 Biodiversity
  • A Mathematical Derivation
    • A.1 Normal form games
    • A.2 Nash equilibrium and best answer
    • A.3 Evolutionary stable strategies (ESS)
    • A.4 Replicator equation
    • A.5 Evolutionary stable state of the Hawk-Dove game
  • B Program Code
    • B.1 Spatial PD
    • B.2 Hawk-Dove game


Despite the deep insights he gained from game theory's applications to economics, von Neumann was mostly interested in applying his methods to politics and warfare, an interest perhaps descending from his favorite childhood game, Kriegspiel, a chess-like military simulation. He used his methods to model the Cold War interaction between the U.S. and the USSR, picturing them as two players in a zero-sum game. He sketched out a mathematical model of the conflict from which he deduced that the Allies would win, applying some of the methods of game theory to his predictions. There are many more applications in the sciences beyond those already mentioned, and in many more fields such as sociology, philosophy, psychology and cultural anthropology. It is not possible to list them all in this paper; more information can be obtained from the references at the end of this paper.

2 Definitions

I now introduce some of the basic definitions of game theory. I use a non-mathematical description as far as possible, since mathematics is not really required to understand the basic concepts of game theory. A mathematical derivation, however, is given in appendices A.1 and A.2.

2.1 Normal Form Games

A game in normal form consists of:

  1. A finite number of players.
  2. A strategy set assigned to each player (e.g. in the Prisoner's Dilemma each player has the possibility to cooperate (C) and to defect (D); thus his strategy set consists of the elements C and D).
  3. A payoff function, which assigns a certain payoff to each player depending on his strategy and the strategy of the other players (e.g. in the Prisoner's Dilemma the time each of the players has to spend in prison).

The payoff function assigns each player a certain payoff depending on his strategy and the strategy of the other players. If the number of players is limited to two and if their sets of strategies consist of only a few elements, the outcome of the payoff function can be represented in a matrix, the so-called payoff matrix, which shows the two players, their strategies and their payoffs.

Example:

Player1\Player2      L         R
       U           1, 3      2, 4
       D           1, 0      3, 3

In this example, player 1 (vertical) has two different strategies: Up (U) and Down (D). Player 2 (horizontal) also has two different strategies, namely Left (L) and Right (R). The elements of the matrix are the outcomes for the two players for playing certain strategies, i.e. supposing player 1 chooses strategy U and player 2 chooses strategy R, the outcome is (2, 4), i.e. the payoff for player 1 is 2 and for player 2 is 4.
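The matrix above can be encoded directly. The following Python sketch (the encoding is my own illustration, not part of the paper) stores the example as a dictionary keyed by strategy pairs and reads off the payoff pair:

```python
# Payoff matrix of the example above, keyed by (player 1 strategy, player 2 strategy);
# each entry is (payoff to player 1, payoff to player 2).
PAYOFFS = {
    ("U", "L"): (1, 3), ("U", "R"): (2, 4),
    ("D", "L"): (1, 0), ("D", "R"): (3, 3),
}

def payoff(s1, s2):
    """Payoff pair when player 1 plays s1 and player 2 plays s2."""
    return PAYOFFS[(s1, s2)]

print(payoff("U", "R"))  # (2, 4): player 1 receives 2, player 2 receives 4
```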

2.2 Extensive Form Games

Contrary to the normal form game, the rules of an extensive form game are described such that the agents of the game execute their moves consecutively. This game is represented by a game tree, where each node represents every possible stage of the game as it is played. There is a unique node called the initial node that represents the start of the game. Any node that has only one edge connected to it is a terminal node and represents the end of the game (and also a strategy profile). Every non-terminal node belongs to a player in the sense that it represents a stage in the game in which it is that player's move. Every edge represents a possible action that can be taken by a player. Every terminal node has a payoff for every player associated with it. These are the payoffs for every player if the combination of actions required to reach that terminal node is actually played.

Example:

Figure 1: A game in extensive form

In figure 1 the payoff for player 1 will be 2 and for player 2 will be 1, provided that player 1 plays strategy U and player 2 plays strategy D’.
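Rational play on such a game tree can be computed by backward induction. The following sketch uses an illustrative tree of my own (it is not the tree of figure 1, which is not reproduced here); leaves hold payoff pairs and inner nodes hold the player to move:

```python
# A tiny extensive form game as a nested tree, evaluated by backward induction.
# Inner nodes are (player to move, {action: subtree}); leaves are payoff pairs.
tree = ("P1", {
    "U": ("P2", {"C'": (2, 1), "D'": (0, 0)}),
    "D": ("P2", {"C'": (1, 2), "D'": (3, 1)}),
})

def backward_induction(node):
    """Return (payoffs, list of actions) of rational play from this node."""
    if isinstance(node[1], dict):           # inner node: a player chooses
        player, actions = node
        idx = 0 if player == "P1" else 1    # which payoff component the mover maximizes
        results = {a: backward_induction(sub) for a, sub in actions.items()}
        best = max(results, key=lambda a: results[a][0][idx])
        payoffs, path = results[best]
        return payoffs, [best] + path
    return node, []                          # leaf: payoff pair, no further moves

print(backward_induction(tree))  # ((2, 1), ['U', "C'"])
```

Here player 2 answers U with C' (1 beats 0) and D with C' (2 beats 1), so player 1 prefers U (payoff 2 over 1).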

2.3 Nash Equilibrium

In game theory, the Nash equilibrium (named after John Nash, who first described it) is a solution concept for games involving two or more players, in which no player has anything to gain by changing only his own strategy. If each player has chosen a strategy and no player can benefit by changing his strategy while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium. John Nash showed in 1950 that every game with a finite number of players and a finite number of strategies has at least one mixed strategy Nash equilibrium.

2.3.1 Best Response

The best response is the strategy (or strategies) which produces the most favorable immediate outcome for the current player, taking other players' strategies as given. With this definition, we can now determine the Nash equilibrium in a normal form game very easily by using the payoff matrix. The formal proof that this procedure leads to the desired result is given in appendix A.2.

2.3.2 Localizing a Nash Equilibrium in a Payoff-matrix

Let us use the payoff matrix of the Prisoner's Dilemma, which will be introduced in 3.1, to determine the Nash equilibrium:

Player1\Player2      C         D
       C           3, 3      0, 5
       D           5, 0      1, 1

The procedure is the following: first we consider the options for player 1 given a strategy of player 2, i.e. we look for the best answer to a given strategy of player 2.

If player 2 plays C, the payoff for player 1 for choosing C will be 3, for choosing D it will be 5, so we highlight his best answer, D:

Player1\Player2      C         D
       C           3, 3      0, 5
       D          [5], 0     1, 1

Now we repeat this procedure for the case that player 2 plays D, and then determine player 2's best answers to each strategy of player 1 in the same way. A cell in which both payoffs are highlighted as best answers is a Nash equilibrium; for the Prisoner's Dilemma this is (D, D).
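This highlighting procedure can be automated. The sketch below (my own illustration) marks each player's best answers cell by cell and returns the cells where both coincide:

```python
# Locate pure-strategy Nash equilibria in a two-player payoff matrix by
# checking, for every cell, whether each strategy is a best answer to the other.
PD = {  # Prisoner's Dilemma payoffs: (player 1, player 2)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def pure_nash(payoffs):
    rows = sorted({s1 for s1, _ in payoffs})
    cols = sorted({s2 for _, s2 in payoffs})
    equilibria = []
    for r in rows:
        for c in cols:
            p1, p2 = payoffs[(r, c)]
            # r must be a best answer to c, and c a best answer to r
            best1 = p1 >= max(payoffs[(r2, c)][0] for r2 in rows)
            best2 = p2 >= max(payoffs[(r, c2)][1] for c2 in cols)
            if best1 and best2:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(PD))  # [('D', 'D')]
```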


3.1 Prisoner's Dilemma (PD)

Two suspects are arrested by the police, who have insufficient evidence for a conviction, and, having separated both prisoners, an officer visits each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both stay silent, the police can sentence both prisoners to only six months in jail for a minor charge. If each betrays the other, each will receive a two-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. However, neither prisoner knows for sure what choice the other prisoner will make. So the question this dilemma poses is: How will the prisoners act?

We will use the following abbreviations: to testify means to betray the other suspect and thus to defect (D); to remain silent means to cooperate (C) with the other suspect. And for the sake of clarity, we want to use positive numbers in the payoff matrix.

Player1\Player2        C           D
       C           R=3, R=3    S=0, T=5
       D           T=5, S=0    P=1, P=1

  • R is a Reward for mutual coopera- tion. Therefore, if both players co- operate then both receive a reward of 3 points.
  • If one player defects and the other cooperates then the defector receives the Temptation to defect payoff (5 in this case) and the other player (the cooperator) receives the Sucker payoff (0 in this case).
  • If both players defect then they both receive the Punishment for mutual defection payoff (1 in this case).

As we have already seen, the logical move for both players is defection (D). The dilemma lies in the fact that the best result for player 1 and player 2 as a group (R = 3 for both) cannot be achieved.

In defining a PD, certain conditions have to hold. The values we used above to demonstrate the game are not the only values that could have been used, but they do adhere to the conditions listed below. Firstly, the order of the payoffs is important. The best a player can do is T (temptation to defect). The worst a player can do is to get the sucker payoff, S. If the two players cooperate then the reward for that mutual cooperation, R, should be better than the punishment for mutual defection, P. Therefore, the following must hold:

T > R > P > S.

For repeated interactions, a second condition is additionally required: players should not be allowed to get out of the dilemma by taking it in turns to exploit each other. Or, to be a little more pedantic, the players should not play the game so that they end up with half the time being exploited and the other half of the time exploiting their opponent. In other words, an even chance of being exploited or doing the exploiting is not as good an outcome as both players mutually cooperating. Therefore, the reward for mutual cooperation should be greater than the average of the payoffs for the temptation and the sucker. That is, the following must hold:

R > (S + T)/2
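Both defining conditions are easy to check mechanically; a minimal sketch:

```python
def is_prisoners_dilemma(T, R, P, S):
    """Check the two defining conditions of a (repeated) Prisoner's Dilemma."""
    ordering = T > R > P > S          # temptation > reward > punishment > sucker
    no_alternation = R > (S + T) / 2  # mutual cooperation beats taking turns exploiting
    return ordering and no_alternation

print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))  # True: the values used above
print(is_prisoners_dilemma(T=3, R=2, P=0, S=1))  # False: this is the Chicken ordering
```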

3.1.1 Other Interesting Two-person Games

Depending on the order of R, T, S, and P, we can have different games. Most are trivial, but two games stand out:

  • Chicken (T > R > S > P )

Player1\Player2        C           D
       C           R=2, R=2    S=1, T=3
       D           T=3, S=1    P=0, P=0

Example: Two drivers with something to prove drive at each other on a narrow road. The first to swerve loses face among his peers (the chicken). If neither swerves, however, the obvious worst case will occur.

  • Stag Hunt (R > T > P > S)

Player1\Player2        C           D
       C           R=3, R=3    S=0, T=2
       D           T=2, S=0    P=1, P=1

Example: Two hunters can either jointly hunt a stag or individually hunt a rabbit. Hunting stags is quite challenging and requires mutual cooperation. Both need to stay in position and not be tempted by a running rabbit. Hunting stags is most beneficial for society but requires a lot of trust among its members. The dilemma exists because each hunter fears the other's defection; the game is therefore also called the trust dilemma.

3.2 The Ultimatum Game

Imagine you and a friend of yours are walking down the street, when suddenly a stranger stops you and wants to play a game with you: he offers you $100, and you have to agree on how to split this money. You, as the proposer, make an offer to your friend, the responder. If he accepts your offer, the deal goes ahead. If your friend rejects, neither player gets anything: the stranger takes back his money and the game is over.

Obviously, rational responders should accept even the smallest positive offer, since the alternative is getting nothing. Proposers, therefore, should be able to claim almost the entire sum. In a large number of human studies, however, conducted with different incentives in different countries, the majority of proposers offer 40 to 50% of the total sum, and about half of all responders reject offers below 30%.
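The rational-play argument can be made concrete with a small sketch. The acceptance threshold is a hypothetical parameter I introduce to contrast the rational prediction (threshold 0) with the empirical behaviour described above:

```python
def accepts(offer_fraction, threshold=0.0):
    """A perfectly rational responder (threshold 0) accepts any positive offer;
    empirical responders behave as if the threshold were around 0.3 of the total."""
    return offer_fraction > threshold

def proposer_take(total=100, threshold=0.0):
    """Find the smallest accepted offer in whole dollars; return (proposer share, offer)."""
    for offer in range(1, total + 1):
        if accepts(offer / total, threshold):
            return total - offer, offer
    return 0, 0  # every offer rejected

print(proposer_take())               # (99, 1): rational play leaves the responder almost nothing
print(proposer_take(threshold=0.3))  # (69, 31): a human-like responder forces a larger offer
```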

3.3 Public Good Game

A group of 4 people are given $200 each to participate in a group investment project. They are told that they can keep any money they do not invest. The rules of the game are that every $1 invested will yield $2, but these proceeds are distributed equally to all group members. If everyone invested their full $200, each would get $400. However, if only one person invested, that "sucker" would take home a mere $100. Thus, the assumed Nash equilibrium could be the combination of strategies in which no one invests any money, and we can show that this is indeed the Nash equilibrium. We will not display this game in a payoff matrix, since each player's strategy set is far too large (the strategy s_n is given by the amount of money that player n wants to contribute; e.g. s_1 = 10 means that player 1 invests $10). Nevertheless this is a game in normal form and therefore it has a payoff function for each player. The payoff function for, say, player 1 is given by

P_1 = 2 · (s_1 + s_2 + s_3 + s_4) / 4 − s_1
    = 2 · (s_2 + s_3 + s_4) / 4 − 0.5 · s_1

But this means that every investment s_1 of player 1 diminishes his payoff. Therefore, a rational player will choose s_1 = 0, and since the same reasoning applies to every player, no one invests: this is the Nash equilibrium.
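A quick numerical check of this payoff function (a sketch; the $200 each player keeps back is omitted, exactly as in the formula above, so only differences matter):

```python
def P1(s):
    """Payoff function for player 1 from the text: share of proceeds minus own investment.
    s = [s1, s2, s3, s4] are the invested amounts."""
    return 2.0 * sum(s) / 4 - s[0]

# Every dollar player 1 invests returns only 2/4 = 0.5 dollars to him:
print(P1([0, 100, 100, 100]))    # 150.0
print(P1([100, 100, 100, 100]))  # 100.0: investing 100 lowered his payoff by 0.5 * 100
```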


Another very prominent application is the quest for the origins and evolution of cooperation. The effects of population structures on the performance of behavioral strategies became apparent only in recent years and mark the advent of an intriguing link between apparently unrelated disciplines. EGT in structured populations reveals critical phase transitions that fall into the universality class of directed percolation on square lattices. Together with EGT as an extension of game theory, new concepts were developed to investigate and describe these very problems. I will now introduce two of them, which are crucial to describe EGT: evolutionary stable strategies and the replicator dynamics. The first is used to study the stability of populations; the second describes the adoption of strategies.

4.2 Evolutionary Stable Strategies

An evolutionary stable strategy (ESS) is a strategy which, if adopted by a population, cannot be invaded by any competing alternative strategy. The concept is an equilibrium refinement of the Nash equilibrium. The definition of an ESS was introduced by John Maynard Smith and George R. Price in 1973, based on W. D. Hamilton's (1967) concept of an unbeatable strategy in sex ratios. The idea can be traced back to Ronald Fisher (1930) and Charles Darwin (1859).

4.2.1 ESS and Nash Equilibrium

A Nash equilibrium is a strategy in a game such that if all players adopt it, no player will benefit by switching to any alternative strategy. If a player choosing strategy μ in a population where all other players play strategy σ receives a payoff of E(μ, σ), then strategy σ is a Nash equilibrium if E(σ, σ) ≥ E(μ, σ), i.e. σ does just as well or better playing against σ than any mutant with strategy μ does playing against σ. This equilibrium definition allows for the possibility that strategy μ is a neutral alternative to σ (it scores equally, but not better). A Nash equilibrium is presumed to be stable even if μ scores equally, on the assumption that players do not play μ.

Maynard Smith and Price (Maynard Smith & Price, 1973; Maynard Smith, 1982) specify two conditions for a strategy σ to be an ESS. Either

  1. E(σ, σ) > E(μ, σ), or
  2. E(σ, σ) = E(μ, σ) and E(σ, μ) > E(μ, μ)

must hold for all μ ≠ σ. In other words, a strategy σ is an ESS if one of two conditions holds:

  1. σ does better playing against σ than any mutant does playing against σ, or
  2. some mutant does just as well playing against σ as σ does, but σ does better playing against the mutant than the mutant does against itself.

A derivation of ESS is given in appendix A.3.
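The two conditions can be written down directly; here they are applied to the Prisoner's Dilemma payoffs from section 3.1 (the encoding is my own illustration):

```python
def is_ess(E, sigma, strategies):
    """Maynard Smith & Price: sigma is an ESS if, for every mutant mu != sigma,
    either E(sigma, sigma) > E(mu, sigma), or
    E(sigma, sigma) == E(mu, sigma) and E(sigma, mu) > E(mu, mu)."""
    for mu in strategies:
        if mu == sigma:
            continue
        strict = E(sigma, sigma) > E(mu, sigma)
        tie_stable = E(sigma, sigma) == E(mu, sigma) and E(sigma, mu) > E(mu, mu)
        if not (strict or tie_stable):
            return False
    return True

# Prisoner's Dilemma: E(a, b) is the payoff of playing a against b.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
E = lambda a, b: pd[(a, b)]
print(is_ess(E, "D", ["C", "D"]))  # True: a defecting population cannot be invaded
print(is_ess(E, "C", ["C", "D"]))  # False: defectors invade a cooperating population
```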

4.2.2 The Hawk-Dove Game

As an example of an ESS, we consider the Hawk-Dove game. In this game, two individuals compete for a resource of a fixed value V (the value V of the resource corresponds to an increase in the Darwinian fitness of the individual who obtains the resource). Each individual follows exactly one of the two strategies described below:

4 EVOLUTIONARY GAME THEORY 10

  • Hawk: Initiate aggressive behaviour, not stopping until injured or until one’s opponent backs down.
  • Dove: Retreat immediately if one’s opponent initiates aggressive behaviour.

If we assume that

  1. whenever two individuals both initiate aggressive behaviour, conflict eventually results and the two individuals are equally likely to be injured,
  2. the cost of the conflict reduces individual fitness by some constant value C,
  3. when a hawk meets a dove, the dove immediately retreats and the hawk obtains the resource, and
  4. when two doves meet, the resource is shared equally between them,

the fitness payoffs for the Hawk-Dove game can be summarized according to the following matrix:

                  Hawk                 Dove
Hawk     (V − C)/2, (V − C)/2         V, 0
Dove             0, V              V/2, V/2

One can readily confirm that, for the Hawk-Dove game, the strategy Dove is not evolutionarily stable, because a pure population of Doves can be invaded by a Hawk mutant. If the value V of the resource is greater than the cost C of injury (so that it is worth risking injury in order to obtain the resource), then the strategy Hawk is evolutionarily stable. In the case where the value of the resource is less than the cost of injury, there is no ESS if individuals are restricted to pure strategies, although there is an ESS if players may use mixed strategies.

4.2.3 ESS of the Hawk-Dove Game

Clearly, Dove is not a stable strategy: since V/2 = E(D, D) < E(H, D) = V, a population of doves can be invaded by hawks. Because E(H, H) = (V − C)/2 and E(D, H) = 0, H is an ESS if V > C. But what if V < C? Then neither H nor D is an ESS. But we could ask: what would happen to a population of individuals which are able to play mixed strategies? Maybe there exists a mixed strategy which is evolutionary stable. Consider a population consisting of a species which is able to play a mixed strategy I, i.e. sometimes Hawk and sometimes Dove, with probabilities p and 1 − p respectively. For a mixed ESS I to exist, the following must hold:

E(D, I) = E(H, I) = E(I, I)

Suppose that there exists an ESS in which H and D, each played with positive probability, have different payoffs. Then it is worthwhile for the player to increase the weight given to the strategy with the higher payoff, since this will increase expected utility. But this means that the original mixed strategy was not a best response and hence not part of an ESS, which is a contradiction. Therefore, in an ESS all strategies played with positive probability must yield the same payoff. Thus:

E(H, I) = E(D, I)

⇔ p E(H, H) + (1 − p) E(H, D) = p E(D, H) + (1 − p) E(D, D)
⇔ p (V − C)/2 + (1 − p) V = (1 − p) V/2
⇔ p = V/C
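The result p = V/C can be verified numerically: at exactly that mixture, Hawk and Dove earn the same payoff against I (the values V = 2, C = 3 below are illustrative, chosen so that V < C):

```python
V, C = 2.0, 3.0  # illustrative values with V < C

def E(a, b):
    """Payoff of pure strategy a against pure strategy b in the Hawk-Dove game."""
    return {("H", "H"): (V - C) / 2, ("H", "D"): V,
            ("D", "H"): 0.0, ("D", "D"): V / 2}[(a, b)]

def against_I(a, p):
    """Payoff of pure strategy a against the mixed strategy I = (p Hawk, 1 - p Dove)."""
    return p * E(a, "H") + (1 - p) * E(a, "D")

p = V / C
print(abs(against_I("H", p) - against_I("D", p)) < 1e-12)  # True: payoffs coincide
```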


The connection between ESSs and stable states under an evolutionary dynamical model is weakened further if we do not model the dynamics by the replicator dynamics. In 5.1 we use a local interaction model in which each individual plays the Prisoner's Dilemma with his or her neighbors. Nowak and May, using a spatial model in which local interactions occur between individuals occupying neighboring nodes on a square lattice, showed that stable population states for the Prisoner's Dilemma depend upon the specific form of the payoff matrix.

5 Applications

5.1 Evolution of cooperation

As mentioned before, the evolution of cooperation is a fundamental problem in biology, because unselfish, altruistic actions apparently contradict Darwinian selection. Nevertheless, cooperation is abundant in nature, ranging from microbial interactions to human behavior. In particular, cooperation has given rise to major transitions in the history of life. Game theory, together with its extensions to an evolutionary context, has become an invaluable tool to address the evolution of cooperation. The most prominent mechanisms of cooperation are direct and indirect reciprocity and spatial structure. The mechanisms of reciprocity can be investigated very well with the Ultimatum game and also with the Public Good game. But the prime example for investigating spatially structured populations is the Prisoner's Dilemma.

Investigations of spatially extended systems have a long tradition in condensed matter physics. Among the most important features of spatially extended systems is the emergence of phase transitions. Their analysis can be traced back to the Ising model. The application of methods developed in statistical mechanics to interactions in spatially structured populations has turned out to be very fruitful. Interesting parallels between non-equilibrium phase transitions and spatial evolutionary game theory have added another dimension to the concept of universality classes.

We have already seen that the Nash equilibrium of the PD is to defect. To overcome this dilemma, we consider spatially structured populations where individuals interact and compete only within a limited neighborhood. Such limited local interactions enable cooperators to form clusters, and thus individuals along the boundary can outweigh their losses against defectors by gains from interactions within the cluster. Results for different population structures in the PD are discussed and related to condensed matter physics.

This problem has been investigated by Martin Nowak (Nature 359, pp. 826-829, 1992). I programmed this scenario based on the investigations of Nowak. The program is written in NetLogo; the program code is given in appendix B.
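Since the NetLogo source in appendix B is only partially reproduced here, a compact Python/NumPy sketch of a Nowak-May style lattice may be easier to follow. The payoffs, neighborhood, and update rule follow the usual simplified scheme (R = 1, T = b, S = P = 0, Moore neighborhood with self-interaction), not necessarily the exact program; the lattice size, b = 1.85, and the initial cooperator fraction are illustrative choices:

```python
import numpy as np

# Minimal sketch of a Nowak-May style spatial Prisoner's Dilemma:
# each site plays C (1) or D (0) against its 3x3 neighborhood (including itself),
# then copies the strategy of the highest-scoring site in that neighborhood.
rng = np.random.default_rng(0)
N, b = 20, 1.85                                 # lattice size and temptation payoff
grid = (rng.random((N, N)) < 0.9).astype(int)   # start with roughly 90% cooperators

def neighborhoods(a):
    """All nine shifts of the 3x3 Moore neighborhood (with wraparound)."""
    return [np.roll(np.roll(a, i, 0), j, 1) for i in (-1, 0, 1) for j in (-1, 0, 1)]

def step(grid):
    # Score: a cooperator earns 1 per cooperating neighbor; a defector earns b.
    coop_neighbors = sum(neighborhoods(grid))
    score = np.where(grid == 1, coop_neighbors, b * coop_neighbors)
    # Each site adopts the strategy of the best-scoring site in its neighborhood.
    best_score = np.maximum.reduce(neighborhoods(score))
    new = grid.copy()
    for shifted_grid, shifted_score in zip(neighborhoods(grid), neighborhoods(score)):
        new = np.where(shifted_score == best_score, shifted_grid, new)
    return new

for _ in range(30):
    grid = step(grid)
print("cooperator fraction:", grid.mean())
```

Clusters of cooperators can persist along exactly the lines described above: sites inside a cluster earn enough from mutual cooperation to outweigh losses at the boundary.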

5.2 Biodiversity

One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors,

5 APPLICATIONS 13

where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure the coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Kerr et al. tested these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. They found that diversity is rapidly lost in their experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized. There exist three strains of Escherichia coli bacteria:

  • Type A releases toxic colicin and produces, for its own protection, an immunity protein.
  • Type B produces the immunity pro- tein only.
  • Type C produces neither toxin nor immunity.

The production of the toxic colicin and the immunity protein incurs higher costs. Thus the strain which produces the toxin colicin is superior to the strain which has no immunity protein (A beats C). The strain with no immunity protein is superior to the strain with the immunity protein only, since it has lower costs to reproduce (C beats B). The same holds for the strain with the immunity protein but no production of colicin compared to the strain which produces colicin (B beats A).
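The cyclic dominance A beats C, C beats B, B beats A has the same structure as rock-paper-scissors. A replicator-dynamics sketch with an illustrative zero-sum payoff matrix (the +1/-1 values are not measured values for the E. coli strains) shows the three types cycling rather than one type taking over:

```python
# Cyclic dominance (rock-paper-scissors structure): 0 beats 2, 2 beats 1, 1 beats 0.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def step(x, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i ((Ax)_i - x.Ax)."""
    f = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    avg = sum(x[i] * f[i] for i in range(3))
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(3)]

x = [0.5, 0.3, 0.2]
for _ in range(1000):
    x = step(x)
print([round(v, 3) for v in x])  # all three types survive and keep cycling
```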

In figure 2 one can see that on a static plate, which is an environment in which dispersal and interaction are primarily local, the three strains coexist.

Figure 2: Escherichia coli on a static plate

green: resistant strain
red: colicin-producing strain
blue: sensitive strain

Figure 3: Escherichia coli in a flask

In figure 3, where the strains are held in a flask, a well-mixed environment in which dispersal and interaction are not exclusively local, only the strain that produces just the immunity protein survives.

A Mathematical Derivation

A.4 Replicator equation

The fitness of strategy i in the population is given by (Ax)_i, and the average fitness of the entire population is given by x^T Ax. Thus, the relative fitness of strategy i is given by

(Ax)_i / (x^T Ax)

Let us assume that the proportion of the population following each strategy in the next generation is related to the proportions in the current generation according to the rule

x_i(t + Δt) = (1 − Δt) x_i(t) + Δt x_i(t) (Ax)_i / (x^T Ax)

for x^T Ax ≠ 0. Thus

x_i(t + Δt) − x_i(t) = x_i(t) [(Ax)_i − x^T Ax] / (x^T Ax) Δt

For Δt → 0 this yields the differential equation

ẋ_i = x_i [(Ax)_i − x^T Ax] / (x^T Ax)    (2)

for i = 1, ..., n, with ẋ_i denoting the derivative of x_i with respect to time. The simplified equation

ẋ_i = x_i [(Ax)_i − x^T Ax]    (3)

has the same trajectories as (2), since every solution x(t) of (2) delivers, according to the time transformation

t(s) = ∫_{s_0}^{s} x(t)^T A x(t) dt,

a solution y(s) := x(t(s)) of (3). Equation (3) is called the replicator equation.

A.5 Evolutionary stable state of the Hawk-Dove game

We want to show that the replicator dynamics and the ESS analysis yield the same result for the Hawk-Dove game.

The replicator equation is given by

ẋ_i = x_i [(Ax)_i − x^T Ax]

Let us denote the population share of hawks x_1 by p, so the share of doves x_2 is 1 − p. The first term Ax gives

Ax = ( p (V − C)/2 + V (1 − p) ,  V (1 − p)/2 )

Since Hawk is denoted by x_1, we will use the first component of the vector Ax. The second term x^T Ax delivers

x^T Ax = p^2 (V − C)/2 + p V (1 − p) + V (1 − p)^2 / 2

Thus:

ṗ = p [ p (V − C)/2 + V (1 − p) − p^2 (V − C)/2 − p V (1 − p) − V (1 − p)^2 / 2 ]
  = p [ (C/2) p^2 − ((V + C)/2) p + V/2 ]
  = (C/2) p [ p^2 − ((V + C)/C) p + V/C ]

For the population to be evolutionary stable, the population must not change in time, so we set the change per time to zero:

ṗ = 0  ⇒  p [ p^2 − ((V + C)/C) p + V/C ] = 0

This is certainly true for p = 0, the trivial solution. Two other solutions can be obtained by solving the quadratic in the brackets:

p^2 − ((V + C)/C) p + V/C = 0

This gives

p_{1,2} = (V + C)/(2C) ± sqrt( (V^2 + 2VC + C^2)/(4C^2) − V/C )
        = (V + C)/(2C) ± sqrt( (V^2 − 2VC + C^2)/(4C^2) )
        = (V + C)/(2C) ± (V − C)/(2C)


Thus:

p_1 = 1,  p_2 = V/C

p_1 = 1 (a pure hawk population) is another trivial solution, so the only relevant result is p_2 = V/C.
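The same value can be recovered dynamically: integrating the replicator equation (3) for the Hawk-Dove game drives the hawk share p toward V/C. The values V = 2, C = 3, the starting point, and the step size are illustrative:

```python
V, C = 2.0, 3.0                        # illustrative values with V < C
A = [[(V - C) / 2, V],                 # Hawk-Dove payoff matrix (rows: Hawk, Dove)
     [0.0, V / 2]]

def step(x, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i ((Ax)_i - x.Ax)."""
    Ax = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]
    avg = x[0] * Ax[0] + x[1] * Ax[1]
    return [x[i] + dt * x[i] * (Ax[i] - avg) for i in range(2)]

x = [0.1, 0.9]                         # start with 10% hawks
for _ in range(20000):
    x = step(x)
print(round(x[0], 3))                  # ~0.667, i.e. p = V/C
```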

B Program Code

B.1 Spatial PD

  ]
  [ ; if this patch is cooperator:
    ;   if neighbor with highest score is defector: set patch to defector (z=1, yellow)
    ;   else: stay cooperator
    ifelse (z_prev-of neighbor_h) = 1
      [set pcolor yellow set z 1]
      [set pcolor blue]
  ]
  set d delta * (2.0 * (random-float 1.0) - 1.0)
]
[ ; if own score is the highest:
  ; if cooperator: set color blue, otherwise red
  ifelse z = 0 [set pcolor blue][set pcolor red]
]
]
end

to perturb
  if mouse-down? [
    ask patch-at mouse-xcor mouse-ycor [
      set z 1 - z
      ifelse z = 0 [set pcolor blue][set pcolor red]
    ]
    ; need to wait for a while; otherwise the procedure is run a few times after a mouse click
    wait 0.
  ]
end

to movie_start
  movie-start "out.mov"
  set movie_on? true
end

to movie_stop
  movie-close
  set movie_on? false
end


B.2 Hawk-Dove game

breed [hawks hawk]
breed [doves dove]
globals [deltaD deltaH p total counter decrement reproduce_limit]
turtles-own [energy]

to setup
  ca
  set-default-shape hawks "hawk"
  set-default-shape doves "butterfly"
  createH n_hawks
  createD n_doves
  ask turtles [set energy random-float 10.0]
  set reproduce_limit 11.
  set decrement 0.
end

to go
  ask turtles [
    move
    fight
    reproduce
  ]
  while [count turtles > 600] [ask one-of turtles [die]]
  do-plot
end

to createH [num_hawks]
  create-custom-hawks num_hawks [
    set color red
    set size 1.
    setxy random-xcor random-ycor
  ]
end

to createD [num_doves]
  create-custom-doves num_doves [
    set color white
    set size 1.
    setxy random-xcor random-ycor
  ]
end

to fight
  ifelse (is-hawk? self) [
    if ((count other-hawks-here = 1) and (count other-doves-here = 0))
      [set energy (energy + 0.5 * (V - C))]
    if ((count other-hawks-here = 0) and (count other-doves-here = 1))
      [set energy (energy + V)]
  ]
  [
    if ((count other-hawks-here = 0) and (count other-doves-here = 1))