Evolutionarily stable strategy

Evolutionarily stable strategy
A solution concept in game theory

Relationship
  Subset of: Nash equilibrium
  Superset of: Stochastically stable equilibrium, Stable Strong Nash equilibrium
  Intersects with: Subgame perfect equilibrium, Trembling hand perfect equilibrium, Perfect Bayesian equilibrium

Significance
  Proposed by: John Maynard Smith and George R. Price
  Used for: Biological modeling and Evolutionary game theory
  Example: Hawk-dove

An evolutionarily stable strategy (ESS) is a strategy (or set of strategies) that, once adopted by a population in adaptation to a specific environment, cannot be displaced by an alternative strategy (or set of strategies), however novel or initially rare. Introduced by John Maynard Smith and George R. Price in 1972/3,[1][2] it is an important concept in behavioural ecology, evolutionary psychology, mathematical game theory and economics, with applications in other fields such as anthropology, philosophy and political science.

In game-theoretical terms, an ESS is an equilibrium refinement of the Nash equilibrium, being a Nash equilibrium that is also "evolutionarily stable." Thus, once fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from replacing it (although this does not preclude the possibility that a better strategy, or set of strategies, will emerge in response to selective pressures resulting from environmental change).

History

Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973 Nature paper.[2] Peer review of the paper for Nature took so long that it was preceded by a 1972 essay by Maynard Smith in a book of essays titled On Evolution.[1] The 1972 essay is sometimes cited instead of the 1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology.[3] Maynard Smith explained the concept further in his 1982 book Evolution and the Theory of Games,[4] and these works are sometimes cited instead. In fact, the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar with it.

Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article for publication, he offered to add Price as co-author.

The concept grew out of R. H. MacArthur's[5] and W. D. Hamilton's[6] work on sex ratios, itself derived from Fisher's principle, and especially from Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the 1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of game theory to the evolution of behaviour.[7]


Motivation

The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies.

Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the game. They reproduce and are subject to the forces of natural selection, with the payoffs of the game representing reproductive success (biological fitness). It is imagined that alternative strategies of the game occasionally occur, via a process like mutation. To be an ESS, a strategy must be resistant to these alternatives.

Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.

Nash equilibrium

An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any alternative strategy. In a two-player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two-player game if and only if, for both players and for any strategy T:

E(S,S) ≥ E(T,S)

In this definition, a strategy T ≠ S can be a neutral alternative to S (scoring equally well, but not better). A Nash equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive for players to adopt T instead of S. This fact represents the point of departure of the ESS.
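
As a concrete illustration (not part of the original formulation), the symmetric Nash condition can be checked mechanically for a small game. The following is a minimal Python sketch; the payoff dictionary, strategy labels and function name are illustrative choices, not notation from the cited sources.

    # Minimal sketch: checking the symmetric Nash condition E(S,S) >= E(T,S).
    def is_symmetric_nash(S, strategies, E):
        """Return True if (S, S) satisfies E(S,S) >= E(T,S) for every strategy T."""
        return all(E[(S, S)] >= E[(T, S)] for T in strategies)

    # Example payoffs: the Prisoner's Dilemma discussed later in this article.
    strategies = ["Cooperate", "Defect"]
    E = {("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 1,
         ("Defect", "Cooperate"): 4,    ("Defect", "Defect"): 2}

    print(is_symmetric_nash("Defect", strategies, E))     # True
    print(is_symmetric_nash("Cooperate", strategies, E))  # False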

Maynard Smith and Price[2] specify two conditions for a strategy S to be an ESS. For all T ≠ S, either

  1. E(S,S) > E(T,S), or
  2. E(S,S) = E(T,S) and E(S,T) > E(T,T)

The first condition is sometimes called a strict Nash equilibrium.[9] The second is sometimes called "Maynard Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff against strategy S, the population of players who continue to play strategy S has an advantage when playing against T.
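
The two conditions can likewise be written as a short test. The Python sketch below uses the same illustrative payoff-dictionary convention as the sketch above (E[(row, column)] is the payoff to the row strategy); the function name is a made-up helper, not terminology from the sources.

    def is_ess_maynard_smith(S, strategies, E):
        """True if, for every T != S, either E(S,S) > E(T,S),
        or E(S,S) == E(T,S) and E(S,T) > E(T,T)."""
        for T in strategies:
            if T == S:
                continue
            if E[(S, S)] > E[(T, S)]:
                continue                       # first condition: strict Nash
            if E[(S, S)] == E[(T, S)] and E[(S, T)] > E[(T, T)]:
                continue                       # Maynard Smith's second condition
            return False
        return True

    # Applied to the Prisoner's Dilemma payoffs: Defect is an ESS.
    strategies = ["Cooperate", "Defect"]
    E = {("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 1,
         ("Defect", "Cooperate"): 4,    ("Defect", "Defect"): 2}
    print(is_ess_maynard_smith("Defect", strategies, E))   # True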

There is also an alternative, stronger definition of ESS, due to Thomas.[10] This places a different emphasis on the role of the Nash equilibrium concept in the ESS concept. Following the terminology given in the first definition above, this definition requires that, for all T ≠ S:

  1. E(S,S) ≥ E(T,S), and
  2. E(S,T) > E(T,T)

In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example, each pure strategy in a pure coordination game (in which the players are rewarded only for choosing the same strategy) is an ESS by the first definition but not the second.

In words: the first player's payoff when both players play strategy S is at least as high as their payoff when they alone switch to another strategy T while the opponent keeps playing S, and the first player's payoff when only the opponent switches to T is strictly higher than their payoff when both players switch to T.

This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a natural definition of related concepts such as a weak ESS or an evolutionarily stable set.[10]
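
To see how the two definitions can come apart, the sketch below applies Thomas's conditions to an illustrative pure coordination game, in which the players score only by choosing the same action; the game, its payoffs and the helper name are chosen for illustration and do not come from the cited sources.

    def is_ess_thomas(S, strategies, E):
        """Thomas's conditions: E(S,S) >= E(T,S) and E(S,T) > E(T,T) for all T != S."""
        return all(E[(S, S)] >= E[(T, S)] and E[(S, T)] > E[(T, T)]
                   for T in strategies if T != S)

    strategies = ["Left", "Right"]
    E = {("Left", "Left"): 1,  ("Left", "Right"): 0,
         ("Right", "Left"): 0, ("Right", "Right"): 1}

    # Left is a strict Nash equilibrium (E(Left,Left)=1 > E(Right,Left)=0), so it
    # satisfies Maynard Smith's first condition, but Thomas's second condition
    # fails because E(Left,Right)=0 is not greater than E(Right,Right)=1.
    print(is_ess_thomas("Left", strategies, E))   # False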

Examples of differences between Nash equilibria and ESSes

             Cooperate   Defect
Cooperate    3, 3        1, 4
Defect       4, 1        2, 2
Prisoner's Dilemma

        A       B
A       2, 2    1, 2
B       2, 1    2, 2
Harm thy neighbor

In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the prisoner's dilemma there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.

Some games may have Nash equilibria that are not ESSes. For example, in harm thy neighbor (whose payoff matrix is shown here) both (A, A) and (B, B) are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a strong Nash). A is not an ESS: B can neutrally invade a population of A strategists and then predominate, because B scores higher against B than A does against B. This failure is captured by Maynard Smith's second condition: E(A, A) = E(B, A), but it is not the case that E(A, B) > E(B, B).

        C       D
C       2, 2    1, 2
D       2, 1    0, 0
Harm everyone

          Swerve     Stay
Swerve    0, 0       −1, +1
Stay      +1, −1     −20, −20
Chicken

Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than D scores against itself. So although E(C, C) = E(D, C), it is also the case that E(C, D) > E(D, D). As a result, C is an ESS.

Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies is an ESS. Consider the game of Chicken. There are two pure strategy Nash equilibria in this game, (Swerve, Stay) and (Stay, Swerve). However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay is an ESS. There is a third Nash equilibrium, a mixed strategy, which is an ESS for this game (see Hawk-dove game and Best response for explanation).
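
For the Chicken payoffs shown above, the mixed ESS can be found by requiring that Swerve and Stay earn equal expected payoff against the mixture. The short Python sketch below performs this calculation; the function and parameter names are illustrative, and the payoffs are those from the table.

    from fractions import Fraction

    def mixed_ess_stay_probability(swerve_vs_swerve=0, swerve_vs_stay=-1,
                                   stay_vs_swerve=1, stay_vs_stay=-20):
        # Solve  p*swerve_vs_stay + (1-p)*swerve_vs_swerve
        #     == p*stay_vs_stay   + (1-p)*stay_vs_swerve   for p (probability of Stay).
        numerator = Fraction(stay_vs_swerve - swerve_vs_swerve)
        denominator = numerator - Fraction(stay_vs_stay - swerve_vs_stay)
        return numerator / denominator

    print(mixed_ess_stay_probability())   # 1/20: play Stay 5% of the time, Swerve 95%

With the payoffs in the table, the mixture therefore plays Stay one time in twenty and Swerve the rest of the time.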

This last example points to an important difference between Nash equilibria and ESSes. Nash equilibria are defined on strategy sets (a specification of a strategy for each player), while ESSes are defined in terms of strategies themselves. The equilibria defined by ESSes must always be symmetric, and thus have fewer equilibrium points.

Vs. evolutionarily stable state

In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state are closely linked but describe different situations.

In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade.[4] Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of classical game theory.

In an evolutionarily stable state, a population's genetic composition is restored by selection after a disturbance, provided the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population: the population returns to its original strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical systems theory, or evolutionary game theory. This is now called convergent stability.[11]
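
One simple way to picture convergent stability is to perturb a population away from its stable mixture and let selection act. The Python sketch below runs discrete-time replicator dynamics on the Chicken game from the previous section; the step size, number of steps and starting frequency are arbitrary illustrative choices, not part of the cited definitions.

    def replicator_step(p_stay, dt=0.01):
        """One Euler step of replicator dynamics for the Chicken payoffs above."""
        payoff_stay   = p_stay * (-20) + (1 - p_stay) * (+1)
        payoff_swerve = p_stay * (-1)  + (1 - p_stay) * 0
        mean_payoff = p_stay * payoff_stay + (1 - p_stay) * payoff_swerve
        return p_stay + dt * p_stay * (payoff_stay - mean_payoff)

    p = 0.30                    # perturb the population well above the stable 5% Stay
    for _ in range(5000):
        p = replicator_step(p)
    print(round(p, 3))          # approaches 0.05: selection restores the stable state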

B. Thomas (1984) applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS.[12]

Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically monomorphic or polymorphic.[4]

Stochastic ESS

In the classic definition of an ESS, no mutant strategy can invade. In finite populations, any mutant could in principle invade, albeit at low probability, implying that no ESS can exist in this strict sense. In a finite population, an ESS can instead be defined as a strategy which, should it be invaded by a new mutant strategy with probability p, would be able to counterinvade from a single starting individual with probability greater than p, as illustrated by the evolution of bet-hedging.[13]

Prisoner's dilemma

             Cooperate   Defect
Cooperate    3, 3        1, 4
Defect       4, 1        2, 2
Prisoner's Dilemma

A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would collectively be better off if they could play Cooperate, but since Defect fares better, each individual player has an incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies (Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each possible history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.

Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and Defect with Defect.

If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect, so the mutant's share of the population is kept small. Tit for Tat is therefore an ESS with respect to these two strategies alone. On the other hand, an island of Always Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of them.[14] If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the selective pressure is against Always Cooperate and in favour of Tit-for-Tat. This is because cooperating yields a lower payoff than defecting when the opponent defects.
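
These claims can be checked directly by computing payoffs over a fixed number of rounds, as in the hedged Python sketch below; the strategy labels, round count and helper functions are illustrative choices, and the one-shot payoffs are those from the table above.

    # One-shot Prisoner's Dilemma payoffs to the row player (from the table above).
    ONE_SHOT = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 4, ("D", "D"): 2}

    def move(strategy, opponent_last):
        if strategy == "AllD":
            return "D"
        if strategy == "AllC":
            return "C"
        return opponent_last or "C"       # Tit for Tat: copy the last move, cooperate first

    def iterated_payoff(s1, s2, rounds=10):
        """Total payoff to s1 when s1 plays s2 for a fixed number of rounds."""
        last1 = last2 = None
        total = 0
        for _ in range(rounds):
            m1, m2 = move(s1, last2), move(s2, last1)
            total += ONE_SHOT[(m1, m2)]
            last1, last2 = m1, m2
        return total

    E = iterated_payoff
    # Tit for Tat resists Always Defect: E(TFT, TFT) = 30 > E(AllD, TFT) = 22.
    print(E("TFT", "TFT"), E("AllD", "TFT"))
    # Always Cooperate is a neutral mutant: E(TFT, TFT) == E(AllC, TFT) and
    # E(TFT, AllC) == E(AllC, AllC), so neither of Maynard Smith's conditions holds.
    print(E("AllC", "TFT"), E("TFT", "AllC"), E("AllC", "AllC"))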

This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces, and has motivated some to consider alternatives.

Human behavior

The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior) may be a result of a combination of two such strategies.[15]

Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain human behaviours that lack any genetic influences.


References

  1. ^ a b Maynard Smith, J. (1972). "Game Theory and The Evolution of Fighting". On Evolution. Edinburgh University Press. ISBN 0-85224-223-9.
  2. ^ a b c Maynard Smith, J.; Price, G.R. (1973). "The logic of animal conflict". Nature. 246 (5427): 15–8. Bibcode:1973Natur.246...15S. doi:10.1038/246015a0.
  3. ^ Maynard Smith, J. (1974). "The Theory of Games and the Evolution of Animal Conflicts" (PDF). Journal of Theoretical Biology. 47 (1): 209–21. doi:10.1016/0022-5193(74)90110-6. PMID 4459582.
  4. ^ a b c Maynard Smith, John (1982). Evolution and the Theory of Games. ISBN 0-521-28884-3.
  5. ^ MacArthur, R. H. (1965). Waterman T.; Horowitz H. (eds.). Theoretical and mathematical biology. New York: Blaisdell.
  6. ^ Hamilton, W.D. (1967). "Extraordinary sex ratios". Science. 156 (3774): 477–88. Bibcode:1967Sci...156..477H. doi:10.1126/science.156.3774.477. JSTOR 1721222. PMID 6021675.
  7. ^ Press release Archived 2016-03-03 at the Wayback Machine for the 1999 Crafoord Prize
  8. ^ Alexander, Jason McKenzie (23 May 2003). "Evolutionary Game Theory". Stanford Encyclopedia of Philosophy. Retrieved 31 August 2007.
  9. ^ Harsanyi, J (1973). "Oddness of the number of equilibrium points: a new proof". Int. J. Game Theory. 2 (1): 235–50. doi:10.1007/BF01737572.
  10. ^ a b Thomas, B. (1985). "On evolutionarily stable sets". J. Math. Biology. 22: 105–115. doi:10.1007/bf00276549.
  11. ^ Apaloo, J.; Brown, J. S.; Vincent, T. L. (2009). "Evolutionary game theory: ESS, convergence stability, and NIS". Evolutionary Ecology Research. 11: 489–515. Archived from the original on 2017-08-09. Retrieved 2018-01-10.
  12. ^ Thomas, B. (1984). "Evolutionary stability: states and strategies". Theor. Popul. Biol. 26 (1): 49–67. doi:10.1016/0040-5809(84)90023-6.
  13. ^ King, Oliver D.; Masel, Joanna (1 December 2007). "The evolution of bet-hedging adaptations to rare scenarios". Theoretical Population Biology. 72 (4): 560–575. doi:10.1016/j.tpb.2007.08.006. PMC 2118055. PMID 17915273.
  14. ^ Axelrod, Robert (1984). The Evolution of Cooperation. ISBN 0-465-02121-2.
  15. ^ Mealey, L. (1995). "The sociobiology of sociopathy: An integrated evolutionary model". Behavioral and Brain Sciences. 18 (3): 523–99. doi:10.1017/S0140525X00039595.
