Uggame

July 8, 2007
Table of Contents

1 Introduction
  1.1 What is Game Theory?
  1.2 Examples
  1.3 Our Methodology

6 Bayesian Games
  6.1 Preliminaries
  6.2 Bayesian Equilibrium
  6.3 Some Examples

10 Auctions
  10.1 Independent Private Values
       10.1.1 Second Price Auctions
       10.1.2 First Price Auctions
       10.1.3 All-Pay Auctions
  10.2 Revenue Equivalence
  10.3 Common Values and The Winner's Curse
  10.4 Auction Design
       10.4.1 Need for a Reserve Price
       10.4.2 Common Values
       10.4.3 Risk-Averse Bidders
Chapter 1

Introduction

1.1 What is Game Theory?
We, humans, cannot survive without interacting with other humans, and ironically, it some-
times seems that we have survived despite those interactions. Production and exchange require
cooperation between individuals at some level but the same interactions may also lead to disastrous
confrontations. Human history is as much a history of fights and wars as it is a history of success-
ful cooperation. Many human interactions carry the potential for cooperation and harmony as well
as conflict and disaster. Examples abound: relationships among couples, siblings, countries,
management and labor unions, neighbors, students and professors, and so on.
One can argue that the increasingly complex technologies, institutions, and cultural norms that
have existed in human societies have been there in order to facilitate and regulate these interactions.
For example, internet technology greatly facilitates buyer-seller transactions, but also complicates
them further by increasing opportunities for cheating and fraud. Workers and managers usually have
opposing interests when it comes to wages and working conditions, and labor unions as well as
labor law provide channels and rules through which any potential conflict between them can be ad-
dressed. Similarly, several cultural and religious norms, such as altruism or reciprocity, bring some
order to potentially dangerous interactions between individuals. All these norms and institutions
constantly evolve as the nature of the underlying interactions keeps changing. In this sense, under-
standing human behavior in its social and institutional context requires a proper understanding of
human interaction.
Economics, sociology, psychology, and political science are all devoted to studying human
behavior in different realms of social life. However, in many instances they treat individuals in
isolation, for convenience if not for anything else. In other words, they assume that to understand
one individual’s behavior it is safe to assume that her behavior does not have a significant effect on
other individuals. In some cases, and depending upon the question one is asking, this assumption
may be warranted. For example, what a small farmer in a local market, say in Montana, charges for
wheat is not likely to have an effect on the world wheat prices. Similarly, the probability that my
vote will change the outcome of the U.S. presidential elections is negligibly small. So, if we are
interested in the world wheat price or the result of the presidential elections, we may safely assume
that one individual acts as if her behavior will not affect the outcome.
In many cases, however, this assumption may lead to wrong conclusions. For example, how
much our farmer in Montana charges, compared to the other farmers in Montana, certainly affects
how much she and other farmers make. If our farmer sets a price that is lower than the prices
set by the other farmers in the local market, she would sell more than the others, and vice versa.
Therefore, if we assume that they determine their prices without taking this effect into account,
we are not likely to get anywhere near understanding their behavior. Similarly, the vote of one
individual may radically change the outcome of voting in small committees and assuming that they
vote in ignorance of that fact is likely to be misleading.
The subject matter of game theory is exactly those interactions within a group of individuals (or
governments, firms, etc.) where the actions of each individual have an effect on the outcome that
is of interest to all. Yet, this is not enough for a situation to be a proper subject of game theory: the
way that individuals act has to be strategic, i.e., they should be aware of the fact that their actions
affect others. The fact that my actions have an effect on the outcome does not necessitate strategic
behavior, if I am not aware of that fact. Therefore, we say that game theory studies strategic
interaction within a group of individuals. By strategic interaction we mean that individuals know
that their actions will have an effect on the outcome and act accordingly.
Having determined the types of situations that game theory deals with, we have to now discuss
how it analyzes these situations. Like any other theory, the objective of game theory is to organize
our knowledge and increase our understanding of the outside world. A scientific theory tries to
abstract the most essential aspects of a given situation, analyze them using certain assumptions and
procedures, and at the end derive some general principles and predictions that can be applied to
individual instances.
For it to have any predictive power, game theory has to postulate some rules according to which
individuals act. If we do not describe how individuals behave, what their objectives are, and how
they try to achieve those objectives, we cannot derive any predictions at all in a given situation. For
example, one would get completely different predictions regarding the price of wheat in a local
market if one assumes that farmers simply flip a coin and choose between $1 and $2 a pound
than if one assumes that they try to make as much money as possible. Therefore, to bring some
discipline to the analysis one has to introduce some structure in terms of the rules of the game.
The most important, and maybe the most controversial, assumption of game theory which
brings about this discipline is that individuals are rational.
Rationality implies that individuals know the strategies available to each individual, have com-
plete and consistent preferences over possible outcomes, and are aware of those preferences.
Furthermore, they can determine the best strategy for themselves and flawlessly implement it.
The term strategic interaction is actually more loaded than alluded to above. It is not
enough that I know that my actions, as well as yours, affect the outcome, but I must also know that
you know this fact. Take the example of two wheat farmers. Suppose both farmers A and B know
that their respective choices of prices will affect their profits for the day. But suppose, A does not
know that B knows this. Now, from the perspective of farmer A, farmer B is completely ignorant
of what is going on in the market and hence farmer B might set any price. This makes farmer
A's decision quite uninteresting in itself. To model the situation more realistically, we then have to
assume that they both know that they know that their prices will affect their profits. One actually
has to continue in this fashion and assume that the rules of the game, including how actions affect
the participants and individuals’ rationality, are common knowledge.
A fact X is common knowledge if everybody knows it, if everybody knows that everybody
knows it, if everybody knows that everybody knows that everybody knows it, and so on. This has
some philosophical implications and is subject to a lot of controversy, but for the most part we will
avoid those discussions and take it as given.
Its limitations aside, game theory has been fruitfully applied to many situations in the realm of
economics, political science, biology, law, etc. In the rest of this chapter we will illustrate the main
ideas and concepts of game theory and some of its applications using simple examples. In later
chapters we will analyze more realistic and complicated scenarios and discuss how game theory is
applied in the real world. Among those applications are firm competition in oligopolistic markets,
competition between political parties, auctions, bargaining, and repeated interaction between firms.
1.2 Examples
For the sake of comparison, we first start with an example in which there is no strategic inter-
action, and hence no need for game theory to analyze it.
Example 1.1 (A Single Person Decision Problem). Suppose Ali is an investor who can invest his
$100 either in a safe asset, say government bonds, which brings 10% return in one year, or he can
invest it in a risky asset, say a stock issued by a corporation, which either brings 20% return (if the
company performance is good) or zero return (if the company performance is bad).
              State
          Good      Bad
Bonds     10%       10%
Stocks    20%        0%
Clearly, which investment is best for Ali depends on his preferences and the relative likelihoods
of the two states of the world. Let's denote the probability of the good state occurring by p and that of
the bad state 1 − p, and assume that Ali wants to maximize the amount of money he has at the end
of the year. If he invests his $100 in bonds, he will have $110 at the end of the year irrespective
of the state of the world (i.e., with certainty). If he invests in stocks, however, with probability
p he will have $120 and with probability 1 − p he will have $100. We can therefore calculate his
average (or expected) money holdings at the end of the year as

    120p + 100(1 − p) = 100 + 20p.

If, for example, p = 1/2, then he expects to have $110 at the end of the year. In general, if p > 1/2,
then he would prefer to invest in stocks, and if p < 1/2 he would prefer bonds.
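To see the arithmetic at a glance, here is a small Python sketch (ours, not part of the text) that computes the expected end-of-year holdings of each asset as a function of p:

    def expected_holdings(p):
        # Expected end-of-year value (in $) of a $100 investment;
        # p is the probability of the good state.
        bonds = 110                        # 10% return in either state
        stocks = 120 * p + 100 * (1 - p)   # 20% in the good state, 0% in the bad
        return bonds, stocks

    for p in (0.25, 0.5, 0.75):
        bonds, stocks = expected_holdings(p)
        print(p, bonds, stocks)   # stocks beat bonds exactly when p > 1/2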
This is just one example of a single person decision making problem, in which the decision
problem of an individual can be analyzed in isolation from the other individuals' behavior. Any
uncertainty involved in such problems is exogenous, in the sense that it is not determined or in-
fluenced in any way by the behavior of the individual in question. In the above example, the only
uncertainty comes from the performance of the stock, which we may safely assume to be inde-
pendent of Ali's choice of investment. Contrast this with the situation illustrated in the following
example.
example.
Example 1.2 (An Investment Game). Now, suppose Ali again has two options for investing his
$100. He may either invest it in bonds, which have a certain return of 10%, or he may invest it in
a risky venture. This venture requires $200 to be a success, in which case the return is 20%, i.e.,
$100 investment yields $120 at the end of the year. If total investment is less than $200, then the
venture is a failure and yields zero return, i.e., $100 investment yields $100. Ali knows that there
is another person, let’s call her Beril, who is exactly in the same situation, and there is no other
potential investor in the venture. Unfortunately, Ali and Beril don’t know each other and cannot
communicate. Therefore, they both have to make the investment decision without knowing the
decisions of each other.
We can summarize the returns on the investments of Ali and Beril as a function of their deci-
sions in the table given in Figure 1.1. The first number in each cell represents the return on Ali's
investment, whereas the second number represents Beril's return. We assume that both Ali and
Beril know the situation represented in this table, i.e., they know the rules of the game.

                          Beril
                   Venture        Bonds
  Ali   Venture   120, 120      100, 110
        Bonds     110, 100      110, 110

Figure 1.1: The investment game (end-of-year value of each $100 investment).
The existence of strategic interaction is apparent in this situation, which should be contrasted
with the one in Example 1.1. The crucial element is that the outcome of Ali’s decision (i.e., the
return on the investment chosen) depends on what Beril does. Investing in the risky option, i.e., the
venture, has an uncertain return, as was the case in Example 1.1. However, now the source of the
uncertainty is another individual, namely Beril. If Ali believes that Beril is going to invest in the
venture, then his optimal choice is the venture as well, whereas, if he thinks Beril is going to invest
in bonds, his optimal choice is to invest in bonds. Furthermore, Beril is in a similar situation, and
this fact makes the problem significantly different from the one in Example 1.1.
So, what should Ali do? What do you expect would happen in this situation? At this point
we do not have enough information in our model to provide an answer. First we have to describe
Ali and Beril’s objectives, i.e., their preferences over the set of possible outcomes. One possibility,
economists’ favorite, is to assume that they are both expected payoff, or utility, maximizers. If
we further take utility to be the amount of money they have, then we may assume that they are
expected money maximizers. This, however, is not enough for us to answer Ali’s question, for we
have to give Ali a way to form expectations regarding Beril’s behavior.
One simple possibility is to assume that Ali thinks Beril is going to choose bonds with some
given probability p between zero and one. Then, his decision problem becomes identical to the one
in Example 1.1. Under this assumption, we do not need game theory to solve his problem. But,
is it reasonable for him to assume that Beril is going to decide in such a mechanical way? After
all, we have just assumed that Beril is an expected money maximizer as well. So, let’s assume that
they are both rational, i.e., they choose whatever action that maximizes their expected returns, and
they both know that the other is rational.
Is this enough? Well, Ali knows that Beril is rational, but this is still not enough for him to
deduce what she will do. He knows that she will do what maximizes her expected return, which,
in turn, depends on what she thinks Ali is going to do. Therefore, what Ali should do depends on
what he thinks Beril thinks that he is going to do. So, we have to go one more step and assume
that not only each knows that the other is rational but also each knows that the other knows that
the other is rational. We can continue in this manner to argue that an intelligent solution to Ali’s
conundrum is to assume that both know that both are rational; both know that both know that both
are rational; both know that both know that both know that both are rational; ad infinitum. This
is a difficult problem indeed and game theory deals exactly with these kinds of problems. The next
example provides a problem that is relatively easier to solve.
Example 1.3 (Prisoners’ Dilemma). Probably the best known example, which has also become
a parable for many other situations, is called the Prisoners’ Dilemma. The story goes as follows:
two suspects are arrested and put into different cells before the trial. The district attorney, who is
pretty sure that both of the suspects are guilty but lacks enough evidence, offers them the following
deal: if both of them confess and implicate the other (labeled C), then each will be sentenced to,
say, 5 years of prison time. If one confesses and the other does not (labeled N), then the “rat” goes
free for his cooperation with the authorities and the non-confessor is sentenced to 6 years of prison
time. Finally, if neither of them confesses, then both suspects get to serve one year.
We can compactly represent this story as in Figure 1.2, where we assume that the utility of a
year in prison is −1 for each suspect.

                    Player 2
                  C          N
  Player 1  C   −5, −5     0, −6
            N   −6, 0     −1, −1

Figure 1.2: Prisoners' Dilemma.
For instance, the best outcome for player 1 is the case in which he confesses and player 2
does not. The next best outcome for player 1 is (N, N), then (C,C), and finally (N,C). A
similar interpretation applies to player 2.
How would you play this game in the place of player 1? One useful observation is the follow-
ing: no matter what player 2 intends to do, playing C yields a better outcome for player 1. This is
so because (C,C) is a better outcome for him than (N,C), and (C, N) is a better outcome for him
than (N, N). So, it seems only “rational” for player 1 to play C by confessing. The same reasoning
for player 2 entails that this player too is very likely to play C. A very reasonable prediction here
is, therefore, that the game will end in the outcome (C,C) in which both players confess to their
crimes.
And this is the dilemma: wouldn’t each of the players be strictly better off by playing N in-
stead? After all, (N, N) is preferred by both players to (C,C). It is really a pity that the rational
individualistic play leads to an inferior outcome from the perspective of both players.
You may at first think that this situation arises here only because the prisoners are put into
separate cells and hence are not allowed to have pre-play communication. Surely, you may argue,
if the players debate about how to play the game, they would realize that (N, N) is superior relative
to (C,C) for both of them, and thus agree to play N instead of C. But even if such a verbal agreement
is reached prior to the actual play of the game, what makes player 1 so sure that player 2 will not
backstab him in the last instant by playing C? After all, if player 2 is convinced that player 1 will
keep his end of the bargain by playing N, it is better for her to play C. Thus, even if such an
agreement is reached, both players may reasonably fear betrayal, and may thus choose to betray
before being betrayed by playing C; we are back to the dilemma.
☞ What do you think would happen if players could sign binding contracts?
Even if you are convinced that there is a genuine dilemma here, you may be wondering why
we are making such a big deal out of a silly story. Well, first note that the “story” of the prisoners’
dilemma is really only a story. The dilemma presented above corresponds to far more realistic
scenarios. The upshot is that there are instances in which the interdependence between individuals
who rationally follow their self-interest yields socially undesirable outcomes. Considering that
one of the main claims of neoclassical economics is that the selfish pursuit of individual welfare
yields efficient outcomes (the famous invisible hand), this observation is a very important one, and
economists do take it very seriously. We find in the prisoners' dilemma a striking demonstration of the
fact that the classical claim that “decentralized behavior implies efficiency” is not necessarily valid
in environments with genuine room for strategic interaction.
Prisoners’ dilemma type situations actually arise in many interesting scenarios, such
as arms-races, price competition, dispute settlements with or without lawyers, etc. The
common element in all these scenarios is that if everybody is cooperative a good outcome
results, but nobody finds it in her self-interest to act cooperatively, and this leads to a less
desirable outcome. As an example consider the pricing game in a local wheat market
(depicted in Figure 1.3) where there are only two farmers and they can either set a low
price (L) or a high price (H). The farmer who sets the lower price captures the entire
market, whereas if they set the same price they share the market equally.
This example paints a very grim picture of human interactions. Yet, many times we observe
cooperation rather than its complete failure. One important area of research in game theory is the
analysis of environments, institutions, and norms, which actually sustain cooperation in the face of
such seemingly hopeless situations as the prisoners’ dilemma.
Just to illustrate one such scenario, consider a repetition of the Prisoners’ Dilemma game.
In a repeated interaction, each player has to take into account not only her payoff in
each interaction but also how the outcome of each of these interactions influences future ones.
For example, each player may induce cooperation by the other player by adopting a strategy that
punishes bad behavior and rewards good behavior. We will analyze such repeated interactions in
Chapter 9.
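As a small taste of what is to come, the following Python sketch (ours; payoffs as in Figure 1.2, with N read as "cooperate") pits the "tit-for-tat" rule — start with N, then mirror the opponent's previous move — against an unconditional confessor and against itself:

    def play(strat1, strat2, rounds=10):
        # Total payoffs in a repeated prisoners' dilemma.
        payoff = {("C", "C"): (-5, -5), ("C", "N"): (0, -6),
                  ("N", "C"): (-6, 0), ("N", "N"): (-1, -1)}
        last1 = last2 = None
        total1 = total2 = 0
        for _ in range(rounds):
            a1, a2 = strat1(last2), strat2(last1)
            p1, p2 = payoff[(a1, a2)]
            total1, total2 = total1 + p1, total2 + p2
            last1, last2 = a1, a2
        return total1, total2

    tit_for_tat = lambda last: "N" if last in (None, "N") else "C"
    always_confess = lambda last: "C"

    print(play(tit_for_tat, tit_for_tat))          # (-10, -10): cooperation sustained
    print(play(always_confess, always_confess))    # (-50, -50)
    print(play(tit_for_tat, always_confess))       # (-51, -45)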
Example 1.4 (Rebel Without a Cause). In the classic 1955 movie Rebel Without a Cause, Jim,
played by James Dean, and Buzz compete for Judy, played by Natalie Wood. Buzz’s gang members
gather by a cliff that drops down to the Pacific Ocean. Jim and Buzz are to drive toward the cliff;
the first person to jump from his car is declared the chicken whereas the last person to jump is a
hero and captures Judy’s heart. Each player has two strategies: jump before the other player (B)
and after the other player (A). If they jump at the same time (B, B), they survive but lose Judy. If
one jumps before and the other after, the latter survives and gets Judy, whereas the former gets to
live, but without Judy. Finally, if both choose to jump after the other (A, A), they die an honorable
death.
The situation can be represented as in Figure 1.4.
The likely outcome is not clear. If Jim thinks Buzz is going to jump before him, then he is
better off waiting and jumping after. On the other hand, if he thinks Buzz is going to wait him
out, he had better jump before: he is young and there will be other Judys. In the movie, Buzz's leather
jacket’s sleeve is caught on the door handle of his car. He cannot jump, even though Jim jumps.
Both cars and Buzz plunge over the cliff.1

1 In real life, James Dean killed himself and injured two passengers while driving on a public highway at an estimated
speed of 100 mph.
The game of chicken is also used as a parable for situations that are more interesting than the
above story. There are dynamic versions of the game of chicken known as wars of attrition. In a
war of attrition game, two individuals are supposed to take an action and the choice is the timing
of that action. Both players desire to be the last to take that action. For example, in the game of
chicken, the action is to jump. Therefore, both players try to wait each other out, and the one who
concedes first loses.
Example 1.5 (Entry Game). In all the examples up to here we assumed that the players either
choose their strategies simultaneously or without knowing the choice of the other player. We
model such situations by using what is known as Strategic (or Normal) Form Games.
In some situations, however, players observe at least some of the moves made by other players
and therefore this is not an appropriate modeling choice. Take for example the Entry Game depicted
in Figure 1.5. In this game Pepsi (P) first decides whether to enter a market currently monopolized
by Coke (C). After observing Pepsi's choice, Coke decides whether to fight the entry (F) by, for
example, price cuts and/or advertisement campaigns, or acquiesce (A).

[Figure 1.5: The entry game tree. Pepsi first chooses Out (payoffs 0, 4) or In; after In, Coke chooses between acquiescing, A (payoffs 2, 2), and fighting, F (payoffs −1, 0). The first payoff in each pair is Pepsi's, the second Coke's.]
Such games of sequential moves are modeled using what is known as Extensive Form Games,
and can be represented by a game tree as we have done in Figure 1.5.
In this example, we assumed that Pepsi prefers entering only if Coke is going to acquiesce, and
Coke prefers to stay as a monopoly, but if entry occurs it prefers to acquiesce; hence the payoff
numbers appended to the end nodes of the game.
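Backward induction on this small tree can be spelled out in a few lines of Python (a sketch of ours, with payoffs read off Figure 1.5 as (Pepsi, Coke) pairs):

    # Coke's payoffs after entry determine its reply; Pepsi anticipates it.
    after_entry = {"A": (2, 2), "F": (-1, 0)}
    coke_reply = max(after_entry, key=lambda r: after_entry[r][1])   # 'A'

    out_payoff = (0, 4)
    pepsi_move = "In" if after_entry[coke_reply][0] > out_payoff[0] else "Out"
    print(pepsi_move, coke_reply)   # In A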
Example 1.6 (Voting). Another interesting application of game theory, to political science this
time, is voting. As a simple example, suppose that there are two competing bills, A and B, and
three legislators, voters 1, 2 and 3, who are to vote on these bills. The voting takes place in two
stages. They first vote between A and B, and then between the winner of the first stage and the
status-quo, denoted S. The voters' rankings of the alternatives are given in Table 1.1.

            Voter 1   Voter 2   Voter 3
   best        A         B         S
               S         A         A
   worst       B         S         B

Table 1.1: The voters' rankings, from best to worst.
First note that if each voter votes truthfully, A will be the winner in the first round, and it will
also win against the status-quo in the second round. Do you think this will be the outcome? Well,
voter 3 is not very happy about the outcome and has another way to vote which would make her
happier. Assuming that the other voters keep voting truthfully, she can vote for B, rather than A,
in the first round, which would make B the winner. B will then lose to S in the second
round and voter 3 is better off. Could this be the outcome? Well, now voter 2 can switch her vote
to A to get A elected in the first round which then wins against S. Since she likes A better than S
she would like to do that.
We can analyze the situation more systematically starting from the second round. In the second
round, each voter should vote truthfully; they have nothing to gain and possibly something to lose
by voting for a less preferred option. Therefore, if A is the winner of the first round, it will also win
in the second round. If B wins in the first round, however, the outcome will be S. This means that,
by voting between A and B in the first round, they are actually voting between A and S. Therefore,
voters 1 and 2 will vote for A and the eventual outcome will be A. (See Figure 1.6.)

[Figure 1.6: The two-round voting tree. The first round is between A and B; if A wins, the second round (A versus S) yields A, and if B wins, the second round (B versus S) yields S.]
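The same backward-induction logic is easy to mechanize. A minimal Python sketch (ours, with the rankings of Table 1.1 hard-coded from best to worst):

    rankings = {
        1: ["A", "S", "B"],
        2: ["B", "A", "S"],
        3: ["S", "A", "B"],
    }

    def pairwise_winner(x, y):
        # Majority winner of a binary vote under truthful voting.
        votes_for_x = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
        return x if votes_for_x >= 2 else y

    # Second round: the first-round winner faces the status quo S.
    second = {z: pairwise_winner(z, "S") for z in ("A", "B")}
    # First round: a vote between A and B is effectively a vote
    # between the final outcomes second['A'] and second['B'].
    print(second)                                      # {'A': 'A', 'B': 'S'}
    print(pairwise_winner(second["A"], second["B"]))   # A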
Example 1.7 (Investment Game with Incomplete Information). So far, in all the examples, we
have assumed that every player knows everything about the game, including the preferences of the
other players. Reality, however, is not that simple. In many situations we lack relevant information
regarding many components of a strategic situation, such as the identity and preferences of other
players, strategies available to us and to other players, etc. Such games are known as Games with
Incomplete (or Private) Information.
As an illustration, let us go back to Example 1.2, which we modify by assuming that Ali is
not certain about Beril’s preferences. In particular, assume that he believes (with some probability
p) that Beril has the preferences represented in Figure 1.1, and with probability 1 − p he believes
Beril is a little crazy and has some inherent tendency to take risks, even if they are unreasonable
from the perspective of a rational investor. We represent the new situation in Figure 1.7.
If Ali was sure that Beril was crazy, then his choice would be clear: he should choose to invest
in the venture. How small should p be for the solution of this game to be both Ali and Beril,
irrespective of her preferences, investing in the venture? Suppose that “normal” Beril chooses
bonds and Ali believes this to be the case. Investing in bonds yields $110 for Ali irrespective of
what Beril does. Investing in the venture, however, has the following expected return for Ali:

    100p + 120(1 − p) = 120 − 20p,

which is bigger than $110 if p < 1/2. In other words, we would expect the solution to be investment
in the venture for both players if Ali’s belief that Beril is crazy is strong enough.
Example 1.8 (Signalling). In Example 1.7 one of the players had incomplete information but they
chose their strategies without observing the choices of the other player. In other words, players did
not have a chance to observe others’ behavior and possibly learn from them. In certain strategic
interactions this is not the case. When you apply for a job, for example, the employer is not exactly
sure of your qualities. So, you try to impress your prospective boss with your resume, education,
dress, manners etc. In essence, you try to signal your good qualities, and hide the bad ones, with
your behavior. The employer, on the other hand, has to figure out which signals she should take
seriously and which ones to discount (i.e. she tries to screen good candidates).
This is also the case when you go out on a date with someone for the first time. Each person
tries to convey their good sides while trying to hide the bad ones, unless, of course, the date was a
failure from the very beginning. So, there is a complex interaction of signalling and screening going on.
Suppose, for example, that Ali takes Beril out on a date. Beril is going to decide whether she is
going to have a long term relationship with him (call that marrying) or dump him. However, she
wants to marry a smart guy and does not know whether Ali is smart or not. She thinks he
is smart or dumb with equal probabilities. Ali really wants to marry her and tries to show that he
is smart by cracking jokes and being funny in general during the date. However, being funny is not
very easy. It is just stressful, and particularly so if one is dumb, to constantly try to come up with
jokes that will impress her. Figure 1.8 illustrates the situation.
What do you think will happen at the end? Is it possible for a dumb version of Ali to be funny
and marry Beril? Or, do you think it is more likely for a smart Ali to marry Beril by being funny,
while a dumb Ali prefers to be quiet and just enjoy the food, even if the date is not going further
than the dinner?
Example 1.9 (Hostile Takeovers). During the 1980s there was a huge wave of mergers and acqui-
sitions in the United States. Many of the acquisitions took the form of "hostile takeovers," a term
used to describe takeovers that are implemented against the will of the target company’s manage-
ment. They usually take the form of direct tender offers to shareholders, i.e., the acquirer publicly
offers a price to all the shareholders. Some of these tender offers were in the form of what is known
as “two-tiered tender offer.”
Such was the case in 1988 when Robert Campeau made a tender offer for Federated Department
Stores. Let us consider a simplified version of the actual story. Suppose that the pre-takeover price
of a Federated share is $100. Campeau offers to pay $105 per share for the first 50% of the shares,
and $90 for the remainder. All shares, however, are bought at the average price of the total shares
tendered. If the takeover succeeds, the shares that were not tendered are worth $90 each.
For example, if 75% of the shares are tendered, Campeau pays $105 to the first 50% and pays
$90 to the remaining 25%. The average price that Campeau pays is then equal to
    p = 105 × (50/75) + 90 × (25/75) = 100.
In general, if s percent of the shares are tendered (where s ≥ 50), the average price paid by Campeau,
and thus the price received by each tendering shareholder, is

    p(s) = 105 × (50/s) + 90 × ((s − 50)/s).

Notice that if everybody tenders, i.e., s = 100, then Campeau pays $97.50 per share, which is less
than the current market price. So, this looks like a good deal for Campeau, but only if a sufficiently
high number of shareholders tender.
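A quick Python sketch of this pricing rule (the function name is ours):

    def average_price(s):
        # Average price per tendered share when s percent of all shares
        # are tendered (s >= 50): the first 50% get $105, the rest $90.
        return 105 * (50 / s) + 90 * ((s - 50) / s)

    for s in (50, 75, 100):
        print(s, round(average_price(s), 2))   # 105.0, 100.0, 97.5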
☞ If you were a Federated shareholder, would you tender your shares to Campeau?
☞ Does your answer depend on what you think other shareholders will do?
☞ Now suppose Macy’s offers $102 per share conditional upon obtaining the majority.
What would you do?
The actual unfolding of events was quite unfortunate for Campeau. Macy's joined the bidding
and this increased the premium quite significantly. Campeau finally won out (not by a two-tiered
tender offer, however) but paid $8.17 billion for the stock of a company with a pre-acquisition
market value of $2.93 billion. Campeau financed 97 percent of the purchase price with debt. Less
than two years later, Federated filed for bankruptcy and Campeau lost his job.
1.3 Our Methodology

So, we have seen that many interesting situations involve strategic interactions between indi-
viduals and therefore lend themselves to a game theoretical study. At this point one has two
options. We can either analyze each case separately or we can try to find general principles that
apply to any game. As we have mentioned before, game theory provides tools to analyze strate-
gic interactions, which may then be applied to any arbitrary game-like situation. In other words,
throughout this course we will analyze abstract games, and suggest “reasonable” outcomes as solu-
tions to those games. To fix ideas, however, we will discuss applications of these abstract concepts
to particular cases which we hope you will find interesting.
We will analyze games along two different dimensions: (1) the order of moves; (2) information.
This gives us four general forms of games, as we illustrate in Table 1.2.
                                     Information
                       Complete                        Incomplete

  Simultaneous    Strategic Form Games             Bayesian Games
  Moves           with Complete Information        Example 1.7
                  Example 1.2

  Sequential      Extensive Form Games             Extensive Form Games
  Moves           with Complete Information        with Incomplete Information
                  Example 1.5                      Example 1.8

Table 1.2
Chapter 2

Strategic Form Games with Complete Information
2.1 Preliminaries
The simplest form of strategic interdependence prevails in contexts in which the actions are
either taken simultaneously or without the knowledge of action choices of the other players. To
model such a setting, all we need to do is to specify the set of interacting individuals (commonly
called the players), the set of actions available to these individuals, and a description of the incen-
tives regarding the modeled interaction. That is, we need to write down the who, what and why of
the setting we are trying to model.
Formally speaking, we need exactly three objects to define a game in strategic form.
➥ A set of players: N
➥ A set of actions: Ai for each player i
➥ A payoff function: ui : A → R for each player i
In general, we name the players by integers and denote a generic player by i, whom we call
player i. However, this choice is arbitrary and one may choose to name the players differently.
In the chicken game of Example 1.4, for example, the set of players is given by N =
{Jim, Buzz}.
We interpret Ai as the set of all available actions (or strategies) to player i. That is, for player
i, "playing the game" means choosing an action from the set Ai. For instance, in the children's game
“rock, scissors and paper,” the action space for each player is {rock, scissors, paper}, and in the
prisoners’ dilemma the action space for each player i is {confess, not confess} (i.e. {C, N}).
Given the action spaces of the players, we define the outcome space of the game as

    A = ×i∈N Ai .

A game in strategic form is then simply a list G = (N, (Ai )i∈N , (ui )i∈N ). (Note that the term
"normal form game" is also used in the literature.) Thus, when we talk about
a “game in strategic form” we have in mind a setup in which all this information is provided. In
particular, if the game is played by only two players (so that N = {1, 2}), we need exactly four
pieces of information:
(A1 , A2 , u1 , u2 ).
Therefore, if each player has finitely many actions available to him/her, then we can represent a
2-person game in strategic form by means of a bimatrix, as we have done in Examples 1.2–1.4 in
Chapter 1. In such a representation our convention is always that player 1 (who is a male) chooses
the rows and player 2 (who is a female) chooses the columns.
A crucial assumption in our model of a game in strategic form is that everything about the
formulation of the game (that is the set of players, the set of actions and the utility functions) are
all known by each player in the game. What is more, each player knows that all players know
everything about the game, and all players know that each player knows that all players know ev-
erything about the game, and so on. Believe it or not, at a philosophical level, all this matters. But
we shall not concern ourselves much with this issue; we shall simply postulate that the primitives
of a game are common knowledge without worrying too much about what this really means.1 What
is more, there is no uncertainty pertaining to the actions available to the players and to their payoff
functions. This makes the game form defined in this section a strategic form game with complete
information. In later sections we will have a chance to see how to model situations involving dy-
namic interaction as well as incomplete information on the part of some players.

1 [...] at this stage. We shall thus do no more on this topic than recommending to the interested reader the excellent
survey of J. Geanakoplos (1992), "Common knowledge," Journal of Economic Perspectives 6, pp. 53-82.
As with any other new concept, the best way to come to grips with games in strategic
form is to study several specific examples; hence the next section.
2.2 Examples
Example 2.1 (Prisoners' Dilemma). Recall that the prisoners' dilemma scenario we have discussed
in the introduction was represented by the bimatrix

            C          N
    C    −5, −5      0, −6
    N    −6, 0      −1, −1
Here we have A1 = {C, N} = A2 so that A = {(C,C), (C, N), (N,C), (N, N)} and u1 (C,C) =
−5, u2 (N,C) = 0, . . . , and so on. You should make sure you understand that the bimatrix
carries exactly the same information as this formal description.
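To make the formalism concrete, here is a minimal Python sketch (our own encoding, not from the text) of the prisoners' dilemma as a set of players, action sets, and payoff functions:

    N = [1, 2]
    A = {1: ["C", "N"], 2: ["C", "N"]}

    # Payoffs indexed by the outcome (a1, a2); a year in prison is worth -1.
    u = {("C", "C"): (-5, -5), ("C", "N"): (0, -6),
         ("N", "C"): (-6, 0), ("N", "N"): (-1, -1)}

    def u_i(i, a1, a2):
        # Payoff of player i at the outcome (a1, a2).
        return u[(a1, a2)][i - 1]

    print(u_i(1, "C", "C"))   # -5
    print(u_i(2, "N", "C"))   # 0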
Example 2.2 (Battle of the Sexes). Ali and Beril are married and they are in their offices on a
Friday evening trying to figure out what they should do after work. They cannot get in touch
with each other but would like to meet and spend the evening going to a movie or an opera. Ali
likes movies better while Beril would rather go to an opera. However, being in love, the most
important thing for them is to do something together; both view the night “wasted” unless they
spend it together.
We may represent this story as a 2-person game in strategic form by means of the bimatrix in
Figure 2.3, where uAli (o, m) = 0, uBeril (o, o) = 2, . . . , and so on. (Once again the choice of utility values is arbi-
trary other than the ranking of the outcomes it entails.) Like the prisoners’ dilemma, battle of the
sexes is also a famous example in game theory that will help us illustrate many interesting con-
cepts later on. So perhaps now is a good time for you to think about how you would play this game
in actuality.
Example 2.3 (A Pure Coordination Game). Suppose now that Ali (call him player 1) and Beril
(player 2) are supposed to meet after work either in Grand Central Station (G) or Penn Station (P).
Unfortunately, neither knows for sure where the other will go. They would like to meet and, both of their offices
being on the West side, they would rather meet in Penn Station. So, we have A1 = A2 = {G, P} and
the game being played is represented by the bimatrix in Figure 2.4.
This game too is an interesting one, and we shall come back to it later when we discuss the
effects of preplay communication among the players. For now, ask yourself if your prediction about
how this game would actually be played depends on whether preplay communication is allowed or
not.
Example 2.4 (Matching Pennies). Ali and Beril finally meet and try to decide whether to go see
a movie or an opera. Neither one of them concedes to the other and they decide to play matching
pennies to choose where to go. Each of them conceals a penny in their palm either with its face up
(heads, H) or face down (tails, T ). Both coins are revealed simultaneously. If they match, Ali wins,
and if they are different, Beril wins. The bimatrix is given in Figure 2.5.
Before considering an economically motivated example, let us note that the Prisoners' Dilemma
and the Coordination Game are (two-person) symmetric games in the sense that they satisfy the fol-
lowing two conditions: A1 = A2 , and u1 (a, b) = u2 (b, a) for all a, b ∈ A1 .
Symmetric games are in general simpler than asymmetric games because reasoning from
the point of view of one player is sufficient in such games to understand how the other players
reason as well. We shall utilize this fact in many examples that we shall consider in this book.
Let us now examine a slightly more sophisticated example of a strategic form game. This
example plays a fundamental role in the theory of industrial organization, and we shall work out
several variations of it in the sequel.
Example 2.5 (Cournot Duopoly Model). Consider a market for a single (homogeneous) good
whose market inverse demand function is
P = D(Q), Q≥0
where P is the price of the good and Q is the quantity demanded. We assume that the function D is
monotonically decreasing. Suppose that there are exactly two firms producing this good. The cost
functions of these firms are
Ci = Ci (Qi ), Qi ≥ 0, i = 1, 2,
where Ci is a twice differentiable function defined on R+ with Ci′ > 0 and Ci′′ ≤ 0.
We may model the market interaction of these firms as a 2-person game in strategic form as
follows:
(i) N = {1, 2}.
(ii) Ai = [0, Q̄]; thus (Q1 , Q2 ) ∈ A = [0, Q̄]2 means that firm i is producing Qi units at the outcome
(Q1 , Q2 ). The value Q̄ > 0 is an upper bound on the level of production of firms acting as a capacity
constraint.
(iii) ui (Q1 , Q2 ) = Qi D(Q1 + Q2 ) − Ci (Qi ), the profit of firm i at the outcome (Q1 , Q2 ).

An important special case of this model arises when the inverse demand function is linear, P = a − bQ
for Q ≤ a/b (and P = 0 otherwise), and the cost functions are linear as well, Ci (Qi ) = cQi ,
with a > c > 0 and b > 0 being given parameters. To simplify the analysis further, we set Q̄ in this
specification equal to a/b; this is meaningful since no firm would realistically produce an output
level that exceeds a/b in this setting as this would entail making negative profits. We refer to this
model as the linear Cournot model, and observe that the payoff function of firm i in this model is:
    ui (Q1 , Q2 ) = (a − b(Q1 + Q2 ))Qi − cQi    if 0 ≤ Q1 + Q2 ≤ a/b,
    ui (Q1 , Q2 ) = −cQi                          if a/b < Q1 + Q2 ,
for each Qi ∈ Ai , i = 1, 2. Therefore, the associated 2-person game in strategic form is symmetric
(while this is not necessarily the case in the general model).
The important thing to note in the Cournot model is that, unlike the market structures of perfect
competition (where all firms disregard the actions of other firms since each firm is assumed to be
negligible in the market) and of monopoly (where there is no other firm around to matter), one
firm’s action does not alone determine the outcome. Thus, we need the apparatus of game theory
to provide a prediction with respect to the market outcome.
We shall later encounter many more examples of games in strategic form. But now it is time
that we turn to the question of how to play a strategic game.
Chapter 3

Strategic Form Solution Concepts
The problem of a player in a strategic game is to decide upon an action to take without knowing
which actions will be taken by her opponents. Therefore, each individual has to form a conjecture
regarding the action choices of the other players, and this is not always an easy task. But, in some
cases, this difficulty does not really arise, because there is an optimal way of taking an action
independently of the intended play of the others. We have in fact already encountered such a
situation in the prisoners’ dilemma. Indeed, taking the noncooperative action of confessing, C, is
optimal for, say player 1, in the prisoners’ dilemma no matter what player 2 is planning to do. In
this sense, we say that there is an “obvious” way of playing the prisoners’ dilemma for player 1
(and similarly for player 2): choosing C. We formalize such sure-fire actions in general as follows.
Let A = × j∈N A j be the outcome space of an n-person game in strategic form, and let a =
(a1 , ..., an ) ∈ A. For each i, we let

    a−i = (a1 , . . . , ai−1 , ai+1 , . . . , an )

and write a = (ai , a−i ). Clearly, a−i is nothing but a profile of actions taken by all players in the
game other than i. We denote the set of all such profiles conveniently as A−i . Formally speaking,
we have A−i = × j∈N\{i} A j .
Definition. Take a game in strategic form and consider any two actions ai , bi ∈ Ai for any
player i ∈ N. We say that ai weakly dominates bi if

    ui (ai , a−i ) ≥ ui (bi , a−i ) for all a−i ∈ A−i

and

    ui (ai , a−i ) > ui (bi , a−i ) for some a−i ∈ A−i .

It strictly dominates bi if

    ui (ai , a−i ) > ui (bi , a−i ) for all a−i ∈ A−i .
In other words, an action ai weakly dominates another action bi for player i, if, irrespective of
what other players do, action ai does at least as well as action bi , and for some action profiles of
the other players ai does strictly better than bi . If ai is strictly better than bi , irrespective of what
other players do, then we say that ai strictly dominates bi .
To reiterate, a dominant strategy for a player is an action that is optimal for this player no
matter what his opponents do. Put differently, a player with a dominant action does not have to
worry about how his opponents will play the game; for any belief that he might have about the plans
of actions by others, playing a dominant action is optimal. Consequently, there is good reason to
believe that rational players would play their dominant actions in a given game (of course, provided
that such actions are present). This idea leads us to the following equilibrium concept: an outcome
a ∈ A is a strictly (weakly) dominant strategy equilibrium of a game G if, for each player i, the
action ai is a strictly (weakly) dominant action. We denote the set of all strictly dominant strategy
equilibria of G by Ds (G).
As we noted earlier, the action C is strictly dominant for both players in the prisoners' dilemma
(PD). Thus Ds (PD) = {(C,C)}, which is also the weakly dominant strategy equilibrium, since a
strictly dominant action is also a weakly dominant action. As an example of a weakly dominant
strategy equilibrium which is not strict, consider
L R
U 2,1 0,2
D 2,3 4,3
In this game there is no strictly dominant strategy equilibrium. There is, however, a weakly
dominant strategy equilibrium given by the action profile (D, R).
Dominant strategy equilibrium is quite a reasonable equilibrium concept which does not de-
mand an excessive amount of “rationality” from the players. It only demands the players to be
(rational) optimizers, and does not require them to know that the others are rational too. Unfortu-
nately, this concept is silent in many interesting games since the existence of a dominant action for
all players in a given game is a relatively rare phenomenon.
It seems that we need to demand more rationality from the players to obtain more powerful
predictions. We now turn to a systematic way of doing this.
We have argued above that a “rational” player would play a dominant action (when such an
action exists). Turning this argument on its head, we may then say that a “rational” player would
never play an action when there is another action available to her that guarantees strictly more
payoffs for this player irrespective of the intended play of others. We refer to such an action as a
strictly dominated action. Formally,
Definition. Take a game in strategic form and consider any two actions ai , bi ∈ Ai for any
player i ∈ N. We say that ai is strictly dominated by bi if

    ui (ai , a−i ) < ui (bi , a−i ) for all a−i ∈ A−i .
A fundamental premise in game theory is that “rational” players do not play strictly dominated
actions. For, as the argument goes, there is no belief that a player may hold about the intended play
of others such that a strictly dominated action is optimal. Therefore, given a game G in strategic
form, it makes sense to eliminate all the strictly dominated actions for any one of the players; after
all “rational” players know that this player will not take any such action. But if all players ponder
about how to play the game after eliminating (in their heads) strictly dominated actions of a given
player, then the actual game being played is effectively a smaller game than the original one. But
then why don't we search for strictly dominated actions in this smaller game, that is, eliminate next
the strictly dominated actions of another player, requiring "dominance" only against actions not
yet eliminated? And why not continue this way as far as we can?
Well, doing this may or may not be a reasonable thing to do depending on the context. Nev-
ertheless, this elimination process, which is called the iterated elimination of strictly dominated
(IESD) actions, certainly leads us to an interesting equilibrium concept. First of all, it yields an
extension of the strictly dominant strategy equilibrium. While this is formally obvious, it is an
important observation and we state it as a proposition.
Proposition 3.1. If both players have strictly dominant actions, then IESD actions leads to the
unique dominant strategy equilibrium.
Proof. Obvious.
Moreover, the IESD actions may apply in many games with no dominant strategy equilibrium,
and may yield a prediction concerning the play of the game even if no player has a dominant action.
This prediction may even be sharp enough to entail a unique outcome. In this case we say that G is
dominance solvable.
The Prisoners' Dilemma is dominance solvable by Proposition 3.1. On the other hand, IESD
actions do not at all refine the outcome space in the Battle of the Sexes since neither of the players has
a strictly dominated action in this game. In other words, the Battle of the Sexes is not dominance
solvable.
As a less trivial example, consider the game
L M R
U 1,0 1,2 0,1
D 0,3 0,1 2,0
which does not possess a dominant strategy equilibrium. Observe that R is strictly dominated for
player 2 (by action M). Therefore, in the first stage of the IESD process, we eliminate R. The idea is
that player 1, being “rational,” knows that player 2 will not play R, and views the game effectively
as
L M
U 1,0 1,2
D 0,3 0,1
But player 2, being “rational,” knows that player 1 is really contemplating about how to play this
smaller game, and notices that in this game D is strictly dominated for player 1. So player 2
eliminates (in his head) the action D for player 1. This is the second stage of the IESD process and
leaves us with the game
L M
U 1,0 1,2
We now reach the final stage of the IESD process, where we eliminate L for player 2. Hence this
game is dominance solvable, and IESD actions leads to the outcome (U,M).
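The elimination procedure is mechanical enough to automate. Below is a short Python sketch (ours, for finite two-player games given as payoff dictionaries) that repeatedly deletes strictly dominated actions and reproduces the outcome (U, M):

    def iesd(actions1, actions2, u1, u2):
        A1, A2 = list(actions1), list(actions2)
        changed = True
        while changed:
            changed = False
            # ai is strictly dominated by bi if bi does strictly better
            # against every surviving action of the opponent.
            for a in A1[:]:
                if any(all(u1[(b, c)] > u1[(a, c)] for c in A2)
                       for b in A1 if b != a):
                    A1.remove(a)
                    changed = True
            for a in A2[:]:
                if any(all(u2[(r, b)] > u2[(r, a)] for r in A1)
                       for b in A2 if b != a):
                    A2.remove(a)
                    changed = True
        return A1, A2

    u1 = {("U", "L"): 1, ("U", "M"): 1, ("U", "R"): 0,
          ("D", "L"): 0, ("D", "M"): 0, ("D", "R"): 2}
    u2 = {("U", "L"): 0, ("U", "M"): 2, ("U", "R"): 1,
          ("D", "L"): 3, ("D", "M"): 1, ("D", "R"): 0}
    print(iesd(["U", "D"], ["L", "M", "R"], u1, u2))   # (['U'], ['M'])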
Observe that applying the process of IESD actions in the case of a finite game is technically
easy. In the example above, for instance, the outcome is immediately obtained by eliminating first
R, then D and then L. However, you should keep in mind that the longer this process takes, the
more "he knows that she knows that he knows that ..." sorts of reasoning are used, and hence the
"more rational" we demand the players be. Put differently, for the IESD actions to make
conceptual sense, not only must each player avoid strictly dominated actions, but each player must
also know that her opponents won't take such actions, must know that they know that she won't
do so, and so on. So, this concept is less plausible in complicated games. Here is
an example of a dominance solvable game which requires the players to be, in a certain sense,
“infinitely rational.” You should decide for yourself how reasonable is the IESD actions in this
example.
Example 3.1 (Linear Cournot Model). Consider the linear Cournot model introduced in Example 2.5.
When Q1 + Q2 ≤ a/b, the payoff of firm 1 is u1 (Q1 , Q2 ) = (a − b(Q1 + Q2 ))Q1 − cQ1 , so that

    du1 /dQ1 = (a − c) − 2bQ1 − bQ2 ,

and, no matter what Q2 is, du1 /dQ1 < 0 when Q1 > (a − c)/2b. (This is perhaps a bit too swift; make
sure you understand this step.) Thus any production level Q1 > (a − c)/2b is strictly dominated (by
(a − c)/2b). In the first stage of the IESD actions process, therefore, we eliminate all Qi > (a − c)/2b,
i = 1, 2. Consequently, we have Q1 + Q2 < a/b after one iteration. (As we discuss at the end of this
section, the order of elimination does not matter for the final outcome in the case of IESD actions,
so we can eliminate the strictly dominated actions of the firms simultaneously.) Now, given that
Q2 ≤ (a − c)/2b, we have

    du1 /dQ1 = (a − c) − 2bQ1 − bQ2
             ≥ (a − c) − 2bQ1 − b · (a − c)/2b
             = (a − c)/2 − 2bQ1 ,

so that du1 /dQ1 > 0 when Q1 < (a − c)/4b. Thus, we eliminate all Qi < (a − c)/4b, i = 1, 2. But, given
that Q2 ≥ (a − c)/4b, one can similarly show that du1 /dQ1 < 0 when Q1 > 3(a − c)/8b. Iterating
infinitely many times, then, only the outcome ((a − c)/3b, (a − c)/3b) survives the IESD actions.
(Challenge: prove this.) Hence the linear Cournot model is dominance solvable.
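The shrinking interval of surviving outputs is easy to trace numerically. In the sketch below (ours; the parameter values are only illustrative), each round tightens the bounds [lo, hi] through the best-response map Q ↦ (a − c − bQ)/2b, and both bounds converge to (a − c)/3b:

    a, b, c = 10.0, 1.0, 1.0          # any a > c > 0, b > 0 will do

    def br(q):
        # Interior best response to an opponent producing q (truncated at 0).
        return max(0.0, (a - c - b * q) / (2 * b))

    lo, hi = 0.0, a / b               # initially every output in [0, a/b] survives
    for k in range(30):
        lo, hi = br(hi), br(lo)       # tighten both bounds at once
        if k < 3:
            print(lo, hi)             # 0.0 4.5 / 2.25 4.5 / 2.25 3.375

    print(lo, hi, (a - c) / (3 * b))  # both bounds approach 3.0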
We define the iterated elimination of weakly dominated (IEWD) actions in a way analogous
to the IESD actions. But, as we shall see, this is a somewhat more problematic notion than IESD
actions. To begin with, there is a possible contradiction in the procedure: the argument behind not
using weakly dominated actions is that if there is any uncertainty in a player's mind as to the action
choices of the other players, then a weakly dominated action should not be used. For example, in the
following game
L R
U 2,1 0,2
D 2,3 4,3
player 1 should not use action U if there is a small probability in his mind that player 2 will play R.
Yet, the procedure of eliminating weakly dominated actions may involve deleting actions to which
the player previously assigned a positive probability. In the following game,
L R
U 3,1 2,0
M 4,0 1,1
D 4,4 2,4
we may delete U assuming that player 1 assigns a positive (however small) probability to the event
that player 2 will play action L. Once U is deleted, player 2's action L becomes weakly dom-
inated and hence can itself be deleted, i.e., it is going to be played with zero probability, contradicting
the reason why action U was deleted to begin with.
Nevertheless, IEWD actions is used widely in economic applications of game theory, and we
too will utilize this concept on occasion. Let us illustrate by means of two examples the power
(and perhaps also the potential counter-intuitiveness; you decide for yourself) of the notion of
IEWD actions.
Example 3.2 (Guess-the-Average Game). Consider an n-person strategic game in which each player
picks an integer between 1 and 999. So N = {1, ..., n} and Ai = {1, ..., 999}. Let us write ā for the
mean of the action profile (a1 , ..., an ), that is, ā = (a1 + · · · + an )/n. The winners in this game are
those players whose choice of integer is closest to (2/3)ā.1
• First take about five minutes to decide how you would play this game.
• Observe next that IESD actions does not provide a sharp prediction here; this game is not
dominance solvable.
• Let us now apply IEWD actions. Take any player. This player knows that no matter what
the other players play, the two-thirds of the average ballot cannot exceed 666. But then
any integer larger than 666 is weakly dominated by 666 for this individual (why weakly?).
Since this is true for all players, IEWD actions demands that we eliminate all actions in
{667, ..., 999}. But the argument can be repeated, for every strategy in {445, ..., 666} is now
weakly dominated by 444. Continuing this way (iterating finitely many times), we find that
the only outcome that survives the IEWD actions is (1, ..., 1)!

1 Formally, we may write ui (a1 , ..., an ) = 1 if

    |ai − (2/3)ā| ≤ |a j − (2/3)ā|   for all j = 1, ..., n,

and ui (a1 , ..., an ) = 0 otherwise.
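The chain of upper bounds is easy to compute. A rough Python sketch (ours; it tracks only the integer upper bound and glosses over the boundary cases behind the "why weakly?" question):

    from math import floor

    bound = 999                # no surviving action exceeds this
    rounds = [bound]
    while bound > 1:
        # if everyone plays at most `bound`, two-thirds of the average
        # is at most (2/3) * bound, so larger integers are dominated
        bound = max(1, floor(2 * bound / 3))
        rounds.append(bound)

    print(rounds)   # [999, 666, 444, 296, 197, 131, 87, 58, 38, 25, 16, 10, 6, 4, 2, 1]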
Example 3.3 (Chairman's Paradox). Consider a committee of three persons (named as usual 1,
2 and 3) whose task is to choose an alternative from the choice set {α, β, γ} by means of voting.
The alternative will be chosen on the basis of majority, so that an alternative which gets two votes
wins the election. The rule is such that if there is a tie (that is, if each voter votes for a different
alternative), then the chairman of the committee, who is, say, player 3, will unilaterally decide on the
outcome of the election by declaring the alternative that he best likes as the winner of the election.
So this is not a symmetric game; it appears that the position of player 3 is strategically superior to
that of the other players.
Now assume that the preferences of the players are given as in the following list:

        1    2    3
        α    β    γ
        β    γ    α
        γ    α    β
Here the convention is that any alternative in each column is strictly preferred to the alternatives
that are below it by the corresponding player. For instance, player 1 strictly prefers α to β while she
likes β strictly better than γ. Therefore, given these preferences, if all voters voted sincerely, each
would vote for a different alternative, and in this case, player 3 would exert his additional power to
declare the alternative γ as the winner of the election.
However, there is no reason why all voters should vote truthfully; in principle they would do
so only if this would benefit them. What if they wish to play this voting game strategically? To
see what would happen in this case, let us model the scenario as a game in strategic form where
Ai = {α, β, γ} (an action for each individual is the vote that she is going to cast), and consider the
  If player 3 chooses γ:

            α        β        γ
      α   2,0,1    0,1,2    0,1,2
      β   0,1,2    1,2,0    0,1,2
      γ   0,1,2    0,1,2    0,1,2

  If player 3 chooses α:

            α        β        γ
      α   2,0,1    2,0,1    2,0,1
      β   2,0,1    1,2,0    0,1,2
      γ   2,0,1    0,1,2    0,1,2

  If player 3 chooses β:

            α        β        γ
      α   2,0,1    1,2,0    0,1,2
      β   1,2,0    1,2,0    1,2,0
      γ   0,1,2    1,2,0    0,1,2

(Player 1 chooses the rows and player 2 the columns.)
Here, for instance, u3 (α, β, γ) = 2 since in this case the outcome of the election is γ which is the
most preferred outcome by player 3. (Check that this representation really corresponds to the
scenario described above.)
Our task is now to apply the IEWD actions to this game. Here is one way of doing this:
Eliminate (1) γ for player 1; (2) α and γ for player 2; (3) α and β for player 3; (4) α for player 1.
Hence the IEWD actions leads to the outcome (β, β, γ) which means that the winner of the election
is β. Observe that this outcome contrasts sharply with the outcome in the case of sincere voting.
In fact, with strategic voting, we observe that the worst outcome is elected for player 3 (if you
believe in IEWD actions) who supposedly is a more powerful player than the others; this is why
the present game is sometimes called the chairman’s paradox. (What do you think is the key to
“explain” this paradoxical outcome? What if players did not know the preferences of the others?
What if they didn’t believe that the others were so terribly smart? Do you agree with the prediction
reached through the IEWD actions?)
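Checking these eliminations by hand across the three tables is tedious, so here is a minimal computational sketch, with alternatives encoded as 0, 1, 2 and dominated actions removed in repeated sweeps over the players (one convention among several; as Remark 3.1 below shows, the order can matter for IEWD, though this sweep reproduces the outcome (β, β, γ)):

```python
from itertools import product

# PREFS[i][k] = player i's payoff when alternative k is elected;
# alternatives alpha, beta, gamma are encoded as 0, 1, 2.
PREFS = [(2, 1, 0),   # player 1: alpha > beta  > gamma
         (0, 2, 1),   # player 2: beta  > gamma > alpha
         (1, 0, 2)]   # player 3: gamma > alpha > beta

def winner(votes):
    """Majority rule; a three-way tie is broken in favour of the
    chairman's (player 3's) top alternative, gamma."""
    for alt in range(3):
        if votes.count(alt) >= 2:
            return alt
    return 2

def payoff(i, votes):
    return PREFS[i][winner(votes)]

def iewd(action_sets):
    """Sweep over the players, removing weakly dominated actions,
    until no further elimination is possible."""
    acts = [list(s) for s in action_sets]
    while True:
        removed = False
        for i in range(3):
            others = [acts[j] for j in range(3) if j != i]

            def payoffs(ai):
                return [payoff(i, list(r[:i]) + [ai] + list(r[i:]))
                        for r in product(*others)]

            for a in list(acts[i]):
                if any(b != a
                       and all(x <= y for x, y in zip(payoffs(a), payoffs(b)))
                       and payoffs(a) != payoffs(b)
                       for b in acts[i]):
                    acts[i].remove(a)   # a is weakly dominated
                    removed = True
        if not removed:
            return acts

print(iewd([[0, 1, 2]] * 3))  # [[1], [1], [2]], i.e. (beta, beta, gamma)
```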
Remark 3.1. An important question that we have to deal with before we conclude this section is
this: could the iterated elimination of strictly dominated actions lead to different results if the
eliminations are carried out in different orders? Fortunately, the answer is no. (Can you prove this?)
However, the answer would be yes if we instead used weakly dominated actions in the iterations.
For instance, consider the following 2-person game:
L R
U 3,1 2,0
M 4,0 1,1
D 4,4 2,4
Here if we first eliminate player 1’s action U, then player 2’s action L, and then player 1’s action
M we get the outcome (D,R), while if we first eliminate M (and then R and then U) we get the
outcome (D,L). This shows that the order of elimination matters in the case of IEWD actions.
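The order dependence can be replayed mechanically; the following sketch encodes the bimatrix above and checks, at each step of both orders, that the removed action is weakly dominated at that point:

```python
# Bimatrix of the game above: u[(row, col)] = (u1, u2).
u = {("U", "L"): (3, 1), ("U", "R"): (2, 0),
     ("M", "L"): (4, 0), ("M", "R"): (1, 1),
     ("D", "L"): (4, 4), ("D", "R"): (2, 4)}

def dominates(player, b, a, rows, cols):
    """True if b weakly dominates a for `player`, given the
    opponent's surviving actions."""
    opp = cols if player == 1 else rows
    def pay(x, o):
        return u[(x, o) if player == 1 else (o, x)][player - 1]
    diffs = [pay(b, o) - pay(a, o) for o in opp]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

def eliminate(order):
    """Replay a list of (player, dominated, dominating) steps,
    verifying each elimination is legitimate when it is made."""
    rows, cols = {"U", "M", "D"}, {"L", "R"}
    for player, a, b in order:
        assert dominates(player, b, a, rows, cols)
        (rows if player == 1 else cols).remove(a)
    return rows, cols

print(eliminate([(1, "U", "D"), (2, "L", "R"), (1, "M", "D")]))  # ({'D'}, {'R'})
print(eliminate([(1, "M", "D"), (2, "R", "L"), (1, "U", "D")]))  # ({'D'}, {'L'})
```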
As we have mentioned in our first lecture, one of the assumptions that we will maintain is
that individuals are rational, i.e., they take the best actions to pursue their objectives. This is not
any different from the assumption of rationality, or optimizing behavior, that you must have come
across in your microeconomics classes. In most of microeconomics, individual decision making
boils down to solving the following problem:
max u (x, θ)
x∈X
where x is the choice variable, or possible actions, (such as a consumption bundle) of the individual,
X denotes the set of possible actions available (such as the budget set), θ denotes any parameters
that are outside the control of the individual (such as the price vector and income), and u is the
utility (or payoff) function of the individual.
What makes a situation a strategic game, however, is the fact that what is best for one individual,
in general, depends upon other individuals’ actions. The decision problem of an individual can be
phrased in above terms by treating θ as the choices of other individuals whose actions affect the
subject individual’s payoff. In other words, letting x = ai , X = Ai , and θ = a−i , the decision making
problem of player i in a game becomes
The main difficulty with this problem is the fact that the individual does not, in general, know the
action choices of other players, a−i , whereas in single-person decision problems θ, such as price
and income, are assumed to be known, or determined as an outcome of exogenous chance events.
Therefore, determining the best action for an individual requires a joint analysis of every individual's decision problem.
3.3 Nash Equilibrium
In the previous section we have analyzed situations in which this problem could be circum-
vented, and hence we could analyze the problem by only considering it from the perspective of a
single individual. If, independent of the other players’ actions, the individual in question has an
optimal action, then rationality requires taking that action, and hence we can analyze that individ-
ual’s decision making problem in isolation from that of others. If every individual is in a similar
situation, this leads to a (weakly or strictly) dominant strategy equilibrium. Remember that the only
assumptions that we used to reach dominant strategy equilibrium are the rationality of players (and
the knowledge of their own payoff functions, of course). Unfortunately, many interesting games do not
have a dominant strategy equilibrium and this forces us to increase the rationality requirements for
individuals. The second solution concept that we introduced, i.e., iterated elimination of dominated
strategies, did just that. It required not only the rationality of each individual and the knowledge
of own payoff functions, but also the (common) knowledge of other players’ rationality and payoff
functions. However, in this case we run into other problems: there may be too many outcomes
that survive IESD actions, or different outcomes may survive IEWD actions, depending on the
order of elimination.
In this section we will analyze by far the most commonly used equilibrium concept for strategic
games, i.e., the Nash equilibrium concept, which overcomes some of the problems of the solution
concepts introduced before.2 The presence of interaction among players requires each individual to
form a belief regarding the possible actions of other individuals. Nash equilibrium is based on the
premises that (i) each individual acts rationally given her beliefs about the other players’ actions,
and that (ii) these beliefs are correct. It is the second element which makes this an equilibrium
concept. It is in this sense that we may regard a Nash equilibrium outcome as a steady state of a strategic
interaction. Once every individual is acting in accordance with the Nash equilibrium, no one has
an incentive to unilaterally deviate and take another action. More formally, we have the following
definition:
Definition. A Nash equilibrium of a game G in strategic form is an action profile a∗ = (a∗1 , ..., a∗n ) ∈ A such that

ui (a∗i , a∗−i ) ≥ ui (ai , a∗−i ) for all ai ∈ Ai

holds for each player i. The set of all Nash equilibria of G is denoted N(G).

2 The discovery of the basic idea behind the Nash equilibrium goes back to the 1838 work of Augustin Cournot.
(Cournot's work was translated into English in 1897 as Researches into the Mathematical Principles of the Theory of
Wealth, New York: MacMillan.) The formalization and rigorous analysis of this equilibrium concept was not given until
the seminal 1950 work of the mathematician John Nash. Nash was awarded the Nobel prize in economics in 1994 (along
with John Harsanyi and Reinhard Selten) for his contributions to game theory. For an exciting biography of Nash, we
refer the reader to S. Nasar (1998), A Beautiful Mind, New York: Simon and Schuster.
In a two player game, for example, an action profile (a∗1 , a∗2 ) is a Nash equilibrium if the following two conditions hold:

u1 (a∗1 , a∗2 ) ≥ u1 (a1 , a∗2 ) for all a1 ∈ A1 ,
u2 (a∗1 , a∗2 ) ≥ u2 (a∗1 , a2 ) for all a2 ∈ A2 .
Therefore, we may say that, in a Nash equilibrium, each player's choice of action is a best
response to the actions actually taken by his opponents. This suggests another, and sometimes more
useful, definition of Nash equilibrium, based on the notion of the best response correspondence.3 We
define the best response correspondence of player i in a strategic form game as the correspondence
Bi : A−i ⇒ Ai given by

Bi (a−i ) = {ai ∈ Ai : ui (ai , a−i ) ≥ ui (bi , a−i ) for all bi ∈ Ai }.
(Notice that, for each a−i ∈ A−i , Bi (a−i ) is a set which may or may not be a singleton.) So, for
example, in a 2-person game, if player 2 plays a2 , player 1’s best choice is to play some action in
B1 (a2 ), where

B1 (a2 ) = {a1 ∈ A1 : u1 (a1 , a2 ) ≥ u1 (b1 , a2 ) for all b1 ∈ A1 }.
we have B1 (L) = {U}, B1 (M) = {U,D} and B1 (R) = {D}, while B2 (U) = {M,R} and B2 (D) =
{L}.
3 Mathematical Reminder: Recall that a function f from a set A to a set B assigns to each x ∈ A one and only one
element f (x) in B. By definition, a correspondence f from A to B, on the other hand, assigns to each x ∈ A a subset
of B, and in this case we write f : A ⇒ B. (For instance, f : [0, 1] ⇒ [0, 1] defined as f (x) = {y ∈ [0, 1] : x ≤ y} is
a correspondence; draw the graph of f .) In the special case where a correspondence is single-valued (i.e. f (x) is a
singleton set for each x ∈ A), f can be thought of as a function.
Proposition 3.2. For any 2-person game in strategic form G, we have (a∗1 , a∗2 ) ∈ N(G) if, and only
if
a∗1 ∈ B1 (a∗2 ) and a∗2 ∈ B2 (a∗1 ).
Proof. Exercise.
This proposition suggests a way of computing the Nash equilibria of strategic games. In par-
ticular, when the best response correspondences of the players are single-valued, Proposition 3.2
tells us that all we need to do is to solve two equations in two unknowns to characterize the set of
all Nash equilibria (once we have found B1 and B2 , that is). The following examples will illustrate.
Example 3.4. We have N(BoS) = {(m,m), (o,o)} (and both of these equilibria are strict). Indeed,
in this game, B1 (o) = {o}, B1 (m) = {m}, B2 (o) = {o}, and B2 (m) = {m}. These observations
also show that (m,o) and (o,m) are not equilibrium points of BoS. Similar computations yield
N(CG) = {(l,l), (r,r)} and N(MW) = ∅.
An easy way of finding Nash equilibrium in two-person strategic form games is to utilize the
best response correspondences and the bimatrix representation. You simply have to mark the best
response(s) of each player given the action choice of the other player and any action profile at
which both players are best responding to each other is a Nash equilibrium. In the BoS game, for
example, given player 1 plays m, the best response of player 2 is to play m, which is expressed
by underscoring player 2’s payoff at (m,m), and her best response to o is o, which is expressed by
underscoring her payoff at (o,o).
m o
m 2,1 0,0 .
o 0,0 1,2
The same procedure is applied to player 1 as well. The set of Nash equilibria is then the set of
outcomes at which both players' payoffs are underscored, i.e., (m, m) and (o, o).
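The marking procedure is easy to automate; the following small helper (a sketch, with actions indexed by position) collects every cell at which both payoffs would be underscored:

```python
import numpy as np

def pure_nash(u1, u2):
    """All pure-strategy Nash equilibria of a bimatrix game:
    cells (i, j) such that row i is a best response to column j
    and column j is a best response to row i."""
    u1, u2 = np.asarray(u1), np.asarray(u2)
    return [(i, j)
            for i in range(u1.shape[0]) for j in range(u1.shape[1])
            if u1[i, j] == u1[:, j].max() and u2[i, j] == u2[i, :].max()]

# BoS with actions 0 = m and 1 = o:
print(pure_nash([[2, 0], [0, 1]], [[1, 0], [0, 2]]))  # [(0, 0), (1, 1)]
```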
The Nash equilibrium concept has been motivated in many different ways, mostly on an informal
basis. We will now give a brief discussion of some of these motivations:
Self Enforcing Agreements. Let us assume that two players debate about how they should
play a given 2-person game in strategic form through preplay communication. What sort of an
agreement would they reach? Of course, we cannot give a precise answer to this question before
knowing more about the specifics of the game, but this much we can say: the agreement (whatever
it is) should be “self enforcing” in the sense that no player should have a reason to deviate from
her promise if she believes that the other player will keep his end of the bargain. Put informally, a
Nash equilibrium is an outcome that would correspond to a self enforcing agreement in this sense.
Once it is reached, no individual has an incentive to deviate from it unilaterally.
Social Conventions. Consider a strategic interaction played between two players, where player
1 is randomly picked from a population and player 2 is randomly picked from another population.
For example, the situation could be a bargaining game between a buyer and a seller. Now imagine
that this situation is repeated over time, each iteration being played between two randomly selected
players. If this process settles down to an action profile, that is if time after time the action choices
of players in the role of player 1 and those in the role of player 2 are always the same, then we
may regard this outcome as a convention. Even if players start with arbitrary actions, as long as
they remember how the actions of the previous players fared in the past and choose those actions
that are better, any social convention must correspond to a Nash equilibrium. If an outcome is not
a Nash equilibrium, then at least one of the players is not best responding, and sooner or later a
player in that role will happen to land on a better action which will then be adopted by the players
afterwards. Put differently, an outcome which is not a Nash equilibrium lacks a certain sense of
stability, and thus if a convention were to develop about how to play a given game through time,
we would expect this convention to correspond to a Nash equilibrium of the game.
Focal Points. Focal points are outcomes which are distinguished from others on the basis
of some characteristics. Those characteristics may distinguish an outcome as a result of some
psychological or social process and may even seem trivial, such as the names of the actions. Focal
points may also arise due to the optimality of the actions, and Nash equilibrium is considered focal
on this basis.
Learned Behavior. Consider two players playing the same game repeatedly. Also suppose that
each player simply best responds to the action choice of the other player in the previous interaction.
It is not hard to imagine that over time their play may settle on an outcome. If this happens, then it
has to be a Nash equilibrium outcome. There are, however, two problems with this interpretation:
(1) the play may never settle down, (2) the repeated game is different from the strategic form game
that is played in each period and hence it cannot be used to justify its equilibrium.
So, whichever of the above parables one may want to entertain, if a reasonable outcome of a
game in strategic form exists, it must possess the property of being a Nash equilibrium. In other
words, being a Nash equilibrium is a necessary condition for a reasonable outcome. But notice
that this is a one-way statement; it would not be reasonable to claim that any Nash equilibrium of
a given game corresponds to an outcome that is likely to be observed when the game is actually
played. (More on this shortly.)
We will now introduce two other celebrated strategic form games to further illustrate the Nash
equilibrium concept.
Example 3.5 (Stag Hunt (SH)). Two hungry hunters go to the woods with the aim of catching a
stag, or at least a hare. They can catch a stag only if both remain alert and devote their time and
energy to catching it. Catching a hare is less demanding and does not require the cooperation of
the other hunter. Each hunter prefers half a stag to a hare. Letting S denote the action of going after
the stag, and H the action of catching a hare, we can represent this game by the following bimatrix
S H
S 2,2 0,1 .
H 1,0 1,1
Example 3.6 (Hawk-Dove (HD)). Two animals are fighting over a prey. The prey is worth v to
each player, and the cost of fighting is c1 for the first animal (player 1) and c2 for the second animal
(player 2). If they both act aggressively (hawkish) and get into a fight, they share the prey but
suffer the cost of fighting. If both act peacefully (dovish), then they get to share the prey without
incurring any cost. If one acts dovish and the other hawkish, there is no fight and the latter gets the
whole prey.
(1) Write down the strategic form of this game.
(2) Assume v, c1 , c2 are all non-negative and find the Nash equilibria of this game in each of the
following cases: (a) c1 > v/2, c2 > v/2, (b) c1 > v/2, c2 < v/2, (c) c1 < v/2, c2 < v/2.
Example 3.7 (Cournot Duopoly). We have previously introduced a simple Cournot duopoly model
and analyzed its outcome by applying IESD actions. Let us now try to find its Nash equilibria. We
will first find the best response correspondence of firm 1. Given that firm 2 produces Q2 ∈ [0, a/b],
the best response of firm 1 is found by solving the first order condition

du1 /dQ1 = (a − c) − 2bQ1 − bQ2 = 0,

which yields Q1 = (a − c)/2b − Q2 /2. (The second order condition checks since d²u1 /dQ1² = −2b < 0.) But notice
that this equation yields Q1 < 0 if Q2 > (a − c)/b, while producing a negative quantity is not feasible for
firm 1. Consequently, we have

B1 (Q2 ) =  (a − c − bQ2 )/2b,  if Q2 ≤ (a − c)/b,
            0,                  if Q2 > (a − c)/b,
and, by symmetry,

B2 (Q1 ) =  (a − c − bQ1 )/2b,  if Q1 ≤ (a − c)/b,
            0,                  if Q1 > (a − c)/b.

[Figure 3.1 here: the best response correspondences B1 and B2 in the (Q1 , Q2 ) space, crossing at the Nash equilibrium ((a − c)/3b, (a − c)/3b).]
Observe next that it is impossible that either firm will choose to produce more than (a − c)/b in the
equilibrium (why?). Therefore, by Proposition 3.2, to compute the Nash equilibrium all we need to
do is to solve the following two equations:
Q∗2 = (a − c)/2b − Q∗1 /2 and Q∗1 = (a − c)/2b − Q∗2 /2.
Doing this, we find that the unique Nash equilibrium of this game is

(Q∗1 , Q∗2 ) = ((a − c)/3b, (a − c)/3b).
(See Figure 3.1.) Interestingly, this is precisely the only outcome that survives the IESD actions. (Is
this a strict Nash equilibrium?)
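One way to see this equilibrium as a steady state is to iterate the truncated best responses from an arbitrary starting point; with the linear model the iteration contracts to ((a − c)/3b, (a − c)/3b). A sketch with illustrative parameter values:

```python
# Best-response dynamics for the linear Cournot duopoly with
# illustrative parameters a = 10, b = 1, c = 1 (so (a - c)/3b = 3).
a, b, c = 10.0, 1.0, 1.0

def br(q_other):
    # Truncated best response: (a - c)/2b - q_other/2, floored at 0.
    return max(0.0, (a - c) / (2 * b) - q_other / 2)

q1 = q2 = 0.0
for _ in range(100):      # simultaneous best-response updates
    q1, q2 = br(q2), br(q1)

print(q1, q2)             # both converge to 3.0 = (a - c)/3b
```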
An interesting question to ask at this point is whether, in the Cournot model, it is inefficient for
these firms to produce their Nash equilibrium levels of output. The answer is yes, showing that the
inefficiency of decentralized behavior may surface in more realistic settings than the scenario of
the prisoners’ dilemma suggests. To prove this, let us entertain the possibility that firms 1 and 2
collude (perhaps forming a cartel) and act as a monopolist with the proviso that the profits earned in
this monopoly will be distributed equally among the firms. Given the market demand, the objective
of this monopoly is to choose Q so as to maximize (a − bQ − c)Q, where Q = Q1 + Q2 ∈ [0, 2a/b].
By using calculus, we find that the optimal level of production for this monopoly is Q = (a − c)/2b.
(Since the cost functions of the individual firms are identical, it does not really matter how much of
this production takes place in whose plant.) Consequently,
(profits of the monopolist)/2 = (1/2) (a − c − b(a − c)/2b) ((a − c)/2b) = (a − c)²/8b
while

profits of firm i in the equilibrium = ui (Q∗1 , Q∗2 ) = (a − c)²/9b.
Thus, while both parties could be strictly better off had they formed a cartel, the equilibrium pre-
dicts that this will not take place in actuality. (Do you think this insight generalizes to the n-firm
case?).
Remark 3.2. There is reason to expect that symmetric outcomes will materialize in symmetric
games since in such games all agents are identical to one another. Consequently, symmetric equi-
libria of symmetric games are of particular interest. Formally, we define a symmetric equilibrium of
a symmetric game as a Nash equilibrium of this game in which all players play the same action.
(Note that this concept does not apply to asymmetric games.) For instance, in the Cournot duopoly
game above, (Q∗1 , Q∗2 ) corresponds to a symmetric equilibrium. More generally, if the Nash equilib-
rium of a symmetric game is unique, then this equilibrium must be symmetric. Indeed, suppose that
G is a symmetric 2-person game in strategic form with a unique equilibrium and (a∗1 , a∗2 ) ∈ N(G).
But then using the symmetry of G one may show easily that (a∗2 , a∗1 ) is a Nash equilibrium of G as
well. Since there is only one equilibrium of G, we must then have a∗1 = a∗2 .
Nash equilibrium requires only that no individual has a strict incentive to deviate from it. In particular,
it is possible that at a Nash equilibrium a player is indifferent between her equilibrium action
and some other action, given the other players' actions. If we do not allow this to happen, we
arrive at the notion of a strict Nash equilibrium. More formally, an action profile a∗ is a strict Nash
equilibrium if
ui (a∗i , a∗−i ) > ui (ai , a∗−i ) for all ai ∈ Ai such that ai ̸= a∗i
For instance, in the following game, the unique Nash equilibrium (M,R) is not strict:

     L        R
T   −1, 0    0, −1
M   0, 1     0, 1
B   1, −1    −1, 0
3.4 Nash Equilibrium and Dominant/Dominated Actions

Now that we have seen all the minor equilibrium concepts for games in strategic form, we
should analyze the relations between these concepts. We turn to such an analysis in this section.
It follows readily from the definitions that every strictly dominant strategy equilibrium is a
weakly dominant strategy equilibrium, and every weakly dominant strategy equilibrium is a Nash
equilibrium. Thus,
Ds (G) ⊆ Dw (G) ⊆ N(G)
for all strategic games G. For instance, (C,C) is a Nash equilibrium for Prisoners’ Dilemma; in fact
this is the only Nash equilibrium of this game (do you agree?).
Exercise. Show that if all players have a strictly dominant strategy in a strategic game, then
this game must have a unique Nash equilibrium.
However, there may exist a Nash equilibrium of a game which is not a weakly or strictly
dominant strategy equilibrium; the BoS provides an example to this effect. What is more interesting
is that a player may play a weakly dominated action in Nash equilibrium. Here is an example:
α β
α 0,0 1,0 (3.1)
β 0,1 3,3
Here (α, α) is a Nash equilibrium, but playing β weakly dominates playing α for both players. This
observation can be stated in an alternative way:
Proposition 3.3. A Nash equilibrium need not survive the IEWD actions.
Yet the following result shows that if IEWD actions somehow yields a unique outcome, then
this must be a Nash equilibrium in finite strategic games.
Proposition 3.4. Let G be a game in strategic form with finite action spaces. If the iterated elimi-
nation of weakly dominated actions results in a unique outcome, then this outcome must be a Nash
equilibrium of G.4
Proof. For simplicity, we provide the proof for the 2-person case, but it is possible to generalize
the argument in a straightforward way. Let the only actions that survive the IEWD actions be a∗1
and a∗2 , but to derive a contradiction, suppose that (a∗1 , a∗2 ) ∈
/ N(G). Then, one of the players must
not be best-responding to the other; say this player is the first one. Formally, we have

u1 (a′1 , a∗2 ) > u1 (a∗1 , a∗2 ) for some a′1 ∈ A1 .        (3.2)

But a′1 must have been weakly dominated by some other action a′′1 ∈ A1 at some stage of the
elimination process, so

u1 (a′1 , a∗2 ) ≤ u1 (a′′1 , a∗2 ).
Now if a′′1 = a∗1 , then we contradict (3.2). Otherwise, we continue as we did after (3.2) to obtain
an action a′′′1 ∉ {a′1 , a′′1 } such that u1 (a′1 , a∗2 ) ≤ u1 (a′′′1 , a∗2 ). If a′′′1 = a∗1 we are done again;
otherwise we continue this way and eventually reach the desired contradiction since A1 is a finite
set by hypothesis.
However, even if IEWD actions results in a unique outcome, there may be a Nash equilibrium
which does not survive IEWD actions (the game given by (3.1) illustrates this point). Furthermore,
it is important that IEWD actions leads to a unique outcome for the proposition to hold. For
example, in the BoS game all outcomes survive IEWD actions, yet the only Nash equilibrium
outcomes are (m,m) and (o,o). One can also, by trivially modifying the proof given above, show that
if IESD actions results in a unique outcome, then that outcome must be a Nash equilibrium. In other
words, any finite and dominance solvable game has a unique Nash equilibrium. But how about the
converse of this? Is it the case that a Nash equilibrium always survives the IESD actions? In contrast
to the case with IEWD actions (recall Proposition 3.3), the answer is given in the affirmative by our
next result.
Proposition 3.5. Let G be a 2-person game in strategic form. If (a∗1 , a∗2 ) ∈ N(G), then a∗1 and a∗2
must survive the iterated elimination of strictly dominated actions.
4 So, for instance, (1, ..., 1) must be a Nash equilibrium of the guess-the-average game.
Proof. We again give the proof in the 2-person case for simplicity. To obtain a contradiction,
suppose that (a∗1 , a∗2 ) ∈ N(G), but either a∗1 or a∗2 is eliminated at some iteration. Without loss of
generality, assume that a∗1 is eliminated before a∗2 . Then, there must exist an action a′1 ∈ A1 (not yet
eliminated at the iteration at which a∗1 is eliminated) such that

u1 (a′1 , a2 ) > u1 (a∗1 , a2 ) for all a2 ∈ A2 that survive that iteration.

In particular, since a∗2 survives to the iteration at which a∗1 is eliminated, this gives
u1 (a′1 , a∗2 ) > u1 (a∗1 , a∗2 ), contradicting that (a∗1 , a∗2 ) is a Nash equilibrium.
3.5 Difficulties with Nash Equilibrium

Given that the Nash equilibrium is the most widely used equilibrium concept in economic
applications, it is important to understand its limitations. We discuss some of these as the final
order of business in this chapter.
3.5.1 A Nash equilibrium may involve a weakly dominated action by some players.
We observed this possibility in Proposition 3.3. Ask yourself if (α, α) in the game (3.1) is a
sensible outcome at all. You may say that if player 1 is “certain” that player 2 will play α and vice
versa, then it is. But if either one of the players assigns any probability in her mind to her opponent
playing β, the expected utility maximizing (rational) action would be to play β, no matter how
small this probability is. Since it is rare that all players are “certain” about the intended plays
of their opponents (even if pre-play negotiation is possible), a Nash equilibrium in weakly dominated
actions appears unreasonable. This leads us to refine the Nash equilibrium in the following manner.
Definition. An undominated Nash equilibrium of a game G in strategic form is a Nash equilibrium
a∗ ∈ N(G) such that no a∗i is a weakly dominated action in G. We denote the set of all undominated
Nash equilibria of G by Nundom (G).

Example. If G denotes the game given in (3.1), then Nundom (G) = {(β, β)}. On the other hand,
Nundom (G) = N(G) where G = PD, BoS, CG. The same equality holds for the linear Cournot
model. (Question: Are all strict Nash equilibria of a game in strategic form undominated?)
Exercise. Compute the set of all Nash and undominated Nash equilibria of the chairman’s
paradox game.
Preplay Negotiation. Consider the CG game and allow the players to communicate (cheap
talk) prior to the game being played. What do you think will be the outcome then? Most people
answer this question as (r,r). The reason is that agreement on the outcome (r,r) seems in the nature
of things, and what is more, there is no reason why players should not play r once this agreement
is reached (i.e. such an agreement is self-enforcing). Thus, pure coordination games like CG can
often be “solved” via preplay negotiation. (More on this shortly.)
But how about BoS? It is not at all obvious which agreement would surface in the preplay
communication in this game, and hence, even if an agreement on either (m,m) or (o,o) would be
self-enforcing, preplay negotiation does not help us “solve” the BoS. Maybe we should learn to
live with the fact that some games do not admit a natural “solution.”
Focal Points. It has been argued by many game theorists that the stories of some games isolate
certain Nash equilibria as “focal” in that certain details that are not captured by the formalism of a
game in strategic form may actually entail a clear path of play. The following will illustrate.
Example. (A Nash Demand Game) Suppose that two individuals (1 and 2) face the problem
of dividing $100 among themselves. They decide to use the following method in doing this: each
of them will simultaneously declare how much of the $100 (s)he wishes to have, and if their total
demand exceeds $100 no one will get anything (the money will then go to a charity) while they
will receive their demands otherwise (anything left on the table will go to a charity).
We may formulate this scenario as a 2-person game in strategic form where Ai = [0, 100] and
ui (x1 , x2 ) =  xi ,  if x1 + x2 ≤ 100,
                 0,    otherwise.
Notice that we are assuming here that money is utility; an assumption which is often useful.
(Caveat: But this is not an unexceptionable assumption - what if the bargaining was between a
father and his 5 year old daughter or between two individuals who hate each other?).
• Well, there are just too many equilibria here; any division of $100 is an equilibrium! Thus,
for this game, the predictions made on the basis of the Nash equilibrium are bound to be very
weak. Yet, when people actually play this game in experiments, the outcome (50, 50) is observed
to surface an overwhelming proportion of the time. So, in this example, the 50-50 split appears
to be a focal point, suggesting that equity considerations (which are totally missed by the formalism
of the game theory we have developed so far) may play a role in which Nash equilibrium gets
selected in actual play.
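The multiplicity of equilibria is easy to confirm by brute force on an integer grid of demands; a small sketch:

```python
# Pure equilibria of the Nash demand game on an integer grid:
# exactly the splits with x1 + x2 = 100, plus the profile where
# both demand everything, survive.
def u(x_own, x_other):
    return x_own if x_own + x_other <= 100 else 0

D = range(101)
eqs = [(x1, x2) for x1 in D for x2 in D
       if u(x1, x2) >= max(u(d, x2) for d in D)
       and u(x2, x1) >= max(u(d, x1) for d in D)]
print(len(eqs))   # 102 profiles: (0,100), ..., (100,0), and (100,100)
```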
Unfortunately, the notion of a focal point is an elusive one. It is difficult to come up with a
theory for it since it is not clear what general principle underlies it. The above example
provides, after all, only a single instance of it; one can think of other scenarios with a focal equi-
librium.5 It is our hope that experimental game theory (which we shall talk about further later on)
will shed light on the matter in the future.
5 Consider, for instance, the following game: each of two players independently chooses a list of letters from
A,B,C,D,E,F,G,H, with the proviso that player 1's list must contain A and player 2's list must contain H. If their
lists do not overlap, then they both win; they lose otherwise. (How would you play this game in the place of player 1?
Player 2?) What happens very often when the game is played in experiments is that people in the position of player
1 choose {A,B,C,D} and people in the position of player 2 choose {E,F,G,H}; what is going on here, how do people
coordinate so well? For more examples of this sort and a thorough discussion of focal points, an excellent reference is
T. Schelling (1960), The Strategy of Conflict, London: Oxford University Press.
It is hard to believe that the players of the CG would settle on the equilibrium (l,l) (say,
through communication that takes place prior to play), for at the Nash equilibrium outcome (r,r)
they are both strictly better off. This suggests the following refinement of the Nash equilibrium.
Definition. A Pareto optimal Nash equilibrium of a game G in strategic form is any Nash
equilibrium a∗ = (a∗1 , ..., a∗n ) such that there does not exist another equilibrium b∗ = (b∗1 , ..., b∗n ) ∈
N(G) with
ui (a∗ ) < ui (b∗ ) for each i ∈ N.
We denote the set of all Pareto optimal Nash equilibrium of G by NPO (G).
A Pareto optimal Nash equilibrium outcome in a 2-person game in strategic form is particularly
appealing (when preplay communication is allowed), for once such an outcome has been somehow
realized, the players would have no incentive to deviate from it, either unilaterally (as
the Nash property requires) or jointly (as Pareto optimality requires). As you would expect, this
refinement of Nash equilibrium delivers us what we wish to find in the CG: NPO (CG) = {(r,r)}. As
you might expect, however, the Pareto optimal Nash equilibrium concept does not help us “solve”
the BoS, for we have NPO (BoS) = N(BoS).
The fact that Pareto optimal Nash equilibrium refines the Nash equilibrium points to the fact
that the latter is not immune to coalitional deviations. This is because the stability achieved by
the Nash equilibrium is by means of avoiding only the unilateral deviations of each individual.
Put differently, the Nash equilibrium does not ensure that no coalition of the players will find
it beneficial to defect. The Pareto optimal Nash equilibrium somewhat corrects for this through
avoiding defection of the entire group of the players (the so-called grand coalition) in addition to
that of the individuals (the singleton coalitions). Unfortunately, this refinement does not solve the
problem entirely. Here is a game in which the Pareto optimal Nash equilibrium does not refine the
Nash equilibrium in a way that deals with coalitional considerations in a satisfactory way.
Example. In the following game G player 1 chooses rows, player 2 chooses columns and
player 3 chooses tables.
if player 3 chooses U:
      α         β
a   1,1,−5    −5,−5,0
b   −5,−5,0   0,2,7

if player 3 chooses D:
      α         β
a   1,1,6     −5,−5,0
b   −5,−5,0   −2,−2,0
(For instance, we have N = {1, 2, 3}, A3 = {U, D} and u3 (a, β,D) = 0.) In this game we have

N(G) = NPO (G) = {(a, α,D), (b, β,U)},

but coalitional considerations indicate that the equilibrium (a, α,D) is rather unstable, provided that
players can communicate prior to play. Indeed, it is quite conceivable in this case that players 2
and 3 would form a coalition and deviate from (a, α,D) equilibrium by publicly agreeing to take
actions β and U, respectively. Since this is clearly a self-enforcing agreement, it casts doubt on the
claim that (a, α,D) is a reasonable prediction for this game.
You probably see where the above example is leading. It suggests that there is merit in refin-
ing even the Pareto optimal Nash equilibrium by isolating those Nash equilibria that are immune
to all possible coalitional deviations. To introduce this idea formally, we need a final bit of
Notation. Let A = ×i∈N Ai be the outcome space of an n-person game in strategic form, and
let (a1 , ..., an ) ∈ A. For each K ⊆ N, we let aK denote the vector (ai )i∈K ∈ ×i∈K Ai , and a−K the
vector (ai )i∈N\K ∈ ×i∈N\K Ai . By (aK , a−K ), we then mean the outcome (a1 , ..., an ). Clearly, aK is
the profile of actions taken by all players who belong to the coalition K, and we denote the set
of all such profiles by AK (that is, AK = ×i∈K Ai by definition). Similarly, a−K is the profile of
actions taken by all players who do not belong to K, and A−K is a shorthand notation for the set
A−K = ×i∈N\K Ai .
Definition. A strong Nash equilibrium of a game G in strategic form is an action profile a∗ ∈ A
such that there exist no coalition K ⊆ N and profile aK ∈ AK with ui (aK , a∗−K ) > ui (a∗ ) for each
i ∈ K. We denote the set of all strong Nash equilibria of G by NS (G).6

While its formal definition is a bit of a mouthful, all that the strong Nash equilibrium concept does
is to choose those outcomes at which no coalition can find it in the interest of each of its members
to deviate. Clearly, we have
NS (G) ⊆ NPO (G) ⊆ N(G)
for any game G in strategic form. Since, for 2-person games, the notions of Pareto optimal and
strong Nash equilibrium coincide (why?), the only strong Nash equilibrium of the CG is (r,r). On
6 The notion of the strong Nash equilibrium was first introduced by the mathematician and economist Robert Aumann.
the other hand, in the 3-person game discussed above, we have NS (G) = {(b, β,U)} as is desired
(verify!).
Unfortunately, while the notion of the strong Nash equilibrium solves some of our problems, it
is itself not free of difficulties. In particular, in many interesting games no strong Nash equilibrium
exists, for it is simply too demanding to disallow for all coalitional deviations. What we need
instead is a theory of coalition formation so that we can look for the Nash equilibria that are immune
to deviations by those coalitions that are likely to form. At present, however, there does not exist
such a theory that is commonly used in game theory; the issue awaits much further research.7
7 If you are interested in coalitional refinements of the Nash equilibrium, a good place to start is the highly readable
paper by D. Bernheim, B. Peleg and M. Whinston (1987), “Coalition-proof Nash equilibria I: Concepts,” Journal of
Economic Theory, 42, pp. 1-12.
Chapter 4

Strategic Form Games: Applications
In this section, we shall consider several economic scenarios which are modeled well by means
of strategic games. We shall also examine the predictions that game theory provides in such sce-
narios by using some of the equilibrium concepts that we have studied so far. One major objective
of this section is actually to establish a solid understanding of the notion of Nash equilibrium, un-
doubtedly the most commonly used equilibrium concept in game theory. We contend that the best
way of understanding the pros and cons of Nash equilibrium is seeing this concept in action. For
this reason we shall consider below quite a number of examples. Most of these examples are the
toy versions of more general economic models and we shall return to some of them in later chapters
when we are better equipped to cover more realistic scenarios.
Auctions
Many economic transactions are conducted through auctions. Governments sell treasury bills,
foreign exchange, mineral rights, and more recently airwave spectrum rights via auctions. Art
work, antiques, cars, and houses are also sold by auctions. Auction theory has also been applied to
areas as diverse as queues, wars of attrition, and lobbying contests.1
There are four commonly used and studied forms of auctions: the ascending-bid auction (also
called English auction), the descending-bid auction (also called Dutch auction), the first-price
sealed bid auction, and the second-price sealed bid auction (also known as Vickrey auction2 ). In
the ascending-bid auction, the price is raised until only one bidder remains, and that bidder wins
the object at the final price. In the descending-bid auction, the auctioneer starts at a very high price
and lowers it continuously until someone accepts the currently announced price. That bidder
1 For a good introductory survey of auction theory, see Paul Klemperer (1999), “Auction Theory: A Guide to the
Literature,” Journal of Economic Surveys, 13, pp. 227-286.
wins the object at that price. In the first-price sealed bid auction each bidder submits her bid in a
sealed envelope without seeing others’ bids, and the object is sold to the highest bidder at her bid.
The second-price sealed bid auction works the same way except that the winner pays the second
highest bid.
In this section we will analyze the last two forms of auctions, not only because they are simpler
to analyze but also because under the assumptions we will work with in this section the first-price
sealed bid auction is strategically equivalent to descending bid auction and the second-price sealed
bid auction is strategically equivalent to ascending bid auction.
For simplicity we will assume there are only two individuals, players 1 and 2, who are compet-
ing in an auction for a valuable object. While this may require a stretch of imagination, it is com-
monly known that the value of the object to player i is vi dollars, i = 1, 2, where v1 > v2 > 0.
(What we mean by this is that player i is indifferent between buying the object at price vi and not
buying it.) The outcome of the auction, of course, depends on the rules of the auctioning procedure.
In fact, identifying the precise nature of the outcomes in a setting like this (and in similar scenarios)
under various procedures is the subject matter of a very lively subfield of game theory, namely
auction theory. In this section, our aim is to provide an elementary introduction to this topic. Let
us then begin with analyzing this game theoretic scenario first under the most common auctioning
procedure.
First-price sealed bid auction. Here each player submits a bid, the object goes to the higher
bidder (to player 1 in case of a tie3 ), and the winner pays her own bid. We may thus model the
situation as a 2-person game in strategic form G in which Ai = R+ and

u1 (b1 , b2 ) =  v1 − b1 ,  if b1 ≥ b2 ,
                 0,         otherwise,

and

u2 (b1 , b2 ) =  v2 − b2 ,  if b2 > b1 ,
                 0,         otherwise,

for all (b1 , b2 ) ∈ R2+ . (Here bi stands for the bid of player i, i = 1, 2.)
We now wish to identify the set of Nash equilibria of G. (In case you are wondering why we are
3 There are other tie-breaking methods, such as randomly selecting a winner (by means of a coin toss, say). Our choice
of the tie-breaking rule is useful in that it leads to a simple analysis. The reader should not find it difficult to modify the
results reported below by using other tie-breaking rules.
not checking for dominant strategy equilibrium, note that the following analysis will demonstrate
that Ds (G) = Dw (G) = ∅.) Rather than computing the best response correspondences of the players,
we adopt here instead a direct approach towards this goal. Let us try to find what properties a Nash
equilibrium has to satisfy. We first claim that
(1) In any Nash equilibrium player 1 (the individual who values the object the most) wins the
object.
Proof: Let (b∗1 , b∗2 ) be a Nash equilibrium, but for a contradiction, suppose player 1 does not win
the object. This implies that b∗1 < b∗2 and player 1’s payoff in equilibrium is zero, i.e. u1 (b∗1 , b∗2 ) = 0.
Now if b∗2 ≤ v2 , then b∗2 < v1 (since v2 < v1 ), and hence bidding, say b∗2 , is a strictly better response
for player 1 when player 2 is bidding b∗2 . Therefore, bidding a strictly smaller amount than b∗2
cannot be a best response for player 1. If, on the other hand, b∗2 > v2 , then u2 (b∗1 , b∗2 ) < 0 so that
bidding anything in the interval [0, b∗1 ] is a profitable deviation for player 2. In either case, then, we
obtain a contradiction to the hypothesis that (b∗1 , b∗2 ) is an equilibrium. Therefore, we conclude that
in any equilibrium (b∗1 , b∗2 ) of G player 1 obtains the object, that is, b∗1 ≥ b∗2 .
Secondly,
(2) b∗1 > b∗2 cannot hold in equilibrium, for in this case player 1 would deviate by bidding, say,
b∗2 and increase her payoff from v1 − b∗1 to v1 − b∗2 . Together with our finding that b∗1 ≥ b∗2 , this
implies that b∗1 = b∗2 must hold in equilibrium.
Thirdly,
(3) Neither b∗1 < v2 nor b∗1 > v1 can hold (player 2 would have a profitable deviation in the first
case, and player 1 in the second case).
So, any Nash equilibrium (b∗1 , b∗2 ) of this game must satisfy
v2 ≤ b∗1 = b∗2 ≤ v1 .
Is any pair (b∗1 , b∗2 ) that satisfies these inequalities an equilibrium? Yes. The inequality v2 ≤ b∗1
guarantees that player 2 does not wish to win the object when player 1 bids b∗1 , so his action is
optimal. The inequality b∗2 ≤ v1 , on the other hand, guarantees that player 1 is also best responding.
We thus conclude that
N(G) = {(b1 , b2 ) : v2 ≤ b1 = b2 ≤ v1 }
Exercise. Verify the above conclusion by means of computing the best response correspon-
dences of the players 1 and 2, and plotting their graph in the (b1 , b2 ) space.
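As a numerical cross-check (a discretized sketch with illustrative valuations, not a substitute for the exercise above), one can scan a bid grid for mutual best responses:

```python
# A numeric sanity check of N(G) on a grid of bids, with illustrative
# valuations v1 = 10, v2 = 6 (ties resolved in favor of player 1).
v1, v2 = 10.0, 6.0
bids = [0.5 * k for k in range(25)]          # 0.0, 0.5, ..., 12.0

def u(i, b1, b2):
    if i == 1:
        return v1 - b1 if b1 >= b2 else 0.0
    return v2 - b2 if b2 > b1 else 0.0

def is_nash(b1, b2):
    return (u(1, b1, b2) >= max(u(1, b, b2) for b in bids) and
            u(2, b1, b2) >= max(u(2, b1, b) for b in bids))

print([(b1, b2) for b1 in bids for b2 in bids if is_nash(b1, b2)])
# Every pair with b1 == b2 and v2 <= b1 <= v1 shows up; the grid also
# admits the borderline pair (5.5, 5.5), because undercutting v2 by
# less than a full grid step is impossible.
```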
While N(G) is rather a large set and hence does not lead us to a sharp prediction, refining this
set by eliminating the weakly dominated actions solves this problem. Indeed, it is easily verified
that bidding anything strictly higher than v2 is a weakly dominated action for player 2. To see this,
suppose player 2 bids b′2 which is strictly higher than v2 . Now, if player 1's bid b1 is greater than
or equal to b′2 , then player 1 wins the object, and player 2's payoffs to bidding b′2 and to bidding her
valuation v2 are both zero. If, however, player 1's bid is strictly smaller than b′2 but greater than or
equal to v2 , then player 2 wins by bidding b′2 but obtains a negative payoff, since she pays more than
her valuation; the payoff to bidding v2 , on the other hand, is zero. Similarly, bidding v2 is strictly
better than bidding b′2 if player 1's bid is strictly smaller than v2 . The following table summarizes
this discussion:

                         b1 ≥ b′2     v2 ≤ b1 < b′2     b1 < v2
payoff to bidding b′2      0          v2 − b′2 < 0      v2 − b′2 < 0
payoff to bidding v2       0          0                 0

(Does bidding v2 weakly dominate bids less than v2 as well?)
Now, there is an intriguing normative problem with this equilibrium: the first player is not
bidding his true valuation. It is often argued that it would be desirable to design an auctioning
method in which all players are induced to bid their true valuations in equilibrium. But is such a
thing possible? This question was answered in the affirmative by the economist William Vickrey,
who showed that truth-telling can be established even as a dominant action by modifying the
rules of the auction suitably. Let us carefully examine Vickrey's modification.
Second-price sealed bid auction. The rules are as before, except that the winner now pays the
loser's bid. That is, we consider the game G′ in which Ai = R+ and

u1 (b1 , b2 ) =  v1 − b2 ,  if b1 ≥ b2 ,
                 0,         otherwise,

and

u2 (b1 , b2 ) =  v2 − b1 ,  if b2 > b1 ,
                 0,         otherwise,

for all (b1 , b2 ) ∈ R2+ . (Contrast G′ with the game G we studied above.)
We now claim that Dw (G′ ) = {(v1 , v2 )}. To see that bidding b1 = v1 is a dominant action for
player 1, take any bid b2 of player 2. If b2 ≤ v1 , bidding v1 wins the object and yields the payoff
v1 − b2 ≥ 0; no other bid can do better, for any winning bid yields the same payoff v1 − b2 , while
losing yields 0. If b2 > v1 , bidding v1 loses and yields 0, while any winning bid yields v1 − b2 < 0.
Hence bidding v1 is never worse than any other bid, and you should check that, against some bids
of player 2, it does strictly better than any given alternative bid. A similar argument shows that
bidding v2 is a weakly dominant action for player 2.
Exercise. Generalize the above analysis by considering n ≥ 2 many individuals assuming that
the value of the object to player i is vi dollars, i = 1, ..., n, where v1 > · · · > vn , that the object is
given to the highest bidder with the smallest index in both the first and second-price auctions, and
that the winner pays the second highest bid in the second-price auction.
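The dominance of truthful bidding is easy to verify by brute force on a small grid of bids; the valuation below is illustrative:

```python
# A brute-force check, on an integer bid grid, that truthful bidding
# is weakly dominant for player 1 in the second-price auction
# (ties resolved in favor of player 1).
v1 = 7
grid = range(0, 11)

def u1(b1, b2):
    return v1 - b2 if b1 >= b2 else 0

assert all(u1(v1, b2) >= u1(b1, b2) for b1 in grid for b2 in grid)
print("bidding v1 is never worse than any alternative, against any b2")
```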
By an ingenious modification of the first-price auction, therefore, Vickrey was able to guarantee
the truthful revelation of the preferences of the agents in dominant strategy equilibrium. This
result shows that, by designing the rules of interaction carefully, one may force the individuals to
coordinate on normatively appealing outcomes, and this even without knowing the true valuations
of the individuals! Vickrey's technique provides a foundation for the theory of implementation,
which has important applications in public economics, where one frequently needs to put on the
mask of a social engineer. We shall talk about some of these applications later on.
Buyer-Seller Games
A seller, call him player s, is in possession of an object that is worth vs dollars to him (that is,
player s is indifferent between receiving vs dollars for the object and keeping the object). The value
of this object is vb dollars to a potential buyer, player b. We assume in what follows that
vb > vs > 0.
So, since the value of the object is higher for the buyer than it is for the seller, an efficient state of
affairs demands that trade takes place. But what is the price that player b will pay to player s? The
buyer wants to pay only vs (well, she wants to pay nothing, but she knows that the seller will not
sell the object at a price strictly less than vs ) while the seller wants to charge vb . The actual price
of the object will thus be determined through the bargaining of the players. Different bargaining
scenarios would presumably lead to different equilibrium prices. To demonstrate this, we shall
consider here two such scenarios.4
In the first scenario, the buyer and the seller simultaneously announce the prices pb and ps ,
respectively, both in [vs , vb ]; if pb > ps , the buyer receives the object at the price pb that she
announced, and otherwise no trade takes place. The payoff functions are thus

ub (pb , ps ) =  vb − pb ,  if pb > ps ,
                 0,         otherwise,

and

us (pb , ps ) =  pb − vs ,  if pb > ps ,
                 0,         otherwise.
Exercise. Consider the game described above. Show that the best response correspondence of the seller is given by
4 It is very likely that these scenarios will strike you as unrealistic. The objective of these examples is, however, not
achieving a satisfactory level of realism, but rather to illustrate the use of Nash equilibrium in certain simple buyer-seller
games. In later chapters, we will return to this setting and consider much more realistic bargaining scenarios that involve
sequential offers and counteroffers by the players.
Bs (pb ) =  [vs , pb ],  if pb ∈ (vs , vb ],
            [vs , vb ],  if pb = vs ,

and deduce from this that the only equilibrium of the game is (p∗b , p∗s ) = (vs , vb ).
Suppose now that the bargaining procedure is modified so that trade takes place, again at the
price pb , whenever pb ≥ ps . Recomputing the best response correspondences in this game, one
finds that (p∗b , p∗s ) ∈ Bb (p∗s ) × Bs (p∗b ) holds if, and only if, either (p∗b , p∗s ) =
(vs , vb ) (the no-trade equilibrium) or p∗b = p∗s ∈ [vs , vb ] (see Figure 2).
Therefore, with a minor modification of the bargaining procedure, one is able to generate many
equilibria in which trade occurs. (This is an important observation, especially for the seller, for,
in many instances, it is the seller who designs the bargaining procedure.) However, the prediction
of the Nash equilibrium in the resulting game is less than satisfactory due to the large multiplic-
ity of equilibria. (Check if undominated and/or Pareto optimal Nash equilibria provide sharper
predictions here.)
Consider now a price-setting (Bertrand) version of the linear duopoly model,5 in which each
firm i sets a price Pi ≥ 0 and earns the profit

ui (P1 , P2 ) = (Pi − c)Qi (P1 , P2 ),

where Qi (P1 , P2 ) denotes the output sold by firm i at the price profile (P1 , P2 ). If we assume that
there is no qualitative difference between the products of the two firms, it would be natural to
assume that the consumers always buy the cheaper good. In case both firms charge the same price,
we assume that firms 1 and 2 share the market equally. These assumptions entail that

Qi (P1 , P2 ) =  a/b − Pi /b,         if Pi < Pj ,
                 (1/2)(a/b − Pi /b),  if Pi = Pj ,
                 0,                   if Pi > Pj ,
where j ̸= i = 1, 2, and complete the formulation of the model at hand as a 2-person game in
strategic form.
An immediate question to ask is if our prediction (based on the Nash equilibrium) about the
market outcome would be different in this model than in the linear Cournot duopoly model. The
5 The argument was first given by the French mathematician Joseph Bertrand in 1883 as a critique of the Cournot
model.
answer is easily seen to be yes. To see this, recall that in the Nash equilibrium of the linear Cournot
duopoly model both firms charge the same price, namely

P1 = P2 = a − b (2(a − c)/3b) = (a + 2c)/3,

which is strictly greater than the unit cost c.
Thus, the Cournot prices cannot constitute an equilibrium for the linear Bertrand model. (How
did we conclude this, really?) The problem is that the tie-breaking rule of the Bertrand duopoly
introduces a discontinuity to the model allowing firms to achieve relatively large gains through
small alterations of their actions.6
What then is the equilibrium? The analysis outlined in the previous paragraph actually brings
us quite close to answering this question. First observe that neither firm would ever charge a price
below c as this would yield negative profits (which can always be avoided by charging exactly c
dollars for the unit product). Thus, if the price profile (P1∗ , P2∗ ) is a Nash equilibrium, we must
have P1∗ , P2∗ ≥ c. Is P1∗ > P2∗ > c possible? No, for in this case firm 1 would be making zero profits,
and thus it would be better for it to charge, say, P2∗ , which would ensure positive profits given that firm
2's price is P2∗ . How about P1∗ = P2∗ > c? This is also impossible, because in this case either firm
can unilaterally increase its profits by undercutting the other firm (just as in the discussion above)
contradicting that (P1∗ , P2∗ ) is a Nash equilibrium. By symmetry, P2∗ ≥ P1∗ > c is also impossible, and
hence we conclude that at least one firm must be charging precisely its unit cost c in the equilibrium.
Can we have P1∗ > P2∗ = c then? No, for in this case firm 2 would not be best responding; it can
increase its profits by charging, say, P1∗ /2 + c/2. Similarly, P2∗ > P1∗ = c is not possible. The only
candidate for equilibrium is thus (P1∗ , P2∗ ) = (c, c), and this is indeed an equilibrium as you can
easily check: in the Nash equilibrium of the linear Bertrand duopoly, all firms price their products
6 Notice that this is the third time we are observing that the tie-breaking rule is playing an important role with regard
to the nature of equilibrium. This is quite typical in many interesting strategic games, and hence, it is always a good idea
to inquire into the suitability of a specific tie-breaking rule in such models.
at unit cost.
This is a surprising result since it envisages that all firms operate with zero profits in the equi-
librium. In fact, the equilibrium outcome here is nothing but the competitive equilibrium outcome
which is justified in microeconomics only by hypothesizing a very large number of firms who act
as price takers (not setters) in the industry. Here, however, we predict precisely the same outcome
in equilibrium with only two price setting firms!
Remark. The major culprit behind the above finding is the fundamental discontinuity that
the Bertrand game possesses. Indeed, as noted earlier, it is possible in this game to alter one’s
action marginally (infinitesimally) and increase the associated profits significantly, given the other’s
action. Such games are called discontinuous games, and often do not possess a Nash equilibrium.
For example, if we modify the linear Bertrand model so that the unit cost of firm 1, call it c1 ,
exceeds that of firm 2, we obtain an asymmetric Bertrand game that does not have an equilibrium.
(Exercise: Prove this.) But this is not a severe difficulty. It arises only because we take the prices
as continuous variables in the classic Bertrand model. If the medium of exchange were discrete but
small, then there would exist an equilibrium of this game such that the high cost firm 1 charges its
unit cost (and thus makes zero profits) while the low cost firm 2 grabs the entire market by
charging the lowest possible price strictly below c1 . (Challenge: Formalize and prove this claim.)
"
8 In practice, the candidates may well have policy preferences of their own, and hence care also about the policy that
will be implemented in equilibrium. However, vote maximization is certainly one of the major goals of politicians, and
the above model (which is sometimes called the Downsian model of political competition) is useful in identifying the
implications of such an objective for the pre-election behavior of the candidates. It is by far the most standard model in
the literature.
9 If you are careful, you will notice that the assumption of uniformly distributed individuals did not really play a role
in arriving at this conclusion. If the distribution is given by an arbitrary continuous density function f on [0, 1] with
f (x) > 0, the equilibrium would have both parties locate on the median of this distribution.
Suppose now that there are three candidates, each of whom chooses a position ℓi in [0, 1]. The
payoff function of candidate i is given as

ui (ℓ1 , ℓ2 , ℓ3 ) =  1,    if i wins the election alone at the position profile (ℓ1 , ℓ2 , ℓ3 ),
                     1/2,  if there is a tie at the position profile (ℓ1 , ℓ2 , ℓ3 ),
                     0,    if i loses the election at the position profile (ℓ1 , ℓ2 , ℓ3 ),
where ℓ j denotes the policy chosen by candidate j = 1, 2, 3. The equilibrium of this game is not a
trivial extension of the previous game. Indeed, (ℓ1 , ℓ2 , ℓ3 ) = (1/2, 1/2, 1/2) does not correspond to
a Nash equilibrium here. For, each candidate is getting approximately 33% of the total votes at
this profile, and by moving slightly to the left (or right) of 1/2 any of the candidates can increase
her share of the votes to almost 50%. None of the candidates is thus playing optimally given the
actions of the others.
It turns out that there are many Nash equilibria of this 3-person voting game. As an exam-
ple let us verify that (ℓ1 , ℓ2 , ℓ3 ) = (1/4, 1/4, 3/4) is an equilibrium. Begin with observing that
candidate 3 wins the election alone at this position profile. Therefore, this candidate obviously
does not have any incentive to deviate from 3/4 given that the other two candidates position them-
selves at 1/4. Does candidate 1 (hence candidate 2) have a profitable deviation? No. Given that
(ℓ1 , ℓ3 ) = (1/4, 3/4), it is readily observed that if candidate 1 chooses instead of 1/4 any position
in the interval [0, 1/4], then candidate 3 remains as the winner, and if she deviates to any position
in the interval [3/4, 1], then candidate 2 becomes the winner alone. Less clear is the implication of
choosing a policy ℓ1 in the interval (1/4, 3/4). The key observation here is that by doing so candidate
1 would attract the votes that belong to ((ℓ1 + 1/4)/2, (ℓ1 + 3/4)/2). Thus in this case candidate 1
would get exactly 25% of the total vote (see Figure 5). But either candidate 2 (if 3/4 > ℓ1 ≥ 1/2) or
candidate 3 (if 1/2 ≥ ℓ1 > 1/4) is bound to collect at least 37.5% of the votes in this case. Therefore,
choosing 1/4
is as good as choosing any other position in [0, 1] for candidate 1, given the actions of the others;
she maintains a payoff level of 0 with any such choice. So, at the profile (ℓ1 , ℓ2 , ℓ3 ) = (1/4, 1/4, 3/4),
neither candidate 1 nor candidate 2 can force a win by means of a unilateral deviation, and we
conclude that this outcome is a Nash equilibrium. (Challenge: Compute all the Nash equilibria of
this game.)
But in voting problems the issue of coalitions arises very naturally. So we had better ask if the
equilibrium (1/4, 1/4, 3/4) is actually strong or not. Indeed, it is not. For, candidates 1 and 2 can
jointly deviate at this profile to, say, 3/4 − ε for small ε > 0, and thus force a win (which yields a
payoff of 1/2 to each). What is more, there is no strong Nash equilibrium of this game. (Challenge:
Prove this.)
Suppose next that each candidate may also choose to stay out of the election. We model the
situation as a game in strategic form by setting, for each candidate i, Ai = [0, 1] ∪ {stay out} and

ui (ℓ) =  1,    if i wins the election alone at the position profile ℓ,
          1/2,  if i ties for the first place at the position profile ℓ,
          0,    if i stays out at the position profile ℓ,
          −1,   if i runs but loses the election at the position profile ℓ.
Exercise. Prove: If n = 2, the unique Nash equilibrium of the game defined above is (1/2, 1/2).
Once again, life is more complicated in the 3-person scenario, but now this is not because of
the multiplicity of equilibria. On the contrary, this game has no Nash equilibrium when n = 3. A
sketch of the proof can be given as follows. First observe that, since each candidate can avoid losing
by staying out of the election, all running candidates must tie for the first place in any equilibrium.
Moreover, there cannot be only one running candidate in equilibrium, for otherwise any other
player could choose the same location as the running candidate and force a tie for the first place
(which is better than staying out). Similarly, it cannot be that everyone stays out in equilibrium.
Therefore, in any given equilibrium, there must exist two or more running candidates who tie for
the first place. Consider first the possibility that there are exactly two such candidates. Then, by
the exercise above, both candidates must be choosing 1/2. But since the running candidates share
the total votes, the remaining candidate can force a win by choosing slightly to the left (or right)
of 1/2. Thus staying out cannot be a best response for this candidate, contradicting that we are at
an equilibrium. The final possibility is the case in which all three candidates choose not to stay
out and tie for the first place. Suppose that (ℓ1 , ℓ2 , ℓ3 ) is such an equilibrium. If ℓ1 = ℓ2 = ℓ3 , any
one of the candidates can profitably deviate and force a win (why?), so at least two components
of (ℓ1 , ℓ2 , ℓ3 ) must be distinct. Suppose that ℓ1 ̸= ℓ2 = ℓ3 . In this case, candidate 1 can force a
win by getting very close to ℓ2 (see this?), and hence she cannot be best responding in the profile
(ℓ1 , ℓ2 , ℓ3 ). The two other possibilities in which exactly two components of (ℓ1 , ℓ2 , ℓ3 ) are distinct
are ruled out similarly. We are then left with the following final possibility: ℓ1 ̸= ℓ2 ̸= ℓ3 ̸= ℓ1 . To
rule out this case as well, we pick the leftmost candidate, call her i (so we have ℓi = min{ℓ1 , ℓ2 , ℓ3 }),
and observe that this candidate can force a win by choosing a position very close to the median
of {ℓ1 , ℓ2 , ℓ3 }. So, finally, we can conclude that there does not exist a Nash equilibrium for the
3-person voting game at hand.
The following table summarizes our findings in the four spatial voting games we have examined above:

                        two candidates                 three candidates
no entry decision       unique equilibrium:            many equilibria
                        both locate at 1/2
entry decision          unique equilibrium:            no equilibrium
                        (1/2, 1/2)

It is illuminating to observe how seemingly minor alterations in these voting models result in such
vast changes in the set of equilibria.
Chapter 5

Mixed Strategy Equilibrium
5.1 Introduction
Up to now we have assumed that the only choice available to players was to pick an action from
the set of available actions. In some situations a player may want to randomize between several
actions. If a player chooses which action to play randomly, we say that the player is using a mixed
strategy, as opposed to a pure strategy. In a pure strategy the player chooses an action for sure,
whereas in a mixed strategy, she chooses a probability distribution over the set of actions available
to her. In this section we will analyze the implications of allowing players to use mixed strategies.
As a simple illustration, consider the following matching-pennies game.
H T
H 1, −1 −1, 1
T −1, 1 1, −1
If we restrict players’ strategies only to actions, as we have done so far, this game has no Nash
equilibrium (check), i.e., it has no Nash equilibrium in pure strategies. Since we have argued that
Nash equilibrium is a necessary condition for a steady state, does that mean that the matching-
pennies game has no steady state? To answer this question let us allow players to use mixed
strategies. In particular, let each player play H and T with probability 1/2 each. We claim that
this choice of strategies constitutes a steady state, in the sense that if each player predicts that the
other player will play in this manner, then she has no reason not to play in the specified manner.
Since player 2 plays H with probability 1/2, the expected payoff of player 1 if she plays H is
(1/2) (1) + (1/2) (−1) = 0. Similarly, the expected payoff to action T is 0. Therefore, player 1
has no reason to deviate from playing H and T with probability 1/2 each. Similarly, if player 2
predicts that player 1 will play H and T with half probability each, she has no reason to deviate
from doing the same. This shows that the strategy profile where player 1 and 2 play H and T
with half probability each is a steady-state of this situation. We say that playing H and T with
probabilities 1/2 and 1/2 respectively constitutes a mixed strategy equilibrium of this game.
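To see the indifference computation at a glance, here is a tiny Python check. It is a sketch we add for illustration (the dictionaries and names in it are ours): it computes player 1's expected payoff to each pure action against the half-half strategy of player 2.

```python
# Player 1's payoffs in matching pennies (player 2's are the negatives).
u1 = {('H', 'H'): 1, ('H', 'T'): -1, ('T', 'H'): -1, ('T', 'T'): 1}
sigma2 = {'H': 0.5, 'T': 0.5}   # conjectured mixed strategy of player 2

for a1 in ('H', 'T'):
    expected = sum(q * u1[(a1, a2)] for a2, q in sigma2.items())
    print(a1, expected)   # both lines print 0: player 1 is indifferent,
                          # so mixing 1/2-1/2 is itself a best response
```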
If we assume that players repeatedly play this game and forecast each other’s action on the
basis of past play, then each player actually has an incentive to adopt a mixed strategy with these
probabilities. If, for example, player 1 plays H constantly, rather than the above mixed strategy,
then it is reasonable that player 2 will come to expect him to play H again and play her best
response, which is T. This will result in player 1 getting −1 as long as he continues playing H.
Therefore, he should try to be unpredictable, for as soon as his opponent becomes able to predict
his action, she will be able to take advantage of the situation. Therefore, player 1 should try to
mimic playing a mixed strategy by playing H and T with frequencies 1/2 and 1/2.
Consider the Hawk-Dove game for another motivation.
H D
H 0, 0 6, 1
D 1, 6 3, 3
Suppose each period two randomly selected individuals, who both belong to a large population,
play this game. Also suppose that 3/4 of the population plays H (is hawkish) and 1/4 plays D
(is dovish), but no player can identify the opponent’s type before the game is played. We claim
that this is a stable population composition. Since the opponent is chosen randomly from a large
population, each player expects the opponent to play H with probability 3/4 and D with probability
1/4. Would a dovish player do better if she were a hawkish player? Well, on average a dovish player
gets a payoff of (3/4) (1) + (1/4) (3) = 3/2. A hawkish player gets (3/4) (0) + (1/4) (6) = 3/2 as
well. Therefore, neither type of player has a reason to change his behavior.
5.2 Mixed Strategies and Expected Payoffs

Definition. A mixed strategy αi for player i is a probability distribution over his set of available
actions, Ai. In other words, if player i has m actions available, a mixed strategy is an m-dimensional
vector (α_i^1, α_i^2, . . . , α_i^m) such that α_i^k ≥ 0 for all k = 1, 2, . . . , m, and ∑_{k=1}^{m} α_i^k = 1.
We will denote by αi (ai ) the probability assigned to action ai by the mixed strategy αi . Let
△ (X ) denote the set of all probability distributions on a set X . Then, any mixed strategy αi for
player i is an element of △ (Ai ) , i.e., αi ∈ △ (Ai ) . Following the convention we developed for
action profiles, we will denote by α = (αi )i∈N a mixed strategy profile, i.e., a mixed strategy for
each player in the game. To denote the strategy profile in which player i plays α′i and the rest of the
players play α∗j, j ≠ i, we will use (α′i, α∗−i). Unless otherwise stated, we will assume that players
choose their mixed strategies independently.
Notice that not all actions have to receive a positive probability in a mixed strategy. Therefore,
it is also possible to see pure strategies as degenerate mixed strategies, in which all but one action
is played with zero probability.
Let us illustrate these concepts by using the Battle of the Sexes game that we introduced before:
m o
m 2,1 0,0 .
o 0,0 1,2
A possible mixed strategy for player 1 is (1/2, 1/2) , or α1 (m) = α1 (o) = 1/2. Another is (1/3, 2/3) ,
or α1 (m) = 1/3, α1 (o) = 2/3. For player 2, we may have (2/3, 1/3) , i.e., α2 (m) = 2/3, α2 (o) =
1/3, as a possible mixed strategy. A mixed strategy profile could be ((1/2, 1/2), (2/3, 1/3));
another could be ((1/3, 2/3), (2/3, 1/3)). Notice that we always have α1 (o) = 1 − α1 (m) and
α2 (o) = 1 − α2 (m) simply because probabilities have to add up to one. Therefore, sometimes we
may want to simplify the notation by defining, say p ≡ α1 (m) , q ≡ α2 (m) , and using (p, q) to
denote a strategy profile, where player 1 chooses m with probability p and action o with probabil-
ity 1 − p, and player 2 chooses m with probability q and action o with probability 1 − q. Notice
that if there were 3 actions for a player, then we would need at least two numbers to specify a mixed
strategy for that player.
Once we allow players to use mixed strategies, the outcomes are not deterministic anymore. For
example if both players play m with probability 1/2 in the BoS game, then each action profile is ob-
tained with probability 1/4. Therefore, we have to specify players’ preferences over lotteries, i.e.,
over probability distributions over outcomes, rather than preferences over certain outcomes. We
will assume that players’ preferences satisfy the assumptions of Von Neumann and Morgenstern
so that the payoff to an uncertain outcome is the weighted average of the payoffs to underlying
certain outcomes, weight attached to each outcome being the probability with which that outcome
occurs. (See Dutta, P., ch. 27 for more on this). In other words, we assume that for each player i,
there is a payoff function ui defined over the certain outcomes a ∈ A, such that the player’s pref-
erences over lotteries on A can be represented by the expected value of ui . If each outcome a ∈ A
occurs with probability p(a), then the expected payoff of player i is
∑_{a∈A} p(a) ui(a).
Example 5.1. For example, in the BoS game if each player i plays the mixed strategy αi, then the
expected payoff of player 1 is given by
u1 (α1, α2) = 2 α1(m) α2(m) + α1(o) α2(o),
and that of player 2 by
u2 (α1, α2) = α1(m) α2(m) + 2 α1(o) α2(o).
Notice that, since αi(o) = 1 − αi(m), we can write these expected payoffs as
u1 (α1, α2) = 1 − α1(m) + α2(m) [3α1(m) − 1]
and
u2 (α1, α2) = 2 − 2α1(m) + α2(m) [3α1(m) − 2].
For example, if player 1 plays m for sure, i.e., α1(m) = 1, and player 2 plays m with probability
1/3, then
u1 (α1, α2) = 1 − 1 + (1/3)(3 − 1) = 2/3
and
u2 (α1, α2) = 2 − 2 + (1/3)(3 − 2) = 1/3.
Definition. The support of a mixed strategy αi is the set of actions to which αi assigns a positive
probability, i.e.,
supp (αi ) = {ai ∈ Ai : αi (ai ) > 0} .
Definition. The best response correspondence of player i is the set of mixed strategies which are
optimal given the other players' mixed strategies. In other words,
Bi (α−i) = {αi ∈ △ (Ai) : ui (αi, α−i) ≥ ui (α′i, α−i) for all α′i ∈ △ (Ai)}.
For example, in the BoS game, if player 2 plays (1/2, 1/2), then player 1's expected payoff to m is
2(1/2) = 1, whereas her expected payoff to o is 1(1/2) = 1/2; playing m for sure is therefore her
unique optimal strategy, and
B1 ((1/2, 1/2)) = {(1, 0)}.
In general, letting p ≡ α1 (m) and q ≡ α2 (m), we can express the best response of player 1 in
terms of the optimal choice of p in response to q:

          ⎧ {1},    if q > 1/3
B1 (q) =  ⎨ [0, 1], if q = 1/3
          ⎩ {0},    if q < 1/3.
Definition. A mixed strategy equilibrium is a mixed strategy profile (α∗1, . . . , α∗n) such that, for all
i = 1, . . . , n,
α∗i ∈ arg max_{αi∈△(Ai)} ui (αi, α∗−i),
or, equivalently,
α∗i ∈ Bi (α∗−i).
In the Battle of the Sexes game, then, the set of mixed strategy Nash equilibria is
{((1, 0), (1, 0)), ((0, 1), (0, 1)), ((2/3, 1/3), (1/3, 2/3))},
i.e., the two pure strategy equilibria (m, m) and (o, o), and one nondegenerate mixed strategy
equilibrium.
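This set can be verified mechanically. The short Python sketch below is our own illustration; it uses the closed-form payoffs from Example 5.1, with p ≡ α1(m) and q ≡ α2(m), and checks that each profile is a mutual best response. Since a player's expected payoff is linear in her own mixing probability, it suffices to compare against the two pure deviations.

```python
def u1(p, q):   # p = alpha1(m), q = alpha2(m)
    return 2 * p * q + (1 - p) * (1 - q)

def u2(p, q):
    return p * q + 2 * (1 - p) * (1 - q)

def is_equilibrium(p, q, tol=1e-9):
    # each player must do at least as well as under either pure deviation
    return (u1(p, q) >= max(u1(0, q), u1(1, q)) - tol and
            u2(p, q) >= max(u2(p, 0), u2(p, 1)) - tol)

for p, q in [(1, 1), (0, 0), (2/3, 1/3)]:
    print((p, q), is_equilibrium(p, q))   # True for all three profiles
```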
Remark 5.1. A mixed strategy αi is a best response to α−i if and only if every action in the support
of αi is itself a best response to α−i . Otherwise, player i could transfer probability from the action
which is not a best response to an action which is a best response and strictly increase his payoff.
Remark 5.2. This suggests an easy way to find mixed strategy Nash equilibrium. A mixed strategy
profile α∗ is a mixed strategy Nash equilibrium if and only if for each player i, each action in the
support of α∗i is a best response to α∗−i . In other words, each action in the support of α∗i yields the
same expected payoff when played against α∗−i , and no other action yields a strictly higher payoff.
Remark 5.3. One implication of the above remark is that a nondegenerate mixed strategy equilib-
rium is never strict: in such an equilibrium each player is indifferent between her equilibrium
strategy and every action in its support.
Example 5.3. In the BoS game, if (α∗1, α∗2) is a mixed strategy equilibrium with supp(α∗1) = supp(α∗2) =
{m, o}, then it must be that the expected payoffs to m and o are the same for both players against
α∗−i. In other words, for player 1 we need
2α∗2 (m) = 1 − α∗2 (m),
i.e., α∗2 (m) = 1/3, and for player 2 we need
α∗1 (m) = 2 (1 − α∗1 (m)),
i.e., α∗1 (m) = 2/3. This yields the mixed strategy equilibrium ((2/3, 1/3), (1/3, 2/3)).
Proposition 5.1. Every finite strategic form game has a mixed strategy equilibrium.
5.4 Dominated Actions and Mixed Strategies

In earlier lectures we defined an action to be weakly or strictly dominated only if there existed
another action which weakly or strictly dominated that action. However, it is possible that an action
is not dominated by any other action, yet it is dominated by a mixed strategy.
Definition. In a strategic form game, player i's mixed strategy αi strictly dominates her action a′i
if
ui (αi, a−i) > ui (a′i, a−i) for all a−i ∈ A−i.
Consider, for example, the following game:

       L       R
T    1, 1    1, 0
M    3, 0    0, 3
B    0, 1    4, 1
Clearly, no action dominates T, but the mixed strategy α1 (M) = 1/2, α1 (B) = 1/2 strictly domi-
nates T.
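The claim is easy to verify numerically; the following few lines of Python (an illustration of ours) compute the mixture's expected payoff against each of player 2's actions and compare it with the payoff to T.

```python
u1 = {('T', 'L'): 1, ('T', 'R'): 1,
      ('M', 'L'): 3, ('M', 'R'): 0,
      ('B', 'L'): 0, ('B', 'R'): 4}

alpha = {'M': 0.5, 'B': 0.5}   # the dominating mixed strategy
for a2 in ('L', 'R'):
    mixed = sum(prob * u1[(a1, a2)] for a1, prob in alpha.items())
    print(a2, mixed, u1[('T', a2)])   # 1.5 > 1 against L, 2.0 > 1 against R
```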
Remark 5.4. A strictly dominated action is never used with positive probability in a mixed strategy
equilibrium.
To find the mixed strategy equilibria in games where one of the players has more than two
actions, one should first look for strictly dominated actions and eliminate them (see Example 118.2
in Osborne, Chapter 4, on my web page).
Chapter 6
Bayesian Games
So far we have assumed that all players had perfect information regarding the elements of a
game. These are called games with complete information. A game with incomplete information,
on the other hand, tries to model situations in which some players have private information before
the game begins. The initial private information is called the type of the player. For example, types
could be the privately observed costs in an oligopoly game, or privately known valuations of an
object in an auction, etc.
6.1 Preliminaries
A Bayesian game is a strategic form game with incomplete information. It consists of:
• a set of players, N,
• for each player i ∈ N, a set of actions, Ai,
• for each player i ∈ N, a set of types, Θi,
• for each player i ∈ N, a probability function,
pi : Θi → △ (Θ−i),
• and for each player i ∈ N, a payoff function,
ui : A × Θ → R.
The function pi summarizes what player i believes about the types of the other players given her
type. So, pi (θ−i |θi ) is the conditional probability assigned to the type profile θ−i ∈ Θ−i . Similarly,
ui (a|θ) is the payoff of player i when the action profile is a and the type profile is θ.
We call a Bayesian game finite if N, Ai and Θi are all finite, for all i ∈ N. A pure strategy for
player i in a Bayesian game is a function which maps player i's type into her action set,
ai : Θi → Ai.
A mixed strategy, similarly, assigns to each type θi a probability distribution αi (·|θi) over Ai, where
αi (ai|θi) denotes the probability that type θi of player i plays action ai. For instance, if player 2 has
two possible types, θ2 and θ′2, to which player 1 of type θ1 assigns probabilities q and 1 − q, then
for a given mixed strategy profile α∗ the expected payoff of player 1 of type θ1 is
q ∑_{a∈A} α∗1 (a1|θ1) α∗2 (a2|θ2) u1 (a1, a2|θ1, θ2) + (1 − q) ∑_{a∈A} α∗1 (a1|θ1) α∗2 (a2|θ′2) u1 (a1, a2|θ1, θ′2).
6.2 Bayesian Equilibrium

Definition. A Bayesian equilibrium of a Bayesian game is a mixed strategy profile α = (αi)i∈N,
such that for every player i ∈ N and every type θi ∈ Θi, we have
αi (·|θi) ∈ arg max_{γ∈△(Ai)} ∑_{θ−i∈Θ−i} pi (θ−i|θi) ∑_{a∈A} ( ∏_{j∈N\{i}} αj (aj|θj) ) γ (ai) ui (a|θ).
Remark 6.1. Type, in general, can be any private information that is relevant to the player’s decision
making, such as the payoff function, player’s beliefs about other players’ payoff functions, her
beliefs about what other players believe her beliefs are, and so on.
Remark 6.2. Notice that, in the definition of a Bayesian equilibrium we need to specify strategies
for each type of a player, even if in the actual game that is played all but one of these types are
non-existent. This is because, given a player’s incomplete information, analysis of that player’s
decision problem requires us to consider what each type of the other players would do, if they were
to play the game.
6.3 Some Examples

Suppose player 2 has private information about her own type, which is either l or h. Type l loves
going out with player 1, whereas type h hates it. Player 1 has only one type and does not know
player 2's type; her beliefs place probability 1/2 on each type. The following tables give the payoffs to each
action and type profile:
          type l                      type h
       B        S                  B        S
  B   2, 1     0, 0           B   2, 0     0, 2
  S   0, 0     1, 2           S   0, 1     1, 0
In terms of the elements of a Bayesian game, we have:
• N = {1, 2}
• A1 = A2 = {B, S}
• Θ1 = {x}, Θ2 = {l, h}
• p1 (l|x) = p1 (h|x) = 1/2.
Since player 1 has only one type (i.e., his type is common knowledge) we will omit references
to his type from now on.
Let us find the Bayesian equilibria of this game by analyzing the decision problem of each
player of each type:
Player 2 of type l : Given player 1’s strategy α1 , his expected payoff to
• action B is α1 (B) ,
• action S is 2 (1 − α1 (B))
so that his best response is to play B if α1 (B) > 2/3 and to play S if α1 (B) < 2/3.
Player 2 of type h: Given player 1's strategy α1, his expected payoff to
• action B is (1 − α1 (B)),
• action S is 2α1 (B),
so that his best response is to play B if α1 (B) < 1/3 and to play S if α1 (B) > 1/3.
Player 1: Given player 2's strategies α2 (·|l) and α2 (·|h), her expected payoff to
• action B is
(1/2) α2 (B|l)(2) + (1/2) α2 (B|h)(2) = α2 (B|l) + α2 (B|h),
• action S is
(1/2)(1 − α2 (B|l))(1) + (1/2)(1 − α2 (B|h))(1) = 1 − [α2 (B|l) + α2 (B|h)] / 2.
Therefore, her best response is to play B if α2 (B|l) + α2 (B|h) > 2/3 and to play S if α2 (B|l) +
α2 (B|h) < 2/3.
Let us first check if there is a pure strategy equilibrium in which both types of player 2 play
B, i.e., α2 (B|l) = α2 (B|h) = 1. In this case player 1's best response is to play B as well, but then
playing B is not a best response for player 2 of type h. Similarly, check that α2 (B|l) = α2 (B|h) = 0 and
α2 (B|l) = 0, α2 (B|h) = 1 cannot be part of a Bayesian equilibrium. Let us check if α2 (B|l) = 1
and α2 (B|h) = 0 could be part of an equilibrium. In this case player 1's best response is to play B.
Player 2 type l's best response is to play B and that of type h is S. Therefore,
(α1 (B) = 1, α2 (B|l) = 1, α2 (B|h) = 0)
is a Bayesian equilibrium.
Clearly, there is no equilibrium in which both types of player 2 mix. Suppose only type l
mixes. Then type l must be indifferent, which requires α1 (B) = 2/3; and for player 1 to mix in this
way she must be indifferent too, which requires α2 (B|l) + α2 (B|h) = 2/3. Since α1 (B) = 2/3 > 1/3,
type h's best response is S, i.e., α2 (B|h) = 0, and hence α2 (B|l) = 2/3. As these strategies are
mutual best responses, the following is another Bayesian equilibrium of this game:
(α1 (B) = 2/3, α2 (B|l) = 2/3, α2 (B|h) = 0).
As a second example, consider a Cournot duopoly in which each firm i chooses a quantity qi and
has the payoff function
ui = qi (θi − qi − qj).
Firm 1 has one type θ1 = 1, but firm 2 has private information about its type θ2. Firm 1 believes
that θ2 = 3/4 with probability 1/2 and θ2 = 5/4 with probability 1/2, and this belief is common
knowledge.
We will look for a pure strategy equilibrium of this game. Firm 2 of type θ2's decision problem
is to
max_{q2} q2 (θ2 − q1 − q2),
which is solved at
q∗2 (θ2) = (θ2 − q1) / 2.
Firm 1's decision problem, on the other hand, is
max_{q1} (1/2) q1 (1 − q1 − q∗2 (3/4)) + (1/2) q1 (1 − q1 − q∗2 (5/4)),
which is solved at
q∗1 = [2 − q∗2 (3/4) − q∗2 (5/4)] / 4.
Solving yields
q∗1 = 1/3, q∗2 (3/4) = 5/24, q∗2 (5/4) = 11/24.
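Since the three first-order conditions above form a linear system, the solution can also be obtained numerically. The sketch below is our own (it assumes numpy is available); it writes the conditions in matrix form and solves them.

```python
import numpy as np

# Unknowns x = (q1, q2(3/4), q2(5/4)), written as a linear system A x = b:
#   q1 + (1/4) q2(3/4) + (1/4) q2(5/4) = 1/2
#   (1/2) q1 + q2(3/4)                 = 3/8
#   (1/2) q1 + q2(5/4)                 = 5/8
A = np.array([[1.0, 0.25, 0.25],
              [0.5, 1.0,  0.0 ],
              [0.5, 0.0,  1.0 ]])
b = np.array([0.5, 0.375, 0.625])
q1, q2_low, q2_high = np.linalg.solve(A, b)
print(q1, q2_low, q2_high)   # 1/3, 5/24 (approx. 0.208), 11/24 (approx. 0.458)
```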
Chapter 7

Extensive Form Games with Perfect Information
So far we have assumed that players, when taking their actions, either did so simultaneously,
or without knowing the action choice of the other players. Although this modelling assumption
might be appropriate in some settings, there are many situations in the world of business and poli-
tics that involve players moving sequentially after observing what the other players have done. For
example, a bargaining situation between a seller and a buyer may involve the buyer making an offer
and the seller, after observing the buyer’s offer, either accepting or rejecting it. Or imagine an in-
cumbent senator deciding whether to run an expensive ad campaign for the upcoming elections and
a potential challenger deciding whether to enter the race or not, after observing the campaign deci-
sion of the incumbent. Both of these situations involve a player choosing an action after observing
the action of the other player.
7.1 Extensive Form Games

The extensive form of a game, as opposed to the strategic form, provides a more appropriate
framework to analyze certain interesting questions that arise in strategic interactions that involve
sequential moves.
As you now very well know, the strategic form of a game has three ingredients: (1) the set of
players, (2) the set of actions, and (3) the payoff functions. The extensive form provides a richer
specification of a strategic interaction by specifying who moves when doing what and with what
information. The easiest way to represent an extensive form game is to use a game tree, which is
a multi-person generalization of a decision tree.
To illustrate, let us go back to the bargaining example above and assume that the buyer moves
first by offering either $500 or $100 for a product that she values $600. The seller, for whom
the value of the object is $50, responds by either accepting (A) or rejecting (R) the offer. We can
represent this situation by the game tree in Figure 7.1.
[Figure 7.1: Bargaining Game. The buyer B moves first, offering either 100 or 500; after each offer
the seller S either accepts (A) or rejects (R). The payoff vectors (buyer, seller) at the terminal nodes
are (500, 50) if 100 is accepted, (0, 0) if 100 is rejected, (100, 450) if 500 is accepted, and (0, 0) if
500 is rejected.]
A game tree consists of the following components:
• nodes
• branches
• information sets
• player labels
• action labels
and
• payoffs.
Nodes are of two types: Decision nodes represent the points in the game at which players
make a decision, i.e., choose an action, or a strategy in general. Like any other tree, a game tree has
a root and it is useful to distinguish the root, which we will call the initial node, from the other
decision nodes (it is represented by an open circle whereas all the other nodes are represented by
closed circles). To each decision node, including the initial node, one, and only one, player label
is attached, to indicate who moves at that particular decision node. The second type of nodes are
called terminal nodes and at these nodes the game is over and nobody takes any action anymore.
To each terminal node a payoff vector is appended.
From each decision node, one or more branches emanate, each branch representing an action
that can be taken by the player who is to move at that node. Each such branch is labelled with the
action that it represents. A branch either leads to another decision node or to a terminal node.
The last component that we have to talk about is the information sets. Information sets tell us
what the players know when they are making a decision. They are collections of decision nodes of
a player that cannot be distinguished from the perspective of that player. We can illustrate it using
the bargaining example under the assumption that the seller, somehow, does not observe the buyer’s
offer before deciding whether to accept or reject it. We depict this informational assumption by
connecting the two decision nodes of the seller with a dashed line (see Figure 7.2).
[Figure 7.2: Bargaining Game with Imperfect Information. The tree is the same as in Figure 7.1,
except that the seller's two decision nodes are connected by a dashed line, indicating that they
belong to the same information set.]
Notice that the actions available to the seller at the two nodes that are in the same information
set must be the same, otherwise the seller would be able to distinguish between them by just looking
at the actions available to her.
In this section we will deal with extensive form games with perfect information in which every
player can distinguish between any two decision nodes and hence we will not have to worry about
information sets.
7.1.2 Strategies
Strategies in a strategic form game are either action choices or probability distributions over
actions. In an extensive form game, description of a strategy is more involved since players may
have to choose actions at several points in the game. Therefore, a pure strategy of a player in an
extensive form game has to specify an action choice at every decision node of that player. In that
sense, a strategy is a complete plan of action, so complete that if it were handed over to a computer,
the computer would know what to do under every contingency. We denote a pure strategy of player
i by si , and the set of all pure strategies by Si .
For example, in the extensive form game in Figure 7.1, a pure strategy for the buyer is easy
enough: it has to specify what price to offer at the initial node. A pure strategy for the seller, on the
other hand, has to specify an action at each decision node she may be called upon to move. So, the
buyer has two pure strategies available to her: 100 and 500, and hence SB = {100, 500}. The seller,
however, has four pure strategies: (1) AA, (2) AR, (3) RA, (4) RR, and hence SS = {AA, AR, RA, RR}.
The extensive form strategies sometimes lead to confusion. Let us try to illustrate why, by
looking at the extensive form game in Figure 7.3.
1❜
L " ❅ R
" ❅ 2
"" ❅"
3, 3 l "❅ r
" ❅
"" ❅1"
10, 0 L "❅ R′
′
" ❅
"" ❅"
1, −10 2,1
Figure 7.3: Another Extensive Form Game
A strategy for player 1 in this game has to specify an action at every decision node she has,
and there are two such nodes. She, therefore, has four strategies: LL′ , LR′ , RL′ , RR′ . Notice that
the first two strategies specify an action even at player 1's second decision node, which would not
be reached if those strategies were implemented. The reason why will become clear in the next
section, after we analyze the optimal behavior of players. For now, let us look at the game tree of
the senate-race game (see Figure 7.4) to further illustrate the concepts introduced so far.
[Figure 7.4: Senate-Race Game. The incumbent I first chooses between running ads (A) and not
running ads (N); the challenger C then chooses between entering (e) and not entering (n). The
payoff vectors (incumbent, challenger) are (1, 1) after (A, e), (3, 3) after (A, n), (2, 4) after (N, e),
and (4, 2) after (N, n).]
7.2 Backward Induction Equilibrium

As in the strategic form games, the equilibrium concept in extensive form games is based upon
the idea that each player plays a best response to the play of the other players. The difference is
that we now require strategies to be optimal at every step in the game. The backward induction
equilibrium is an algorithm that results in a recommendation of an action choice at every decision
node with the property that if every player follows those recommendations their strategies would
be optimal at every decision node they may be called upon to move. This will also result in a path
of play (i.e., a sequence of branches) which will be called the backward induction outcome.
The algorithm is really simple. You, the game theorist, go to the final decision nodes and
determine the best action available to the players who are to move at those nodes. Since there are
no more moves after the players move at these decision nodes, this boils down to choosing
the action that leads to the highest payoff for the player who is moving. (If there is a tie between
two actions that lead to the highest payoff, you may simply choose one of them.) After you have
done that, you prune all the actions that are not chosen (or just indicate the ones that are chosen by
an arrow-head) and go to the penultimate decision nodes to determine the optimal action at those
nodes. You continue in this manner until you reach the initial node and determine the optimal
action there.
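The algorithm translates directly into a short recursion. The Python sketch below is our own illustration, not the text's: it encodes a game tree as nested tuples and returns the payoff vector obtained when every mover chooses optimally (ties broken arbitrarily, as above).

```python
# A terminal node is a payoff tuple, e.g. (500, 50); a decision node is a pair
# (mover, {action: subtree}), where mover indexes into the payoff tuples.
def backward_induction(node):
    if not isinstance(node[1], dict):   # terminal node: a payoff vector
        return node
    mover, moves = node
    # keep the subtree whose backward induction payoff is best for the mover
    return max((backward_induction(child) for child in moves.values()),
               key=lambda payoff: payoff[mover])

# The bargaining game of Figure 7.1: player 0 is the buyer, player 1 the seller.
bargaining = (0, {'100': (1, {'A': (500, 50), 'R': (0, 0)}),
                  '500': (1, {'A': (100, 450), 'R': (0, 0)})})
print(backward_induction(bargaining))   # (500, 50), i.e. the outcome (100, A)
```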
For example, in the bargaining game we start with the seller’s decision nodes which are the
final decision nodes in the game tree. Since accepting both offers is optimal we mark the branches
labelled A by arrow-heads. Once we do that, it is easily seen that the best action for player 1 is to
offer $100. Therefore, the backward induction equilibrium of the bargaining game is (100, AA) and
the backward induction outcome is (100, A) . (See Figure 7.5) The backward induction equilibrium
of the senate race game is (A, ne) and its backward induction outcome is (A, n) . (See Figure 7.6).
[Figure 7.5: Backward induction in the Bargaining Game. Arrow-heads mark the seller's choice of
A after each offer and the buyer's choice of 100 at the initial node.]
[Figure 7.6: Backward induction in the Senate-Race Game. Arrow-heads mark the challenger's
choices of n after A and e after N, and the incumbent's choice of A at the initial node.]
As an exercise verify that the backward induction equilibrium of the game in Figure 7.3 is
(LR′, r). This example illustrates why player 1's strategy had to specify an action at her second
decision node even though that node is never reached once she chooses L at the initial node.
Whether L is optimal or not for player 1 depends on what she believes
that player 2 will do. If she believes that player 2 is going to choose l, then L is not optimal. But,
whether player 2 will choose l or not depends on what player 2 believes that player 1 is going to
do in her last decision node. Therefore, to determine the optimal action for player 1 at her first
decision node, we have to specify what she intends to do at her last decision node.
[Figure 7.7: Backward induction in the game of Figure 7.3. Arrow-heads mark player 1's choice of
R′ at her second decision node, player 2's choice of r, and player 1's choice of L at the initial node.]
The bargaining and the senate-race games illustrate an important phenomenon that arises in
many extensive form games: the power of commitment. Suppose that the seller could, some-
how, commit herself to accepting only the offer 500 and that this is known to the buyer. Now, given
that knowledge, the best that the buyer can do is to actually offer 500, because otherwise her offer
will be rejected and she will receive 0, whereas offering 500 gives her 100. Therefore, public, and
credible, commitments could increase a player’s payoff in an extensive form game. Notice that this
is similar to eliminating action A after the offer 100. This is in stark contrast to the single individual
decision making problems where eliminating an action can never improve one’s payoff.
Similarly, in the senate-race game, if the challenger could publicly commit to entering the race
irrespective of the campaign decision of the incumbent, the best thing the incumbent could do
would be not to run campaign ads, and hence the challenger would enter the race and obtain a
payoff of 4 rather than the 3 that she was getting in the backward induction equilibrium.1
Another interesting phenomenon that arises in certain extensive form games is that of first-
mover advantage. For example, in the senate-race game, when the incumbent moves first, both
players obtain a payoff of 3 in the backward induction equilibrium. Now, let us change the order
of the moves so that it is the challenger who moves first so that we obtain the game tree depicted
in Figure 7.8.
1 See Thomas Schelling (1960), The Strategy of Conflict, for an excellent account of the idea of credible commitments.
[Figure 7.8: Modified Senate-Race Game. The challenger C now moves first, choosing e or n; the
incumbent I then chooses A or N. The payoff vectors (incumbent, challenger) are (1, 1) after (e, A),
(2, 4) after (e, N), (3, 3) after (n, A), and (4, 2) after (n, N).]
The backward induction equilibrium of this game is (e, NN) which yields a payoff of 4 to the
challenger and a payoff of 2 to the incumbent. Therefore, if they had the chance, both players would
prefer to move first in this game. This is similar to the idea behind the power of commitment. By
choosing e the challenger commits herself to entering whatever the incumbent does.
However, not all games have a first-mover advantage. Quite to the contrary, some games have
a second-mover advantage. Consider a game in which the incumbent (who belongs to a rightist
party) and the challenger (who belongs to a leftist party) in a senate race are choosing political
platforms; either a leftist or a rightist one. Suppose that if both of them choose the same platform
the incumbent wins the elections, whereas if they choose different platforms it is the challenger
who wins. The candidates mostly care about winning, but they also would like to win (or lose)
without compromising their political views. The game tree in Figure 7.9 depicts the situation if
it is the incumbent who moves first, whereas the one in Figure 7.10 reverses the order of moves.
Verify that this game exhibits second mover advantage.
[Figure 7.9: Senate-Race Game II. The incumbent I first chooses a platform, L or R; the challenger
C then chooses l or r. The payoff vectors (incumbent, challenger) are (1, 0) after (L, l), (−1, 1)
after (L, r), (0, 2) after (R, l), and (2, −1) after (R, r).]
[Figure 7.10: Modified Senate-Race Game II. The challenger C now moves first, choosing l or r;
the incumbent I then chooses L or R. The payoff vectors (incumbent, challenger) are (1, 0) after
(l, L), (0, 2) after (l, R), (−1, 1) after (r, L), and (2, −1) after (r, R).]
A game tree is a collection of nodes, called T, and a binary relation between the nodes called
a precedence relation, denoted ≻. Given two nodes α and β in the game tree, α ≻ β means that α
precedes β. Using this relation, we can define the set of predecessors of α as
P (α) = {t ∈ T : t ≻ α}
The set P (α) is simply the set of nodes from which one can go (through a sequence of branches)
to α. Similarly, the set of successors of α is the set of nodes to which one can go starting from α.
The precedence relation ≻ is assumed to satisfy the following properties:
1. it is asymmetric: if α ≻ β, then it is not the case that β ≻ α;
2. it is transitive: if α ≻ β and β ≻ γ, then α ≻ γ;
3. there is a common predecessor to any two non-initial nodes, i.e., for all α, β ∈ T with P (α) ≠ ∅
and P (β) ≠ ∅, there exists a γ ∈ T such that γ ∈ P (α) and γ ∈ P (β);
4. the set of predecessors P (α) of every node α is completely ordered by ≻.
The first two conditions guarantee that there are no cycles in the game tree, while the third
condition guarantees that there is a unique initial node. The last condition guarantees that starting
from any node there is a unique path back to the initial node.
Theorem 7.1 (Kuhn's Theorem; also known as Zermelo's Theorem). Every finite extensive form
game with perfect information has a backward induction equilibrium.
Proof. Omitted.
An extensive form game can also be represented in strategic form: the set of players is the same,
each player i's strategy set is her set Si of extensive form strategies, and the payoff to a strategy
profile is the payoff attached to the terminal node that the profile induces. So, the only difference
from the standard definition of a strategic form game is the use of strategies rather than actions.
As an illustration, let us find the strategic form of the bargaining game. The set of players
is N = {B, S} , the set of strategies are SB = {100, 500} and SS = {AA, AR, RA, RR}. The payoff
functions are represented in the following bimatrix
AA AR RA RR
100 500, 50 500, 50 0, 0 0, 0
500 100, 450 0, 0 100, 450 0, 0
Therefore, the above bargaining game has three Nash equilibria (100, AA), (100, AR), and
(500, RA) . Notice that the first two Nash equilibria result in the same outcome as does the back-
ward induction equilibrium, i.e., (100, A) , whereas the third one results in the outcome (500, A) .
This last equilibrium, however, is sustained by an incredible threat by the seller, i.e., the threat
that she will not accept the offer of $100. This threat is not credible because, if it were tested by
the buyer, i.e., if the buyer were to offer $100, then the seller would actually find it in her interest to
accept the offer.
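The three equilibria can also be recovered by brute force. The Python sketch below is ours; the payoff function u simply encodes the bimatrix above, and the script enumerates all strategy profiles, keeping those in which each player best responds.

```python
from itertools import product

SB = ['100', '500']
SS = ['AA', 'AR', 'RA', 'RR']   # seller's reply to 100, then her reply to 500

def u(b, s):   # payoffs (buyer, seller)
    reply = s[0] if b == '100' else s[1]
    if reply == 'R':
        return (0, 0)
    return (500, 50) if b == '100' else (100, 450)

equilibria = [(b, s) for b, s in product(SB, SS)
              if u(b, s)[0] >= max(u(b2, s)[0] for b2 in SB)
              and u(b, s)[1] >= max(u(b, s2)[1] for s2 in SS)]
print(equilibria)   # [('100', 'AA'), ('100', 'AR'), ('500', 'RA')]
```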
The backward induction equilibrium concept eliminates equilibria based upon incredible threats
by requiring players to be rational at every point in the game, a property that we call sequential
rationality. Sequential rationality is stronger than just requiring the strategies to be best responses
to the strategies of the other players, i.e., stronger than the rationality requirement behind the Nash
equilibrium concept. For example, in the bargaining game above, the strategy RA is a best response
to the offer of $500, but it is not sequentially rational, because it specifies that the seller reject the
offer of $100, and this is not rational at the seller's decision node following the offer of $100.
7.5 Extensive Form Games with Imperfect Information

In the previous sections we analyzed extensive form games with perfect information, in which
every player had a perfect knowledge of what had happened previously in the game, i.e., each
player observed the previous moves made by the other players. In this section we will relax this
assumption and allow the possibility that some of the previous moves by other players are not
observed when a player is called upon to move. Such games are called extensive form games with
imperfect information.
In extensive form games with imperfect information, the notion of information sets, which we
have introduced in the last section becomes crucial. An information set of player i is a collection
of decision nodes of player i that cannot be distinguished by player i. Therefore, if the game reaches
any of the nodes in an information set of a player, that player does not know which of the nodes
in that information set has actually been reached.
As an example consider the bargaining game with imperfect information (see Figure 7.2). In
this game there is one information set of the seller that contains the decision nodes following the
offers 100 and 500. When the seller is called upon to move, she does not know which of the
two offers has been made, i.e., which of the two decision nodes in the information set has been
reached. The strategy sets are given by SB = {100, 500} and SS = {A, R}, and hence we have the
following strategic form of this game
A R
100 500, 50 0, 0
500 100, 450 0, 0
The unique Nash equilibrium of this game is therefore (100, A) , the same outcome as the
backward induction equilibrium outcome of the bargaining game with perfect information! The
reason why we have a unique Nash equilibrium outcome in this game is that we have eliminated
the seller’s ability of making a non-credible threat of rejecting the offer of $100.
We may think of extensive form with imperfect information as a generalization of extensive
form with perfect information. In the latter, all the information sets are singletons, i.e., they each
contain a single node, whereas in the former there is at least one information set that contains more
than one node.
As another example, consider the following entry game. Suppose Pepsi is currently the sole
provider in a market, say in Bulgaria. Coke is considering entering the market. If Coke enters,
both firms simultaneously decide whether to act tough (T ) or accommodate (A). This leads to an
extensive form game with imperfect information whose game tree representation is given in Figure
7.11, where the first number in a payoff vector belongs to Coke and the second to Pepsi.
[Figure 7.11: Entry Game. Coke C first chooses to stay out (O), which yields the payoff vector
(0, 5), or to enter (E). If Coke enters, Coke and Pepsi P simultaneously choose between acting
tough (T) and accommodating (A), with payoff vectors (Coke, Pepsi): (−2, −1) after (T, T),
(0, −3) after (T, A), (−3, 1) after (A, T), and (1, 2) after (A, A). Coke's post-entry decision nodes
are connected by a dashed line, since Coke does not observe Pepsi's choice.]
In this game SC = {OT, OA, ET, EA} and SP = {T, A} , and hence we have the following strate-
gic form:
T A
OT 0, 5 0, 5
OA 0, 5 0, 5
ET −2, −1 0, −3
EA −3, 1 1, 2
There are three Nash equilibria of this game: (OT, T ) , (OA, T ) , (EA, A). In the second Nash
equilibrium Coke is supposed to accommodate and Pepsi is supposed to act tough, following Coke
entering the market. Is that reasonable? In other words, suppose the game actually reached that
stage, that is, Coke actually entered. Now, is (A, T) a reasonable outcome? One way of asking the
same question is to check if both players are acting rationally, i.e., best responding to each other’s
strategies, conditional upon Coke entering the market. Notice that conditional upon Coke entering
the market we have the following “game”
[Figure 7.12: The post-entry subgame of the Entry Game. Pepsi P chooses T or A; Coke C, without
observing Pepsi's choice, also chooses T or A. The payoff vectors (Coke, Pepsi) are (−2, −1) if
both act tough, (−3, 1) if Coke accommodates while Pepsi acts tough, (0, −3) if Coke acts tough
while Pepsi accommodates, and (1, 2) if both accommodate.]
Pepsi
T A
Coke T −2, −1 0, −3
A −3, 1 1, 2
If Coke anticipates Pepsi to play T, then its best response is T as well, not A. (Neither is T a best
response for Pepsi to A.) Therefore, to the extent that we regard only Nash equilibrium outcomes
as reasonable, we conclude that (A, T) is not reasonable. In contrast, the post-entry behavior of
both players is rational in the equilibria (OT, T) and (EA, A).
A subgame of an extensive form game is a portion of the game tree with the following properties:
1. it starts at a single decision node;
2. it contains every successor of that node; and
3. if it contains a node in an information set, then it contains all the nodes in that information
set.
It is conventional to treat the entire game as a subgame and call all the other subgames proper
subgames. For example, the entry-game given in figure 7.11 has two subgames: the game itself
and the subgame which starts after Coke enters the market. Of course, only the latter is a proper
subgame.
Given a subgame g, let us denote the restriction of a strategy si to that subgame g by si |g . For
example, if we denote the post-entry subgame in the entry-game by e (this subgame is given in
figure 7.12), then OT |e = T, EA|e = A, etc.
Definition. A strategy profile s∗ is a subgame perfect equilibrium (SPE) of an extensive form game
if, for every subgame g of the game, the restriction s∗|g is a Nash equilibrium of g.
The post-entry subgame of the entry-game has two Nash equilibria, (T, T) and (A, A). Therefore,
there are two SPE of the entry-game: (OT, T) and (EA, A).
We can now obtain a better insight into the difference between subgame perfect equilibrium
(or backward induction equilibrium) and Nash equilibrium by using the language of subgames. We
first have to distinguish between subgames that can be reached by a strategy profile and those that
cannot be reached. A subgame can be reached under the strategy profile s ∈ S if, when the strategy
profile is implemented, the initial node of the subgame will actually be reached. Otherwise, we
say that the subgame cannot be reached under the strategy profile s. A strategy profile s∗ is a Nash
equilibrium if every player plays a best response to the strategies of the other players in every
subgame that can be reached under s∗ . In contrast, a strategy profile s∗ is a SPE if every player
plays a best response to the strategies of the other players in every subgame, i.e., even in those
subgames that cannot be reached under s∗ . In other words, Nash equilibrium demands rationality
in only those subgames that can be reached in equilibrium, whereas SPE demands rationality in
every subgame, and this latter form of rationality is called sequential rationality.
As an exercise, consider the game in Figure 7.3 and find its Nash equilibria and SPE. Verify that
there are Nash equilibria in which one of the players does not behave sequentially rationally, whereas
in all SPE both players act sequentially rationally.
Chapter 8

Extensive Form Games: Applications
8.1 Bargaining
Bargaining has been one of the most elusive areas in economics. Many great economists have
declared that standard tools of economics cannot predict a unique outcome to bargaining situations
because the outcome is likely to be determined by many non-economic factors, such as psychology,
culture, history, political power, etc. One important solution to the problem has been given by John
F. Nash, Jr., who, in his 1950 paper, took a cooperative approach and showed that there is a unique
solution that satisfies certain “desirable” properties. Cooperative game theory assumes that players
can sign binding contracts, whereas non-cooperative game theory rules out this possibility.
Nash assumed that two people are bargaining over a set of possible outcomes, denoted by
S ⊆ R^2_+. If the individuals fail to reach an agreement, they both receive zero; the point (0, 0) is
called the disagreement point. Nash looked for solutions that satisfy the following properties:
Axiom 8.1 (Pareto Efficiency (PAR)). No one can improve upon the solution without making the
other person worse off.
Axiom 8.2 (Symmetry (SYM)). Both individuals receive the same outcome if the bargaining set is
symmetric.
Axiom 8.3 (Invariance (INV)). If the bargaining set is contracted or expanded by some factor, the
shares are also contracted or expanded by the same factor.
Axiom 8.4 (Independence of Irrelevant Alternatives (IIA)). Adding alternatives to the bargaining
set that have not been chosen does not change the solution.
Nash showed that the unique solution that satisfies these properties is given by
f (S) = arg max_{(x1, x2)∈S} x1 x2,
i.e., the point of the bargaining set that maximizes the product of the two individuals' payoffs.
Let us now take a non-cooperative approach, starting with the simplest bargaining game, the
ultimatum game. Two players, A and B, bargain over a cake of size 1. Player A makes an offer xA ∈ [0, 1] to
player B. If player B accepts the offer (Y ), agreement is reached and player A receives xA and
player B receives 1 − xA . If player B rejects the offer (N), both players receive a payoff of zero.
This can be modelled as an extensive form game with perfect information. However, it is not a
finite game as A has infinitely many actions.
We can use backward induction to find the subgame perfect equilibrium (SPE) of this game.
Consider a subgame that follows A’s offer of x. If x < 1, then B’s optimal action is to accept the
offer. If, on the other hand, x = 1, then both accepting and rejecting are optimal. First, suppose that
B accepts any offer x ∈ [0, 1]. In this case, clearly, the optimal offer by A is x∗A = 1. So, one SPE is
(1, s∗2 (x)) where
s∗2 (x) = Y for all x ∈ [0, 1].
Now suppose B accepts only offers that are strictly smaller than one. What is A's optimal offer in
this case? Could offering 1 be optimal? No, because this offer would be rejected by B, resulting in a
payoff of zero for A, whereas A could deviate, offer something smaller than one, and obtain a positive
payoff instead. Could offering something strictly smaller than 1 be optimal? No! To see why,
suppose x < 1 is an optimal offer. This will be accepted by B and give player A a payoff of x.
However, player A can deviate and offer x + ε, with 0 < ε < 1 − x and hence x + ε < 1, which will
be accepted by B and give player A a payoff of x + ε which is strictly larger than x. Therefore, the
unique SPE is (1, s∗2 (x)) where
s∗2 (x) = Y for all x ∈ [0, 1]
Preliminaries
Two players, A and B, bargain over a cake of size 1. At time 0 player A makes an offer
xA ∈ [0, 1] to player B. If player B accepts the offer, agreement is reached and player A receives xA
and player B receives 1 − xA . If player B rejects the offer, she makes a counteroffer xB ∈ [0, 1] at
time 1. If this counteroffer is accepted by player A, then player B receives xB and player A receives
1 − xB . Otherwise, player A makes another offer at time 2. This process continues indefinitely until
a player accepts an offer.
If the players reach an agreement at time t on a partition that gives player i a share xi of the
cake, then player i's payoff is δ_i^t xi, where δi ∈ (0, 1) is player i's discount factor. If players never
reach an agreement, then each player's payoff is zero.
We will first characterize the SPE that satisfy the following two properties, and then show that all
SPE have these properties:
1. (No delay) Equilibrium offers are accepted immediately.
2. (Stationarity) In equilibrium, a player always makes the same offer.
Let x∗i denote the equilibrium offer by player i. Given properties 1 and 2, the present
value to player B of rejecting an offer is δB x∗B. This implies that in equilibrium
1 − x∗A = δB x∗B. (8.1)
Similarly,
1 − x∗B = δA x∗A. (8.2)
Solving these two equations yields
x∗A = (1 − δB) / (1 − δA δB) (8.3)
and
x∗B = (1 − δA) / (1 − δA δB). (8.4)
Thus, there exists at most one SPE satisfying the two properties. But we still have to verify
there exists such an equilibrium. Consider the strategy profile (s∗A , s∗B ) defined as
Player A: Always offer x∗A , accept any xB with 1 − xB ≥ δA x∗A
Player B: Always offer x∗B , accept any xA with 1 − xA ≥ δB x∗B .
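Before turning to the proofs, it is instructive to check the construction numerically. The small Python sketch below is our own illustration (the discount factors 0.9 and 0.8 are arbitrary sample values): it computes the offers (8.3)-(8.4) and verifies the indifference conditions (8.1) and (8.2).

```python
def rubinstein(dA, dB):
    xA = (1 - dB) / (1 - dA * dB)   # equation (8.3)
    xB = (1 - dA) / (1 - dA * dB)   # equation (8.4)
    return xA, xB

dA, dB = 0.9, 0.8
xA, xB = rubinstein(dA, dB)
print(xA, 1 - xA)                          # A's share and B's share
print(abs((1 - xA) - dB * xB) < 1e-12)     # (8.1): 1 - xA* = dB xB*
print(abs((1 - xB) - dA * xA) < 1e-12)     # (8.2): 1 - xB* = dA xA*
```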
Before we prove that this strategy profile is a SPE we state the following proposition
Proposition 8.1. (One-Deviation Property). Let Γ be a finite horizon extensive form game with
perfect information. The strategy profile s∗ is a SPE of Γ if and only if, for every player i ∈ N and
for every subgame g of Γ in which player i moves at the initial node, there exists no profitable
deviation by player i which differs from s∗i only in the action specified at the initial node of g.
Remark 8.1. It is possible to show that if the payoffs of an infinite horizon game satisfy a certain
regularity condition (continuity at infinity; see Fudenberg and Tirole, 1991, p. 110), then the
one-deviation property holds for infinite horizon games as well.
Proposition 8.2. The one-deviation property holds for the Rubinstein bargaining game.
Proposition 8.3. (s∗A , s∗B ) is a SPE of the alternating offers bargaining game.
Proof. Consider any period when A has to make an offer. Her payoff to s∗A is x∗A . If A offers
xA < x∗A then
1 − xA > δB x∗B
by equation (8.1), and hence B accepts any such offer, which gives A a payoff less than x∗A. If she
offers xA > x∗A, then B rejects and offers x∗B, which A accepts, giving her a payoff of
δA (1 − x∗B) = δ²A x∗A < x∗A
by equation (8.2). Therefore, there is no profitable one-shot deviation in any subgame starting with
her offer.
Now, consider subgames starting with player A responding to an offer xB. If player A rejects an offer
with 1 − xB ≥ δA x∗A, then she will offer x∗A next period and get δA x∗A ≤ 1 − xB, so rejecting is not a
profitable deviation; similarly, accepting an offer with 1 − xB < δA x∗A would yield less than δA x∗A.
By a symmetric argument, it follows that player B’s strategy is optimal in every subgame as
well.#
Theorem 8.1. The alternating offers bargaining game has a unique SPE, namely the strategy
profile (s∗A, s∗B) defined above.
Proof. Let Γi denote any subgame that starts with player i making an offer. Clearly, all such
subgames are strategically equivalent (since preferences are stationary, i.e., do not depend on
calendar time) and thus all have the same SPE. Let Gi denote the set of SPE payoffs of player i in
any subgame Γi and let1
Mi = max Gi,
mi = min Gi.
Lemma 8.1. There exists a unique SPE payoff profile of ΓA, given by (x∗A, 1 − x∗A), and a unique
SPE payoff profile of ΓB, given by (x∗B, 1 − x∗B).
1 It is possible that max Gi and min Gi do not exist. For example, if Gi = (0, 1), then max Gi and min Gi do not
exist. However, the theorem is still true with max and min replaced by sup (supremum) and inf (infimum).
Proof of Lemma 8.1.
Claim 1. mi ≥ 1 − δj Mj, i ≠ j.
Proof of Claim 1. First note that player B accepts any offer xA such that 1 − xA > δB MB. So,
if there existed an equilibrium of ΓA yielding uA < 1 − δB MB, player A could profitably deviate
from it by offering an xA with uA < xA < 1 − δB MB. ∥
Claim 2. Mi ≤ 1 − δj mj, i ≠ j.
Proof of Claim 2. Player B rejects any offer which gives her less than δB mB, and following a
rejection she never offers A more than δA MA. Therefore,
MA ≤ max { 1 − δB mB , δ²A MA },
where the first term is the most A can get if B accepts her offer and the second is the most she can
get if B rejects it. Since δ²A MA < MA, it follows that
MA ≤ 1 − δB mB. ∥
Applying Claims 1 and 2 to both players yields
mA ≥ 1 − δB MB (8.5)
mB ≥ 1 − δA MA (8.6)
MA ≤ 1 − δB mB (8.7)
MB ≤ 1 − δA mA (8.8)
By (8.6), 1 − δB mB ≤ 1 − δB (1 − δA MA), so (8.7) implies
MA ≤ 1 − δB (1 − δA MA),
or
MA ≤ (1 − δB) / (1 − δA δB). (8.9)
By (8.8), we have 1 − δB MB ≥ 1 − δB (1 − δA mA), so (8.5) implies
mA ≥ 1 − δB (1 − δA mA),
or
mA ≥ (1 − δB) / (1 − δA δB). (8.10)
Since MA ≥ mA, (8.9) and (8.10) imply that
MA = mA = (1 − δB) / (1 − δA δB). (8.11)
Similarly,
MB = mB = (1 − δA) / (1 − δA δB). (8.12)
Therefore, the unique payoff profile in ΓA is (x∗A , 1 − x∗A ) and the unique payoff profile in ΓB is
(x∗B , 1 − x∗B ).∥
We can now complete the proof of the theorem by using Lemma 8.1. We first show that all
equilibrium offers are accepted in any SPE. Suppose there exists a SPE in which player A's offer is
rejected. By Lemma 8.1, A's equilibrium payoff in this subgame is x∗A. But, also by Lemma 8.1, A's
payoff in the subgame following the rejection is 1 − x∗B, and hence the equilibrium payoff of A in the
subgame in which her offer is rejected must be δA (1 − x∗B). But this implies
x∗A = δA (1 − x∗B) = δ²A x∗A < x∗A,
a contradiction.
Second, we show that in all SPE A offers x∗A and B offers x∗B. Suppose A offers xA > x∗A in a SPE
of ΓA. This offer must be rejected by B in equilibrium, because otherwise B would get less than
1 − x∗A in that subgame, which contradicts Lemma 8.1. This, in turn, contradicts the fact that no
equilibrium offer is rejected. Now suppose A offers xA < x∗A in a SPE of ΓA. This offer, too, must be
rejected by B, because otherwise A would get less than x∗A in that subgame, contradicting Lemma
8.1. So, A must be offering x∗A in all SPE. Similarly, B must always be offering x∗B.
Since there is a unique SPE satisfying these properties, as proved in Proposition 8.3, the proof is
complete.#
The share of player A in the unique SPE is therefore
πA = x∗A = (1 − δB) / (1 − δA δB)
and that of B is
πB = 1 − x∗A = δB (1 − δA) / (1 − δA δB),
and hence the share of player i is increasing in δi and decreasing in δj. The bargaining power comes
from patience: the more patient a player is, the higher her share.
If the players are equally patient, i.e., δA = δB = δ, then
πA = 1/(1 + δ) > δ/(1 + δ) = πB,
and
lim_{δ→1} πA = lim_{δ→1} πB = 1/2.
Thus, as δ → 1, the SPE outcome of the Rubinstein game converges to the Nash bargaining solution,
i.e., the equal split (1/2, 1/2).
Chapter 9

Repeated Games
9.1 Motivation
Many interactions in the real world have an ongoing structure and in many such situations
people consider their long-term payoffs in addition to the short-term gains. This might lead people
to behave in ways different from how they would if the interactions were one-shot rather than
long-term. Consider the following prisoners’ dilemma game.
C D
C 2, 2 0, 3 .
D 3, 0 1, 1
Remember that in this game defecting (D) for both players is the unique Nash equilibrium (and
also the strictly dominant strategy equilibrium). So, if this game is played only once, game theory
strongly suggests that the outcome will be (D, D) , which is suboptimal, since the cooperative out-
come (C,C) gives both players a strictly higher payoff. However, if this game is played repeatedly
between two players, then they may be inclined to cooperate, rather than defect, if they think they
will be punished in the future for defecting.
The theory of repeated games analyzes the types of outcomes, behavior, and norms that can be
supported as Nash equilibrium or subgame perfect equilibrium outcomes in repeated interactions.
Rather than presenting this large body of literature, we will present some examples and indicate
how they generalize to other repeated interactions.
9.2 Preliminaries
Let G = (N, (Ai ) , (ui )) be an n-player finite strategic form game. We will call G the stage game.
For example, G might be the prisoners’ dilemma (PD) game given above.
An infinitely repeated game is defined by the following elements. The stage game is played
at each discrete time period t = 1, 2, . . . , and at the end of each period the action choice of each
player is revealed to everybody. A history in time period t is simply a sequence of action profiles
from period 1 through period t − 1, i.e.,
h^t = (a^0, a^1, a^2, . . . , a^{t−1}), for t = 1, 2, . . . ,
where we take a^0 to be the empty history (i.e., nothing has happened so far). For example, in the
PD game a possible fifth period history is (a^0, (C,C), (C,C), (D,C), (D, D)). We will usually omit
the empty history in this specification as a convention, and write ((C,C), (C,C), (D,C), (D, D)).
The set of period t histories is then given by
H^t = A^{t−1}, for t = 1, 2, . . . ,
where we again set A^0 = {a^0}. The set of all second period histories in the PD game is
H^2 = A = {(C,C), (C, D), (D,C), (D, D)},
the set of all third period histories is
H^3 = A² = A × A = {(C,C), (C, D), (D,C), (D, D)} × {(C,C), (C, D), (D,C), (D, D)},
etc. A history is terminal if and only if it is infinite; in other words, a terminal history is of the form
(a^0, a^1, a^2, . . .). Notice that each nonterminal history starts a subgame in the repeated game.
After any nonterminal history each player i ∈ N simultaneously chooses an action in Ai. There-
fore, a pure strategy si of player i is a sequence of functions that assign an action in Ai to every
history h^t; si (h^t) denotes the action choice of player i after history h^t. Therefore, a strategy for
player i is given by
si = ( si (a^0), si (a^0, a^1), . . . , si (a^0, a^1, . . . , a^{t−1}), . . . ).
Consider, for instance, the following strategy in the repeated PD game:
si (h^t) = C if t = 1 or if the opponent has played C in every period of h^t, and si (h^t) = D otherwise.
This strategy instructs player i to start by playing C and to continue doing so unless the opponent
has played D in the past, in which case player i plays D forever. (This strategy is also called the
grim-trigger strategy, because defection is triggered by the opponent's defection, and grim because
the punishment is unrelenting.) We denote the set of all pure strategies for player i by Si. The set of
all strategy profiles is denoted by S. A strategy profile s = (s1, . . . , sn) induces a terminal history in
the obvious manner. For example, if both players adopt the grim-trigger strategy defined above, the
outcome will be cooperation in every period.
The last thing that we have to define is the payoff functions. Since the only terminal histories
are the infinite histories, and each period's payoff is the payoff from the stage game, we have to
describe how players evaluate infinite streams of payoffs ui (a^1), ui (a^2), . . . . Although there are
alternative specifications in the literature, we will concentrate on the case of discounting, where
players discount the period payoffs using a discount factor δ ∈ (0, 1). The payoff of player i to the
infinite sequence (a^1, a^2, . . .) is given by
(1 − δ) ∑_{t=1}^∞ δ^{t−1} ui (a^t).
The normalization factor (1 − δ) serves to measure the stage game and the repeated game payoffs
in the same units. For example, the payoff to perpetual cooperation is given by
(1 − δ) ∑_{t=1}^∞ δ^{t−1} × 2 = 2.
More generally, the payoff of player i to a strategy profile s is
Ui (s) = (1 − δ) ∑_{t=1}^∞ δ^{t−1} ui (a^t),
where a^t is the period t action profile if players comply with the strategy profile s. For example, if
s is the grim-trigger strategy profile, then a^t = (C,C) for every t, and hence Ui (s) = 2.
Notice that each history starts a new subgame, and hence
for any strategy profile s and history h^t, we can compute the players' expected payoffs from period
t onward. We call these the continuation payoffs and renormalize so that the continuation payoffs
from period t on are measured in period t units:
Ui (s|h^t) = (1 − δ) ∑_{τ=t}^∞ δ^{τ−t} ui (a^τ)
if the strategy profile s induces the sequence of actions (a^t, a^{t+1}, . . .) starting from history h^t.
Let us denote the resulting infinitely repeated game by Gδ .
9.3 Equilibria of Infinitely Repeated Games

Definition. The strategy profile s is a Nash equilibrium of the repeated game Gδ if, for all i ∈ N,
Ui (si, s−i) ≥ Ui (s′i, s−i) for all s′i ∈ Si.
Let us consider some of the Nash equilibria of the repeated PD game. First of all, playing D after
every history is clearly a Nash equilibrium: whatever you do, your opponent will play D, and the
best response to D is D as well. Secondly, let us check if the grim-trigger strategy profile is a Nash
equilibrium.
Suppose player 2 adopts the grim-trigger strategy. If player 1 also follows the grim-trigger
strategy, then the outcome will be cooperation in every period, yielding player 1 the payoff stream
2, 2, 2, . . . ,
whose average discounted value is 2.
Now, consider the best possible deviation for player 1. For such a deviation to be profitable it
must result in a sequence of action profiles which has defection by some players in some period.
This, in turn, implies that player 1 must be defecting at some period (since player 2 is following
grim-trigger she will not defect unless player 1 defected in the past). Let T + 1, T = 0, 1, . . . , be the
first period in which player 1 defects. Therefore, we have the following sequence of action profiles
until period T + 1:
(C,C), (C,C), . . . , (C,C), (D,C),
where (C,C) is played in each of the first T periods and (D,C) in period T + 1.
Since player 2 is following the grim-trigger strategy, she will play D in period T + 2 and after; the
best thing that player 1 can do in that case is to play D from period T + 2 onwards as well.
Therefore, the best deviation by player 1 generates the following sequence of payoffs for player 1:
2 in each of the first T periods, 3 in period T + 1, and 1 in every period thereafter,
whose average discounted value is
2 (1 − δ^T) + 3δ^T (1 − δ) + δ^{T+1} = 2 + δ^T (1 − 2δ).
This is smaller than or equal to 2 if and only if δ ≥ 1/2. Therefore, if players are patient enough,
i.e., if δ ≥ 1/2, then the grim-trigger strategy profile is a Nash equilibrium of the infinitely
repeated PD game.
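The threshold δ ≥ 1/2 is easy to confirm numerically. The sketch below is ours: it evaluates the deviation stream directly, truncating the infinite sum at a long horizon, and compares the result with the closed form 2 + δ^T (1 − 2δ) derived above.

```python
def deviation_value(delta, T, horizon=5000):
    # payoff stream of the best deviation: 2 for T periods, 3 once, 1 forever
    stream = [2] * T + [3] + [1] * (horizon - T - 1)
    return (1 - delta) * sum(delta ** t * x for t, x in enumerate(stream))

for delta in (0.4, 0.5, 0.6):
    approx = deviation_value(delta, T=3)
    exact = 2 + delta ** 3 * (1 - 2 * delta)
    print(delta, round(approx, 6), round(exact, 6))  # exceeds 2 only if delta < 1/2
```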
Definition. The strategy profile s is a subgame perfect equilibrium of the repeated game Gδ if,
for all i ∈ N and all h^t ∈ H,
Ui (si, s−i|h^t) ≥ Ui (s′i, s−i|h^t) for all s′i ∈ Si.
Proposition 9.1 (One-Shot Deviation Property). A strategy profile s∗ is a SPE of Gδ if and only
if no player can gain by changing her action after any history, keeping both the strategies of the
other players and the remainder of her own strategy constant.
Now that we have the one-deviation property at hand, we may analyze the SPE of the repeated
prisoners' dilemma game. Let us first consider a finitely repeated version. By backward induction,
it is easy to see that the only SPE in this case is defection (D) in every period. This is because D is
strictly dominant in the last period T, and hence both players play D after any history h^T. Now, in
period T − 1 neither player can gain in period T by cooperating, and they lose in period T − 1, so
that play in T − 1 will be defection as well, after any history h^{T−1}. Continuing in this manner, we
conclude that both players will play D after every history h^t, t = 1, 2, . . . , T, in the unique SPE. [It
turns out that this is also the unique Nash equilibrium outcome. Prove this as an exercise.]
Let us now consider the infinitely repeated version. One SPE is given by
$$s_i\!\left(h^t\right) = D \quad \text{for all } h^t \in H,\ t = 1, 2, \ldots
$$
for i = 1, 2. This clearly is subgame perfect (check using the one-shot deviation property).
Now, let us consider the grim-trigger strategy. We have to check whether the grim trigger
strategy satisfies the one-shot deviation property after every possible history. Consider the history
h² = (C, D), i.e., in the first period player 2 defected. Let's see if player 2 has a profitable one-shot deviation. If player 2 plays according to the grim-trigger strategy, given that player 1 sticks to that strategy as well, the following sequence of action profiles will result (starting from period 2):
$$(D, C), (D, D), (D, D), \ldots$$
so player 2 receives the payoff stream 0, 1, 1, . . . , whose average discounted value is δ. If she deviates and plays D in the second period (after history (C, D)), keeping the rest of her strategy the same, she will get a payoff of 1 in every period, which has an average discounted value of 1. Therefore, this is a profitable deviation since δ < 1, and the
grim-trigger strategy profile is not a subgame perfect equilibrium.
We may, however, modify the grim-trigger strategy slightly and obtain a SPE. This strategy
profile, which we will call grim-trigger II, is given by
$$s_i^*\!\left(h^t\right) = \begin{cases} C, & t = 1,\\ C, & h^t = \left((C,C), (C,C), \ldots, (C,C)\right),\\ D, & \text{otherwise,}\end{cases}$$
for i = 1, 2. The difference in this strategy is that a player defects if there has been any defection in the past, independent of the identity of the defector. We claim that s* is a SPE for all δ ≥ 1/2.
Proof. Consider first all histories of the type h^t = ((C,C), (C,C), . . . , (C,C)), i.e., all histories without any defection. For player 1, the conditional payoff to playing s*_1 is
$$U_1\!\left(s^* \mid h^t\right) = (1-\delta)\sum_{\tau=t}^{\infty} \delta^{\tau-t} \times 2 = 2,$$
whereas the best one-shot deviation, playing D in period t, yields 3 in period t and 1 in every period thereafter, i.e., (1 − δ)3 + δ = 3 − 2δ, which is at most 2 for all δ ≥ 1/2. Similarly, let h^t be a history other than ((C,C), . . . , (C,C)). Then
$$U_1\!\left(s^* \mid h^t\right) = (1-\delta)\sum_{\tau=t}^{\infty} \delta^{\tau-t} \times 1 = 1,$$
whereas a one-shot deviation to C yields 0 in period t and 1 thereafter, i.e., δ < 1. Therefore, there is no profitable one-shot deviation after any history, and s* is a SPE for all δ ≥ 1/2. □
Grim-trigger strategies punish a defection by defecting forever. Alternatively, consider a forgiving trigger strategy, which responds to a defection by playing D for k periods and then returns to cooperation. In the cooperative phase, the best one-shot deviation yields 3 in the current period, 1 in each of the k punishment periods, and 2 thereafter, i.e., an average discounted value of
$$(1-\delta)\,3 + (1-\delta)\left(\delta + \delta^2 + \cdots + \delta^{k}\right) + 2\,\delta^{k+1} = 3 - 2\delta + \delta^{k+1}.$$
Therefore, there is no profitable one-shot deviation in the cooperative phase if and only if
$$3 - 2\delta + \delta^{k+1} \leq 2,$$
or
$$\delta^{k+1} - 2\delta + 1 \leq 0.$$
If k = 1, this condition becomes
$$\delta^2 - 2\delta + 1 = (\delta - 1)^2 \leq 0,$$
which can never hold since δ < 1: a one-period punishment cannot sustain cooperation. If, however, k = 2, then the condition will be satisfied for any δ ≥ 0.62. In general, as the length of the punishment phase increases, the lower bound on δ decreases and converges to 1/2 as k → ∞.
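To see the threshold fall with k numerically, here is a minimal bisection sketch (the function name and the grid of k values are ours, chosen for illustration) that solves δ^{k+1} − 2δ + 1 = 0 for the interior root on (1/2, 1):

```python
def threshold(k, lo=0.5, hi=1.0, tol=1e-10):
    """Interior root of g(d) = d**(k+1) - 2d + 1 on (1/2, 1), for k >= 2.

    g is positive at d = 1/2 and negative between the interior root and 1
    (where it vanishes again), so we bisect keeping g(lo) > 0 >= g(hi).
    """
    g = lambda d: d ** (k + 1) - 2 * d + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

for k in (2, 3, 5, 10, 50):
    print(k, round(threshold(k), 4))
# prints 0.618, 0.5437, 0.5087, ... decreasing toward 1/2
```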
We also have to check whether there is a profitable one-shot deviation in the punishment phase. Suppose there are k′ ≤ k periods left in the punishment phase. If player 1 complies with the forgiving trigger strategy, the following action profiles will be realized:
$$\underbrace{(D,D), \ldots, (D,D)}_{k' \text{ times}},\ (C,C), (C,C), \ldots$$
whose average discounted value is (1 − δ^{k′}) + 2δ^{k′} = 1 + δ^{k′}. A one-shot deviation to C yields 0 rather than 1 in the current period and cannot shorten the remaining punishment phase, so it strictly lowers this value. Clearly, following the forgiving trigger strategy is better in the punishment phase.
So far we have analyzed only the PD game to illustrate some of the results that can be obtained
in repeated games. The repeated games literature considers all possible games and characterizes
the set of possible outcomes that can be obtained in the Nash equilibria or SPE of repeated games.
Results, known as “folk theorems”, have shown that virtually any outcome can be supported as a
Nash and SPE outcome in infinitely repeated games, provided that the players are patient enough.
Let us consider another example, this time from industrial-organization theory. Consider a
Cournot duopoly model with inverse demand function
$$P(Q) = \begin{cases} a - Q, & Q \leq a,\\ 0, & Q > a,\end{cases}$$
where Q = Q1 + Q2 and cost functions Ci (Qi ) = cQi , i = 1, 2. The profit function of firm i is given
by
ui (Q1 , Q2 ) = Qi (P (Q1 + Q2 ) − c) .
Consider the following grim-trigger strategy: produce half the monopoly output in the first period, and continue to do so as long as both firms have produced that amount in every previous period; otherwise, produce the Cournot output forever. As an exercise, verify that this is a SPE.
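As a quick aid for the exercise, here is a small sketch (in Python; the variable names are ours, and demand is normalized to a = 1, c = 0, though any a > c gives the same threshold) computing the per-firm collusive, deviation, and Cournot profits and the critical discount factor for the grim-trigger profile:

```python
from fractions import Fraction

a, c = 1, 0                                # inverse demand P = a - Q, marginal cost c

q_coll = Fraction(a - c, 4)                # half of the monopoly output (a-c)/2
pi_coll = q_coll * (a - 2 * q_coll - c)    # collusive per-firm profit = (a-c)^2/8

q_dev = (a - c - q_coll) / 2               # best response to the rival's q_coll
pi_dev = q_dev * (a - q_dev - q_coll - c)  # deviation profit = 9(a-c)^2/64

q_cour = Fraction(a - c, 3)                # Cournot output
pi_cour = q_cour * (a - 2 * q_cour - c)    # Cournot per-firm profit = (a-c)^2/9

# grim trigger is an SPE iff (1 - d) * pi_dev + d * pi_cour <= pi_coll
delta_star = (pi_dev - pi_coll) / (pi_dev - pi_cour)
print(delta_star)                          # 9/17: cooperation needs d >= 9/17
```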
Chapter 10
Auctions
Many economic transactions are conducted through auctions. Governments sell treasury bills,
foreign exchange, publicly owned companies, mineral rights, and more recently airwave spectrum
rights via auctions. Art work, antiques, cars, and houses are also sold by auctions. Government
contracts are awarded by procurement auctions, which are also used by firms to buy inputs or to
subcontract work. Takeover battles are effectively auctions as well and auction theory has been
applied to areas as diverse as queues, wars of attrition, and lobbying contests.1
There are four commonly used and studied forms of auctions:
1. ascending-bid auction (also called the open, oral, or English auction): the price is raised
until only one bidder remains, and that bidder wins the object at the final price.
2. descending-bid auction (also called Dutch auction): the auctioneer starts at a very high price
and lowers it continuously until someone accepts the currently announced price. That bidder
wins the object at that price.
3. first-price sealed bid auction: each bidder submits her bid in a sealed envelope without seeing
others’ bids, and the object is sold to the highest bidder at her bid.
4. second-price sealed bid auction (also known as the Vickrey auction): bidders submit their bids in sealed envelopes; the highest bidder wins but pays the second highest bid.
Auctions also differ with respect to the valuation of the bidders. In a private value auction
each bidder's valuation is known only by the bidder herself, as could be the case, for example, in an
1 For a good introductory survey of auction theory, see Paul Klemperer (1999), "Auction Theory: A Guide to the Literature," Journal of Economic Surveys, 13(3), 227–286.
artwork or antique auction. In a common value auction, the actual value of the object is the same for everyone, but bidders have different private information regarding that value. For example, the value of an oil tract or a company may be the same for everybody, but different bidders may have
different estimates of that value.
We will analyze sealed bid auctions, not only because they are simpler to analyze, but also because in the private values case the first-price sealed bid auction is strategically equivalent to the descending-bid auction, and the second-price sealed bid auction is strategically equivalent to the ascending-bid auction.
[Figure 10.1: Auction types — open-cry (English, Dutch) versus sealed-bid (first-price, second-price).]
10.1 Independent Private Values
Previously, we looked at two forms of auctions, namely first price and second price auctions, in a complete information framework in which each bidder knew the valuations of every other bidder. In this section we relax the complete information assumption and revisit these two forms of auctions. In particular, we will assume that each bidder knows only her own valuation,
and the valuations are independently distributed random variables whose distributions are common
knowledge.
The following elements define the general form of an auction that we will analyze:
• a set of bidders N = {1, . . . , n}, each of whom knows her own valuation v_i;
• an action set: each bidder submits a bid a_i ≥ 0;
• a belief function: player i believes that her opponents' valuations are independent draws from a distribution function F that is strictly increasing and continuous on [v, v̄];
• a payoff function: if m bidders tie at the highest bid, each of them receives
$$u_i(a, v_i) = \frac{v_i - P(a)}{m},$$
and the losing bidders receive zero, where P(a) is the price paid by the winner if the bid profile is a. Notice that in the case of a tie the object is divided equally among all winners.
10.1.1 Second Price Auctions
In this design, the highest bidder wins and pays a price equal to the second highest bid. Although there are many Bayesian equilibria of second price auctions, bidding one's own valuation v_i is weakly dominant for each player i. To see this, let x be the highest of the other bids and consider bidding a′_i < v_i, bidding v_i, or bidding a′′_i > v_i. Depending upon the value of x, the following table gives the payoffs to each of these actions by i (ignoring ties):

             x < a′_i     a′_i < x < v_i     v_i < x < a′′_i     x > a′′_i
bid a′_i     v_i − x           0                   0                 0
bid v_i      v_i − x        v_i − x                0                 0
bid a′′_i    v_i − x        v_i − x          v_i − x (< 0)           0

By bidding less than v_i, you sometimes lose when you should win (a′_i < x < v_i), and by bidding more than v_i, you sometimes win when you should lose (v_i < x < a′′_i); in every other case the three bids do equally well.
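The dominance argument can also be checked by brute force. The following sketch is our own illustration (the valuation 0.6, the two alternative bids, and the uniform rival-bid distribution are arbitrary choices); it verifies that truthful bidding is never worse, and sometimes strictly better, than bidding above or below one's value:

```python
import random

def payoff(bid, rivals, v):
    """Second-price auction payoff: win iff bid beats the highest rival bid x,
    in which case the winner pays x."""
    x = max(rivals)
    return v - x if bid > x else 0.0

rng = random.Random(0)
v, under, over = 0.6, 0.4, 0.8          # valuation and two alternative bids
strictly_better = 0
for _ in range(100_000):
    rivals = [rng.random() for _ in range(3)]
    truthful = payoff(v, rivals, v)
    assert truthful >= payoff(under, rivals, v)   # never worse than underbidding
    assert truthful >= payoff(over, rivals, v)    # never worse than overbidding
    strictly_better += (truthful > payoff(under, rivals, v)
                        or truthful > payoff(over, rivals, v))
print(strictly_better, "draws where truthful bidding did strictly better")
```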
10.1.2 First Price Auctions
In first price auctions, the highest bidder wins and pays her own bid. Let us denote the bid of a player with type v_i by β_i(v_i) and look for symmetric equilibria, i.e., β_i(v) = β(v) for all i ∈ N. First, although we will not attempt to do so here, it is possible to show that the equilibrium strategies β_i(v_i), and hence β(v), are strictly increasing and continuous on [v, v̄] (see Fudenberg and Tirole, 1991). So, let's assume that they are, and verify it once we locate a candidate equilibrium.
The expected payoff of a player with type v who bids b, when all the others are bidding according to β, is given by
$$(v-b)\left[F\!\left(\beta^{-1}(b)\right)\right]^{n-1},$$
because the v_i are independently distributed, so the probability of winning with a bid of b is the probability that all n − 1 opponents' valuations lie below β^{−1}(b). The first order condition for maximizing the expected payoff is
$$-\left[F\!\left(\beta^{-1}(b)\right)\right]^{n-1} + (v-b)(n-1)\left[F\!\left(\beta^{-1}(b)\right)\right]^{n-2} F'\!\left(\beta^{-1}(b)\right)\,\frac{1}{\beta'\!\left(\beta^{-1}(b)\right)} = 0,$$
where the last factor comes from the fact that β is almost everywhere differentiable (since it is strictly increasing) and from the inverse function theorem. For β(v) to be an equilibrium, the first order condition must hold when we substitute β(v) for b:
$$-F(v)^{n-1} + (v-\beta(v))(n-1)F(v)^{n-2}F'(v)\,\frac{1}{\beta'(v)} = 0,$$
or
$$\beta'(v)F(v)^{n-1} + (n-1)\,\beta(v)\,F'(v)\,F(v)^{n-2} = (n-1)\,v\,F'(v)\,F(v)^{n-2}.$$
The left hand side is the derivative of β(v)F(v)^{n−1}, so we can integrate both sides; for the uniform distribution on [0, 1] (F(v) = v) this yields
$$\beta(v) = v - \frac{\int_0^v x^{n-1}\,dx}{v^{n-1}} = v - \frac{1}{v^{n-1}}\,\frac{v^n}{n} = \frac{n-1}{n}\,v.$$
Uniform example solved explicitly: Let's look for a symmetric equilibrium of the form β(v) = av. The expected payoff of a player with type v who bids b ≤ a, when all the others are bidding according to β, is (v − b)(b/a)^{n−1}, and the first order condition reduces to
$$(v - b)(n - 1) = b,$$
which is solved at
$$b = \frac{n-1}{n}\,v.$$
This is consistent with the conjectured linear form, with a = (n − 1)/n, confirming the solution found above.
10.1.3 All-Pay Auctions
Consider an auction in which the highest bidder wins the object but every bidder pays her own bid. This could model bribery, political contests, Olympic competitions, wars of attrition, etc. Again, let's look for a symmetric equilibrium, β_i(v) = β(v) for all i ∈ N, with valuations uniform on [0, 1]. The expected payoff of a player with type v who bids b, when all the others are bidding according to β, is
$$v\left(\beta^{-1}(b)\right)^{n-1} - b,$$
since the bid is paid whether or not the bidder wins. The first order condition is
$$-1 + v\,(n-1)\left(\beta^{-1}(b)\right)^{n-2}\frac{1}{\beta'\!\left(\beta^{-1}(b)\right)} = 0.$$
For β(v) to be an equilibrium, the first order condition must hold when we substitute β(v) for b:
$$-1 + v\,(n-1)\,v^{n-2}\,\frac{1}{\beta'(v)} = 0,$$
or
$$\beta'(v) = (n-1)\,v^{n-1},$$
which, together with the boundary condition β(0) = 0, integrates to
$$\beta(v) = \frac{n-1}{n}\,v^{n}.$$
Notice that this is the first price bid (n − 1)v/n scaled down by the probability v^{n−1} of winning.
10.2 Revenue Equivalence
In second price auctions each bidder bids her value and pays the second highest bid. Therefore, the expected revenue of the seller is the expected second highest value. In a first price auction, the highest bidder is the one with the highest value, and she bids a function of her value, namely \frac{n-1}{n}\,v_{\max} in our example above. Therefore, the seller's expected revenue in a first price and a second price auction depends on the expectation of the highest and the second highest value, respectively. Given that there are n bidders, each of whom has a value drawn independently from a common distribution, what are the expected values of the highest and the second highest values? Order statistics provide the answer.
Order Statistics
Suppose that v is a real-valued random variable with distribution function F and density func-
tion f . Also suppose that n independent values are drawn from the same distribution to form a
random sample (v1, v2, . . . , vn). Let v(k) denote the kth smallest of (v1, v2, . . . , vn) and call it the kth order statistic. In particular, v(n) is the highest and v(n−1) the second highest order statistic. Let Fk denote the distribution function of v(k). Let's start with the distribution function of v(n):
$$F_n(x) = \operatorname{prob}\!\left(v_{(n)} \leq x\right) = \operatorname{prob}\!\left(\text{all } v_i \leq x\right) = [F(x)]^n.$$
Similarly,
$$F_{n-1}(x) = \operatorname{prob}\!\left(v_{(n-1)} \leq x\right) = \operatorname{prob}\!\left(\text{either } n \text{ or } n-1 \text{ of the } v_i \text{ are } \leq x\right) = [F(x)]^n + n\left(1 - F(x)\right)[F(x)]^{n-1}.$$
In general,
$$F_k(x) = \sum_{j=k}^{n} \binom{n}{j}\,[F(x)]^{j}\,[1-F(x)]^{n-j}.$$
For the uniform distribution on [0, 1], F(x) = x, so the densities of the two highest order statistics are f_n(x) = n x^{n-1} and f_{n-1}(x) = n(n-1)(1-x)x^{n-2}. Therefore,
$$E[v_{(n)}] = \int_0^1 x\, n\,x^{n-1}\,dx = \frac{n}{n+1},$$
and
$$E[v_{(n-1)}] = \int_0^1 x\,(n-1)\,n\,(1-x)\,x^{n-2}\,dx = \frac{n-1}{n+1}.$$
Now, in a second price auction the expected revenue is the expected second highest value
$$E[R_2] = E[v_{(n-1)}] = \frac{n-1}{n+1},$$
and the revenue in the first price auction is the expected bid of the bidder with the highest value,
i.e.,
$$E[R_1] = \frac{n-1}{n}\,E[v_{(n)}] = \frac{n-1}{n}\cdot\frac{n}{n+1} = \frac{n-1}{n+1}.$$
Therefore, both auction forms generate the same expected revenue. This is an illustration of the following result.
Theorem (Revenue Equivalence). Any auction with independent private values drawn from a common distribution in which
1. the number of bidders is the same and the bidders are risk-neutral,
2. the object always goes to the buyer with the highest value, and
3. the bidder with the lowest value expects zero surplus,
yields the same expected revenue.
Therefore, all four types of auctions yield the same expected revenue for the seller in the case of independent private values and risk neutrality. This theorem also allows us to calculate the bidding strategies in other auctions. An all-pay auction, for example, satisfies the conditions of the theorem and hence must yield the same expected revenue.
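As an illustration, the following Monte Carlo sketch (our own; it hard-codes the uniform-case equilibrium bids derived above) checks that all three formats produce the same expected revenue (n − 1)/(n + 1):

```python
import random

def revenues(n=4, trials=200_000, seed=1):
    """Average seller revenue with values i.i.d. U[0,1] under the equilibrium
    bids derived above: second-price (bid = value), first-price ((n-1)v/n),
    all-pay ((n-1)/n * v**n, paid by every bidder)."""
    rng = random.Random(seed)
    r1 = r2 = rap = 0.0
    for _ in range(trials):
        vals = sorted(rng.random() for _ in range(n))
        r2 += vals[-2]                                  # second highest value
        r1 += (n - 1) / n * vals[-1]                    # winner's first-price bid
        rap += sum((n - 1) / n * v ** n for v in vals)  # everyone pays in all-pay
    print("theory (n-1)/(n+1) =", (n - 1) / (n + 1))
    print("second:", r2 / trials, "first:", r1 / trials, "all-pay:", rap / trials)

revenues()
```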
10.3 Common Values and the Winner's Curse
In a common value auction, the bidders all attach the same value to the object, but each bidder observes only a private signal about that value. Therefore, if a bidder wins the auction, i.e., submits the highest bid, it is likely that the other bidders received worse signals than the winner. In other words, the value of the object conditional on winning is smaller than the unconditional expected value. If this is not taken into account, the winner may end up bidding more than the object is actually worth, a situation known as the winner's curse.
As an example suppose v = t1 + t2 , where v is the common value but bidder i observes only
the signal ti . Assume that each ti is distributed independently and has a uniform distribution over
[0, 1]. This could, for example, be a takeover battle in which the value of the target company is the same for both bidders but each obtains an independent signal about that value. Suppose that the auction is a first-price sealed bid auction. Denote the strategies by b_i(t_i) and look for an equilibrium in which b_i(t_i) = at_i. The expected payoff of player 1, given that player 2 bids according to b_2(t_2) = at_2, is
$$\int_0^{b_1/a} \left(t_1 + t_2 - b_1\right)dt_2 = \frac{b_1}{a}\left(t_1 - b_1\right) + \frac{1}{2}\left(\frac{b_1}{a}\right)^{2}.$$
The first order condition gives b_1 = \frac{a}{2a-1}\,t_1, and consistency with b_1(t_1) = at_1 requires a/(2a − 1) = a, i.e., a = 1. Hence each bidder bids b_i(t_i) = t_i in equilibrium.
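A small simulation (ours; the signal value 0.6 is arbitrary) makes the winner's curse visible: conditional on winning under the symmetric equilibrium b(t) = t, the object is worth less than the unconditional estimate t₁ + 1/2:

```python
import random

rng = random.Random(0)
t1, trials = 0.6, 500_000        # bidder 1's signal (illustrative)
total = won_total = wins = 0.0
for _ in range(trials):
    t2 = rng.random()            # opponent's signal, U[0,1]
    v = t1 + t2                  # common value
    total += v
    if t2 < t1:                  # bidder 1 wins under symmetric bidding b(t) = t
        won_total += v
        wins += 1
print("E[v | t1]      ~", total / trials)      # about t1 + 0.5 = 1.1
print("E[v | t1, win] ~", won_total / wins)    # about 1.5 * t1 = 0.9
```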
As a comparison consider the independent private values case where vi = ti + 0.5. Note that this
is the expected value in the above model conditional upon observing ti (but not conditional upon
winning). Let's look for an equilibrium of the form b_i(t_i) = at_i + c. The expected payoff of player 1 to bidding b_1, given that player 2 is using the strategy at_2 + c, is
$$\frac{b_1 - c}{a}\left(t_1 + 0.5 - b_1\right),$$
which is maximized at
$$b_1 = \frac{1}{2}\,c + \frac{1}{2}\,t_1 + 0.25.$$
For b_1(t_1) = at_1 + c to be optimal, we must therefore have
$$\frac{1}{2}\,c + \frac{1}{2}\,t_1 + 0.25 = at_1 + c$$
for all t_1, which implies a = 1/2 and c = 1/2. Therefore,
$$b_1(t_1) = \frac{t_1}{2} + \frac{1}{2}, \qquad b_2(t_2) = \frac{t_2}{2} + \frac{1}{2}$$
constitutes a Bayesian Nash equilibrium (indeed the unique equilibrium) of this auction. (Notice that b_1 indeed lies between c and a + c, i.e., within the range of player 2's bids, for all t_1.) Also note that
$$\frac{t_1}{2} + \frac{1}{2} \geq t_1$$
for all t_1 ∈ [0, 1]; hence, relative to this private values benchmark, there is always underbidding in common value auctions. The reason is that in a common value auction the expected value of the object is smaller conditional upon winning, whereas in the private values case the value does not depend on the event of winning.
10.4 Auction Design³
The auctioneer may have different objectives in designing an auction. A government privatizing a company, for example, might want to generate the highest revenue from the auction, or might want to make sure that the sale is efficient, i.e., that the company goes to the bidder with the highest valuation for it, or to a bidder with some other characteristics. Auction theory helps in designing auctions by comparing different formats in terms of their equilibrium outcomes. For example, if the objective is to generate the highest revenue, different auction formats may be compared on the basis of their expected equilibrium revenues to find the best one. In the case of private, independent values with the same number of risk-neutral bidders, the revenue equivalence theorem says that the format does not matter, as long as the reserve price is set right.
Therefore, in cases where the values are correlated (as in common value auctions) or the bidders are risk averse, auction design becomes a challenging matter. In practice, collusion and entry deterrence also become relevant design problems. Collusion is relevant because revenue equivalence does not hold if there is collusion. Also, remember that the expected revenue from an auction increases in the number of bidders even when revenue equivalence holds; hence the auctioneer has an incentive to encourage entry and to prevent entry deterrence.
10.4.1 Need for a Reserve Price
If only one bidder comes to the auction, the seller will not make any money unless she sets a reserve price. What is the optimal reserve price? This is similar to the problem of a monopolist seller trying to find the optimal price. Assuming that the costs are sunk, so that the total payoff of the seller is given by the total revenue, the expected payoff from a reserve price p is
$$p\left(1 - F(p)\right),$$
since the object is sold, at the price p, only if the bidder's value exceeds p. The first order condition is
$$1 - F(p) - p\,f(p) = 0,$$
or
$$p = \frac{1 - F(p)}{f(p)}.$$
So, if F is uniform over [0, 1],
$$p = \frac{1-p}{1},$$
which implies that p = 1/2. So an optimal auction must set a reserve price of 0.5 in this particular case.
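A one-line numeric check (ours) confirms the uniform case: with a single bidder whose value is U[0, 1], expected revenue p(1 − p) peaks at p = 1/2:

```python
# grid-search expected revenue p*(1 - p) over reserve prices p in [0, 1]
best = max((p / 1000 * (1 - p / 1000), p / 1000) for p in range(1001))
print(best)  # (0.25, 0.5): revenue 1/4 at reserve price 1/2
```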
10.4.2 Common Values
We have seen above that the first-price sealed bid auction leads to lower bids in the case of common values. In general, if the signals received by the bidders are positively correlated, an ascending auction raises more expected revenue than the second-price sealed bid auction, which in turn beats the first-price auction.
10.4.3 Risk-Averse Bidders
In a second price auction risk aversion does not matter, i.e., the bidders still bid their values. In a first-price auction, however, an increase in risk aversion leads to higher bids, since bidding higher increases the probability of winning at the cost of reducing the value of winning. Therefore, a risk-neutral seller faced with risk-averse bidders prefers the first-price (or descending, Dutch) auction to the second-price (or ascending, English) auction.
3 This part is based on Paul Klemperer (2002), "What Really Matters in Auction Design," Journal of Economic Perspectives, 16(1), 169–189.
Collusion
A major concern in practical auction design is the possibility that the bidders explicitly or tacitly
collude to avoid higher prices. As an example consider a multi-unit (simultaneous) ascending
auction. In such an auction, bidders can use the early stages when prices are still low to signal who
should win which objects, and then tacitly agree to stop pushing prices up.
• 1999 German spectrum auction: any new bid had to exceed the previous one by at least 10 percent. Mannesman bid 18.18 mil. on blocks 1–5 and 20 mil. on blocks 6–10 (18.18 × 1.1 ≃ 20). This was like an offer to T-Mobil (the only other credible bidder) to bid 20 mil. on blocks 1–5 and not to bid on blocks 6–10. This is exactly what happened.
• 1996-97 U.S. spectrum auction: U.S. West was competing vigorously with McLeod for lot
number 378 - a licence in Rochester, Minnesota. U.S. West bid $313,378 and $62,378 for
two licences in Iowa in which it had earlier shown no interest, overbidding McLeod who
had seemed to be the uncontested high-bidder for these licenses. McLeod got the point that
it was being punished for competing in Rochester, and dropped out of that market. Since
McLeod made subsequent higher bids on the Iowa licenses, the “punishment” bids cost U.S.
West nothing.
• A related phenomenon can arise in one special kind of sealed-bid auction, namely a uniform-
price auction in which each bidder submits a sealed bid stating what price it would pay for
different quantities of a homogeneous good, e.g., electricity (that is, it submits a demand
function), and then the good is sold at the single price determined by the lowest winning
bid. In this format, bidders can submit bids that ensure that any deviation from a (tacit or
explicit) collusive agreement is severely punished: each bidder bids very high prices for
smaller quantities than its collusively agreed share. Then if any bidder attempts to obtain
more than its agreed share (leaving other firms with less than their agreed shares), all bidders
will have to pay these very high prices. However, if everyone sticks to their agreed shares
then these very high prices will never need to be paid. So deviation from the collusive
agreement is unprofitable. The electricity regulator in the United Kingdom believes the
market in which distribution companies purchase electricity from generating companies has
fallen prey to exactly this kind of “implicit collusion.”
Much of the kind of behavior discussed so far is hard to challenge legally. Indeed, trying to
outlaw it all would require cumbersome rules that restrict bidders’ flexibility and might generate
inefficiencies, without being fully effective. It would be much better to solve these problems with
better auction designs.
Entry Deterrence
The second major area of concern of practical auction design is to attract bidders, since an
auction with too few bidders risks being unprofitable for the auctioneer and potentially inefficient.
Ascending auctions are often particularly poor in this respect, since they can allow some bidders
to deter the entry, or depress the bidding, of rivals. In an ascending auction, there is a strong
presumption that the firm which values winning the most will be the eventual winner, because even
if it is outbid at an early stage, it can eventually top any opposition. As a result, other firms have
little incentive to enter the bidding, and may not do so if they have even modest costs of bidding.
• Glaxo’s 1995 takeover of the Wellcome drugs company. After Glaxo’s first bid of 9 billion
pounds, Zeneca expressed willingness to offer about 10 billion pounds if it could be sure
of winning, while Roche considered an offer of 11 billion pounds. But certain synergies
made Wellcome worth a little more to Glaxo than to the other firms, and the costs of bidding
were tens of millions of pounds. Eventually, neither Roche nor Zeneca actually entered the
bidding, and Wellcome was sold at the original bid of 9 billion pounds, literally a billion or
two less than its shareholders might have received. Wellcome’s own chief executive admitted
“...there was money left on the table”.
Solutions
Much of our discussion has emphasized the vulnerability of ascending auctions to collusion
and predatory behavior. However, ascending auctions have several virtues, as well.
• An ascending auction is particularly likely to allocate the prizes to the bidders who value
them the most, since a bidder with a higher value always has the opportunity to rebid to top
a lower-value bidder who may initially have bid more aggressively.
• If there are complementarities between the objects for sale, a multi-unit ascending auction
makes it more likely that bidders will win efficient bundles than in a pure sealed-bid auction
in which they can learn nothing about their opponents’ intentions.
• Allowing bidders to learn about others’ valuations during the auction can also make the
bidders more comfortable with their own assessments and less cautious, and often raises the
auctioneer’s revenues if information is correlated.
A number of methods to make the ascending auction more robust are clear enough. For exam-
ple, bidders can be forced to bid “round” numbers, the exact increments can be prespecified, and
bids can be made anonymous. These steps make it harder to use bids to signal other buyers. Lots
can be aggregated into larger packages to make it harder for bidders to divide the spoils, and keep-
ing secret the number of bidders remaining in the auction also makes collusion harder. But while
these measures can be useful, they do not eliminate the risks of collusion or of too few bidders. An
alternative is to choose a different type of auction.
In a standard sealed-bid auction (or “first-price” sealed-bid auction), each bidder simultane-
ously makes a single “best and final” offer, so collusion is much harder than in an ascending
auction because firms are unable to retaliate against bidders who fail to cooperate with them. Tacit
collusion is particularly difficult since firms are unable to use the bidding to signal.
From the perspective of encouraging more entry, the merit of a sealed-bid auction is that the
outcome is much less certain than in an ascending auction. An advantaged bidder will probably
win a sealed-bid auction, but it must make its single final offer in the face of uncertainty about its
rivals’ bids, and because it wants to get a bargain its sealed-bid will not be the maximum it could
be pushed to in an ascending auction. So “weaker” bidders have at least some chance of victory,
even when they would surely lose an ascending auction. It follows that potential entrants are likely
to be more willing to enter a sealed-bid auction than an ascending auction.
A solution to the dilemma of choosing between the ascending (often called “English”) and
sealed-bid (or “Dutch”) forms is to combine the two in a hybrid, the “Anglo-Dutch”, which of-
ten captures the best features of both, and was first described and proposed in Klemperer (1998), "Auctions with Almost Common Values," European Economic Review, 42, 757–769.
In an Anglo-Dutch auction the auctioneer begins by running an ascending auction in which
price is raised continuously until all but two bidders have dropped out. The two remaining bidders
are then each required to make a final sealed-bid offer that is not lower than the current asking
price, and the winner pays his bid.
Good auction design is not “one size fits all” and must be sensitive to the details of the context.
Chapter 11
Extensive Form Games with Incomplete Information
11.1 Introduction
So far we have analyzed games in strategic form with and without incomplete information, and
extensive form games with complete information. In this section we will analyze extensive form
games with incomplete information. Many interesting strategic interactions can be modelled in this
form, such as signalling games, repeated games with incomplete information in which reputation
building becomes a concern, bargaining games with incomplete information, etc.
The analysis of extensive form games with incomplete information will show that we need further refinements of the Nash equilibrium concept. In particular, we will see that the subgame perfect equilibrium (SPE) concept that we introduced when we studied extensive form games with complete information is not adequate. To illustrate the main problem with the SPE concept, however, the following game with imperfect, but complete, information is sufficient.
The strategic form of this game is given by
L R
O 1, 3 1, 3
T 2, 1 0, 0
B 0, 2 0, 1
It can be easily seen that the set of Nash equilibria of this game is {(T, L), (O, R)}. Since this game has only one subgame, i.e., the game itself, this is also the set of SPE.
[Figure 11.1: Player 1 chooses O, T, or B. Choosing O ends the game with payoffs (1, 3); after T or B, player 2, at the information set I, chooses L or R. Payoffs: (T, L) = (2, 1), (T, R) = (0, 0), (B, L) = (0, 2), (B, R) = (0, 1).]
But there is something implausible about the (O, R) equilibrium. Action R is strictly dominated for player 2 at the infor-
mation set I. Therefore, if the game ever reaches that information set, player 2 should never play R.
Knowing that, then, player 1 should play T, as she would know that player 2 would play L, and she
would get a payoff of 2 which is bigger than the payoff that she gets by playing O. Subgame perfect
equilibrium cannot capture this, because it does not test rationality of player 2 at the non-singleton
information set I.
The above discussion suggests the direction in which we have to strengthen the SPE concept.
We would like players to be rational not only in every subgame but also in every continuation game.
A continuation game in the above example is composed of the information set I and the nodes that
follow from that information set. First, notice that the continuation game does not start with a
single decision node, and hence it is not a subgame. However, rationality of player 2 requires that
he plays action L if the game ever reaches there.
In general, the optimal action at an information set may depend on which node in the informa-
tion set the play has reached. Consider the following modification of the above game.
Here the optimal action of player 2 at the information set I depends on whether player 1 has
played T or B - information that 2 does not have. Therefore, analyzing player 2’s decision problem
at that information set requires him to form beliefs regarding which decision node he is at. In other
words, we require that
(1) (Condition 1: Beliefs) At each information set the player who moves at that information set
has beliefs over the set of nodes in that information set.
and
(2) (Condition 2: Sequential Rationality) At each information set, strategies must be optimal,
given the beliefs and subsequent strategies.
[Figure 11.2: The same game as in Figure 11.1, except that the payoffs following B are swapped: (B, L) = (0, 1) and (B, R) = (0, 2).]
Let us check what these two conditions imply in the game given in Figure 11.1. Condition 1
requires that player 2 assigns beliefs to the two decision nodes at the information set I. Let the
probability assigned to the node that follows T be µ ∈ [0, 1] and the one assigned to the node that
follows B be 1 − µ. Given these beliefs, the expected payoff to action L is
$$\mu \times 1 + (1-\mu) \times 2 = 2 - \mu,$$
whereas the expected payoff to action R is
$$\mu \times 0 + (1-\mu) \times 1 = 1 - \mu.$$
Notice that 2 − µ > 1 − µ for any µ ∈ [0, 1] . Therefore, Condition 2 requires that in equilibrium
player 2 never plays R with positive probability. This eliminates the subgame perfect equilibrium
(O, R) , which, we argued, was implausible.
Although it requires players to form beliefs at non-singleton information sets, condition 1 does not specify how these beliefs are formed. As we are after an equilibrium concept, we require the beliefs to be consistent with the players' strategies. As an example, consider the game given in Figure 11.2 again. Suppose player 1 plays actions O, T, and B with probabilities β1(O), β1(T), and
β1 (B) , respectively. Also let µ be the belief assigned to node that follows T in the information set
I. If, for example, β1 (T ) = 1 and µ = 0, then we have a clear inconsistency between player 1’s
strategy and player 2’s beliefs. The only consistent belief in this case would be µ = 1. In general,
we may apply Bayes’ Rule, whenever possible, to achieve consistency:
$$\mu = \frac{\beta_1(T)}{\beta_1(T) + \beta_1(B)}.$$
Of course, this requires that β1 (T ) + β1 (B) ̸= 0. If β1 (T ) + β1 (B) = 0, i.e., player 1 plays action
O with probability 1, then player 2 does not obtain any information regarding which one of his
decision nodes has been reached from the fact that the play has reached I. The weakest consistency
condition that we can impose is then,
(3) (Condition 3: Weak Consistency) Beliefs are determined by Bayes’ Rule and strategies when-
ever possible.
These three conditions define the equilibrium concept of perfect Bayesian equilibrium.

11.2 Perfect Bayesian Equilibrium

To be able to define PBE more formally, let H_i be the set of all information sets player i has in
the game, and let A (h) be the set of actions available at information set h. A behavioral strategy
for player i is a function βi which assigns to each information set h ∈ Hi a probability distribution
on A (h) , i.e.,
$$\sum_{a \in A(h)} \beta_i(a) = 1.$$
Let Bi be the set of all behavioral strategies available for player i and B be the set of all behavioral
strategy profiles, i.e., B = ×i Bi . A belief system µ : X → [0, 1] assigns to each decision node x in
the information set h a probability µ(x), where ∑x∈h µ(x) = 1 for all h ∈ H. Let M be the set of
all belief systems. An assessment (µ, β) ∈ M × B is a belief system combined with a behavioral
strategy profile.
A perfect Bayesian equilibrium is an assessment (µ, β) that satisfies conditions 1–3.¹ To illustrate, consider the game in Figure 11.2 again. Let β_i(a) be the probability assigned to action a by player i, and µ the belief assigned to the node that follows T in information set I. In any PBE of this game
we have (i) β2 (L) = 1, (ii) β2 (L) = 0, or (iii) β2 (L) ∈ (0, 1) . Let us check each of the possibilities
in turn:
(i) β2 (L) = 1. In this case, sequential rationality of player 2 implies that the expected payoff to L
is greater than or equal to the expected payoff to R, i.e.,
$$\mu \times 1 + (1-\mu) \times 1 \geq \mu \times 0 + (1-\mu) \times 2,$$
1 Perfect Bayesian equilibrium, as defined in D. Fudenberg and J. Tirole (1991), "Perfect Bayesian and Sequential Equilibrium," Journal of Economic Theory, 53, 236–260, puts more restrictions on the out-of-equilibrium beliefs and hence is stronger than the definition provided here.
or
1 ≥ 2 − 2µ ⇐⇒ µ ≥ 1/2.
Sequential rationality of player 1 on the other hand implies that she plays T, i.e., β1 (T ) = 1. Bayes’
rule then implies that
$$\mu = \frac{\beta_1(T)}{\beta_1(T) + \beta_1(B)} = \frac{1}{1+0} = 1,$$
which is greater than 1/2, and hence does not contradict player 2’s sequential rationality. Therefore,
the following assessment is a PBE
β1 (T ) = 1, β2 (L) = 1, µ = 1.
(ii) β2 (L) = 0. Sequential rationality of player 2 now implies that µ ≤ 1/2, and sequential rational-
ity of player 1 implies that β1 (O) = 1. Since β1 (T ) + β1 (B) = 0, however, we cannot apply Bayes’
rule, and hence condition 3 is trivially satisfied. Therefore, there is a continuum of equilibria of the
form
β1 (O) = 1, β2 (L) = 0, µ ≤ 1/2.
(iii) β2(L) ∈ (0, 1). Sequential rationality of player 2 implies that µ = 1/2. For player 1 the expected payoff to O is 1, to T is 2β2(L), and to B is 0. Clearly, player 1 will never play B with positive probability, that is, in this case we always have β1(B) = 0. If β1(O) = 1, then we must have 2β2(L) ≤ 1 ⟺ β2(L) ≤ 1/2, and we cannot apply Bayes' rule. Therefore, any assessment that has
$$\beta_1(O) = 1, \quad 0 < \beta_2(L) \leq 1/2, \quad \mu = 1/2$$
is a PBE. If, on the other hand, β1(O) = 0, then we must have β1(T) = 1, and Bayes' rule implies that µ = 1, contradicting µ = 1/2. If β1(O) ∈ (0, 1), then, since β1(B) = 0, Bayes' rule again implies that µ = 1, contradicting µ = 1/2.
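The case analysis above can be double-checked by brute force. The sketch below is entirely our own construction (the helper is_pbe and the coarse grid are for illustration only); it scans strategies and beliefs for the Figure 11.2 game and recovers exactly the three families of PBE found above:

```python
import itertools

def is_pbe(b1T, b1B, b2L, mu, eps=1e-9):
    """Check conditions 1-3 for the Figure 11.2 game on given numbers."""
    b1O = 1 - b1T - b1B
    # Sequential rationality of player 2 at I given belief mu.
    eL = mu * 1 + (1 - mu) * 1          # expected payoff to L
    eR = mu * 0 + (1 - mu) * 2          # expected payoff to R
    if b2L > eps and eL < eR - eps: return False
    if b2L < 1 - eps and eR < eL - eps: return False
    # Sequential rationality of player 1.
    vals = {"O": 1.0, "T": 2 * b2L, "B": 0.0}
    best = max(vals.values())
    for action, prob in (("O", b1O), ("T", b1T), ("B", b1B)):
        if prob > eps and vals[action] < best - eps: return False
    # Weak consistency: Bayes' rule whenever possible.
    if b1T + b1B > eps and abs(mu - b1T / (b1T + b1B)) > 1e-6: return False
    return True

grid = [i / 10 for i in range(11)]
sols = [(t, b, l, m) for t, b, l, m in itertools.product(grid, repeat=4)
        if t + b <= 1 and is_pbe(t, b, l, m)]
print(len(sols), "grid points are PBE; e.g.", sols[:3])
```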
Perfect Bayesian Equilibrium could be considered a weak equilibrium concept, because it does
not put enough restrictions on out-of-equilibrium beliefs. Consider the three-player game given
in Figure 11.3. The unique subgame perfect equilibrium of this game is (D, L, R′ ) . However, the
strategy profile (A, L, L′ ) together with the belief system that puts probability 1 to the node that
follows R is an assessment that satisfies conditions 1-3. Clearly, this is not a plausible outcome, as
(L, L′ ) is not a Nash equilibrium of the subgame that starts with player 2’s move. Also, notice that
player 3’s beliefs are not consistent with player 2’s strategy, but since player 3’s information set is
off-the-equilibrium, Bayes’ rule has no bite there.
The most commonly used equilibrium concept that does not suffer from such deficiencies is that of sequential equilibrium.
[Figure 11.3: Player 1 chooses A, ending the game with payoffs (2, 0, 0), or D. After D, player 2 chooses L or R, and player 3 then chooses L′ or R′ without observing player 2's move. Payoffs: (D, L, L′) = (1, 2, 1), (D, L, R′) = (3, 3, 3), (D, R, L′) = (0, 1, 2), (D, R, R′) = (0, 1, 1).]
Before we can define sequential equilibrium, however, we have to define a particular consistency notion. A behavioral strategy profile is said to be completely mixed if every action receives positive probability. An assessment (µ, β) is called consistent if there is a sequence of completely mixed behavioral strategy profiles β^n converging to β such that the belief systems µ^n derived from the β^n via Bayes' rule converge to µ. A sequential equilibrium is an assessment that is both sequentially rational and consistent.
The assessment composed of (A, L, L′) and the belief that puts probability 1 on the node following R is not consistent: consistency would require a sequence with
$$\beta_1^n(A) \to 1, \quad \beta_2^n(L) \to 1, \quad \beta_3^n(L') \to 1, \quad \mu^n = \frac{\beta_2^n(L)}{\beta_2^n(L) + \beta_2^n(R)} \to 0,$$
which is not possible, since β^n_2(L) → 1 forces µ^n → 1.
which is not possible. However, the assessment given by ((D, L, R′ ) , µ = 1) is easily checked to
satisfy sequential rationality. To check consistency, let
$$\beta_1^n(D) = 1 - \frac{1}{n}, \qquad \beta_2^n(L) = 1 - \frac{1}{n}, \qquad \beta_3^n(R') = 1 - \frac{1}{n}, \qquad \mu^n = 1 - \frac{1}{n}.$$
Notice that µn is derived from βn via Bayes’ rule and (µn , βn ) → (µ, β) . Therefore, this assessment
is a sequential equilibrium.
11.3 Signalling Games
One of the most common economic applications of extensive form games with incomplete information is signalling games. In its simplest form, a signalling game has two players, a sender, S, and a receiver, R. Nature draws the type of the sender from a type set Θ, whose typical element is denoted θ. The probability of type θ being drawn is p(θ). The sender observes his type
and chooses a message m ∈ M. The receiver observes m (but not θ) and chooses an action a ∈ A.
The payoffs are given by uS (m, a, θ) and uR (m, a, θ) .
Let µ(θ|m) denote the receiver’s belief that the sender’s type is θ if message m is observed.
Also let βS (m|θ) denote the probability that type θ sender sends message m, and βR (a|m) denote
the probability that the receiver chooses action a after observing message m. Given an assessment
(µ, β), the expected payoff of a sender of type θ is then
$$\sum_{m} \beta_S(m|\theta) \sum_{a} \beta_R(a|m)\, u_S(m, a, \theta),$$
whereas the expected payoff of the receiver, conditional upon receiving message m, is
$$\sum_{\theta} \mu(\theta|m) \sum_{a} \beta_R(a|m)\, u_R(m, a, \theta).$$
As an example, consider the Beer or Quiche game of Figure 11.4. The sender is either weak (W), with probability 0.1, or tough (T), with probability 0.9; he chooses his breakfast, beer (B) or quiche (Q); upon observing the breakfast choice, the receiver decides whether to fight (F) or acquiesce (A). Let us look for the PBE of this game, starting with the separating strategy profiles:
(a) Weak chooses quiche, Tough chooses beer (βS (Q|W ) = 1, βS (Q|T ) = 0) :
Bayes' rule implies that
$$\mu(W|Q) = \frac{\beta_S(Q|W)\,p(W)}{\beta_S(Q|W)\,p(W) + \beta_S(Q|T)\,p(T)} = \frac{1 \times 0.1}{1 \times 0.1 + 0 \times 0.9} = 1.$$
[Figure 11.4: Beer or Quiche. Payoffs are listed as (sender, receiver). Weak type (W): (Q, F) = (1, 1), (Q, A) = (3, 0), (B, F) = (0, 1), (B, A) = (2, 0). Tough type (T): (Q, F) = (0, 0), (Q, A) = (2, 1), (B, F) = (1, 0), (B, A) = (3, 1).]
Similarly, µ(T|B) = 1. Therefore, the receiver's sequential rationality implies that βR(A|B) =
1 and βR (F|Q) = 1. Sequential rationality of the sender, then, implies that βS (Q|W ) =
0, contradicting our hypothesis. So, there is no PBE of this type.
(b) Weak chooses beer, Tough chooses quiche (βS (Q|W ) = 0, βS (Q|T ) = 1) :
Bayes’ rule implies that µ(T |Q) = 1 and µ(W |B) = 1. Therefore, the receiver’s sequen-
tial rationality implies that βR (F|B) = 1 and βR (A|Q) = 1. Sequential rationality of the
sender, then, implies that βS (Q|W ) = 1, contradicting our hypothesis. So, there is no
PBE of this type either.
Next, consider the pooling strategy profiles.
(a) Both choose quiche (βS(Q|W) = 1, βS(Q|T) = 1): Bayes' rule implies that µ(W|Q) = p(W) = 0.1, so the receiver's expected payoff to A after Q is 0.9 while that to F is only 0.1, and hence sequential rationality implies that βR(A|Q) = 1. The weak type's sequential rationality implies, then, that βS(Q|W) = 1, confirming our hypothesis. For the tough type, playing quiche is rational only if the receiver chooses to fight after observing beer. Therefore, we must have βR(F|B) = 1, which in turn requires that µ(W|B) ≥ 1/2. Therefore, any assessment which satisfies the following is a PBE:
$$\beta_S(Q|W) = \beta_S(Q|T) = 1, \quad \beta_R(A|Q) = 1, \quad \beta_R(F|B) = 1, \quad \mu(W|Q) = 0.1, \quad \mu(W|B) \geq 1/2.$$
(b) Both choose beer (βS(B|W) = 1, βS(B|T) = 1): It is easily checked that the following constitute the set of PBE of this type:
$$\beta_S(B|W) = \beta_S(B|T) = 1, \quad \beta_R(A|B) = 1, \quad \beta_R(F|Q) = 1, \quad \mu(W|B) = 0.1, \quad \mu(W|Q) \geq 1/2.$$
Job Market Signalling²
Suppose there are two types of workers, a high ability (H) and a low ability (L) type. We
let the probability of having high ability be denoted by p ∈ (0, 1) . The output is equal to 2 if the
worker is of high ability and equal to 1 if he is of low ability. The worker can choose a level of
education e ≥ 0 before applying for a job. However, the cost of having level of education e is e
for the low ability worker and e/2 for the high ability worker. The worker knows his ability but
the employer observes only the level of education, not the ability. Therefore, the employer offers a
wage schedule w (e) as a function of education. The payoffs of the workers are given by
u (w, e, H) = w − e/2,
u (w, e, L) = w − e.
We assume that the job market is competitive and hence the employer offers a wage schedule
w (e) such that the expected profit is equal zero. Therefore, if µ(H|e) denotes the belief of the
employer that the worker is of high ability given that he has chosen education level e, the wage
2 Based on M. Spence (1973), “Job Market Signalling,” Quarterly Journal of Economics, 87, 355-74.
schedule will satisfy w (e) = 2µ(H|e) + (1 − µ(H|e)) . We are interested in the set of PBE of this
game. Let eH and eL denote the education levels chosen by the high and low ability workers,
respectively.
1. Separating Equilibria (e_H ≠ e_L): Bayes' rule in this case implies that µ(H|e_H) = 1 and µ(L|e_L) = 1. Therefore, we have w(e_H) = 2 and w(e_L) = 1. Given that, the low ability worker will choose e_L = 0. In equilibrium it must be that the low ability worker does not want to mimic the high ability worker, and vice versa. Therefore, we need to have
$$2 - \frac{e_H}{2} \geq 1,$$
or e_H ≤ 2, and
$$1 \geq 2 - e_H,$$
or e_H ≥ 1. We can support any e_H between 1 and 2 with the following belief system:
$$\mu(H|e) = \begin{cases} 0, & e < e_H,\\ 1, & e \geq e_H.\end{cases}$$
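The two incentive constraints can also be encoded directly; the sketch below (our own helper, with an illustrative grid of candidate education levels) confirms that exactly the e_H ∈ [1, 2] are supportable:

```python
def separating_ok(e_H):
    """Incentive constraints of the separating equilibrium: the low type
    chooses e = 0 at wage 1, the high type chooses e_H at wage 2."""
    ic_low = 1 >= 2 - e_H          # low type does not mimic the high type
    ic_high = 2 - e_H / 2 >= 1     # high type prefers e_H to dropping to e = 0
    return ic_low and ic_high

print([e for e in (0.5, 1.0, 1.5, 2.0, 2.5) if separating_ok(e)])  # [1.0, 1.5, 2.0]
```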
2. Pooling Equilibria (e_H = e_L = e∗): Bayes' rule in this case implies that µ(H|e∗) = p and µ(L|e∗) = 1 − p. Therefore, w(e∗) = 2p + (1 − p) = p + 1, and hence
$$u(w, e^*, H) = p + 1 - e^*/2, \qquad u(w, e^*, L) = p + 1 - e^*,$$
both of which are positive. In equilibrium, neither type should gain by choosing any other education level e, i.e.,
$$p + 1 - e^*/2 \geq w(e) - e/2 \quad \text{and} \quad p + 1 - e^* \geq w(e) - e$$
for all e ≥ 0. Given the belief system below, w(e) = 1 for every e ≠ e∗, so the most attractive deviation is e = 0, and the above inequalities are satisfied if and only if e∗ ≤ p. We can, in turn, show
that any such e∗ can be supported as an equilibrium by the following belief system:
$$\mu(H|e) = \begin{cases} p, & e = e^*,\\ 0, & e \neq e^*.\end{cases}$$