Lectures On Solid Geometry
Lecture 4
Game Theory for Networks (I)
• In all of these cases, interactions with other agents you are connected
to affect your payoff, well-being, and utility.
• How to make decisions in such situations?
• Game theory or Multi-agent decision theory
• We often need only ordinal information; i.e., given two options a and b, we
imagine a (real-valued) utility function u(·) that represents the ranking of
different options, and we simply check whether u(a) ≥ u(b).
• Here the first number is the payoff to player (partner) 1 and the second
number is the payoff to player 2. More formally, the cell indexed by row x
and column y contains a pair (a, b), where
a = u_1(x, y) and b = u_2(x, y).
• These numbers could be just monetary payoffs, or they could be inclusive of
“social preferences” (the fact that you may be altruistic towards your
partner or angry at him or her).
• Should you play “work hard” or “shirk”?
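• To make the bookkeeping concrete, the short Python sketch below stores a two-player game of this kind as a dictionary of (row, column) cells and reads off u_1 and u_2. The numeric payoffs are illustrative placeholders, not the lecture's actual partnership payoffs.

    # A two-player strategic form game stored as a payoff matrix.
    # The payoffs below are made-up placeholders for illustration only.
    payoffs = {
        ("work hard", "work hard"): (2, 2),
        ("work hard", "shirk"):     (-1, 1),
        ("shirk",     "work hard"): (1, -1),
        ("shirk",     "shirk"):     (0, 0),
    }

    def u(player, x, y):
        """Return u_1(x, y) or u_2(x, y) from the cell indexed by row x, column y."""
        return payoffs[(x, y)][player - 1]

    print(u(1, "work hard", "shirk"))   # payoff to player 1: -1
    print(u(2, "work hard", "shirk"))   # payoff to player 2: 1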
Strategic Form Games (I)
• Let us start with games in which all of the participants act simultaneously
and without knowledge of other players’ actions. Such games are referred
to as strategic form games - or as normal form games or matrix games.
• For each game, we have to define
• The set of players.
• The strategies.
• The payoffs.
• More generally, we also have to define the game form, which captures the
order of play (e.g., in chess) and information sets (e.g., in asymmetric
information or incomplete information situations). But in strategic form
games, play is simultaneous, so no need for this additional information.
• Definition: A strategic form game is a triplet ⟨I, (S_i)_{i∈I}, (u_i)_{i∈I}⟩ where
• I is a finite set of players, I = {1, 2, ..., I}.
• S_i is a non-empty set of available actions for player i.
• u_i : S → ℝ is the payoff (utility) function of player i, where S = ∏_i S_i.
• This game represents “pure conflict,” in the sense that one player’s utility
is the negative of the utility of the other player, i.e., the sum of the utilities
for both players at each outcome is “zero.”
• This class of games is referred to as zero-sum games (or constant-sum
games) and has been extensively studied in the game theory literature.
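• As a quick sketch of this representation, the Python snippet below encodes Matching Pennies (assumed here as the two-strategy example of pure conflict) as a finite strategic form game and checks the zero-sum property u_1(s) + u_2(s) = 0 at every outcome.

    # A finite strategic form game <I, (S_i), (u_i)> and a zero-sum check.
    from itertools import product

    players = [1, 2]
    S = {1: ["Heads", "Tails"], 2: ["Heads", "Tails"]}

    def u1(s1, s2):
        return 1 if s1 == s2 else -1    # player 1 wins when the coins match

    def u2(s1, s2):
        return -u1(s1, s2)              # pure conflict: the negative of player 1's payoff

    print(all(u1(a, b) + u2(a, b) == 0 for a, b in product(S[1], S[2])))   # True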
• Example (Infinite Strategy Sets): The strategy sets of players can also
have infinitely many elements. Consider the following game of Cournot
Competition, which models two firms producing the same homogeneous
good and seeking to maximize their profits. The formal game
G = ⟨I, (S_i), (u_i)⟩ consists of:
• the set of players I = {1, 2} (the two firms);
• strategy sets S_i = [0, ∞), where s_i ∈ S_i is the quantity produced by firm i;
• payoff functions u_i(s_i, s_{-i}) = s_i (2 − s_i − s_{-i}) − s_i, i.e., revenue at the market price 2 − s_1 − s_2 minus a unit production cost of 1.
• When s_{-i} > 1, the action that maximizes the utility function of firm i,
i.e., its best response, is 0. More generally, it can be seen that for
any s_{-i} ∈ S_{-i}, the best response of firm i is given by
B_i(s_{-i}) = arg max_{s_i ≥ 0} ( s_i (2 − s_i − s_{-i}) − s_i )
            = (1 − s_{-i}) / 2   if s_{-i} ≤ 1,
            = 0                  otherwise.
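• The piecewise formula above can be checked numerically. The Python sketch below brute-forces firm i's profit s_i (2 − s_i − s_{-i}) − s_i over a fine grid of quantities and compares the maximizer with (1 − s_{-i})/2.

    # Numerical check of the Cournot best response derived above.
    def profit(s_i, s_other):
        return s_i * (2.0 - s_i - s_other) - s_i

    def best_response_formula(s_other):
        return (1.0 - s_other) / 2.0 if s_other <= 1.0 else 0.0

    grid = [k / 10000.0 for k in range(20001)]          # candidate quantities in [0, 2]
    for s_other in (0.0, 0.4, 0.9, 1.5):
        numeric = max(grid, key=lambda s: profit(s, s_other))
        print(s_other, numeric, best_response_formula(s_other))
    # The grid maximizer matches (1 - s_other)/2 when s_other <= 1 and 0 otherwise.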
[Payoff matrix omitted: each player chooses among Confess, Remain Silent (Cooperate), and Suicide.]
• This payoff matrix models a scenario in which if one player chooses the
strategy Suicide, then, due to lack of witnesses, the other player gets off
free if he remains silent (cooperates). In this game, there is no dominant
strategy equilibrium because of the additional strategy Suicide.
• Notice, however, that the strategy Suicide is the worst possible option
for a player, no matter what the other player does. In this sense, Suicide
is strictly dominated by the other two strategies. More generally, we say
that a strategy is strictly dominated for a player if there exists some
other strategy that yields a strictly higher payoff regardless of the
strategies of the other players.
Dominant and Dominated Strategies (V)
• Definition (Strictly Dominated Strategy): A strategy s_i ∈ S_i is strictly
dominated for player i if there exists some s'_i ∈ S_i such that
u_i(s'_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}.
• Definition (Weakly Dominated Strategy): A strategy s_i ∈ S_i is weakly
dominated for player i if there exists some s'_i ∈ S_i such that
u_i(s'_i, s_{-i}) ≥ u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}, and
u_i(s'_i, s_{-i}) > u_i(s_i, s_{-i}) for some s_{-i} ∈ S_{-i}.
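• For a finite two-player game, the definition of strict dominance translates directly into code. The sketch below checks whether a given pure strategy is strictly dominated; the example payoffs are assumptions chosen purely for illustration.

    # Check whether strategy `s` of `player` is strictly dominated, i.e. whether
    # some alternative strategy gives a strictly higher payoff against every
    # strategy of the opponent. `u(i, s1, s2)` returns player i's payoff.
    def strictly_dominated(player, s, S1, S2, u):
        own, other = (S1, S2) if player == 1 else (S2, S1)
        for alt in own:
            if alt == s:
                continue
            if player == 1:
                better = all(u(1, alt, t) > u(1, s, t) for t in other)
            else:
                better = all(u(2, t, alt) > u(2, t, s) for t in other)
            if better:
                return True
        return False

    # Illustrative (assumed) 2x2 game: "Down" is strictly dominated by "Up".
    S1, S2 = ["Up", "Down"], ["Left", "Right"]
    cells = {("Up", "Left"): (3, 1), ("Up", "Right"): (2, 0),
             ("Down", "Left"): (1, 2), ("Down", "Right"): (0, 3)}
    u = lambda i, a, b: cells[(a, b)][i - 1]
    print(strictly_dominated(1, "Down", S1, S2, u))   # True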
• We are left with a game where player 1 does not have any choice in his
strategies, while player 2 can choose between LEFT and RIGHT. Since
LEFT will maximize the utility of player 2, we conclude that the only
rational strategy profile in the game is (Up, LEFT).
• More formally, we define the procedure of iterated elimination of
strictly dominated strategies (or iterated strict dominance) as follows: set
S_i^0 = S_i for each i, and for every k ≥ 0 let
S_i^{k+1} = { s_i ∈ S_i^k | there is no s'_i ∈ S_i^k with u_i(s'_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}^k },
with S_i^∞ = ∩_{k≥0} S_i^k denoting the set of player i's strategies that survive iterated strict dominance.
• Theorem: Suppose that either (i) each S_i is finite, or (ii) each u_i(s_i, s_{-i})
is continuous and each S_i is compact. Then S_i^∞ is nonempty for each i.
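• For finite games the procedure can be run mechanically, as in the Python sketch below, which repeatedly deletes strictly dominated strategies from the current reduced game until nothing more can be removed. The 2×3 payoffs are assumptions chosen so that, as in the example discussed above, only (Up, LEFT) survives.

    # Iterated elimination of strictly dominated strategies in a finite
    # two-player game. `u(i, s1, s2)` returns player i's payoff.
    def iterated_strict_dominance(S1, S2, u):
        S1, S2 = list(S1), list(S2)
        changed = True
        while changed:
            changed = False
            for player, own, other in ((1, S1, S2), (2, S2, S1)):
                for s in list(own):
                    dominated = any(
                        all((u(1, alt, t) > u(1, s, t)) if player == 1
                            else (u(2, t, alt) > u(2, t, s)) for t in other)
                        for alt in own if alt != s)
                    if dominated:
                        own.remove(s)
                        changed = True
        return S1, S2

    # Illustrative (assumed) payoffs for a 2x3 game.
    S1, S2 = ["Up", "Down"], ["LEFT", "MIDDLE", "RIGHT"]
    cells = {("Up", "LEFT"): (1, 2), ("Up", "MIDDLE"): (1, 1), ("Up", "RIGHT"): (0, 0),
             ("Down", "LEFT"): (0, 3), ("Down", "MIDDLE"): (0, 1), ("Down", "RIGHT"): (2, 0)}
    u = lambda i, a, b: cells[(a, b)][i - 1]
    print(iterated_strict_dominance(S1, S2, u))   # (['Up'], ['LEFT'])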
• Given that player 2 only chooses actions in the interval [0, 1/2], player 1
can restrict the domain of his best response function to these values. Using
his best response function, this implies that the strategies outside the
interval [1/4, 1/2] are strictly dominated for player 1. Applying the same
reasoning for player 2, we obtain that player 2's strategies outside [1/4, 1/2]
are strictly dominated as well. Iterating this argument, the surviving
intervals shrink and converge to the single profile s_1 = s_2 = 1/3, which is
the unique outcome of iterated strict dominance.
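• A small numerical sketch of this interval argument: since B_i is decreasing, an opponent set [lo, hi] maps to the undominated set [B_i(hi), B_i(lo)], and iterating these bounds shows them closing in on 1/3.

    # Iterating the bounds of the undominated quantity interval in the Cournot game.
    def B(s):
        return max(0.0, (1.0 - s) / 2.0)

    lo, hi = 0.0, 0.5                    # undominated set after the first round
    for k in range(1, 9):
        lo, hi = B(hi), B(lo)            # B is decreasing, so the interval flips
        print(k, round(lo, 5), round(hi, 5))
    # Both bounds approach 1/3, the unique strategy surviving iterated strict dominance.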
[Case table omitted: it compares a bidder's payoff under bids b_i = v, b_i < v, and b_i > v against the highest competing bid B*.]
Mixed Strategy and Mixed Strategy Nash Equilibrium
• It is easy to see that this game does not have a pure Nash equilibrium,
i.e., for every pure strategy profile in this game, one of the parties has an
incentive to deviate. However, if we allow the players to randomize over
their choice of actions, we can determine a steady state of the play.
• Assume that player 1 picks Heads with probability p and Tails with
probability 1 − p, and that player 2 picks both Heads and Tails with
probability 1/2. Then, with probability
(1/2) p + (1/2)(1 − p) = 1/2,
player 1 will receive a payoff of 1. Similarly, she will receive a payoff of −1 with
the same probability. This implies that if player 2 plays the strategy (1/2,
1/2), then no matter what strategy player 1 chooses, she will get the same
payoff. Due to the symmetry of the game, we conclude that the
randomized strategy ((1/2, 1/2), (1/2, 1/2)) is a stable play of the game.
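• This indifference is easy to verify directly. The Python sketch below computes player 1's expected payoff in Matching Pennies when player 2 mixes 50/50, using the payoffs of 1 and −1 from above (with the usual convention that player 1 receives 1 when the coins match): every choice of p yields an expected payoff of 0.

    # Player 1 plays Heads with probability p, player 2 with probability q.
    def expected_payoff_p1(p, q):
        match = p * q + (1 - p) * (1 - q)          # probability the coins match
        return match * 1 + (1 - match) * (-1)      # +1 on a match, -1 otherwise

    for p in (0.0, 0.25, 0.5, 1.0):
        print(p, expected_payoff_p1(p, 0.5))       # always 0.0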
• We next formalize the notion of “randomized” or mixed strategies. Let Σ_i
denote the set of probability measures over the pure strategy (action)
set S_i. We use σ_i ∈ Σ_i to denote a mixed strategy of player i. When S_i
is a finite set, a mixed strategy is a finite-dimensional probability vector,
i.e., a vector whose elements denote the probability with which a
particular action will be played.
• If S_i has two elements, the set of mixed strategies Σ_i is the one-
dimensional probability simplex, i.e., Σ_i = {(x_1, x_2) | x_i ≥ 0, x_1 + x_2 = 1}.
• We use σ ∈ Σ = ∏_{i∈I} Σ_i to denote a mixed strategy profile. Note that
this implicitly assumes that players randomize independently. We
similarly denote σ_{-i} ∈ Σ_{-i} = ∏_{j≠i} Σ_j.
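• Under such a profile, player i's expected payoff is u_i(σ) = Σ_{s∈S} (∏_j σ_j(s_j)) u_i(s). A minimal Python sketch for the finite two-player case, reusing the Matching Pennies payoffs assumed earlier:

    # Expected utility of player i under independently mixed strategies
    # sigma1 and sigma2 (dictionaries mapping actions to probabilities).
    from itertools import product

    def expected_utility(i, sigma1, sigma2, u):
        return sum(p1 * p2 * u(i, a, b)
                   for (a, p1), (b, p2) in product(sigma1.items(), sigma2.items()))

    u = lambda i, a, b: (1 if a == b else -1) * (1 if i == 1 else -1)
    sigma1 = {"Heads": 0.5, "Tails": 0.5}
    sigma2 = {"Heads": 0.5, "Tails": 0.5}
    print(expected_utility(1, sigma1, sigma2, u))   # 0.0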
• Recall that this game has 2 pure Nash equilibria. Using the characterization
result in Proposition 2, we show that it has a unique mixed strategy Nash
equilibrium (which is not a pure strategy Nash equilibrium). First, by using
Proposition 2 (and inspecting the payoffs), it can be seen that there are no Nash
equilibria where only one of the players randomizes over its actions.
Similarly, we have
1 · p + 0 · (1 − p) = 0 · p + 2 · (1 − p),
which gives p = 2/3. We conclude that the only possible mixed strategy Nash
equilibrium is given by q = 1/3 and p = 2/3.
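• The same probabilities drop out of the two indifference conditions computationally. The payoff matrix itself is not reproduced in this excerpt, so the sketch below assumes Battle of the Sexes payoffs U_1 = [[2, 0], [0, 1]] and U_2 = [[1, 0], [0, 2]], which are consistent with the indifference equation written above.

    # Player 1 plays row 0 with probability p; player 2 plays column 0 with probability q.
    U1 = [[2, 0],
          [0, 1]]
    U2 = [[1, 0],
          [0, 2]]

    # Player 2 indifferent between columns:  1*p + 0*(1-p) = 0*p + 2*(1-p)
    p = U2[1][1] / (U2[0][0] + U2[1][1])    # = 2/3
    # Player 1 indifferent between rows:     2*q + 0*(1-q) = 0*q + 1*(1-q)
    q = U1[1][1] / (U1[0][0] + U1[1][1])    # = 1/3
    print(p, q)                              # 0.666..., 0.333...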
Figure: A price competition model over congested networks (two parallel links with latency functions l_1(x), l_2(x) and prices p_1, p_2).
• Let li (xi ) denote the latency function of link i, which represents the
delay or congestion cost as a function of the total flow xi on link i.
• The total traffic is normalized to 1, so the flows satisfy x_1(p_1, p_2) = 1 − x_2(p_1, p_2).
• The payoffs for the providers are then given by:
u_1(p_1, p_2) = p_1 × x_1(p_1, p_2)
u_2(p_1, p_2) = p_2 × x_2(p_1, p_2)
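• The excerpt does not spell out how the flows x_i(p_1, p_2) are determined, so the Python sketch below adopts one common assumption for such models: the unit of traffic splits so that the effective costs l_i(x_i) + p_i are equalized whenever both links carry flow. The latency functions and prices in the snippet are placeholders, not the lecture's example.

    # Flow split and provider payoffs under an assumed cost-equalizing traffic model.
    def flow_split(p1, p2, l1, l2, steps=10000):
        """Grid search for x1 in [0, 1] minimizing |l1(x1) + p1 - l2(1 - x1) - p2|."""
        best_x1, best_gap = 0.0, float("inf")
        for k in range(steps + 1):
            x1 = k / steps
            gap = abs(l1(x1) + p1 - l2(1 - x1) - p2)
            if gap < best_gap:
                best_x1, best_gap = x1, gap
        return best_x1

    l1 = lambda x: x           # placeholder latency of link 1
    l2 = lambda x: 1.5 * x     # placeholder latency of link 2
    p1, p2 = 0.4, 0.3
    x1 = flow_split(p1, p2, l1, l2)
    x2 = 1 - x1
    print(x1, x2, p1 * x1, p2 * x2)   # flows and provider payoffs u_i = p_i * x_i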
s.t. p_1 = p_2 + (3/2)(1 − x_1).
for some sufficiently small ε > 0. For this example, we show that no
pure Nash equilibrium strategy exists for small ε. The following list
considers all candidate Nash equilibria (p_1, p_2) and profitable
unilateral deviations for ε sufficiently small, thus establishing the
nonexistence of a pure strategy Nash equilibrium: