7.2 Bayes Theorem
It is often easier to calculate conditional probabilities in the ‘inverse’ direction to what we
are interested in. That is, if we want to know Pr(A|B), it might be much easier to discover
Pr(B|A). In these cases, we use Bayes Theorem to get the right result. I’ll state Bayes Theorem
in two distinct ways, then show that the two ways are ultimately equivalent.

Pr(A|B) = Pr(B|A)Pr(A) / Pr(B)
        = Pr(B|A)Pr(A) / (Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A))

These are equivalent because Pr(B) = Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A). Since this is an
independently interesting result, it's worth going through the proof of it. First note that

Pr(B|A)Pr(A) = (Pr(A ∧ B) / Pr(A)) × Pr(A)
             = Pr(A ∧ B)

Pr(B|¬A)Pr(¬A) = (Pr(¬A ∧ B) / Pr(¬A)) × Pr(¬A)
               = Pr(¬A ∧ B)

Adding those two together we get

Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A) = Pr(A ∧ B) + Pr(¬A ∧ B)
                              = Pr((A ∧ B) ∨ (¬A ∧ B))
                              = Pr(B)

The second line uses the fact that A ∧ B and ¬A ∧ B are inconsistent, which can be verified
using truth tables. And the third line uses the fact that (A ∧ B) ∨ (¬A ∧ B) is equivalent
to B, which can also be verified using truth tables. So we get a nice result, one that we'll have
occasion to use a bit in what follows.

Pr(B) = Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A)

So the two forms of Bayes Theorem are the same. We’ll often find ourselves in a position to
use the second form.
One kind of case where we have occasion to use Bayes Theorem is when we want to know
how significant a test finding is. So imagine we’re trying to decide whether the patient has
disease D, and we’re interested in how probable it is that the patient has the disease condi-
tional on them returning a test that’s positive for the disease. We also know the following
background facts.

• In the relevant demographic group, 5% of patients have the disease.


• When a patient has the disease, the test returns a positive result 80% of the time.

• When a patient does not have the disease, the test returns a negative result 90% of the time.

So in some sense, the test is fairly reliable. It usually returns a positive result when applied
to disease carriers. And it usually returns a negative result when applied to non-carriers.
But as we’ll see when we apply Bayes Theorem, it is very unreliable in another sense. So let
A be that the patient has the disease, and B be that the patient returns a positive test. We
can use the above data to generate some ‘prior’ probabilities, i.e. probabilities that we use
prior to getting information about the test.
• Pr(A) = 0.05, and hence Pr(¬A) = 0.95
• Pr(B|A) = 0.8
• Pr(B|¬A) = 0.1

Now we can apply Bayes Theorem in its second form.

Pr(A|B) = Pr(B|A)Pr(A) / (Pr(B|A)Pr(A) + Pr(B|¬A)Pr(¬A))
        = (0.8 × 0.05) / (0.8 × 0.05 + 0.1 × 0.95)
        = 0.04 / (0.04 + 0.095)
        = 0.04 / 0.135
        ≈ 0.296

So in fact the probability of having the disease, conditional on having a positive test, is less
than 0.3. So in that sense the test is quite unreliable.
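
Since this arithmetic is easy to get wrong, it can help to check it mechanically. Here is a minimal Python sketch of the second form of Bayes Theorem; the function name posterior and its parameter names are mine, purely for illustration.

```python
def posterior(prior, pr_pos_given_disease, pr_pos_given_healthy):
    """Second form of Bayes Theorem: Pr(A|B) from Pr(A), Pr(B|A) and Pr(B|not-A)."""
    numerator = pr_pos_given_disease * prior
    denominator = numerator + pr_pos_given_healthy * (1 - prior)
    return numerator / denominator

# The disease example: Pr(A) = 0.05, Pr(B|A) = 0.8, Pr(B|not-A) = 0.1.
print(posterior(0.05, 0.8, 0.1))  # approx 0.296, i.e. just under 0.3
```
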
This is actually a quite important point. The fact that the probability of B given A is
quite high does not mean that the probability of A given B is equally high. By tweaking the
percentages in the example I gave you, you can come up with cases where the probability of
B given A is arbitrarily high, even 1, while the probability of A given B is arbitrarily low.
Confusing these two conditional probabilities is sometimes referred to as the prosecutors’
fallacy, though it’s not clear how many actual prosecutors are guilty of it! The thought is
that some prosecutors start with the premise that the probability of the defendant’s blood
(or DNA or whatever) matching the blood at the crime scene, conditional on the defendant
being innocent, is 1 in a billion (or whatever it exactly is). They conclude that the probability
of the defendant being innocent, conditional on their blood matching the crime scene, is
about 1 in a billion. Because of derivations like the one we just saw, that is a clearly invalid
move.

7.3 Conditionalisation
The following two concepts seem fairly closely related.

• The probability of some hypothesis H given evidence E


• The new probability of hypothesis H when evidence E comes in

In fact these are distinct concepts, though there are interesting philosophical questions
about how intimately they are connected.
The first one is a static concept. It says, at one particular time, what the probability of
H is given E. It doesn’t say anything about whether or not E actually obtains. It doesn’t
say anything about changing your views, or your probabilities. It just tells us something
about our current probabilities, i.e. our current measure on possibility space. And what it
tells us is what proportion of the space where E obtains is occupied by possibilities where
H obtains. (The talk of ‘proportion’ here is potentially misleading, since there’s no physical
space to measure. What we care about is the measure of the E ∧ H space as a proportion of
the measure of the E space.)
The second one is a dynamic concept. It says what we do when evidence E actually comes
in. Once this happens, old probabilities go out the window, because we have to adjust to
the new evidence that we have to hand. If E indicates H, then the probability of H should
presumably go up, for instance.
Because these are two distinct concepts, we'll have two different symbols for them. We'll
use Pr(H|E) for the static concept, and PrE(H) for the dynamic concept. So Pr(H|E) is what
the current probability of H given E is, and PrE(H) is what the probability of H will be when
we get evidence E.
Many philosophers think that these two should go together. More precisely, they think
that a rational agent always updates by conditionalisation. That’s just to say that for any ra-
tional agent, Pr(H|E) = PrE(H). When we get evidence E, we always replace the probability
of H with the probability of H given E.
The conditionalisation thesis occupies a quirky place in contemporary philosophy. On
the one hand it is almost universally accepted, and an extremely interesting set of theoret-
ical results have been built up using the assumption it is true. (Pretty much everything in
Bayesian philosophy of science relies in one way or another on the assumption that con-
ditionalisation is correct. And since Bayesian philosophy of science is a thriving research
program, this is a non-trivial fact.) On the other hand, there are remarkably few direct, and
plausible, arguments in favor of conditionalisation. In the absence of a direct argument we
can say two things.
First, the fact that a lot of philosophers (and statisticians and economists etc) accept
conditionalisation, and have derived many important results using it, is a reason to take it
seriously. The research programs that are based around conditionalisation do not seem to be
degenerating, or failing to produce new insights. Second, in a lot of everyday applications,
conditionalisation seems to yield sensible results. The simplest cases here are cases involving
card games or roulette wheels where we can specify the probabilities of various outcomes in
advance.
Let’s work through a very simple example to see this. A deck of cards has 52 cards, of
which 13 are hearts. Imagine we’re about to draw 2 cards, without replacement, from that
deck, which has been well-shuffled. The probability that the first is a heart is 13/52, or, more
simply, 1/4. If we assume that a heart has been taken out, e.g. if we draw a heart with the first
card, the probability that we'll draw another heart is 12/51. That is, conditional on the first
card we draw being a heart, the probability that the second is a heart is 12/51.
Now imagine that we do actually draw the first card, and it’s a heart. What should the
probability be that the next card will be a heart? It seems like it should be 12/51. Indeed, it is
hard to see what else it could be. If A is The first card drawn is a heart and B is The second
card drawn is a heart, then it seems both Pr(B|A) and PrA(B) should be 12/51. And examples
like this could be multiplied endlessly.
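
One way to see that the static and dynamic numbers line up here is to simulate the draws. Here is a rough Python sketch; the relative frequency it prints estimates both Pr(B|A) and PrA(B), since the simulated 'update' just restricts attention to the runs where the first card was a heart.

```python
import random

def draw_two():
    """Draw 2 cards without replacement; report whether each is a heart."""
    deck = ['H'] * 13 + ['X'] * 39  # 13 hearts, 39 other cards
    first, second = random.sample(deck, 2)
    return first == 'H', second == 'H'

trials = [draw_two() for _ in range(200_000)]
heart_first = [t for t in trials if t[0]]
# Frequency of a heart on the second draw, among trials where the first was a heart:
print(sum(t[1] for t in heart_first) / len(heart_first))  # close to 12/51, i.e. about 0.235
```
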
The support here for conditionalisation is not just that we ended up with the same result.
It’s that we seem to be making the same calculations both times. In cases like this, when
we’re trying to figure out Pr(A|B), we pretend we’re trying to work out PrB (A), and then stop
pretending when we’ve worked out the calculation. If that’s always the right way to work out
Pr(A|B), then Pr(A|B) should always turn out to be equal to PrB (A). Now this argument goes
by fairly quickly obviously, and we might want to look over more details before deriving very
heavy duty results from the idea that updating is always by conditionalisation, but it’s easy
to see we might take conditionalisation to be a plausible model for updating probabilities.

Chapter 8

About Conditional Probability


8.1 Conglomerability
Here is a feature that we’d like an updating rule to have. If getting some evidence E will make
a hypothesis H more probable, then not getting E will not also make H more probable.
Indeed, in standard cases, not getting evidence that would have made H more probable
should make H less probable. It would be very surprising if we could know, before running
a test, that however it turns out some hypothesis H will be more probable at the end of the
test than at the beginning of it. We might have to qualify this in odd cases where H is, e.g.,
that the test is completed. But in standard cases if H will be likely whether some evidence
comes in or doesn’t come in, then H should be already likely.
We'll say that an update rule is conglomerable if it has this feature, and non-conglomerable
otherwise. That is, it is non-conglomerable iff there are H and E such that,

PrE(H) > Pr(H) and Pr¬E(H) > Pr(H)

Now a happy result for conditionalisation, the rule that says PrE(H) = Pr(H|E), is that it
is conglomerable. This result is worth going over in some detail. Assume that Pr(H|E) >
Pr(H) and Pr(H|¬E) > Pr(H). Then we can derive a contradiction as follows.

Pr(H) = Pr((H ∧ E) ∨ (H ∧ ¬E))           since H = (H ∧ E) ∨ (H ∧ ¬E)
      = Pr(H ∧ E) + Pr(H ∧ ¬E)           since (H ∧ E) and (H ∧ ¬E) are disjoint
      = Pr(H|E)Pr(E) + Pr(H|¬E)Pr(¬E)    since Pr(H|E)Pr(E) = Pr(H ∧ E)
      > Pr(H)Pr(E) + Pr(H)Pr(¬E)         since by assumption Pr(H|E) > Pr(H) and Pr(H|¬E) > Pr(H)
      = Pr(H)(Pr(E) + Pr(¬E))
      = Pr(H)Pr(E ∨ ¬E)                  since E and ¬E are disjoint
      = Pr(H)                            since Pr(E ∨ ¬E) = 1
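
The chain ends with the absurd conclusion that Pr(H) is strictly greater than itself. Here is a tiny numerical illustration of the key step, with made-up numbers: the third line of the proof shows that Pr(H) is a weighted average of Pr(H|E) and Pr(H|¬E), so it cannot lie below both of them.

```python
# Made-up values for Pr(E), Pr(H|E) and Pr(H|not-E).
pr_E = 0.3
pr_H_given_E = 0.9
pr_H_given_notE = 0.4

# The law of total probability, as in the proof above.
pr_H = pr_H_given_E * pr_E + pr_H_given_notE * (1 - pr_E)
print(pr_H)  # 0.55: strictly between 0.4 and 0.9
assert min(pr_H_given_E, pr_H_given_notE) <= pr_H <= max(pr_H_given_E, pr_H_given_notE)
```
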

Conglomerability is related to dominance. The dominance rule of decision making says
(among other things) that if C1 is preferable to C2 given E, and C1 is preferable to C2 given
¬E, then C1 is simply preferable to C2. Conglomerability says (among other things) that if
Pr(H) is greater than x given E, and it is greater than x given ¬E, then it is simply greater
than x.

Contemporary decision theory makes deep and essential use of principles of this form,
i.e. that if something holds given E, and given ¬E, then it simply holds. And one of the run-
ning themes of these notes will be sorting out just which such principles hold, and which do
not hold. The above proof shows that we get one nice result relating conditional probability
and simple probability which we can rely on.

8.2 Independence
The probability of some propositions depends on other propositions. The probability that
I’ll be happy on Monday morning is not independent of whether I win the lottery on the
weekend. On the other hand, the probability that I win the lottery on the weekend is in-
dependent of whether it rains in Seattle next weekend. Formally, we define probabilistic
independence as follows.

• Propositions A and B are independent iff Pr(A|B) = Pr(A).

There is something odd about this definition. We purported to define a relationship that
holds between pairs of propositions. It looked like it should be a symmetric relation: A is
independent from B iff B is independent from A. But the definition looks asymmetric: A
and B play very different roles on the right-hand side of the definition. Happily, this is just
an appearance. Assuming that A and B both have positive probability, we can show that
Pr(A|B) = Pr(A) is equivalent to Pr(B|A) = Pr(B).

Pr(A|B) = Pr(A)
⇔ Pr(A ∧ B) / Pr(B) = Pr(A)
⇔ Pr(A ∧ B) = Pr(A) × Pr(B)
⇔ Pr(A ∧ B) / Pr(A) = Pr(B)
⇔ Pr(B|A) = Pr(B)

We’ve multiplied and divided by Pr(A) and Pr(B), so these equivalences don’t hold if Pr(A)
or Pr(B) is 0. But in other cases, it turns out that Pr(A|B) = Pr(A) is equivalent to Pr(B|A) =
Pr(B). And each of these is equivalent to the claim that Pr(A ∧ B) = Pr(A)Pr(B). This is an
important result, and one that we’ll refer to a bit.

• For independent propositions, the probability of their conjunction is the product of their probabilities.

• That is, if A and B are independent, then Pr(A ∧ B) = Pr(A)Pr(B).

This rule doesn’t apply in cases where A and B are dependent. To take an extreme case,
when A is equivalent to B, then A ∧ B is equivalent to A. In that case, Pr(A ∧ B) = Pr(A),
not Pr(A)². So we have to be careful applying this multiplication rule. But it is a powerful
rule in those cases where it works.
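
The multiplication criterion is easy to express as a quick check. Here is an illustrative Python sketch; the function name and the tolerance are my own choices.

```python
def independent(pr_a, pr_b, pr_a_and_b, tol=1e-9):
    """Check the multiplication criterion: Pr(A and B) = Pr(A) x Pr(B)."""
    return abs(pr_a_and_b - pr_a * pr_b) < tol

# Two tosses of a fair coin: A = first lands heads, B = second lands heads.
print(independent(0.5, 0.5, 0.25))  # True: the tosses are independent
# The extreme case from the text, where A is equivalent to B:
print(independent(0.5, 0.5, 0.5))   # False: Pr(A and B) = Pr(A), not Pr(A) squared
```
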

8.3 Kinds of Independence
The formula Pr(A|B) = Pr(A) is, by definition, what probabilistic independence amounts
to. It’s important to note that probabilistic dependence is very different from causal depen-
dence, and so we’ll spend a bit of time going over the differences.
The phrase ‘causal dependence’ is a little ambiguous, but one natural way to use it is that
A causally depends on B just in case B causes A. If we use it that way, it is an asymmetric
relation. If B causes A, then A doesn’t cause B. But probabilistic dependence is symmetric.
That’s what we proved in the previous section.
Indeed, there will typically be a quite strong probabilistic dependence between effects
and their causes. So not only is the probability that I’ll be happy on Monday dependent on
whether I win the lottery, the probability that I’ll win the lottery is dependent on whether
I’ll be happy on Monday. It isn’t causally dependent; my moods don’t cause lottery results.
But the probability of my winning (or, perhaps better, having won) is higher conditional on
my being happy on Monday than on my not being happy.
One other frequent way in which we get probabilistic dependence without causal de-
pendence is when two things are effects of a common cause. So imagine that Fred and I jointly
purchased some lottery tickets. If one of those tickets wins, that will cause each of us to be
happy. So if I’m happy, that is some evidence that I won the lottery, which is some evidence
that Fred is happy. So there is a probabilistic connection between my being happy and Fred’s
being happy. This point is easier to appreciate if we work through an example numerically.
Make each of the following assumptions.

• We have a 10% chance of winning the lottery, and hence a 90% chance of losing.
• If we win, it is certain that we’ll be happy. The probability of either of us not being
happy after winning is 0.
• If we lose, the probability that we’ll be unhappy is 0.5.
• Moreover, if we lose, our levels of happiness are completely independent of one another, so
conditional on losing, the proposition that I'm happy is independent of the proposition
that Fred's happy.

So conditional on losing, each of the four possible outcomes has the same probability.
Since these probabilities have to sum to 0.9, they’re each equal to 0.225. So we can list the
possible outcomes in a table. In this table A is winning the lottery, B is my being happy and
C is Fred’s being happy.

A B C Pr
T T T 0.1
T T F 0
T F T 0
T F F 0
F T T 0.225
F T F 0.225
F F T 0.225
F F F 0.225

Adding up the various rows tells us that each of the following is true.
• Pr(B) = 0.1 + 0.225 + 0.225 = 0.55
• Pr(C) = 0.1 + 0.225 + 0.225 = 0.55
• Pr(B ∧ C) = 0.1 + 0.225 = 0.325

From that it follows that Pr(B|C) = 0.325/0.55 ≈ 0.59. So Pr(B|C) > Pr(B). So B and C
are not independent. Conditionalising on C raises the probability of B because it raises the
probability of one of the possible causes of C, and that cause is also a possible cause of B.
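
The table can also be built and summed programmatically, which makes the arithmetic easy to re-check. Here is a sketch in Python, following the assumptions listed above.

```python
from itertools import product

# Joint distribution over (A = we win, B = I'm happy, C = Fred's happy).
pr = {}
for a, b, c in product([True, False], repeat=3):
    if a:
        pr[(a, b, c)] = 0.1 if (b and c) else 0.0  # winning guarantees happiness
    else:
        pr[(a, b, c)] = 0.9 * 0.5 * 0.5            # independent 0.5 chances if we lose

pr_B = sum(p for (a, b, c), p in pr.items() if b)
pr_C = sum(p for (a, b, c), p in pr.items() if c)
pr_B_and_C = sum(p for (a, b, c), p in pr.items() if b and c)
print(pr_B, pr_C, pr_B_and_C)  # approx 0.55, 0.55, 0.325
print(pr_B_and_C / pr_C)       # Pr(B|C) is approx 0.59, greater than Pr(B) = 0.55
```
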
Often we know a lot more about probabilistic dependence than we know about causal
connections, and we have work to do to figure out the causal connections. It's very hard,
especially in public health settings for example, to figure out what is a cause-effect pair, and
what is the result of a common cause. One of the most important research programs in
modern statistics is developing methods for solving just this problem. The details of those
methods won’t concern us here, but we’ll just note that there’s a big gap between probabilistic
dependence and causal dependence.
On the other hand, it is usually safe to infer probabilistic dependence from causal depen-
dence. If E is one of the (possible) causes of H, then usually E will change the probabilities
of H. We can perhaps dimly imagine exceptions to this rule.
So imagine that a quarterback is trying to decide whether to run or pass on the final
play of a football game. He decides to pass, and the pass is successful, and his team wins.
Now as it happens, had he decided to run, the team would have had just as good a chance
of winning, since their run game was exactly as likely to score as their pass game. It’s not
crazy to think in those circumstances that the decision to pass was among the causes of the
win, but the win was probabilistically independent of the decision to pass. In general we can
imagine cases where some event moves a process down one of two possible paths to success,
and where the other path had just as good a chance of success. (Imagine a doctor deciding
to operate in a certain way, a politician campaigning in one area rather than another, a storm
moving a battle from one piece of land to another, or any number of such cases.) In these
cases we might have causal dependence (though whether we do is a contentious issue in the
metaphysics of causation) without probabilistic dependence.
But such cases are rare at best. It is a completely commonplace occurrence to have prob-
abilistic dependence without clear lines of causal dependence. We have to have very deli-
cately balanced states of the world in order to have causal dependence without probabilistic
dependence, and in everyday cases we can safely assume that causal dependence brings
probabilistic dependence with it.

8.4 Gamblers’ Fallacy


If some events are independent, then the probability of one is independent of the outcomes
of the others. So knowing the result of one event gives you no guidance, not even
probabilistic guidance, as to whether the other will happen.
These points may seem completely banal, but in fact they are very hard to fully incor-
porate into our daily lives. In particular, they are very hard to completely incorporate in
cases where we are dealing with successive outcomes of a particular chance process, such as
a dice roll or a coin flip. In those cases we know that the individual events are independent
of one another. But it's very hard not to think, after a long run of heads say, that the
coin landing tails is 'due'.
This feeling is what is known as the Gamblers' Fallacy. It is the fallacy of thinking that,
when events A and B are independent, what happens in A can be a guide of some kind
to event B.
One way of noting how hard a grip the Gamblers’ Fallacy has over our thoughts is to try
to simulate a random device such as a coin flip. As an exercise, imagine that you’re writing
down the results of a series of 100 coin flips. Don’t actually flip the coin, just write down a
sequence of 100 Hs (for Heads) and Ts (for Tails) that look like what you think a random
series of coin flips will look like. I suspect that it won’t look a lot like what an actual sequence
does look like, in part because it is hard to avoid the Gamblers’ Fallacy.
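
If you want to try the exercise, the following Python sketch generates a genuinely random sequence and reports its longest run of identical outcomes. Hand-written 'random' sequences typically alternate too often, so their longest runs tend to be shorter than the real thing.

```python
import random

def longest_run(seq):
    """Length of the longest run of identical outcomes in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

flips = [random.choice('HT') for _ in range(100)]
print(''.join(flips))
print(longest_run(flips))  # in 100 fair flips, a run of 5 or more is very likely
```
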
Occasionally people will talk about the Inverse Gamblers’ Fallacy, but this is a much less
clear notion. The worry would be someone inferring from the fact that the coin has landed
heads a lot that it will probably land heads next time. Now sometimes, if we know that it
is a fair coin for example, this will be just as fallacious as the Gamblers’ Fallacy itself. But it
isn’t always a fallacy. Sometimes the fact that the coin lands heads a few times in a row is
evidence that it isn’t really a fair coin.
It’s important to remember the gap between causal and probabilistic dependence here.
In normal coin-tossing situations, it is a mistake to think that the earlier throws have a causal
impact on the later throws. But there are many ways in which we can have probabilistic
dependence without causal dependence. And in cases where the coin has been landing
heads a suspiciously large number of times, it might be reasonable to think that there is a
common cause of it landing heads in the past and in the future: namely, that it's a biased
coin! And when there’s a common cause of two causally independent events, they may be
probabilistically dependent. That’s to say, the first event might change the probabilities of
the second event. In those cases, it doesn’t seem fallacious to think that various patterns will
continue.
This does all depend on just how plausible it is that there is such a causal mechanism.
It’s one thing to think, because the coin has landed heads ten times in a row, that it might
be biased. There are many causal mechanisms that could explain that. It’s another thing
to think, because the coin has alternated heads and tails for the last ten tosses that it will
continue to do so in the future. It’s very hard, in normal circumstances, to see what could
explain that. And thinking that patterns for which there’s no natural causal explanation will
continue is probably a mistake.

Chapter 9

Expected Utility
9.1 Expected Values
A random variable is simply a variable that takes different numerical values in different
states. In other words, it is a function from possibilities to numbers. Typically, random
variables are denoted by capital letters. So we might have a random variable X whose value
is the age of the next President of the United States at his or her inauguration. Or we
might have a random variable that is the number of children you will have in your lifetime.
Basically any mapping from possibilities to numbers can be a random variable.
It will be easier to work with a specific example, so let’s imagine the following case.
You’ve asked each of your friends who will win the big football game this weekend, and 9
said the home team will win, while 5 said the away team will win. (Let’s assume draws are
impossible to make the equations easier.) Then we can let X be a random variable measuring
the number of your friends who correctly predicted the result of the game. The value X takes
is given by:

X = 9, if the home team wins
X = 5, if the away team wins
Given a random variable X and a probability function Pr, we can work out the expected
value of that random variable with respect to that probability function. Intuitively, the ex-
pected value of X is a weighted average of the possible values of X, where the weights are
given by the probability (according to Pr) of each value coming about. More formally, we
work out the expected value of X this way. For each case, we multiply the value of X in that
case by the probability of the case obtaining. Then we sum the numbers we’ve got, and the
result is the expected value of X. We’ll write the expected value of X as Exp(X). So if the
probability that the home wins is 0.8, and the probability that the away team wins is 0.2,
then

Exp(X) = 9 × 0.8 + 5 × 0.2
       = 7.2 + 1
       = 8.2

There are a couple of things to note about this result. First, the expected value of X isn’t in
any sense the value that we expect X to take. Indeed, the expected value of X is not even
a value that X could take. So we shouldn’t think that “expected value” is a phrase we can
understand by simply understanding the notion of expectation and of value. Rather, we
should think of the expected value as a kind of average.
Indeed, thinking of the expected value as an average lets us relate it back to the common
notion of expectation. If you repeated the situation here – where there’s an 0.8 chance that
9 of your friends will be correct, and an 0.2 chance that 5 of your friends will be correct
– very often, then you would expect that in the long run the number of friends who were
correct on each occasion would average about 8.2. That is, the expected value of a random
variable X is what you’d expect the average value of X to be if (perhaps per impossibile) the
underlying situation was repeated many many times.
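
Both readings of the expected value, the weighted average and the long-run average, are easy to check with a short sketch. The function below is a direct transcription of the definition; the simulation is just an illustration of the long-run reading.

```python
import random

def expected_value(values, probs):
    """Weighted average: sum of each value times its probability."""
    return sum(v * p for v, p in zip(values, probs))

print(expected_value([9, 5], [0.8, 0.2]))  # 8.2

# The long-run reading: average X over many repetitions of the situation.
samples = [9 if random.random() < 0.8 else 5 for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 8.2
```
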

9.2 Maximise Expected Utility Rule


The orthodox view in modern decision theory is that the right decision is the one that max-
imises the expected utility of your choice. Let’s work through a few examples to see how this
might work. Consider again the decision about whether to take a cheap airline or a more
reliable airline, where the cheap airline offers a lower fare but performs badly in bad weather.
In cases where the plane probably won't run into difficulties, where you have much
to gain by taking the cheaper ticket, and where even if something goes wrong it won't go badly
wrong, it seems that you should take the cheaper plane. Let's set up that situation in a table.

                   Good weather   Bad weather
                   Pr = 0.8       Pr = 0.2
Cheap Airline           10             0
Reliable Airline         6             5

We can work out the expected utility of each action fairly easily.
Exp(Cheap Airline) = 0.8 × 10 + 0.2 × 0
                   = 8 + 0
                   = 8

Exp(Reliable Airline) = 0.8 × 6 + 0.2 × 5
                      = 4.8 + 1
                      = 5.8

So the cheap airline has an expected utility of 8, while the reliable airline has an expected
utility of 5.8. The cheap airline has a higher expected utility, so it is what you should take.
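
In Python the rule is a short helper plus a maximisation. Here is a minimal sketch of the calculation just done; the helper name exp_utility is mine.

```python
def exp_utility(utilities, probs):
    """Expected utility of a choice: utility in each state, weighted by probability."""
    return sum(u * p for u, p in zip(utilities, probs))

probs = [0.8, 0.2]  # good weather, bad weather
choices = {'Cheap Airline': [10, 0], 'Reliable Airline': [6, 5]}
for name, utils in choices.items():
    print(name, exp_utility(utils, probs))  # 8.0 and 5.8
print(max(choices, key=lambda name: exp_utility(choices[name], probs)))  # Cheap Airline
```
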
We’ll now look at three changes to the example. Each change should intuitively change
the correct decision, and we’ll see that the recommendation of the maximise expected utility
rule does change in each case. First, change the downside of getting the cheap airline so it is now more of a risk
to take it.

                   Good weather   Bad weather
                   Pr = 0.8       Pr = 0.2
Cheap Airline           10           -20
Reliable Airline         6             5

Here are the new expected utility considerations.

Exp(Cheap Airline) = 0.8 × 10 + 0.2 × -20
                   = 8 + (-4)
                   = 4

Exp(Reliable Airline) = 0.8 × 6 + 0.2 × 5
                      = 4.8 + 1
                      = 5.8

Now the expected utility of catching the reliable airline is higher than the expected utility of
catching the cheap airline. So it is better to catch the reliable airline.
Alternatively, we could lower the price of the reliable airline, so it is closer to the cheap
airline, even if it isn’t quite as cheap.

                   Good weather   Bad weather
                   Pr = 0.8       Pr = 0.2
Cheap Airline           10             0
Reliable Airline         9             8

Here are the revised expected utility considerations.

Exp(Cheap Airline) = 0.8 × 10 + 0.2 × 0
                   = 8 + 0
                   = 8

Exp(Reliable Airline) = 0.8 × 9 + 0.2 × 8
                      = 7.2 + 1.6
                      = 8.8

And again this is enough to make the reliable airline the better choice.
Finally, we can go back to the original utility tables and simply increase the probability
of bad weather.

                   Good weather   Bad weather
                   Pr = 0.3       Pr = 0.7
Cheap Airline           10             0
Reliable Airline         6             5

We can work out the expected utility of each action fairly easily.

Exp(Cheap Airline) = 0.3 × 10 + 0.7 × 0
                   = 3 + 0
                   = 3

Exp(Reliable Airline) = 0.3 × 6 + 0.7 × 5
                      = 1.8 + 3.5
                      = 5.3

So once again the reliable airline has the higher expected utility, and is the better choice.
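
All three variants can be checked with the same helper as in the sketch above, redefined here so this snippet runs on its own.

```python
def exp_utility(utilities, probs):
    # Expected utility: utility in each state, weighted by the state's probability.
    return sum(u * p for u, p in zip(utilities, probs))

# Variant 1: bigger downside for the cheap airline.
print(exp_utility([10, -20], [0.8, 0.2]), exp_utility([6, 5], [0.8, 0.2]))  # approx 4.0 and 5.8
# Variant 2: the reliable airline is nearly as cheap.
print(exp_utility([10, 0], [0.8, 0.2]), exp_utility([9, 8], [0.8, 0.2]))    # approx 8.0 and 8.8
# Variant 3: bad weather is now more likely.
print(exp_utility([10, 0], [0.3, 0.7]), exp_utility([6, 5], [0.3, 0.7]))    # approx 3.0 and 5.3
```
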

We’ve looked at four versions of the same case. In each case the ordering of the outcomes,
from best to worst, was:
1. Cheap airline and good weather
2. Reliable airline and good weather
3. Reliable airline and bad weather
4. Cheap airline and bad weather
As we originally set up the case, the cheap airline was the better choice. But there were three
ways to change this. First, we increased the possible loss from taking the cheap airline. (That
is, we increased the gap between the third and fourth options.) Second, we decreased the
gain from taking the cheap airline. (That is, we decreased the gap between the first and
second options.) Finally, we increased the risk of things going wrong, i.e. we increased the
probability of the bad weather state. Any of these on their own was sufficient to change the
recommendation that “Maximise Expected Utility” makes. And that’s all to the good, since
any of these things does seem like it should be sufficient to change what’s best to do.

9.3 Structural Features


When using the “Maximise Expected Utility” rule we assign a number to each choice, and
then pick the option with the highest number. Moreover, the number we assign is inde-
pendent of the other options that are available. The number we assign to a choice depends
on the utility of that choice in each state and the probability of the states. Any decision rule
that works this way is guaranteed to have a number of interesting properties.
First, it is guaranteed to be transitive. That is, if it recommends A over B, and B over
C, then it recommends A over C. To see this, let’s write the expected utility of a choice A
as Exp(U(A)). If A is chosen over B, then Exp(U(A)) > Exp(U(B)). And if B is chosen over
C, then Exp(U(B)) > Exp(U(C)). Now >, defined over numbers, is transitive. That is, if
Exp(U(A)) > Exp(U(B)) and Exp(U(B)) > Exp(U(C)), then Exp(U(A)) > Exp(U(C)). So
the rule will recommend A over C.
Second, it satisfies the independence of irrelevant alternatives. Assume A is chosen over
B and C. That is, Exp(U(A)) > Exp(U(B)) and Exp(U(A)) > Exp(U(C)). Then A will be
chosen when the only options are A and B, since Exp(U(A)) > Exp(U(B)). And A will
be chosen when the only options are A and C, since Exp(U(A)) > Exp(U(C)). These two
features are intuitively pleasing features of a decision rule.
Numbers are totally ordered by >. That is, for any two numbers x and y, either x > y
or y > x or x = y. So if each choice is associated with a number, a similar relation holds
among choices. That is, either A is preferable to B, or B is preferable to A, or they are equally
preferable.
Expected utility maximisation never recommends choosing dominated options. As-
sume that A dominates B. For each state Si, write the utility of A in Si as U(A|Si). Then domi-
nance means that for all i, U(A|Si) > U(B|Si). Now Exp(U(A)) and Exp(U(B)) are given by
the following formulae. (In what follows n is the number of possible states.)

Exp(U(A)) = Pr(S1)U(A|S1) + Pr(S2)U(A|S2) + ... + Pr(Sn)U(A|Sn)
Exp(U(B)) = Pr(S1)U(B|S1) + Pr(S2)U(B|S2) + ... + Pr(Sn)U(B|Sn)

Note that the two values are each the sum of n terms. Note also that, given dominance,
each term on the top row is at least as great as the term immediately below it on the
second row. (This follows from the fact that U(A|Si) > U(B|Si) and the fact that Pr(Si) ≥ 0.)
Moreover, at least one of the terms on the top row is greater than the term immediately
below it. (This follows from the fact that U(A|Si) > U(B|Si) and the fact that for at least one
i, Pr(Si) > 0. That in turn has to be true because if Pr(Si) = 0 for each i, then Pr(S1 ∨ S2 ∨
... ∨ Sn) = 0. But S1 ∨ S2 ∨ ... ∨ Sn has to be true.) So Exp(U(A)) has to be greater than Exp(U(B)).
So if A dominates B, it has a higher expected utility.
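
This result can be spot-checked numerically. In the sketch below A strictly dominates B (the utilities are made up for the test), and the assertion holds for every randomly generated probability function over the states.

```python
import random

def exp_u(utils, probs):
    # Expected utility: sum of utility times probability across the states.
    return sum(u * p for u, p in zip(utils, probs))

A = [10, 9, 9, 1]  # A is strictly better than B in every one of the four states
B = [8, 3, 3, 0]

for _ in range(1_000):
    raw = [random.random() for _ in range(4)]
    probs = [r / sum(raw) for r in raw]       # a random probability function over the states
    assert exp_u(A, probs) > exp_u(B, probs)  # dominance forces the higher expectation
print("A had the higher expected utility in every trial")
```
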

Chapter 10

Sure Thing Principle


10.1 Generalising Dominance
The maximise expected utility rule also supports a more general version of dominance. We’ll
state the version of dominance using an example, then spend some time going over how we
know maximise expected utility satisfies that version.
The original dominance principle said that if A is better than B in every state, then A is
simply better than B. But we don’t have to compare choices only in individual states;
we can also compare them across any number of states. So imagine that we have to choose
between A and B and we know that one of four states obtains. The utility of each choice in
each state is given as follows.

     S1   S2   S3   S4
A    10    9    9    0
B     8    3    3    3

And imagine we’re using the maximin rule. Then the rule says that A does better than B in
S1, while B does better than A in S4. The rule also says that B does better than A overall,
since its worst case scenario is 3, while A’s worst case scenario is 0. But we can also compare
A and B with respect to pairs of states. So conditional on us just being in S1 or S2, A is
better. Because between those two states, A’s worst case is 9, while B’s worst case is 3.
Now imagine we’ve given up on maximin, and are applying a new rule we’ll call maxi-
average. The maxiaverage rule tells us to make the choice that has the highest (or maximum)
average of best case and worst case scenarios. The rule says that B is better overall, since it
has a best case of 8 and a worst case of 3 for an average of 5.5, while A has a best case of 10
and a worst case of 0, for an average of 5.
But if we just know we’re in S1 or S2, then the rule recommends A over B. That’s because
among those two states, A has a maximum of 10 and a minimum of 9, for an average of 9.5,
while B has a maximum of 8 and a minimum of 3 for an average of 5.5.
And if we just know we’re in S3 or S4, then the rule also recommends A over B. That’s
because among those two states, A has a maximum of 9 and a minimum of 0, for an average
of 4.5, while B has a maximum of 3 and a minimum of 3 for an average of 3.
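
Here is the maxiaverage calculation as a short sketch, using the table above; slicing the utility lists plays the role of conditioning on {S1, S2} or {S3, S4}.

```python
def maxiaverage(utils):
    """Average of the best-case and worst-case utilities."""
    return (max(utils) + min(utils)) / 2

A = [10, 9, 9, 0]  # utilities of A in S1..S4
B = [8, 3, 3, 3]   # utilities of B in S1..S4

print(maxiaverage(A), maxiaverage(B))          # 5.0 vs 5.5: B is better overall
print(maxiaverage(A[:2]), maxiaverage(B[:2]))  # 9.5 vs 5.5: A is better given S1 or S2
print(maxiaverage(A[2:]), maxiaverage(B[2:]))  # 4.5 vs 3.0: A is better given S3 or S4
```
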
This is a fairly odd result. We know that either we’re in one of S1 or S2, or that we’re in
one of S3 or S4. And the rule tells us that if we find out which, i.e. if we find out we’re in S1
