MScFE 620 DTSP - Video - Transcripts - M1
Hi, in this video, I'll be introducing the notion of probability theory, which is going to be one of the
most important concepts in the course.
The study of probability theory begins with what we call a random experiment. A random
experiment is just an experiment whose outcome cannot be predetermined. An example of such
an experiment is the tossing of three coins. When tossing three coins, the outcome of this random
experiment cannot be predetermined – when we toss the coins, we can never know what the
outcome will be.
Each random experiment is associated with what we call the "sample space", which is just a set
that represents all the possible outcomes of the random experiment. We denote it by Ω and we call
it the set of all possible outcomes of the random experiment.
In our example of tossing three coins, the sample space can be defined as a set that looks like this:
{HHH,HHT, … ,TTT} – it contains (HHH), which is the outcome when all three tosses give a head;
(HHT) where the first two tosses give a head and the last one gives a tail; and so on, up until we get
to (TTT). In total, we will have eight outcomes.
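To make this concrete, here is a minimal Python sketch (not part of the lecture; the variable name omega is our choice) that enumerates the sample space of the three-coin experiment:

```python
from itertools import product

# Sample space for tossing three coins: all sequences of H/T of length 3.
omega = [''.join(outcome) for outcome in product('HT', repeat=3)]

print(omega)       # ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']
print(len(omega))  # 8 outcomes in total
```

Listing the outcomes this way also makes it easy to build events as Python sets of outcomes later on.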
We also define the notion of an event in elementary probability as a subset of 𝛺. For instance, we
can look at the event 𝐴 which is the event that the outcome contains two heads. That event
corresponds to the subset of 𝛺 containing {HHT, HTH, THH}. Unfortunately, though, for the
experiments that we are going to consider, not all subsets of 𝛺 correspond to events. What we will
need to do, then, is restrict the collection of events to a sub-collection, 𝐹 which is a subset of the
powerset of 𝛺.
This sub-collection will contain all the events that we are interested in considering and it should
satisfy the following conditions:
𝒜1 Firstly, it should contain the empty set, meaning that the empty set is always an event no
matter what experiment you're looking at.
𝒜2 The second condition is that if a set 𝐴 is an event – in other words, if 𝐴 belongs to ℱ – then its
complement is also an event, i.e. it also belongs to ℱ.
𝒜3 The third condition is that if a set 𝐴 and another set 𝐵 are both events in ℱ then their union is
also an event in ℱ.
σ𝒜4 The last condition, which we call σ𝒜4, is that if you have a countable collection of events – so
if 𝐴1, 𝐴2, 𝐴3, and so on, are all events in ℱ – then their union is also an event in ℱ.
Any collection of sets that satisfies these four conditions is called a sigma-algebra (σ-algebra). So,
the collection of events should form a σ-algebra. If a sub-collection satisfies only 𝒜1, 𝒜2 and 𝒜3,
we call it an algebra. Therefore, a σ-algebra is simply an algebra with the additional condition that
if you take countably many events in ℱ, their union also belongs to ℱ.
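On a finite sample space these conditions can be checked by brute force. Here is a minimal sketch, assuming a finite Ω (the helper name is_algebra is ours); note that on a finite Ω, conditions 𝒜1–𝒜3 already imply σ𝒜4, because countable unions reduce to finite ones:

```python
def is_algebra(omega, F):
    """Check conditions A1-A3 for a finite collection F of subsets of omega."""
    omega = frozenset(omega)
    F = {frozenset(s) for s in F}
    if frozenset() not in F:                      # A1: the empty set is an event
        return False
    if any(omega - A not in F for A in F):        # A2: closed under complements
        return False
    return all(A | B in F for A in F for B in F)  # A3: closed under unions

omega = {1, 2, 3, 4}
print(is_algebra(omega, [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]))  # True
print(is_algebra(omega, [set(), {1}, {1, 2, 3, 4}]))  # False: complement of {1} missing
```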
We now consider the pair (Ω, ℱ), where Ω is any set and ℱ is a σ-algebra of subsets of Ω. We
will call this a measurable space.
Now let (Ω, ℱ) be a measurable space, which means that Ω is a set and ℱ is a σ-algebra. We define ℙ
to be a probability measure if ℙ is a function from ℱ to [0, 1]. What that means is that it assigns
to each event 𝐴 a number between 0 and 1 and satisfies the following conditions:
ℙ1 The first condition is that ℙ of the empty set must be 0. Thus, the probability of the empty set is
0.
ℙ2 The second condition is that ℙ of any event 𝐴 is always greater than or equal to 0.
ℙ3 The third condition is that if you have a countable collection of pairwise disjoint events – so if
𝐴1, 𝐴2 and so on are pairwise disjoint events – then ℙ of their union is the sum of ℙ(𝐴𝑛), for 𝑛
from 1 to infinity. This is an important condition that we call σ-additivity.
ℙ4 The last condition that must be satisfied by a probability measure is that ℙ(Ω) = 1. This simply
means that it assigns probability 1 to the sample space.
Another way to visualize this is on a number line between 0 and 1. The probability measure assigns
to each event 𝐴 a real number between 0 and 1 in such a way that the assignment satisfies
conditions ℙ1, ℙ2, ℙ3 and ℙ4.
Returning to ℙ4 and what it means, the sample space Ω is assigned probability 1, which we usually
translate as saying that it is a certain event.
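As a sanity check, the uniform measure on the three-coin sample space satisfies all four conditions. A small sketch, assuming fair coins so that ℙ(𝐴) = |𝐴|/8 (the names are ours):

```python
from fractions import Fraction
from itertools import product

# Uniform probability measure on the three-coin sample space: P(A) = |A| / 8.
omega = [''.join(o) for o in product('HT', repeat=3)]

def P(event):
    return Fraction(len(set(event) & set(omega)), len(omega))

A = {'HHT', 'HTH', 'THH'}       # the event "exactly two heads"
B = {'TTT'}                     # disjoint from A
print(P(set()))                 # 0    (P1)
print(P(A))                     # 3/8  (P2: non-negative)
print(P(A | B) == P(A) + P(B))  # True (P3: additivity for disjoint events)
print(P(omega))                 # 1    (P4)
```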
Now, there are other set functions that satisfy conditions ℙ1, ℙ2 and ℙ3. Some examples of them
include length: if you think about the lengths of subsets of the real line, the length of the empty set
is 0, the length of any subset of the real line should be non-negative, and the length of a countable
disjoint union of subsets of the real line should be the sum of their lengths. Of course, the length function does not
satisfy condition ℙ4 because the length of the real line itself, which is the sample space in this case,
is infinite.
Another example is area, which also satisfies ℙ1, ℙ2 and ℙ3 with area playing the role of ℙ; the
same is true of volume.
What we are going to do in this course is study what is called a measure μ, which is just a set
function that satisfies ℙ1, ℙ2, and ℙ3 but not necessarily ℙ4. What we will see is that a probability
measure will simply be a special case of a measure whereby μ(Ω) = 1. That field is called Measure
Theory.
If you want to go through this at your own pace, please refer to the notes.
Now that we have introduced the basics of probability theory, we will begin the study of random
variables.
So, let (Ω, ℱ, ℙ) be a probability space – recall that this means that Ω is a set, ℱ is a σ-algebra on Ω,
and ℙ is a probability measure.
A random variable is a function 𝑋: Ω → ℝ that satisfies a measurability condition. So, if you take
any Borel subset 𝐵 of ℝ – for instance, an interval, the set of natural numbers, or any other Borel
set – and you take the set of all outcomes for which 𝑋 maps into that Borel set, 𝑋−1(𝐵), then that
set must be in ℱ. You will see why we need this – so that we can calculate the probability that 𝑋
belongs to the Borel set.
Now, we can generalize this to any measurable function. So, in general, if we have a measurable
space – for example, (Ω, ℱ) – we define a function 𝑓: Ω → ℝ to be measurable if it satisfies a
similar condition – in other words, if the pre-image of every Borel set belongs to ℱ. So,
essentially, a random variable is just a measurable function whereby the domain space has a
probability measure defined on it, which makes it a probability space.
Consider (Ω, ℱ) to be any measurable space – remember that Ω is a set and ℱ is a σ-algebra, and
take 𝐴 to be a subset of Ω. We define the indicator function of the set 𝐴 as a function 𝐼𝐴 from Ω → ℝ
that satisfies the following condition:
𝐼𝐴(𝜔) = { 1 if 𝜔 ∈ 𝐴
          0 if 𝜔 ∉ 𝐴.
To visualize this, you have Ω and you have 𝐴, as a subset of Ω. The indicator function takes the
value 1 within the set 𝐴 and 0 outside the set 𝐴.
The question is: is this a random variable or in general a measurable function? Let's check that.
If we take 𝐵 to be any Borel subset of ℝ, then the pre-image of 𝐵 will look like this:
𝐼𝐴−1(𝐵) = { Ω    if 0, 1 ∈ 𝐵
            𝐴    if 1 ∈ 𝐵, 0 ∉ 𝐵
            𝐴𝑐   if 0 ∈ 𝐵, 1 ∉ 𝐵
            ∅    if 0, 1 ∉ 𝐵.
o In the first case, if the set 𝐵 contains both 0 and 1, then the pre-image will be everything in
Ω because it is the set of everything that is mapped to 𝐵, and, in this case, 𝐵 contains both
0 and 1. Again, to visualize this, you will have 0 and 1 on the y-axis and Ω
on the x-axis (even though Ω need not be the real line). 𝐼𝐴 maps elements of Ω to either 0
or 1. Now, if the Borel set contains both 0 and 1, then the pre-image will be all of Ω.
o Next, if the Borel set contains 1 but not 0, the pre-image will be 𝐴.
o Similarly, it will be 𝐴𝑐 if the Borel set contains 0 but does not contain 1.
o Finally, it will be the empty set if the Borel set contains neither 0 nor 1.
Now, for 𝐼𝐴 to be measurable with respect to the σ-algebra ℱ, we need the pre-image of any Borel
set to be in ℱ for each of these four cases. As you can see, this will happen if and only if 𝐴 itself
belongs to ℱ, because if 𝐴 belongs to ℱ then – Ω is always in ℱ because it's Ω and the empty set is
always in ℱ because it's the empty set – 𝐴 is in ℱ and 𝐴𝑐 is also in ℱ. So, 𝐼𝐴 is measurable if and only
if 𝐴 belongs to ℱ.
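The four cases can be traced directly in code. A minimal sketch on a finite Ω (the function name indicator_preimage is ours; the finite set B stands in for a Borel set):

```python
# Pre-image of a set B under the indicator of A, following the four cases above.
def indicator_preimage(omega, A, B):
    omega, A = set(omega), set(A)
    if 0 in B and 1 in B:
        return omega          # both values land in B
    if 1 in B:
        return A              # only the value 1 lands in B
    if 0 in B:
        return omega - A      # only the value 0 lands in B: the complement of A
    return set()              # neither value lands in B

omega = {1, 2, 3, 4}
A = {1, 2}
print(indicator_preimage(omega, A, {0, 1}))  # {1, 2, 3, 4}, i.e. omega
print(indicator_preimage(omega, A, {1}))     # {1, 2}, i.e. A
print(indicator_preimage(omega, A, {0}))     # {3, 4}, i.e. the complement of A
print(indicator_preimage(omega, A, {2.5}))   # set(), the empty set
```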
We now introduce the law of a random variable, which is also called the probability distribution of
𝑋. So, let 𝑋: Ω → ℝ be a random variable on the probability space (Ω, ℱ, ℙ). We define the law of 𝑋
as a function ℙ𝑋 from the Borel subsets of ℝ to [0,1], defined as ℙ𝑋(𝐵) ∶= ℙ(𝑋−1(𝐵)). This
means that ℙ𝑋 is actually a probability measure on the Borel σ-algebra.
If we look at this on a number line, we can take the Borel subset of ℝ, take its pre-image, or inverse
image, via 𝑋, and then you measure the probability of the resulting set, ℙ(𝑋−1(𝐵)), in Ω. That is how
the law of 𝑋 is defined.
Let's consider the same example where 𝐴 is a measurable subset and 𝑋 is the indicator of 𝐴. Recall
that we found the pre-image of any Borel set, 𝐵, to be the following:
𝑋−1(𝐵) = { Ω    if 0, 1 ∈ 𝐵
           𝐴    if 1 ∈ 𝐵, 0 ∉ 𝐵
           𝐴𝑐   if 0 ∈ 𝐵, 1 ∉ 𝐵
           ∅    if 0, 1 ∉ 𝐵.
So, this is the pre-image. Therefore, we can calculate the law of 𝑋, as the above equation implies
that the law of 𝑋 will be as follows:
ℙ𝑋(𝐵) = { 1       if 0, 1 ∈ 𝐵
          ℙ(𝐴)    if 1 ∈ 𝐵, 0 ∉ 𝐵
          ℙ(𝐴𝑐)   if 0 ∈ 𝐵, 1 ∉ 𝐵
          0       if 0, 1 ∉ 𝐵.
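These four values can be checked numerically for the three-coin experiment with 𝐴 the event "exactly two heads" under the uniform measure. A small sketch (the names are ours):

```python
from fractions import Fraction
from itertools import product

# Law of X = I_A for A = "exactly two heads", P uniform on the 8 outcomes.
omega = [''.join(o) for o in product('HT', repeat=3)]
A = {w for w in omega if w.count('H') == 2}

def law(B):
    # P_X(B) = P(X^{-1}(B)), computed by listing the pre-image directly
    preimage = {w for w in omega if (1 if w in A else 0) in B}
    return Fraction(len(preimage), len(omega))

print(law({0, 1}))   # 1    : B contains both 0 and 1
print(law({1}))      # 3/8  : P(A)
print(law({0}))      # 5/8  : P(A complement)
print(law({7}))      # 0    : B contains neither 0 nor 1
```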
Unit 2: Expectation
Hi, in this video we introduce the expectation of a random variable 𝑋 and the integral of a function
𝑓 with respect to a measure μ.
What we want to do is define what we mean by the integral of 𝑓 with respect to the measure μ.
Written in full:
∫Ω 𝑓 𝑑μ.
We will build up the definition in four steps.
1 𝑓 = 𝐼𝐴 , 𝐴 ∈ ℱ
Firstly, we will assume that 𝑓 is the indicator of a set 𝐴, where 𝐴 is measurable. We will
define what integration is in that special case.
2 Secondly, we will assume that 𝑓 is what we call a simple function. A simple function is
simply a finite linear combination of indicators.
3 𝑓 ≥ 0
Next, we are going to assume that 𝑓 is positive (𝑓 ≥ 0) and measurable. We will use the
definition for simple functions to define integration in that special case.
4 𝑓 ∈ 𝑀ℱ
Finally, we are going to assume that 𝑓 is a general measurable function and define the
notion of integrability.
We will begin with step one: indicators. So, let 𝑓 = 𝐼𝐴 . We define the integral of 𝑓 with respect to
the measure μ as simply the measure of 𝐴. Written in full:
∫Ω 𝑓 𝑑μ ≔ μ(𝐴).
We move onto the next step where 𝑓 is a function that we call a simple function. This can be
represented as a finite linear combination of indicators as follows:
𝑓 = ∑ α𝑘 𝐼𝐴𝑘 (sum over 𝑘 = 1, … , 𝑛).
In this case, we define
∫Ω 𝑓 𝑑μ ≔ ∑ α𝑘 μ(𝐴𝑘) (sum over 𝑘 = 1, … , 𝑛).
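To illustrate the definition for simple functions, here is a short sketch on a finite space where the measure is given by point masses (the helper name integrate_simple is ours):

```python
from fractions import Fraction

# Integral of a simple function f = sum_k alpha_k * I_{A_k} with respect to a
# measure mu given by point masses on a finite space.
def integrate_simple(alphas, sets, mu):
    measure = lambda A: sum(mu[w] for w in A)          # mu(A) = total mass in A
    return sum(a * measure(A) for a, A in zip(alphas, sets))

mu = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
# f = 2 * I_{1,2} + 5 * I_{3}: integral = 2 * mu({1,2}) + 5 * mu({3})
print(integrate_simple([2, 5], [{1, 2}, {3}], mu))     # 11/4
```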
Now, one might ask why, if we represent this function in a different way, will the integral
necessarily be the same. We can show that it is indeed the same and so, this definition is
independent of the representation of 𝑓.
We move on to the third step where the function 𝑓 is now positive. So, 𝑓 is positive and
measurable at the same time, which we can write as:
𝑓 ≥ 0, 𝑓 ∈ 𝑀ℱ +
We are going to define the integral of 𝑓 in that special case as the supremum of the integrals of
simple functions that lie below 𝑓. Written in full:
∫Ω 𝑓 𝑑μ ≔ sup { ∫Ω 𝑠 𝑑μ : 𝑠 simple, 0 ≤ 𝑠 ≤ 𝑓 }.
So, we take the integral of every simple function 𝑠 with 0 ≤ 𝑠 ≤ 𝑓 and we take the supremum of
that set. Note that this integral could be infinite.
In the next step, we take 𝑓 to be a measurable function that may be sign-changing – and, if that is
the case, we define 𝑓 + to be the positive part of 𝑓, so, the maximum of 𝑓 and 0. To visualize this, we
can imagine a function that takes both signs. 𝑓 + will simply take the value of 𝑓 when 𝑓 is
positive, and 0 when 𝑓 is negative – so that's what 𝑓 + is. Similarly, we define
𝑓 − which is called the negative part of 𝑓, and that will be the maximum of negative 𝑓 and 0. What
happens is, when 𝑓 is negative, it reflects it, so, it takes the negative of that, and, when 𝑓 is positive,
it remains at 0 – so, that's 𝑓 −. Note that 𝑓 can be written as the difference between 𝑓 + and 𝑓 −,
and the absolute value of 𝑓 is the sum of 𝑓 + and 𝑓 −.
We are therefore going to define the integral of 𝑓 in this case as the integral of 𝑓 + minus the
integral of 𝑓 −, provided at least one of them is finite. Otherwise, the integral is undefined. Written
in full:
∫Ω 𝑓 𝑑μ ≔ ∫Ω 𝑓+ 𝑑μ − ∫Ω 𝑓− 𝑑μ.
We are allowed to make this definition because we know the integral of 𝑓 + since 𝑓 + is a positive
measurable function. Similarly, we can also define the integral of 𝑓 − since 𝑓 − is a positive and
measurable function.
We say that 𝑓 is integrable if both of these integrals are finite, which is equivalent to the integral
of the absolute value of 𝑓 being finite:
∫Ω |𝑓| 𝑑μ < ∞.
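The decomposition into positive and negative parts is easy to check pointwise. A minimal sketch on a finite domain (the values of f are made up for illustration):

```python
# Positive and negative parts of a sign-changing function on a finite domain.
f = {1: 3.0, 2: -2.0, 3: 0.5, 4: -0.5}

f_plus  = {w: max(v, 0.0)  for w, v in f.items()}   # f+ = max(f, 0)
f_minus = {w: max(-v, 0.0) for w, v in f.items()}   # f- = max(-f, 0)

# f = f+ - f- and |f| = f+ + f-, pointwise:
print(all(f[w] == f_plus[w] - f_minus[w] for w in f))       # True
print(all(abs(f[w]) == f_plus[w] + f_minus[w] for w in f))  # True
```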
Let us now look at some important special cases of this integral.
1 The first one, which is probably the most important for us, is the special case where μ is
actually a probability measure. We are going to denote it by ℙ.
In that case, (Ω, ℱ, ℙ) is a probability space and 𝑋 is a random variable, where 𝑋: Ω → ℝ. In
that case, the integral of 𝑋 with respect to ℙ is defined as the expected value of the random
variable 𝑋. Written in full:
∫Ω 𝑋 𝑑ℙ =: 𝐸(𝑋).
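On a finite probability space this integral is the familiar weighted sum. A small sketch, taking the random variable that counts heads in the three-coin experiment (the names are ours):

```python
from fractions import Fraction
from itertools import product

# E(X) as the integral of X with respect to P on a finite probability space.
omega = [''.join(o) for o in product('HT', repeat=3)]
P = {w: Fraction(1, 8) for w in omega}     # uniform probability measure
X = lambda w: w.count('H')                 # X = number of heads

expectation = sum(X(w) * P[w] for w in omega)
print(expectation)   # 3/2
```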
2 In the second example, we will take Ω to be the real line ℝ, ℱ to be the σ-algebra of Borel
subsets of ℝ, and μ to be the Lebesgue measure, λ1 .
If we consider an interval, [𝑎, 𝑏], as a subset of ℝ, then the integral over [𝑎, 𝑏] of a function
𝑓, with respect to the Lebesgue measure, turns out to be very simple in many special cases.
It is equal to the Riemann integral of 𝑓(𝑥), provided that 𝑓 is Riemann integrable on the
interval [𝑎, 𝑏]. Written in full:
𝑏
∫[𝑎,𝑏] 𝑓 𝑑λ1 = ∫ₐᵇ 𝑓(𝑥) 𝑑𝑥.
3 In another example, we take Ω to be the natural numbers, and ℱ to be the powerset of the
natural numbers, and consider μ to be equal to the counting measure that counts the
number of elements in each set. Then, the integral of a function, 𝑓, with respect to the
counting measure, is simply equal to a sum of 𝑓(𝑛). So, summation is a special case of
integration. Written in full:
∫Ω 𝑓 𝑑# = ∑𝑛∈ℕ 𝑓(𝑛).
4 As a last example, we consider any measurable space (Ω, ℱ) and equip it with the Dirac
measure δ𝑎 at a point 𝑎 ∈ Ω. The Dirac measure, as you will recall, is the measure that
assigns to a set 𝐴 the value 1 if 𝑎 belongs to 𝐴, and 0 otherwise. Then the integral of a
function with respect to the Dirac measure is simply equal to the function evaluated at that point.
Written in full:
∫Ω 𝑓 𝑑δ𝑎 = 𝑓(𝑎).
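The counting-measure and Dirac-measure cases are one-liners in code. A brief sketch (the function f and the point a are illustrative choices of ours):

```python
# Counting measure on {1, ..., 5}: integration is just summation.
f = lambda n: n * n
print(sum(f(n) for n in range(1, 6)))   # 55 = f(1) + ... + f(5)

# Dirac measure concentrated at a = 3: integration is evaluation at a.
a = 3
print(f(a))                             # 9 = f(3)
```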
Now that we have understood what expectation is, we can move on to the Radon-Nikodym
theorem.
Hi, in this video we introduce the Radon-Nikodym theorem, which is a theorem about the
relationship between measures on a measurable space.
Let (Ω, ℱ) be a measurable space and take μ and λ (λ does not mean the Lebesgue measure in this
case) to be measures on this measurable space, that is, functions from ℱ to [0, ∞]. We say that λ is
absolutely continuous with respect to μ, and we write λ ≪ μ, if μ(𝐴) = 0 implies λ(𝐴) = 0. We say
that λ and μ are equivalent if, for every set 𝐴, λ(𝐴) = 0 is equivalent to μ(𝐴) = 0. Another way of
saying this is that 𝜆 is absolutely continuous with respect to 𝜇, and 𝜇 is also absolutely continuous
with respect to 𝜆. Finally, we say that 𝜆 and 𝜇 are mutually singular, and we write it like this (λ ⊥ μ)
to mean that there exists a set 𝐴 such that λ(𝐴) = 0 and μ(𝐴𝑐) = 0. So, each measure sends one of
the two complementary sets to 0.
As an example, let's take Ω to be ℝ and 𝐹 to be the Borel 𝜎-algebra on ℝ. Consider the following
measures: 𝛿0 and 𝜆1 , where 𝜆1 is the Lebesgue measure and 𝛿0 is a Dirac measure concentrated at
zero. These two measures are mutually singular, since the set 𝐴 can be picked to be the singleton
{0}. In that case, δ0({0}𝑐) = 0, and λ1({0}) = 0 as well, because the Lebesgue measure sends
singletons to zero.
Note first that if 𝑓 is a positive measurable function, then λ(𝐴) ≔ ∫𝐴 𝑓 𝑑μ defines a measure λ that
is absolutely continuous with respect to μ. The Radon-Nikodym theorem says that the converse is
also true: if λ is absolutely continuous with respect to μ then, in fact, there exists a function 𝑓
that is positive and measurable such that λ(𝐴) = ∫𝐴 𝑓 𝑑μ for every 𝐴 in ℱ. So all measures λ that
are absolutely continuous with respect to μ are in fact of this form – they arise as an integral.
This is true provided that the measures λ and μ are σ-finite. The function 𝑓, denoted by 𝑑λ/𝑑μ, is
called the Radon-Nikodym derivative of λ with respect to μ, and it is unique almost everywhere.
Let us look at how we can apply the Radon-Nikodym theorem. In this example, we will take (Ω, ℱ)
again to be any measurable space. Consider a measure μ and let λ = 𝑎μ, where 𝑎 is a
positive constant (𝑎 > 0). The measure λ is just a scalar multiple of μ. Now, if μ(𝐴) = 0, then λ(𝐴),
which is 𝑎 times 𝜇 of 𝐴 (𝑎𝜇(𝐴)), will also be equal to 0. This implies that λ is absolutely continuous
with respect to μ. And, assuming that μ is 𝜎-finite, that would imply that λ is also 𝜎 -finite; hence a
Radon-Nikodym derivative exists.
We can calculate it as follows: λ(𝐴) is equal to 𝑎 times μ(𝐴), which is 𝑎 times the integral over 𝐴 of
1 with respect to μ, which is equal to the integral over 𝐴 of 𝑎 𝑑μ. Written in full:
λ(𝐴) = 𝑎μ(𝐴) = ∫𝐴 𝑎 𝑑μ ⇒ 𝑑λ/𝑑μ = 𝑎.
This implies that the Radon-Nikodym derivative of 𝜆 with respect to μ is equal to 𝑎 itself.
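This calculation can be verified on a finite space. A minimal sketch, assuming μ is given by point masses (all names are ours):

```python
from fractions import Fraction

# lambda = a * mu on a finite space; check lambda(A) = integral over A of a d(mu).
a = Fraction(3)
mu = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}
lam = {w: a * m for w, m in mu.items()}         # lambda = a * mu, point by point

A = {1, 3}
lam_A = sum(lam[w] for w in A)                  # lambda(A)
integral = sum(a * mu[w] for w in A)            # integral over A of the constant a
print(lam_A == integral)   # True: the Radon-Nikodym derivative is the constant a
```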