The Bivariate Normal Distribution
This is Section 4.7 of the 1st edition (2002) of the book Introduction to Probability, by D. P. Bertsekas and J. N. Tsitsiklis. The material in this section was not included in the 2nd edition (2008).
Let U and V be two independent normal random variables, and consider two
new random variables X and Y of the form
X = aU + bV,
Y = cU + dV,
where a, b, c, and d are some scalars. Each one of the random variables X and Y is
normal, since it is a linear function of independent normal random variables.†
Furthermore, because X and Y are linear functions of the same two independent
normal random variables, their joint PDF takes a special form, known as the bi-
variate normal PDF. The bivariate normal PDF has several useful and elegant
properties and, for this reason, it is a commonly employed model. In this section,
we derive many such properties, both qualitative and analytical, culminating in
a closed-form expression for the joint PDF. To keep the discussion simple, we
restrict ourselves to the case where X and Y have zero mean.
Note that if X and Y are jointly normal, then any linear combination
Z = s1 X + s2 Y
of X and Y is also normal. To see this, substitute the expressions for X and Y in terms of U and V, which gives
Z = s1 (aU + bV) + s2 (cU + dV).
† For the purposes of this section, we adopt the following convention. A random
variable which is always equal to a constant will also be called normal, with zero
variance, even though it does not have a PDF. With this convention, the family of
normal random variables is closed under linear operations. That is, if X is normal,
then aX + b is also normal, even if a = 0.
Thus, Z is the sum of the independent normal random variables (as1 + cs2 )U
and (bs1 + ds2 )V , and is therefore normal.
A very important property of jointly normal random variables, which will be the starting point for our development, is that zero correlation implies independence.
To verify this, suppose that X and Y are jointly normal and uncorrelated, and fix some scalars s1 and s2. By the preceding discussion, the random variable Z = s1 X + s2 Y is normal and, since cov(X, Y) = 0, its variance is
σZ² = s1²σX² + s2²σY².
Recall that the transform associated with a zero-mean normal random variable Z is
E[e^{sZ}] = MZ(s) = e^{σZ²s²/2},
so that E[e^Z] = MZ(1) = e^{σZ²/2}.
This leads to the following formula for the multivariate transform associated
with the uncorrelated pair X and Y :
MX,Y(s1, s2) = E[e^{s1 X + s2 Y}] = E[e^Z] = e^{σZ²/2} = e^{(s1²σX² + s2²σY²)/2}.
Now let X̄ and Ȳ be independent zero-mean normal random variables with the same variances σX² and σY² as X and Y, respectively. Being independent, they are also uncorrelated, and the same calculation yields
MX̄,Ȳ(s1, s2) = e^{(s1²σX² + s2²σY²)/2}.
Thus, the two pairs of random variables (X, Y) and (X̄, Ȳ) are associated with the same multivariate transform. Since the multivariate transform completely determines the joint PDF, it follows that the pair (X, Y) has the same joint PDF as the pair (X̄, Ȳ). Since X̄ and Ȳ are independent, X and Y must also be independent, which establishes our claim.
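As a quick numerical sanity check of this transform identity, the following sketch (assuming the numpy library is available; the coefficients a, b, c, d are illustrative and chosen so that cov(X, Y) = ac + bd = 0) compares a Monte Carlo estimate of E[e^{s1 X + s2 Y}] with the closed-form value.

```python
# Monte Carlo check of the multivariate transform of an uncorrelated jointly
# normal pair (X, Y) built from independent standard normals U and V.
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = 1.0, 2.0, 2.0, -1.0          # illustrative; ac + bd = 0, so cov(X, Y) = 0
U = rng.standard_normal(1_000_000)
V = rng.standard_normal(1_000_000)
X, Y = a * U + b * V, c * U + d * V

s1, s2 = 0.3, -0.2                        # arbitrary small transform arguments
var_X, var_Y = a**2 + b**2, c**2 + d**2   # variances of X and Y (var U = var V = 1)
empirical = np.mean(np.exp(s1 * X + s2 * Y))
predicted = np.exp((s1**2 * var_X + s2**2 * var_Y) / 2)
print(empirical, predicted)               # the two values should be close
```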
Suppose now that X and Y are jointly normal, with zero means and positive variances, and let
ρ = E[XY] / (σX σY)
denote their correlation coefficient. Define
X̂ = ρ (σX/σY) Y,      X̃ = X − X̂.
Since X and Y are linear functions of the independent normal random variables U and V, so are X̃ and Y; in particular, X̃ and Y are jointly normal. Furthermore,
E[X̃ Y] = E[XY] − ρ (σX/σY) E[Y²] = ρ σX σY − ρ σX σY = 0,
so X̃ and Y are uncorrelated and, by the result established above, independent. Writing X = X̂ + X̃, where X̂ is completely determined by Y and X̃ is independent of Y, we conclude that the conditional distribution of X, given Y, is normal with mean X̂ and with variance equal to the (unconditional) variance σX̃² of X̃. The variance of X̃ can be found with the following calculation:
σX̃² = E[(X − ρ (σX/σY) Y)²]
    = σX² − 2ρ (σX/σY) E[XY] + ρ² (σX²/σY²) E[Y²]
    = σX² − 2ρ²σX² + ρ²σX²
    = (1 − ρ²)σX².
Having determined the parameters of the PDF of X̃ and of the conditional PDF
of X, we can give explicit formulas for these PDFs. We keep assuming that
X and Y have zero means and positive variances. Furthermore, to avoid the degenerate case where X̃ is identically zero, we assume that |ρ| < 1. We have
fX̃(x̃) = fX̃|Y(x̃ | y) = (1 / (√(2π) √(1 − ρ²) σX)) e^{−x̃²/(2σX̃²)},
and
fX|Y(x | y) = (1 / (√(2π) √(1 − ρ²) σX)) e^{−(x − ρ (σX/σY) y)²/(2σX̃²)},
where
σX̃² = (1 − ρ²)σX².
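These conditional-distribution parameters can be checked by simulation. The sketch below (assuming numpy; the coefficients defining X and Y are illustrative) estimates ρ from the samples, forms X̃ = X − ρ(σX/σY)Y, and verifies that X̃ is essentially uncorrelated with Y and has variance close to (1 − ρ²)σX².

```python
# Monte Carlo check of the conditional mean and variance formulas.
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal(1_000_000)
V = rng.standard_normal(1_000_000)
X = U + 0.5 * V                            # illustrative jointly normal pair
Y = U - V

sx, sy = X.std(), Y.std()
rho = np.mean(X * Y) / (sx * sy)           # sample correlation coefficient (zero means)
X_tilde = X - rho * (sx / sy) * Y          # residual of the linear estimator
print(np.mean(X_tilde * Y))                # ~ 0: residual is uncorrelated with Y
print(X_tilde.var(), (1 - rho**2) * sx**2) # the two variances should agree
```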
Using the multiplication rule fX,Y(x, y) = fY(y) fX|Y(x | y) and combining the exponents, we obtain the joint PDF of X and Y in the form
fX,Y(x, y) = (1 / (2π √(1 − ρ²) σX σY)) e^{−q(x, y)},
where the exponent is the quadratic function
q(x, y) = ( x²/σX² − 2ρ xy/(σX σY) + y²/σY² ) / (2(1 − ρ²)).
In the special case where X and Y are uncorrelated (ρ = 0), the joint PDF reduces to
fX,Y(x, y) = (1 / (2π σX σY)) e^{−x²/(2σX²) − y²/(2σY²)},
which is just the product of two independent normal PDFs. We can get some
insight into the form of this PDF by considering its contours, i.e., sets of points
at which the PDF takes a constant value. These contours are described by an
equation of the form
x²/σX² + y²/σY² = constant,
and are ellipses whose two axes are horizontal and vertical.
In the more general case where X and Y are dependent, a typical contour
is described by
x²/σX² − 2ρ xy/(σX σY) + y²/σY² = constant,
and is again an ellipse, but its axes are no longer horizontal and vertical. Figure
4.11 illustrates the contours for two cases, one in which ρ is positive and one in
which ρ is negative.
Figure 4.11: Contours of the bivariate normal PDF. The diagram on the left (respectively, right) corresponds to a case of positive (respectively, negative) correlation coefficient ρ.
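As a spot check of the closed-form joint PDF, the following sketch (assuming numpy and scipy are available; the values of σX, σY, and ρ are illustrative) evaluates the formula with exponent q(x, y) and compares it with scipy's multivariate normal PDF at a few points.

```python
# Compare the closed-form bivariate normal PDF with scipy's implementation.
import numpy as np
from scipy.stats import multivariate_normal

sx, sy, rho = 2.0, 3.0, 0.6                # illustrative zero-mean parameters
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
dist = multivariate_normal(mean=[0.0, 0.0], cov=cov)

def f_closed_form(x, y):
    q = (x**2 / sx**2 - 2 * rho * x * y / (sx * sy) + y**2 / sy**2) / (2 * (1 - rho**2))
    return np.exp(-q) / (2 * np.pi * sx * sy * np.sqrt(1 - rho**2))

for x, y in [(0.0, 0.0), (1.0, -2.0), (2.5, 1.5)]:
    print(f_closed_form(x, y), dist.pdf([x, y]))   # each pair of values should match
```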
Example 4.28. Suppose that X and Z are zero-mean jointly normal random variables, such that σX² = 4, σZ² = 17/9, and E[XZ] = 2. We define a new random variable Y = 2X − 3Z. We wish to determine the PDF of Y, the conditional PDF of X given Y, and the joint PDF of X and Y.
As noted earlier, a linear function of two jointly normal random variables is
also normal. Thus, Y is normal with variance
σY² = E[(2X − 3Z)²] = 4E[X²] + 9E[Z²] − 12E[XZ] = 4·4 + 9·(17/9) − 12·2 = 9.
Hence, Y has the normal PDF
fY(y) = (1 / (√(2π) · 3)) e^{−y²/18}.
We next note that X and Y are jointly normal. The reason is that X and Z are
linear functions of two independent normal random variables (by the definition of
joint normality), so that X and Y are also linear functions of the same independent
normal random variables. The covariance of X and Y is equal to
E[XY] = E[X(2X − 3Z)] = 2E[X²] − 3E[XZ] = 2·4 − 3·2 = 2.
Hence, the correlation coefficient of X and Y is
ρ = E[XY] / (σX σY) = 2/(2·3) = 1/3.
The conditional expectation of X given Y is
E[X | Y] = ρ (σX/σY) Y = (1/3)(2/3) Y = (2/9) Y.
Furthermore, the conditional variance of X given Y is
σX̃² = (1 − ρ²)σX² = (1 − 1/9)·4 = 32/9,
so that σX̃ = √32 / 3. Hence, the conditional PDF of X given Y is
fX|Y(x | y) = (3 / (√(2π) √32)) e^{−(x − (2y/9))² / (2·32/9)}.
Finally, the joint PDF of X and Y is obtained using either the multiplication
rule fX,Y (x, y) = fX (x)fX|Y (x | y), or by using the earlier developed formula for
the exponent q(x, y), and is equal to
fX,Y(x, y) = (1 / (2π √32)) e^{−( x²/4 − (2/3)·xy/(2·3) + y²/9 ) / (2(1 − 1/9))}.
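The numbers in this example can be verified by simulation. The sketch below (assuming numpy) samples (X, Z) from the stated covariance matrix, forms Y = 2X − 3Z, and compares the sample statistics with the values 9, 1/3, 2/9, and 32/9 derived above.

```python
# Monte Carlo check of Example 4.28.
import numpy as np

rng = np.random.default_rng(2)
cov_XZ = [[4.0, 2.0], [2.0, 17.0 / 9.0]]     # var(X) = 4, var(Z) = 17/9, E[XZ] = 2
X, Z = rng.multivariate_normal([0.0, 0.0], cov_XZ, size=1_000_000).T
Y = 2 * X - 3 * Z

print(Y.var())                               # ~ 9
print(np.mean(X * Y) / (X.std() * Y.std()))  # ~ 1/3, the correlation coefficient
print(np.mean(X * Y) / Y.var())              # ~ 2/9, the coefficient of Y in E[X | Y]
print((X - (2.0 / 9.0) * Y).var())           # ~ 32/9, the conditional variance
```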
We end with a cautionary note. If X and Y are jointly normal, then each
random variable X and Y is normal. However, the converse is not true. Namely,
if each of the random variables X and Y is normal, it does not follow that
they are jointly normal, even if they are uncorrelated. This is illustrated in the
following example.
Example 4.29. Let X have a normal distribution with zero mean and unit
variance. Let Z be independent of X, with P(Z = 1) = P(Z = −1) = 1/2. Let
Y = ZX, which is also normal with zero mean. The reason is that conditioned
on either value of Z, Y has the same normal distribution, hence its unconditional
distribution is also normal. Furthermore,
E[XY] = E[ZX²] = E[Z] E[X²] = 0 · 1 = 0,
so X and Y are uncorrelated. On the other hand, X and Y are clearly dependent.
(For example, if X = 1, then Y must be either −1 or 1.) If X and Y were jointly
normal, we would have a contradiction to our earlier conclusion that zero correlation
implies independence. It follows that X and Y are not jointly normal, even though
both marginal distributions are normal.
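A short simulation (assuming numpy) illustrates the point: the sample correlation of X and Y is close to zero, yet half of the samples satisfy X + Y = 0 exactly, which could not happen if X and Y were jointly normal and uncorrelated.

```python
# Simulation of Example 4.29: Y = Z X is normal and uncorrelated with X,
# but X and Y are not jointly normal (and not independent).
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(1_000_000)
Z = rng.choice([-1.0, 1.0], size=X.size)
Y = Z * X

print(np.mean(X * Y))          # ~ 0: X and Y are uncorrelated
print(np.mean(X + Y == 0.0))   # ~ 0.5: X + Y = 0 whenever Z = -1
```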
The development in this section generalizes to the case of more than two random
variables. For example, we can say that the random variables X1 , . . . , Xn are
jointly normal if all of them are linear functions of a set U1 , . . . , Un of independent
normal random variables. We can then establish the natural extensions of the
results derived in this section. For example, it is still true that zero correlation
implies independence, that the conditional expectation of one random variable
given some of the others is a linear function of the conditioning random variables,
and that the conditional PDF of X1 , . . . , Xk given Xk+1 , . . . , Xn is multivariate
normal. Finally, there is a closed-form expression for the joint PDF. Assuming
that none of the random variables is a deterministic function of the others, we
have
fX1,...,Xn(x1, . . . , xn) = c e^{−q(x1,...,xn)},
where c is a normalizing constant and where q(x1 , . . . , xn ) is a quadratic function
of x1 , . . . , xn that increases to infinity as the magnitude of the vector (x1 , . . . , xn )
tends to infinity.
Multivariate normal models are very common in statistics, econometrics,
signal processing, feedback control, and many other fields. However, a full de-
velopment falls outside the scope of this text.
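To make the form c e^{−q(x1,...,xn)} concrete, the sketch below (assuming numpy and scipy; the linear map A is illustrative) builds X = AU from independent standard normals and uses the standard explicit choice q(x) = xᵀΣ⁻¹x/2 with Σ = AAᵀ and c = 1/√((2π)ⁿ det Σ), which is not derived in this text; the result is compared with scipy's multivariate normal PDF.

```python
# The multivariate normal PDF as a normalizing constant times exp(-quadratic).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))              # illustrative invertible linear map
Sigma = A @ A.T                              # covariance matrix of X = A @ U

x = rng.standard_normal(n)                   # an arbitrary evaluation point
q = x @ np.linalg.inv(Sigma) @ x / 2         # quadratic exponent q(x)
c = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))   # normalizing constant
print(c * np.exp(-q))
print(multivariate_normal(mean=np.zeros(n), cov=Sigma).pdf(x))  # should match
```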
Problem 1. Consider the random variables
Y1 = 2X1 + X2,      Y2 = X1 − X2.
Find E[Y1], E[Y2], cov(Y1, Y2), and the joint PDF fY1,Y2.
Solution. The means are given by
The bivariate normal is determined by the means, the variances, and the correlation
coefficient, so we need to calculate the variances. We have
Similarly,
σY2² = E[Y2²] − µY2² = 5.
Thus,
ρ(Y1, Y2) = cov(Y1, Y2) / (σY1 σY2) = 1/5.
To write the joint PDF of Y1 and Y2 , we substitute the above values into the formula
for the bivariate normal density function.
Problem 2. The random variables X and Y are described by a joint PDF of the
form
fX,Y(x, y) = c e^{−8x² − 6xy − 18y²}.
Find the means, variances, and the correlation coefficient of X and Y . Also, find the
value of the constant c.
Solution. We recognize this as a bivariate normal PDF, with zero means. By comparing
8x2 + 6xy + 18y 2 with the exponent
q(x, y) = ( x²/σX² − 2ρ xy/(σX σY) + y²/σY² ) / (2(1 − ρ²)),
we obtain
σX²(1 − ρ²) = 1/4,      σY²(1 − ρ²) = 1/9,      (1 − ρ²)σX σY = −ρ/3.
Since [(1 − ρ²)σX σY]² = [σX²(1 − ρ²)][σY²(1 − ρ²)] = (1/4)·(1/9), and since (1 − ρ²)σX σY > 0, we also have
(1 − ρ²)σX σY = 1/6,
which implies that ρ = −1/2. Thus, σX² = 1/3 and σY² = 4/27. Finally,
c = 1 / (2π √(1 − ρ²) σX σY) = √27 / (2π).
Problem 3. Suppose that X and Y are independent normal random variables with
the same variance. Show that X − Y and X + Y are independent.
Solution. It suffices to show that the zero-mean jointly normal random variables
X − Y − E[X − Y ] and X + Y − E[X + Y ] are independent. We can therefore, without
loss of generality, assume that X and Y have zero mean. To prove independence, under
the zero-mean assumption, it suffices to show that the covariance of X − Y and X + Y
is zero. Indeed,
cov(X − Y, X + Y) = E[(X − Y)(X + Y)] = E[X²] − E[Y²] = 0, where the last equality holds because X and Y have the same variance (and zero mean).
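A quick simulation (assuming numpy; the common standard deviation is illustrative) is consistent with this conclusion: the sample covariance of X − Y and X + Y is near zero, and a simple check on events behaves as independence predicts.

```python
# Simulation check for Problem 3: X - Y and X + Y are independent when
# X and Y are independent normal random variables with the same variance.
import numpy as np

rng = np.random.default_rng(5)
sigma = 1.7                                  # illustrative common standard deviation
X = sigma * rng.standard_normal(1_000_000)
Y = sigma * rng.standard_normal(1_000_000)
S, D = X + Y, X - Y

print(np.mean(S * D) - S.mean() * D.mean())  # ~ 0: zero covariance
# joint vs. product of marginal probabilities for the events {D > 1} and {S > 1}:
print(np.mean((D > 1) & (S > 1)), np.mean(D > 1) * np.mean(S > 1))  # should agree
```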
fX,Y|C(x, y) = fX,Y(x, y) / P(C) = (1 / (2πσ²)) e^{−(x² + y² − c²)/(2σ²)}.
Problem 5.* Suppose that X and Y are jointly normal random variables. Show that
E[X | Y] = E[X] + ρ (σX/σY) (Y − E[Y]).
Hint: Consider the random variables X − E[X] and Y − E[Y ] and use the result
established in the text for the zero-mean case.
Solution. Let X̃ = X − E[X] and Ỹ = Y − E[Y ]. The random variables X̃ and Ỹ
are jointly normal. This is because if X and Y are linear functions of two independent
normal random variables U and V , then X̃ and Ỹ are also linear functions of U and
V . Therefore, as established in the text,
E[X̃ | Ỹ] = ρ(X̃, Ỹ) (σX̃/σỸ) Ỹ.
Since X and X̃ only differ by a constant, we have σX̃ = σX and, similarly, σỸ = σY .
Finally,
cov(X̃, Ỹ) = E[X̃ Ỹ] = E[(X − E[X])(Y − E[Y])] = cov(X, Y),
from which it follows that ρ(X̃, Ỹ ) = ρ(X, Y ). The desired formula follows by substi-
tuting the above relations in the formula for E[X̃ | Ỹ ].
Problem 6.*
(a) Let X1 , X2 , . . . , Xn be independent identically distributed random variables and
let Y = X1 + X2 + · · · + Xn . Show that
E[X1 | Y] = Y/n.
(b) Let X and W be independent zero-mean normal random variables, with posi-
tive integer variances k and m, respectively. Use the result of part (a) to find
E[X | X + W ], and verify that this agrees with the conditional expectation for-
mula for jointly normal random variables given in the text. Hint: Think of X
and W as sums of independent random variables.
Solution. (a) By symmetry, we see that E[Xi | Y ] is the same for all i. Furthermore,
E[X1 + · · · + Xn | Y ] = E[Y | Y ] = Y.
Since, by symmetry, each term on the left-hand side equals E[X1 | Y], it follows that E[X1 | Y] = Y/n.
(b) We can think of X as the sum of k independent standard normal random variables X1, . . . , Xk, and of W as the sum of m independent standard normal random variables Xk+1, . . . , Xk+m. We identify Y with X + W and use the result from part (a) to obtain
E[Xi | X + W] = (X + W)/(k + m), for every i.
Thus,
E[X | X + W] = E[X1 + · · · + Xk | X + W] = (k/(k + m)) (X + W).
This formula agrees with the formula derived in the text because
ρ(X, X + W) (σX / σX+W) = cov(X, X + W) / σX+W² = k/(k + m).
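The formula can be checked by simulation. The sketch below (assuming numpy; the variances k and m are illustrative) estimates the coefficient of the best linear predictor of X given X + W and compares it with k/(k + m).

```python
# Monte Carlo check of Problem 6(b): E[X | X + W] = (k/(k + m)) (X + W).
import numpy as np

rng = np.random.default_rng(6)
k, m = 3, 5                                  # illustrative positive integer variances
X = np.sqrt(k) * rng.standard_normal(1_000_000)
W = np.sqrt(m) * rng.standard_normal(1_000_000)
S = X + W

slope = np.mean(X * S) / np.mean(S * S)      # coefficient of the best linear predictor
print(slope, k / (k + m))                    # the two values should be close
```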