SST304 Lesson 1
1.0 Introduction
In this Lesson we introduce the concept of a random vector as a vector whose
components are random variables, and of a mean vector as the corresponding vector of
constants obtained by taking the expectation of the random vector.
1.1 Objectives
By the end of this Lesson you should be able to:
• Define a random vector and a mean vector and distinguish between the two.
• Define and evaluate the variance-covariance matrix of a random vector.
• Evaluate the mean and variance of a linear function of a random vector.
• Define and evaluate the expectation of a quadratic form.
1.2 The Mean Vector and Variance-Covariance Matrix
Let $X = (X_1, X_2, \ldots, X_p)'$ be a random vector, that is, a vector whose components are random variables. Its expectation, taken component by component,
\[
E(X) = (E(X_1), E(X_2), \ldots, E(X_p))' = (\mu_1, \mu_2, \ldots, \mu_p)' = \mu,
\]
is the mean vector.
The variance-covariance matrix of $X$ is the $p \times p$ matrix of constants
\[
\operatorname{var}(X) = E\{(X-\mu)(X-\mu)'\}
= \begin{pmatrix}
\sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\
\sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp}
\end{pmatrix}
= \begin{pmatrix}
\sigma_{1}^{2} & \sigma_{12} & \cdots & \sigma_{1p} \\
\sigma_{21} & \sigma_{2}^{2} & \cdots & \sigma_{2p} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{p}^{2}
\end{pmatrix}
= \Sigma,
\]
where $\sigma_{ij} = \operatorname{cov}(X_i, X_j)$ for $i \neq j$, and $\sigma_{ii} = \sigma_i^2 = \operatorname{var}(X_i)$.
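As a quick numerical illustration, consider the following minimal sketch in Python with numpy, using arbitrary, made-up values of $\mu$ and $\Sigma$ (any valid mean vector and positive-definite matrix would do). The sample mean vector and sample covariance matrix of many simulated draws should approximate $\mu$ and $\Sigma$:

```python
import numpy as np

# Hypothetical mean vector and variance-covariance matrix (p = 3).
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.0],
                  [0.5, 0.0, 2.0]])

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=100_000)  # each row is a draw of X'

print(X.mean(axis=0))            # approximates the mean vector mu
print(np.cov(X, rowvar=False))   # approximates Sigma
```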
Now consider a linear function of the components of $X$, say $Y = a_1 X_1 + a_2 X_2 + \cdots + a_p X_p = a'X$, where $a = (a_1, a_2, \ldots, a_p)'$ is a vector of constants. Then
\[
E(Y) = a_1 E(X_1) + a_2 E(X_2) + \cdots + a_p E(X_p) = a'E(X) = a'\mu,
\]
and
\[
\begin{aligned}
\operatorname{var}(Y) &= \operatorname{var}(a_1 X_1 + a_2 X_2 + \cdots + a_p X_p) \\
&= a_1^2 \operatorname{var}(X_1) + a_2^2 \operatorname{var}(X_2) + \cdots + a_p^2 \operatorname{var}(X_p) + \sum_{i \neq j} a_i a_j \operatorname{cov}(X_i, X_j) \\
&= a'\Sigma a.
\end{aligned}
\]
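Continuing the numerical sketch above (same assumed `mu`, `Sigma`, and simulated draws `X`, plus an arbitrary coefficient vector), the identities $E(Y) = a'\mu$ and $\operatorname{var}(Y) = a'\Sigma a$ can be checked directly:

```python
a = np.array([1.0, -2.0, 1.0])   # arbitrary coefficient vector

# Exact values from the formulas.
print(a @ mu)           # E(Y) = a' mu
print(a @ Sigma @ a)    # var(Y) = a' Sigma a

# Monte Carlo check using the simulated draws of X.
Y = X @ a
print(Y.mean(), Y.var(ddof=1))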
More generally, consider $q$ linear functions of the components of $X$:
\[
\begin{aligned}
Y_1 &= a_{11} X_1 + a_{12} X_2 + \cdots + a_{1p} X_p \\
Y_2 &= a_{21} X_1 + a_{22} X_2 + \cdots + a_{2p} X_p \\
&\;\;\vdots \\
Y_q &= a_{q1} X_1 + a_{q2} X_2 + \cdots + a_{qp} X_p.
\end{aligned}
\]
This system can be expressed in vector-matrix notation as
\[
Y = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1p} \\
a_{21} & a_{22} & \cdots & a_{2p} \\
\vdots & \vdots & & \vdots \\
a_{q1} & a_{q2} & \cdots & a_{qp}
\end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_p \end{pmatrix}
= AX.
\]
Then,
\[
E(Y) = \begin{pmatrix}
a_{11} E(X_1) + a_{12} E(X_2) + \cdots + a_{1p} E(X_p) \\
a_{21} E(X_1) + a_{22} E(X_2) + \cdots + a_{2p} E(X_p) \\
\vdots \\
a_{q1} E(X_1) + a_{q2} E(X_2) + \cdots + a_{qp} E(X_p)
\end{pmatrix}
= \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1p} \\
a_{21} & a_{22} & \cdots & a_{2p} \\
\vdots & \vdots & & \vdots \\
a_{q1} & a_{q2} & \cdots & a_{qp}
\end{pmatrix}
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_p \end{pmatrix}
= A\mu.
\]
Similarly,
\[
\begin{aligned}
\operatorname{var}(Y) &= E\{(Y - E(Y))(Y - E(Y))'\} \\
&= E\{(AX - A\mu)(AX - A\mu)'\} \\
&= E\{A(X-\mu)(X-\mu)'A'\} \\
&= A\,E\{(X-\mu)(X-\mu)'\}\,A' \\
&= A \operatorname{var}(X)\, A' \\
&= A \Sigma A'.
\end{aligned}
\]
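The same numerical sketch extends to the vector case. With the assumed `mu`, `Sigma`, and simulated `X` from before, and an arbitrary $2 \times 3$ matrix $A$, the formulas $E(Y) = A\mu$ and $\operatorname{var}(Y) = A\Sigma A'$ can be checked as follows:

```python
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])   # arbitrary 2x3 coefficient matrix

print(A @ mu)            # E(Y) = A mu
print(A @ Sigma @ A.T)   # var(Y) = A Sigma A'

# Monte Carlo check: each row of Y below is a draw of Y' = (AX)'.
Y = X @ A.T
print(Y.mean(axis=0))
print(np.cov(Y, rowvar=False))
```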
1.3 Quadratic Forms
Let $A = (a_{ij})$ be a $p \times p$ symmetric matrix of constants. Then
\[
X'AX = \sum_{i=1}^{p} a_{ii} X_i^2 + \sum_{i \neq j} a_{ij} X_i X_j
\]
is a quadratic form in $X$.
So that
\[
E(X'AX) = \sum_{i=1}^{p} a_{ii} E(X_i^2) + \sum_{i \neq j} a_{ij} E(X_i X_j).
\]
But $\operatorname{var}(X_i) = E(X_i^2) - \{E(X_i)\}^2$, that is, $\sigma_{ii} = E(X_i^2) - \mu_i^2$. Then
\[
E(X_i^2) = \sigma_{ii} + \mu_i^2.
\]
Similarly,
\[
\sigma_{ij} = E(X_i X_j) - [E(X_i)][E(X_j)] = E(X_i X_j) - \mu_i \mu_j,
\]
so that
\[
E(X_i X_j) = \sigma_{ij} + \mu_i \mu_j.
\]
Thus,
\[
\begin{aligned}
E(X'AX) &= \sum_{i=1}^{p} a_{ii}(\sigma_{ii} + \mu_i^2) + \sum_{i \neq j} a_{ij}(\sigma_{ij} + \mu_i \mu_j) \\
&= \left\{ \sum_{i=1}^{p} a_{ii}\sigma_{ii} + \sum_{i \neq j} a_{ij}\sigma_{ij} \right\}
 + \left\{ \sum_{i=1}^{p} a_{ii}\mu_i^2 + \sum_{i \neq j} a_{ij}\mu_i \mu_j \right\} \\
&= \sum_{i=1}^{p} \sum_{j=1}^{p} a_{ij}\sigma_{ji} + \mu'A\mu, \quad \text{since } \sigma_{ij} = \sigma_{ji}, \\
&= \operatorname{trace}(A\Sigma) + \mu'A\mu.
\end{aligned}
\]
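This identity can also be checked numerically. A minimal sketch, reusing the assumed `mu`, `Sigma`, and simulated `X` from above, and redefining `A` here as an arbitrary symmetric $3 \times 3$ matrix:

```python
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])   # arbitrary symmetric 3x3 matrix

# Exact value from the formula: trace(A Sigma) + mu' A mu.
print(np.trace(A @ Sigma) + mu @ A @ mu)

# Monte Carlo estimate of E(X'AX): one quadratic form per simulated row.
print(np.einsum('ni,ij,nj->n', X, A, X).mean())
```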
Special case: if $\operatorname{var}(X) = \Sigma = \sigma^2 I$, then $\operatorname{trace}(A\Sigma) = \sigma^2 \operatorname{trace}(A)$, so
\[
E(X'AX) = \sigma^2 \operatorname{trace}(A) + \mu'A\mu.
\]
More generally, putting $Y = X - a$ for a vector of constants $a$, so that $\mu_Y = E(Y) = \mu - a$ and $\operatorname{var}(Y) = \Sigma$, gives
\[
E[(X-a)'A(X-a)] = E(Y'AY) = \operatorname{trace}[A \operatorname{var}(Y)] + \mu_Y' A \mu_Y = \operatorname{trace}(A\Sigma) + (\mu - a)'A(\mu - a).
\]
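A quick numerical check of the shifted form, continuing with the same assumed `mu`, `Sigma`, `A`, and simulated `X`, and an arbitrary shift vector:

```python
shift = np.array([0.5, -1.0, 2.0])   # arbitrary shift vector a

# Exact value: trace(A Sigma) + (mu - a)' A (mu - a).
d = mu - shift
print(np.trace(A @ Sigma) + d @ A @ d)

# Monte Carlo estimate of E[(X - a)' A (X - a)].
Z = X - shift
print(np.einsum('ni,ij,nj->n', Z, A, Z).mean())
```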
Example: Let $X_1, X_2, \ldots, X_n$ be uncorrelated random variables with a common mean and a common variance $\sigma^2$. Find the expectation of
\[
Q = (X_1 - X_2)^2 + (X_2 - X_3)^2 + \cdots + (X_{n-1} - X_n)^2.
\]
Solution:
Write $Y_1 = X_1 - X_2$, $Y_2 = X_2 - X_3$, \ldots, $Y_{n-1} = X_{n-1} - X_n$, so that $Q = Y'AY$ with $A = I_{n-1}$, where $Y = (Y_1, Y_2, \ldots, Y_{n-1})'$.
Then $E(Q) = E\{Y'AY\} = \operatorname{trace}[A\operatorname{var}(Y)] + E(Y)'A\,E(Y)$. Since the $X_i$ have a common mean, $E(Y_i) = E(X_i) - E(X_{i+1}) = 0$, and thus $E(Y) = 0$. Next,
\[
\operatorname{var}(Y_i) = \operatorname{var}(X_i - X_{i+1}) = \operatorname{var}(X_i) + \operatorname{var}(X_{i+1}) = 2\sigma^2,
\]
and, for $j > i$,
\[
\operatorname{cov}(Y_i, Y_j) = \operatorname{cov}(X_i - X_{i+1},\, X_j - X_{j+1}) =
\begin{cases}
-\sigma^2, & \text{if } j = i + 1, \\
0, & \text{if } j > i + 1.
\end{cases}
\]
Thus
\[
\operatorname{var}(Y) = \begin{pmatrix}
2\sigma^2 & -\sigma^2 & 0 & \cdots & 0 \\
-\sigma^2 & 2\sigma^2 & -\sigma^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 2\sigma^2
\end{pmatrix}_{(n-1) \times (n-1)} = \Sigma_Y.
\]
Therefore
\[
E(Q) = \operatorname{trace}(I_{n-1}\Sigma_Y) + 0'I_{n-1}0 = \operatorname{trace}(\Sigma_Y) = 2(n-1)\sigma^2.
\]
We see that $E\left\{\dfrac{Q}{2(n-1)}\right\} = \sigma^2$; that is, $\dfrac{Q}{2(n-1)}$ is an unbiased estimator of $\sigma^2$.
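A simulation sketch of this result, assuming for illustration i.i.d. normal $X_i$ with hypothetical values $n = 20$, mean $5$, and $\sigma^2 = 4$ (the result itself only requires uncorrelated $X_i$ with a common mean and variance):

```python
import numpy as np

n, mean_x, sigma2 = 20, 5.0, 4.0
rng = np.random.default_rng(1)
samples = rng.normal(mean_x, np.sqrt(sigma2), size=(50_000, n))

# Q = sum of squared successive differences, one value per simulated sample.
Q = (np.diff(samples, axis=1) ** 2).sum(axis=1)
print(Q.mean() / (2 * (n - 1)))   # should be close to sigma2 = 4
```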
1.4 Revision Exercise
Let the random vector $X = (X_1, X_2, X_3)'$ have mean vector $\mu = (\mu_1, \mu_2, \mu_3)'$ and variance-covariance matrix
\[
\Sigma = \begin{pmatrix} 5 & 2 & 3 \\ 2 & 3 & 0 \\ 3 & 0 & 2 \end{pmatrix}.
\]
(a) Find the expectation and variance of $Y = X_1 - 2X_2 + X_3$.
(b) If $Y_1 = X_1 + X_2$ and $Y_2 = X_1 + X_2 + X_3$ are linear functions of $X_1$, $X_2$ and $X_3$, find the expectation and variance-covariance matrix of the random vector $Y = (Y_1, Y_2)'$.
(c) Using the result of part (b), or otherwise, find the correlation coefficient between $Y_1$ and $Y_2$.
Now let $X_1, X_2, \ldots, X_n$ be random variables with a common mean, a common variance $\sigma^2$, and a common correlation $\rho$ between every pair. Represent
\[
Q = \sum_{i=1}^{n} (X_i - \bar{X})^2, \quad \text{where } \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i,
\]
as a quadratic form $X'AX$. Find the symmetric matrix $A$. Hence find $E(Q)$ and give an unbiased estimator of $(1-\rho)\sigma^2$.
Summary
In this Lesson we have considered random vectors as vectors whose components are
random variables. In particular we have:
• Obtained the mean vector and the variance-covariance matrix of a random vector.
• Derived the mean vector and variance-covariance matrix of a vector formed as a linear function of the components of a random vector.
• Derived the expectation of a quadratic form formed from a random vector.
Further Reading