
LESSON 1: RANDOM AND MEAN VECTORS

1.0 Introduction
In this Lesson we introduce the concept of a random vector as a vector whose
components are random variables, and of a mean vector as the corresponding vector
of constants obtained by taking the expectation of the random vector.
1.1 Objectives
By the end of this Lesson you should be able to:
• Define a random vector and a mean vector and distinguish between the two.
• Define and evaluate the variance-covariance matrix of a random vector.
• Evaluate the mean and variance of a linear function of a random vector.
• Define and evaluate the expectation of a quadratic form.

1.2 Random Vector, Mean Vector and Variance-Covariance Matrix


Let $X_1, X_2, \ldots, X_p$ be random variables with means $E(X_i) = \mu_i$, $i = 1, 2, \ldots, p$, variances $\mathrm{var}(X_i) = \sigma_i^2$, $i = 1, 2, \ldots, p$, and covariances $\mathrm{cov}(X_i, X_j) = \sigma_{ij}$, $i \neq j = 1, 2, \ldots, p$.

Then the vector $X = (X_1, X_2, \ldots, X_p)'$ is a $p \times 1$ random vector and
$$ E(X) = \big(E(X_1), E(X_2), \ldots, E(X_p)\big)' = (\mu_1, \mu_2, \ldots, \mu_p)' = \mu $$
is the mean vector.

Note that the components of $\mu$ are constants.

The variance-covariance matrix of $X$ is given by


$$ \mathrm{var}(X) = E\{(X - \mu)(X - \mu)'\} = \begin{pmatrix} \mathrm{var}(X_1) & \mathrm{cov}(X_1, X_2) & \cdots & \mathrm{cov}(X_1, X_p) \\ \mathrm{cov}(X_2, X_1) & \mathrm{var}(X_2) & \cdots & \mathrm{cov}(X_2, X_p) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(X_p, X_1) & \mathrm{cov}(X_p, X_2) & \cdots & \mathrm{var}(X_p) \end{pmatrix} $$

$$ = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{bmatrix} = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_p^2 \end{bmatrix} = \Sigma $$

Note that $\sigma_{ii} = \sigma_i^2$ and $\sigma_{ij} = \sigma_{ji}$; that is, $\Sigma$ is symmetric.
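To make these definitions concrete, here is a minimal numerical sketch in Python (assuming NumPy is available). The $\Sigma$ is borrowed from Question 3 of the revision exercise at the end of this Lesson; the mean vector is hypothetical.

```python
import numpy as np

# A hypothetical mean vector and a variance-covariance matrix for p = 3
# (Sigma is taken from Question 3 of the revision exercise below).
mu = np.array([1.0, 0.0, 2.0])              # mean vector: constants
Sigma = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 0.0],
                  [1.0, 0.0, 1.0]])

# Diagonal entries are the variances sigma_i^2;
# off-diagonals are the covariances sigma_ij, with sigma_ij = sigma_ji.
assert np.allclose(Sigma, Sigma.T)          # Sigma is symmetric

# Check against data: estimate mu and Sigma from simulated samples.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=100_000)
print(X.mean(axis=0))                       # approximately mu
print(np.cov(X, rowvar=False))              # approximately Sigma
```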


Defining the correlation coefficient between the random variables $X_i$ and $X_j$ as
$$ r_{ij} = \frac{\sigma_{ij}}{\sigma_i \sigma_j}, $$
we can form the matrix of correlation coefficients. What is the value of $r_{ii}$?
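As a sketch (assuming NumPy, and reusing the hypothetical $\Sigma$ above), the correlation matrix can be formed by dividing each $\sigma_{ij}$ by $\sigma_i \sigma_j$:

```python
import numpy as np

# Form the correlation matrix R = ((r_ij)) from Sigma via
# r_ij = sigma_ij / (sigma_i * sigma_j); same Sigma as above.
Sigma = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 0.0],
                  [1.0, 0.0, 1.0]])
sd = np.sqrt(np.diag(Sigma))     # standard deviations sigma_i
R = Sigma / np.outer(sd, sd)
print(R)
print(np.diag(R))                # inspect the diagonal: what is r_ii?
```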

Next let $Y = a_1 X_1 + a_2 X_2 + \cdots + a_p X_p = a'X$, where $a$ is a vector of constants.


Then, from an earlier unit in second year, we have

$$ E(Y) = a_1 E(X_1) + a_2 E(X_2) + \cdots + a_p E(X_p) = a' E(X) = a' \mu $$
and
$$ \mathrm{var}(Y) = \mathrm{var}(a_1 X_1 + a_2 X_2 + \cdots + a_p X_p) = a_1^2 \,\mathrm{var}(X_1) + a_2^2 \,\mathrm{var}(X_2) + \cdots + a_p^2 \,\mathrm{var}(X_p) + \sum_{i \neq j} a_i a_j \,\mathrm{cov}(X_i, X_j) = a' \Sigma a $$

Verify this result for the case $p = 2$.
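The algebraic verification is left to you; as a numerical cross-check, here is a Monte Carlo sketch for $p = 2$ with hypothetical values of $a$, $\mu$ and $\Sigma$ (assumes NumPy):

```python
import numpy as np

# Monte Carlo check of E(Y) = a'mu and var(Y) = a' Sigma a for p = 2,
# using hypothetical a, mu and Sigma.
a = np.array([1.0, -2.0])
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ a                            # Y = a'X for each sampled X

print(Y.mean(), a @ mu)              # E(Y) vs a'mu
print(Y.var(), a @ Sigma @ a)        # var(Y) vs a'Sigma a
```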

Next let us write


$$ Y_1 = a_{11} X_1 + a_{12} X_2 + \cdots + a_{1p} X_p $$
$$ Y_2 = a_{21} X_1 + a_{22} X_2 + \cdots + a_{2p} X_p $$
$$ \vdots $$
$$ Y_q = a_{q1} X_1 + a_{q2} X_2 + \cdots + a_{qp} X_p $$
where the $a_{ij}$'s are constants.

This can be expressed in vector-matrix notation as
$$ Y = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{q1} & a_{q2} & \cdots & a_{qp} \end{bmatrix} \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_p \end{pmatrix} = AX $$
Then,

$$ E(Y) = \begin{pmatrix} a_{11} E(X_1) + a_{12} E(X_2) + \cdots + a_{1p} E(X_p) \\ a_{21} E(X_1) + a_{22} E(X_2) + \cdots + a_{2p} E(X_p) \\ \vdots \\ a_{q1} E(X_1) + a_{q2} E(X_2) + \cdots + a_{qp} E(X_p) \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{q1} & a_{q2} & \cdots & a_{qp} \end{pmatrix} \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_p \end{pmatrix} = A\mu $$
And

$$
\begin{aligned}
\mathrm{var}(Y) &= E\{(Y - E(Y))(Y - E(Y))'\} \\
&= E\{(AX - A\mu)(AX - A\mu)'\} \\
&= E\{A(X - \mu)(X - \mu)' A'\} \\
&= A\, E\{(X - \mu)(X - \mu)'\}\, A' \\
&= A\,\mathrm{var}(X)\, A' \\
&= A \Sigma A'
\end{aligned}
$$

Verify this result for the case $p = 2$ and $q = 2$.
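Again, a Monte Carlo sketch can complement the algebraic verification; this one uses a hypothetical $2 \times 2$ matrix $A$ together with the same hypothetical $\mu$ and $\Sigma$ as before (assumes NumPy):

```python
import numpy as np

# Monte Carlo check of E(Y) = A mu and var(Y) = A Sigma A'
# for p = q = 2, using a hypothetical matrix A.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T                          # rows are Y = AX for each sample

print(Y.mean(axis=0), A @ mu)        # E(Y) vs A mu
print(np.cov(Y, rowvar=False))       # sample var(Y) ...
print(A @ Sigma @ A.T)               # ... vs A Sigma A'
```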

1.3 Quadratic Forms

Let $X = (X_1, X_2, \ldots, X_p)'$ be a random vector with mean vector $\mu = (\mu_1, \mu_2, \ldots, \mu_p)'$ and variance-covariance matrix $\Sigma = ((\sigma_{ij}))$, $i, j = 1, 2, \ldots, p$.

Let $Q = X'AX$, where $A$ is a symmetric matrix of constants. $Q$ is called a quadratic form in $X$. We need to find $E\{Q\} = E\{X'AX\}$.


Let $A = ((a_{ij}))$, $i, j = 1, 2, \ldots, p$. Then
$$ X'AX = \sum_{i=1}^{p} a_{ii} X_i^2 + \mathop{\sum\sum}_{i \neq j} a_{ij} X_i X_j, $$
so that
$$ E(X'AX) = \sum_{i=1}^{p} a_{ii} E(X_i^2) + \mathop{\sum\sum}_{i \neq j} a_{ij} E(X_i X_j). $$

But $\mathrm{var}(X_i) = E(X_i^2) - \{E(X_i)\}^2$, that is, $\sigma_{ii} = E(X_i^2) - \mu_i^2$.

Then
$$ E(X_i^2) = \sigma_{ii} + \mu_i^2. $$

Similarly,
$$ \sigma_{ij} = E(X_i X_j) - [E(X_i)][E(X_j)] = E(X_i X_j) - \mu_i \mu_j, $$
so that
$$ E(X_i X_j) = \sigma_{ij} + \mu_i \mu_j. $$

Thus,

$$ E(X'AX) = \sum_{i=1}^{p} a_{ii} (\sigma_{ii} + \mu_i^2) + \mathop{\sum\sum}_{i \neq j} a_{ij} (\sigma_{ij} + \mu_i \mu_j) $$

$$ = \left\{ \sum_{i=1}^{p} a_{ii}\sigma_{ii} + \mathop{\sum\sum}_{i \neq j} a_{ij}\sigma_{ij} \right\} + \left\{ \sum_{i=1}^{p} a_{ii}\mu_i^2 + \mathop{\sum\sum}_{i \neq j} a_{ij}\mu_i\mu_j \right\} $$
$$ = \sum_{i=1}^{p} \sum_{j=1}^{p} a_{ij}\sigma_{ji} + \mu' A \mu, \quad \text{since } \sigma_{ij} = \sigma_{ji} $$
$$ = \mathrm{trace}(A\Sigma) + \mu' A \mu. $$
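This identity can also be checked numerically; the following Monte Carlo sketch uses hypothetical values of $A$, $\mu$ and $\Sigma$ (assumes NumPy):

```python
import numpy as np

# Monte Carlo check of E(X'AX) = trace(A Sigma) + mu' A mu,
# using a hypothetical symmetric A, mu and Sigma.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # symmetric matrix of constants
mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

rng = np.random.default_rng(3)
X = rng.multivariate_normal(mu, Sigma, size=500_000)
Q = np.einsum('ni,ij,nj->n', X, A, X)      # X'AX for each sample

print(Q.mean())                             # simulated E(X'AX)
print(np.trace(A @ Sigma) + mu @ A @ mu)    # trace(A Sigma) + mu'A mu
```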

Special case

If $\Sigma = \sigma^2 I_p = \mathrm{diag}(\sigma^2, \sigma^2, \ldots, \sigma^2)$, then
$$ \mathrm{trace}(A\Sigma) = \sigma^2\,\mathrm{trace}(A) $$
and
$$ E(X'AX) = \sigma^2\,\mathrm{trace}(A) + \mu' A \mu. $$
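A short sketch of this special case, reusing the hypothetical $A$ and $\mu$ from the check above with a hypothetical $\sigma^2$ (assumes NumPy):

```python
import numpy as np

# Special case Sigma = sigma^2 I_p: trace(A Sigma) = sigma^2 trace(A),
# so E(X'AX) = sigma^2 trace(A) + mu' A mu.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
mu = np.array([1.0, -1.0])
sigma2 = 4.0
Sigma = sigma2 * np.eye(2)

print(np.trace(A @ Sigma), sigma2 * np.trace(A))   # the two traces agree
print(sigma2 * np.trace(A) + mu @ A @ mu)          # E(X'AX) in this case
```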

If we write $Y = X - a$, where $a$ is a vector of constants, then $\mathrm{var}(Y) = \mathrm{var}(X) = \Sigma$ and $\mu_Y = E(Y) = \mu - a$, so we get
$$ E[(X - a)' A (X - a)] = E(Y'AY) = \mathrm{trace}[A\,\mathrm{var}(Y)] + \mu_Y' A \mu_Y = \mathrm{trace}(A\Sigma) + (\mu - a)' A (\mu - a) $$

What happens when $a = \mu$?

Example:

Let $X_1, X_2, \ldots, X_n$ be i.i.d. with $E(X_i) = \mu$ and $\mathrm{var}(X_i) = \sigma^2$. Find $E(Q)$ where
$$ Q = (X_1 - X_2)^2 + (X_2 - X_3)^2 + \cdots + (X_{n-1} - X_n)^2. $$
Hence find an unbiased estimator of $\sigma^2$.

Solution:

Write $Y_1 = X_1 - X_2$, $Y_2 = X_2 - X_3$, \ldots, $Y_{n-1} = X_{n-1} - X_n$, so that
$$ Q = Y'AY \quad \text{where } A = I_{n-1}. $$

Then,
$$ E(Q) = E\{Y'AY\} = \mathrm{trace}(A\Sigma_Y) + [E(Y)]' A [E(Y)]. $$


But
$$ E(Y_i) = E(X_i) - E(X_{i+1}) = 0, $$
thus $E(Y) = 0$. Next,
$$ \mathrm{var}(Y_i) = \mathrm{var}(X_i - X_{i+1}) = 2\sigma^2 $$
and
$$ \mathrm{cov}(Y_i, Y_j) = \mathrm{cov}(X_i - X_{i+1}, X_j - X_{j+1}) = \begin{cases} -\sigma^2, & \text{if } j = i + 1 \\ 0, & \text{if } j > i + 1 \end{cases} $$

Thus
$$ \mathrm{var}(Y) = \begin{bmatrix} 2\sigma^2 & -\sigma^2 & 0 & \cdots & 0 \\ -\sigma^2 & 2\sigma^2 & -\sigma^2 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 2\sigma^2 \end{bmatrix}_{(n-1) \times (n-1)} = \Sigma_Y $$

Therefore

$$ E(Q) = \mathrm{trace}(I_{n-1}\Sigma_Y) + 0' I_{n-1} 0 = \mathrm{trace}(\Sigma_Y) = 2(n-1)\sigma^2 $$
We see that $E\left\{ \dfrac{Q}{2(n-1)} \right\} = \sigma^2$. That is, $\dfrac{Q}{2(n-1)}$ is an unbiased estimator of $\sigma^2$.
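The conclusion can be checked by simulation; here is a sketch assuming NumPy, taking the $X_i$ to be normal for concreteness (the result itself only uses their common mean and variance):

```python
import numpy as np

# Simulate Q = (X_1 - X_2)^2 + ... + (X_{n-1} - X_n)^2 for i.i.d. X_i
# and check E(Q) = 2(n-1) sigma^2, i.e. Q / (2(n-1)) is unbiased.
rng = np.random.default_rng(4)
n, mu, sigma = 10, 5.0, 2.0
reps = 200_000

X = rng.normal(mu, sigma, size=(reps, n))
Q = np.sum(np.diff(X, axis=1) ** 2, axis=1)    # successive differences squared

print(Q.mean(), 2 * (n - 1) * sigma**2)        # E(Q) vs 2(n-1)sigma^2
print((Q / (2 * (n - 1))).mean(), sigma**2)    # estimator mean vs sigma^2
```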

1.4 Revision Exercise

1. Let $X' = (X_1, X_2, \ldots, X_p)$ be a random vector. Define the following terms:


i) mean vector
ii) variance-covariance matrix
iii) correlation matrix

2. Let $(X_1, X_2, X_3)'$ be a 3-dimensional random vector with expectation $\mu = (2, 5, 3)'$ and variance-covariance matrix
$$ \Sigma = \begin{pmatrix} 5 & 2 & 3 \\ 2 & 3 & 0 \\ 3 & 0 & 2 \end{pmatrix} $$
(a) Find the expectation and variance of $Y = X_1 - 2X_2 + X_3$.
(b) If $Y_1 = X_1 + X_2$ and $Y_2 = X_1 + X_2 + X_3$ are linear functions of $X_1$, $X_2$ and $X_3$, find the expectation and variance-covariance matrix of the random vector $Y = (Y_1, Y_2)'$.
(c) Using the result of part (b) or otherwise, find the correlation coefficient between $Y_1$ and $Y_2$.

3. Let $X = (X_1, X_2, X_3)'$ be a 3-dimensional random vector and $Y = (Y_1, Y_2)'$ be a 2-dimensional random vector, where $Y_1 = X_1 + X_3$ and $Y_2 = X_1 + X_2 + X_3$. Let the variance-covariance matrix of the random vector $X$ be given by
$$ \Sigma = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 3 & 0 \\ 1 & 0 & 1 \end{bmatrix} $$
Obtain the variance-covariance matrix of $Y$; hence find the correlation coefficient between $Y_1$ and $Y_2$.

4. Let $X_1, X_2, \ldots, X_n$ be jointly distributed random variables with $E(X_i) = \mu$, $\mathrm{var}(X_i) = \sigma^2$ and $\mathrm{cov}(X_i, X_j) = \rho\sigma^2$, for $i \neq j$; $i, j = 1, 2, \ldots, n$.
Represent $Q = \sum_{i=1}^{n} (X_i - \bar{X})^2$ as a quadratic form $X'AX$, where $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$.
Find the symmetric matrix $A$. Hence find $E(Q)$ and give an unbiased estimator of $(1 - \rho)\sigma^2$.

Summary

In this Lesson we have considered random vectors as vectors whose components are
random variables. In particular we have:
• Obtained the mean vector and the variance-covariance matrix of a random vector.
• Derived the mean vector and variance-covariance matrix of a vector formed as a linear function of the components of a random vector.
• Derived the expectation of a quadratic form formed from a random vector.

Further Reading

1. Manly, B. F. J. (2004). Multivariate Statistical Methods: A Primer, 3rd Edition. Chapman & Hall/CRC.
2. Morrison, D. F. (2004). Multivariate Statistical Methods, 4th Edition. McGraw-Hill.
3. Krzanowski, W. J. (2000). Principles of Multivariate Analysis, 2nd Edition. Oxford University Press.
4. Chatfield, C. and Collins, A. J. (1980). Introduction to Multivariate Analysis, 1st Edition. Chapman and Hall.
