Problem Set 1 Full Solutions


Queen Mary, University of London

Liudas Giraitis
CB301, [email protected]

Time Series Analysis


Solutions: Problem Set 1

Problem 1. State, with explanation, which of the following time series
{Xt : t = . . . , −2, −1, 0, 1, 2, . . .} are covariance stationary:

(i) Xt ’s are independent and identically distributed random variables, each
with a standard normal distribution.

(ii) Xt = At + εt , where A is a real constant and the εt ’s are independent and
identically distributed random variables.

(iii) Xt = tεt , where the εt ’s are independent and identically distributed N (0, 1)
random variables.

(iv) Xt = Y−t , where {Yt } is second-order stationary.

Solution. (i) Yes. The mean and variance are finite and constant, and
cov(Xt , Xs ) = 0 for t ≠ s, so the autocovariance depends only on the lag.

(ii) No (provided A ≠ 0): E[Xt ] = At + E[εt ], which depends on t.

(iii) No. For all t, E[Xt ] = E[tεt ] = 0 and Var(Xt ) = E(Xt − EXt )² =
E[Xt²] = E[(tεt )²] = t² E[εt²] = t², which clearly depends on t.

(iv) Yes. E[Xt ] = E[Y−t ] = µY , which is a finite constant. A similar argument
applies to var(Xt ) = var(Y−t ) = σY². Finally, Cov(Xt , Xt+k ) = Cov(Y−t , Y−t−k ) =
γY (−k) = γY (k), which is a function of k only.
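
For (iii), a short simulation makes the non-stationarity visible; this is a minimal Python sketch (assuming NumPy is available) showing the sample variance of Xt = tεt growing like t²:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps, T = 100_000, 5
eps = rng.standard_normal((n_reps, T))  # iid N(0, 1) innovations
t = np.arange(1, T + 1)
X = t * eps                             # X_t = t * eps_t, one column per t

# Sample variances are close to t^2 = [1, 4, 9, 16, 25]: Var(X_t) depends on t.
print(np.var(X, axis=0))
```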

Problem 2. Explain why none of the following functions can be the
autocorrelation function of a second-order stationary time series:

(a) ρ0 = 1, ρ1 = 0.5, ρ−1 = −0.5, and ρk = 0 for k = ±2, ±3, . . . .

(b) ρk = 1.1^|k| , k = 0, ±1, ±2, . . ..

(c) ρ0 = 1, ρ1 = ρ−1 = 0.5, ρ2 = ρ−2 = −2.5 and all the other ρk ’s are zero.

Solution. (a) Because ρ−1 ≠ ρ1 , while an autocorrelation function must
satisfy ρk = ρ−k .

(b) We know that an autocorrelation function satisfies |ρk | ≤ 1. Since
ρ1 = 1.1 > 1, the given function cannot be an autocorrelation function.

(c) Because |ρ2 | = 2.5 > 1.
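
Beyond these pointwise checks, a valid autocorrelation function must also make every matrix [ρ|i−j|] positive semidefinite. A quick numerical illustration for (c) (a Python sketch, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import toeplitz

rho = np.array([1.0, 0.5, -2.5, 0.0, 0.0])  # rho_0, ..., rho_4 from part (c)
R = toeplitz(rho)                           # R[i, j] = rho_{|i - j|}

# A negative eigenvalue means R is not positive semidefinite,
# so these rho_k cannot be an autocorrelation function.
print(np.linalg.eigvalsh(R).min())
```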

Problem 3. The classical measure of non-Gaussianity is kurtosis. The
kurtosis of a random variable X is defined by

K(X) = E(X − µX )⁴ / σX⁴,

where µX = E[X] and σX² = Var(X). For a Gaussian variable X, the kurtosis is
K(X) = 3.
a) Let

Y = aX + b,

where a and b are real numbers, a ≠ 0. Show that

K(X) = K(Y ).

b) If X1 and X2 are two independent random variables with zero mean,
show that

K(X1 X2 ) = K(X1 )K(X2 ).

c) If X and Z are two independent Gaussian variables, what is the kurtosis
of the variable X + Z?

Solution. a) We have E[Y ] = E[aX + b] = aE[X] + b = aµX + b. By definition,

σY² = Var(Y ) = E(Y − E[Y ])² = E[(aX + b − (aµX + b))²] = a²E[(X − µX )²] = a²Var(X) = a²σX².

Similarly,

E[(Y − E[Y ])⁴] = E[(aX + b − (aµX + b))⁴] = a⁴E[(X − µX )⁴].

Therefore

K(Y ) = E(Y − µY )⁴ / σY⁴ = a⁴E(X − µX )⁴ / (a⁴σX⁴) = K(X).

b) By definition,

K(X1 X2 ) = E(X1 X2 − E[X1 X2 ])⁴ / σX1X2⁴.

Since EX1 = EX2 = 0 and X1 , X2 are independent,

E[X1 X2 ] = E[X1 ]E[X2 ] = 0,  σX1² = E(X1 − µX1 )² = E[X1²],  σX2² = E[X2²],

σX1X2² = E[(X1 X2 )²] = E[X1²]E[X2²] = σX1² σX2²,

E[(X1 X2 )⁴] = E[X1⁴]E[X2⁴] = E[(X1 − µX1 )⁴]E[(X2 − µX2 )⁴].

So

K(X1 X2 ) = E[(X1 X2 )⁴] / σX1X2⁴ = (E[X1⁴] / σX1⁴) · (E[X2⁴] / σX2⁴) = K(X1 )K(X2 ).

c) It is known that:
1) for any Gaussian random variable X, the kurtosis is K(X) = 3;
2) the sum of two independent (more generally, jointly) Gaussian variables is a Gaussian variable.
Therefore X + Z is a Gaussian variable and K(X + Z) = 3.
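
The identities in b) and c) are easy to confirm by Monte Carlo; here is a minimal Python sketch (assuming NumPy and SciPy; the uniform factor, with known kurtosis 9/5, is only an illustrative choice):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
x1 = rng.standard_normal(1_000_000)      # Gaussian: K(X1) = 3
x2 = rng.uniform(-1.0, 1.0, 1_000_000)   # uniform on (-1, 1): K(X2) = 9/5

k = lambda x: kurtosis(x, fisher=False)  # plain (non-excess) kurtosis
# K(X1*X2) should be close to 3 * 1.8 = 5.4, and the sum of two
# independent Gaussians is Gaussian, so its kurtosis stays near 3.
print(k(x1), k(x2), k(x1 * x2), k(x1 + rng.standard_normal(1_000_000)))
```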

Problem 4. Assume that X is a Gaussian random variable with mean µ
and variance σ².
a) Write down the probability density of X.
b) Transform the variable X into a Gaussian variable Z with the standard
normal distribution N (0, 1), using the transformation

Z = aX + b.

Find a and b.

Solution. a) The probability density of a Gaussian variable with mean µX and
variance σX² is

f (x) = (1/(√(2π) σX )) exp(−(x − µX )²/(2σX²)),  x ∈ (−∞, ∞).

b) It is known that if X is a Gaussian random variable, then for any
numbers a and b the variable Z = aX + b also has a Gaussian distribution,
and a Gaussian distribution is determined by its mean and variance.
Then

E[Z] = aE[X] + b = aµX + b

and

Var(Z) = E(Z − E[Z])² = E(aX + b − (aµX + b))² = a²E(X − E[X])² = a²σX².

Thus Z has the Gaussian distribution

N (µZ , σZ²) = N (aµX + b, a²σX²).

We need to find a and b such that

N (aµX + b, a²σX²) = N (0, 1),

that is,

aµX + b = 0,  a²σX² = 1.

Solving these equations we find

a = 1/σX ,  b = −µX /σX .

So

Z = (X − µX )/σX .
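
A quick numerical sanity check of the standardization (a Python sketch, assuming NumPy, with illustrative values µX = 5, σX = 2):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 5.0, 2.0
x = rng.normal(mu, sigma, 1_000_000)  # X ~ N(5, 4)
z = (x - mu) / sigma                  # a = 1/sigma, b = -mu/sigma

print(z.mean(), z.std())              # sample mean ~ 0, sample std ~ 1
```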

Problem 5. The skewness of a random variable X is defined by

S(X) = E(X − µX )³ / σX³,

where µX = E[X] and σX² = Var(X).
If X and Y are two independent random variables with zero mean, and
S(X) = 0, show that S(XY ) = 0.
Solution. By definition,

S(XY ) = E(XY − E[XY ])³ / σXY³.

Since X, Y are independent with zero means,

E[XY ] = E[X]E[Y ] = 0,  σXY² = E(XY − E[XY ])² = E(XY )² = E[X²]E[Y²] > 0.

It is given that S(X) = 0. Since EX = 0, S(X) = E(X − EX)³/σX³ = E[X³]/σX³ = 0,
which implies E[X³] = 0. Thus

E[(XY − E[XY ])³] = E[(XY )³] = E[X³]E[Y³] = 0,

so S(XY ) = 0/σXY³ = 0.
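
A Monte Carlo illustration (a Python sketch, assuming NumPy and SciPy; the centred exponential for Y is an illustrative choice with skewness 2): even though Y is heavily skewed, the product XY is not:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)
x = rng.standard_normal(1_000_000)         # symmetric, so S(X) = 0
y = rng.exponential(1.0, 1_000_000) - 1.0  # zero mean but skewed: S(Y) = 2

print(skew(x), skew(y), skew(x * y))       # ~0, ~2, ~0
```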

Problem 6 [Optional]. The distribution of a random variable X is said to
have a fat tail if

Pr[X > x] ∼ x^{−α}, as x → ∞, α > 0.

Assume that X has zero mean and a symmetric distribution with probability
density f .
a) Explain what a symmetric distribution means in terms of f .
b) Find the derivative

(d/dx) Pr[X > x].

c) (Optional) Denote F*(x) = Pr[X > x]. Show that for any 0 < p < α,

E[|X|^p] = 2 ∫₀^∞ F*(x) p x^{p−1} dx.

Conclude that E[|X|^p] < ∞ if p < α.

Solution. a) A symmetric distribution of a random variable X with zero mean
means that

P (X > a) = P (X < −a), for a > 0,

or, in terms of the probability density function,

f (x) = f (−x) for all x.

b) We have

(d/dx) Pr[X > x] = (d/dx)(1 − Pr[X ≤ x]) = (d/dx)(1 − F (x)) = −f (x),

since the distribution function F (x) = P (X ≤ x) has the property
(d/dx) F (x) = f (x).

c) Integrating by parts gives

∫₀^∞ F*(x) p x^{p−1} dx = ∫₀^∞ F*(x) dx^p = [F*(x) x^p]₀^∞ − ∫₀^∞ x^p dF*(x).

Since F*(x) ∼ x^{−α} as x → ∞ and p < α,

lim_{x→∞} F*(x) x^p = lim_{x→∞} x^{−α} x^p = 0.

Clearly F*(0) · 0^p = 0.
Next, (F*(x))′ = (1 − F (x))′ = −f (x), so

∫₀^∞ x^p dF*(x) = − ∫₀^∞ x^p f (x) dx.

Therefore,

2 ∫₀^∞ F*(x) p x^{p−1} dx = 2 ∫₀^∞ x^p f (x) dx.

Since f (x) is an even function, this equals

∫_{−∞}^∞ |x|^p f (x) dx = E|X|^p.

For the remaining conclusion we give a heuristic argument. We know that

Pr[X > x] = 1 − F (x) ∼ x^{−α}.

Then

(1 − F (x))′ ∼ (x^{−α})′,

that is,

−f (x) ∼ −α x^{−α−1},

so f (x) ∼ α x^{−α−1} when x is large. Therefore

E|X|^p = ∫_{−∞}^∞ |x|^p f (x) dx < ∞,

since

|x|^p f (x) ∼ α |x|^{p−α−1}, for large |x|,

is an integrable function when α > p.
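
The moment condition p < α can also be seen numerically; here is a Python sketch (assuming NumPy) for a Pareto variable with tail index α = 3:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 3.0
# rng.pareto draws Lomax samples; adding 1 gives Pr[X > x] = x**(-alpha), x >= 1.
x = 1.0 + rng.pareto(alpha, 1_000_000)

for p in (1.0, 2.0, 2.9, 3.5):
    # Sample p-th moments: stable across seeds for p < alpha, erratic for p > alpha.
    print(p, np.mean(x ** p))
```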

Problem 7. What can you say about the tail index of the standard normal
distribution? How many finite moments does it have?

Solution. The standard normal distribution has probability density

f (x) = (1/√(2π)) e^{−x²/2}.

Then

E|X|^p = ∫_{−∞}^∞ |x|^p (1/√(2π)) e^{−x²/2} dx = 2 ∫₀^∞ x^p (1/√(2π)) e^{−x²/2} dx < ∞,

because for any a > 0 we have e^{−x²/2} ≤ x^{−a} for all large x, and x^{p−a} is
integrable at infinity when a > p + 1. Hence every moment E|X|^p, p > 0, is
finite; the tail Pr[X > x] decays faster than any power x^{−α}, so the standard
normal distribution is not fat-tailed (informally, its tail index is α = ∞).
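
Numerically (a Python sketch, assuming NumPy and SciPy), even high-order absolute moments of N (0, 1) come out finite:

```python
import numpy as np
from scipy.integrate import quad

def integrand(x, p):
    # |x|^p times the standard normal density, on the half line x > 0
    return x**p * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

for p in (1, 4, 10, 20):
    half, _ = quad(integrand, 0, np.inf, args=(p,))
    print(p, 2 * half)  # E|X|^p = 2 * integral over (0, inf), finite for all p
```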

Problem 8 [Optional]. At http://faculty.chicagogsb.edu/ruey.tsay/teaching/fts2/
you will find the daily stock returns of American Express (axp), Caterpillar
(cat) and Starbucks (sbux).
(a) Compute the sample mean, standard deviation, skewness, excess
kurtosis, minimum, and maximum of the returns of each stock.
(b) Discuss the empirical characteristics of these returns.
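
A sketch of part (a) in Python, assuming pandas and SciPy and that the three return series have been read into a DataFrame named returns with columns 'axp', 'cat' and 'sbux' (the actual file layout on the course page may differ):

```python
import pandas as pd
from scipy.stats import kurtosis, skew

def summarize(r: pd.Series) -> pd.Series:
    """Sample statistics requested in part (a) for one return series."""
    return pd.Series({
        "mean": r.mean(),
        "std": r.std(),
        "skewness": skew(r),
        "excess kurtosis": kurtosis(r),  # Fisher definition: 0 for a normal
        "min": r.min(),
        "max": r.max(),
    })

# Example usage once the data are loaded, e.g.
# returns = pd.read_csv("d-axp-cat-sbux.csv")   # hypothetical file name
# print(returns[["axp", "cat", "sbux"]].apply(summarize).T)
```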
