Problem Set 1 Full Solutions
Liudas Giraitis
CB301, [email protected]
Solution. (i) Yes. The mean and variance are finite and constant, and cov(X_t, X_s) = 0 for t ≠ s.
(iii) No. For all t, E[X_t] = E[tε_t] = 0 and

Var(X_t) = E(X_t − EX_t)^2 = EX_t^2 = E(tε_t)^2 = t^2 Eε_t^2 = t^2,

which clearly depends on t.
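A quick numerical illustration (my own addition, not part of the original solution, assuming the ε_t are i.i.d. N(0, 1)): the sample variance of X_t = tε_t grows like t^2, confirming the process is not second-order stationary.

```python
import numpy as np

# Illustration: for X_t = t * eps_t with eps_t i.i.d. N(0,1),
# the sample variance over many draws grows like t**2.
rng = np.random.default_rng(0)
n_rep = 100_000                            # independent draws of X_t per fixed t
for t in (1, 5, 10):
    x_t = t * rng.standard_normal(n_rep)   # X_t = t * eps_t
    print(t, x_t.var())                    # close to t**2: 1, 25, 100
```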
Problem 2. Explain why none of the following functions can be the autocorrelation function of a second-order stationary time series:

(c) ρ_0 = 1, ρ_1 = ρ_{−1} = 0.5, ρ_2 = ρ_{−2} = −2.5, and all the other ρ_k's are zero.
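A small numerical check of (c) (my own illustration, not part of the original text): a valid autocorrelation function must make every Toeplitz matrix R = (ρ_{|i−j|}) non-negative definite, and ρ_2 = −2.5 also violates |ρ_k| ≤ 1; the matrix below has a negative eigenvalue.

```python
import numpy as np

# Case (c): rho_0 = 1, rho_1 = 0.5, rho_2 = -2.5, rho_k = 0 otherwise.
# Build the implied 5x5 correlation (Toeplitz) matrix and inspect its spectrum.
rho = {0: 1.0, 1: 0.5, 2: -2.5}
n = 5
R = np.array([[rho.get(abs(i - j), 0.0) for j in range(n)] for i in range(n)])
eig_min = np.linalg.eigvalsh(R).min()
print(eig_min)   # negative, so R cannot be a correlation matrix
```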
Solution.
a) We have E[Y] = E[aX + b] = aE[X] + b = aµ_X + b. By definition,

Var(Y) = E[(Y − µ_Y)^2] = E[(aX + b − (aµ_X + b))^2] = a^2 E[(X − µ_X)^2] = a^2 Var(X) = a^2 σ_X^2.

Similarly, E(Y − µ_Y)^4 = a^4 E(X − µ_X)^4 and σ_Y^4 = a^4 σ_X^4. Therefore

K(Y) = E(Y − µ_Y)^4 / σ_Y^4 = a^4 E(X − µ_X)^4 / (a^4 σ_X^4) = K(X).
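The invariance K(aX + b) = K(X) can be seen numerically (my own addition): the sample kurtosis of an affinely transformed sample coincides with that of the original sample up to floating-point error.

```python
import numpy as np

# Sample kurtosis K = E(Y - mu)^4 / sigma^4 is invariant under Y = a*X + b.
def kurtosis(v):
    v = np.asarray(v, dtype=float)
    return ((v - v.mean()) ** 4).mean() / v.var() ** 2

rng = np.random.default_rng(1)
x = rng.exponential(size=200_000)         # any non-degenerate sample works
a, b = -3.0, 7.0
print(kurtosis(x), kurtosis(a * x + b))   # the two values coincide
```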
b) By definition,
c) It is known that:

1) for any Gaussian random variable X, the kurtosis K(X) = 3;

2) the sum of two independent Gaussian variables is a Gaussian variable.

Therefore X + Z is a Gaussian variable and K(X + Z) = 3.
Let Z = aX + b, where X is Gaussian with density

f(x) = (1/(√(2π) σ_X)) exp(−(x − µ_X)^2 / (2σ_X^2)),  x ∈ (−∞, ∞).

Find a and b such that Z ∼ N(0, 1).
Then Z = aX + b has distribution

N(aµ_X + b, a^2 σ_X^2) = N(0, 1),

that is,

aµ_X + b = 0,  a^2 σ_X^2 = 1.
Solving these equations we find

a = 1/σ_X,  b = −µ_X/σ_X.

So

Z = (X − µ_X)/σ_X.
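A quick sanity check of the standardisation (my own addition): with a = 1/σ_X and b = −µ_X/σ_X, the transformed sample has mean approximately 0 and variance approximately 1.

```python
import numpy as np

# Standardising X ~ N(mu_X, sigma_X^2) via Z = (X - mu_X)/sigma_X.
rng = np.random.default_rng(2)
mu_x, sigma_x = 4.0, 2.5
x = rng.normal(mu_x, sigma_x, size=500_000)
z = (x - mu_x) / sigma_x
print(z.mean(), z.var())   # approximately 0 and 1
```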
The skewness of X is defined by

S(X) = E(X − µ_X)^3 / σ_X^3,

where µ_X = E[X] and σ_X^2 = Var(X).
If X and Y are two independent random variables with zero mean, and
S(X) = 0, show that S(XY ) = 0.
Solution. b) By definition, S(XY) = E[(XY − E[XY])^3] / σ_{XY}^3. By independence and zero means, E[XY] = E[X]E[Y] = 0, and since E[X] = 0, the assumption S(X) = 0 gives E[X^3] = E(X − µ_X)^3 = 0. Hence, by independence,

E[(XY)^3] = E[X^3] E[Y^3] = 0,

so S(XY) = 0.
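A Monte Carlo sketch of this claim (my own illustration): take X ~ N(0, 1), which is symmetric so S(X) = 0, and an independent, centred but skewed Y; the sample skewness of XY comes out near zero even though S(Y) does not.

```python
import numpy as np

# Sample skewness S = E(V - mu)^3 / sigma^3.
def skewness(v):
    v = np.asarray(v, dtype=float)
    return ((v - v.mean()) ** 3).mean() / v.std() ** 3

rng = np.random.default_rng(3)
x = rng.standard_normal(500_000)          # symmetric, so S(X) = 0
y = rng.exponential(size=500_000) - 1.0   # E[Y] = 0 but S(Y) = 2
print(skewness(y), skewness(x * y))       # S(Y) near 2, S(XY) near 0
```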
Problem 6 [Optional]. The distribution of a random variable X is said to have a fat tail (with tail index α > 0) if

Pr[X > x] ∼ x^{−α},  x → ∞.

Assume that X has zero mean and a symmetric distribution with probability density f.
a) Explain what it means, in terms of f, for the distribution to be symmetric.
b) Find the derivative

(d/dx) Pr[X > x].
c) (Optional) Denote F^*(x) = Pr[X > x]. Show that for any 0 < p < α,

E[|X|^p] = 2 ∫_0^∞ F^*(x) p x^{p−1} dx.

Conclude that E[|X|^p] < ∞ if p < α.
b) We have

(d/dx) Pr[X > x] = (d/dx)(1 − Pr[X ≤ x]) = (d/dx)(1 − F(x)) = −f(x),

since the distribution function F(x) = P(X ≤ x) has the property (d/dx) F(x) = f(x).
Integrating by parts gives

∫_0^∞ F^*(x) p x^{p−1} dx = ∫_0^∞ F^*(x) d(x^p) = [F^*(x) x^p]_0^∞ − ∫_0^∞ x^p dF^*(x).
Since F^*(x) ∼ x^{−α} as x → ∞ and p < α, we have F^*(x) x^p ∼ x^{p−α} → 0 as x → ∞. Clearly F^*(0) 0^p = 0, so the boundary term vanishes at both limits.
Next, (F^*(x))′ = (1 − F(x))′ = −f(x). So

∫_0^∞ x^p dF^*(x) = − ∫_0^∞ x^p f(x) dx.
Therefore,

2 ∫_0^∞ F^*(x) p x^{p−1} dx = 2 ∫_0^∞ x^p f(x) dx = E[|X|^p],

where the last equality uses the symmetry of f.
It remains to show finiteness. Since 1 − F(x) = F^*(x) ∼ x^{−α},

(1 − F(x))′ ∼ (x^{−α})′,

that is,

−f(x) ∼ −α x^{−α−1}.

So f(x) ∼ α x^{−α−1} when x is large. Therefore

E|X|^p = ∫_{−∞}^{∞} |x|^p f(x) dx < ∞,

since

|x|^p f(x) ∼ α |x|^{−α+p−1}, for large |x|,

is an integrable function at infinity when α > p.
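As a sanity check (my own example, not from the problem set), take a symmetric Pareto-type X with tail index α = 3, so that F^*(x) = (1/2) x^{−α} for x ≥ 1 and F^*(x) = 1/2 for 0 ≤ x < 1. For p < α the integral 2∫_0^∞ F^*(x) p x^{p−1} dx evaluates in closed form to α/(α − p), and a Monte Carlo estimate of E|X|^p agrees.

```python
import numpy as np

# Symmetric Pareto-type example: |X| = U**(-1/alpha) with U ~ Uniform(0,1)
# is Pareto(alpha) on (1, inf), so P(|X| > x) = x**(-alpha) for x >= 1.
alpha, p = 3.0, 1.0
closed_form = 1.0 + p / (alpha - p)   # = 2*Int_0^inf F*(x) p x^(p-1) dx = alpha/(alpha-p)
rng = np.random.default_rng(4)
u = rng.uniform(size=1_000_000)
abs_x = u ** (-1.0 / alpha)           # samples of |X|
print(closed_form, (abs_x ** p).mean())   # both approximately 1.5
```

For p ≥ α the same integral diverges, matching the conclusion that only moments of order below the tail index are finite.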
Problem 7. What can you say about the tail index of the standard normal distribution? How many finite moments does it have?
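A numerical hint toward the answer (my own addition): the standard normal tail P(X > x) decays like exp(−x^2/2), faster than any power x^{−α}, so the normal law has no finite tail index and E|X|^p < ∞ for every p > 0.

```python
from math import erfc, sqrt

# P(X > x) for X ~ N(0,1), via the complementary error function.
def tail(x):
    return 0.5 * erfc(x / sqrt(2.0))

# Even x**10 * P(X > x) tends to 0: the tail is thinner than any polynomial.
for x in (4.0, 6.0, 8.0, 10.0):
    print(x, tail(x), x ** 10 * tail(x))
```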