Rjv2119
Homework 1
1.1 a)
We construct a function that we want to minimize in terms of c with the form:
$$L(c) = E[(Y - c)^2] = E[Y^2 - 2Yc + c^2] = E[Y^2] - 2cE[Y] + c^2.$$
We differentiate this expression with respect to c and set it equal to 0:
$$\frac{dL}{dc} = -2E[Y] + 2c = 0,$$
from which we obtain that the value of c that minimizes the function is:
$$\hat{c} = E[Y].$$
We can also check the second derivative to confirm that this is indeed a minimum:
$$\frac{d^2 L}{dc^2} = 2 > 0.$$
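As a small illustration (not part of the derivation), a Monte Carlo check in R shows that the empirical squared loss is smallest near c = E[Y]; the distribution of Y here is an arbitrary choice:

set.seed(1)
y    <- rexp(1e5, rate = 1/3)                      # any Y works; here E[Y] = 3
cs   <- seq(0, 6, by = 0.1)
loss <- sapply(cs, function(c) mean((y - c)^2))    # empirical E[(Y - c)^2]
cs[which.min(loss)]                                # approximately mean(y) = E[Y]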
1.1 b)
We construct a function that we want to minimize in terms of $f(X)$ with the form:
$$L(f(X)) = E[(Y - f(X))^2 \mid X] = E[Y^2 \mid X] - 2 f(X)\, E[Y \mid X] + f(X)^2.$$
We differentiate with respect to $f(X)$ and set it equal to 0:
$$\frac{dL}{df(X)} = -2 E[Y \mid X] + 2 f(X) = 0,$$
from which we obtain that the value of $f(X)$ that minimizes the function is:
$$\hat{f}(X) = E[Y \mid X].$$
We can also check the second derivative to confirm that this is indeed a minimum:
$$\frac{d^2 L}{df(X)^2} = 2 > 0.$$
1.1 c)
We construct a function that we want to minimize in terms of $g(X)$ with the form:
$$L(g(X)) = E[(Y - g(X))^2] = E\big[E[(Y - g(X))^2 \mid X]\big] = \int E[(Y - g(X))^2 \mid X = x]\, f_X(x)\, dx.$$
From part b) we know that the inner term $E[(Y - g(X))^2 \mid X]$ is minimized for every value of $X$ when
$$\hat{g}(X) = E[Y \mid X].$$
Knowing that $f_X(x) \ge 0$, the integral $\int E[(Y - g(X))^2 \mid X = x]\, f_X(x)\, dx$ is then also minimized by the same choice, so
$$\hat{g}(X) = E[Y \mid X].$$
1.2 a)
We construct a function that we want to minimize in terms of $f(X_1, X_2, \dots, X_n)$ with the form:
$$L(f(X_1,\dots,X_n)) = E[(X_{n+1} - f(X_1,\dots,X_n))^2 \mid X_1,\dots,X_n]$$
$$= E[X_{n+1}^2 \mid X_1,\dots,X_n] - 2 f(X_1,\dots,X_n)\, E[X_{n+1} \mid X_1,\dots,X_n] + f(X_1,\dots,X_n)^2.$$
We differentiate with respect to $f(X_1,\dots,X_n)$ and set it equal to 0:
$$\frac{dL}{df(X_1,\dots,X_n)} = -2 E[X_{n+1} \mid X_1,\dots,X_n] + 2 f(X_1,\dots,X_n) = 0,$$
from which we obtain that the value of $f(X_1,\dots,X_n)$ that minimizes the function is:
$$\hat{f}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n].$$
We can also check the second derivative to confirm that this is indeed a minimum:
$$\frac{d^2 L}{df(X_1,\dots,X_n)^2} = 2 > 0.$$
1.2 b)
We construct a function that we want to minimize in terms of $g(X_1, X_2, \dots, X_n)$ with the form:
$$L(g(X_1,\dots,X_n)) = E[(X_{n+1} - g(X_1,\dots,X_n))^2] = E\big[E[(X_{n+1} - g(X_1,\dots,X_n))^2 \mid X_1,\dots,X_n]\big]$$
$$= \int \cdots \int E[(X_{n+1} - g(X_1,\dots,X_n))^2 \mid X_1 = x_1,\dots,X_n = x_n]\, f(x_1,\dots,x_n)\, dx_1 \cdots dx_n.$$
From part a) we know that $E[(X_{n+1} - g(X_1,\dots,X_n))^2 \mid X_1,\dots,X_n]$ is minimized when
$$\hat{g}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n].$$
Knowing that the joint density $f(x_1,\dots,x_n) \ge 0$, the integral above is also minimized by the same choice, and therefore
$$\hat{g}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n].$$
1.2 c)
We know from the previous part that
$$\hat{g}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n]$$
minimizes the mean squared error. If we add the information that $X_1, X_2, \dots, X_n, X_{n+1}$ are iid, then $X_{n+1}$ is independent of $X_1,\dots,X_n$, so
$$\hat{g}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n] = E[X_{n+1}] = \mu.$$
1.2 d)
We construct a linear estimator for $\mu$ in terms of coefficients $a_i$ and the iid sample $X_1, X_2, \dots, X_n$:
$$\hat{\mu} = \sum_{i=1}^{n} a_i X_i.$$
We want this estimator to be unbiased, so the condition
$$E[\hat{\mu}] = \mu$$
has to be met. Using the iid assumption,
$$E[\hat{\mu}] = E\Big[\sum_{i=1}^{n} a_i X_i\Big] = \sum_{i=1}^{n} a_i E[X_i] = \mu \sum_{i=1}^{n} a_i,$$
so unbiasedness requires $\sum_{i=1}^{n} a_i = 1$.
Now we also want this estimator to have minimum variance, because the MSE equals squared bias plus variance; therefore an unbiased estimator with minimum variance is the best among unbiased linear estimators. We compute:
$$\mathrm{Var}(\hat{\mu}) = \mathrm{Var}\Big(\sum_{i=1}^{n} a_i X_i\Big) = \sum_{i=1}^{n} a_i^2\, \mathrm{Var}(X_i) = \sigma^2 \sum_{i=1}^{n} a_i^2.$$
So the estimator has minimum variance when $\sum_{i=1}^{n} a_i^2$ is minimized subject to the unbiasedness constraint:
$$\min_{a_i} \sum_{i=1}^{n} a_i^2 \quad \text{s.t.} \quad \sum_{i=1}^{n} a_i = 1.$$
Using a Lagrange multiplier,
$$L(a_1,\dots,a_n,\lambda) = \sum_{i=1}^{n} a_i^2 + \lambda\Big(\sum_{i=1}^{n} a_i - 1\Big),$$
setting the partial derivatives to zero gives $a_i = -\lambda/2$, the same for every $i$, and the constraint then forces
$$\hat{a}_i = \frac{1}{n}.$$
From part c) we obtained that
$$\hat{g}(X_1,\dots,X_n) = E[X_{n+1} \mid X_1,\dots,X_n] = E[X_{n+1}] = \mu$$
is the value that minimizes the MSE, which is the squared difference between $X_{n+1}$ and the estimator. We have just shown that
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$$
is the best linear unbiased estimator for $\mu$ under the iid assumption. We can therefore say that $\bar{X}_n$ is the best linear unbiased predictor of $X_{n+1}$.
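As a quick numerical sanity check (a minimal sketch, not part of the original solution), the following R code compares the variance of the equal-weight estimator with that of an arbitrary unbiased weighting; the weights, sample size, and distribution are illustrative choices only:

set.seed(1)
n  <- 5
B  <- 1e5                              # Monte Carlo replications
mu <- 2                                # true mean (illustrative)
X  <- matrix(rnorm(B * n, mean = mu, sd = 1), nrow = B)

w_equal <- rep(1 / n, n)               # BLUE weights a_i = 1/n
w_other <- c(0.4, 0.3, 0.1, 0.1, 0.1)  # another unbiased weighting (sums to 1)

est_equal <- X %*% w_equal
est_other <- X %*% w_other

c(mean(est_equal), mean(est_other))    # both close to mu (unbiased)
c(var(est_equal), var(est_other))      # equal weights give the smaller variance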
Now consider the random walk $S_{n+1} = S_n + X_{n+1}$. Given $S_n$, we want an estimator of the form
$$g(S_n) = S_n + c$$
that minimizes the MSE $E[(S_{n+1} - g(S_n))^2 \mid S_n]$ for $S_{n+1}$. Since $S_{n+1} - g(S_n) = X_{n+1} - c$ and $X_{n+1}$ is independent of $S_n$, this is the problem already solved in 1.1 a), giving
$$\hat{c} = E[X_{n+1}] = \mu.$$
We therefore construct the estimator
$$\hat{g}(S_n) = S_n + \mu.$$
1.3)
From the definition of strict stationarity we first select $n = 1$ and shift $h = t - 1$, which gives
$$X_t \stackrel{d}{=} X_1 \quad \text{for every } t,$$
so $E[X_t] = E[X_1]$ does not depend on $t$ (and is finite because $E[X_t^2] < \infty$). Selecting $n = 2$ and the same shift we obtain, for any $t$ and $h$,
$$(X_t, X_{t+h}) \stackrel{d}{=} (X_1, X_{1+h}),$$
so $\mathrm{Cov}(X_t, X_{t+h}) = \mathrm{Cov}(X_1, X_{1+h})$ depends only on $h$. Since neither the mean nor the autocovariance depends on $t$, the process is weakly stationary.
1.4 a)
It is easy to obtain the expectation of this process:
$$E[X_t] = a + b\,E[Z_t] + c\,E[Z_{t-2}] = a.$$
For the covariance we compute:
$$\gamma(h) = \begin{cases} \sigma^2 (b^2 + c^2), & h = 0 \\ b c\, \sigma^2, & |h| = 2 \\ 0, & \text{otherwise.} \end{cases}$$
We can see that neither the expected value nor the covariance depends on $t$, and therefore the process is weakly stationary.
1.4 b)
It is easy to obtain the expectation of this process:
$$E[X_t] = E[Z_1]\cos(ct) + E[Z_2]\sin(ct) = 0.$$
For the covariance we compute:
$$\gamma(h) = \sigma^2\big(\cos(ct)\cos(c(t+h)) + \sin(ct)\sin(c(t+h))\big) = \sigma^2\cos\!\big(c(t+h) - ct\big) = \sigma^2 \cos(ch).$$
We can see that neither the expected value nor the covariance depends on $t$, and therefore the process is weakly stationary.
1.4 c)
It is easy to obtain the expectation of this process:
$$E[X_t] = E[Z_t]\cos(ct) + E[Z_{t-1}]\sin(ct) = 0.$$
For the covariance we compute:
$$\gamma(t+h, t) = \begin{cases} \sigma^2, & h = 0 \\ \sigma^2 \sin(c(t+1))\cos(ct), & h = 1 \\ \sigma^2 \sin(ct)\cos(c(t-1)), & h = -1 \\ 0, & \text{otherwise.} \end{cases}$$
Only when $c = k\pi$, $k \in \mathbb{Z}$, do both the expected value and the covariance not depend on $t$ (the $h = \pm 1$ terms then vanish), so in that case the process is weakly stationary; otherwise it is not weakly stationary.
1.4 d)
It is easy to obtain the expectation of this process:
$$E[X_t] = a + b\,E[Z_0] = a.$$
For the covariance we compute:
$$\gamma(t+h, t) = \mathrm{Cov}(a + bZ_0,\, a + bZ_0) = b^2\sigma^2 \quad \text{for every } h.$$
Neither depends on $t$, so the process is weakly stationary.
1.4 e)
It is easy to obtain the expectation of this process:
$$E[X_t] = E[Z_0]\cos(ct) = 0.$$
For the covariance we compute:
$$\gamma(t+h, t) = \sigma^2 \cos(ct)\cos(c(t+h)).$$
Only when $c = k\pi$, $k \in \mathbb{Z}$, do both the expected value and the covariance not depend on $t$, in which case the process is weakly stationary; otherwise it is not weakly stationary.
1.4 f)
It is easy to obtain the expectation of this process:
$$E[X_t] = E[Z_t]\,E[Z_{t-1}] = 0.$$
For the covariance we compute:
$$\gamma(h) = \mathrm{Cov}(Z_t Z_{t-1},\, Z_{t-h} Z_{t-1-h}) = \begin{cases} \sigma^4, & h = 0 \\ 0, & \text{otherwise.} \end{cases}$$
We can see that neither the expected value nor the covariance depends on $t$, and therefore the process is weakly stationary.
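As an illustrative check (not part of the original solution), a short R simulation of $X_t = Z_t Z_{t-1}$ with standard normal $Z_t$ gives a sample mean near 0, a sample variance near $\sigma^4 = 1$, and sample autocorrelations near 0 at all nonzero lags:

set.seed(123)
n <- 1e5
z <- rnorm(n + 1)                  # Z_0, Z_1, ..., Z_n with sigma^2 = 1
x <- z[-1] * z[-(n + 1)]           # X_t = Z_t * Z_{t-1}

mean(x)                            # approximately 0
var(x)                             # approximately sigma^4 = 1
acf(x, lag.max = 5, plot = FALSE)  # autocorrelations near 0 for h >= 1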
1.5 a)
For this process we have:
$$\gamma(h) = \begin{cases} 1 + \theta^2, & h = 0 \\ \theta, & |h| = 2 \\ 0, & \text{otherwise,} \end{cases} \qquad
\rho(h) = \frac{\gamma(h)}{\gamma(0)} = \begin{cases} 1, & h = 0 \\ \dfrac{\theta}{1 + \theta^2}, & |h| = 2 \\ 0, & \text{otherwise.} \end{cases}$$
Substituting $\theta = 0.8$ we obtain:
$$\gamma(h) = \begin{cases} 1.64, & h = 0 \\ 0.8, & |h| = 2 \\ 0, & \text{otherwise,} \end{cases} \qquad
\rho(h) = \begin{cases} 1, & h = 0 \\ 0.488, & |h| = 2 \\ 0, & \text{otherwise.} \end{cases}$$
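As a quick check (an illustrative sketch, not required by the problem), R's built-in ARMAacf reproduces the theoretical ACF of the MA(2) process $X_t = Z_t + 0.8\,Z_{t-2}$:

ARMAacf(ma = c(0, 0.8), lag.max = 4)
## lags 0..4: 1.0000  0.0000  0.4878  0.0000  0.0000   (rho(2) = 0.8 / 1.64)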
1.5 b)
For this process, with $n = 4$, only the four pairs with $|i - j| = 2$ contribute a nonzero covariance, so:
$$\mathrm{Var}(\bar{X}) = \mathrm{Var}\Big(\frac{1}{n}\sum_{i=1}^{n} X_i\Big) = \frac{1}{n^2}\Big[\sum_{i=1}^{n}\mathrm{Var}(X_i) + \sum_{i \ne j}\mathrm{Cov}(X_i, X_j)\Big] = \frac{1}{16}\big(4\gamma(0) + 4\gamma(2)\big) = \frac{\gamma(0)}{4} + \frac{\gamma(2)}{4}.$$
Substituting $\theta = 0.8$ we obtain:
$$\mathrm{Var}(\bar{X}) = \frac{1.64 + 0.8}{4} = 0.61.$$
1.5 c)
Substituting $\theta = -0.8$ instead, $\gamma(2) = -0.8$ and we obtain:
$$\mathrm{Var}(\bar{X}) = \frac{1.64 - 0.8}{4} = 0.21.$$
We can see that the negative covariance causes the variance of the sample mean to decrease.
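The two values can be verified numerically (an illustrative sketch, not part of the original write-up) by building the 4 x 4 covariance matrix of $(X_1,\dots,X_4)$ and computing $w^\top \Gamma w$ with $w = (1/4,\dots,1/4)$:

var_of_mean <- function(theta, n = 4) {
  # gamma(0) = 1 + theta^2, gamma(+-2) = theta, 0 otherwise (sigma^2 = 1)
  gamma <- function(h) ifelse(h == 0, 1 + theta^2, ifelse(abs(h) == 2, theta, 0))
  G <- outer(1:n, 1:n, function(i, j) gamma(i - j))   # covariance matrix of X_1..X_n
  w <- rep(1 / n, n)
  drop(t(w) %*% G %*% w)
}
var_of_mean(0.8)   # 0.61
var_of_mean(-0.8)  # 0.21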
1.6 a)
For this AR(1) process we have:
$$\gamma(h) = \begin{cases} \dfrac{\sigma^2}{1 - \phi^2}, & h = 0 \\[4pt] \phi^{|h|}\,\gamma(0), & h \ne 0. \end{cases}$$
For the sample mean of $n = 4$ observations,
$$\mathrm{Var}(\bar{X}) = \frac{1}{16}\Big[\sum_{i=1}^{4}\mathrm{Var}(X_i) + \sum_{i \ne j}\mathrm{Cov}(X_i, X_j)\Big] = \frac{1}{16}\big(4\gamma(0) + 6\gamma(1) + 4\gamma(2) + 2\gamma(3)\big).$$
Substituting $\phi = 0.9$ and $\sigma^2 = 1$ (so $\gamma(0) = 1/0.19 \approx 5.263$) we obtain:
$$\mathrm{Var}(\bar{X}) = 4.6375.$$
1.6 b)
Repeating the computation of part a) with $\phi = -0.9$ and $\sigma^2 = 1$:
$$\mathrm{Var}(\bar{X}) = \frac{1}{16}\big(4\gamma(0) + 6\gamma(1) + 4\gamma(2) + 2\gamma(3)\big) = 0.126.$$
We can see that the negative covariance causes the variance of the sample mean to decrease.
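These two numbers can also be checked numerically (a small sketch, not in the original solution), using $\gamma(h) = \sigma^2\phi^{|h|}/(1-\phi^2)$ to build the covariance matrix of $(X_1,\dots,X_4)$:

ar1_var_of_mean <- function(phi, sigma2 = 1, n = 4) {
  gamma0 <- sigma2 / (1 - phi^2)                          # gamma(0)
  G <- outer(1:n, 1:n, function(i, j) gamma0 * phi^abs(i - j))
  w <- rep(1 / n, n)
  drop(t(w) %*% G %*% w)
}
ar1_var_of_mean(0.9)   # approximately 4.6375
ar1_var_of_mean(-0.9)  # approximately 0.126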
1.7)
$$E[X_t + Y_t] = E[X_t] + E[Y_t] = \mu_X + \mu_Y.$$
We can see that the expected value does not depend on time. For the covariance, using the fact that $\{X_t\}$ and $\{Y_t\}$ are uncorrelated,
$$\mathrm{Cov}(X_{t+h} + Y_{t+h},\, X_t + Y_t) = \mathrm{Cov}(X_{t+h}, X_t) + \mathrm{Cov}(Y_{t+h}, Y_t) = \gamma_X(h) + \gamma_Y(h),$$
which also does not depend on $t$. Therefore $\{X_t + Y_t\}$ is stationary with autocovariance function $\gamma_X(h) + \gamma_Y(h)$.
1.8 a)
Since the $Z_t$ are iid N(0,1) we have $E[Z_t] = 0$, $E[Z_t^2] = 1$, $E[Z_t^3] = 0$ and $E[Z_t^4] = 3$. Using this we obtain, for both even and odd $t$:
$$E[X_t] = 0.$$
If $t$ is even and $h = 0$ we have:
$$\gamma(t, t) = E[Z_t^2] = 1.$$
If $t$ is even and $h = 1$ we have:
$$\gamma(t, t+1) = \frac{1}{\sqrt{2}}\big(E[Z_t^3] - E[Z_t]\big) = 0.$$
If $t$ is odd and $h = 0$ we have:
$$\gamma(t, t) = \frac{1}{2}\big(E[Z_{t-1}^4] - 2E[Z_{t-1}^2] + 1\big) = \frac{1}{2}(3 - 2 + 1) = 1.$$
If $t$ is odd and $h = 1$ we have:
$$\gamma(t, t+1) = E[Z_{t+1}]\;E\Big[\frac{Z_{t-1}^2 - 1}{\sqrt{2}}\Big] = 0.$$
For $h > 1$ the two terms involve independent $Z$'s, so $\gamma(t, t+h) = 0$. Therefore
$$\gamma(h) = \begin{cases} 1, & h = 0 \\ 0, & h \ne 0, \end{cases}$$
and we have proven that this process is WN(0,1). To prove that it is not iid we use the result in the next part.
1.8 b)
If $n$ is odd, then $n+1$ is even and $X_{n+1} = Z_{n+1}$, which is independent of $X_1,\dots,X_n$, so
$$E[X_{n+1} \mid X_1,\dots,X_n] = E[Z_{n+1} \mid X_1,\dots,X_n] = E[Z_{n+1}] = 0.$$
If $n$ is even, then $n+1$ is odd, $X_{n+1} = (Z_n^2 - 1)/\sqrt{2}$ and $X_n = Z_n$, so
$$E[X_{n+1} \mid X_1,\dots,X_n] = E\Big[\tfrac{1}{\sqrt{2}}(Z_n^2 - 1)\,\Big|\, X_n = x_n\Big] = E\Big[\tfrac{1}{\sqrt{2}}(Z_n^2 - 1)\,\Big|\, Z_n = x_n\Big] = \frac{1}{\sqrt{2}}(x_n^2 - 1).$$
Since this conditional expectation is not constant, $X_{n+1}$ depends on $X_n$, and the process cannot be iid.
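A short simulation (an illustrative sketch, not part of the original solution) makes both claims visible: the sample ACF looks like white noise, yet $X_{t+1}$ is a deterministic function of $X_t$ whenever $t$ is even:

set.seed(42)
n <- 1e4
z <- rnorm(n + 1)                  # z[k + 1] stores Z_k, k = 0, ..., n
t <- 1:n
x <- ifelse(t %% 2 == 0, z[t + 1], (z[t]^2 - 1) / sqrt(2))

mean(x); var(x)                    # approximately 0 and 1
acf(x, lag.max = 4, plot = FALSE)  # sample ACF ~ 0 for h >= 1: looks like WN(0,1)

# ...but X_{t+1} equals (X_t^2 - 1)/sqrt(2) exactly whenever t is even, so not iid:
even <- t[t %% 2 == 0 & t < n]
max(abs(x[even + 1] - (x[even]^2 - 1) / sqrt(2)))   # 0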
1.9 a)
Since the sample ACF subtracts the sample mean, we may without loss of generality take $b = -2a/(n+1)$, so that
$$\bar{x} = a + \frac{b}{2}(n+1) = 0.$$
We can therefore express the sample autocovariance function as:
$$\hat{\gamma}(h) = \frac{1}{n}\sum_{t=1}^{n-h}\big(a + b(t+h)\big)\big(a + bt\big) = \frac{1}{n}\Big[\sum_{t=1}^{n-h}(a + bt)^2 + bh\sum_{t=1}^{n-h}(a + bt)\Big],$$
and in particular
$$\hat{\gamma}(0) = \frac{1}{n}\sum_{t=1}^{n}(a + bt)^2.$$
The sample autocorrelation function is therefore
$$\hat{\rho}(h) = \frac{\displaystyle\sum_{t=1}^{n-h}(a + bt)^2 + bh\sum_{t=1}^{n-h}(a + bt)}{\displaystyle\sum_{t=1}^{n}(a + bt)^2}.$$
As $n \to \infty$ with $h > 0$ fixed, the two quadratic sums grow at the same cubic rate in $n$, while the term $bh\sum_{t=1}^{n-h}(a+bt)$ grows more slowly, and therefore
$$\lim_{n\to\infty}\hat{\rho}(h) = 1.$$
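A quick empirical check in R (an illustrative sketch; the values of a, b and the lags are arbitrary): the sample ACF of a linear trend is close to 1 at small lags and approaches 1 for every fixed lag as n grows:

rho_hat <- function(n, a = 1, b = 0.5, h = 1:5) {
  x <- a + b * (1:n)
  acf(x, lag.max = max(h), plot = FALSE)$acf[h + 1]   # sample ACF at lags h
}
round(rho_hat(100), 3)    # already close to 1
round(rho_hat(10000), 3)  # even closer to 1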
1.9 b)
To make the results simpler we select $\omega = \pi$. Then (taking $n$ even)
$$\bar{x} = \frac{1}{n}\sum_{t=1}^{n} c\cos(\omega t) = 0,$$
so the sample autocovariance function is
$$\hat{\gamma}(h) = \frac{c^2}{n}\sum_{t=1}^{n-h}\cos(\omega t)\cos(\omega(t+h)).$$
If $h \ne 0$ and $h$ is odd, every term equals $-1$ and we have:
$$\hat{\gamma}(h) = \frac{c^2}{n}\sum_{t=1}^{n-h}(-1) = \frac{c^2}{n}(h - n).$$
If $h \ne 0$ and $h$ is even, every term equals $+1$ and we have:
$$\hat{\gamma}(h) = \frac{c^2}{n}\sum_{t=1}^{n-h}(+1) = \frac{c^2}{n}(n - h).$$
If $h = 0$ we have:
$$\hat{\gamma}(0) = \frac{c^2}{n}\sum_{t=1}^{n}(+1) = c^2.$$
Therefore
$$\hat{\rho}(h) = \begin{cases} 1 - \dfrac{h}{n}, & h \text{ even} \\[4pt] -\Big(1 - \dfrac{h}{n}\Big), & h \text{ odd,} \end{cases}$$
and as $n \to \infty$ (for fixed $h > 0$) we obtain
$$\hat{\rho}(h) \to \cos(\omega h) \quad \text{when } \omega = \pi.$$
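Again a small numerical check in R (illustrative only; n and c are arbitrary): for $x_t = c\cos(\pi t)$ the sample ACF is close to $\cos(\pi h) = (-1)^h$:

n  <- 200
c0 <- 2
x  <- c0 * cos(pi * (1:n))                            # x_t = c * cos(pi * t) = c * (-1)^t
round(acf(x, lag.max = 4, plot = FALSE)$acf[, 1, 1], 3)
## lags 0..4: approximately 1, -0.995, 0.99, -0.985, 0.98  i.e. (-1)^h * (1 - h/n)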
1.10)
We define:
$$\nabla m_t = m_t - m_{t-1}, \qquad m_t = c_0 + c_1 t + c_2 t^2 + \dots + c_p t^p.$$
Then
$$\nabla m_t = c_0\big(t^0 - (t-1)^0\big) + c_1\big(t^1 - (t-1)^1\big) + \dots + c_p\big(t^p - (t-1)^p\big) = \sum_{k=0}^{p} c_k\big(t^k - (t-1)^k\big).$$
Using the binomial expansion,
$$(t-1)^k = \sum_{i=0}^{k}\binom{k}{i} t^{k-i}(-1)^i = t^k + \sum_{i=1}^{k}\binom{k}{i} t^{k-i}(-1)^i,$$
so we can write $\nabla m_t$ as:
$$\nabla m_t = -\sum_{k=1}^{p} c_k \sum_{i=1}^{k}\binom{k}{i} t^{k-i}(-1)^i.$$
We can clearly see that the inner summation (the binomial expansion) starts at $i = 1$, so every surviving term has degree at most $k - 1 \le p - 1$, and the leading term (from $k = p$, $i = 1$) is $p\,c_p\, t^{p-1}$. Hence $\nabla m_t$ is a polynomial of degree $p-1$ in $t$. Applying the same argument repeatedly, $\nabla^{p} m_t$ is a constant, and therefore
$$\nabla^{p+1} m_t = 0.$$
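This is easy to see numerically in R (just an illustration; the polynomial coefficients are arbitrary): differencing a degree-3 polynomial three times leaves a constant, and four times leaves zeros.

t <- 1:20
m <- 3 - 2 * t + 0.5 * t^2 + t^3   # a polynomial of degree p = 3 with c_3 = 1
diff(m, differences = 3)           # constant: 3! * c_3 = 6
diff(m, differences = 4)           # all zeros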
1.11 a)
We can clearly see that, given the conditions on the filter,
$$\sum_{j=-q}^{q} a_j = 1 \qquad \text{and} \qquad \sum_{j=-q}^{q} j\, a_j = 0,$$
and therefore, for $m_t = c_0 + c_1 t$:
$$\sum_{j=-q}^{q} a_j\, m_{t-j} = \sum_{j=-q}^{q} a_j\big(c_0 + c_1(t - j)\big) = c_0\sum_{j=-q}^{q} a_j + c_1 t\sum_{j=-q}^{q} a_j - c_1\sum_{j=-q}^{q} j\,a_j = c_0 + c_1 t = m_t.$$
1.11 b)
We calculate, with $a_j = \frac{1}{2q+1}$ for $|j| \le q$:
$$E[A_t] = E\Big[\sum_{j=-q}^{q} a_j Z_{t-j}\Big] = \sum_{j=-q}^{q} a_j E[Z_{t-j}] = 0,$$
$$\mathrm{Var}(A_t) = \mathrm{Var}\Big(\sum_{j=-q}^{q} a_j Z_{t-j}\Big) = \sum_{j=-q}^{q} a_j^2\, \mathrm{Var}(Z_{t-j}) = \sigma^2 (2q+1)\Big(\frac{1}{2q+1}\Big)^2 = \frac{\sigma^2}{2q+1}.$$
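A quick simulation in R (an illustrative sketch; q, sigma and n are arbitrary choices) confirms that the smoothed noise $A_t$ has variance close to $\sigma^2/(2q+1)$:

set.seed(7)
q <- 3; sigma <- 2; n <- 1e5
z <- rnorm(n, sd = sigma)
a <- rep(1 / (2 * q + 1), 2 * q + 1)   # equal weights over 2q + 1 points
A <- stats::filter(z, a, sides = 2)    # A_t = sum_j a_j Z_{t-j} (NA at the ends)
var(A, na.rm = TRUE)                   # approximately sigma^2 / (2q + 1) = 4/7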
1.12 a)
With $m_t = \sum_{k=0}^{p} c_k t^k$ we can express, using the binomial expansion as in 1.10:
$$\sum_j a_j\, m_{t-j} = \sum_j a_j \sum_{k=0}^{p} c_k (t-j)^k = \sum_{k=0}^{p} c_k \sum_{i=0}^{k}\binom{k}{i} t^{k-i} \sum_j (-j)^i a_j.$$
From this expression we can see that this is going to be equal to $m_t$ if and only if
$$\sum_j a_j = 1 \qquad \text{and} \qquad \sum_j (-j)^i a_j = 0$$
for every $i = 1, \dots, p$.
1.12 b)
We use what was obtained in a), and in R we run the following code to check that
$$\sum_j a_j = 1 \qquad \text{and} \qquad \sum_j j^i a_j = 0 \quad \text{for } i = 1, 2, 3$$
for these coefficients (the Spencer 15-point filter):

j  <- -7:7
aj <- (1/320) * c(-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3)
sum(aj)          # coefficients sum to 1
## [1] 1
t(j) %*% aj      # first moment is 0
##      [,1]
## [1,]    0
t(j^2) %*% aj    # second moment is 0
##      [,1]
## [1,]    0
t(j^3) %*% aj    # third moment is 0
##      [,1]
## [1,]    0

The filter therefore passes polynomials of degree 3 without distortion.
1.14)
We use what was obtained in 1.12 a), and in R we run the following code to check that
$$\sum_j a_j = 1 \qquad \text{and} \qquad \sum_j j^i a_j = 0 \quad \text{for } i = 1, 2, 3:$$

j  <- -2:2
aj <- (1/9) * c(-1, 4, 3, 4, -1)
sum(aj)          # coefficients sum to 1
## [1] 1
t(j) %*% aj      # first moment is 0
##      [,1]
## [1,]    0
t(j^2) %*% aj    # second moment is 0
##      [,1]
## [1,]    0
t(j^3) %*% aj    # third moment is 0
##      [,1]
## [1,]    0

The filter therefore passes polynomials of degree 3 without distortion.
1.15 a)
Using the properties of the differencing operators and of the seasonality ($s_t = s_{t-12}$, and the linear trend is removed by $\nabla\nabla_{12}$), we can express:
$$Z_t = \nabla\nabla_{12} X_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13},$$
so $E[Z_t] = 0$, and since $Z_t$ is a fixed linear combination of lags of the stationary process $\{Y_t\}$, $\mathrm{Cov}(Z_t, Z_{t-h})$ is a combination of $\gamma_Y(h-13), \dots, \gamma_Y(h+13)$ that does not depend on $t$. Therefore $Z_t$ is weakly stationary.
1.15 b)
Similarly, $\nabla_{12}^2$ removes the term $(a + bt)s_t$, so
$$Z_t = \nabla_{12}^2 X_t = Y_t - 2Y_{t-12} + Y_{t-24}, \qquad E[Z_t] = 0,$$
$$\mathrm{Cov}(Z_t, Z_{t-h}) = \mathrm{Cov}(Y_t - 2Y_{t-12} + Y_{t-24},\, Y_{t-h} - 2Y_{t-h-12} + Y_{t-h-24}) = 6\gamma_Y(h) - 4\gamma_Y(h+12) - 4\gamma_Y(h-12) + \gamma_Y(h+24) + \gamma_Y(h-24).$$
We can see that neither the expected value nor the covariance depends on $t$, and therefore $Z_t$ is weakly stationary.
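A short R check (illustrative only; the trend, seasonal pattern, and noise model are arbitrary choices) confirms that $\nabla\nabla_{12}X_t$ reduces to $Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}$:

set.seed(99)
n <- 240
t <- 1:n
s <- rep(sin(2 * pi * (1:12) / 12), length.out = n)   # seasonal component, period 12
y <- arima.sim(list(ar = 0.5), n)                     # a stationary Y_t
x <- 2 + 0.1 * t + s + y                              # X_t = a + b t + s_t + Y_t

z1 <- diff(diff(x, lag = 12))                         # (1 - B)(1 - B^12) X_t
z2 <- diff(diff(y, lag = 12))                         # same operator applied to Y_t only
max(abs(z1 - z2))                                     # ~ 0: trend and seasonality removed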
1.18)
We show the resulting graphs of what is requested in the problem:
[Figure: ITSM time series plot of the data (Series)]
[Figure: second ITSM time series plot (Series)]
============================================
ITSM::(Tests of randomness on residuals)
============================================
Ljung - Box statistic = .20240E+03 Chi-Square ( 20 ), p-value = .00000
McLeod - Li statistic = .20508E+03 Chi-Square ( 20 ), p-value = .00000
# Turning points = 43.000~AN(46.667,sd = 3.5324), p-value = .29926
[Figure: third ITSM time series plot]