
Time Series Analysis Lecture 5

Langtian Ma, Yifei Feng, Yufei Cai, Zhuorang Liu


April 25, 2023

1 Linear Process and Convergence in L²


Definition 1 (White Noise) A random variable w is said to be white noise if
it has a probability distribution with zero mean and finite variance σ², which
is denoted by w ∼ WN(0, σ²).

Definition 2 (Linear Process) A linear process is a sequence of random
variables {X_t} such that

    X_t = Σ_{j=−∞}^{∞} ψ_j w_{t−j},

where the w_{t−j} ∼ WN(0, σ²) are mutually independent and ψ_j ∈ ℝ for all j ∈ ℤ.
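As a quick illustration (ours, not from the lecture), the following R sketch
simulates a truncated version of this two-sided sum with the hypothetical
choice ψ_j = 0.5^|j|, which is absolutely summable; the truncation point m and
the variable names are our own.

# Simulate a truncated linear process X_t = sum_{j=-m}^{m} psi_j w_{t-j}
set.seed(1)
m   <- 50                         # truncation point (the true sum is infinite)
n   <- 200                        # length of the simulated series
psi <- 0.5^abs(-m:m)              # absolutely summable weights psi_j = 0.5^|j|
w   <- rnorm(n + 2 * m)           # Gaussian white noise, WN(0, 1)
X   <- sapply(1:n, function(t) sum(psi * w[t:(t + 2 * m)]))
plot(X, type = "o", ylab = expression(X[t]))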

Definition 3 (Convergence in L²-norm) Let {X_n} be a sequence of random
variables. If there exists a random variable X such that

    lim_{n→∞} E[(X_n − X)²] = 0,

we say {X_n} converges in L² to X, and that {X_n} is convergent in L²,
denoted by X_n →_{L²} X.

Remark (Convergence in L^r norm) (Supplementary) More generally, we say
{X_n} converges in L^r norm to X if

    lim_{n→∞} E[|X_n − X|^r] = 0.

Lemma 1 (Cauchy's Condition) A sequence of random variables {X_n} is
convergent in L² iff

    lim_{n,m→∞} E[(X_n − X_m)²] = 0.

Lemma 2 (Markov's Inequality) Let X be a nonnegative random variable and
α > 0. Then

    P(X ⩾ α) ⩽ E[X]/α.

Lemma 3 (Jensen's Inequality, Probabilistic Form) Let X be a random variable
and φ a convex function. Then

    φ(E[X]) ⩽ E[φ(X)].

Corollary For a white noise w ∼ WN(0, σ²), applying Jensen's inequality with
φ(x) = x² gives

    E|w| = √((E|w|)²) ⩽ √(E[w²]) = √(σ²) = σ.    (1)
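A quick Monte Carlo sanity check of inequality (1) (our own illustration, not
part of the lecture): for Gaussian white noise E|w| = σ√(2/π) ≈ 0.798σ,
comfortably below the bound σ.

# Empirical check of E|w| <= sigma for w ~ N(0, sigma^2)
set.seed(1)
sigma <- 2
w <- rnorm(1e6, mean = 0, sd = sigma)
mean(abs(w))   # approx sigma * sqrt(2/pi) = 1.596
sigma          # the bound from inequality (1)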

Proposition 1 For a linear process {X_t},

    X_t = Σ_{j=−∞}^{∞} ψ_j w_{t−j},

with Σ_{j=−∞}^{∞} |ψ_j| < ∞, we define the partial sums

    S_n = Σ_{j=−n}^{n} ψ_j w_{t−j}.

Then we have:

1. |X_t| < ∞ a.s., i.e. P(|X_t| > α) → 0 as α → ∞.

2. {S_n} converges to some X_t in L².


Proof For 1:

    P(|X_t| > α) ⩽ (1/α) E|X_t|                             (Markov's inequality)
                 = (1/α) E|Σ_{j=−∞}^{∞} ψ_j w_{t−j}|
                 ⩽ (1/α) Σ_{j=−∞}^{∞} |ψ_j| E|w_{t−j}|      (triangle inequality)
                 ⩽ (σ/α) Σ_{j=−∞}^{∞} |ψ_j|                 (by inequality (1))
                 → 0 as α → ∞.

For 2:

    E[(S_m − S_n)²] = E[(Σ_{n⩽|j|⩽m} ψ_j w_{t−j})²]
                    = Σ_{n⩽|j|⩽m} ψ_j² E[w_{t−j}²]          (since E[w_i w_j] = 0 for i ≠ j)
                    = σ² Σ_{n⩽|j|⩽m} ψ_j²
                    ⩽ σ² (Σ_{n⩽|j|⩽m} |ψ_j|)² → 0 as m, n → ∞.

Then by Cauchy's condition (Lemma 1), {S_n} converges in L². □
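To see this convergence numerically, a sketch (ours, again with the
hypothetical weights ψ_j = 0.5^|j|) estimates E[(S_m − S_n)²] by simulation;
the estimate shrinks as n and m grow.

# Estimate E[(S_m - S_n)^2] for the partial sums of a linear process
set.seed(1)
tail_sum_sq <- function(n, m, reps = 1e4) {
  # S_m - S_n involves the weights with n < |j| <= m; here psi_j = 0.5^|j|
  j   <- c(-(m:(n + 1)), (n + 1):m)
  psi <- 0.5^abs(j)
  mean(replicate(reps, sum(psi * rnorm(length(j)))^2))
}
tail_sum_sq(5, 10)    # already small
tail_sum_sq(10, 20)   # smaller still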

2 AR(1) and Causality


Definition 4 A time series {X_t} is called an AR(1) model if it satisfies

    X_t = ϕX_{t−1} + w_t,    (2)

where ϕ is a parameter and the w_t ∼ WN(0, σ²) are mutually independent.

Example 1 A simulated AR(1) series with ϕ = 0.9 and w_t ∼ N(0, 1) i.i.d.
(Figure 1), and the plot of X_t versus X_{t−1} (Figure 2), where we can observe
an obvious linear relationship.

library("TSA")
data(ar1.s)
plot(ar1.s, ylab = expression(Y[t]), type = "o")
plot(y = ar1.s, x = zlag(ar1.s), ylab = expression(Y[t]),
     xlab = expression(Y[t-1]), type = "p")

Figure 1: The simulated AR(1) series.    Figure 2: X_t versus X_{t−1}.

Definition 5 (Back-shift Operator) For any time series {X_t}, the back-shift
operator B maps an element of the time series to the previous element, i.e.
BX_t = X_{t−1}. Iterating gives B^k X_t = X_{t−k} for k ⩾ 1.

Proposition 2 For the AR(1) model {X_t}:

1. If |ϕ| < 1, X_t = Σ_{j=0}^{∞} ϕ^j w_{t−j} is the unique weakly stationary
solution to equation (2).

2. E[X_t] = 0 and Cov(X_t, X_{t+h}) = ϕ^{|h|} σ² / (1 − ϕ²).
P∞
Proof For 1: since |ϕ| < 1, Σ_{j=0}^{∞} |ϕ^j| < ∞, so by Proposition 1 the
series defining X_t converges in L². Assume there exists another weakly
stationary solution {Y_t}. Iterating equation (2) gives
Y_t = Σ_{i=0}^{n−1} ϕ^i w_{t−i} + ϕ^n Y_{t−n}, so by weak stationarity

    lim_{n→∞} E[(Y_t − Σ_{i=0}^{n−1} ϕ^i w_{t−i})²] = lim_{n→∞} E[(ϕ^n Y_{t−n})²]
                                                    = lim_{n→∞} ϕ^{2n} E[Y_{t−n}²] = 0.

Hence Y_t is also the L² limit of the partial sums, i.e. Y_t = X_t, which
proves uniqueness.

For 2, this was proved in the previous lecture. □
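As a numerical check (our own, not from the lecture) of the covariance formula
in part 2, one can compare sample autocovariances of a long simulated AR(1)
path against γ(h) = ϕ^{|h|} σ²/(1 − ϕ²):

# Compare sample autocovariances of a simulated AR(1) with the theoretical
# gamma(h) = phi^|h| * sigma^2 / (1 - phi^2)
set.seed(1)
phi <- 0.9; sigma2 <- 1
x <- arima.sim(model = list(ar = phi), n = 1e5, sd = sqrt(sigma2))
h <- 0:5
theoretical <- phi^h * sigma2 / (1 - phi^2)
empirical   <- acf(x, lag.max = 5, type = "covariance", plot = FALSE)$acf[, 1, 1]
round(rbind(theoretical, empirical), 3)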

Example 2 Plot of the autocorrelation of X_t and X_{t−h} versus h with ϕ = 0.6
(Figure 3) and ϕ = −0.6 (Figure 4).

ACF <- ARMAacf(ar = 0.6, lag.max = 20)
plot(y = ACF[-1], x = 1:20, xlab = "Lag", ylab = "ACF", type = "h"); abline(h = 0)
ACF <- ARMAacf(ar = -0.6, lag.max = 20)
plot(y = ACF[-1], x = 1:20, xlab = "Lag", ylab = "ACF", type = "h"); abline(h = 0)

Figure 3: Theoretical ACF for ϕ = 0.6.    Figure 4: Theoretical ACF for ϕ = −0.6.
Remark 1 We can write

    w_t = X_t − ϕX_{t−1} = (1 − ϕB)X_t := ϕ(B)X_t,

    X_t = Σ_{j=0}^{∞} ϕ^j w_{t−j} = Σ_{j=0}^{∞} ϕ^j B^j w_t := π(B)w_t,

where ϕ(B) = 1 − ϕB and π(B) = Σ_{j=0}^{∞} ϕ^j B^j. Then we have

    ϕ(B)π(B) = 1.
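The identity ϕ(B)π(B) = 1 can be verified at the level of coefficients: the
polynomial product of (1, −ϕ) and (1, ϕ, ϕ², . . .) is (1, 0, 0, . . .) up to
truncation. A small R sketch (ours):

# Multiply phi(B) = 1 - phi*B by the truncated pi(B) = sum_{j=0}^{20} phi^j B^j;
# the product's coefficients should be 1, 0, ..., 0 plus one truncation remainder.
phi <- 0.9
pi_coef   <- phi^(0:20)                # coefficients of pi(B), truncated at j = 20
phi_coef  <- c(1, -phi)                # coefficients of phi(B)
prod_coef <- convolve(phi_coef, rev(pi_coef), type = "open")  # polynomial product
round(prod_coef, 10)                   # 1, 0, ..., 0, -phi^21 (the remainder)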

Definition 6 (Causality) A linear process {X_t} is said to be causal for {w_t}
if there exists

    ψ(B) = ψ_0 + ψ_1 B + ψ_2 B² + · · · = Σ_{j=0}^{∞} ψ_j B^j

with Σ_{j=0}^{∞} |ψ_j| < ∞ and X_t = ψ(B)w_t.

Remark Consider an AR(1) process {X_t} defined by ϕ(B)X_t = w_t. Then (see
the root-check sketch after this list):

1. If |ϕ| < 1, then {X_t} is causal.

2. If |ϕ| = 1, then formally X_t = Σ_{j=0}^{∞} w_{t−j}, whose variance is
infinite, so no weakly stationary solution exists.

3. If |ϕ| > 1, then we can write X_{t−1} = ϕ^{−1}X_t − ϕ^{−1}w_t, and iterating
gives X_t = −Σ_{j=1}^{∞} ϕ^{−j} w_{t+j}, which means X_t depends on future w_j,
and {X_t} is not causal.
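In practice one checks causality of an AR model via the roots of the AR
polynomial: the process is causal iff every root of ϕ(z) lies outside the unit
circle, which for AR(1) reduces to |ϕ| < 1. A small R sketch of this check
(our own illustration; the function name is hypothetical):

# Causality check for an AR model: phi(z) = 1 - phi_1 z - ... - phi_p z^p
# must have all of its roots strictly outside the unit circle.
is_causal_ar <- function(ar_coefs) {
  roots <- polyroot(c(1, -ar_coefs))   # coefficients in increasing order
  all(Mod(roots) > 1)
}
is_causal_ar(0.9)    # TRUE:  |phi| < 1, causal
is_causal_ar(1.2)    # FALSE: root 1/1.2 lies inside the unit circle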

3 Typical Applications of AR models


1. Economic forecasting: AR models are commonly used to forecast eco-
nomic variables such as stock prices, exchange rates, and interest rates,
and they are particularly useful for modeling variables with persistent
patterns or trends.

2. Signal processing: AR models are used in signal processing applications


such as audio and image processing. For example, AR models can be used
to model the spectral content of audio signals.
3. Climate modeling: AR models are used in climate modeling to under-
stand and predict long-term climate patterns, such as temperature and
precipitation.
