Econometrics Chapter 1 UNAV


ECONOMETRICS II AND QUANTITATIVE TECHNIQUES II

(TIME SERIES)

Dr. Luis Albériko GIL ALAÑA

CONTENT:

1. INTRODUCTION

2. AUTOREGRESSIONS AND MOVING AVERAGE PROCESSES

3. ESTIMATION

4. HYPOTHESIS TESTING

5. MODEL SELECTION

6. SEASONALITY

7. PREDICTION

8. NONSTATIONARY TIME SERIES

9. STRUCTURAL TIME SERIES

10. FREQUENCY DOMAIN AND LONG MEMORY PROCESSES

11. DYNAMIC MODELS

12. MULTIVARIATE TIME SERIES

13. IMPULSE RESPONSE FUNCTIONS

14. COINTEGRATION

15. ARCH MODELS

16. NONLINEAR TIME SERIES

17. ADDITIONAL TOPICS

Bibliography:

*. Box, G.E.P. and G.M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco (1970).

*. Novales, A., Econometría, McGraw-Hill, 2nd edition (1993).

*. Aznar, A. and F.J. Trivez, Métodos de predicción en Economía II. Análisis de series temporales, Ariel Economía (1993).

*. Uriel, E. and A. Peiró, Introducción al análisis de series temporales, Editorial AC (2000).

1. INTRODUCTION

Definition: TIME SERIES: a collection of observations recorded over time at equally spaced intervals.

Given a variable, “y”, we denote the time series as

y1, y2, y3, …, yT, or alternatively {yt, t = 1, 2, …, T}

Example: U.S. Consumer Price Index 1947m1 – 2004m12:

Time series analysis is not confined to Economics. In fact, it is also used in many other
areas such as Meteorology, Medicine, Biostatistics, Astronomy, Political Science, etc.

Objective of time series analysis:

Find a mathematical model to explain the behaviour of the time series, in order to

*. Predict future values of the series.

*. Accept or reject a given theory (based on economic, financial or other grounds).

Basic concepts:

We model the time series as a STOCHASTIC PROCESS. Each observation in a


stochastic process is a random variable and the observations evolve according to some
probabilistic laws.

Definition: Stochastic process: a collection of random variables ordered in time.

Time series: y1, y2, y3, …, yT
We treat each observation (data) as if it were a random variable, such that we identify
the time series with the stochastic process

Stochastic process: Y1, Y2, Y3, …, YT


that is, each random variable Yt has an underlying distribution function, and

yt → a single realization of the random variable Yt.

A common assumption in time series is that all the variables (that is, all the
observations) are i.i.d. (independent and identically distributed).

Given that we treat all the observations as random variables, we need to define their
moments:

*. E(Yt) = E(yt) = μt, t = 1, 2, …, T

*. Var(Yt) = Var(yt) = E[(yt – μt)²], t = 1, 2, …, T

*. Cov(Yt, Yt+k) = Cov(yt, yt+k) = E[(yt – μt)(yt+k – μt+k)] = γt,t+k.

However, given that in most cases we only have a single realization available for each
of the random variables, we need to impose some restrictions to make these concepts
feasible. The basic concept here is the one called STATIONARITY.

* Stationarity

A stochastic process (or a time series) is said to be stationary (in a weak sense or
stationary of order 2) if it satisfies the following three properties:

1. E(yt) = μ, for all t.

2. Var(yt) = E[(yt – μ)²] = γ0, for all t.

3. Cov(yt, yt+k) = E[(yt – μ)(yt+k – μ)] = γk, for all t.

In other words, the mean and the variance do not depend on time, and the covariance
between any two observations only depends on the distance between them, but not on
the specific location in time.

Thus, given the time series

y1, y2, y3, y4, y5, y6, y7, y8, y9, …, yT

under stationarity,

Cov(y1, y4) = Cov(y2, y5) = Cov(y4, y7) = … = Cov(y10, y13) = etc. = γ3.

Imposing stationarity allows us to obtain estimators such as

*. the sample mean:  ȳ = (1/T) Σ_{t=1}^{T} yt,

*. the sample variance:  γ̂0 = (1/T) Σ_{t=1}^{T} (yt – ȳ)²,

*. the sample autocovariances:  γ̂k = (1/T) Σ_{t=1}^{T–k} (yt – ȳ)(yt+k – ȳ).

If the stochastic process is stationary and ergodic, these estimators, ȳ, γ̂0 and γ̂k, will
be consistent estimators of μ, γ0 and γk. (That is, the estimators will tend to be more
precise as the sample size increases.)

Though we do not properly define ergodicity here, this property requires that observations
which are far apart in time tend to be uncorrelated. That is, a sufficient condition for
ergodicity is:

γk → 0 as k → ∞.
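As an illustration, the following Python sketch (not part of the original notes; the series y is simulated here purely for illustration) computes the sample mean, sample variance and sample autocovariances described above:

    import numpy as np

    def sample_moments(y, max_lag=10):
        """Sample mean, variance and autocovariances of a series y."""
        y = np.asarray(y, dtype=float)
        T = len(y)
        ybar = y.mean()                      # sample mean
        gamma = np.empty(max_lag + 1)
        for k in range(max_lag + 1):
            # gamma_hat(k) = (1/T) * sum_{t=1}^{T-k} (y_t - ybar)*(y_{t+k} - ybar)
            gamma[k] = np.sum((y[:T - k] - ybar) * (y[k:] - ybar)) / T
        return ybar, gamma[0], gamma         # mean, variance, autocovariances

    # Example: a simulated stationary series (Gaussian white noise)
    rng = np.random.default_rng(0)
    mean_hat, var_hat, gamma_hat = sample_moments(rng.standard_normal(500), max_lag=5)
    print(mean_hat, var_hat, gamma_hat)

Under stationarity and ergodicity, these quantities converge to μ, γ0 and γk as T grows.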

Note that:

Order 2 Stationarity = Covariance Stationarity = Stationarity in Weak Sense

There exists an additional type of stationarity, called “Stationarity in strict sense”. It


imposes that the joint probability distribution of {y1, y2, y3, y4, …, yT} is identical
to the joint probability distribution of {y1+c, y2+c, y3+c, y4+c, …, yT+c} for all T and c.
That is, the probability distribution is not affected by translations in time:

F(y1, y2, y3, y4, …, yT) = F(y1+c, y2+c, y3+c, y4+c, …, yT+c) for all T and c,

where F represents the distribution function of probabilities.

Note that while the definition of weak stationarity is based on moments, the definition
of stationarity in strict sense is based on the distribution function.

Thus, the two definitions are independent and the only relation between them is given
by:

Stationarity in strict sense + existence of the 1st and 2nd order moments → Weak stationarity (stationarity of order 2)

Throughout this course we mainly work with stationary (in the weak sense, or of order 2)
series. However, most economic time series are not stationary, and we will have to
transform them to render them stationary.

*. Examples of non-stationary time series:

[Figure: U.S. Consumer Price Index (monthly data).]

[Figure: Imaginary series (200 observations).]

[Figure: Index of the Hong Kong (Hang Seng) stock market (daily data, 7 Jan 1986 – 31 Dec 1997).]

The simplest example of a stationary process is the WHITE NOISE process. Generally,
it is denoted as εt.

*. White Noise process: εt:

E(εt) = 0 for all t,

Var(εt) = σ² for all t,

Cov(εt, εt+k) = 0 for all t, and for all k ≠ 0.

Examples of White Noise processes


[Figure: simulated white noise series with T = 100 observations (left) and T = 1000 observations (right).]

Throughout the course we will focus on stationary time series.

If the time series is nonstationary, we will find a way to transform it into another
series that is stationary. Once the new series is found to be stationary, we will
look for the mathematical model that best describes its behaviour, such that the part
that is left unmodelled is precisely a white noise process.

xt is the original series (let us suppose it is nonstationary),

yt is the transformed series, assumed to be stationary.

Then,

yt = f(·) + εt,

where f is the mathematical function that explains the series and εt is a white noise
process.

*. Example of a stationary stochastic process

yt = εt + θ εt-1, t = 1, 2, …,   where εt is a white noise process.

It can be proved that (Exercise 1.1):

E(yt) = 0 for all t,

Var(yt) = σ² (1 + θ²) for all t,

Cov(yt, yt+1) = σ² θ for all t,

Cov(yt, yt+2) = 0 for all t,

and, in general,

Cov(yt, yt+k) = 0 for all t and all k ≥ 2.

Thus, we observe that the mean, the variance, and the covariance structure do not
depend on time, so the series yt is stationary.
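A short simulation (a sketch, not from the notes; the values of θ and σ² below are arbitrary) can be used to check these theoretical moments against their sample counterparts:

    import numpy as np

    theta, sigma2, T = 0.5, 1.0, 200_000          # illustrative parameter values
    rng = np.random.default_rng(1)
    eps = rng.normal(scale=np.sqrt(sigma2), size=T + 1)
    y = eps[1:] + theta * eps[:-1]                # y_t = eps_t + theta*eps_{t-1}

    print("mean     :", y.mean(), "(theory: 0)")
    print("variance :", y.var(), "(theory:", sigma2 * (1 + theta**2), ")")
    c1 = np.mean((y[:-1] - y.mean()) * (y[1:] - y.mean()))   # lag-1 autocovariance
    c2 = np.mean((y[:-2] - y.mean()) * (y[2:] - y.mean()))   # lag-2 autocovariance
    print("cov lag 1:", c1, "(theory:", sigma2 * theta, ")")
    print("cov lag 2:", c2, "(theory: 0)")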

*. Autocovariance function and Autocorrelation function

If yt is stationary,

Cov(yt, yt+k) = γk = γ(k) ≡ autocovariance function

It can be easily proved that (Exercise 1.2):

γk = γ(k) = γ(-k) = γ-k .

That is, the autocovariance function is symmetric, so we just focus on values of k ≥ 0.

Similarly, we define the autocorrelation function

ρk = γk / γ0.

And we can show that (Exercise 1.2)

ρk = ρ-k.

Again, we just focus on values k ≥ 0. You can easily see that ρ0 = 1.

In the previous example (yt = εt + θ εt-1), we get the following values for the
autocorrelation structure (Exercise 1.3):

ρ1 = θ / (1 + θ²),   and ρk = 0 for all k ≥ 2.

*. (Sample) autocorrelation function

We defined above the following sample concepts (based on data):

ȳ = (1/T) Σ_{t=1}^{T} yt   (sample mean)

γ̂0 = (1/T) Σ_{t=1}^{T} (yt – ȳ)²   (sample variance)

γ̂k = (1/T) Σ_{t=1}^{T–k} (yt – ȳ)(yt+k – ȳ)   (sample covariance).

Similarly, we can define the sample autocorrelation function,

rk = γ̂k / γ̂0,   (1.1)

and the graph that relates rk with k is called the CORRELOGRAM.

Thus, we have the sample autocorrelation function (correlogram) → based on data

and the theoretical autocorrelation function → based on a theoretical model

Example:

Suppose we have a theoretical model that is a white noise process (εt). Its theoretical
autocorrelation function will be given by the left-hand panel below. Now suppose that we
have data with no structure and we compute its correlogram (the sample autocorrelation
function (1.1)). The values here (right-hand panel) are similar to those given by the
white noise model, and therefore we can conclude that the time series follows a white
noise process (yt = εt).

[Figure: theoretical autocorrelation function of a white noise process (left) and correlogram based on data (right), for lags k = 1, 2, …, 10.]

The correlogram tries to reproduce the theoretical autocorrelation function, but it need
not coincide with it. The correlogram will be the instrument employed to identify the model.
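The sample autocorrelation function (1.1) is straightforward to compute; the sketch below is an illustrative implementation (not code from the notes). For a white noise sample, the values for k ≥ 1 should be close to 0:

    import numpy as np

    def correlogram(y, max_lag=10):
        """Sample autocorrelation function r_k = gamma_hat_k / gamma_hat_0."""
        y = np.asarray(y, dtype=float)
        T, ybar = len(y), y.mean()
        gamma0 = np.sum((y - ybar) ** 2) / T
        r = np.empty(max_lag + 1)
        for k in range(max_lag + 1):
            r[k] = np.sum((y[:T - k] - ybar) * (y[k:] - ybar)) / T / gamma0
        return r

    rng = np.random.default_rng(2)
    print(correlogram(rng.standard_normal(1000)))   # approx. [1, 0, 0, ...]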

Required notation in the following chapters:

L: Lag-operator: Lyt = yt-1

L^k yt = yt-k

∆: First differences: ∆ = (1 – L)

∆ yt = (1 – L) yt = yt - yt-1

2. ARMA PROCESSES

This chapter deals with two types of processes:

1. AutoRegressive Processes (AR)

2. Moving Average Processes (MA)

The combination of the two processes leads to a new process called ARMA
(AutoRegressive + Moving Average).

Before starting with these processes, we state a fundamental theorem in Time Series
Analysis:

*. Wold Decomposition Theorem

“Any stationary process of order 2, yt, can be uniquely represented as the sum of two
mutually uncorrelated processes, vt and wt, such that:

yt = vt + wt,

where vt is a deterministic process, and wt is a stochastic process such that

wt = εt + ψ1 εt-1 + ψ2 εt-2 + … = Σ_{j=0}^{∞} ψj εt-j,   with ψ0 = 1 and Σ_{j=0}^{∞} ψj² < ∞,

and where εt is a white noise process:

E(εt) = 0 for all t,

Var(εt) = σ² for all t,

Cov(εt, εt+k) = 0 for all t, and k ≠ 0.”

In general (and in particular, in this chapter), we focus on the stochastic part (wt).

ARMA PROCESSES

*. Autoregressive processes (AR):

A process yt is said to be autoregressive of order 1 (and denoted by AR(1)) if:

yt = φ yt-1 + εt,   (2.1)

where εt is a white noise process.

Given (2.1), we can operate recursively such that:

yt-1 = φ yt-2 + εt-1,   (2.2)

and substituting (2.2) in (2.1), we get

yt = φ² yt-2 + φ εt-1 + εt.   (2.3)

Again, using (2.1),

yt-2 = φ yt-3 + εt-2,   (2.4)

and substituting (2.4) in (2.3):

yt = φ³ yt-3 + φ² εt-2 + φ εt-1 + εt,

and we can go on further such that (after n substitutions)

yt = φ^(n+1) yt-n-1 + εt + φ εt-1 + φ² εt-2 + … + φ^n εt-n,

or

yt = φ^(n+1) yt-n-1 + Σ_{j=0}^{n} φ^j εt-j.

If we impose │φ│ < 1 in (2.1), and make n → ∞, the previous expression becomes

yt = Σ_{j=0}^{∞} φ^j εt-j = εt + φ εt-1 + φ² εt-2 + …,   (2.5)

since if │φ│ < 1, then φ^(n+1) → 0 as n → ∞.

As a conclusion, we have shown that an autoregressive process of order 1 such as (2.1),
if we impose the condition │φ│ < 1, can be expressed as (2.5).

An alternative way of obtaining the same result is to use the lag operator. Starting
again with equation (2.1),

yt = φ yt-1 + εt,

it can be represented as

yt – φ L yt = εt,

or

(1 – φL) yt = εt   →   yt = (1 – φL)^(-1) εt.   (2.6)

Next, we use the following mathematical result: if │x│ < 1,

1 / (1 – x) = 1 + x + x² + x³ + …,

such that, if │φ│ < 1,

(1 – φL)^(-1) = 1 + φL + φ²L² + φ³L³ + …   (2.7)

Substituting (2.7) in (2.6) we get:

yt = εt + φ εt-1 + φ² εt-2 + φ³ εt-3 + …,

obtaining the same result as in (2.5).

*. A necessary condition for an AR(1) process to be stationary is │φ│ < 1,

and then (2.1) can be expressed as (2.5).
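The equivalence between the AR(1) recursion (2.1) and the MA(∞) representation (2.5) when │φ│ < 1 can be checked numerically; the sketch below uses an assumed value φ = 0.7 and a truncation point chosen so that φ^J is negligible:

    import numpy as np

    phi, T, J = 0.7, 300, 50                  # illustrative coefficient and truncation
    rng = np.random.default_rng(3)
    eps = rng.standard_normal(T)

    # (2.1): simulate the recursion y_t = phi*y_{t-1} + eps_t, starting from eps_0
    y_rec = np.empty(T)
    y_rec[0] = eps[0]
    for t in range(1, T):
        y_rec[t] = phi * y_rec[t - 1] + eps[t]

    # (2.5): truncated MA(infinity) representation y_t = sum_j phi^j * eps_{t-j}
    y_ma = np.array([sum(phi**j * eps[t - j] for j in range(min(J, t) + 1))
                     for t in range(T)])

    print(np.max(np.abs(y_rec - y_ma)))       # ~1e-7: the two representations coincide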

Next, we examine the statistical properties of the AR(1) processes.

Assuming that the AR(1) process is stationary, we start with the mean:

E(yt) = φ E(yt-1) + E(εt).

Noting that in stationary processes the mean is constant (μ), and that E(εt) = 0, we get

μ = φ μ   →   μ (1 – φ) = 0   →   μ = 0.

Next, we look at the variance:

Var(yt) = φ² Var(yt-1) + Var(εt) + 2 φ Cov(yt-1, εt) = φ² Var(yt-1) + σ² + 0,   (2.8)

given that εt is white noise and uncorrelated with yt-1. Again, given that in stationary
processes the variance of yt is constant (and we denote it by γ0), looking at (2.8) we get:

γ0 = φ² γ0 + σ²   →   γ0 = σ² / (1 – φ²).

Now, we look at the autocovariances:

γ1 = Cov(yt, yt+1) = E(yt yt+1) = E[yt (φ yt + εt+1)] = φ E(yt²) + E(yt εt+1).   (2.9)

The last term in (2.9) is 0, given that a stationary AR(1) process depends exclusively on
εt, εt-1, εt-2, …, and thus yt is uncorrelated with εt+1. Then, we obtain that:

γ1 = φ γ0.

Similarly, it is easy to prove that (Exercise 1.3)

γ2 = φ² γ0.

For a general k, we have that

γk = φ^k γ0.

Once the autocovariance structure is obtained, we can compute the autocorrelation
function:

ρ1 = γ1 / γ0 = φ,   ρ2 = γ2 / γ0 = φ²,

and, in general,

ρk = φ^k,

and we see that the decay in the autocorrelation function is exponential.

We show now some graphics of theoretical autocorrelation functions for different


AR(1) processes.

We first suppose that φ is positive. If its value is close to 1 (e.g., φ = 0.9), the
decay in the autocorrelation function will be slow. On the contrary, if φ is close to 0
(e.g., φ = 0.2), the decay is very fast, and the values quickly become close to 0. On the
other hand, given that φ is positive, all the values in the autocorrelation function are
positive.

If, on the contrary, φ is negative, we get the same exponential decay towards 0,
depending on the magnitude of φ; however, the decay oscillates between positive and
negative values.
values.

Autocorrelation functions for different AR(1) processes

[Figure: theoretical autocorrelation functions of AR(1) processes with φ = 0.9, φ = 0.2, and a negative φ, for lags k = 1 to 20.]

The previous plots correspond to theoretical autocorrelation functions. If we are working
with real time series data, we compute the sample autocorrelation function and plot the
correlogram; if it is similar to one of the graphs above, we can argue that the time
series follows an AR(1) process.
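The exponential decay ρk = φ^k, and the sign oscillation when φ < 0, can be tabulated directly; the sketch below is illustrative only (the values of φ are arbitrary, not taken from the notes):

    import numpy as np

    lags = np.arange(1, 11)
    for phi in (0.9, 0.2, -0.7):              # illustrative AR(1) coefficients
        rho = phi ** lags                     # theoretical ACF: rho_k = phi^k
        print(f"phi = {phi:+.1f}:", np.round(rho, 3))
    # phi = 0.9 decays slowly, phi = 0.2 decays very fast,
    # and phi = -0.7 decays while alternating in sign.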

Later on, we will study higher order autoregressive (AR) processes, e.g., AR(2), AR(3)
and in general AR(p) processes.

*. Moving Average (MA) processes:

A process yt is said to be a moving average of order 1 (and denoted by MA(1)) if:

yt = εt + θ εt-1,   (2.10)

where εt is a white noise process.

MA processes are ALWAYS stationary.

Again, we look at its statistical properties. Given (2.10),

E(yt) = 0,   and   Var(yt) = γ0 = σ² (1 + θ²).

We look now at the autocovariance structure:

γ1 = Cov(yt, yt+1) = E[(εt + θ εt-1)(εt+1 + θ εt)] = θ σ².

Similarly, it can be proved that

γ2 = 0,

and, in fact, for any value k > 1,

γk = 0.

Thus, the autocorrelation function of an MA(1) process will be given by:

ρ1 = θ / (1 + θ²),   and ρk = 0 for all k > 1.

Finally, its graphical representation will consist of a single nonzero value at lag 1
(positive or negative), depending on the sign of θ.

Autocorrelation functions for different MA(1) processes

  0.2

0,6

0,5

0,4

0,3

0,2

0,1

0
1 3 5 7 9 11 13 15 17 19

17
Once more, if we are working with data, and the correlogram shows a plot similar to
one of these graphics, we can conclude that the time series follows a MA(1) process.

We have said before that MA(1) processes (and, in general, all MA processes) are always
stationary. Sometimes it is desirable for the process to be invertible, meaning that the
process admits an AR(∞) representation. This requires that │θ│ < 1 (invertibility
condition). Then, it can be shown that

yt = (1 + θL) εt   →   εt = (1 + θL)^(-1) yt,   (2.11)

and given that │θ│ < 1:

(1 + θL)^(-1) = 1 – θL + θ²L² – θ³L³ + …,

so (2.11) becomes:

εt = yt – θ yt-1 + θ² yt-2 – θ³ yt-3 + …,

or

yt = θ yt-1 – θ² yt-2 + θ³ yt-3 – … + εt,

which is an AR(∞) process with a particular structure on the coefficients.
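As a numerical illustration of invertibility (a sketch with an assumed θ = 0.6, not an example from the notes), the white noise εt can be recovered, up to a tiny truncation error, from the current and past values of yt using the AR(∞) coefficients –(–θ)^j:

    import numpy as np

    theta, T, J = 0.6, 5_000, 40              # assumed MA(1) parameter; J = truncation lag
    rng = np.random.default_rng(4)
    eps = rng.standard_normal(T)
    y = eps.copy()
    y[1:] += theta * eps[:-1]                 # y_t = eps_t + theta*eps_{t-1}

    # AR(infinity) form: eps_t = y_t - theta*y_{t-1} + theta^2*y_{t-2} - ... (truncated at J)
    eps_hat = np.empty(T - J - 1)
    for i, t in enumerate(range(J + 1, T)):
        eps_hat[i] = y[t] - sum((-(-theta) ** j) * y[t - j] for j in range(1, J + 1))

    print(np.max(np.abs(eps_hat - eps[J + 1:])))   # ~1e-8: eps recovered from past y's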

*. AR(2) processes

A process yt is said to be autoregressive of order 2 (and denoted by AR(2)) if:

yt = φ1 yt-1 + φ2 yt-2 + εt,   (2.12)

or, using the lag operator,

(1 – φ1 L – φ2 L²) yt = εt.   (2.13)

Next, we look at the conditions that are required for an AR(2) process to be stationary.

A simple way consists of re-writing the polynomial in (2.13) as a function of x instead
of L, and setting it equal to 0, that is,

1 – φ1 x – φ2 x² = 0.

Next, if we invert it (in the sense of reversing the order of the coefficients) and
substitute x by z, we get the following equation:

z² – φ1 z – φ2 = 0,

which is a second-order equation (in standard form), and we can compute its roots, z1
and z2,

z1, z2 = [φ1 ± √(φ1² + 4 φ2)] / 2.

The stationarity condition is then:

│z1│ < 1 and │z2│ < 1.   (2.14)

In other words, the stationarity condition for AR(2) processes is that “the roots of the
‘inverted’ polynomial must both be smaller than 1 in absolute value”.

Example:

Determine if the following AR(2) process is stationary or not:

(2.15)

Re-writing the model as a function of the lag-operator we get:

and writing the polynomial in terms of x and setting it equal to 0:

The inverted polynomial as a function of z is:

whose roots are 1.10 and –0.89.

Given that one of the roots (1.10) is above 1 in absolute value, the process given by
(2.15) is non-stationary.
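This check is easy to automate: given tentative AR(2) coefficients (the values below are hypothetical, since the coefficients of (2.15) are not reproduced here), we compute the roots of z² – φ1 z – φ2 = 0 and verify whether both lie inside the unit circle:

    import numpy as np

    def ar2_is_stationary(phi1, phi2):
        """Roots of z^2 - phi1*z - phi2 = 0 and the stationarity verdict."""
        roots = np.roots([1.0, -phi1, -phi2])     # polynomial coefficients in z
        return roots, bool(np.all(np.abs(roots) < 1))

    # Hypothetical examples (not the coefficients used in the notes):
    print(ar2_is_stationary(0.5, 0.3))   # both roots inside the unit circle -> stationary
    print(ar2_is_stationary(0.2, 0.9))   # one root above 1 in modulus -> non-stationary

Note that np.abs returns the modulus, so the same check also covers the case of complex roots.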

----------------------------------------------------------------------------------------------------------

We have just said that the stationarity condition for AR(2) processes is that the roots
must be such that

│z1│ < 1 and │z2│ < 1.

Therefore, we can distinguish three different cases:

a.) if φ1² + 4 φ2 > 0, we obtain two real and distinct roots;

b.) if φ1² + 4 φ2 = 0, we obtain two real and identical roots (z1 = z2 = φ1 / 2);

c.) if φ1² + 4 φ2 < 0, we obtain two complex (conjugate) and distinct roots.

In the latter case, the roots adopt the form:

z1 = a + b i,   z2 = a – b i,

and the stationarity condition here states that the modulus of the roots must be smaller
than 1:

│z1│ = │z2│ = √(a² + b²) < 1.

It is easy to prove that:

1 – φ1 L – φ2 L² = (1 – z1 L)(1 – z2 L),   with z1 + z2 = φ1 and z1 z2 = –φ2.   (2.16)

Using this last relationship (2.16), yt in (2.12) can also be expressed in the following
way:

(1 – z1 L)(1 – z2 L) yt = εt,

and if │z1│ < 1 and │z2│ < 1, we obtain that

yt = (1 – z1 L)^(-1) (1 – z2 L)^(-1) εt,

which can be expressed as

yt = (1 + z1 L + z1² L² + …)(1 + z2 L + z2² L² + …) εt.

Then, yt in (2.12) can be re-written as

yt = Σ_{j=0}^{∞} ψj εt-j,

where

ψ0 = 1,   ψ1 = z1 + z2,   ψ2 = z1² + z1 z2 + z2²,   and, in general, ψj = Σ_{i=0}^{j} z1^i z2^(j-i).

Example:

First we check if this process is stationary or not.

The inverted polynomial is:

and the roots are 0.2 and –0.4.

Given that the two roots are smaller than 1 in absolute value, the process is stationary.

Furthermore, we know that

Therefore, the process can be re-written as

such that

Next we look at an alternative way to determine if an AR(2) process is stationary or not.
For this purpose we use now the coefficients of the process. In particular, an AR(2)
process is stationary if the coefficients in the model satisfy the following three
properties:

1. φ1 + φ2 < 1,

2. φ2 – φ1 < 1,

3. │φ2│ < 1.

If one of these conditions is not satisfied, the process will not be stationary.

These conditions can be represented graphically,

Thus, all the values of φ1 and φ2 that lie within the triangle imply stationarity.
Moreover, if we draw, within the triangle, the curve φ1² + 4 φ2 = 0, we can also
distinguish the real roots from the complex ones.

Finally, we look at the statistical properties of the stationary AR(2) processes:

E(yt) = φ1 E(yt-1) + φ2 E(yt-2) + E(εt),

and given that the mean (μ) is supposed to be constant,

μ = φ1 μ + φ2 μ   →   μ (1 – φ1 – φ2) = 0   →   μ = 0.

Now we look at the variance:

Var(yt) = φ1² Var(yt-1) + φ2² Var(yt-2) + Var(εt) + 2 φ1 φ2 Cov(yt-1, yt-2) + 2 φ1 Cov(yt-1, εt) + 2 φ2 Cov(yt-2, εt).

Given that in all stationary processes Cov(yt-1, εt) = 0 and, similarly, Cov(yt-2, εt) = 0,
the last two terms in the previous expression are 0. Moreover, taking into account that
Cov(yt-1, yt-2) = γ1, and given that the variance (γ0) is constant, we obtain

γ0 = φ1² γ0 + φ2² γ0 + 2 φ1 φ2 γ1 + σ².

Next we look at the autocovariance structure:

γ1 = E(yt yt-1) = φ1 γ0 + φ2 γ1 + E(εt yt-1),

and given that E(εt yt-1) = 0, we get

γ1 = φ1 γ0 + φ2 γ1   →   γ1 = φ1 γ0 / (1 – φ2);

γ2 = E(yt yt-2) = φ1 γ1 + φ2 γ0 + E(εt yt-2),

and given that E(εt yt-2) = 0, we obtain

γ2 = φ1 γ1 + φ2 γ0.

Similarly we can calculate γ3,

γ3 = φ1 γ2 + φ2 γ1,

and, in general,

γk = φ1 γk-1 + φ2 γk-2.   (2.17)

Dividing (2.17) by γ0, we obtain the structure of the autocorrelations,

ρk = φ1 ρk-1 + φ2 ρk-2,   (2.18)

such that

ρ1 = φ1 / (1 – φ2),   ρ2 = φ1 ρ1 + φ2,   ρ3 = φ1 ρ2 + φ2 ρ1,   etc.

These values can be obtained from (2.18), taking into account that ρk = ρ-k and ρ0 = 1.

The graphical representation of the autocorrelation function in AR(2) processes is very


sensitive to the characterization of the coefficients of the process. Let’s see some
examples:

Autocorrelation functions for some AR(2) processes

*. MA(2) processes

A process yt is said to be a moving average of order 2 (and denoted by MA(2)) if:

yt = εt + θ1 εt-1 + θ2 εt-2,   (2.19)

or

yt = (1 + θ1 L + θ2 L²) εt.

The MA processes are always stationary.

It can be easily proved that

E(yt) = 0,   γ0 = σ² (1 + θ1² + θ2²),   γ1 = σ² (θ1 + θ1 θ2),   γ2 = σ² θ2,   and γs = 0 for all s > 2.

Therefore, the autocorrelation function is given by:

ρ1 = (θ1 + θ1 θ2) / (1 + θ1² + θ2²),   ρ2 = θ2 / (1 + θ1² + θ2²),   and ρs = 0 for all s > 2.

Thus, the autocorrelation function of MA(2) processes is null for lags higher than 2.
Let’s see some examples in graphical terms:

Autocorrelation functions of MA(2) processes

*. ARMA(1, 1) processes

A process yt is ARMA(1, 1) if:

yt = φ yt-1 + εt + θ εt-1,   (2.20)

or

(1 – φL) yt = (1 + θL) εt.

The ARMA(1, 1) processes are:

stationary if │φ│ < 1,

and invertible if │θ│ < 1.

Next we look at the statistical properties of the stationary ARMA(1, 1) process.

Given (2.20):

E(yt) = φ E(yt-1) + E(εt) + θ E(εt-1)   →   μ = φ μ   →   μ = 0.

Again, E(yt-1 εt) = 0 and E(yt εt) = σ². We focus first on the variance and the first
autocovariance:

γ0 = E(yt²) = φ γ1 + σ² + θ E(yt εt-1) = φ γ1 + σ² + θ (φ + θ) σ²,

γ1 = E(yt yt-1) = φ γ0 + θ σ².

Taking into account these two equations, we get that:

γ0 = σ² (1 + θ² + 2 φ θ) / (1 – φ²),   γ1 = σ² (1 + φθ)(φ + θ) / (1 – φ²).

Similarly,

γ2 = φ γ1,

and, in general, for all s > 1, we have that

γs = φ γs-1.

The autocorrelation structure is then given by:

ρ1 = (1 + φθ)(φ + θ) / (1 + θ² + 2 φ θ),

and, in general, for all s > 1,

ρs = φ ρs-1.

Thus, for s > 1, we get the same structure as in purely AR(1) processes. Generalizing all
the previous processes, we have

*. AR(p) processes

A process yt is autoregressive of order p (and denoted by AR(p)) if:

yt = φ1 yt-1 + φ2 yt-2 + … + φp yt-p + εt,   (2.21)

or

φ(L) yt = εt,

where

φ(L) = 1 – φ1 L – φ2 L² – … – φp L^p.

An AR(p) process is stationary if the roots of the polynomial

z^p – φ1 z^(p-1) – φ2 z^(p-2) – … – φp = 0

are smaller than 1 in absolute value. That is,

│zi│ < 1, for i = 1, 2, …, p.

Next we look at the autocovariance and the autocorrelation structures. Given (2.21):

E(yt) = 0.

Instead of looking at the variance (γ0), we look at the autocovariances for a given value
k. For this purpose, we first multiply both sides of (2.21) by yt-k, such that

yt yt-k = φ1 yt-1 yt-k + φ2 yt-2 yt-k + … + φp yt-p yt-k + εt yt-k,

and next we take mathematical expectations, E(·), on both sides of the previous equality,
that is,

E(yt yt-k) = φ1 E(yt-1 yt-k) + … + φp E(yt-p yt-k) + E(εt yt-k).

Taking into account that the means are 0, and given that the last term is 0 (for k > 0),
we get

γk = φ1 γk-1 + φ2 γk-2 + … + φp γk-p.   (2.22)

Dividing (2.22) by γ0,

ρk = φ1 ρk-1 + φ2 ρk-2 + … + φp ρk-p,

and writing this expression for k = 1, 2, 3, …, p:

ρ1 = φ1 + φ2 ρ1 + … + φp ρp-1
ρ2 = φ1 ρ1 + φ2 + … + φp ρp-2
...
ρp = φ1 ρp-1 + φ2 ρp-2 + … + φp.

That is, we have p equations in p unknowns (ρ1, …, ρp), which is sufficient to calculate
the autocorrelation function. This system of equations is known as the YULE-WALKER
equations.
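For given AR coefficients, the Yule-Walker system can be solved numerically for (ρ1, …, ρp) and then extended with (2.22); the sketch below uses assumed AR(2) coefficients for illustration (it also assumes max_lag ≥ p):

    import numpy as np

    def ar_acf(phi, max_lag=10):
        """Theoretical autocorrelations of a stationary AR(p) via the Yule-Walker equations."""
        phi = np.asarray(phi, dtype=float)
        p = len(phi)
        A = np.zeros((p, p))
        b = np.zeros(p)
        for k in range(1, p + 1):             # equation: rho_k = sum_j phi_j * rho_{k-j}
            b[k - 1] = phi[k - 1]             # term in rho_0 = 1 (j = k)
            for j in range(1, p + 1):
                m = abs(k - j)                # use rho_{k-j} = rho_{|k-j|}
                if m >= 1:
                    A[k - 1, m - 1] -= phi[j - 1]
            A[k - 1, k - 1] += 1.0
        rho = np.ones(max_lag + 1)
        rho[1:p + 1] = np.linalg.solve(A, b)  # rho_1, ..., rho_p
        for k in range(p + 1, max_lag + 1):   # extend with rho_k = sum_j phi_j * rho_{k-j}
            rho[k] = phi @ rho[k - p:k][::-1]
        return rho

    print(np.round(ar_acf([0.5, 0.3], max_lag=8), 3))   # assumed AR(2) coefficients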

*. MA(q) processes

A process yt is a moving average of order q (and denoted by MA(q)) if:

yt = εt + θ1 εt-1 + θ2 εt-2 + … + θq εt-q,

or

yt = θ(L) εt,

where

θ(L) = 1 + θ1 L + θ2 L² + … + θq L^q.

Remember once more that all MA processes are always stationary.

An MA(q) process is invertible if the roots of the polynomial

z^q + θ1 z^(q-1) + θ2 z^(q-2) + … + θq = 0

are smaller than 1 in absolute value. That is,

│zi│ < 1, for i = 1, 2, …, q.

The autocovariance structure is given by γk = E(yt yt+k), and it can easily be proved
that:

γ0 = σ² (1 + θ1² + θ2² + … + θq²),

γk = σ² (θk + θ1 θk+1 + θ2 θk+2 + … + θq-k θq)   if k = 1, 2, …, q,

γk = 0   if k > q.

Thus, the autocorrelation function is given by:

ρ0 = 1,

ρk = (θk + θ1 θk+1 + … + θq-k θq) / (1 + θ1² + … + θq²)   if k = 1, 2, …, q,

ρk = 0   if k > q.

Thus, the autocorrelation function of an MA(q) process is null for lags above q. This is
a very useful result: when we compute the correlogram of a given time series, if it is
close to 0 from a given lag onwards, we can easily identify the series as an MA process.
For instance, if the correlogram looks like the graph on the left (Series 1), we can
suggest that the model behind the data is an MA(5), while the correlogram on the
right-hand side might correspond to an MA(2) process.

Correlograms corresponding to imaginary time series

[Figure: correlograms of Series 1 (left) and Series 2 (right).]

Finally, we look at the most general case:

*. ARMA(p, q) process

A process yt is ARMA(p, q) if:

yt = φ1 yt-1 + … + φp yt-p + εt + θ1 εt-1 + … + θq εt-q,

or

φ(L) yt = θ(L) εt.

An ARMA(p, q) process is stationary if the p roots of the polynomial

z^p – φ1 z^(p-1) – … – φp = 0

are smaller than 1 in absolute value. Similarly, the process will be invertible if the q
roots of the polynomial

z^q + θ1 z^(q-1) + … + θq = 0

are smaller than 1 in absolute value.

Given the stationary ARMA(p, q) process, it can be proved that E(yt) = 0 and that, for
values k > q, the autocovariances satisfy

γk = φ1 γk-1 + φ2 γk-2 + … + φp γk-p,

such that, dividing by γ0, for values k > q we obtain

ρk = φ1 ρk-1 + φ2 ρk-2 + … + φp ρk-p,   k = q + 1, q + 2, …

That is, for lags above q, the autocorrelation function does not take into account the
contribution from the MA part, and thus, its behaviour from lag q is the same as in an
AR(p) process.

*. Additional concepts in Chapter 2

Appendix 1: Partial correlogram

We have seen before that it is quite easy to detect MA(q) processes from the correlogram.
In fact, it suffices to observe from which lag onwards the values are close to 0. Thus,
for example, if the values in the correlogram are approximately 0 at lag 4 and at all the
following lags, the time series may be described as an MA(3) process.

However, detecting AR(p) processes from the correlogram is not an easy task, since the
autocorrelation function changes substantially depending on the parameters of the model.
Here, we can use another instrument, called the partial correlogram.

We first consider the following processes:

yt = φ11 yt-1 + εt

yt = φ21 yt-1 + φ22 yt-2 + εt

yt = φ31 yt-1 + φ32 yt-2 + φ33 yt-3 + εt

and successively up to

yt = φk1 yt-1 + φk2 yt-2 + … + φkk yt-k + εt.

As you can see, all of them are AR processes whose orders increase from one process to
the next. The succession of values

φ11, φ22, φ33, …, φkk

is called the Partial Autocorrelation Function, and is formed by the last AR coefficient
of each successively larger model.

If, instead of the φss, we use their estimators, φ̂ss, the succession of values

φ̂11, φ̂22, φ̂33, …, φ̂kk

is called the Partial Correlogram.

The partial correlogram is a very useful instrument to detect purely AR processes,
noting that if

φ̂ss ≈ 0 for all s > p,

the time series will be an AR(p) process.
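The partial correlogram can be computed exactly as described: fit AR(1), AR(2), AR(3), … by least squares and keep the last estimated coefficient of each regression. The sketch below (illustrative code with an assumed AR(2) data-generating process, not an example from the notes) should give values close to 0 beyond lag 2:

    import numpy as np

    def partial_correlogram(y, max_lag=10):
        """phi_hat_ss for s = 1, ..., max_lag, from successive AR(s) least-squares fits."""
        y = np.asarray(y, dtype=float) - np.mean(y)
        pacf = np.empty(max_lag)
        for s in range(1, max_lag + 1):
            # Regress y_t on y_{t-1}, ..., y_{t-s} and keep the last coefficient
            Y = y[s:]
            X = np.column_stack([y[s - j: -j] for j in range(1, s + 1)])
            coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
            pacf[s - 1] = coefs[-1]
        return pacf

    # Simulated AR(2) series with assumed coefficients 0.5 and 0.3
    rng = np.random.default_rng(5)
    eps = rng.standard_normal(2000)
    y = np.zeros(2000)
    for t in range(2, 2000):
        y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

    print(np.round(partial_correlogram(y, max_lag=6), 3))   # roughly 0 beyond lag 2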

Example:

Let us suppose that the partial correlograms of the following Series 3 and Series 4 adopt
the following form:

Partial correlograms corresponding to imaginary series

[Figure: partial correlograms of Series 3 (left) and Series 4 (right).]

Then, we can conclude by saying that Series 3 follows an AR(5) process, while Series 4
follows an AR(2) process.

Appendix 2: Matrix representation

Given the time series {y1, y2, y3, …, yT}, we can study its statistical properties from a
matrix viewpoint.

In general, we call “y” the (T×1) vector of observations,

y = (y1, y2, …, yT)′,

and, in general, we have that

E(y) = (μ1, μ2, …, μT)′   and   Var(y) = Σ,

where Σ is the (T×T) matrix whose (t, s) element is Cov(yt, ys) = γt,s.

Now, if we assume stationarity of order 2 (i.e., covariance stationarity), that is,

*. E(yt) = μ, for all t,

*. Var(yt) = E[(yt – μ)²] = γ0, for all t,

*. Cov(yt, yt+k) = E[(yt – μ)(yt+k – μ)] = γk, for all t,

the previous expressions become:

E(y) = (μ, μ, …, μ)′   and   Var(y) = Σ, with γ0 on the main diagonal of Σ, γ1 on the
first off-diagonals, γ2 on the second off-diagonals, and so on.

This type of matrix (where all the elements along each diagonal are equal) is called a
TOEPLITZ matrix.

Appendix 3: Nonstationarity

At the beginning of this course we said that most economic time series are of a
nonstationary nature.

Here we briefly present two alternative ways of transforming a nonstationary time series
into a stationary one.

A usual way (at least until the 1980s) of describing the nonstationarity was to employ
deterministic functions of time (generally linear or quadratic). That is, one can regress
yt on a variable “t” that describes a linear trend, or regress it on the variables “t”
and “t²” (quadratic function), assuming that the resulting residuals are stationary.
Thus, for example, if the time series displays a trend, a way of modelling this series
would be:

yt = α + β t + ut,

such that, once the parameters are estimated (α̂ and β̂), the resulting residuals
ût = yt – α̂ – β̂ t are stationary and can be modelled using ARMA models:

φ(L) ut = θ(L) εt.   (2.23)

In other words, what we do is fit a linear trend such that the discrepancies between the
observed values and the values fitted from the linear trend are stationary. This type of
transformation is usually called the “deterministic approach”.

An alternative way of transforming a time series into a stationary one is to take first
differences. That is, instead of working directly with the original data, yt, we work
with their first differences, yt - yt-1, or

ut = (1 - L) yt = yt - yt-1.

Assuming that ut is now stationary, we proceed to study its ARMA structure in the same
way as in (2.23). This approach is called the “stochastic approach” and will be studied
in detail in future chapters.
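Both transformations are straightforward to implement; the sketch below (an illustration on a simulated trending series with hypothetical parameters, not data from the notes) computes the detrended residuals of the deterministic approach and the first differences of the stochastic approach:

    import numpy as np

    rng = np.random.default_rng(6)
    T = 300
    t = np.arange(T, dtype=float)

    # Simulated nonstationary series: linear trend plus a stationary AR(1) disturbance
    u = np.zeros(T)
    for s in range(1, T):
        u[s] = 0.6 * u[s - 1] + rng.standard_normal()
    x = 10.0 + 0.05 * t + u                    # hypothetical intercept and slope

    # Deterministic approach: regress x_t on a constant and t, keep the residuals
    X = np.column_stack([np.ones(T), t])
    beta_hat, *_ = np.linalg.lstsq(X, x, rcond=None)
    residuals = x - X @ beta_hat               # should behave like a stationary series

    # Stochastic approach: first differences u_t = (1 - L)x_t = x_t - x_{t-1}
    dx = np.diff(x)

    print(beta_hat)                            # estimated intercept and slope
    print(residuals.var(), dx.var())           # variances of the two transformed series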

Giving some ideas in advance, we say that a process yt is ARIMA(p, d, q) if:

φ(L) (1 – L)^d yt = θ(L) εt,

where φ(L) is the AR polynomial of order p,

θ(L) is the MA polynomial of order q,

and d is the number of differences required to achieve stationarity (usually d = 1 and
sometimes 2).

