Asteriou - Time Series
Second Edition
Dimitrios Asteriou
Associate Professor at the Department of Business Administration,
Hellenic Open University, Greece
Stephen G. Hall
Professor of Economics, University of Leicester
13 ARIMA Models and the Box–Jenkins Methodology
CHAPTER CONTENTS
An introduction to time series econometrics
ARIMA models
Stationarity
Autoregressive time series models
Moving average models
ARMA models
Integrated processes and the ARIMA models
Box–Jenkins model selection
Example: the Box–Jenkins approach
Questions and exercises
LEARNING OBJECTIVES
After studying this chapter you should be able to:
1. Understand the concept of ARIMA models.
2. Differentiate between univariate and multivariate time series models.
3. Understand the Box–Jenkins approach for model selection in the univariate time
series framework.
4. Know how to estimate ARIMA(p, d, q) models using econometric software.
ARIMA models
Box and Jenkins (1976) first introduced ARIMA models, the term deriving from:
AR = autoregressive;
I = integrated; and
MA = moving average.
The following sections will present the different versions of ARIMA models and introduce the concept of stationarity, which will be analysed extensively. After defining
stationarity, we will begin by examining the simplest model, the autoregressive model
of order one, and then continue with a survey of ARIMA models. Finally, the Box–Jenkins
approach for model selection and forecasting will be presented briefly.
Stationarity
A key concept underlying time series processes is that of stationarity. A time series is
covariance stationary when it has the following three characteristics:
(a) exhibits mean reversion in that it fluctuates around a constant long-run mean;
(b) has a finite variance that is time-invariant; and
(c) has a theoretical correlogram that diminishes as the lag length increases.
Thus these quantities would remain the same whether observations for the time series
were, for example, from 1975 to 1985 or from 1985 to 1995. Stationarity is important
because, if the series is non-stationary, all the typical results of the classical regression
analysis are not valid. Regressions with non-stationary series may have no meaning and
are therefore called ‘spurious’. (The concepts of spurious regressions will be examined
and analysed further in Chapter 16.)
Shocks to a stationary time series are necessarily temporary; over time, the effects of
the shocks will dissipate and the series will revert to its long-run mean level. As such,
long-term forecasts of a stationary series will converge to the unconditional mean of
the series.
Autoregressive time series models

The simplest case of an autoregressive process is the AR(1) model, in which the series depends only on its own value in the previous period:

Yt = φYt−1 + ut   (13.1)
where, for simplicity, we do not include a constant, |φ| < 1, and ut is a Gaussian
(white noise) error term. The assumption behind the AR(1) model is that the time series
behaviour of Yt is largely determined by its own value in the preceding period. So what will a realization of such a process look like in practice?
The plot of the Yt series will look like that shown in Figure 13.1. It is clear that this series
has a constant mean and a constant variance, which are the first two characteristics of
a stationary series.
If we obtain the correlogram of the series we shall see that it indeed diminishes as
the lag length increases. To do this in EViews, first double-click on yt to open it in a
new window and then go to View/Correlogram and click OK.
Continuing, to create a time series (say Xt ) which has |φ| > 1, type in the
following commands:
smpl 1 1
genr xt=1
smpl 2 500
genr xt = 1.2*xt(-1) + nrnd
smpl 1 200
plot xt
With the final command Figure 13.2 is produced, where it can be seen that the series is
exploding. Note that we specified the sample to range from 1 to 200. This is because the
explosive behaviour is so great that EViews cannot plot all 500 data values in one graph.
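This simulation exercise can also be reproduced outside EViews. The following Python sketch is an illustration only (it is not part of the original example): it simulates a stationary AR(1) series with φ = 0.7 (any value with |φ| < 1 would do) and an explosive series with φ = 1.2, mirroring the EViews commands above.

# Illustrative sketch: simulate a stationary and an explosive AR(1) series,
# mirroring the EViews genr commands above (series names are arbitrary).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 500
phi_stationary, phi_explosive = 0.7, 1.2      # |phi| < 1 vs |phi| > 1

yt = np.zeros(n)
xt = np.ones(n)                               # xt starts at 1, as in the EViews example
u = rng.standard_normal(n)                    # Gaussian (white noise) errors
for t in range(1, n):
    yt[t] = phi_stationary * yt[t - 1] + u[t]
    xt[t] = phi_explosive * xt[t - 1] + u[t]

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
axes[0].plot(yt)
axes[0].set_title('Stationary AR(1), phi = 0.7')
axes[1].plot(xt[:200])                        # only the first 200 observations, as in Figure 13.2
axes[1].set_title('Explosive series, phi = 1.2')
plt.tight_layout()
plt.show()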
[Figure 13.1 Plot of the stationary AR(1) series yt]
[Figure 13.2 Plot of the explosive series xt (first 200 observations)]
Similarly, the AR(p) model will be an autoregressive model of order p, and will have
p lagged terms, as in the following:
Yt = Σ_{i=1}^{p} φi Yt−i + ut   (13.4)
Finally, using the lag operator L (which has the property Ln Yt = Yt−n ) we can write the
AR(p) model as:
Yt (1 − φ1L − φ2L² − · · · − φpL^p) = ut   (13.5)

or, more compactly:

Φ(L)Yt = ut   (13.6)
The stationarity condition for the AR(1) model requires the root of the polynomial equation:

(1 − φz) = 0   (13.7)

to be greater than 1 in absolute value. If this is so, and the root is equal to λ, then the condition is:

|λ| = 1/φ > 1   (13.8)

which is equivalent to:

|φ| < 1   (13.9)
A necessary but not sufficient requirement for the AR(p) model to be stationary is that
the summation of the p autoregressive coefficients should be less than 1:
Σ_{i=1}^{p} φi < 1   (13.10)
Substituting repeatedly for the lagged values of Yt in the AR(1) model gives:

Yt+1 = φ^(t+1) Y0 + φ^t u1 + φ^(t−1) u2 + · · · + φ^0 ut+1

Since |φ| < 1, φ^t will be close to zero for large t. Thus we have that:

E(Yt+1) = 0   (13.11)

and:

Var(Yt) = Var(φYt−1 + ut) = φ²σ²Y + σ²u, so that σ²Y = σ²u/(1 − φ²)   (13.12)
Time series are also characterized by the autocovariance and autocorrelation functions. The covariance between two random variables Xt and Zt is defined as:

Cov(Xt, Zt) = E[(Xt − E(Xt))(Zt − E(Zt))]   (13.13)

Thus for two elements of the Yt process, say Yt and Yt−1, we have:

Cov(Yt, Yt−1) = E[(Yt − E(Yt))(Yt−1 − E(Yt−1))]   (13.14)

which is called the autocovariance function. For the AR(1) model the autocovariance at lag 1 is given by:

Cov(Yt, Yt−1) = φσ²Y   (13.15)
Similarly, at lag 2:

Cov(Yt, Yt−2) = φ²σ²Y   (13.16)

and in general:

Cov(Yt, Yt−k) = φ^k σ²Y   (13.17)

so that the autocorrelations are:

Cor(Yt, Yt−k) = Cov(Yt, Yt−k)/Var(Yt) = φ^k   (13.18)
So, for an AR(1) series, the autocorrelation function (ACF) (and the graph of it which
plots the values of Cor(Yt , Yt−k ) against k and is called a correlogram) will decay
exponentially as k increases.
Finally, the partial autocorrelation function (PACF) involves plotting, against k, the
estimated coefficient on Yt−k from an OLS estimation of an AR(k) process. If the observations
are generated by an AR(p) process then the theoretical partial autocorrelations will be
high and significant for up to p lags and zero for lags beyond p.
Moving average models

The simplest moving average model is the MA(1) model, in which the series depends only on the current and the immediately preceding value of a white-noise error term:

Yt = ut + θut−1   (13.19)
Thus the implication behind the MA(1) model is that Yt depends on the value of the
immediate past error, which is known at time t.
Invertibility in MA models
A property often discussed in connection with the moving average processes is that
of invertibility. A time series Yt is invertible if it can be represented by a finite-order
MA or convergent autoregressive process. Invertibility is important because the use of
the ACF and PACF for identification assumes implicitly that the Yt sequence can be
approximated well by an autoregressive model. As an example, consider the simple
MA(1) model:
Yt = ut + θut−1   (13.24)

which, using the lag operator, can be written as:

Yt = (1 + θL)ut

so that:

ut = Yt/(1 + θL)   (13.25)
If |θ| < 1, then the right-hand side of Equation (13.25) can be expanded as the sum of an infinite geometric progression:

ut = Yt (1 − θL + θ²L² − θ³L³ + · · · )   (13.26)
To see this, solve Equation (13.24) for ut:

ut = Yt − θut−1

Lagging this expression one period, substituting for ut−1 and then repeating the substitution an infinite number of times, we finally obtain the expression in Equation (13.26). Thus the MA(1) process has been inverted into an infinite-order AR process
with geometrically declining weights. Note that for the MA(1) process to be invertible
it is necessary that |θ| < 1.
In general, MA(q) processes are invertible if the roots of the polynomial:

Θ(z) = 0   (13.27)

are greater than 1 in absolute value, that is, lie outside the unit circle.
For the MA(1) model the autocovariance at lag 1 is:

Cov(Yt, Yt−1) = θσ²u   (13.31)

From this, and given that Var(Yt) = (1 + θ²)σ²u, we can see that for the MA(1) process the autocorrelation function will be:
Cor(Yt, Yt−k) = Cov(Yt, Yt−k)/√(Var(Yt)Var(Yt−k)) = θσ²u/[σ²u(1 + θ²)] = θ/(1 + θ²)   for k = 1
Cor(Yt, Yt−k) = 0   for k > 1   (13.33)
So, with an MA(q) model the correlogram (the graph of the ACF) is expected to show significant spikes up to lag q and then drop to zero immediately. Also, since any MA process can be represented as an AR process with geometrically declining coefficients, the PACF for an MA process should decay gradually rather than cut off abruptly.
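These contrasting ACF and PACF patterns can be checked numerically. The short Python sketch below is purely illustrative (the parameter values φ = 0.7 and θ = 0.5 are arbitrary): it uses statsmodels to compute the theoretical ACF and PACF of an AR(1) and an MA(1) process.

# Illustrative sketch: theoretical ACF and PACF of an AR(1) and an MA(1)
# process, computed with statsmodels (arbitrary parameter values).
from statsmodels.tsa.arima_process import ArmaProcess

ar1 = ArmaProcess(ar=[1, -0.7], ma=[1])       # Yt = 0.7*Y(t-1) + ut
ma1 = ArmaProcess(ar=[1], ma=[1, 0.5])        # Yt = ut + 0.5*u(t-1)

print('AR(1) ACF :', ar1.acf(lags=6).round(3))    # decays geometrically (0.7^k)
print('AR(1) PACF:', ar1.pacf(lags=6).round(3))   # single spike at lag 1, then zero
print('MA(1) ACF :', ma1.acf(lags=6).round(3))    # one spike at lag 1, zero afterwards
print('MA(1) PACF:', ma1.pacf(lags=6).round(3))   # declines gradually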
ARMA models
After presenting the AR(p) and the MA(q) processes, it should be clear that there can
be combinations of the two processes to give a new series of models called ARMA(p, q)
models.
The general form of the ARMA model is an ARMA(p, q) model of the form:
Yt = Σ_{i=1}^{p} φi Yt−i + ut + Σ_{j=1}^{q} θj ut−j   (13.35)
Yt (1 − φ1L − φ2L² − · · · − φpL^p) = (1 + θ1L + θ2L² + · · · + θqL^q)ut   (13.36)

Φ(L)Yt = Θ(L)ut   (13.37)
In the ARMA(p, q) models the condition for stationarity relates only to the AR(p) part
of the specification: the p roots of the polynomial equation Φ(z) = 0 should
lie outside the unit circle. Similarly, the property of invertibility for the ARMA(p, q)
models relates only to the MA(q) part of the specification, and the roots of the
Θ(z) polynomial should also lie outside the unit circle. The next section will deal with
integrated processes and explain the ‘I’ part of ARIMA models. Here it is useful to note
that the ARMA(p, q) model can also be denoted as an ARIMA(p,0,q) model. To give
an example, consider the ARMA(2,3) model, which is equivalent to the ARIMA(2,0,3)
model and is:
Yt = φ1 Yt−1 + φ2 Yt−2 + ut
+ θ1 ut−1 + θ2 ut−2 + θ3 ut−3 (13.38)
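As a quick illustration of this notation (not taken from the book's example), the Python sketch below simulates an ARMA(2,3) process with arbitrarily chosen coefficients and then estimates it with statsmodels as an ARIMA(2,0,3) model.

# Illustrative sketch: simulate an ARMA(2,3) process with arbitrary
# coefficients and estimate it as an ARIMA(2,0,3) model.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(0)
# AR polynomial 1 - 0.5L - 0.3L^2 and MA polynomial 1 + 0.4L + 0.3L^2 + 0.2L^3
process = ArmaProcess(ar=[1, -0.5, -0.3], ma=[1, 0.4, 0.3, 0.2])
y = process.generate_sample(nsample=1000)

result = ARIMA(y, order=(2, 0, 3), trend='n').fit()   # 'n' = no constant term
print(result.summary())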
Integrated processes and the ARIMA models

A non-stationary series can often be made stationary if we de-trend the raw data through a process called differencing. The first differences of a
series Yt are given by the equation:

ΔYt = Yt − Yt−1   (13.39)
As most economic and financial time series show trends to some degree, we nearly
always take the first differences of the input series. If, after first differencing, a series
is stationary, then the series is also called integrated to order one, and denoted I(1) –
which completes the abbreviation ARIMA. If the series, even after first differencing, is
not stationary, second differences need to be taken, using the equation:

Δ²Yt = ΔYt − ΔYt−1 = Yt − 2Yt−1 + Yt−2   (13.40)
If the series becomes stationary after second differencing it is integrated of order two
and denoted by I(2). In general, if a series has to be differenced d times in order to induce
stationarity, the series is called integrated of order d and denoted by I(d). Thus the
general ARIMA model is called an ARIMA(p, d, q), with p being the number of lags of
the dependent variable (the AR terms), d being the number of differences required to
take in order to make the series stationary, and q being the number of lagged terms of
the error term (the MA terms).
In terms of the lag operator, the general ARIMA(p, d, q) model can be written as:

Δ^d Yt (1 − φ1L − φ2L² − · · · − φpL^p) = (1 + θ1L + θ2L² + · · · + θqL^q)ut   (13.41)
Box–Jenkins model selection

The flexibility of the ARIMA family gives rise to the main difficulty in using ARIMA models, known as the identification
problem. The essence of this is that any model may be given more than one (and
in most cases many) different representations, which are essentially equivalent. How,
then, should we choose the best one and how should it be estimated? Defining the ‘best’
representation is fairly easy, and here we use the principle of parsimony. This simply
means that we pick the form of the model with the smallest number of parameters to
be estimated. The trick is to find this model. You might think it is possible to start with
a high-order ARMA model and simply remove the insignificant coefficients. But this
does not work, because within this high-order model will be many equivalent ways of
representing the same model and the estimation process is unable to choose between
them. We therefore have to know the form of the model before we can estimate it. In
this context this is known as the identification problem and it represents the first stage
of the Box–Jenkins procedure.
Identification
In the identification stage (this identification should not be confused with the
identification procedure explained in the simultaneous equations chapter), the
researcher visually examines the time plot of the series together with its ACF and PACF. Plotting
each observation of the Yt sequence against t provides useful information con-
cerning outliers, missing values and structural breaks in the data. It was men-
tioned earlier that most economic and financial time series are trended and
therefore non-stationary. Typically, non-stationary variables have a pronounced
trend (increasing or declining) or appear to meander without a constant long-run
mean or variance. Missing values and outliers can be corrected at this point. At
one time, the standard practice was to first-difference any series deemed to be
non-stationary.
A comparison of the sample ACF and PACF to those of various theoretical ARIMA
processes may suggest several plausible models. In theory, if the series is non-stationary,
the ACF of the series will not die down or show signs of decay at all. If this is the case,
the series needs to be transformed to make it stationary. As was noted above, a common
stationarity-inducing transformation is to take logarithms and then first differences of
the series.
Once stationarity has been achieved, the next step is to identify the p and q orders of
the ARIMA model. For a pure MA(q) process, the ACF will tend to show estimates that
are significantly different from zero up to lag q and then die down immediately after
the qth lag. The PACF for MA(q) will tend to die down quickly, either by an exponential
decay or by a damped sine wave.
In contrast to the MA processes, the pure AR(p) process will have an ACF that will
tend to die down quickly, either by an exponential decay or by a damped sine wave,
while the PACF will tend to show spikes (significant autocorrelations) for lags up to p
and then will die down immediately.
If neither the ACF nor the PACF show a definite cut-off, a mixed process is suggested.
In this case it is difficult, but not impossible, to identify the AR and MA orders. We
should think of the ACF and PACF of pure AR and MA processes as being superimposed onto one another.
Table 13.1 ACF and PACF patterns for possible ARMA(p, q) models

Model              ACF                                             PACF
Pure white noise   All autocorrelations are zero                   All partial autocorrelations are zero
MA(1)              Single positive spike at lag 1                  Damped sine wave or exponential decay
AR(1)              Damped sine wave or exponential decay           Single positive spike at lag 1
ARMA(1,1)          Decay (exp. or sine wave) beginning at lag 1    Decay (exp. or sine wave) beginning at lag 1
ARMA(p, q)         Decay (exp. or sine wave) beginning at lag q    Decay (exp. or sine wave) beginning at lag p
For example, if both the ACF and the PACF show signs of slow exponential decay, an ARMA(1,1) process may be identified. Similarly, if the ACF shows three
significant spikes at lags one, two and three and then an exponential decay, and the
PACF spikes at the first lag and then shows an exponential decay, an ARMA(3,1) process should be considered. Table 13.1 reports some possible combinations of ACF and
PACF patterns that allow us to detect the order of ARMA processes. In general, it
is difficult to identify mixed processes, so sometimes more than one ARMA(p, q) model
might be estimated, which is why the estimation and diagnostic checking stages are
both important and necessary.
Estimation
In the estimation stage, each of the tentative models is estimated and the various coef-
ficients are examined. In this second stage, the estimated models are compared using
the Akaike information criterion (AIC) and the Schwarz Bayesian criterion (SBC). We
want a parsimonious model, so we choose the model with the smallest AIC and SBC
values. Of the two criteria, the SBC is preferable. Also at this stage we have to be aware
of the common factor problem. The Box–Jenkins approach necessitates that the series
is stationary and the model invertible.
Diagnostic checking
In the diagnostic checking stage we examine the goodness of fit of the model. The
standard practice at this stage is to plot the residuals and look for outliers and evidence
of periods in which the model does not fit the data well. Care must be taken here to
avoid overfitting (that is, adding further coefficients to a model that is already adequate).
The special statistics we use here are the Box–Pierce (BP) statistic and the Ljung–Box
(LB) Q-statistic (see Ljung and Box, 1978), which serve to test for autocorrelations of
the residuals.
In summary, the Box–Jenkins approach involves the following steps:

Step 1 Calculate the ACF and PACF of the raw data, and check whether the series is
stationary or not. If the series is stationary, go to step 3; if not, go to step 2.
Step 2 Take the logarithm and the first differences of the raw data and calculate the
ACF and PACF for the first logarithmic differenced series.
Step 3 Examine the graphs of the ACF and PACF and determine which models would
be good starting points.
Step 4 Estimate those models.
Step 5 For each of the estimated models:
(a) check to see if the parameter of the longest lag is significant. If not, there
are probably too many parameters and you should decrease the order of p
and/or q.
(b) check the ACF and PACF of the errors. If the model has at least enough
parameters, then all error ACFs and PACFs will be insignificant.
(c) check the AIC and SBC together with the adj-R2 of the estimated models to
detect which model is the parsimonious one (that is the one that minimizes
AIC and SBC and has the highest adj-R2 ).
Step 6 If changes in the original model are needed, go back to step 4.
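For readers working in Python rather than EViews or Stata, the sketch below outlines the same steps with statsmodels. It is an illustration only: it assumes a quarterly GDP series is already available as a pandas Series called gdp (for instance read from a hypothetical CSV file), and that the candidate models are those suggested by the correlograms.

# Illustrative sketch of the Box-Jenkins steps in Python (assumes a pandas
# Series `gdp` containing quarterly GDP data is already available).
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA

# Steps 1-2: take logarithms and first differences of the raw data
dlgdp = np.log(gdp).diff().dropna()

# Step 3: examine the correlograms of the transformed series
fig, axes = plt.subplots(2, 1)
plot_acf(dlgdp, lags=24, ax=axes[0])
plot_pacf(dlgdp, lags=24, ax=axes[1])
plt.show()

# Step 4: estimate the candidate models suggested by the correlograms
candidates = {'ARMA(1,1)': (1, 0, 1), 'ARMA(1,2)': (1, 0, 2), 'ARMA(1,3)': (1, 0, 3)}
fits = {name: ARIMA(dlgdp, order=order).fit() for name, order in candidates.items()}

# Step 5: compare AIC and SBC (BIC) across the candidate models
for name, res in fits.items():
    print(f'{name}: AIC = {res.aic:.2f}, SBC = {res.bic:.2f}')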
Example: the Box–Jenkins approach

As an illustration, consider the gdp series in the file ARIMA.wf1, which contains quarterly data for the UK economy.

Step 1 As a first step we need to calculate the ACF and PACF of the raw data. To do this
we double-click on the gdp variable to open it in a new EViews
window. We can then calculate the ACF and PACF and view their respective
graphs by clicking on View/Correlogram in the window that contains the gdp
variable. This will give us Figure 13.3.
From Figure 13.3 we can see that the ACF does not die down at any lag
(see also the plot of gdp to notice that it is clearly trended), which suggests that
the series is integrated and we need to proceed with taking logarithms and first
differences of the series.
Step 2 We take logs and then first differences of the gdp series by typing the following
commands into the EViews command line:
genr lgdp = log(gdp)
genr dlgdp = lgdp - lgdp(-1)
and then double-click on the newly created dlgdp (log-differenced series) and
click again on View/Correlogram to obtain the correlogram of the dlgdp series.
Step 3 From step 2 above we obtain the ACF and PACF of the dlgdp series, provided
in Figure 13.4. From this correlogram we can see that there are 2 to 3 spikes
on the ACF, and then all are zero, while there is also one spike in the PACF
which then dies down to zero quickly. This suggests that we might have up
to MA(3) and AR(1) specifications. So, the possible models are the ARMA(1,3),
ARMA(1,2) or ARMA(1,1) models.
Step 4 We then estimate the three possible models. The command for estimating the
ARMA(1,3) model is:

ls dlgdp c ar(1) ma(1) ma(2) ma(3)

and the ARMA(1,2) and ARMA(1,1) models are estimated in the same way, omitting the higher-order MA terms. The results are presented in Tables 13.2, 13.3 and 13.4, respectively.
Step 5 Finally, the diagnostics of the three alternative models need to be checked,
to see which model is the most appropriate. Summarized results of all three
specifications are provided in Table 13.5, from which we see that, in terms of
the significance of estimated coefficients, the model that is most appropriate
is probably ARMA(1,3). ARMA(1,2) has one insignificant term (the coefficient
of the MA(2) term, which should be dropped), but when we include both
MA(2) and MA(3), the MA(3) term is highly significant and the MA(2) term
is significant at the 10% level. In terms of AIC and SBC we have contradictory
results: the AIC suggests the ARMA(1,3) model, but the SBC suggests
the ARMA(1,1) model. The adj-R2 is also higher for the ARMA(1,3) model. So
the evidence here suggests that the ARMA(1,3) model is probably the most appropriate
one. Remembering that we need a parsimonious model, there might
be a problem of overfitting here. For this we also check the Q-statistics of
the correlograms of the residuals for lags 8, 16 and 24. We see that only the
ARMA(1,3) model has insignificant Q-statistics in all three cases, while the other two
models have Q-statistics that are significant (at the 10% level) at the 8th and 16th lags, suggesting
that their residuals are serially correlated. So, again, here the ARMA(1,3)
model seems to be the most appropriate. As an alternative specification, and as an
exercise for the reader, go back to step 4 (as step 6 suggests) and re-estimate a
model with an AR(1) term and MA(1) and MA(3) terms, to see what happens
to the diagnostics.
Table 13.5 Summary results of the alternative ARMA(p, q) models for dlgdp

                                ARMA(1,3)            ARMA(1,2)            ARMA(1,1)
Degrees of freedom              68                   69                   70
SSR                             0.002093             0.002266             0.002288
φ (t-stat in parentheses)       0.71 (7.03)          0.72 (6.3)           0.74 (7.33)
θ1 (t-stat in parentheses)      −0.44 (−3.04)        −0.34 (−2.0)         −0.47 (−2.92)
θ2 (t-stat in parentheses)      −0.22 (−1.78)        −0.12 (0.9)          —
θ3 (t-stat in parentheses)      0.32 (2.85)          —                    —
AIC/SBC                         −7.4688/−7.3107      −7.4173/−7.2908      −7.4356/−7.3407
Adj R2                          0.301                0.254                0.258
Ljung–Box statistics for        Q(8) = 5.65 (0.22)   Q(8) = 9.84 (0.08)   Q(8) = 11.17 (0.08)
residuals (sig. levels in       Q(16) = 14.15 (0.29) Q(16) = 20.66 (0.08) Q(16) = 19.81 (0.07)
parentheses)                    Q(24) = 19.48 (0.49) Q(24) = 24.87 (0.25) Q(24) = 28.58 (0.15)
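The residual diagnostics in this table can be reproduced in the same Python setting. The sketch below is again illustrative and reuses the hypothetical gdp Series assumed in the earlier sketch; it computes Ljung–Box Q-statistics at lags 8, 16 and 24 for the residuals of each candidate model.

# Illustrative sketch (reuses the hypothetical `gdp` Series from the earlier
# sketch): Ljung-Box Q-statistics for the residuals of the candidate models.
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima.model import ARIMA

dlgdp = np.log(gdp).diff().dropna()
for name, order in [('ARMA(1,3)', (1, 0, 3)), ('ARMA(1,2)', (1, 0, 2)), ('ARMA(1,1)', (1, 0, 1))]:
    res = ARIMA(dlgdp, order=order).fit()
    lb = acorr_ljungbox(res.resid, lags=[8, 16, 24], return_df=True)
    print(name)
    print(lb[['lb_stat', 'lb_pvalue']].round(3))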
The same analysis can be carried out in Stata.

Step 1 To calculate the ACF and PACF, the command in Stata is:
corrgram gdp
The results obtained are shown in Figure 13.5. Additionally, Stata calculates
the ACF and the PACF with graphs that show the 95% confidence limit. The
commands for these are:
ac gdp
pac gdp
The graphs of these commands are shown in Figures 13.6 and 13.7, respectively.
Step 2 To take logs and first differences of the gdp series the following commands
should be executed:
g lgdp = log(gdp)
g dlgdp = D.lgdp
corrgram dlgdp
ac dlgdp
pac dlgdp
Step 3–5 We proceed with the estimation of the various possible ARMA models. The
command for estimating ARIMA(p, d, q) models in Stata is the following:

arima depvar indepvars, arima(#p,#d,#q)
[Figure 13.5 Correlogram of gdp (Stata corrgram output: LAG, AC, PAC, Q, Prob > Q); Figures 13.6 and 13.7 Autocorrelations and partial autocorrelations of gdp with 95% confidence bands (Bartlett's formula for MA(q); se = 1/sqrt(n))]
where for #p we put the number of lagged AR terms (that is, if we want
AR(4) we simply put 4) and so on. If we want to estimate an ARMA model,
then the middle term is always defined as zero (that is for ARMA(2,3) we put
arima(2,0,3)).
Therefore, the commands for the gdp variable are:

arima dlgdp, arima(1,0,3)
arima dlgdp, arima(1,0,2)
arima dlgdp, arima(1,0,1)
The results are similar to those presented in Tables 13.2, 13.3 and 13.4,
respectively.
Exercise 13.1
Show that an MA(1) process can be expressed as an infinite AR process.
Exercise 13.2
The file ARIMA.wf1 contains quarterly data for the consumer price index (cpi) and gross
domestic product (gdp) of the UK economy. Follow the steps described in the example
of the Box–Jenkins approach for gdp, this time using the cpi variable.
14 Modelling the Variance: ARCH–GARCH Models
CHAPTER CONTENTS
Introduction
The ARCH model
The GARCH model
Alternative specifications
Empirical illustrations of ARCH/GARCH models
Questions and exercises
LEARNING OBJECTIVES
After studying this chapter you should be able to:
1. Understand the concept of conditional variance.
2. Detect ‘calm’ and ‘wild’ periods in a stationary time series.
3. Understand the autoregressive conditional heteroskedasticity (ARCH) model.
4. Perform a test for ARCH effects.
5. Estimate an ARCH model.
6. Understand the GARCH model and the difference between the GARCH and ARCH
specifications.
7. Understand the distinctive features of the ARCH-M and GARCH-M models.
8. Understand the distinctive features of the TGARCH and EGARCH models.
9. Estimate all ARCH-type models using appropriate econometric software.
Introduction
Recent developments in financial econometrics have led to the use of models and tech-
niques that can model the attitude of investors not only towards expected returns but
also towards risk (or uncertainty). These require models that are capable of dealing
with the volatility (variance) of the series. Typical are the autoregressive conditional
heteroskedasticity (ARCH) family of models, which are presented and analysed in
this chapter.
Conventional econometric analysis views the variance of the disturbance terms as
being constant over time (the homoskedasticity assumption that was analysed in Chap-
ter 7). However, often financial and economic time series exhibit periods of unusually
high volatility followed by more tranquil periods of low volatility (‘wild’ and ‘calm’
periods, as some financial analysts like to call them).
Even from a quick look at financial data (see, for example, Figure 14.1, which plots the
daily returns of the FTSE-100 index from 1 January 1990 to 31 December 1999) we
can see that there are certain periods that have a higher volatility (and are therefore
riskier) than others. This means that the expected value of the magnitude of the dis-
turbance terms may be greater at certain periods compared with others. In addition,
these riskier times seem to be concentrated and followed by periods of lower risk (lower
volatility) that again are concentrated. In other words, we observe that large changes in
stock returns seem to be followed by further large changes. This phenomenon is what
financial analysts call volatility clustering. In terms of the graph in Figure 14.1, it is
clear that there are subperiods of higher volatility; it is also clear that after 1997 the
volatility of the series is much higher than it used to be.
Therefore, in such cases, it is clear that the assumption of homoskedasticity (or con-
stant variance) is very limiting, and in such instances it is preferable to examine patterns
that allow the variance to depend on its history. Or, to use more appropriate terminol-
ogy, it is preferable to examine not the unconditional variance (which is the long-run
forecast of the variance and can be still treated as constant) but the conditional variance,
based on our best model of the variable under consideration.
Figure 14.1 Plot of the returns of FTSE-100, 1 January 1990 to 31 December 1999
The ARCH model

Consider the simple regression model:

Yt = a + βXt + ut   (14.1)
Engle's idea begins by allowing the variance of the residuals (σ²) to depend on its history; that is, the variance exhibits heteroskedasticity because it changes over time. One way to
allow for this is to have the variance depend on one lagged period of the squared error
terms, as follows:
σ²t = γ0 + γ1 u²t−1   (14.3)
The ARCH(1) model therefore simultaneously models the mean and the variance of the series as:

Yt = a + βXt + ut   (14.4)
ut | Ωt ∼ iid N(0, ht)
ht = γ0 + γ1 u²t−1   (14.5)

where Ωt is the information set. Here Equation (14.4) is called the mean equation
and Equation (14.5) the variance equation. Note that we have changed the notation
of the variance from σ²t to ht. This is to keep the same notation from now
on, throughout this chapter. (The reason it is better to use ht rather than σ²t will
become clear through the more mathematical explanation provided later in the
chapter.)
The ARCH(1) model says that when a big shock happens in period t − 1, it is
more likely that the value of ut will also be big in absolute terms (because of the squaring).
That is, when u²t−1 is large/small, the variance of the next innovation
ut is also large/small. The estimated coefficient γ1 has to be positive for
the variance to be positive.
An ARCH(2) model will be:

ht = γ0 + γ1 u²t−1 + γ2 u²t−2   (14.6)

an ARCH(3) model:

ht = γ0 + γ1 u²t−1 + γ2 u²t−2 + γ3 u²t−3   (14.7)

and, in general, an ARCH(q) model:

ht = γ0 + γ1 u²t−1 + γ2 u²t−2 + · · · + γq u²t−q = γ0 + Σ_{j=1}^{q} γj u²t−j   (14.8)
Therefore, the ARCH(q) model will simultaneously examine the mean and the variance
of a series according to the following specification:
Yt = a + βXt + ut   (14.9)
ut | Ωt ∼ iid N(0, ht)
ht = γ0 + Σ_{j=1}^{q} γj u²t−j   (14.10)
Again, the estimated coefficients of the γ s have to be positive for positive variance.
Testing for ARCH effects

To test for the presence of ARCH effects in the residuals, we first estimate the mean equation:

Yt = a + βXt + ut   (14.11)

by OLS as usual (note that the mean equation can also have, as explanatory variables in
the Xt vector, autoregressive terms of the dependent variable), to obtain the residuals
ût. We then run an auxiliary regression of the squared residuals (û²t) on a constant and their own lagged values (û²t−1, . . . , û²t−q), as in:

û²t = γ0 + γ1 û²t−1 + · · · + γq û²t−q + wt   (14.12)

and test the joint significance of the lagged squared terms by means of the T·R² statistic of this regression, which is distributed as χ²(q) under the null hypothesis of no ARCH effects.
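The auxiliary regression in Equation (14.12) is easy to compute directly. The Python sketch below is an illustration only: it assumes a pandas Series r_ftse of daily returns, estimates an AR(1) mean equation by OLS and then computes the T·R² statistic for q = 1 (the same test is also available ready-made as het_arch in statsmodels).

# Illustrative sketch (assumes a pandas Series `r_ftse` of daily returns):
# ARCH LM test of Equation (14.12) with q = 1, computed by hand.
import statsmodels.api as sm
from scipy import stats

# Mean equation: r_ftse regressed on a constant and its own first lag
y = r_ftse.iloc[1:]
X = sm.add_constant(r_ftse.shift(1).iloc[1:])
u_hat = sm.OLS(y, X).fit().resid

# Auxiliary regression: squared residuals on a constant and their first lag
u2 = u_hat ** 2
aux = sm.OLS(u2.iloc[1:], sm.add_constant(u2.shift(1).iloc[1:])).fit()

lm_stat = len(aux.resid) * aux.rsquared        # T * R^2
p_value = stats.chi2.sf(lm_stat, df=1)         # chi-squared with q = 1 degree of freedom
print(f'T*R^2 = {lm_stat:.2f}, p-value = {p_value:.4f}')

# Equivalent ready-made test:
# from statsmodels.stats.diagnostic import het_arch
# het_arch(u_hat, nlags=1)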
In EViews, we first estimate the mean equation for the FTSE-100 returns by OLS, specifying in the Equation Specification window:

r_ftse c r_ftse(-1)

and then apply the ARCH LM test to the residuals of this regression. Part of the test output is summarized below.
[Table 14.2 ARCH(1) test. Test equation: dependent variable RESID^2; method: least squares; sample (adjusted): 1/02/1990–12/31/1999; included observations: 2609 after adjusting endpoints]
Based on the T·R² statistic we reject the null hypothesis of homoskedasticity and conclude that ARCH(1) effects are present. Testing for higher-
order ARCH effects (for example order 6) the results appear as shown in Table 14.3.
This time the T·R² statistic is even higher (205.24), suggesting a massive rejection
of the null hypothesis. Observe also that the lagged squared residuals are all highly
statistically significant. It is therefore clear for this equation specification that an ARCH
model will provide better results.
To estimate an ARCH model, click on Estimate in the equation results window to
go back to the Equation Specification window (or in a new workfile, by clicking on
Quick/Estimate Equation to open the Equation Specification window) and this time
change the estimation method by clicking on the down arrow in the method setting and selecting ARCH.
[Table 14.3 ARCH(6) test. Test equation: dependent variable RESID^2; method: least squares; sample (adjusted): 1/09/1990–12/31/1999; included observations: 2604 after adjusting endpoints]
In the Equation Specification window, type the mean equation:

r_ftse c r_ftse(-1)
making sure that the ARCH-M part selects None, which is the default EViews case. For
the ARCH specification choose GARCH/TARCH from the drop-down Model: menu,
which is again the default EViews case, and in the small boxes type 1 for the Order
ARCH and 0 for the GARCH. The Threshold Order should remain at zero (which is
the default setting). By clicking OK the results shown in Table 14.4 will appear.
Note that it took ten iterations to reach convergence in estimating this model. The
model can be written as:
[estimated mean and variance equations omitted]
with values of z-statistics in parentheses. Note that the estimate of γ1 is highly signifi-
cant and positive, which is consistent with the finding from the ARCH test above. The
estimates of a and β from the simple OLS model have changed slightly and become
more significant.
To estimate a higher-order ARCH model, such as the ARCH(6) examined above, again
click on Estimate and this time change the Order ARCH to 6 (by typing 6 in the small
box) leaving 0 for the GARCH. The results for this model are presented in Table 14.5.
Again, all the γ s are statistically significant and positive, which is consistent
with the findings above. After estimating ARCH models in EViews you can view
the conditional standard deviation or the conditional variance series by clicking on
the estimation window View/Garch Graphs/Conditional SD Graph or View/Garch
Graphs/Conditional Variance Graph, respectively. The conditional standard devia-
tion graph for the ARCH(6) model is shown in Figure 14.2.
You can also obtain the variance series from EViews by clicking on Procs/Make
GARCH Variance Series. EViews automatically gives names such as GARCH01,
GARCH02 and so on for each of the series. We renamed our obtained variance series
as ARCH1 for the ARCH(1) series model and ARCH6 for the ARCH(6) model. A plot of
these two series together is presented in Figure 14.3.
From this graph we can see that the ARCH(6) model provides a conditional vari-
ance series that is much smoother than that obtained from the ARCH(1) model.
This will be discussed more fully later. To obtain the conditional standard deviation
series plotted above, take the square root of the conditional variance series with commands of the following form:

genr sd_arch1 = @sqrt(arch1)
genr sd_arch6 = @sqrt(arch6)
A plot of the conditional standard deviation series for both models is presented in
Figure 14.4.
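The same ARCH estimations can be carried out in Python with the arch package. The sketch below is an illustration only (it again assumes the returns Series r_ftse); note that in arch_model the argument p is the number of lagged squared residuals, so it plays the role of q in the notation of this chapter.

# Illustrative sketch (assumes `r_ftse`): ARCH(1) and ARCH(6) models with an
# AR(1) mean equation, estimated with the `arch` package.  In arch_model,
# p = number of ARCH (lagged squared residual) terms.
from arch import arch_model

arch1 = arch_model(r_ftse, mean='AR', lags=1, vol='ARCH', p=1).fit(disp='off')
arch6 = arch_model(r_ftse, mean='AR', lags=1, vol='ARCH', p=6).fit(disp='off')
print(arch6.summary())

# Conditional standard deviation series, analogous to Figures 14.2 and 14.4
sd_arch1 = arch1.conditional_volatility
sd_arch6 = arch6.conditional_volatility
sd_arch6.plot(title='Conditional standard deviation, ARCH(6)')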
A more mathematical treatment of the ARCH model is as follows. Consider again the mean equation:

Yt = a + βXt + ut   (14.15)
It is usual to treat the variance of the error term Var(ut ) = σ 2 as a constant, but the vari-
ance can be allowed to change over time. To explain this more fully, let us decompose
the ut term into a systematic component and a random component, as:
ut = zt √ht   (14.16)
where zt follows a standard normal distribution with zero mean and variance one, and
ht is a scaling factor.
In the basic ARCH(1) model we assume that:
ht = γ0 + γ1 u²t−1   (14.17)

so that, combining Equations (14.16) and (14.17):

ut = zt √(γ0 + γ1 u²t−1)   (14.18)
Figure 14.2 Conditional standard deviation graph for an ARCH(6) model of the FTSE-100
Figure 14.3 Plots of the conditional variance series for ARCH(1) and ARCH(6)
Figure 14.4 Plots of the conditional standard deviation series for ARCH(1) and ARCH(6)
and from this expression it is easy to see that the mean of the residuals will be zero
(E(ut ) = 0), because E(zt ) = 0. Additionally, the unconditional (long-run) variance of
the residuals is given by:
Var(ut) = E(z²t)E(ht) = γ0/(1 − γ1)   (14.19)
which means that we simply need to impose the constraints γ0 > 0 and 0 < γ1 < 1 to
obtain stationarity.
The intuition behind the ARCH(1) model is that the conditional (short-run) variance
(or volatility) of the series is a function of the immediate past values of the squared
error term. Therefore the effect of each new shock zt depends on the size of the shock
in one lagged period.
An easy way to extend the ARCH(1) process is to add additional, higher-order
lagged parameters as determinants of the variance of the residuals to change
Equation (14.17) to:
ht = γ0 + Σ_{j=1}^{q} γj u²t−j   (14.20)
which denotes an ARCH(q) process. ARCH(q) models are useful when the variability
of the series is expected to change more slowly than in the ARCH(1) model. However,
ARCH(q) models are quite often difficult to estimate, because they frequently yield
negative estimates of the γj s. To resolve this issue, Bollerslev (1986) developed the idea
of the GARCH model, which will be examined in the next section.
The GARCH model

The generalized ARCH, or GARCH(p, q), model simultaneously models the mean and the variance of the series as:

Yt = a + βXt + ut   (14.21)
ut | Ωt ∼ iid N(0, ht)
ht = γ0 + Σ_{i=1}^{p} δi ht−i + Σ_{j=1}^{q} γj u²t−j   (14.22)
which says that the value of the variance scaling parameter ht now depends both on
past values of the shocks, which are captured by the lagged squared residual terms, and
on past values of itself, which are captured by lagged ht terms.
It should be clear by now that for p = 0 the model reduces to ARCH(q). The sim-
plest form of the GARCH(p,q) model is the GARCH(1,1) model, for which the variance
equation has the form:
ht = γ0 + δ1 ht−1 + γ1 u²t−1   (14.23)
This model specification usually performs very well and is easy to estimate because it
has only three unknown parameters: γ0 , γ1 and δ1 .
To see why, successively substitute for the lagged conditional variance in the GARCH(1,1) variance equation:

ht = γ0 + δht−1 + γ1 u²t−1
   = γ0 + δ(γ0 + δht−2 + γ1 u²t−2) + γ1 u²t−1
   = γ0 + γ1 u²t−1 + δγ0 + δ²ht−2 + δγ1 u²t−2
   = γ0 + γ1 u²t−1 + δγ0 + δ²(γ0 + δht−3 + γ1 u²t−3) + δγ1 u²t−2
   = · · ·
   = γ0/(1 − δ) + γ1 (u²t−1 + δu²t−2 + δ²u²t−3 + · · · )
   = γ0/(1 − δ) + γ1 Σ_{j=1}^{∞} δ^(j−1) u²t−j   (14.24)
which shows that the GARCH(1,1) specification is equivalent to an infinite order ARCH
model with coefficients that decline geometrically. For this reason, it is usually preferable to
estimate a GARCH(1,1) model rather than a high-order ARCH model, because with
the GARCH(1,1) there are fewer parameters to estimate and therefore fewer degrees
of freedom are lost.
To estimate a GARCH(1,1) model for the FTSE-100 returns in EViews, open the Equation Specification window again, select the ARCH estimation method and type the mean equation:

r_ftse c r_ftse(-1)
making sure that within the ARCH-M part None is selected, which is the default in
EViews. For the ARCH/GARCH specification choose GARCH/TARCH from the drop-
down Model: menu, which is again the default EViews case, and in the small boxes
type 1 for the Order ARCH and 1 for the GARCH. It is obvious that for higher orders,
for example a GARCH(4,2) model, you would have to change the number in the small
boxes by typing 2 for the Order ARCH and 4 for the GARCH. After specifying the
number of ARCH and GARCH and clicking OK the required results appear. Table 14.6
presents the results for a GARCH(1,1) model.
Note that it took only five iterations to reach convergence in estimating this model.
The model can be written as:
[estimated mean and variance equations omitted]
with values of z-statistics in parentheses. Note that the estimate of δ is highly significant
and positive, as is the coefficient of the γ1 term. We then obtain the variance series for the
GARCH(1,1) model (by clicking on Procs/Make GARCH Variance Series), rename it GARCH11 and plot it together with the ARCH6 series
to obtain the results shown in Figure 14.5.
From this we observe that the two series are quite similar (if not identical), because
the GARCH term captures a high order of ARCH terms, as was shown above. Therefore,
again, it is better to estimate a GARCH rather than a high-order ARCH model, because of
its easier estimation and the smaller loss of degrees of freedom.
Changing the values in the boxes of the ARCH/GARCH specification to 6 in order to
estimate a GARCH(6,6) model, the results shown in Table 14.7 are obtained, where the
insignificance of all the parameters apart from the ARCH(1) term suggests that it is not
an appropriate model.
Similarly, estimating a GARCH(1,6) model gives the results shown in Table 14.8,
where now only the ARCH(1) and the GARCH(1) terms are significant; also some of
the ARCH lagged terms have a negative sign. Comparing all the models from both the
ARCH and the GARCH alternative specifications, we conclude that the GARCH(1,1) is
preferred, for the reasons discussed above.
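The comparison between a high-order ARCH and a GARCH(1,1) model can be repeated with the arch package. The sketch below is illustrative only and again assumes the r_ftse returns Series; note that in arch_model p denotes the ARCH terms and q the GARCH terms, so the textbook GARCH(1,1) corresponds to p=1, q=1.

# Illustrative sketch (assumes `r_ftse`): GARCH(1,1) versus ARCH(6).
# In arch_model, p = ARCH (squared residual) lags and q = GARCH (lagged
# variance) lags, so GARCH(1,1) in the chapter's notation is p=1, q=1.
from arch import arch_model

garch11 = arch_model(r_ftse, mean='AR', lags=1, vol='GARCH', p=1, q=1).fit(disp='off')
arch6 = arch_model(r_ftse, mean='AR', lags=1, vol='ARCH', p=6).fit(disp='off')

print(garch11.summary())
print(f'GARCH(1,1) AIC = {garch11.aic:.1f}   ARCH(6) AIC = {arch6.aic:.1f}')

# The two conditional variance series should look very similar (cf. Figure 14.5)
garch11.conditional_volatility.plot(label='GARCH(1,1)', legend=True)
arch6.conditional_volatility.plot(label='ARCH(6)', legend=True)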
Alternative specifications
There are many alternative specifications that could be analysed to model conditional
volatility, and some of the more important variants are presented briefly in this section.
(Bera and Higgins (1993) and Bollerslev et al. (1994) provide very good reviews of these
Figure 14.5 Plots of the conditional variance series for ARCH(6) and GARCH(1,1)
alternative specifications, while Engle (1995) collects some important papers in the
ARCH/GARCH literature.)
The GARCH-M model

In the GARCH-M (GARCH-in-mean) model, the conditional variance enters the mean equation directly, so that the expected value of the series depends on its own risk:

Yt = a + βXt + θht + ut   (14.27)
ut | Ωt ∼ iid N(0, ht)
ht = γ0 + Σ_{i=1}^{p} δi ht−i + Σ_{j=1}^{q} γj u²t−j   (14.28)
Another variant of the GARCH-M model captures risk not through the variance but through the standard deviation of the series, with the following mean equation:

Yt = a + βXt + θ√ht + ut   (14.29)
GARCH-M models can be linked with asset-pricing models such as the capital asset-pricing model (CAPM) and have many financial applications (for more, see Campbell et al., 1997; Hall et al., 1990).
To estimate a GARCH-M model in EViews, specify the mean equation as before by typing:

r_ftse c r_ftse(-1)
and this time click on either Std.Dev or the Var selections from the ARCH-M part for
versions of the mean Equations (14.29) and (14.27), respectively.
For the ARCH/GARCH specification choose GARCH/TARCH from the drop-down
Model: menu, which is again the default EViews case, and in the small boxes specify
by typing the number of the q lags (1, 2, . . . , q) for the Order ARCH and the number of
p lags (1, 2, . . . , p) for the GARCH. Table 14.9 presents the results for a GARCH-M(1,1)
model based on the specification that uses the variance series to capture risk in the
mean equation, as given by Equation (14.27).
Note that the variance term (GARCH) in the mean equation is slightly significant
but its inclusion substantially increases the significance of the GARCH term in the
variance equation. We then re-estimate the above model, this time clicking on Std.Dev
in the ARCH-M part to include the conditional standard deviation in the mean
equation. The results are presented in Table 14.10, where this time the conditional standard deviation enters the mean equation in place of the variance.
The TGARCH model

The threshold GARCH (TGARCH) model allows the conditional variance to respond asymmetrically to positive and negative shocks. The TGARCH(1,1) specification of the variance equation is:

ht = γ0 + γ u²t−1 + θ u²t−1 dt−1 + δht−1   (14.31)
where dt takes the value of 1 for ut < 0, and 0 otherwise. So ‘good news’ and ‘bad
news’ have different impacts. Good news has an impact of γ , while bad news has an
impact of γ + θ. If θ > 0 we conclude that there is asymmetry, while if θ = 0 the news
impact is symmetric. TGARCH models can be extended to higher order specifications
by including more lagged terms, as follows:
ht = γ0 + Σ_{i=1}^{q} (γi + θi dt−i) u²t−i + Σ_{j=1}^{p} δj ht−j   (14.32)
To estimate a TGARCH model for the FTSE-100 returns in EViews, specify the mean equation as before:

r_ftse c r_ftse(-1)
ensuring also that None was clicked on in the ARCH-M part of the mean equation
specification.
For the ARCH/GARCH specification, choose GARCH/TARCH from the drop-down
Model: menu, and specify the number of q lags (1, 2, . . . , q) for the Order ARCH, the
number of p lags (1, 2, . . . , p) for the Order GARCH and the Threshold Order by chang-
ing the value in the box from 0 to 1 to have the TARCH model in action. Table 14.11
presents the results for a TGARCH(1,1) model.
Note that, because the coefficient of the (RESID<0)*ARCH(1) term is positive and
statistically significant, there are indeed asymmetries in the news for the FTSE-100.
Specifically, bad news has larger effects on the volatility of the series than good news.
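A threshold model of this kind can also be estimated in Python with the arch package, where the asymmetry enters through the o argument (the GJR-GARCH form of the threshold model). The sketch below is illustrative only and again assumes the r_ftse returns Series.

# Illustrative sketch (assumes `r_ftse`): threshold (GJR-)GARCH(1,1) model.
# In arch_model, o = number of asymmetry (threshold) terms.
from arch import arch_model

tgarch = arch_model(r_ftse, mean='AR', lags=1, vol='GARCH', p=1, o=1, q=1).fit(disp='off')
print(tgarch.summary())        # gamma[1] is the asymmetry coefficient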
The EGARCH model

The exponential GARCH (EGARCH) model specifies the logarithm of the conditional variance. A general form is:

ln(ht) = γ + Σ_{j=1}^{p} δj ln(ht−j) + Σ_{i=1}^{q} ζi |ut−i/√ht−i| + Σ_{i=1}^{q} ξi (ut−i/√ht−i)   (14.33)

where γ, the ζs, ξs and δs are parameters to be estimated. Note that the left-hand side
is the log of the variance series. This makes the leverage effect exponential rather than
quadratic, and therefore the estimates of the conditional variance are guaranteed to
be non-negative. The EGARCH model allows for the testing of asymmetries as well
as the TGARCH. To test for asymmetries, the parameters of importance are the ξ s. If
ξ1 = ξ2 = · · · = 0, then the model is symmetric. When ξj < 0, then positive shocks
(good news) generate less volatility than negative shocks (bad news).
To estimate an EGARCH model in EViews, specify the mean equation as before:

r_ftse c r_ftse(-1)
again making sure that None is clicked on in the ARCH-M part of the mean
equation specification.
For the ARCH/GARCH specification now choose EGARCH from the drop-down
Model: menu, and in the small boxes specify the number of the q lags (1, 2, . . . , q)
for the Order ARCH and the number of p lags (1, 2, . . . , p) for the GARCH. Table 14.12
presents the results for an EGARCH(1,1) model.
Note that, because the coefficient of the RES/SQR[GARCH](1) term is negative and
statistically significant, bad news indeed has larger effects on the
volatility of the FTSE-100 series than good news.
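An EGARCH model can likewise be estimated with the arch package. The sketch below is an illustration only (again assuming the r_ftse Series); in this parameterization a negative asymmetry coefficient indicates that negative shocks raise volatility more than positive shocks.

# Illustrative sketch (assumes `r_ftse`): EGARCH(1,1) estimation.
from arch import arch_model

egarch = arch_model(r_ftse, mean='AR', lags=1, vol='EGARCH', p=1, o=1, q=1).fit(disp='off')
print(egarch.summary())        # gamma[1] < 0 indicates a leverage (asymmetry) effect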
Adding explanatory variables in the variance equation

The variance equation of a GARCH model can also be augmented with additional explanatory variables:

ht = γ0 + Σ_{i=1}^{p} δi ht−i + Σ_{j=1}^{q} γj u²t−j + Σ_{k=1}^{m} μk Xk   (14.34)

where the Xk are a set of explanatory variables that might help to explain the variance. As an
example, consider the case of the FTSE-100 returns once again, and test the assumption
that the Gulf War (which took place in 1994) affected the FTSE-100 returns, making
them more volatile. This can be tested by constructing a dummy variable, named
Gulf, which will take the value of 1 for observations during 1994 and 0 for the rest
of the period. Then in the estimation of the GARCH model, apart from specifying
as always the mean equation and the order of q and p in the variance equation, add
the dummy variable in the box where EViews allows the entry of variance regressors,
by typing the name of the variable there. Estimation of a GARCH(1,1) model with
the dummy variable in the variance regression gave the results shown in Table 14.13,
where it can be seen that the dummy variable is statistically insignificant, so there is no
evidence that the Gulf War affected the volatility of the FTSE-100 returns.
Other examples with dummy and regular explanatory variables are given in
[Table 14.13 A GARCH(1,1) model with an explanatory variable in the variance equation. Dependent variable: R_FTSE; method: ML–ARCH; sample: 1/01/1990–12/31/1999; included observations: 2610; convergence achieved after 10 iterations]
the empirical illustration section below for the GARCH model of UK GDP and the effect
of socio-political instability.
The same analysis can be carried out in Stata. To estimate the mean equation by OLS, the command is:

regress r_ftse L.r_ftse

where L. denotes the lag operator. The results are similar to those in Table 14.1.
To test for ARCH effects, the command is:

estat archlm, lags(1)

The results are similar to those reported in Table 14.2 and suggest that there are ARCH
effects in the series. To test for ARCH effects of a higher order (order 6 in the example
reported in Table 14.3), the command is:

estat archlm, lags(6)
To estimate ARCH and GARCH models, the command is:

arch depvar indepvars, arch(#) garch(#)

where depvar is replaced with the name of the dependent variable and indepvars
with the names of the independent variables you want to include in the mean equation,
and after the comma choose from the options which type of ARCH/GARCH model you
wish to estimate (that is you specify the variance equation). Thus, for a simple ARCH(1)
model regressing r_ftse on its own first lag in the mean equation, the command is:

arch r_ftse L.r_ftse, arch(1)
Then, to obtain the ht variance series of this ARCH(1) model, the command is:

predict htgarch1, variance
(Here, htgarch1 is a name that helps us remember that the series is a variance series
for the ARCH(1) model; any other name the reader might want to give to the series will
work just as well.) The command:
tsline htgarch1

produces a time plot of this conditional variance series. To estimate a GARCH(1,1) model, the command is:

arch r_ftse L.r_ftse, arch(1) garch(1)
while for higher orders (for example a GARCH(3,4) model) only the values in the parentheses change:

arch r_ftse L.r_ftse, arch(4) garch(3)
All these commands are left as an exercise for the reader. The analysis and interpretation
of the results are similar to those discussed previously in this chapter.
The same models can also be estimated in Microfit. First specify the mean equation by typing:

r_ftse c r_ftse(-1)
then click on Run, which brings up the GARCH estimation menu. Here, a set of
options is provided, and in each case you need to define which model you want to
estimate from six possible choices:
GARCH
GARCH-M
AGARCH
AGARCH-M
EGARCH
EGARCH-M
Leaving aside cases 3 and 4 of absolute GARCH models, all the rest of the options are
familiar to us from the theory in this chapter. So, to estimate a GARCH-M(1,1) model,
choose option 2 from this list and click OK. Then Microfit requires you to specify the
underlying distribution. This is left as the default case, which is the z-distribution. After
clicking OK again a new window appears, where the orders of ARCH and GARCH terms
in our model must be specified. First, type the number of the GARCH terms and then,
separated by “;”, the number of ARCH terms. Therefore, for GARCH-M(1,1) type:
1 ; 1
Then click Run again, which takes you to the window where you can specify the
number of additional variables to be included in the Variance equation (we can leave
this blank for this example). After clicking Run again the results appear, after a number
of iterations that are shown on the screen while Microfit executes the calculations. The
analysis and interpretation are similar to the cases that have been examined above. The
rest of the ARCH/GARCH models have been left as exercises for the reader.
Empirical illustrations of ARCH/GARCH models

A GARCH model of UK GDP and the effect of socio-political instability

Asteriou and Price (2001) examined the effect of political instability on the growth rate of UK GDP (quarterly data, 1961q2–1997q4), estimating a GARCH(1,1) model of the form:

Δln(Yt) = a0 + Σ_{i=1}^{4} a1i Δln(Yt−i) + Σ_{i=0}^{4} a2i Δln(It−i) + Σ_{j=1}^{6} dj Xjt + ut   (14.35)

ut ∼ N(0, ht)   (14.36)

ht = b1 e²t−1 + b2 ht−1   (14.37)

where Yt is GDP, It is investment and the Xjt are political instability proxies.
Table 14.14 GARCH estimates of GDP growth with political uncertainty proxies
Dependent variable: Δln(Yt); Sample: 1961q2–1997q4

Parameter            1                2                3                 4
Variance equation
Constant             0.00001 (1.83)   0.00001 (1.66)   0.000006 (1.16)   0.00006 (1.71)
ARCH(1)              0.387 (3.27)     0.314 (2.44)     0.491 (4.18)      0.491 (4.46)
GARCH(1)             0.485 (2.95)     0.543 (3.14)     0.566 (6.21)      0.566 (3.36)
R2                   0.006            0.099            0.030             0.104
S.E. of dep. var.    0.010            0.010            0.010             0.010
S.E. of regression   0.010            0.010            0.010             0.010
The results from the alternative specification, with the inclusion of the PCs in place
of the political instability variables (Table 14.14, model 3) are similar to the previ-
ous model. Negative and significant coefficients were obtained for the first and the
third components.
Asteriou and Price (2001) also estimated all the above specifications without including
the investment terms. The results for the case of the political uncertainty dummies are
presented in the same table in model 4, and show clearly that the strong negative
direct impact remains. Thus, the impact of political uncertainty on growth does not
appear to operate through investment growth, leaving open the possibility of political
uncertainty affecting the level of investment.
To examine whether GDP uncertainty itself affects growth, and whether political instability increases that uncertainty, a GARCH-M(1,1) specification was also estimated:

Δln(Yt) = a0 + Σ_{i=1}^{4} a1i Δln(Yt−i) + Σ_{i=0}^{4} a2i Δln(It−i) + γht + ut   (14.38)

ut ∼ N(0, ht)   (14.39)

ht = b1 u²t−1 + b2 ht−1 + Σ_{i=1}^{6} b3i Xit   (14.40)
[Table 14.15 GARCH-M(1,1) estimates of GDP growth with political uncertainty proxies (models 1–3)]
That is, the growth rate of GDP is modelled as an AR process, including four lags of
the growth rate of investments and the variance of the error term. Equation (14.39)
defines ht as the variance of the error term in Equation (14.38), and Equation (14.40)
states that the variance of the error term is in turn a function of the lagged variance
and lagged squared residuals as well as the political instability proxies Xit . To accept the
first hypothesis it would be necessary for γ to be non-zero, while to accept the second
hypothesis there should be evidence of positive statistically significant estimates for
the coefficients of the political instability proxies (b3i ).
Table 14.15, model 1 reports the results of estimating a GARCH-M(1,1) model without
political instability proxies. (Again, as in the previous section, the reported results are
only from the parsimonious models.) The model is satisfactory given that the parame-
ters (b1 , b2 ) are strongly significant. The inclusion of the ‘in mean’ specification turns
out to be redundant as γ is insignificant, suggesting that GDP uncertainty does not
itself affect GDP growth. However, this turns out to be misleading and follows from
the fact that political factors are ignored.
In estimating a GARCH-M(1,1) model including the political dummies in the variance
equation (see Table 14.15, model 2), Asteriou and Price observed that all the political
instability variables – with the exception of REGIME – entered the equation with the
expected positive sign, indicating that political uncertainty increases the variance of
GDP growth. All variables were statistically significant. The ‘in mean’ term is in this
case highly significant and negative. The results from the alternative specification, with
the inclusion of the PCs in the place of the political instability variables (Table 14.15,
model 3) are similar to the previous one, with the exception that positive and significant
coefficients were obtained only for the fifth component.
Continuing, Asteriou and Price estimated more general GARCH-M(1,1) models,
first including the political dummies and the PCs in the growth equation, and then
including political dummies and PCs in both the growth and the variance equation.
316 Time Series Econometrics
With the first version of the model they wanted to test whether the inclusion of
the dummies in the growth equation would affect the significance of the ‘in mean’
term which captures the uncertainty of GDP. Their results, presented in Table 14.16,
showed that GDP growth was significantly affected only by political uncertainty, cap-
tured either by the dummies or by the PCs, denoting the importance of political factors
other than the GARCH process. (We report here only the results from the model with
the political uncertainty dummies. The results with the PCs are similar but are not
presented for economy of space. Tables and results are available from the authors
on request.)
The final and most general specification was used to capture both effects stemming
from political uncertainty, namely the effect of political uncertainty on GDP growth,
and its effect on the variance of GDP. Asteriou and Price’s results are presented in
Table 14.17. After the inclusion of the political dummies in the variance equation,
the model was improved (the political dummies significantly altered the variance of
GDP), but the effect on GDP growth came only from the political uncertainty prox-
ies that were included in the growth equation. The ‘in mean’ term was negative and
insignificant.
The final conclusion of Asteriou and Price (2001) was that political instability has
two identifiable effects. Some measures impact on the variance of GDP growth; others
directly affect the growth itself. Instability has a direct impact on growth and does not
operate indirectly via the conditional variance of growth.
Exercise 14.1
The file arch.wf1 contains daily data for the logarithmic returns of the FTSE-100 (named r_ftse)
and three more stocks of the UK stock market (named r_stock1, r_stock2 and r_stock3,
respectively). For each of the stock series do the following:
(a) Estimate an AR(1) up to AR(15) model and test the individual and joint significance
of the estimated coefficients.
(b) Compare AIC and SBC values of the above models and, along with the results for
the significance of the coefficients, conclude which will be the most appropriate
specification.
(c) Re-estimate this specification using OLS and test for the presence of ARCH(p) effects.
Choose several alternative values for p.
(d) For the preferred specification of the mean equation, estimate an ARCH(p) model
and compare your results with the previous OLS results.
(e) Obtain the conditional variance and conditional standard deviations series and
rename them with names that will show from which model they were obtained
(for example SD_ARCH6 for the conditional standard deviation of an ARCH(6)
process).
(f ) Estimate a GARCH(q,p) model, obtain the conditional variance and standard devia-
tion series (rename them again appropriately) and plot them against the series you
have already obtained. What do you observe?
(g) Estimate a TGARCH(q,p) model. Test the significance of the TGARCH coefficient.
Is there any evidence of asymmetric effects?
(h) Estimate an EGARCH(q,p) model. How does this affect your results?
(i) Summarize all models in one table and comment on your results.
Exercise 14.2
You are working in a financial institution and your boss proposes to upgrade the finan-
cial risk-management methodology the company uses. In particular, to model the
FTSE-100 index your boss suggests estimation using an ARCH(1) process. You disagree
and wish to convince your boss that a GARCH(1,1) process is better.
(a) Explain, intuitively first, why a GARCH(1,1) process will fit the returns of FTSE-100
better than an ARCH(1) process. (Hint: You will need to refer to the stylized facts
of the behaviour of stock indices.)
(b) Prove your point with the use of mathematics. (Hint: You will need to mention
ARCH(q) processes here.)
(c) Estimate both models and try to analyse them in such a way that you can convince
your boss about the preferability of the model you are proposing. Check the condi-
tional standard deviation and conditional variance series as well. (Hint: Check the
number of iterations and talk about computational efficiency.)