NBER WORKING PAPERS SERIES
MEASURING AND TESTING THE IMPACT OF
NEWS ON VOLATILITY
Robert F. Engle
Victor K. Ng
Working Paper No. 3681
NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue
Cambridge, MA 02138
April 1991
This paper is part of NBER's research program in Financial
Markets and Monetary Economics. Any opinions expressed are those
of the authors and not those of the National Bureau of Economic
Research.
NBER Working Paper #3681
April 1991
MEASURING AND TESTING THE IMPACT OF
NEWS ON VOLATILITY
ABSTRACT
This paper introduces the News Impact Curve to measure how
new information is incorporated into volatility estimates. A
variety of new and existing ARCH models are compared and
estimated with daily Japanese stock return data to determine the
shape of the News Impact Curve. New diagnostic tests are
presented which emphasize the asymmetry of the volatility
response to news. A partially non-parametric ARCH model is
introduced to allow the data to estimate this shape. A
comparison of this model with the existing models suggests that
the best models are one by Glosten, Jagannathan and Runkle (GJR)
and Nelson's EGARCH. Similar results hold on a pre-crash sample
period but are less strong.
Robert F. Engle
Chairman and Professor
Department of Economics
University of California,
San Diego
La Jolla, CA 92093
Victor K. Ng
Assistant Professor
of Finance
School of Business
Administration
University of Michigan
Ann Arbor, MI 48100-1234
MEASURING AND TESTING THE IMPACT OF NEWS ON VOLATILITY
Robert F. Engle, Professor of Economics
University of California, San Diego
Victor K. Ng, Assistant Professor of Finance
University of Michigan, Ann Arbor
It is now well established that volatility is predictable in most financial markets. In
a recent survey by Bollerslev et al. (1990), over 200 articles were cited which estimated or
examined ARCH or alternative models of time-varying heteroskedasticity. With this
growth in interest and applications, there has also grown a literature on alternative models
which are designed to allow different features of the data to be reflected in the model. Some
of these are tightly parametric models while others are non-parametric in spirit.

In this paper, we suggest a new metric with which these volatility models can be
compared. We discuss some of the alternative models which are being tried and introduce
several models of our own which should nest many of the existing models. We will also
suggest several new diagnostic tests for volatility models.
In the next section, we discuss several models of predictable volatility and introduce
the idea of a News Impact Curve which characterizes the impact of innovations on
volatility implicit in a volatility model. In section III, we suggest several new diagnostic
tests based on the News Impact Curve. In section IV, a partially non-parametric ARCH
model is introduced. Section V presents and compares empirical estimates of several
volatility models using a Japanese stock returns series. The new diagnostic tests are
employed to check the adequacy of the models. In section VI the partially non-parametric
model is estimated and compared with the others, and in section VII, the best models are
reestimated on a pre-crash sample period. Section VIII concludes the paper.
II: MODELS OF PREDICTABLE VOLATILITY
The conditional mean and variance of a time series {y_t}, given the past information set F_{t-1}, are defined by

(1)    m_t = E(y_t | F_{t-1}),    ε_t = y_t - m_t,    h_t = V(y_t | F_{t-1})

where h_t is in general a non-negative random variable. The precise parametrization of this
conditional variance function is, however, a matter of econometric specification, just as is the
specification of the mean.
In Engle (1982) several alternative formulations were discussed, but the one
developed in most detail was the p-th order autoregressive model:

(2)    h_t = ω + Σ_{i=1}^{p} α_i·ε²_{t-i}

which was generalized to the GARCH(p,q) model by Bollerslev (1986):

(3)    h_t = ω + Σ_{i=1}^{p} α_i·ε²_{t-i} + Σ_{i=1}^{q} β_i·h_{t-i}
The GARCH model is equivalent to an infinite-order ARCH model and often provides a
highly parsimonious lag shape. Empirically these models have been very successful, with the
GARCH(1,1) the general favorite in the vast majority of cases. Furthermore, these
applications typically reveal that there is long-term persistence in the effects of shocks in
period t on the conditional volatility in period t+s for large s. That is, there typically
appears to be a unit root in the autoregressive polynomial associated with (2) or (3).
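To make the recursion in (3) concrete, the following sketch (not part of the original paper) computes a GARCH(1,1) conditional variance path for a given residual series; the parameter values are purely illustrative.

```python
import numpy as np

def garch11_variance(eps, omega, alpha, beta, h0=None):
    """GARCH(1,1) recursion of equation (3):
    h_t = omega + alpha * eps_{t-1}**2 + beta * h_{t-1}."""
    h = np.empty_like(eps, dtype=float)
    # start at the unconditional variance unless a starting value is supplied
    h[0] = h0 if h0 is not None else omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

# illustrative use: alpha + beta close to one gives long-lived volatility shocks
eps = np.random.default_rng(0).standard_normal(1000)
h = garch11_variance(eps, omega=0.02, alpha=0.30, beta=0.65)
```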
In spite of the apparent success of these simple parametrizations, there are some
features of the data which these models are unable to capture. The most interesting of
these is the "leverage" effect emphasized by Nelson (1990), based on an argument of
Black (1976). Statistically, this effect means that negative surprises to asset markets increase
predictable volatility more than positive surprises. Thus the conditional variance function
ought not be constrained to be symmetric in past ε's. Nelson (1990) proposed the
exponential GARCH or EGARCH model:
(4)    log h_t = ω + β·log h_{t-1} + α·[ |ε_{t-1}|/√h_{t-1} - (2/π)^{1/2} ] + γ·ε_{t-1}/√h_{t-1}

which is asymmetric because the level of ε_{t-1} relative to its standard deviation is included
with coefficient γ, which would typically be negative, reflecting that positive shocks generate
less volatility, all else being equal. Shocks are measured relative to their standard
deviations. The use of absolute errors and logs increases the impact of large shocks on the
next period's conditional variance.
A comparison between the GARCH(1,1) model and the EGARCH(1,1) suggests an
interesting metric with which to analyze all general forms of conditional heteroskedasticity.
How does new information affect the next period variance? Holding constant the
information dated t-2 and earlier, one can examine the implied relation between ε_{t-1} and
h_t. This curve might be called the News Impact Curve. In the GARCH model this curve is
a quadratic centered on ε_{t-1} = 0. For the EGARCH, it has its minimum at ε_{t-1} = 0 and is
exponentially increasing in both directions but with different parameters. In particular, the
relation for the EGARCH evaluated at a conditional variance h is:
(5)    h_t = A·exp[ (α + γ)/√h · ε_{t-1} ]   for ε_{t-1} > 0, and
       h_t = A·exp[ (γ - α)/√h · ε_{t-1} ]   for ε_{t-1} < 0,

where A ≡ h^β · exp[ ω - α·(2/π)^{1/2} ].
In figure 1, these are compared with the GARCH(1,1) for γ < 0 but α + γ > 0. If the curves
were extrapolated, the EGARCH would have the higher variance in both directions as the
exponential curve eventually dominates the quadratic. Thus, from the News Impact Curve,
the EGARCH model differs from the standard GARCH model in two main respects: [1]
EGARCH allows good news and bad news to have a different impact on volatility while the
standard GARCH model does not, and [2] EGARCH allows big news to have much more
impact on volatility than the standard GARCH model.
FIGURE 1: News Impact Curves of the GARCH(1,1) and EGARCH(1,1) models (h_t plotted against ε_{t-1}).
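As an illustration of the two curves in figure 1, the sketch below (not from the paper) evaluates the GARCH(1,1) and EGARCH(1,1) News Impact Curves, the latter using equation (5); the parameter values are only indicative of the signs and magnitudes discussed in the text.

```python
import numpy as np

def nic_garch(eps, omega, alpha, beta, h):
    """GARCH(1,1) News Impact Curve: a quadratic in eps with h_{t-1} held fixed at h."""
    return omega + beta * h + alpha * eps ** 2

def nic_egarch(eps, omega, alpha, beta, gamma, h):
    """EGARCH(1,1) News Impact Curve, equation (5):
    h_t = A * exp[(alpha + gamma)/sqrt(h) * eps] for eps > 0 and
    h_t = A * exp[(gamma - alpha)/sqrt(h) * eps] for eps < 0,
    with A = h**beta * exp(omega - alpha * sqrt(2/pi))."""
    A = h ** beta * np.exp(omega - alpha * np.sqrt(2.0 / np.pi))
    slope = np.where(eps >= 0.0, alpha + gamma, gamma - alpha) / np.sqrt(h)
    return A * np.exp(slope * eps)

# evaluate both curves on a grid, holding h_{t-1} at an illustrative level
eps_grid = np.linspace(-5.0, 5.0, 201)
garch_curve = nic_garch(eps_grid, omega=0.02, alpha=0.30, beta=0.65, h=0.64)
egarch_curve = nic_egarch(eps_grid, omega=-0.07, alpha=0.49, beta=0.90, gamma=-0.15, h=0.64)
```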
The News Impact Curve can be examined for many other models. A great many of
the alternatives are symmetric, such as Schwert's (1990) standard deviation model:

(6)    h_t = [ ω + Σ_{i=1}^{p} α_i·|ε_{t-i}| ]²
which is quadratic in the news, as is the augmented ARCH model of Bera and Lee (1989). Other
symmetric functions are implied by Engle and Bollerslev (1986), who fit both

(7)    h_t = ω + α·|ε_{t-1}|^γ + β·h_{t-1}

and

(8)    h_t = ω + α·[ 2·Φ(|ε_{t-1}|/δ) - 1 ] + β·h_{t-1}

where Φ(·) is the cumulative distribution function of a standard normal. If γ < 2 and if
δ > 0, then both have a News Impact Curve which is symmetric but with reduced response
to extreme news. This was found empirically. The multiplicative ARCH model of Mihoj (1987),
Geweke (1986) and Pantula (1986) relates the log of h_t to the log of lagged squared
residuals. Thus the news impact curve is given by:

(9)    h_t = A·|ε_{t-1}|^{2α}, where A is a constant,

which therefore has the same shape as that in (7), although it goes automatically through
the origin, which appears to be a drawback of this model discussed by Engle and Bollerslev
(1986). Similarly, Higgins and Bera (1990) introduce a non-linear ARCH model which has
a constant elasticity of substitution between terms in an ordinary ARCH or GARCH
model. Again this implies a power function as in (7) and (9). Finally, Friedman and
Kuttner (1990) introduce a modified ARCH model which gives smaller coefficients to large
residuals, for the purpose of showing that large shocks have lower persistence than small
shocks. A similar finding was illustrated by the options pricing results in Engle and
Mustafa (1989) and Schwert (1990). In each case, the extreme portions of the News Impact
Curve are reduced in this specification.
To allow an asymmetric impact response curve centered at a non-zero ε_{t-1}, other
extensions are needed. Possibly the simplest is proposed by Engle (1990), which simply
allows the minimum of the News Impact Curve for the GARCH model to lie somewhere other
than the origin. The model, called the asymmetric GARCH or AGARCH model, is

(10)    h_t = ω + α·(ε_{t-1} + γ)² + β·h_{t-1}
            = ω* + α·ε²_{t-1} + γ*·ε_{t-1} + β·h_{t-1}

where ω* = ω + α·γ² and γ* = 2·α·γ. By the Black or leverage effect we expect and find
that γ and γ* are negative, so that the minimum of the News Impact Curve lies to the right
of the origin at ε_{t-1} = -γ. Notice that even though ε_{t-1} can be large, it cannot drive the
conditional variance negative as long as the square is in the specification as well. The
Schwert (1990) specification will give this formulation as well.
A closely related model, which allows the minimum point of the News Impact Curve
to depend upon the standard deviation, is called the Non-linear Asymmetric GARCH or
NGARCH. It is formulated as:

(11)    h_t = ω + β·h_{t-1} + α·(ε_{t-1} + γ·√h_{t-1})².

The minimum of the News Impact Curve of the NGARCH model is at ε_{t-1} = -γ·√h_{t-1},
which is also to the right of the origin when γ is negative.
Following Nelson's lead, a VGARCH model can also be formulated as:

(12)    h_t = ω + β·h_{t-1} + α·(ε_{t-1}/√h_{t-1} + γ)²

where the name VGARCH comes from the representation of the standardized residual
ε_t/√h_t = v_t. Interestingly enough, the minimum of the News Impact Curve of the VGARCH
model is also at ε_{t-1} = -γ·√h_{t-1}. However, the slope of this News Impact Curve is
different from that of the NGARCH model.
Another important suggestion, initially proposed by Glosten, Jagannathan and
Runkle (1989) and more recently analyzed by Zakoian (1990), allows the two sides of the
GARCH News Impact Curve to have different slopes. The model is:

(13)    h_t = ω + β·h_{t-1} + α·ε²_{t-1} + γ·S^-_{t-1}·ε²_{t-1},
        S^-_{t-1} = 1 if ε_{t-1} < 0,  S^-_{t-1} = 0 otherwise.
In figure 2, the News Impact Curve is plotted for the GJR model and the (uncentered)
AGARCH model.
FIGURE 2: News Impact Curves of the GJR model and the (uncentered) AGARCH model (h_t plotted against ε_{t-1}).
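A corresponding sketch (again not from the paper) for the two asymmetric models of figure 2: the AGARCH curve of equation (10) is a shifted quadratic, while the GJR curve of equation (13) is a quadratic with a steeper branch for negative news; all parameter values are illustrative.

```python
import numpy as np

def nic_agarch(eps, omega, alpha, beta, gamma, h):
    """AGARCH News Impact Curve, equation (10): minimum at eps = -gamma."""
    return omega + beta * h + alpha * (eps + gamma) ** 2

def nic_gjr(eps, omega, alpha, beta, gamma, h):
    """GJR News Impact Curve, equation (13): slope alpha for positive eps,
    alpha + gamma for negative eps."""
    return omega + beta * h + (alpha + gamma * (eps < 0)) * eps ** 2

eps_grid = np.linspace(-5.0, 5.0, 201)
h_bar = 0.64  # lagged variance held fixed (illustrative)
agarch_curve = nic_agarch(eps_grid, omega=0.02, alpha=0.32, beta=0.69, gamma=-0.11, h=h_bar)
gjr_curve = nic_gjr(eps_grid, omega=0.02, alpha=0.17, beta=0.71, gamma=0.26, h=h_bar)
```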
Finally, there are several papers which have used non-parametric approaches to
the specification and estimation of ARCH models. In some cases, these have started with
squared residuals and in others even more generous parameterizations are allowed. See,
for example, Pagan and Schwert (1990), Gallant, Hsieh and Tauchen (1990), and Gourioux
and Monfort (1990). In each case, the non-parametric procedure forces a short lag
structure to be used, and in most cases, the heteroskedasticity in the residuals of the
squared error regression is not acknowledged in the estimation procedure, which leads to
inefficient estimation within the class.
III: DIAGNOSTIC TESTS BASED ON THE NEWS IMPACT CURVE
As we have discussed in section II, implicit in any choice of volatility model is a
particular News Impact Curve. The standard GARCH model has a News Impact Curve
which is symmetric and centered at ε_{t-1} = 0. That is, positive and negative surprises of the
same magnitude would produce the same amount of volatility. Also, larger innovations
would create more volatility at a rate proportional to the square of the size of the
innovation. If, in fact, a negative innovation causes more volatility than a positive
innovation of the same size, then the GARCH model will under-predict the amount of
volatility following bad news and over-predict the amount of volatility following good
news. Furthermore, if large innovations cause more volatility than would be allowed by a
quadratic function, then the standard GARCH model will also under-predict volatility
after a large shock and over-predict volatility after a small shock. These observations
suggest at least three new diagnostic tests for volatility models, which we will call the
Sign-Bias test, the Negative-Size-Bias test, and the Positive-Size-Bias test. A variety
of closely related tests can also be created using the same approach, and these tests can be
carried out individually or jointly. All these tests can also be applied to other volatility
models.
Each of these test statistics examines whether the squared standardized residuals
are indeed independent and identically distributed. If there is information in the past which
could predict these residuals, then the variance process was misspecified. Defining the
standardized residual

v_t ≡ ε_t/√h_t,  where y_t = m_t + ε_t and h_t is the estimated conditional variance,

and letting z_{t-1} be a vector of measurable functions of the past information set, including
ε_{t-1} in particular, it is proposed to run the regression:

(14)    v²_t = a + z_{t-1}'b + u_t.

Intuitively, if the model is correctly specified, then b = 0 and v²_t is i.i.d. Thus t-statistics and
F-statistics on b should have the familiar limiting distributions. By selecting different
measures of z, the different tests are constructed. By testing one variable at a time, tests
are formulated against a particular alternative, and by allowing several z's, joint tests are
constructed. The exact limiting distribution will be discussed after the tests are
introduced.
As the names suggest, the Sign-Bias test focuses on the different impacts that
positive and negative innovations have on volatility which are not predicted by the null
volatility model. The Negative-Size-Bias test focuses on the different impacts that large
and small negative innovations have on volatility which are not predicted by the
null model. The Positive-Size-Bias test focuses on the different impacts that large and
small positive innovations have on volatility which are not explained by the null
model. It is important to distinguish between positive and negative innovations when
examining the "size" effect, as an important piece of bad news might have a very different
impact on volatility than an important piece of good news.
Let S^-_{t-1} be a dummy variable that takes the value 1 if ε_{t-1} is negative and 0
otherwise. The Sign-Bias test statistic is defined as the t-ratio for the coefficient b in the
regression equation:

(15)    v²_t = a + b·S^-_{t-1} + z_{t-1}'γ + u_t

where there may be one or more other variables z_{t-1} in the regression.

The Negative-Size-Bias test statistic is defined as the t-ratio of the coefficient b in
the OLS regression:

(16)    v²_t = a + b·S^-_{t-1}·ε_{t-1} + z_{t-1}'γ + u_t.

This test examines whether the more extreme negative ε's are associated with more
extreme biases. The corresponding Positive-Size-Bias test statistic is defined as the
t-ratio of the coefficient b in the same regression equation with S^+_{t-1} ≡ 1 - S^-_{t-1} in
place of S^-_{t-1}:

(17)    v²_t = a + b·S^+_{t-1}·ε_{t-1} + z_{t-1}'γ + u_t.
Alternatively, we can examine even more extreme values of ε for particular biases.
One approach is to use a variable such as S^-_{t-1}·ε²_{t-1}, so that the regressor grows with
the square of the negative innovation. Another is to define the order statistics of the ε's,
so that D^θ_t corresponds to the θ-th percentile of the set {ε_t}. For example, regressions
including either D^5_{t-1} or D^{10}_{t-1} would be particularly sensitive to the volatility
following extreme negative innovations.
These diagnostic test statistics can also be used as summary statistics on the raw
data to explore the nature of conditional heteroskedasticity in the data series without first
imposing a volatility model. In this case, ε_t and v_t would simply be defined as follows:

ε_t ≡ y_t - μ,    v_t ≡ ε_t / σ

where μ and σ are the unconditional mean and standard deviation of y_t, respectively. Using
these ε_t's and v_t's, the five summary statistics can be computed based on the regression
analyses described above.
Finally, the model can be subjected to all of these tests at once by running the
regression:

v²_t = a + b1·S^-_{t-1} + b2·S^-_{t-1}·ε_{t-1} + b3·S^+_{t-1}·ε_{t-1} + b4·S^-_{t-1}·ε²_{t-1} + b5·S^+_{t-1}·ε²_{t-1} + b6·D^{10}_{t-1} + b7·D^{90}_{t-1} + u_t

and testing that all the b's are equal to zero. This can simply be T·R² or the F statistic for
the regression. As there is collinearity among these regressors, the most powerful test
against some alternative will not be the joint test. In practice, the use of the first three of
these terms seems adequate to reveal the biases in a wide class of heteroskedastic functions
for the data set in this study.
In this paper four test statistics will be reported for each model. The regression

(18)    v²_t = a + b1·S^-_{t-1} + b2·S^-_{t-1}·ε_{t-1} + b3·S^+_{t-1}·ε_{t-1} + u_t

is computed, and the t-ratios for b1, b2 and b3 are called the sign bias, the negative size
bias and the positive size bias tests respectively. The joint test is the F statistic for this
regression, which is presented along with its p-value. Finally, three other versions are
computed but only reported when they reveal a different behavior. To test the extremes of
the size bias, D^{10}_{t-1} and D^{90}_{t-1} are entered into (18); alternatively, D^5_{t-1} and D^{95}_{t-1} are added
to (18); finally, rather than the order statistics, S^-_{t-1}·ε²_{t-1} and S^+_{t-1}·ε²_{t-1} are added to (18).
Each of these regressions has 5 coefficients. If the p-value is smaller than the joint one for
(18), this fact is noted.
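A minimal sketch of how the regression (18) and its associated test statistics can be computed is given below. It is not the authors' code; it uses textbook homoskedastic OLS formulas for the t-ratios and the F statistic, with `eps` the residual series and `h` the fitted conditional variance under the null model.

```python
import numpy as np

def bias_tests(eps, h):
    """Sign-bias, negative-size-bias and positive-size-bias tests from regression (18):
    regress v_t^2 = eps_t^2 / h_t on a constant, S-_{t-1}, S-_{t-1}*eps_{t-1} and
    S+_{t-1}*eps_{t-1}; return the three t-ratios and the joint F statistic."""
    v2 = eps[1:] ** 2 / h[1:]             # squared standardized residuals
    lag = eps[:-1]
    s_neg = (lag < 0).astype(float)       # S- dummy
    X = np.column_stack([np.ones_like(lag), s_neg, s_neg * lag, (1.0 - s_neg) * lag])
    b = np.linalg.lstsq(X, v2, rcond=None)[0]
    resid = v2 - X @ b
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t_ratios = b[1:] / se[1:]             # sign, negative-size, positive-size bias tests
    r2 = 1.0 - (resid @ resid) / ((v2 - v2.mean()) ** 2).sum()
    f_stat = (r2 / (k - 1)) / ((1.0 - r2) / (n - k))
    return t_ratios, f_stat
```

Applying the same function with the residuals standardized by their unconditional standard deviation (h held constant) gives the raw-data summary versions of the tests described above.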
The exact asymptotic distribution of the tests can be derived by considering the LM
test as presented in Godfrey (1979) or Engle (1984). Suppose

h_t = h(x_{t-1}'α + z_{t-1}'γ)

and we wish to test that γ = 0. Let α̂ be the MLE of α when γ = 0, ĥ_t the
estimated variance for observation t, and ε̂_t the estimated residual. Then the LM test is

(19)    LM = T·R² from the regression
        ε̂²_t/ĥ_t = a + [h′_t/ĥ_t]·x_{t-1}'c + [h′_t/ĥ_t]·z_{t-1}'b + u_t

where h′_t is the scalar derivative of h(·). When the null hypothesis is true, the scores satisfy
a central limit theorem, and the information matrix converges in probability to a constant,
then:

LM → χ²_k
where k is the number of variables in z_{t-1}. The difference between the regressions in (19)
and (14) is mainly in the omission of the term in x_{t-1}. As the R² can only be decreased by
omitting a set of variables, the distribution of the test statistic from (14) will be less than
or equal to the LM test statistic and therefore will have a size less than or equal to the
nominal size. That is, T·R² from (14) will have a limiting distribution which is less than or
equal to a chi-square with degrees of freedom equal to the number of variables in z_{t-1}.
While it would be easy to construct a test which has the correct asymptotic size, the test
would have different variables depending on which model was taken as the null. If the goal
is to be certain that the model is able to mimic the observed movements in conditional
heteroskedasticity, then the tests described above are natural.

The second implication of the LM derivation is the choice of variables to use for z.
To find an optimal test against an alternative z*, one should use z = z*·[h′_t/ĥ_t].
Corresponding to any test z, the implied alternative is z* = z/[h′_t/ĥ_t].
IV: A PARTIALLY NON-PARAMETRIC NEWS IMPACT MODEL

It may well be that no simply parameterized model will pass all of these diagnostic
tests. Hence it is appropriate to seek non-parametric models which allow all of these
properties to be determined by the data directly. In particular, if the News Impact Curve
is the object of the analysis, then it is natural to estimate it non-parametrically. In this
section, a simple version of this model is suggested. Because of the long memory
characteristic of most variance processes, the decay parameter is specified parametrically.
This mixture of parametric and non-parametric parts is labeled partially non-parametric to
distinguish it from Engle and Gonzalez (1990).
Let the range of {ε_t} be divided into m intervals with break points τ_i. Let m⁻ be
the number of intervals less than zero and m⁺ be the number of positive intervals, so that
m = m⁺ + m⁻. Denote these boundaries by the numbers {τ_{-m⁻}, ..., τ_{-1}, τ_0, τ_1, ..., τ_{m⁺}}.
These intervals need not be equal in size and there need not be the same number on each
side of τ_0. For convenience and the ability to test symmetry, τ_0 = 0 is a natural selection.
Now define

P_{it} = 1 if ε_t > τ_i, and 0 otherwise;    N_{it} = 1 if ε_t < τ_{-i}, and 0 otherwise.
Then a piecewise linear specification of the heteroskedasticity function is

(20)    h_t = ω + β·h_{t-1} + Σ_{i=0}^{m⁺} θ_i·P_{it-1}·(ε_{t-1} - τ_i) + Σ_{i=0}^{m⁻} δ_i·N_{it-1}·(ε_{t-1} - τ_{-i}).

This functional form is guaranteed to be continuous and is really a linear spline with knots
at the τ_i. Between 0 and τ_1 the slope will be θ_0, while between τ_1 and τ_2 it will be θ_0 + θ_1,
and so forth. Above τ_{m⁺} the slope will be the sum of all the θ's. Clearly, the shape will
be monotonic if the partial sums at each point are of the same sign.
As the sample becomes larger, it is possible and desirable to increase m to obtain
more resolution. This is an example of the method of sieves approach to non-parametric
estimation. A larger value of m can be interpreted as a smaller bandwidth which will give
lower bias and higher variance to each point on the curve. If m is increased slowly as a
function of sample size, the procedure should asymptotically give a consistent estimate of
any News Impact Curve. In this case however, the rate of convergence and the standard
errors may be different from standard maximum likelihood results. On the other hand, if
m is held fixed, then the estimator will only produce a consistent estimate of the News
Impact Curve if (20) is correctly specified. In such a case, the standard errors will be given
in their usual form.
It should of course be pointed out that although the specification in (20) is capable
of generating a wide range of News Impact Curves, it is very simple with respect to the
impact of older information. All information is assumed to decay in an exponential fashion
with decay rate fi. News affects volatility in the same way in the long run as in the short
run. Obviously other terms could be added but this would substantially increase the
computaionai complexity.
Two simple approaches to choosing the τ_i could be used. They could be unequally
spaced, based for example on the order statistics, or they could be equally spaced. In the
example here, equally spaced bins were used with break points at iσ for i = 0, ±1, ±2, ±3, ±4,
where σ is the unconditional standard deviation of the dependent variable. Thus:

(21)    h_t = ω + β·h_{t-1} + Σ_{i=0}^{4} θ_i·P_{it-1}·(ε_{t-1} - iσ) + Σ_{i=0}^{4} δ_i·N_{it-1}·(ε_{t-1} + iσ)

with m⁺ = m⁻ = 4, so that there are 10 coefficients in the News Impact Curve.
Figure 3 below gives an example of the graph of a Partially Non-parametric or PNP News
Impact Curve.

FIGURE 3: An example of a Partially Non-parametric (PNP) News Impact Curve (h_t plotted against ε_{t-1}).
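The piecewise linear news term in (20)-(21) can be sketched as follows (an illustration, not the estimation code used in the paper); `theta` and `delta` hold the slope-increment coefficients θ_i and δ_i for the positive and negative sides.

```python
import numpy as np

def pnp_news_impact(eps_lag, theta, delta, sigma):
    """Piecewise linear news term of equation (21):
    sum_i theta_i * P_{i,t-1} * (eps_{t-1} - i*sigma)
    + sum_i delta_i * N_{i,t-1} * (eps_{t-1} + i*sigma),
    where P_{i,t-1} = 1 if eps_{t-1} > i*sigma and N_{i,t-1} = 1 if eps_{t-1} < -i*sigma."""
    pos = sum(th * np.maximum(eps_lag - i * sigma, 0.0) for i, th in enumerate(theta))
    neg = sum(de * np.minimum(eps_lag + i * sigma, 0.0) for i, de in enumerate(delta))
    return pos + neg

def pnp_variance(eps, omega, beta, theta, delta, sigma):
    """Conditional variance h_t = omega + beta * h_{t-1} + news term, equation (21)."""
    h = np.empty_like(eps, dtype=float)
    h[0] = sigma ** 2  # start the recursion at the unconditional variance
    for t in range(1, len(eps)):
        h[t] = omega + beta * h[t - 1] + pnp_news_impact(eps[t - 1], theta, delta, sigma)
    return h
```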
V: ESTIMATION OF JAPANESE STOCK RETURN VOLATILITY: 1980—1988
To compare and demonstrate the empirical properties of some of the above
mentioned volatility models, we apply these models to the daily returns series of the
Japanese TOPIX index. The data were obtained from the PACAP Databases provided by
the Pacific Basin Capital Market Research Center at the University of Rhode Island. In
this section, we will report our estimation and testing results for the parametric models for
the full sample period from January 1, 1980 to December 31, 1988. In the next section, we
estimate conditional volatility and the News Impact Curve using a non-parametric
approach, and compare the News Impact Curve obtained from the non-parametric method
to those obtained from the various parametric volatility models. In section VII, we check
the robustness of our results by reestimating some of our models using a shorter sample
period from January 1, 1980 to September 30, 1987.

As our focus is on the conditional variance rather than the conditional mean, we
concentrate on the unpredictable part of the stock returns, obtained through a procedure
similar to the one in Pagan and Schwert (1990). The procedure involves a day-of-the-week
effect adjustment and an autoregressive regression removing the predictable part of
the return series.
Let y_t be the daily return of the TOPIX index for day t. We first regressed y_t on a
constant and five day-of-the-week dummies (for Tue, Wed, Thu, Fri, and Sat) to get the
residual, u_t. The u_t was then regressed against a constant and u_{t-1}, ..., u_{t-6} to obtain
the residual, ε_t, which is our unpredictable stock return data.
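A minimal sketch of this two-step adjustment (not the original code) is given below; it assumes a `weekday` array coding Monday through Saturday as 0 through 5, with Monday the omitted category.

```python
import numpy as np

def unpredictable_returns(y, weekday, ar_lags=6):
    """Remove day-of-the-week effects and autocorrelation from a return series by
    two OLS regressions, returning the final residual (the 'unpredictable' return)."""
    # step 1: regress returns on a constant and five day-of-the-week dummies
    D = np.column_stack([np.ones_like(y)] +
                        [(weekday == d).astype(float) for d in range(1, 6)])
    u = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]
    # step 2: regress the residual on a constant and its own lags
    X = np.column_stack([np.ones(len(u) - ar_lags)] +
                        [u[ar_lags - j:len(u) - j] for j in range(1, ar_lags + 1)])
    target = u[ar_lags:]
    eps = target - X @ np.linalg.lstsq(X, target, rcond=None)[0]
    return eps
```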
The results for the above adjustment regressions and some summary statistics for
our unpredictable stock return series are reported below:
Day-of-the-week effect adjustment:

y_t = -0.0119 - 0.0907·TUE_t + 0.2025·WED_t + 0.0953·THU_t + 0.1100·FRI_t + 0.1629·SAT_t + u_t
(standard errors: 0.039, 0.055, 0.055, 0.055, 0.055, 0.060)

Autocorrelation adjustment:

u_t = -0.0002 + 0.1231·u_{t-1} - 0.0802·u_{t-2} + 0.0456·u_{t-3} - 0.0526·u_{t-4} + 0.0772·u_{t-5} - 0.0706·u_{t-6} + ε_t
(standard errors: 0.016 for the constant and 0.02 for each lag coefficient)

Summary statistics for the unpredictable stock returns:

Mean:                              0.0000
Variance:                          0.6397
Coefficient of Skewness:          -1.8927
Coefficient of Kurtosis:          71.3511
Ljung-Box(12) for the levels:      1.3251
Ljung-Box(12) for the squares:   406.6700
Sign-bias test:                   -6.26
Negative size-bias test:         -20.3
Positive size-bias test:           1.56
Joint test:                      139.0    [0.000]
Number of observations:           2532
From the Ljung-Box test statistic for 12th-order serial correlation in the levels
reported above, there is no significant serial correlation left in the stock returns series after
our adjustment procedure. The coefficient of skewness and the coefficient of kurtosis
indicate that the unpredictable stock returns, the ε_t's, have a distribution which is skewed
to the left and very fat-tailed. Furthermore, the Ljung-Box test statistic for 12th-order serial
correlation in the squares strongly suggests the presence of time-varying volatility. The
sign-bias, negative size-bias and positive size-bias test statistics introduced in the last
section are also computed. The sign bias and negative size bias tests are both highly
significant as t-statistics with one degree of freedom. The positive size bias test is not
particularly significant, although if the size term were dropped, it would be significant.
These statistics strongly indicate that the value of ε_{t-1} influences today's volatility.
Positive innovations appear to increase volatility regardless of the size, while large negative
innovations cause more volatility than small ones. The other three regressions, which
respectively add the 5% and 95% dummy variables, the 10% and 90% dummy variables,
and the S^-_{t-1}·ε²_{t-1} and S^+_{t-1}·ε²_{t-1} terms, all give lower F statistics (the latter is the highest) and all have p-values
of 0.0000.
Using the unpredictable stock returns series as the data series, the standard
GARCH(1,1) model as well as five other parametric models from section II which are
capable of capturing the leverage effect and the size effect are estimated. They are: the
Exponential-GARCH(1,1), the Asymmetric-GARCH(1,1), the VGARCH(1,1), the
Nonlinear-Asymmetric-GARCH(1,1) and the Glosten-Jagannathan-Runkle (GJR) model.
The estimations are performed using the Bollerslev-Wooldridge Quasi-Maximum
Likelihood approach. The adequacy of these models is then checked using the sign-bias, the
negative size-bias and the positive size-bias tests we have introduced in the last section.
The estimation and diagnostic results for each of these models are presented and discussed
one by one below. As a convention, the asymptotic standard errors are reported in
parentheses (·) and the Bollerslev-Wooldridge robust standard errors are reported in square
brackets [·].
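As a rough sketch of the quasi-maximum likelihood step (not the authors' implementation, and with the Bollerslev-Wooldridge robust covariance omitted for brevity), the Gaussian negative log-likelihood of the GARCH(1,1) model can be minimized numerically:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, eps):
    """Gaussian (quasi) negative log-likelihood of GARCH(1,1), up to a constant.
    params = (omega, alpha, beta)."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf                      # crude way of imposing the constraints
    h = np.empty_like(eps, dtype=float)
    h[0] = eps.var()                       # initialize at the sample variance
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(h) + eps ** 2 / h)

# illustrative call, assuming `eps` holds the unpredictable return series:
# result = minimize(garch11_negloglik, x0=(0.02, 0.1, 0.8), args=(eps,), method="Nelder-Mead")
```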
The GARCH(1,1) model

h_t = 0.0238 + 0.6860·h_{t-1} + 0.3299·ε²_{t-1}
      (0.003)   (0.011)          (0.008)
      [0.005]   [0.059]          [0.097]

Log-likelihood = -2356.03

Sign-Bias test:           -0.34
Negative Size-Bias test:  -2.79
Positive Size-Bias test:  -1.54
Joint test:                4.70   [.0028]
The GARCH(1,1) model assumes a symmetric news impact curve that centers at ε_{t-1} = 0.
It allows big innovations to produce more volatility than small ones following a quadratic
function. The significance of the parameters corresponding to the h_{t-1} term and the ε²_{t-1} term
confirms the existence of autoregressive conditional heteroskedasticity. However, the model
does not distinguish between positive and negative innovations, and the negative size-bias
test statistic indicates that there is a leverage effect in the data not captured by the
specification. Apparently big negative innovations cause more volatility than the
GARCH(1,1) model can explain. None of the other three regressions had a lower p-value,
although one of them had the same p-value.
The Exponential-GARCH(1,1) model

ln h_t = -0.0668 + 0.9012·ln h_{t-1} + 0.4927·[ |ε_{t-1}|/√h_{t-1} - (2/π)^{1/2} ] - 0.1450·ε_{t-1}/√h_{t-1}
         (0.010)   (0.007)              (0.016)                                   (0.011)
         [0.020]   [0.022]              [0.104]                                   [0.048]

Log-likelihood = -2344.03

Sign-Bias test:           -0.10
Negative Size-Bias test:  -1.92
Positive Size-Bias test:  -0.62
Joint test:                1.36   [.254]
The Exponential-GARCH model introduced by Nelson (1990) explicitly allows negative
and positive innovations to have different impacts on volatility. The significantly negative
coefficient corresponding to the term ε_{t-1}/√h_{t-1} confirms the presence of the leverage
effect. The "exponential" nature of the model also allows big innovations to have much
more impact on volatility than small innovations. Although the negative size-bias test
borders on significance, the tests indicate that the EGARCH model is quite successful in
capturing the impacts of news on volatility. In one of the alternative diagnostic regressions, the p-value
drops to .21 and the negative size-bias test becomes significant with a t-ratio of -2.56.
The Asymmetric-GARCH(1,1) model

h_t = 0.0216 + 0.6896·h_{t-1} + 0.3174·(ε_{t-1} - 0.1108)²
      (0.003)   (0.012)          (0.009)           (0.017)
      [0.005]   [0.055]          [0.088]           [0.030]

Log-likelihood = -2345.12

Sign-Bias test:           -1.49
Negative Size-Bias test:  -2.77
Positive Size-Bias test:  -1.49
Joint test:                3.38   [.017]
The Asymmetric-GARCH model manages to capture the leverage effect by allowing the
news impact curve to center on a non-zero ε_{t-1}. The model is attractive as it nests the
standard GARCH(1,1) model. Given the same ARCH and GARCH parameters, it offers
the same unconditional variance as the standard GARCH(1,1) model but a higher
unconditional fourth moment. The new intercept is significantly greater than zero. While
the asymmetric GARCH model did correct the sign-bias nature of the standard GARCH
model, as indicated by the insignificant sign-bias test statistic, the model seems to give
too little weight to big negative innovations. The significant negative size-bias test
statistic indicates that big negative innovations cause more volatility than the model can
explain.
The VGARCH(1,1) model

h_t = 0.0192 + 0.6754·h_{t-1} + 0.1508·(ε_{t-1}/√h_{t-1} - 0.1458)²
      (0.004)   (0.014)          (0.005)                 (0.031)
      [0.013]   [0.071]          [0.047]                 [0.052]

Log-likelihood = -2424.63

Sign-Bias test:           -1.49
Negative Size-Bias test:  -5.20
Positive Size-Bias test:  -0.09
Joint test:                9.33   [.000]
The VGARCH model takes virtually the same form as the Asymmetric-GARCH model but
with the normalized residuals replacing the residuals. The modification reflects the
alternative approach which considers the normalized residual, rather than the residual itself,
as "news". Like the Asymmetric-GARCH model, the VGARCH model allows a news
impact curve that centers on a non-zero ε_{t-1}. This again is highly significant. The
significant negative size-bias test statistic indicates that the VGARCH model suffers from
the same problem as the Asymmetric-GARCH model. It fails to capture entirely the fact
that big negative innovations do cause much more volatility than small negative
innovations.
The Nonlinear-Asymmetric GARCH(1,1) model

h_t = 0.0199 + 0.7253·h_{t-1} + 0.2515·(ε_{t-1} - 0.2683·√h_{t-1})²
      (0.002)   (0.010)          (0.008)            (0.036)
      [0.005]   [0.060]          [0.083]            [0.061]

Log-likelihood = -2335.34

Sign-Bias test:           -1.49
Negative Size-Bias test:  -3.12
Positive Size-Bias test:  -0.99
Joint test:                3.63   [.0125]
The Nonlinear-Asymmetric GARCH(1,1) model gives more weight to extreme
innovations by allowing the minimum of the News Impact Curve to depend on the past
conditional standard deviation. While the model passes the Sign-bias test and the Positive
size-bias test, the Negative size-bias test statistic is significant at the 1% level, indicating
that big negative innovations still produce more volatility than the model can explain.
The Glosten-Jagannathan-Runkle (GJR) model

h_t = 0.0241 + 0.7053·h_{t-1} + 0.1672·ε²_{t-1} + 0.2636·S^-_{t-1}·ε²_{t-1}
      (0.003)   (0.013)          (0.018)          (0.020)
      [0.005]   [0.045]          [0.036]          [0.102]

Log-likelihood = -2333.11

Sign-Bias test:           -1.08
Negative Size-Bias test:  -0.99
Positive Size-Bias test:  -0.99
Joint test:                1.59   [.189]
The GJR model, designed explicitly for the leverage effect, has a News Impact Curve that
centers at ε_{t-1} = 0 but has a much steeper slope for negative ε_{t-1}'s. The significance of the
coefficient corresponding to the term S^-_{t-1}·ε²_{t-1} confirms the existence of the leverage effect.
There is no evidence of unexplained sign or size bias on the positive or negative side.

Overall, the Exponential-GARCH model and the GJR model seem to outperform
all other models in capturing the dynamic behavior of the Japanese stock return volatility,
with the GJR model having a higher log-likelihood. To further our understanding of
these different volatility models, some summary statistics including the mean, standard
deviation, minimum, maximum, skewness and kurtosis are produced for each of the
estimated conditional variance series. They are reported as follows:
Summary Statistics

              Mean     Std. Dev.   Min.       Max.      Skew.     Kurto.
ε²_t          0.6397    5.366      2.8e-8     236.60    37.038   1543.47
h GARCH       0.7483    3.124      0.0842      90.83    21.173    523.20
h EGARCH      0.8669   10.555      0.0491     485.27    40.843   1799.19
h AGARCH      0.7367    3.047      0.0807      87.78    21.014    515.64
h VGARCH      0.5243    0.674      0.0943      20.16    15.279    373.39
h NGARCH      0.6961    2.574      0.0847      64.47    18.215    392.61
h GJR         0.7561    3.492      0.0885     104.21    21.950    559.01
The conditional variance series produced by our winners, the EGARCH model and the
GJR model, have the highest variation over time. The estimated conditional variance of the
EGARCH ranges from a low of 0.0491 to a high of 485.27, compared to 0.0842 and 90.83 under the
standard GARCH model. The standard deviation of the EGARCH conditional variance,
10.555, is more than three times that of the standard GARCH model. It also has a much
more skewed and fat-tailed distribution than the other conditional variance series or even
the squared innovations.
VI: PARTIALLY NON-PARAMETRIC ARCH ESTIMATION
The News Impact Curve is also estimated by fitting a partially non-parametric
model of the form given in (21). The exact specification and the estimation results are
reported below:
Partially Non-parametric ARCH (PNP)                         (log L = -2310.72)

h_t =   0.0039                                (0.002) [0.012]
      + 0.8015·h_{t-1}                        (0.013)
      + 0.0897·P_{0t-1}·ε_{t-1}               (0.014) [0.043]
      + 0.2269·P_{1t-1}·(ε_{t-1} - σ)         (0.088) [0.172]
      + 0.6666·P_{2t-1}·(ε_{t-1} - 2σ)        (0.353) [0.720]
      - 3.7664·P_{3t-1}·(ε_{t-1} - 3σ)        (1.096) [1.991]
      + 3.6915·P_{4t-1}·(ε_{t-1} - 4σ)        (1.540)
      - 0.1536·N_{0t-1}·ε_{t-1}               (0.014) [0.053]
      - 0.3312·N_{1t-1}·(ε_{t-1} + σ)         (0.203)
      - 3.1194·N_{2t-1}·(ε_{t-1} + 2σ)        (0.278) [4.143]
      + 7.3481·N_{3t-1}·(ε_{t-1} + 3σ)        (0.959)
      - 5.4904·N_{4t-1}·(ε_{t-1} + 4σ)        (1.679) [8.699]
where σ is the unconditional standard deviation of ε_t and, for i = 0, 1, 2, 3, 4,

P_{it} = 1 if ε_t > iσ, and 0 otherwise;
N_{it} = 1 if ε_t < -iσ, and 0 otherwise.

(Asymptotic standard errors in (·), Bollerslev-Wooldridge robust standard errors in [·].)
The specification is a piecewise linear model with kinks at |ε_{t-1}| equal to 0, σ, 2σ, 3σ and
4σ. If we compare the values of the coefficients corresponding to the terms P_{it-1}·(ε_{t-1} - iσ),
i = 0, 1, 2, to their counterparts N_{it-1}·(ε_{t-1} + iσ), i = 0, 1, 2, we can see that negative ε_{t-1}'s do
cause more volatility than positive ε_{t-1}'s of equal absolute size. Moreover, the rate of
increase in volatility as we move towards ε_{t-1}'s of bigger absolute size is higher for
negative ε's than for positive ones. Hence, there seems to be a sign or leverage effect as well
as a size effect that differs for negative and positive ε's. The estimated parameter values for
the terms P_{it-1}·(ε_{t-1} - iσ) and N_{it-1}·(ε_{t-1} + iσ) for i = 3, 4 have somewhat unexpected signs
and magnitudes. Since these terms are for the extreme ε's, they might be driven by only a
few outliers. Indeed, even though they are significant under the traditional asymptotic
standard errors, they are by all means insignificant under the Bollerslev-Wooldridge
robust standard errors. The non-parametric estimation results thus seem to indicate that
the true News Impact Curve probably has a steeper slope on the negative side than on the
positive side.
A comparison of the news impact curves implied by the various volatility models and
that corresponding to this piecewise linear non-parametric model is performed by
computing, for each of these models, the implied volatility level at several prespecified
values of ε_{t-1} under the assumption that h_{t-1} = σ² = 0.63966. The results are summarized in
the table below:
The News Impact Curves

ε_{t-1}    h GARCH   h EGARCH   h AGARCH   h VGARCH   h NGARCH   h GJR     h PNP
 -10.0      33.45    1225.1      32.91      24.58      26.73     43.55    12.793
  -5.0       8.71      22.739     8.753      6.623      7.323    11.245    4.061
  -2.5       2.524      3.098     2.626      2.065      2.337     3.167    3.533
  -2.0       1.782      2.079     1.877      1.507      1.717     2.198    2.470
  -1.0       0.793      0.937     0.854      0.745      0.855     0.906    0.736
  -0.5       0.545      0.629     0.581      0.541      0.612     0.583    0.593
   0.0       0.463      0.422     0.467      0.454      0.495     0.475    0.517
   0.5       0.545      0.525     0.511      0.486      0.504     0.517    0.561
   1.0       0.793      0.652     0.714      0.635      0.639     0.642    0.652
   2.0       1.782      1.007     1.596      1.287      1.286     1.144    1.235
   2.5       2.524      1.251     2.275      1.790      1.797     1.520    1.348
   5.0       8.710      3.710     8.050      6.073      6.243     4.655    1.038
  10.0      33.453     32.616    31.503     23.480     24.566    17.195    5.579
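As an illustration of how a column of this table is generated (a sketch, not the original code), the GJR column can be reproduced, up to rounding, from the full-sample estimates reported above:

```python
import numpy as np

eps_grid = np.array([-10.0, -5.0, -2.5, -2.0, -1.0, -0.5, 0.0,
                     0.5, 1.0, 2.0, 2.5, 5.0, 10.0])
h_lag = 0.63966   # lagged conditional variance fixed at the unconditional variance

# GJR news impact curve, equation (13), with the full-sample estimates
omega, beta, alpha, gamma = 0.0241, 0.7053, 0.1672, 0.2636
gjr_column = omega + beta * h_lag + (alpha + gamma * (eps_grid < 0)) * eps_grid ** 2

for e, h in zip(eps_grid, gjr_column):
    print(f"eps = {e:6.1f}   h_GJR = {h:8.3f}")
```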
If we first confine ourselves to ε_{t-1} in the range (-2.5, 2.5), we can see that the standard
GARCH model tends to understate h_t for large negative ε_{t-1}'s and overstate h_t for large
positive ε_{t-1}'s relative to the EGARCH, as was indicated by our previous test statistics.
The same is true for the AGARCH, VGARCH and NGARCH models. Among all six
parametric models, the EGARCH and the GJR have News Impact Curves that are closest
to the one suggested by the non-parametric estimation. Now if we consider the very
extreme values for ε_{t-1}, then we can see that the EGARCH and the GJR are indeed very
different. In fact, because of the exponential functional form, the EGARCH produces a
ridiculously high h_t of 1225.1 for an ε_{t-1} equal to -10, which is nearly two thousand times
the value of the unconditional variance. Since stock market volatility wasn't that high in
Japan after the 1987 crash, we feel that the EGARCH might be too extreme in the tails.
The GJR model, which also has a higher log-likelihood than the EGARCH, might be a more
reasonable model to use.
VII: SUBSAMPLE ROBUSTNESS CHECK
To judge the sensitivity of our results to the extreme observations around the 1987
crash, we have repeated part of our analysis for the subsample period from January 1,
1980 to September 30, 1987, excluding the crash. The results for the day-of-the-week and
autocorrelation adjustments, as well as some summary statistics for the residuals, are
reported below:
Day-of-the-week effect adjustment:

y_t = 0.0162 - 0.1088·TUE_t + 0.1412·WED_t + 0.0682·THU_t + 0.1008·FRI_t + 0.1411·SAT_t + u_t
(standard errors: 0.035, 0.049, 0.049, 0.049, 0.049, 0.053)

Autocorrelation adjustment:

u_t = -0.0002 + 0.2491·u_{t-1} - 0.0614·u_{t-2} - 0.0275·u_{t-3} - 0.0490·u_{t-4} + 0.0496·u_{t-5} + 0.0019·u_{t-6} + ε_t
(standard errors: 0.014 for the constant and 0.02 for each lag coefficient)

Summary statistics for ε_t:

Mean:                              0.0000
Variance:                          0.4302
Coefficient of Skewness:           0.0947
Coefficient of Kurtosis:           8.7135
Ljung-Box(12) for the levels:      5.9197
Ljung-Box(12) for the squares:   540.8552
Sign-bias test:                   -2.3
Negative size-bias test:         -13.9
Positive size-bias test:           5.64
Joint test:                       77.0    [0.000]
Number of observations:           2192
The Ljung-Box(12) statistic for the squares strongly suggests the existence of
autocorrelation in the squared residuals and hence time-varying conditional volatility of
the autoregressive type. The Sign-bias test statistic is significant, and the two size-bias
test statistics are also highly significant, with the Negative size-bias test statistic having
a higher value. It is therefore highly probable that there is a size effect (big news causes
more volatility than small news) which is stronger for bad news than for good news. Given
the superiority of the EGARCH and the GJR over the other asymmetric volatility models,
we have therefore repeated our estimation for the standard GARCH, the EGARCH and the
GJR only. The results are reported below:
The GARCH(1,1) model

h_t = 0.0129 + 0.8007·h_{t-1} + 0.1829·ε²_{t-1}
      (0.002)   (0.013)          (0.014)
      [0.003]   [0.025]          [0.026]

Log-likelihood = -1829.50

Sign-bias test:            0.05
Negative size-bias test:  -1.83
Positive size-bias test:  -0.84
Joint test:                2.51   [.057]
The Exponential-GARCH(1,1) model

ln h_t = -0.0350 + 0.9579·ln h_{t-1} + 0.2955·[ |ε_{t-1}|/√h_{t-1} - (2/π)^{1/2} ] - 0.0615·ε_{t-1}/√h_{t-1}
         (0.007)   (0.005)              (0.019)                                   (0.010)
         [0.014]   [0.011]              [0.037]                                   [0.024]

Log-likelihood = -1822.30

Sign-bias test:           -0.47
Negative size-bias test:  -2.14
Positive size-bias test:  -0.03
Joint test:                1.75   [.154]
The Glosten-Jagannathan-Runkle (GJR) model

h_t = 0.1093 + 0.8181·h_{t-1} + 0.1130·ε²_{t-1} + 0.1048·S^-_{t-1}·ε²_{t-1}
      (0.002)   (0.012)          (0.014)          (0.019)
      [0.003]   [0.021]          [0.023]          [0.038]

Log-likelihood = -1819.23

Sign-bias test:           -0.18
Negative size-bias test:  -1.24
Positive size-bias test:  -0.29
Joint test:                0.75   [.519]
Several points are worth special notice in the above results. First, the parameter
corresponding to the ε_{t-1}/√h_{t-1} term in the EGARCH and the parameter corresponding to
the S^-_{t-1}·ε²_{t-1} term in the GJR are both highly significant even under the
Bollerslev-Wooldridge t-test. Second, the Joint test for the standard GARCH model is
nearly significant while those for the EGARCH and GJR are not. The log-likelihoods of the
EGARCH and the GJR are substantially higher than that of the standard GARCH. All of
these results point to the presence of a leverage effect in the data. In terms of the size
effect, the positive size-bias test is insignificant for all three models, indicating that there is
not much size effect for positive innovations. However, the negative size-bias test statistic
is marginally significant for the standard GARCH and significant for the EGARCH but
insignificant for the GJR. The failure of the EGARCH to capture the size effect is probably
due to the fact that the quadratic dominates the exponential for small ε's and that the
Japanese stock market was quite calm before the 1987 crash. The only model that seems to
do well in both normal and abnormal times is the GJR model, which also has the higher
log-likelihood in both periods.
VIII: SUMMARY AND CONCLUSION
This paper has introduced the News Impact Curve as a standard measure of how
news is incorporated into volatility estimates. In order to better estimate and match News
Impact Curves to the data, several new candidates for modelling time varying
heteroskedasticity are introduced and contrasted. These models allow several types of
asymmetry in the impact of news on volatility. Furthermore, some new diagnostic tests
are presented which are designed to determine whether the volatility estimates are
adequately representing the data. Finally, a partially non-parametric model is suggested
which allows the data to determine the News Impact Curve directly.
These new models are fitted to daily Japanese stock returns from 1980—1988. All
the models find that negative shocks introduce more volatility than positive shocks and
that this is particularly apparent for the largest shocks. The diagnostic tests however
indicate that in many cases, the modelled asymmetry is not adequate. The best models are
ones proposed by Glosten, Jagannathan and Runkle (GJR) and Nelson's (1990) EGARCH.
The partially non-parametric (PNP) ARCH model is then fitted to the data and
reveals much the same behavior. For reasonable values of the surprises, the volatility
forecasts of EGARCH, GJR and PNP are rather similar. For more extreme shocks, they
differ dramatically. It turns out that the standard deviation, skewness and kurtosis of the
EGARCH and GJR conditional variances are all greater than those of the other models, and
in some cases greater than even the squared returns.
When the same analysis is carried out excluding the October 1987 crash, the results
are less dramatic but roughly the same. The evidence against the symmetric GARCH
model is not as strong, but the asymmetric models GJR and EGARCH again dominate. In
this case, there is also evidence against the EGARCH, and the GJR model appears the best.
REFERENCES
Bera, A. and S. Lee (1989), "On the Formulation of a General Structure for
Conditional Heteroskedasticity," unpublished manuscript, Department of Economics,
University of Illinois at Urbana—Champaign.
Black, Fisher (1976), "Studies in Stock Price Volatility Changes," PROCEEDINGS
OF THE 1976 BUSINESS MEETING OF THE BUSINESS AND ECONOMICS
STATISTICS SECTION, AMERICAN STATISTICAL ASSOCIATION, 177—181.
Bollerslev, T. (1986), "Generalized Autoregressive Conditional Heteroskedasticity,"
JOURNAL OF ECONOMETRICS, 31, 307—327.
Bollerslev, T. and J. Wooldridge (1989), "Quasi Maximum Likelihood Estimation of
Dynamic Models with Time Varying Covariances," unpublished manuscript, Department
of Economics, MIT.
Bollerslev, T., R. Chou, N. Jayaraman, and K. Kroner (1990), "ARCH Modeling in
Finance: A Selective Review of the Theory and Empirical Evidence, with Suggestions for
Future Research," Memo, Northwestern University.
Engle, R. (1982), "Autoregressive Conditional Heteroskedasticity with Estimates of
the Variance of U.K. Inflation," ECONOMETRICA, 50, 987—1008.
Engle, R. (1990), "Discussion: Stock Volatility and the Crash of 87," REVIEW OF
FINANCIAL STUDIES, Volume 3, Number 1, 103—106.
Engle, R. and T. Bollerslev (1986), "Modelling the Persistence of Conditional
Variances," ECONOMETRIC REVIEW, 5, 1—50, 81—87.
Engle, R. and C. Mustafa (1989), "Implied ARCH Models from Options Prices,"
unpublished manuscript, Department of Economics, UCSD.
Friedman, B. and K. Kuttner (1990), "Time Varying Risk Perceptions and the
Pricing of Risky Assets," Department of Economics, Harvard University.
Gallant, A.R., D. Hsieh and G. Tauchen (1990), "On Fitting a Recalcitrant Series:
The Pound/Dollar Exchange Rate 1974—83," unpublished manuscript, Department of
Economics, Duke University.
Geweke, J. (1986), "Modeling the Persistence of Conditional Variances: A
Comment," ECONOMETRIC REVIEW, 5, 57—61.
Glosten, L., R. Jagannathan and D. Runkle (1989), "Relationship Between the
Expected Value and the Volatility of the Nominal Excess Return on Stocks," unpublished
manuscript, J.L. Kellogg Graduate School, Northwestern University.
Gourioux, C. and A. Monfort (1990), "Qualitative Threshold ARCH Models,"
unpublished manuscript, INSEE.
Higgins, M. and A. Bera (1990), "A Class of Nonlinear ARCH Models," Department
of Economics, University of Wisconsin at Milwaukee.
Mihoj, A. (1987), "A Multiplicative Parameterization of ARCH Models,"
unpublished manuscript, Department of Statistics, University of Copenhagen.
Nelson, D. (1990), "Conditional Heteroskedasticity in Asset Returns: A New
Approach," ECONOMETRICA, forthcoming.
Pantula, S.G. (1986), "Modeling the Persistence of Conditional Variances: A
Comment," ECONOMETRIC REVIEW, 5, 71—74.
Pagan, A. and G.W. Schwert (1990), "Alternative Models for Conditional Stock
Volatility," JOURNAL OF ECONOMETRICS, forthcoming.
Schwert, G.W. (1990), "Stock Volatility and the Crash of 87," REVIEW OF
FINANCIAL STUDIES, Volume 3, Number 1, 77—102.
Zakoian, J. (1990), "Threshold Heteroskedasticity Model," unpublished
manuscript, INSEE.