The Estimation of Market VaR Using Garch Models and Heavy Tail Distributions
Tagliafichi Ricardo A.*
Av. Cordoba 1646 6to 209
1055 Buenos Aires
Argentina
Telephone: 054-11-4811-3185
Facsimile: 054-11-4811-3185
E-mail: [email protected]
Topic 4
Abstract (about 30 pages)
The new rules presented in the documents published by Basel II oblige actuaries to take up a new challenge: to provide answers to the institutions that must comply with these rules, which are based on three principal risk components: Market Risk, Credit Risk, and Operational Risk.
When we need to develop an estimation of Market VaR, we must predict the probability of a maximum loss. To meet this objective we must predict the volatility for the next period and the probability associated with this value.
This paper contains a development of Garch theory and the application of different symmetric and asymmetric models to predict the volatility of financial series, accompanied by Extreme Value Theory (EVT) and other heavy tail distributions to estimate the probability that the maximum loss may occur.
In the first part I analyze the presence of different Garch models in the returns of stocks in several markets and compare them with other models in use. In the second part the estimation of the probability associated with the forecasted volatility is presented. The methods used are the Kupiec estimation of the probability, Extreme Value Theory, and other heavy tail distributions such as the Weibull, Pareto, and Pearson distributions. In the third part several methods are estimated for different series of returns. In the fourth part the results of the different methods used are presented. Finally, the last part contains the conclusions arrived at.
Keywords: Arch, Garch, Egarch, Tarch, EVT (Extreme Value Theory), Kupiec, Pareto, Heteroscedasticity, VaR (Value at Risk), Market Risk, Kolmogorov-Smirnov Test, Anderson-Darling Test, Basel II
* Actuary. Master in Business Administration. Professor of Statistics for Actuaries, Statistics for Business Administration, and Financial Mathematics at the University of Buenos Aires, Faculty of Economics. Professor of Financial Mathematics and Software Applied to Finance at the University of Palermo, Faculty of Economics and Business Administration. Professor of Investments and Financial Econometrics, MBA in Finance, Palermo University. Member of the Global Association of Risk Professionals. Independent Consultant.
Introduction
The actuarial work in insurance companies and pension funds, and by extension in the government bodies that supervise these entities, is to develop models to estimate adequate mathematical and technical reserves to meet the demands produced by claims. These models estimate the probability that an event will occur; in other words, the risk covered by an insurance policy.
These reserves may be invested in several portfolios, but these investments carry three principal risks: the market value risk, the credit risk, and the operational risk. The Basel Committee on Banking Supervision treats these risks in several documents, concluding with the QIS-3 (Quantitative Impact Study) Technical Guidance document and the QIS-3 Instructions, both of October 2002.
In a brief commentary I try to develop the idea behind each type of risk.
1) The Operational risk
The operational risk includes the risk of fraud, trading errors, legal and regulatory risk, and so on. This risk is one of the most injurious, given that it affects the financial results indirectly. There is no concrete definition of this risk; it is more useful to adopt the approach given in the documents issued by the Basel Committee on Banking Supervision.
2) The Credit risk
Credit risk is sometimes viewed as one component of market risk, but the focus here is on the market risk associated with changes in the prices or rates of underlying traded instruments over short time horizons. The credit risk includes the risk of default of a counterpart on a long-term contract. When we think about credit risk, we think about how much money we will lose over the next year.
When we compute the portfolio position, we have the value of the investments that cover the mathematical and technical reserves, but the question is: what will the value of the investments be tomorrow, next week, or next month?
After 1995, the literature on financial risk, or Value at Risk, has introduced several models to estimate these values. The components of these models are the asset returns, the volatility, the time horizon over which to predict the worst possible loss, the desired probability, and the liquidity risk.¹
The behavior of asset returns was developed in a previous work, The Garch model and their application to the VaR, where I demonstrated that the returns satisfy the following conditions:
1) They do not have a normal probability distribution, according to the results of the goodness of fit test using the Kolmogorov-Smirnov statistic.
2) They are correlated, because the Q statistic presented by Ljung and Box determines the presence of black noise in the autocorrelation function and in the partial autocorrelation function.
3) The hypothesis $H_0: \sigma_n = \sigma_1\sqrt{n}$ against $H_1: \sigma_n \neq \sigma_1\sqrt{n}$, studied by Young in 1971, was applied to several assets, and the conclusion was not to accept the null hypothesis.
¹ The liquidity risk is known as the relationship between the market and the risk incurred to liquidate a position. It arises when the position in an asset is greater than the quantity that the market trades.
In the case exposed, it is obvious that the maximum possible loss is the extreme of the negative values, as a result of 0.5 - (2.33 x 2.5) = -5.325%. In consequence, if we have 1,000,000, the maximum loss for one day is 53,250, and for that reason the portfolio only covers reserves for 946,750.
What happens if, for market liquidity reasons or to comply with the Basel regulations, we need a prediction horizon of 10 days? If the volatility follows the law of $t^{0.5}$, then the maximum loss is $(0.5 \times 10) - [2.33 \times (2.5 \times 10^{0.5})] = -13.4203\%$. Now the maximum possible loss in 10 days is 134,203 and the portfolio value is 865,797.
In reality the company does not have 1,000,000; it will probably have only 946,750 tomorrow, or 865,797 within the next 10 days.
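The arithmetic of this example can be reproduced with a short calculation. The following Python sketch is not part of the original paper; it simply restates the numbers above (mean 0.5%, volatility 2.5%, z = 2.33, a 10-day horizon under the $t^{0.5}$ law).

```python
# Worked example from the text: 1-day and 10-day maximum loss under the
# square-root-of-time rule, for a portfolio of 1,000,000.
mean_ret, vol, z, tau, portfolio = 0.5, 2.5, 2.33, 10, 1_000_000

var_1d = mean_ret - z * vol                        # 0.5 - 2.33*2.5 = -5.325%
var_10d = mean_ret * tau - z * vol * tau ** 0.5    # 5 - 18.4203 = -13.4203%

print(abs(var_1d) / 100 * portfolio)               # 53,250 -> portfolio covers 946,750
print(abs(var_10d) / 100 * portfolio)              # 134,203 -> portfolio covers 865,797
```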
In view of these results, and taking into account that the reserve for VaR must be computed in the balance sheet, we have several problems to solve:
1) If we apply the considerations made about the autocorrelation of daily volatility, we need a model to predict it.
2) Considering that the returns are not i.i.d. and show a strong presence of skewness and kurtosis, we need to think about a heavy tail distribution.
3) Because of the presence of liquidity risk, it is important to determine how much time we need to liquidate the position. The Basel Committee recommends a 10-day forecast; in consequence it is appropriate to think about how to estimate the speed at which the daily volatility increases.
I.
The first step in estimating VaR is the volatility forecast. As has been explained in a previous paper, the volatility is a predictable process. If we regress the series of returns on a constant, the model is:
$$R_t = c + \epsilon_t$$
This constant is the mean of the series, and $\epsilon_t$ is the error, or the difference between the real value and the constant or mean. If we analyze these squared differences we are in the presence of the variance series, $\epsilon_t^2$. If the autocorrelation and the partial autocorrelation functions of the variance series present a black noise, it is a signal that the variance is a predictable process.
The Ljung-Box Q statistic,
$$Q = n(n+2)\sum_{i=1}^{k}\frac{\rho_i^2}{n-i} \sim \chi_k^2$$
can be used to test whether the first k autocorrelations of the squared residuals are jointly zero. A related diagnostic is Engle's ARCH LM test: with a sample of T residuals, under the null hypothesis of no Arch errors, the test statistic $TR^2$ converges to a $\chi_k^2$ distribution.
If $TR^2$ is sufficiently large, we reject the null hypothesis that the coefficients $\alpha_1$ through $\alpha_k$ are jointly equal to zero, which is equivalent to rejecting the null hypothesis of no Arch errors; that is to say, there are Arch effects. In the same sense, if $TR^2$ is sufficiently low, it is possible to conclude that there are no Arch effects.
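As an illustration of this test, the following sketch (not from the paper; a minimal implementation assuming the residuals are available as a numpy array) regresses the squared residuals on k of their own lags and computes $TR^2$.

```python
import numpy as np
from scipy import stats

def arch_lm_test(residuals, k=5):
    """Engle's ARCH LM test: TR^2 is asymptotically chi-squared(k)
    under the null hypothesis of no Arch effects."""
    e2 = np.asarray(residuals, dtype=float) ** 2
    y = e2[k:]                                    # e2_t
    lags = [e2[k - j:len(e2) - j] for j in range(1, k + 1)]
    X = np.column_stack([np.ones(len(y))] + lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS of e2_t on its k lags
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()              # regression R^2
    tr2 = len(y) * r2                             # the TR^2 statistic
    return tr2, stats.chi2.sf(tr2, df=k)          # statistic and p-value
```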
A simple Arch model to forecast the conditional variance
After we detect the presence of Arch effects in the proposed model (the regression of the series on a constant), we can forecast the volatility for the next period using the conditional variance as follows:
$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{k}\alpha_i\,\epsilon_{t-i}^2 + \nu_t$$
This is an Arch(k) model and forecasts the variance for tomorrow conditioned on its past realizations.
Bollerslev presents the Garch model, adding the concept that the volatility for tomorrow depends not only on the past realizations but also on the errors of the predicted volatility. In consequence the model is the following:
$$\sigma_t^2 = \omega + \sum_{i=1}^{p}\alpha_i\,\epsilon_{t-i}^2 + \sum_{j=1}^{q}\beta_j\,\sigma_{t-j}^2 + \nu_t$$
where $\epsilon_t^2$ is the impact realization, or the difference between the variance forecasted by the model and the real variance produced, $\sigma_t^2$ is the variance forecasted by the model, and $\nu_t$ is a white noise process with mean zero and variance equal to 1.
The market has used the Garch(1,1) with success since the mid-1990s; it is what traders use to make decisions about the behavior of different assets. The question What will happen tomorrow? is answered by traders with this concept: a little of the error of my prediction for today plus a little of the prediction for today, and this is a Garch(1,1) model.
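The trader's rule of thumb translates directly into a recursion. The sketch below is illustrative (not the author's code): it filters a return series through given Garch(1,1) coefficients and produces tomorrow's conditional variance.

```python
import numpy as np

def garch11_tomorrow(returns, omega, alpha, beta):
    """'A little of the error of my prediction of today plus a little of the
    prediction for today': sigma2_{t+1} = omega + alpha*eps_t^2 + beta*sigma2_t."""
    eps = np.asarray(returns, dtype=float) - np.mean(returns)  # R_t = c + eps_t
    sigma2 = eps.var()                     # start the recursion at the sample variance
    for e in eps:
        sigma2 = omega + alpha * e ** 2 + beta * sigma2
    return sigma2                          # conditional variance for tomorrow
```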
Everything seems to indicate so far that we have arrived at the most precise values for this estimate, and the results obtained help to predict the variance or the volatility for the period t+1. Analyzing the coefficients $\alpha$ and $\beta$ of the variance equation, we can calculate the non-conditional, or traditional, variance by considering that $\sigma_t^2 = \sigma_{t-1}^2 = \sigma^2$. Solving, we obtain
$$\sigma^2 = \frac{\omega}{1-\alpha-\beta}$$
For the model to be stationary, the sum of the parameters $\alpha$ and $\beta$ must be smaller than one. This sum $\alpha+\beta$ is known as the persistence of the model, and on this basis the Garch(1,1) model can predict the variance for a horizon of $\tau$ days. Let us not forget that when we estimate the VaR for an asset, the presence of liquidity risk obliges us to forecast the variance not only for one day: we need to forecast it for $\tau$ days.
If the volatilities of financial series of the capital markets are analyzed, we can observe that the persistence of the fitted Garch(1,1) models comes close to one, but the values of $\alpha$ and $\beta$ differ from one series to another.
Two alternatives are presented for the representation of the variance equation of the model:
1) The equation of the conditional variance estimated by a Garch(1,1) can be rewritten, starting from the consideration that $\nu_t = \epsilon_t^2 - \sigma_t^2$, in the following way:
$$\epsilon_t^2 = \nu_t + \sigma_t^2$$
$$\sigma_t^2 = \epsilon_t^2 - \nu_t$$
$$\epsilon_t^2 - \nu_t = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,(\epsilon_{t-1}^2 - \nu_{t-1})$$
$$\epsilon_t^2 = \omega + (\alpha+\beta)\,\epsilon_{t-1}^2 + \nu_t - \beta\,\nu_{t-1}$$
Then the squared error of a heteroscedastic process looks like an ARMA(1,1). The autoregressive root that governs the persistence of the volatility shocks is the sum $(\alpha+\beta)$.
2) It is possible, in recursive form, to substitute the variance of the previous periods into the right-hand side of the equation and to express the variance as a weighted average of all the past squared residuals:
$$\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$$
$$\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,(\omega + \alpha\,\epsilon_{t-2}^2 + \beta\,\sigma_{t-2}^2)$$
$$\vdots$$
$$\sigma_t^2 = \frac{\omega\,(1-\beta^k)}{1-\beta} + \alpha\sum_{j=1}^{k}\beta^{\,j-1}\epsilon_{t-j}^2 + \beta^k\,\sigma_{t-k}^2$$
We can appreciate that in the Garch(1,1) model the variance weights decay in exponential form over the past residuals, giving more importance to the nearer residuals and less importance to the more distant ones. If $k \to \infty$ (k is the lag) then $(1-\beta^k) \to 1$ because $\beta^k \to 0$, which also eliminates the last summand on the right. In consequence the variance for the period t in recursive form is the following:
$$\sigma_t^2 = \frac{\omega}{1-\beta} + \alpha\sum_{j=1}^{\infty}\beta^{\,j-1}\epsilon_{t-j}^2$$
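The equivalence between the recursion and the exponentially weighted sum can be checked numerically. The sketch below is illustrative only, with arbitrary coefficients and simulated squared residuals standing in for real data; it evaluates both expressions and verifies that they coincide.

```python
import numpy as np

omega, alpha, beta = 0.001, 0.03, 0.90
rng = np.random.default_rng(0)
eps2 = rng.standard_normal(2000) ** 2       # stand-in squared residuals

# Garch(1,1) recursion
sigma2 = eps2.mean()                        # sigma^2_{t-k}, the starting value
for e2 in eps2:
    sigma2 = omega + alpha * e2 + beta * sigma2

# Weighted-average representation with the same starting value
k = len(eps2)
w = alpha * beta ** np.arange(k)            # alpha * beta^(j-1), j = 1 most recent
expanded = (omega * (1 - beta ** k) / (1 - beta)
            + w @ eps2[::-1] + beta ** k * eps2.mean())

print(np.isclose(sigma2, expanded))         # True
```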
As expressed, with a Garch model it is assumed that the variance of the returns is a predictable process. For a longer prediction horizon, using a Garch(1,1), the variance can be calculated from one period up to $\tau$ periods in the following way, taking $\tau$ as the number of periods between the horizon to be predicted and t+1:
$$E_{t-1}(\sigma_{t+1}^2) = E_{t-1}(\omega + \alpha\,\epsilon_t^2 + \beta\,\sigma_t^2) = \omega + (\alpha+\beta)\,\sigma_t^2$$
$$E_{t-1}(\sigma_{t+2}^2) = \omega + (\alpha+\beta)\,E_{t-1}(\sigma_{t+1}^2)$$
Substituting period after period into the future, one obtains the following formula, which gives the variance for day $\tau$ if the series behaves as a Garch(1,1):
$$E_{t-1}(\sigma_{t+\tau}^2) = \omega\,\frac{1-(\alpha+\beta)^{\tau}}{1-(\alpha+\beta)} + (\alpha+\beta)^{\tau}\,\sigma_t^2$$
In the previous formula, as $\tau$ grows the decay factor is $(\alpha+\beta)^{\tau}$; for that reason an impact on the predicted variance decays following an exponential function. The shock disappears in some days, and the forecast tends to the non-conditional variance. The number of days is a function of the sum $(\alpha+\beta)$, or the persistence of the model.
In consequence, the total variance for the $\tau$ periods is the following:
$$E_{t-1}(\sigma_{t,\tau}^2) = \frac{\omega}{1-(\alpha+\beta)}\left[\tau - \frac{1-(\alpha+\beta)^{\tau}}{1-(\alpha+\beta)}\right] + \frac{1-(\alpha+\beta)^{\tau}}{1-(\alpha+\beta)}\,\sigma_t^2$$
The extrapolation of the next-day variance to a longer horizon is a complicated function of the variance process and the initial conditions. Thus the assumption that the volatility follows the law of $t^{0.5}$ fails, due to the fact that the returns are not identically distributed.
Some results of applying these equations are the following. Consider a Garch(1,1) with these coefficients:
$$\sigma_t^2 = 0.001 + 0.03\,\epsilon_{t-1}^2 + 0.90\,\sigma_{t-1}^2$$
If $\epsilon_{t-1}^2 = 2.4025\%$ and $\sigma_{t-1}^2 = 1.45\%$, the variance predicted for time t is 1.3781%. If we apply this result and estimate the total variance for a period of 10 days, the value is 10.1963%, very different from the result of the two ways proposed:
1) Considering the non-conditional variance $\sigma^2 = \omega/(1-\alpha-\beta)$.
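These figures can be verified with the formulas above. The following sketch is illustrative only; it computes the one-step forecast and the total 10-day variance for the stated coefficients.

```python
omega, alpha, beta = 0.001, 0.03, 0.90
s = alpha + beta                                  # persistence, 0.93
eps2_prev, sigma2_prev = 2.4025, 1.45             # eps^2_{t-1} and sigma^2_{t-1} (%)

sigma2_t = omega + alpha * eps2_prev + beta * sigma2_prev   # one-step forecast

tau = 10
uncond = omega / (1 - s)                          # non-conditional variance
decay = (1 - s ** tau) / (1 - s)
total_10d = tau * uncond + (sigma2_t - uncond) * decay

print(round(sigma2_t, 4), round(total_10d, 4))    # 1.3781 10.1963
```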
In Engle and Ng (1993) the news impact curve analyzes how the asymmetric Garch models predict the volatility for tomorrow based on today's impact. Nelson (1990) developed the idea of modifying the Garch model because of the simple structure of Garch models. Evidence has been found of negative correlations between stock returns and changes in return volatility: in other words, volatility tends to rise in response to bad news and tends to fall in response to good news. Analysts have fruitfully applied the Garch methodology in asset pricing models and in volatility forecasting. RiskMetrics² uses a special Garch model with a decay factor $\lambda = 0.94$; the behavior of this model is similar to a Garch(1,1) with $\alpha = (1-\lambda)$, $\beta = \lambda$, and $\omega = 0$.
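A hedged sketch of that special case follows: the exponentially weighted update, a Garch(1,1) with $\omega = 0$, $\alpha = 1-\lambda$, $\beta = \lambda$. This is an illustration of the idea, not the RiskMetrics production methodology.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style update: sigma2_t = lam*sigma2_{t-1} + (1-lam)*eps2_{t-1}.
    Persistence is alpha + beta = 1, so shocks never fully die out."""
    eps = np.asarray(returns, dtype=float) - np.mean(returns)
    sigma2 = eps[0] ** 2                   # simple initialization choice
    for e in eps[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * e ** 2
    return sigma2                          # variance forecast for tomorrow
```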
It is easy to verify that the coefficients of a Garch(1,1) process are nonnegative, because the conditional variance is captured by a constant plus a weighted average (with positive weights) of past squared residuals. These models elegantly capture the volatility clustering in asset returns first noted by Mandelbrot (1963): large changes tend to be followed by large changes, of either sign, and small changes by small changes. The Garch models have important limitations, for example:
1) Researchers have found evidence that stock returns are negatively correlated with changes in return volatility. As explained before, the Garch models are sensitive only to the magnitude of the excess return and not to its sign, positive or negative.
2) Another limitation of Garch models results from the non-negativity constraints on $\omega$, $\alpha$, and $\beta$, which are imposed to ensure that $\sigma_t^2$ remains positive. Under these constraints, an increase in $\epsilon_t^2$ increases the values of $\sigma_{t+m}^2$ for every $m \geq 1$.
3) The third drawback in Garch modeling is the concept of persistence of shocks to the conditional variance. The problem is to estimate how long shocks persist in the conditional variance. In Garch(1,1) models, shocks may persist in one norm and die out in another, so the conditional moments may explode even when the process itself is strictly stationary and ergodic.
To overcome the restrictions of Garch models, Nelson and, respectively, Glosten, Jaganathan and Runkle present the asymmetric Garch models named Exponential Arch (Egarch) and Threshold Arch (Tarch).

The Exponential Arch (EGARCH)
The Egarch(1,1) conditional variance is specified as follows:
$$\ln(\sigma_t^2) = \omega + \beta\,\ln(\sigma_{t-1}^2) + \alpha\left|\frac{\epsilon_{t-1}}{\sigma_{t-1}}\right| + \gamma\,\frac{\epsilon_{t-1}}{\sigma_{t-1}}$$
On the left-hand side is the ln of the conditional variance. This implies that the leverage effect is exponential, rather than quadratic, and that forecasts of the conditional variance are guaranteed to be nonnegative. The presence of leverage effects corresponds to $\gamma < 0$. The impact is asymmetric if $\gamma \neq 0$.
If we generalize the model we obtain:
$$\ln(\sigma_t^2) = \omega + \sum_{j=1}^{p}\beta_j\,\ln(\sigma_{t-j}^2) + \sum_{i=1}^{q}\alpha_i\left|\frac{\epsilon_{t-i}}{\sigma_{t-i}}\right| + \sum_{i=1}^{q}\gamma_i\,\frac{\epsilon_{t-i}}{\sigma_{t-i}}$$
In the Exponential Garch, $\ln(\sigma_t^2)$ is a linear process, and its stationarity³ and ergodicity⁴ are easily checked. If the shocks to $\{\ln(\sigma_t^2)\}$ die out quickly and we remove the time-varying component, then $\{\ln(\sigma_t^2)\}$ is strictly stationary and ergodic.
The Threshold Arch (TARCH)
Glosten, Jaganathan and Runkle (1993) present the Tarch model, and Zakoian (1990) independently presents the same model, to forecast the variance. The conditional variance is specified as follows:
$$\sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \gamma\,\epsilon_{t-1}^2\,d_{t-1} + \beta\,\sigma_{t-1}^2$$
where $d_{t-1} = 1$ if $\epsilon_{t-1} < 0$ and $d_{t-1} = 0$ otherwise.
In this model, good news is represented by $\epsilon_t > 0$ and bad news by $\epsilon_t < 0$, and they have differential effects on the conditional variance equation. Good news has an impact of only $\alpha$, while bad news has an impact of $\alpha + \gamma$. The leverage effect of bad news exists when $\gamma > 0$; if $\gamma \neq 0$ the news impact is asymmetric.
For a higher order Tarch model the specification is the following:
$$\sigma_t^2 = \omega + \sum_{i=1}^{q}\alpha_i\,\epsilon_{t-i}^2 + \gamma\,d_{t-1}\,\epsilon_{t-1}^2 + \sum_{i=1}^{p}\beta_i\,\sigma_{t-i}^2$$
where $d_t = 1$ if $\epsilon_t < 0$, and 0 otherwise.
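To make the asymmetry concrete, the sketch below (illustrative, not from the paper) filters residuals through a Tarch(1,1): a negative shock enters the variance with weight $\alpha + \gamma$, a positive one only with $\alpha$.

```python
import numpy as np

def tarch11_filter(eps, omega, alpha, gamma, beta):
    """Tarch(1,1) conditional variance with a dummy d=1 for bad news."""
    eps = np.asarray(eps, dtype=float)
    sigma2 = np.empty(len(eps) + 1)
    sigma2[0] = eps.var()
    for t, e in enumerate(eps):
        d = 1.0 if e < 0 else 0.0          # bad-news indicator
        sigma2[t + 1] = omega + (alpha + gamma * d) * e ** 2 + beta * sigma2[t]
    return sigma2[1:]

# A -1% shock raises the variance more than a +1% shock of the same size
print(tarch11_filter([-1.0], 0.01, 0.03, 0.15, 0.90))   # impact alpha + gamma
print(tarch11_filter([+1.0], 0.01, 0.03, 0.15, 0.90))   # impact alpha only
```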
As a synthesis of what has been exposed, we are in the presence of the symmetric Garch(p,q) model and of the asymmetric Egarch(p,q) and Tarch(p,q) models. Engle and Ng (1993) present a summary of several models that try to explain the volatility structure.
Using convenient software we can select the appropriate model to analyze the behavior of the volatility. The first step, after accepting the presence of heteroscedasticity, is to test the significance of the type of model selected, choosing between the asymmetric and symmetric Garch models.
The asymmetry test
To check whether the model is correctly specified, we have two ways to detect the right model. One way is to compare the criteria of the models: the log likelihood, the Akaike information criterion, and the Schwarz criterion aid in making a good model selection. The log likelihood is evaluated when the model converges and is calculated by the following equation:
$$l = -\frac{T}{2}\left[1 + \log(2\pi) + \log\!\left(\frac{\epsilon'\epsilon}{T}\right)\right]$$
and the Akaike information criterion is computed as $-2l/n + 2k/n$, where k is the number of estimated parameters.
The other way is to make an asymmetry test, where we compute the cross correlation between the squared residuals of the Garch model and its standardized residuals. The result of this cross correlation will be a white noise if the model is symmetric, in other words if the Garch model is correctly specified, and a black noise if the model is asymmetric.
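A minimal sketch of this diagnostic follows; the exact procedure used in the paper is not detailed, so this is an assumption about its form. It correlates the squared standardized residuals with their own lagged levels: values near zero at every lag (white noise) support a symmetric model, while a systematic pattern indicates asymmetry.

```python
import numpy as np

def asymmetry_cross_corr(std_resid, max_lag=10):
    """Cross correlation between squared standardized residuals and their
    lagged levels; a flat, near-zero profile suggests a symmetric Garch."""
    z = np.asarray(std_resid, dtype=float)
    z = (z - z.mean()) / z.std()
    z2 = z ** 2
    return [float(np.corrcoef(z2[lag:], z[:-lag])[0, 1])
            for lag in range(1, max_lag + 1)]
```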
Now we have identified a model to forecast the volatility, but we know only one part of the problem, because the confidence interval has two parts: the volatility, and the coefficient of the probability distribution that sets the limits of the interval for a previously established probability.
II.
In previous papers it was demonstrated that the returns do not follow a normal distribution, as the goodness of fit tests reject it. These time series behave according to probability distributions with heavy tails. These distributions are treated in detail in Annex 1 of this paper, and they complete the forecast of VaR by considering jointly the volatility and the probability distribution.
These heavy tail distributions have an incidence on the estimation of VaR, especially when we analyze the extreme limits of the distribution. Remember that VaR tries to estimate the maximum possible loss at a certain level of confidence established previously. Normally this maximum probability is determined at a 99% level of confidence, so we are analyzing the lower percentile of the distribution.
If we use a plain Garch model to predict the volatility, the model is symmetric and in consequence does not detect the incidence of bad news. As an example, let us estimate the risk for the 5-year Treasury bond.
Using the appropriate software, the models fitted to forecast the volatility are shown in Table I. In view of these results, we accept that the best model is an Egarch(1,1), due to the values of the log likelihood, the Akaike information criterion, and the Schwarz criterion.
If we apply the results, estimate the VaR for the first percentile using the normal distribution, and make a back testing over the last 755 observations, ended on 31st January 2003, we obtain the results exhibited in Table II for the different Arch models.
Table I

Coefficient      Garch(1,1)     Tarch(1,1)     Egarch(1,1)
Mean             -0.100235      -0.145484      -0.181462
                 (0.041346)     (0.043854)     (0.045529)
ω                0.019106       0.009516       -0.002929
                 (0.009783)     (0.007824)     (0.010497)
α                0.129244       0.030235       0.013904
                 (0.017804)     (0.028565)     (0.011090)
β                0.879834       0.909609       0.998601
                 (0.016776)     (0.015203)     (0.001713)
γ                -              0.150783       -0.110551
                                (0.031201)     (0.008830)
Log Likelihood   -1359.123      -1351.342      -1343.758
AIC              3.610922       3.592959       3.572869
SC               3.635434       3.623599       3.603509

(The numbers in brackets are the standard deviations of the coefficients.)
Table II
Number of returns that exceed the VaR forecasted

Garch(1,1)     Tarch(1,1)     Egarch(1,1)
6              6              4
As a result of the back testing exhibited in the graph, we can observe the behavior of the Garch(1,1), drawn with a bold line, and the behavior of the Egarch(1,1), drawn with a fine line. Both models comply with the objective, using the normal distribution, that there is 1% of observations to the left of the forecasted maximum loss: 6 out of 755 observations for the Garch(1,1) and 4 out of the same number of observations for the Egarch(1,1).
The model selection accepts the Egarch(1,1) model because it slightly improves the log likelihood and the Akaike and Schwarz criteria.
To estimate the amount of VaR, or the maximum possible loss, the normal distribution of probabilities jointly with the volatility forecasted by the Garch family of models produces good results, because the number of observations beyond the lower limit complies with the amount estimated.
The probability distributions of the returns do not follow the normal distribution, as I demonstrated in a previous work. The distributions found for some time series are shown in Table III.
Table III

Time series    Period analyzed        Number    First            Second
                                      of obs.   distribution     distribution
                                                fitted           fitted
5 yr. T bond   02/01/00 - 30/01/03    755       Logistic         Extreme Value
Bayer          06/06/02 - 18/03/03    201       Logistic         Extreme Value
Dow Jones      04/01/99 - 18/03/03    1056      Logistic         Weibull
Bovespa        04/01/99 - 18/03/03    1036      Logistic         Weibull
Merval         03/01/01 - 18/03/03    773       Logistic         Extreme Value
IDP            25/01/01 - 30/01/03    445       Logistic         Weibull
The goodness of fit tests used are the Kolmogorov-Smirnov test (KS) and the Anderson-Darling test (AD). Both goodness of fit tests give strong results, especially when it is necessary to test asymmetric and heavy tail probability distributions. The Kolmogorov-Smirnov test is independent of any Gaussian assumption and has the benefit of not needing a great number of observations. The Anderson-Darling test is a refinement of the KS test, specially suited for heavy tail distributions. Once we fit the appropriate distribution based on the sample or the time series available, we can simulate the values that the series will take using a sampling method such as Monte Carlo or Latin Hypercube. After 20,000 trials, the value corresponding to the first percentile is presented in Table IV, and these results are compared with the values obtained from the confidence interval using Arch models and the normal probability distribution.
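A sketch of that simulation step follows, assuming scipy's logistic as the candidate distribution (the paper's exact fitting software is not specified): fit the distribution to the returns, draw 20,000 trials by plain Monte Carlo, and read off the first percentile.

```python
import numpy as np
from scipy import stats

def simulated_var(returns, dist=stats.logistic, trials=20_000, pct=1.0, seed=1):
    """Fit a candidate distribution, simulate by Monte Carlo and return
    the pct-th percentile of the simulated values."""
    params = dist.fit(returns)                  # e.g. logistic, genextreme, weibull_min
    sims = dist.rvs(*params, size=trials,
                    random_state=np.random.default_rng(seed))
    return np.percentile(sims, pct)
```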
Table IV

Time Series    Value at 1%,        Value at 1%,        Arch model
               1st distribution    2nd distribution
               fitted              fitted
5 yr. T bond   -1.02               -5.78               Egarch(1,1)
Bayer          -10.03              -21.78              Egarch(1,1)
Dow Jones      -3.35               -3.90               Egarch(1,1)
Bovespa        -5.46               -6.65               Tarch(1,1)
Merval         -6.50               -14.95              Egarch(1,1)
IDP            -2.34               -5.86               Egarch(1,1)
The results of the Arch models presented in Table IV are detailed in the following part; but considering that the number of values found under the lower limit does not exceed 1% of the total observations, the combination of an Arch model and the normal distribution provides a good fit for the VaR forecast.
III.
As a result of the previous considerations about the volatility forecast and the distribution fitted to compute the Market VaR, the conclusions arrived at are the following:
1) The volatility presents itself in clusters, and there is a strong incidence of heteroscedasticity in the model used to forecast the volatility for the period t+1.
2) The prices of stocks and the corresponding market indexes are agitated by bad news produced by terrorism, war, economic depression, adulterated balance sheets, etc., and in consequence the asymmetric models are the selected Arch models.
The use of an asymmetric model, which weights the negative results, produces leverage in the volatility forecast: the impact of negative returns directly widens the parametric space and produces a greater value of volatility.
In the following graphs, we can observe the volatility through time.
(Graphs: conditional volatility over the sample period for the Merval index and for the Dow Jones index.)
As an example, we can observe in the Merval index, the index of the Stock Exchange of Buenos Aires, Argentina, that the great volatility in the middle of the graph is a result of the change of government at the end of 2001 and the beginning of 2002. In the other graph, the Dow Jones index has been agitated by September 11, the bankruptcy crises of Enron and MCI, and the war news at the end.
Thus the asymmetric Garch model together with the normal distribution of probabilities produces the best approach to forecast how much money we will lose tomorrow, while the distribution fitted to the returns only forecasts the VaR for a long range of time. Remember that when we think about market risk, we think about the result for tomorrow, not for the next year.
IV.
Some results
The results of estimating the volatility for the next period for the time series analyzed are the following:
Table V

Series      Model         Mean          ω            α            β            γ            Outliers⁹   Outliers /
                          return                                                                        observations
5 years     Egarch(1,1)   -             -            -            -            -            -           -
T bond
Bayer       Egarch(1,1)   -0.233684     0.174492     -0.142854    0.974717     -0.185586    1           0.5%
                          (0.194770)    (0.194770)   (0.028628)   (0.006145)   (0.035579)
Dow Jones   Egarch(1,1)   -0.036214     -0.036214    0.060276     0.974012     -0.129700    10          1%
                          (0.036492)    (0.017712)   (0.022571)   (0.006254)   (0.014525)
Bovespa     Tarch(1,1)    0.003047      0.397125     -0.001285    0.833621     0.163282     10          1%
                          (0.063416)    (0.089869)   (0.014372)   (0.034005)   (0.021862)
Merval      Egarch(1,1)   -0.051153     -0.121925    0.198057     0.983886     -0.064811    13          1.1%
                          (0.081690)    (0.016902)   (0.025022)   (0.006189)   (0.014907)
IDP         Egarch(1,1)   0.096725      0.359076     0.188690     0.578215     -0.181756    3           0.9%
                          (0.053694)    (0.031723)   (0.028072)   (0.074028)   (0.181756)
The importance of these results is that the percentage of outliers over the number of observations does not exceed 1%, equivalent to the probability of the first percentile. Only in the case of the Merval index does the back test show a total of outliers greater than 1%. In this case it is reasonable, because at the end of 2001 the end of convertibility took place.
⁹ The outliers are the negative returns that exceed the VaR forecasted using the normal probability distribution for the first percentile and the conditional volatility estimated by the Garch model.
V.
Conclusions
Bibliography
Bollerslev, T., 1986, Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics 31, 307-327.
Basel Committee on Banking Supervision, 2001, The New Basel Capital Accord, Bank for International Settlements.
Cruz, Marcelo G., 2002, Modeling, Measuring and Hedging Operational Risk, John Wiley and Sons.
Duffie, Darrell, and Jun Pan, 1997, An Overview of Value at Risk, The Journal of Derivatives, Spring 1997.
Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch, 1997, Modelling Extremal Events for Insurance and Finance, Springer.
Engle, Robert F., 1982, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica 50, 987-1007.
Engle, R., and T. Bollerslev, 1986, Modeling the persistence of conditional variances, Econometric Reviews 5, 1-50.
Engle, R., and Victor K. Ng, 1993, Measuring and testing the impact of news on volatility, The Journal of Finance, Vol. XLVIII, No. 5.
Glosten, Lawrence, Ravi Jaganathan, and David Runkle, 1993, On the relation between the expected value and the volatility of the nominal excess return on stocks, The Journal of Finance, Vol. XLVIII, No. 5.
Greene, William H., 1997, Econometric Analysis, Prentice Hall, New Jersey.
Duan, Jin-Chuan, 1995, The Garch option pricing model, Mathematical Finance, Vol. V, No. 1.
J.P. Morgan, 1994, RiskMetrics, Technical Document No. 4, Reuters.
Jorion, Philippe, 2000, Value at Risk, McGraw-Hill, New York.
Jorion, Philippe, 2001, Handbook Financial Risk Manager 2001-2002, John Wiley and Sons.
Nelson, D., 1990, Conditional heteroskedasticity in asset returns: A new approach, Econometrica 59, 347-370.
Peters, Edgar, 1994, Fractal Market Analysis, John Wiley & Sons, New York.
Reiss, R. D., and M. Thomas, 2001, Statistical Analysis of Extreme Values, Birkhäuser Verlag.
Tagliafichi, Ricardo A., 2001, The Garch model and their application to the VaR, XXXI International Astin Colloquium, Washington, 2001.
Tagliafichi, Ricardo A., 2002, Betas calculated with Garch models provides new parameters for a Portfolio selection with Efficient Frontier, ICA Cancun, 2002.
Tsay, Ruey S., 2002, Analysis of Financial Time Series, John Wiley and Sons.
Wiggins, J. B., 1987, Option values under stochastic volatility: Theory and empirical tests, Journal of Financial Economics 19, 351-372.
Zakoian, Jean-Michel, 1992, Threshold heteroskedastic models, Journal of Economic Dynamics and Control 18.
Annex 1
Exponential Distribution
$$F(x) = 1 - e^{-x/\beta}$$
The value of the parameter $\beta$ is estimated by the following estimator:
$$\hat{\beta} = \frac{1}{n}\sum_{i=1}^{n}X_i$$
Weibull Distribution
The Swede Waloddi Weibull presented this distribution in 1939. Financial econometrics paid attention to this distribution in the last part of the last century. The PDF and CDF are the following, in the same order:
$$f(x) = \frac{\alpha}{\beta^{\alpha}}\,x^{\alpha-1}\,e^{-(x/\beta)^{\alpha}}$$
$$F(x) = 1 - e^{-(x/\beta)^{\alpha}}$$
Pareto Distribution
The PDF and CDF are the following:
$$f(x) = \frac{\alpha\,\beta^{\alpha}}{(x+\beta)^{\alpha+1}}$$
$$F(x) = 1 - \left(\frac{\beta}{x+\beta}\right)^{\alpha}$$
The parameters can be estimated by the method of moments:
$$\hat{\alpha} = \frac{2\left(n\sum_{i=1}^{n}x_i^2 - \left(\sum_{i=1}^{n}x_i\right)^2\right)}{n\sum_{i=1}^{n}x_i^2 - 2\left(\sum_{i=1}^{n}x_i\right)^2}, \qquad \hat{\beta} = \frac{\sum_{i=1}^{n}x_i\,\sum_{i=1}^{n}x_i^2}{n\sum_{i=1}^{n}x_i^2 - 2\left(\sum_{i=1}^{n}x_i\right)^2}$$
Logistic Distribution
$$f(x) = \frac{z}{\beta\,(1+z)^2}, \qquad F(x) = \frac{1}{1+z}, \qquad z = e^{-(x-\alpha)/\beta}$$
where $\alpha$ is the location parameter (mode) and $\beta$ is the scale factor, $\beta = \sigma\sqrt{3}/\pi$.
The Extreme Value Distribution
The extremes of a series can be standardized by the parameters of location (mode), scale, and shape, selected to show an appropriate distribution of the standardized extremes.
Frechet, Gumbel and Weibull define three important extreme value distributions. A unified representation of those distributions is given by the Extreme Value Distribution, or Generalized Extreme Value, whose three parameters $\gamma$, $\mu$, $\sigma$ arise from the limit distribution of the maxima of random variables.
For the random variable $Y = X_{1,n}$ we have:
$$z = \frac{y-\mu}{\sigma}$$
$$P(Y \leq y) = F(y) = F_0(z) = \exp\left(-(1+\gamma z)^{-1/\gamma}\right), \qquad 1+\gamma z > 0$$
where $\gamma$ is the shape parameter. According to the values that $\gamma$ takes, the EVD shows the following behavior:
$\gamma = 0$: Gumbel Distribution
$\gamma > 0$: Frechet Distribution
$\gamma < 0$: Weibull Distribution
The purpose is to find a practical form to estimate the parameter values in order to use the EVD. The most important parameter is $\gamma$, which describes the weight of the tails of the distribution. If the shape parameter $\gamma$ is significant, then the EVD is significant too. If the shape parameter is not significant, then a log normal distribution or a logistic distribution is a good distribution to describe the behavior of the random variable.
One form of approximating the calculus of the three parameters $\gamma$, $\mu$, $\sigma$ is the PWM, or probability weighted moments, method, which follows these steps:
$$m_r(\gamma,\mu,\sigma) = \frac{1}{n}\sum_{i=1}^{n}X_i\,U_i^r$$
where U is a plotting position that follows a free distribution and takes the probability as $p_{k,n} = [(n-k)+0.5]/n$.
The best approach to estimate the parameters $\gamma$, $\mu$, and $\sigma$ is presented by Hosking in 1985, following this formula:
$$m_r = \frac{1}{r+1}\left\{\mu + \frac{\sigma}{\gamma}\left[1 - (1+r)^{-\gamma}\,\Gamma(1+\gamma)\right]\right\}, \qquad \gamma > -1,\ \gamma \neq 0$$
The value of $\gamma$ arises from the iterative estimations also suggested by Hosking:
$$\hat{\sigma} = \frac{\gamma\,(2m_2 - m_1)}{\Gamma(1+\gamma)\,(1-2^{-\gamma})}$$
$$\hat{\mu} = m_1 + \hat{\sigma}\,\frac{\Gamma(1+\gamma) - 1}{\gamma}$$
$$\hat{\gamma} = 7.8590\,c + 2.9554\,c^2, \qquad c = \frac{2m_2 - m_1}{3m_3 - m_1} - \frac{\log 2}{\log 3}$$
The values of the scale and location parameters, $\sigma$ and $\mu$, depend on the value taken by the shape parameter $\gamma$. Once we obtain the parameter values, we can estimate the maximum or the minimum values of the series. To estimate the quantiles at a certain level of confidence, the value of the random variable can be estimated as follows:
$$x_p = \mu + \frac{\sigma}{\gamma}\left[1 - (-\ln p)^{\gamma}\right]$$
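The PWM steps above can be collected into a short routine. The sketch below is an illustrative implementation of Hosking's formulas as reconstructed here; the plotting positions and sign conventions are assumptions carried over from the text, not the author's exact code.

```python
import numpy as np
from math import gamma as G, log

def gev_pwm(x):
    """PWM estimates (gamma, sigma, mu) for the GEV, following Hosking."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.5) / n          # plotting positions
    m1, m2, m3 = x.mean(), np.mean(x * p), np.mean(x * p ** 2)
    c = (2 * m2 - m1) / (3 * m3 - m1) - log(2) / log(3)
    g = 7.8590 * c + 2.9554 * c ** 2             # shape parameter
    sigma = g * (2 * m2 - m1) / (G(1 + g) * (1 - 2 ** -g))
    mu = m1 + sigma * (G(1 + g) - 1) / g
    return g, sigma, mu

def gev_quantile(p, g, sigma, mu):
    """x_p = mu + (sigma/gamma) * (1 - (-ln p)^gamma)."""
    return mu + sigma / g * (1 - (-log(p)) ** g)
```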
The Kupiec adjustment
The variance of a quantile estimator is proportional to
$$\frac{p\,(1-p)}{f(x)^2}$$
If we apply the Kupiec considerations for the first percentile, where the normal density at the 1% quantile is 0.026652, the value of z is:
$$z_{Kupiec} = \frac{\sqrt{0.01\,(1-0.01)}}{0.026652} = 3.7332$$
In consequence, for the same probability, the parameter space of the random variable is $3.7332\,\sigma$ against the $2.33\,\sigma$ of a normal distribution.
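The number can be checked directly. This small illustrative calculation evaluates the standard normal density at the 1% quantile and the resulting Kupiec coefficient.

```python
from math import sqrt, exp, pi

p = 0.01
z_normal = 2.3263                            # standard normal 1% quantile (abs value)
f = exp(-z_normal ** 2 / 2) / sqrt(2 * pi)   # normal density there, ~ 0.02665
z_kupiec = sqrt(p * (1 - p)) / f             # ~ 3.733, the 3.7332 quoted above
print(f, z_kupiec)
```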