ABSTRACT
This paper explores different approaches to modelling and forecasting VaR, using both historical simulation and volatility-weighted bootstrap
methods, where volatility is estimated using GARCH (1,1) and EGARCH (1,1). It examines the one-day predictive ability of three historical
simulation VaR models at the 90%, 95%, and 99% confidence levels for developed and emerging equity markets over the period 2011-2017,
which witnessed difficult and extreme market conditions. 870 scenarios of future returns are generated for each of the 500 days representing the
out of sample period, extending from March 2015 up to January 2017, in order to estimate the corresponding VaR for both markets. The
GARCH (1,1) volatility-weighted model is accepted for both markets and is classified as the best performing model. The EGARCH (1,1)
volatility-weighted model's results were inconclusive; the back-test was accepted at all confidence levels for the developed markets
while it was rejected at the 99% confidence level for the emerging markets. The basic historical simulation failed to estimate an accurate VaR for
the emerging markets.
Key words: Modeling Value at Risk (VaR); MSCI world index; MSCI emerging markets index; volatility-weighted bootstrap methods;
GARCH models
1. INTRODUCTION
Increased uncertainty in financial markets is the main driver behind the development of effective market
risk measures. Wide movements in market prices led to the adoption of risk measures that capture and help
mitigate financial risk. Senior management, along with regulatory requirements, demands that risk be quantified and
allocated in order to make comprehensive investment decisions. For this reason, quantifying market risk
became essential in the world of finance. Value at Risk (VaR), a well-known tool for market risk management,
became a widely used instrument in the 1990s, although its origins go back to 1952. The motivation behind
estimating VaR can be attributed to previous financial crises that led the Basel Committee to set minimum
capital requirements, which can be computed through VaR. VaR is defined as the maximum loss amount given a specific
confidence interval and a specific time horizon. VaR allows market risk to be quantified and risk limits to be
compared. Additionally, VaR facilitates the formulation of hedging policies and the evaluation of the effect of a
transaction on the portfolio's net risk. While the definition of VaR is agreed upon, there is no consensus on how
to calculate it. To date, no ideal model for calculating VaR has been derived, and risk managers rely on different
calculation methods. Models for VaR calculation include the parametric approaches, which assume a certain
distribution of exposures, such as the variance-covariance method, and the non-parametric approaches, such as
historical simulation, which can accommodate any distribution and relies on historical data, as well as Monte
Carlo simulation, which involves developing a model for future returns (Jorion, 2007).
This paper models the VaR of a particular type of asset: the “MSCI world index” and the “MSCI
emerging markets index”. Both indices capture large and mid-cap representation across 23 developed
markets and 23 emerging markets. The purpose behind choosing these two indices is to compare the
different outcomes of VaR when applied to diverse markets during a critical period, extending from
2011 to 2017, that was particularly representative of unusual market conditions and extreme events.
This paper evaluates three methods of calculating VaR, all categorized under the non-parametric
approach. The first is the historical simulation, a traditional approach extensively used by risk managers
due to its simplicity. In 1998, Hull and White introduced an extension of the basic historical simulation
that incorporates volatility updating into the historical data. Hence, the second and third methods involve
the use of the volatility-weighted bootstrap model, whereby volatility is computed using symmetric and
asymmetric GARCH models, GARCH (1,1) and EGARCH (1,1) respectively. For each stock index, the
parameters are estimated and the goodness of fit of these models is tested. The winning model based on
the back-testing methodology reveals how diverse markets and selected time periods can affect the
performance of VaR.
The paper is structured as follows. Section 2 is a literature review of the performance of different
VaR models for emerging and developed markets. Section 3 reviews the methodology and defines the
in sample and out of sample data while reviewing the specificities of the applied econometric models
together with the back-testing methodology. Section 4 portrays the main findings, where the parameters
of each of the GARCH models are estimated 10 times, once every 50 sub-samples, in order to obtain an accurate
estimate of the volatility, and where 870 scenarios of future returns are generated for each of the 500
days representing the out of sample period to estimate the corresponding VaR. Also, this section
assesses the results of the Kupiec back-test and the predictive ability of the chosen VaR models. Section
5 concludes and discusses the empirical findings.
2. LITERATURE REVIEW
Many studies have tried to assess the accuracy of VaR models for different types of assets. Some
of them compare different VaR models for a given type of asset, while others assess the performance
of a given model when applied to different types of assets and different observation periods
(Montero et al., 2010).
A comprehensive study by Berkowitz and O'Brien (2002) examines VaR models at six US financial
institutions. The results showed that VaR was highly inaccurate in some cases and that losses suffered by the
banks exceeded the estimated VaR. The banks' models were not able to adapt to changes in
volatility. Their results suggest that simpler models such as GARCH can perform better than banks'
structural models and could even serve as a replacement for them. This was echoed by Lucas (2000),
who found that simpler univariate VaR models perform better than much more sophisticated ones.
Additionally, Jorion (2007) states that in the presence of volatility clusters, VaR estimates are more
accurate when utilizing the GARCH models.
Angelidis et al. (2004) evaluated the accuracy of daily VaR, using a family of ARCH models, for
five stock indices in Europe, Japan, and the U.S. (CAC40, DAX30, FTSE100, NIKKEI225 and S&P500),
with different distributional assumptions and different sample sizes. The results showed that using
ARCH models based on the Student's-t distribution or the generalized error distribution produces acceptable
results. In contrast, using these models with the normal distribution gave insufficient VaR estimates.
Additionally, the sample size was seen to have an important impact on VaR accuracy; for example, when
using low confidence levels with a sample size of less than 2000 observations, the probability values
improved for the GARCH(1,1) model.
Dimitrakopoulos et al. (2010) investigated the efficiency of VaR approaches for 20 stock markets,
covering America, Asia and Europe, 16 of which represent emerging markets and 4 developed markets.
The second part of their research examined VaR approaches for the same stock markets during a crisis
period, namely 1997-1999, covering the Asian, Russian, and Brazilian financial crises. Interestingly, for
both markets, the same VaR models were seen to perform best. They noticed a certain pattern, whereby
the majority of the VaR models tended to overestimate VaR for portfolios in emerging markets when a
large sample size is used, and to underestimate VaR for portfolios in developed markets, irrespective of
the sample size chosen. Additionally, VaR models seemed to be affected less during the crisis period in
developed markets. Finally, the performance of parametric VaR models improved in the post-crisis period
in comparison to non-parametric models.
Gencay, Selcuk, and Ulugulyagci (2003) studied VaR models for markets with high volatility
represented by the Istanbul Stock Exchange (ISE-100) Index. They compared the traditional approaches
of VaR such as GARCH, historical simulation and variance-covariance methods to the extreme value
theory models. They concluded that the variance-covariance method performed the worst for any sample
size. The GARCH(1,1) model also performed poorly except at the 95% confidence level. At higher
confidence levels, on the other hand, the extreme value VaR performed the best.
Maghyereh and Al-Zoubi (2006) were interested in estimating VaR for emerging stock markets in
the MENA region specifically in Bahrain, Egypt, Jordan, Morocco, Oman, Saudi Arabia, and Turkey.
For most indices, the extreme value theory seems to give the best VaR estimates. However, the weak
performance of the EVT in the Moroccan and Turkish markets was attributed to the low number of extreme
values in such markets.
Choi and Min (2011) attempted to find the factors behind the different performances of VaR by
using conditional and unconditional approaches. Their analysis was conducted on different sets of
financial data consisting of stock market indices, stock prices and exchange rate data. The results
showed that the GARCH models can be improved if used with more flexible distributions. Thus,
replacing the normal distribution with the Student's-t or generalized-t distributions considerably
improves the performance of VaR models and solves the underestimation problem associated with the
GARCH-normal model at the 99% and 99.5% confidence levels.
Huang and Tseng (2009) used the kernel estimator (KE) approach which is a non-parametric method
of estimating VaR and an improvement of the extreme value theory. The kernel estimator (KE) allowed
them to directly study the tail behavior of the asset returns. The most reliable VaR estimates were those
of the KE approach for both developed and emerging countries, while the other approaches
were found to overestimate VaR. Also, Giot and Laurent (2003) estimated VaR for three international
stock indices for traders with short and long positions. They found that VaR models based on a skewed
Student’s-t distribution performed better than the ones based on normal distribution or on Student’s-t
distribution.
Andjelic et al. (2010) used the delta normal and historical simulation approaches to test the
performance of VaR. They used a sample of data for stock indices representing the central and Eastern
European countries (Slovenian, Croatian, Serbian and Hungarian stock indices) aiming at investigating
VaR performance in developing countries by using different rolling windows with confidence levels of
95% and 99%. In stable market conditions, the proposed approaches performed well at the 95% confidence
level, whereas in volatile market conditions, the tested approaches gave accurate results at the 99%
confidence level.
Driven by the fact that the available literature does not clearly indicate a superior model for VaR
estimation, Miletic and Miletic (2015) also chose several stock exchange indices in the central and Eastern
European emerging capital markets, specifically in the Czech Republic, Hungary, Croatia, Romania and
Serbia, to study the performance of VaR models. They used symmetric and asymmetric GARCH
models based on the Student's-t distribution and the normal distribution. They found that results vary
significantly across confidence levels and that GARCH-type models perform better than the
RiskMetrics and historical simulation models.
Many articles have tackled the sample size issue to check whether it affects the accuracy of VaR estimates.
For instance, Hendricks (1996) applied twelve VaR approaches, and the results showed that models
used with longer observation periods produce better outcomes. The same conclusion was reached by Danielsson
(2002), who found that VaR estimates are more accurate over longer time periods. On the other hand,
Hoppe (1998), as cited in Angelidis et al. (2004), argued that a smaller sample size could result in a more accurate
VaR. Frey and Michaud (1997) also stated that, in order to capture recent structural changes in the
return data due to changes in trading behavior, a smaller sample size would be appropriate.
3. METHODOLOGY AND DATA
The review of the available literature in the previous section reveals the absence of a superior model
for VaR calculation. Given the growing importance of VaR calculation in the world of finance, the need
to keep investigating VaR models with different sample sizes and periods becomes more evident.
Therefore, this paper evaluates VaR for two market indices, the “MSCI world index” and the “MSCI
emerging markets index”, over the period November 2011 - January 2017, which witnessed several volatile
episodes including the plunge in oil prices in the second half of 2014. The tested models all fall under the
non-parametric approach and involve the basic historical simulation with equal weights, and the incorporation
of volatility using GARCH (1,1) and EGARCH (1,1). Outputs are compared and ranked from the most to
the least accurate.
“MSCI world index” and “MSCI emerging markets index” closing prices are downloaded from
www.msci.com from November 1, 2011 till January 31, 2017, totaling 1,371 daily observations. For
each index, the 1,371 observations are used to build 500 sub-samples, each consisting of 871 observations,
resulting in a total of 435,500 daily observations. The sub-samples are constructed as a
moving window: to construct a new sub-sample, the first observation of the previous sub-sample is
deleted and the observation following the previous sub-sample is added.
The data from November 1, 2011 till March 3, 2015, totaling 871 observations and considered as
the first sub-sample, is used to compute VaR for March 4, 2015, the first day of the out-of-sample
period; the second sub-sample, from November 2, 2011 till March 4, 2015, is used to compute VaR
for March 5, 2015, the second day of the out-of-sample period, and so on. The data from March
4, 2015 till January 31, 2017, totaling 500 observations, is used for the out of sample period to back-test VaR. For the
purposes of VaR calculation, the daily observations of the two indices are converted into daily returns
using the following equation:
$u_i = \dfrac{V_i - V_{i-1}}{V_{i-1}}$   (1)

where $V_i$ and $V_{i-1}$ are respectively the closing prices of the index at the end of day $i$ and at the end of the previous day $i-1$.
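As an illustration only (not part of the original study), the short Python sketch below builds the daily returns of equation (1) and the 500 moving 870-return windows described above; the function names and default arguments are assumptions chosen for clarity.

```python
import pandas as pd

def simple_returns(prices: pd.Series) -> pd.Series:
    """Daily returns u_i = (V_i - V_{i-1}) / V_{i-1}, as in equation (1)."""
    return prices.pct_change().dropna()

def rolling_subsamples(returns: pd.Series, window: int = 870, n_windows: int = 500):
    """Yield the moving 870-return windows; window k is used to estimate VaR
    for day k of the out-of-sample period."""
    for k in range(n_windows):
        yield returns.iloc[k:k + window]
```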
The descriptive statistics of the daily returns of the two indices are presented in Table 1.
Table 1. Descriptive Statistics of the “MSCI World Index” and “MSCI Emerging Markets Index”
The mean daily return for the “MSCI world index” was 0.0337% with a standard deviation of
0.7504% compared to -0.000365% and 0.9228% respectively for the MSCI emerging markets index.
Both markets exhibit very close maximum and minimum returns. The Jarque-Bera probability confirms
the non-normality of both return distributions. This is further confirmed by the kurtosis values greater
than 3, revealing leptokurtic return distributions for both markets.
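A minimal sketch of how the statistics reported in Table 1 can be reproduced; the function name and dictionary keys are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

def describe_returns(u: np.ndarray) -> dict:
    """Summary statistics of a daily return series."""
    jb_stat, jb_pvalue = stats.jarque_bera(u)
    return {
        "mean": float(np.mean(u)),
        "std": float(np.std(u, ddof=1)),
        "min": float(np.min(u)),
        "max": float(np.max(u)),
        "kurtosis": float(stats.kurtosis(u, fisher=False)),  # > 3 indicates a leptokurtic distribution
        "jarque_bera_p": float(jb_pvalue),                    # a small p-value rejects normality
    }
```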
From the plots of the return series in Figure 1 and Figure 2, persistence and volatility clustering are
visible, which implies that volatility can be forecasted.
Figure 1. “MSCI World Index” Daily Return Series, November 2011 - January 2017
Figure 2. “MSCI Emerging Markets Index” Daily Return Series, November 2011 - January 2017
The Augmented Dickey Fuller (ADF) test is used to check for stationarity. Table 2 summarizes the
ADF test for the two indices’ returns. The test statistics confirm the stationarity of data samples.
Consequently, no transformation of the return series is needed.
Table 2. ADF Test Results (t-Statistics and p-Values) for the “MSCI World Index” and “MSCI Emerging Markets Index” Returns
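A short sketch of the stationarity check summarized in Table 2, assuming statsmodels' augmented Dickey-Fuller test; the 5% significance threshold is an assumption for illustration.

```python
from statsmodels.tsa.stattools import adfuller

def is_stationary(returns, alpha: float = 0.05) -> bool:
    """Reject the unit-root null hypothesis => the return series is stationary."""
    adf_stat, p_value, *rest = adfuller(returns)
    return p_value < alpha
```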
VaR is defined as the maximum loss amount given a specific confidence interval and a specific time
horizon (Jorion, 2007). Mathematically it can be written as:
$P(L(t) \leq \text{VaR}) = 1 - \alpha$   (2)
with $0 \leq \alpha \leq 1$, where $L(t)$ is the loss at time $t$; the loss will not exceed VaR with a
probability of $(1 - \alpha)$.
Using the 500 sub-samples of data, 500 VaR estimates are calculated for each day of the out of
sample period from March 4, 2015 till January 31, 2017 at different confidence levels: 90%, 95% and
99% for both the developed and emerging markets.
All models used to estimate VaR fall under the historical simulation approach, which is known to be
non-parametric.
The main assumption of this method is that the returns are independently and identically distributed
(IID), which means that returns are affected only by new information and are uncorrelated over time.
The historical simulation approach assumes that historical price changes reflect future price
changes, and it requires collecting as much historical data as possible to estimate VaR. Historical
simulation consists of generating a set of data representing the daily changes in the market variable
over a period of time. Using the past observed day-to-day variations in the values of the two selected
market indices, the profit/loss probability distribution can be estimated for each index over a future
period of time. The first sub-sample of each stock index, consisting of 871 days of observations from
November 1, 2011 till March 3, 2015, is used to create 870 alternative scenarios for what can happen
on day 872 (March 4, 2015). Scenario 1 assumes that the percentage changes in the value of the stock
index are equivalent to what they were on day 1, scenario 2 assumes that the percentage changes in
the value of the stock index are equivalent to what they were on day 2, and so on. The value under the ith
scenario is calculated as follows (Hull, 2012):
$V_i = v_n \dfrac{v_i}{v_{i-1}}$   (3)

where $v_i$ is the value of the stock index on day $i$ and $v_n$ is the value of the stock index on the last day
of the chosen time period.
Using equation (4), the return scenarios are calculated for each simulation trial, resulting in 870 return
scenarios from which the losses and gains expected on the first day of the out of sample period are
deduced.

$u_i = \dfrac{V_i - v_n}{v_n}$   (4)

where $V_i$ is the value of the stock index under the $i$th scenario and $v_n$ is the value of the stock index on
the last day of the chosen time period.
The same procedure outlined above is repeated for each sub-sample in order to estimate VaR for
500 days from March 4, 2015 till January 31, 2017.
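As a minimal sketch of the basic historical simulation just described (not the authors' code), the 870 equally weighted return scenarios form the profit/loss distribution and VaR is read from its lower tail; the function name and the sign convention (VaR reported as a positive loss) are assumptions.

```python
import numpy as np

def basic_hs_var(window_returns: np.ndarray, confidence: float) -> float:
    """One-day VaR (reported as a positive loss) from equally weighted historical scenarios."""
    # The (1 - confidence) quantile of the 870 return scenarios is the loss threshold.
    return -np.percentile(window_returns, 100.0 * (1.0 - confidence))

# Example usage at the three confidence levels used in the paper:
# var_90, var_95, var_99 = (basic_hs_var(window, c) for c in (0.90, 0.95, 0.99))
```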
Hull and White (1998) developed an extension of the basic historical simulation that incorporates
volatility updating into the historical returns. Because the volatility of a market variable may vary over
time, being high in some periods and low in others, they recommend adjusting the historical data to
reflect the variation in volatility. This approach uses the variation in volatility in a natural way to
estimate VaR by including more recent information. The first sub-sample of each stock index,
consisting of 871 days of observations from November 1, 2011 till March 3, 2015, is used to create 870
alternative scenarios for what can happen on day 872 (March 4, 2015). Using this approach, the value
of the stock index under the ith scenario becomes:
$V_i = v_n\, \dfrac{v_{i-1} + (v_i - v_{i-1})\,\sigma_{n+1}/\sigma_i}{v_{i-1}}$   (5)

where $v_i$ is the value of the stock index on day $i$; $v_n$ is the value of the stock index on the last day of
the chosen time period; $\sigma_i$ is the estimate of the daily volatility on day $i$; and $\sigma_{n+1}$ is the most recent
estimate of the daily volatility.
Similar to the basic historical simulation method, equation (4) is used to calculate the return scenarios
for each simulation trial, resulting in 870 return scenarios from which the losses and gains expected on
the first day of the out of sample period are deduced. Hull and White (1998) replaced the return
scenarios by the following equation:

$u_i^{*} = \dfrac{\sigma_{n+1}}{\sigma_i}\, u_i$   (6)

where $u_i^{*}$ is the volatility-adjusted return scenario; $u_i$ is the return on day $i$; $\sigma_i$ is the estimate of the
daily volatility on day $i$; and $\sigma_{n+1}$ is the most recent estimate of the daily volatility.
Equation (6) allows the return scenarios to be calculated directly from the volatilities and returns of
the indices; hence, calculating separate price scenarios is unnecessary with this technique. However,
equation (6) is used here only to confirm the results obtained from equation (4) under the volatility-weighted
historical simulation method. The same procedure outlined above is repeated for each sub-sample in
order to estimate VaR for 500 days from March 4, 2015 till January 31, 2017.
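A sketch of the Hull-White adjustment of equation (6), in which each historical return is rescaled by the ratio of the most recent volatility estimate to the volatility prevailing when that return was observed; naming and the positive-loss sign convention follow the earlier sketch and are assumptions.

```python
import numpy as np

def volatility_weighted_var(window_returns: np.ndarray,
                            sigmas: np.ndarray,
                            sigma_latest: float,
                            confidence: float) -> float:
    """One-day VaR from Hull-White volatility-adjusted scenarios.

    sigmas[i] is the GARCH/EGARCH volatility estimated for day i of the window;
    sigma_latest is the most recent (day n+1) volatility estimate.
    """
    adjusted = window_returns * (sigma_latest / sigmas)          # equation (6)
    return -np.percentile(adjusted, 100.0 * (1.0 - confidence))
```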
As previously indicated, incorporating volatilities requires estimating daily variance using the
Generalized Autoregressive Conditional Heteroskedasticity model GARCH(1,1), and the Exponential
GARCH model EGARCH(1,1) described below.
Engle (1982) introduced the Autoregressive Conditional Heteroskedastic model, ARCH, which
permitted the conditional variance to vary over time as a function of past errors. Bollerslev (1986)
generalized this model and developed the GARCH model by adding the lagged conditional variances.
GARCH(p,q) can be written as:
$\varepsilon_t \mid \psi_{t-1} \sim N(0, h_t)$   (7)

$h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i h_{t-i} = \alpha_0 + A(L)\,\varepsilon_t^2 + B(L)\,h_t$   (8)

with $p \geq 0$, $q > 0$, $\alpha_0 > 0$, $\alpha_i \geq 0$ for $i = 1, \ldots, q$, and $\beta_i \geq 0$ for $i = 1, \ldots, p$,

where $\varepsilon_t$ denotes a real-valued discrete-time stochastic process and $\psi_t$ denotes the information set
($\sigma$-field) of all information through time $t$.

The GARCH(p,q) regression model is obtained by letting the $\varepsilon_t$'s be innovations in a linear
regression:

$\varepsilon_t = y_t - x_t' b$   (9)
where $y_t$ is the dependent variable, $x_t$ is a vector of explanatory variables, and $b$ is a vector of
unknown parameters.
$h_t$ can be expressed as a distributed lag of past $\varepsilon_t^2$'s when all the roots of $1 - B(z) = 0$ lie outside
the unit circle:

$h_t = \alpha_0 \left[1 - B(1)\right]^{-1} + A(L)\left[1 - B(L)\right]^{-1} \varepsilon_t^2 = \alpha_0 \left(1 - \sum_{i=1}^{p} \beta_i \right)^{-1} + \sum_{i=1}^{\infty} \delta_i\, \varepsilon_{t-i}^2$   (10)

The power series expansion of $D(L) = A(L)\left[1 - B(L)\right]^{-1}$ allows the $\delta_i$'s to be found.
Brooks (2008) stated that the lag order (1,1) is adequate to capture the data's volatility clustering.
Additionally, GARCH(1,1) is the most popular of the GARCH models since it calculates the variance
based on the latest observation and the latest estimate of the variance rate; the GARCH(1,1) model is
defined by the following equation:

$h_t = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 h_{t-1}$   (11)

with $\alpha_0 > 0$, $\alpha_1 \geq 0$, $\beta_1 \geq 0$; for a stable model, $\alpha_1 + \beta_1 < 1$ must hold.
The following notation of GARCH(1,1) is adopted in this study to calculate the variance for
day $n$:

$\sigma_n^2 = \gamma v_L + \alpha u_{n-1}^2 + \beta \sigma_{n-1}^2$   (12)

where $\gamma v_L = \omega$ and the parameters $\omega$, $\alpha$, and $\beta$ are weights; $\alpha$ and $\beta$ are the weights assigned
to $u_{n-1}^2$ and $\sigma_{n-1}^2$ respectively.

$\gamma$ can be calculated using equation (13):

$\gamma = 1 - \alpha - \beta$   (13)

$v_L$ is the long-run variance and can be calculated using the following equation:

$v_L = \dfrac{\omega}{1 - \alpha - \beta}$   (14)
The parameters of the GARCH(1,1) model are estimated 10 times, once every 50 sub-samples, using the
maximum likelihood approach (equation (17)), in order to obtain an accurate estimate of the volatility and to
keep the computation time short. The estimation window therefore slides forward with each day of the
out of sample period.
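The following minimal sketch implements the variance recursion of equation (12); seeding the first variance with the sample variance of the window is an assumption for illustration, not something specified in the paper.

```python
import numpy as np

def garch_variances(u: np.ndarray, omega: float, alpha: float, beta: float) -> np.ndarray:
    """sigma_n^2 = omega + alpha * u_{n-1}^2 + beta * sigma_{n-1}^2 (equation (12), omega = gamma * v_L)."""
    sigma2 = np.empty(len(u))
    sigma2[0] = np.var(u)  # seed value: an assumption, not from the paper
    for n in range(1, len(u)):
        sigma2[n] = omega + alpha * u[n - 1] ** 2 + beta * sigma2[n - 1]
    return sigma2
```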
Despite its simplicity, the GARCH(1,1) model does not allow the effect of a shock to depend on its
sign, whereas the stock market is known to respond asymmetrically. In fact, volatility increases after a
drop in stock price levels, whereas a rise of the same magnitude may even reduce it. The GARCH(1,1)
model includes only the squared residuals in its conditional variance equation, hence the signs of the
residuals have no impact on the calculated conditional volatility. In finance, bad shocks are known to
have a larger effect on volatility than good shocks; in other words, a falling market leads to higher
volatility than a rising market. This asymmetric influence of news on the market variable is known as
the leverage effect (Miron and Tudor, 2010). To address this limitation, we use the EGARCH model,
which allows asymmetric effects to be considered.
Using the EGARCH(1,1) model, first proposed by Nelson (1991), the variance for day $t$ is
calculated using equation (15):

$\ln \sigma_t^2 = \omega + \beta \ln(\sigma_{t-1}^2) + \gamma \dfrac{u_{t-1}}{\sigma_{t-1}} + \alpha \left[ \dfrac{|u_{t-1}|}{\sigma_{t-1}} - \sqrt{\dfrac{2}{\pi}} \right]$   (15)

whereby the error terms are presumed to be normally distributed with zero mean, $\omega$ is the
long-term average value, and $\alpha$ represents the “GARCH” effect, i.e. the symmetric effect of the model.
Including the parameter $\beta$ allows the persistence of volatility shocks to be captured. The parameter $\gamma$
determines whether there is a leverage effect: if $\gamma = 0$ the model is symmetric, and when $\gamma < 0$,
negative shocks generate more volatility than positive shocks of the same magnitude.
By construction, the EGARCH model ensures that the conditional variance $\sigma_t^2$ is always positive even
if the parameters are negative, since $\ln \sigma_t^2$ rather than $\sigma_t^2$ is modelled.
Hence, one advantage of the EGARCH model over the GARCH model is that the positivity constraints
on the parameters can be ignored. The parameters $\omega$, $\alpha$, $\beta$, and $\gamma$ are estimated 10 times, as previously
explained for the symmetric GARCH model, using the maximum likelihood approach (equation
(17)).
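A sketch of the EGARCH(1,1) recursion of equation (15); as in the GARCH(1,1) sketch, the initial log-variance seed is an assumption chosen for illustration.

```python
import numpy as np

def egarch_variances(u: np.ndarray, omega: float, alpha: float, beta: float, gamma: float) -> np.ndarray:
    """ln(sigma_t^2) = omega + beta*ln(sigma_{t-1}^2) + gamma*z_{t-1} + alpha*(|z_{t-1}| - sqrt(2/pi))."""
    log_s2 = np.empty(len(u))
    log_s2[0] = np.log(np.var(u))  # seed value: an assumption, not from the paper
    for t in range(1, len(u)):
        z = u[t - 1] / np.sqrt(np.exp(log_s2[t - 1]))  # standardized shock
        log_s2[t] = omega + beta * log_s2[t - 1] + gamma * z + alpha * (abs(z) - np.sqrt(2.0 / np.pi))
    return np.exp(log_s2)
```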
The maximum likelihood method determines the parameter values of GARCH-type models by
maximizing the likelihood of the historical data occurring. Let $f(y \mid \theta)$ denote the probability density
function, where $y$ is a random variable conditioned on a set of parameters $\theta$. This function is a
mathematical description of the data-generating process, given an observed sample of the time series.
The joint density of $n$ observations, known as the likelihood function, is the product of the individual
densities:

$f(y_1, \ldots, y_n \mid \theta) = \prod_{i=1}^{n} f(y_i \mid \theta) = L(\theta \mid y)$   (16)

where $y_i$ denotes the time series at time $i$ and $\theta$ denotes the vector of model parameters.
The parameters are constants and their estimation is based on the observed data. We use the log of the
likelihood function (LLF) since it is simpler to work with (Greene, 2003):

$\ln L(\theta \mid y) = \sum_{i=1}^{n} \ln f(y_i \mid \theta)$   (17)
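As a sketch of maximum likelihood estimation of the GARCH(1,1) parameters under Gaussian errors (equation (17)), using scipy's optimizer; the starting values, bounds, and variance seed are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_llf(params, u):
    """Negative Gaussian log-likelihood of GARCH(1,1), i.e. -ln L(theta|y) in equation (17)."""
    omega, alpha, beta = params
    sigma2 = np.empty(len(u))
    sigma2[0] = np.var(u)
    for n in range(1, len(u)):
        sigma2[n] = omega + alpha * u[n - 1] ** 2 + beta * sigma2[n - 1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + u ** 2 / sigma2)

def fit_garch11(u):
    start = np.array([1e-6, 0.05, 0.90])              # illustrative starting values
    bounds = [(1e-12, None), (0.0, 1.0), (0.0, 1.0)]  # keep omega > 0, alpha and beta non-negative
    res = minimize(garch11_neg_llf, start, args=(u,), bounds=bounds, method="L-BFGS-B")
    return res.x                                      # omega, alpha, beta
```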
To evaluate the VaR models, a back-test is performed. This shows how well the model used for
estimating VaR would have performed had it been used in the past. Each time the actual loss exceeds
VaR is counted as an exception. If exceptions occur on 1% of the days, the model for
calculating a one-day 99% VaR is considered accurate.
The most common test used for back-testing VaR is the Kupiec (1995) test, which is used in
this study.
If the confidence level is $X\%$ and the model is accurate, then the probability that the actual loss
exceeds VaR is $p = 1 - X/100$.
The number of exceptions follows a binomial distribution:

$P(N; T, p) = \dbinom{T}{N}\, p^{N} (1 - p)^{T - N}$   (18)

where $N$ is the number of exceptions, $T$ is the number of trials, and $p$ is the probability of failure.
Kupiec (1995) suggested the following log-likelihood ratio (LR) to test the accuracy of VaR:

$LR = -2 \ln\!\left[(1 - p)^{T-N}\, p^{N}\right] + 2 \ln\!\left[\left(1 - \dfrac{N}{T}\right)^{T-N} \left(\dfrac{N}{T}\right)^{N}\right]$   (19)
Equation (19) follows a chi-square distribution with one degree of freedom. There is a 5%
probability that the chi-square variable with one degree of freedom will be more than 3.84. Hence, the
VaR model is rejected whenever the LR is greater than 3.84. The LR value is large for either a low or a
high number of exceptions; hence, VaR models are rejected in both cases, where too many or too few failures
occur. Additionally, the $p$ values (probability of failure) are 0.1, 0.05, and 0.01, corresponding to VaR
confidence levels of 90%, 95% and 99% respectively. The 𝑇 value (number of trials) is 500, constant
for all models since the out of sample period is equivalent to 500 days. Furthermore, the daily losses
are taken into consideration and compared with the estimated VaRs, consequently the 𝑁 values (number
of exceptions) will be determined by counting the number of times the actual loss return exceeds the
computed VaR on a given day.
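A sketch of the Kupiec test of equation (19) applied to the 500-day out-of-sample window; it assumes VaR is expressed as a positive loss and that the exception count lies strictly between 0 and T (the function name is illustrative).

```python
import numpy as np

def kupiec_test(actual_returns: np.ndarray, var_estimates: np.ndarray, p: float):
    """Unconditional coverage test of equation (19); VaR is expressed as a positive loss."""
    N = int(np.sum(actual_returns < -var_estimates))  # number of exceptions
    T = len(actual_returns)                            # number of trials (500 here)
    pi = N / T                                         # observed failure rate (assumes 0 < N < T)
    lr = (-2.0 * ((T - N) * np.log(1.0 - p) + N * np.log(p))
          + 2.0 * ((T - N) * np.log(1.0 - pi) + N * np.log(pi)))
    return N, lr, lr <= 3.84                           # True: model accepted at the 5% level
```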
4. FINDINGS
As illustrated in Table 3, the number of violations where the actual loss exceeds VaR is greater for
the emerging markets index than for the developed markets index. Particularly, at a 95% confidence
level, the world index returns exceeded the VaR limits in 5.8% of the observations, while the
corresponding percentage for the emerging index is 8.4%.
Figures 3 and 4 illustrate the results for the whole out of sample period for both the “MSCI world
index” and the “MSCI emerging markets index”. Clearly, the emerging markets index exhibits a more
volatile structure; hence, the basic historical simulation method was not able to capture the large losses,
and VaR is exceeded on several occasions, as presented in Table 3, where the number of exceptions is
greater for the emerging markets index than for the developed markets index. Also, VaR at the 90%
confidence level visibly displays the poorest performance and seems to understate the risk of both stock
indices. By contrast, the number of times the loss return exceeded the 99% VaR for the world index
seems relatively limited.
Using equally weighted observations fails to capture the shifts in risk, which is a major
disadvantage of the basic historical simulation.
Figure 3. “MSCI World Index” Out-of-Sample Daily Returns Vs. VaR (Basic Historical Simulation)
Figure 4. “MSCI Emerging Markets Index” Out-of-Sample Daily Returns Vs. VaR (Basic Historical Simulation)
The parameters of the GARCH(1,1) and EGARCH(1,1) models are found using EViews 7. We
assume that the probability distribution of the errors is Gaussian, noting that
the Student's t-distribution and the Generalized Error Distribution (GED) were also tested to optimize
the predictive ability of both models, GARCH(1,1) and EGARCH(1,1). The Log Likelihood Function
and the Akaike Information Criterion were calculated, in addition to a series of tests performed on the
standardized residuals (Exhibit 1 and Exhibit 2), which all confirm that the models' assumptions are
respected and that both models are stable. In fact, the calculated means and standard deviations of the
errors are all found to be close to zero and one respectively, the absence of ARCH effects is
confirmed by the heteroskedasticity test, and the serial correlation in the squared residuals is
insignificant under the assumption of normally distributed errors.
As previously mentioned, the GARCH(1,1) parameters are estimated 10 times, once every 50 sub-samples,
under the assumption of normally distributed errors; they are shown in Table 4.
Based on the estimated parameters above, the daily variances are calculated 871 times for each sub-sample
using equation (12), which results in a total of 435,500 variance values for each stock index.
These volatilities are then plugged into equation (5) to generate the price scenarios. The outcome is
870 price scenarios on each day from March 4, 2015 till January 31, 2017, and a total of 435,000
scenarios for each stock index. The generated price scenarios are then substituted into equation (4)
to estimate the return scenarios. On each day of the out of sample period, 870 return
scenarios are created, which represent the profit and loss distribution. Additionally, the return scenarios
are deduced using equation (6). Equations (4) and (6) generated equivalent return scenarios, which
further supports our results. The 90th, 95th, and 99th percentiles of the profit/loss probability
distribution are estimated and represent the VaR confidence levels. We end up
with 500 VaR estimates covering the out of sample period, for each confidence level and for each stock
index. The actual returns in the out of sample period are then used to determine the number of
exceptions, where the loss return exceeded the estimated VaR. Table 5 depicts the number of exceptions
obtained from computing VaR using the GARCH(1,1) volatility-weighted historical simulation.
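Putting the pieces together, the sketch below reuses the hypothetical garch_variances and volatility_weighted_var helpers from the earlier sketches to show the per-day computation just described: the fitted GARCH(1,1) parameters yield the volatility path for the 870-return window, the scenarios are rescaled, and the VaR estimates are read from the tails.

```python
import numpy as np

def one_day_var(window_returns, omega, alpha, beta, levels=(0.90, 0.95, 0.99)):
    """VaR estimates for the next trading day from one 870-return sub-sample window."""
    sigma2 = garch_variances(window_returns, omega, alpha, beta)          # equation (12)
    # One-step-ahead volatility forecast used as the "most recent" estimate sigma_{n+1}.
    sigma_latest = np.sqrt(omega + alpha * window_returns[-1] ** 2 + beta * sigma2[-1])
    return {cl: volatility_weighted_var(window_returns, np.sqrt(sigma2), sigma_latest, cl)
            for cl in levels}
```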
Incorporating GARCH(1,1) into the historical simulation led to a decrease in the number of violations
compared to the basic historical simulation, for both stock indices and at all confidence levels; however,
this is to be confirmed by the Kupiec test. Figures 5 and 6 show the distribution of daily returns in
comparison with the GARCH(1,1)-weighted VaR at the three confidence levels. The VaR curves shift
slightly downwards with a falling market, hence allowing a better capture of the changes in risk,
which explains the decrease in the number of exceptions.
Figure 5. “MSCI World Index” Out-of-Sample Daily Returns Vs. VaR (GARCH(1,1) Volatility-Weighted Historical
Simulation)
Figure 6. “MSCI Emerging Markets Index” Out-of-Sample Daily Returns Vs. VaR (GARCH(1,1) Volatility-
Weighted Historical Simulation)
The same methodology is also implemented at this level. Table 6 summarizes the EGARCH(1,1)
parameters together with the model fit statistics, LLF and AIC.
The coefficients of the asymmetric effect $\gamma$ range between -21% and -5%, indicating that negative shocks
are more destabilizing than positive shocks. On the other hand, the observed values of the first
coefficient of the GARCH component $\beta$ vary between 91% and 99% for the two stock indices, which
reflects the large weight assigned to the lagged variance in determining the current
variance rate. Table 7 and Figures 7 and 8 demonstrate how incorporating EGARCH(1,1) into the historical
simulation also resulted in a lower number of violations, compared to the basic historical simulation,
for both stock indices and at all confidence levels. However, compared to the GARCH(1,1) volatility-weighted
model, the exception counts are inconclusive.
Table 6 reports, for each sub-sample, the constant in the conditional volatility equation ($\omega$), the first coefficient of the ARCH component ($\alpha$), the first leverage coefficient ($\gamma$), the first coefficient of the GARCH component ($\beta$), and the goodness of fit under the normal distribution.
Figure 7. “MSCI World Index” Out-of-Sample Daily Returns Vs. VaR (EGARCH(1,1) Volatility-Weighted Historical Simulation)
The VaR plots for the developed markets seem more coherent than those of the emerging
markets. It is also apparent that the 99% VaR overrates the risk of the emerging
markets during 2015, although it generated 11 violations, hence underrating the risk during other
periods. Consequently, it cannot be concluded whether the EGARCH(1,1) volatility-weighted model
yields accurate VaR estimates.
Comparing the violation counts of the models is inconclusive as to which model yields the most
accurate VaR. The Kupiec test is therefore applied by calculating the LR using equation (19). The results of
the Kupiec test for the developed and emerging markets indices are presented in Table 8.
Figure 8. “MSCI Emerging Markets Index” Out-of-Sample Daily Returns Vs. VaR (EGARCH(1,1) Volatility-Weighted Historical Simulation)
Table 8. Kupiec Test Results for the “MSCI World Index” and the “MSCI Emerging Markets Index”

Models Applied                                       VaR CL   95% critical value        “MSCI world index”        “MSCI emerging markets index”
                                                              (chi-square, 1 d.f.)      LR      Test outcome      LR       Test outcome
Basic historical simulation                           90%         3.84                  4.04    Reject             5.22    Reject
                                                      95%         3.84                  0.64    Accept            10.19    Reject
                                                      99%         3.84                  2.61    Accept             8.97    Reject
Incorporating volatility into historical
simulation using GARCH (1,1)                          90%         3.84                  0.35    Accept             0.02    Accept
                                                      95%         3.84                  0.00    Accept             2.46    Accept
                                                      99%         3.84                  0.72    Accept             1.54    Accept
Incorporating volatility into historical
simulation using EGARCH (1,1)                         90%         3.84                  0.09    Accept             0.77    Accept
                                                      95%         3.84                  0.64    Accept             2.46    Accept
                                                      99%         3.84                  0.22    Accept             5.42    Reject
Source: Own elaboration
The Kupiec test results reveal that both the GARCH(1,1) and the EGARCH(1,1) volatility-weighted
historical simulation models are classified as the best performing models at all confidence levels for the
developed markets index, while the basic historical simulation model is ranked the worst in terms of
accuracy as it is rejected at the 90% confidence level.
The GARCH(1,1) volatility-weighted model was the only model accepted at all confidence levels
for the emerging markets index, whereas the basic historical simulation failed to produce an accurate
VaR at the 90%, 95% and 99% confidence levels. The EGARCH(1,1) volatility-weighted model is ranked
as the second best model, since the 90% and 95% VaRs are accepted while the 99% VaR underestimates
the risk of the emerging markets index. The superiority of the GARCH(1,1) model can be related to its
ability to take volatility changes into account in a natural way and to generate VaR estimates that
include more recent information. Such findings are confirmed by those of Dimitrakopoulos et al. (2010),
who state that the filtered historical simulation, which is a mix between the GARCH model and the
traditional historical simulation, and the extreme value peaks-over-threshold method are the most
successful VaR models for both emerging and developed markets.
On the other hand, the ability of the GARCH(1,1) and the EGARCH(1,1) to incorporate
information into the historical simulation VaR made it possible to obtain VaR estimates that exceed the
maximum loss in the historical data. This is in agreement with the conclusion reached by Hull and
White (1998). The basic historical simulation was ranked as the worst method since it overlooks
volatility changes, which is a main drawback.
Interestingly, VaR models performed differently for the developed and emerging markets indices in
some cases. While the basic historical simulation VaR estimates are accurate at the 95% and 99%
confidence levels for the developed markets index, this method failed at all confidence levels for the
emerging markets index. Similarly, the EGARCH(1,1) volatility-weighted model led to rejecting the 99%
VaR for the emerging markets index while the same model was accepted for the developed markets index.
This implies that VaR models may act differently depending on the attributes of the chosen market,
which corroborates the results reached by Andjelic et al. (2010), who proposed that VaR models
that perform well in developed markets do not necessarily perform well in developing and illiquid markets. Finally,
our results are in line with Jorion (2007), who stated that, when there are volatility clusters, VaR estimates
are more accurate when GARCH models are utilized.
REFERENCES
ANDJELIC, G.; DJOKOVIC, V. and RADISIC, S. (2010). “Application of VaR in Emerging Markets: A Case of
Selected Central and Eastern European countries” in African Journal of Business Management, 4, pp. 3666-
3680.
ANGELIDIS, T.; BENOS, A. and DEGIANNAKIS, S. (2004). “The Use of GARCH Models in VaR Estimation” in
Statistical Methodology, 1(1), pp. 105-128.
BERKOWITZ, J. and O’BRIEN, J. (2002). “How Accurate Are Value-at-Risk Models at Commercial Banks?” in The
Journal of Finance, 57(3), pp. 1093-1111.
BOLLERSLEV, T. (1986). “Generalized Autoregressive Conditional Heteroskedasticity” in Journal of Econometrics,
31, pp. 307-327.
BROOKS, C. (2008). Introductory Econometrics for Finance (Second edition). Cambridge University Press.
CHOI, P. and MIN, I. (2011). “A Comparison of Conditional and Unconditional Approaches in Value-at-Risk
Estimation" in The Japanese Economic Review, 62(1), pp. 99-115.
DANIELSSON, J. (2002). “The Emperor has no Clothes: Limits to Risk Modelling” in Journal of Banking & Finance,
26, pp. 1273-1296.
DIMITRAKOPOULOS, D.; KAVUSSANOS, M. and SPYROU, S. (2010). “Value at risk models for volatile emerging
markets equity portfolios” in The Quarterly Review of Economics and Finance, 50, pp. 515-526.
ENGLE, R. (1982). “Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of U.K. Inflation”
in Econometrica, 50, pp. 987-1008.
FREY, R. and MICHAUD, P. (1997). “The Effect of GARCH-Type Volatilities on the Prices and Payoff Distributions
of Nonlinear Derivatives Assets – a Simulation Study”. Working Paper, ETH Zurich.
GENÇAY, R.; SELCUK, F. and ULUGULYAGCI, A. (2003). “High volatility, Thick Tails and Extreme Value Theory
in Value-at-Risk Estimation” in Insurance: Mathematics and Economics, 33, pp. 337-356.
GIOT, P. and LAURENT, S. (2003). “Value-at-Risk for Long and Short Trading Positions” in Journal of Applied
Econometrics, 18(6), pp. 641-664.
GREENE, W. (2003). Econometric Analysis (Fifth Edition). Prentice Hall, Pearson Education.
HENDRICKS, D. (1996). “Evaluation of Value-at-Risk Models Using Historical Data” in Economic Policy Review,
2(1), pp. 39-70.
HOPPE, R. (1998). “VaR and the Unreal World” in Risk, 11, pp. 45-50.
HUANG, A. and TSENG, T. (2009). “Forecast of Value at Risk for Equity Indices: An Analysis from Developed and
Emerging Markets” in The Journal of Risk Finance, 10(4), pp. 393-409.
HULL, J.C. (2012). Risk Management & Financial Institutions (Third edition). John Wiley & Sons, Inc.
HULL, J. and WHITE, A. (1998). “Incorporating Volatility Updating into the Historical Simulation Method for Value
at Risk” in Journal of Risk, 1(1), pp. 5-19.
JORION, P. (2007). Value at risk: The new Benchmark for Managing Financial Risk (Third edition). The McGraw-
Hill Companies Inc.
KUPIEC, P.H. (1995). “Techniques for Verifying the Accuracy of Risk Measurement Models” in The Journal of
Derivatives, 3, pp. 73-84.
LUCAS, A. (2000). “A Note on Optimal Estimation from a Risk Management Perspective Under Possibly
Misspecified Tail Behavior” in Journal of Business and Economic Statistics, 18, pp. 31-39.
MAGHYEREH, A. and AL-ZOUBI, H. (2006). “Value-at-risk Under Extreme Values: The Relative Performance in
MENA Emerging Stock Markets” in International Journal of Managerial Finance, 2(2), pp. 154-172.
MILETIC, M. and MILETIC, S. (2015). “Performance of Value at Risk Models in the Midst of the Global Financial
Crisis in Selected CEE Emerging Capital Markets” in Economic Research-Ekonomska Istraživanja, 28(1), pp.
132-166.
MIRON, D. and TUDOR, C. (2010). “Asymmetric Conditional Volatility Models: Empirical Estimation and Comparison
of Forecasting Accuracy” in Romanian Journal of Economic Forecasting, 13(3), pp. 74-92.
MONTERO, J.M.; FERNÁNDEZ-AVILÉS, G. and GARCÍA, M.C. (2010). “Estimation of Asymmetric Stochastic
Volatility Models: Application to Daily Average Price of Energy Products” in International Statistical Review, 11,
pp. 330-347.
NELSON, D. (1991). “Conditional Heteroskedasticity in Asset Returns: A New Approach” in Econometrica, 59, pp.
347-370.
Exhibit 1. Residual Analysis for Each Sub-sample

“MSCI world index”
Sub-sample                      AVG          STDEV
02/11/2011 - 03/03/2015      -0.032536     1.002737
11/01/2012 - 12/05/2015      -0.034637     0.999799
21/03/2012 - 21/07/2015      -0.034492     1.001031
30/05/2012 - 29/09/2015      -0.047173     1.0001
08/08/2012 - 08/12/2015      -0.046767     0.998998
17/10/2012 - 16/02/2016      -0.056566     0.999604
26/12/2012 - 26/04/2016      -0.049559     1.000391
06/03/2013 - 05/07/2016      -0.04869      0.99876
15/05/2013 - 13/09/2016      -0.047147     0.999115
24/07/2013 - 22/11/2016      -0.047752     0.998953
Null hypotheses (all sub-samples): “There is no serial correlation in the squared residuals”: Accepted; “Errors are normally distributed”: Rejected; “There is no ARCH effect”: Accepted.

“MSCI emerging markets index”
Sub-sample                      AVG          STDEV
02/11/2011 - 03/03/2015      -0.02702      1.003626
11/01/2012 - 12/05/2015      -0.020922     1.00362
21/03/2012 - 21/07/2015      -0.025783     1.000915
30/05/2012 - 29/09/2015      -0.038121     1.00054
08/08/2012 - 08/12/2015      -0.036427     0.999717
17/10/2012 - 16/02/2016      -0.045983     0.998672
26/12/2012 - 26/04/2016      -0.035669     1.00219
06/03/2013 - 05/07/2016      -0.035623     0.999113
15/05/2013 - 13/09/2016      -0.0295       1.000936
24/07/2013 - 22/11/2016      -0.02648      1.0002
Null hypotheses (all sub-samples): “There is no serial correlation in the squared residuals”: Accepted; “Errors are normally distributed”: Rejected; “There is no ARCH effect”: Accepted.
Exhibit 2. Residual Analysis for Each Sub-sample

“MSCI world index”
Sub-sample                      AVG          STDEV
02/11/2011 - 03/03/2015       0.002836     1.005313
11/01/2012 - 12/05/2015       0.000146     1.002045
21/03/2012 - 21/07/2015      -0.000917     1.001545
30/05/2012 - 29/09/2015       0.000617     0.999885
08/08/2012 - 08/12/2015      -0.006545     1.000092
17/10/2012 - 16/02/2016      -0.004191     0.998202
26/12/2012 - 26/04/2016       0.002645     1.002761
06/03/2013 - 05/07/2016       0.005644     1.00363
15/05/2013 - 13/09/2016       0.00576      1.002256
24/07/2013 - 22/11/2016       0.000634     1.000868
Null hypotheses (all sub-samples): “There is no serial correlation in the squared residuals”: Accepted; “Errors are normally distributed”: Rejected; “There is no ARCH effect”: Accepted.

“MSCI emerging markets index”
Sub-sample                      AVG          STDEV
02/11/2011 - 03/03/2015       0.014417     1.012125
11/01/2012 - 12/05/2015       0.020225     1.013081
21/03/2012 - 21/07/2015      -0.003619     1.007475
30/05/2012 - 29/09/2015       0.017266     1.02589
08/08/2012 - 08/12/2015       0.010538     1.009677
17/10/2012 - 16/02/2016      -0.0114       1.003296
26/12/2012 - 26/04/2016       0.063195     0.98456
06/03/2013 - 05/07/2016       0.024841     0.994022
15/05/2013 - 13/09/2016       0.021192     1.011055
24/07/2013 - 22/11/2016       0.000926     1.003354
Null hypotheses (all sub-samples): “There is no serial correlation in the squared residuals”: Accepted; “Errors are normally distributed”: Rejected; “There is no ARCH effect”: Accepted.
Note: Exhibits 1 and 2 show the residual analysis of the EGARCH(1,1) model assuming a normal distribution of errors. The calculated
means and standard deviations of the errors are found to be close to zero and one respectively. Moreover, the absence of ARCH
effects is confirmed by applying the heteroskedasticity test, and the serial correlation in the squared residuals is insignificant
under the assumption of normally distributed errors. However, the errors are found not to be normally distributed.