FRM Bionic Turtle T2-Quantitative


Quantitative Analysis

FRM 2011 Study Notes Vol. II

By David Harper, CFA FRM CIPM


www.bionicturtle.com

Table of Contents

Stock, Chapter 2: Review of Probability
Stock, Chapter 2: Review of Statistics
Stock, Chapter 4: Linear Regression with one regressor
Stock, Chapter 5: Single Regression: Hypothesis Tests
Stock, Chapter 6: Linear Regression with Multiple Regressors
Stock, Chapter 7: Hypothesis Tests and Confidence Intervals in Multiple Regression
Rachev, Menn, and Fabozzi, Chapter 2: Discrete Probability Distributions
Rachev, Menn, and Fabozzi, Chapter 3: Continuous Probability Distributions
Jorion, Chapter 12: Monte Carlo Methods
Hull, Chapter 21: Estimating Volatilities and Correlations
Allen, Boudoukh, and Saunders, Chapter 2: Quantifying Volatility in VaR Models

Stock, Chapter 2:

Review of Probability
In this chapter

Define random variables, and distinguish between continuous and discrete random
variables.
Define the probability of an event.
Define, calculate, and interpret the mean, standard deviation, and variance of a
random variable.
Define, calculate, and interpret the skewness, and kurtosis of a distribution.
Describe joint, marginal, and conditional probability functions.
Explain the difference between statistical independence and statistical
dependence.
Calculate the mean and variance of sums of random variables.
Describe the key properties of the normal, standard normal, multivariate normal,
Chi-squared, Student t, and
F distributions.
Define and describe random sampling and what is meant by i.i.d.
Define, calculate, and interpret the mean and variance of the sample average.
Describe, interpret, and apply the Law of Large Numbers and the Central Limit
Theorem.


Define random variables, and distinguish between continuous and


discrete random variables.
We characterize (describe) a random variable with a probability distribution. The random
variable can be discrete or continuous; and in either the discrete or continuous case, the
probability can be local (PMF, PDF) or cumulative (CDF).
A random variable is a variable whose value is determined by the outcome of an
experiment (a.k.a., stochastic variable)

Probability function (pdf, pmf):
  Continuous: Pr(c1 <= Z <= c2) = F(c2) - F(c1)
  Discrete: Pr(X = 3)

Cumulative distribution function (CDF):
  Continuous: Pr(Z <= c) = F(c)
  Discrete: Pr(X <= 3)

Continuous random variable


A continuous random variable (X) has an infinite number of values within an interval:

P(a \le X \le b) = \int_a^b f(x)\,dx


Discrete random variable


A discrete random variable (X) assumes a value among a finite set including x1, x2, x3 and so
on. The probability function is expressed by:

P(X = x_k) = f(x_k)

Notes on continuous versus discrete random variables

Discrete random variables can be counted. Continuous random variables must be


measured.

Examples of a discrete random variable include: coin toss (head or tails, nothing in
between); roll of the dice (1, 2, 3, 4, 5, 6); and did the fund beat the benchmark?(yes,
no). In risk, common discrete random variables are default/no default (0/1) and loss
frequency.

Examples of continuous random variables include: distance and time. A common


example of a continuous variable, in risk, is loss severity.

Note the similarity between the summation (\Sigma) under the discrete variable and the
integral (\int) under the continuous variable. The summation (\Sigma) of all discrete outcomes
must equal one. Similarly, the integral (\int) captures the area under the continuous
distribution function. The total area under this curve, from (-\infty) to (+\infty), must equal one.

All four of the so-called sampling distributions, each of which converges to the
normal, are continuous: normal, Student's t, chi-square, and F distribution.


Summary

Discrete: are counted; finite.
Continuous: are measured; infinite.

Examples in finance:
  Discrete: default (1/0), frequency of loss
  Continuous: distance, time, severity of loss, asset returns

Example distributions:
  Discrete: Bernoulli (0/1), Binomial (series of i.i.d. Bernoullis), Poisson, Logarithmic
  Continuous: the sampling distributions (normal, Student's t, chi-square, F distribution),
  lognormal, exponential, gamma, beta, EVT distributions (GPD, GEV)

Define the probability of an event.


Probability: Classical or a priori definition
The probability of outcome (A) is given by:

P(A) = \frac{\text{Number of outcomes favorable to } A}{\text{Total number of outcomes}}

For example, consider a craps roll of two six-sided dice. What is the probability of rolling a
seven; i.e., P[X=7]? There are six outcomes that generate a roll of seven: 1+6, 2+5, 3+4, 4+3, 5+2,
and 6+1. Further, there are 36 total outcomes. Therefore, the probability is 6/36.
In this case, the outcomes need to be mutually exclusive, equally likely, and
collectively exhaustive (i.e., all possible outcomes included in the total). A key property
of a probability is that the sum of the probabilities for all (discrete) outcomes is 1.0.


Probability: Relative frequency or empirical definition


Relative frequency is based on an actual number of historical observations (or Monte Carlo
simulations). For example, here is a simulation (produced in Excel) of one hundred (100) rolls of
a single six-sided die:

Empirical Distribution

Roll     Freq.    %
1        11       11%
2        17       17%
3        18       18%
4        21       21%
5        18       18%
6        15       15%
Total    100      100%

Note the difference between an a priori probability and an empirical probability:

The a priori (classical) probability of rolling a three (3) is 1/6,

But the empirical frequency, based on this sample, is 18%. If we generate another
sample, we will produce a different empirical frequency.

This relates also to sampling variation. The a priori probability is based on population
properties; in this case, the a priori probability of rolling any number is clearly 1/6th.
However, a sample of 100 trials will exhibit sampling variation: the number of threes (3s)
rolled above varies from the parametric probability of 1/6th. We do not expect the
sample to produce 1/6th perfectly for each outcome.
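To make the sampling variation concrete, here is a minimal sketch in Python with numpy (neither is part of these Excel-based notes) that replicates the experiment above: it simulates 100 rolls and compares each face's empirical frequency to the a priori 1/6. The seed is arbitrary; a different seed yields a different empirical distribution, which is exactly the point.

    import numpy as np

    # Simulate 100 rolls of a fair six-sided die (a priori probability of each face = 1/6).
    rng = np.random.default_rng(seed=42)      # arbitrary seed, for reproducibility only
    rolls = rng.integers(1, 7, size=100)      # uniform draws from {1, ..., 6}

    for face in range(1, 7):
        empirical = np.mean(rolls == face)    # relative frequency in this sample
        print(f"Face {face}: empirical {empirical:.0%} vs. a priori {1/6:.1%}")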


Define, calculate, and interpret the mean, standard deviation, and


variance of a random variable.
If we can characterize a random variable (e.g., if we know all outcomes and that each outcome is
equally likely, as is the case when you roll a single die), the expectation of the random
variable is often called the mean or arithmetic mean.

Mean (expected value)


Expected value is the weighted average of possible values. In the case of a discrete random
variable, expected value is given by:
E(X) = y_1 p_1 + y_2 p_2 + \cdots + y_k p_k = \sum_{i=1}^{k} y_i p_i

In the case of a continuous random variable, expected value is given by:

E(X) = \int x f(x)\,dx
Variance
Variance and standard deviation are the second moment measures of dispersion. The variance of
a discrete random variable Y is given by:
\sigma_Y^2 = \text{variance}(Y) = E[(Y - \mu_Y)^2] = \sum_{i=1}^{k} (y_i - \mu_Y)^2 p_i

Variance is also expressed as the difference between the expected value of X^2 and the square
of the expected value of X. This is the more useful variance formula:

\sigma_Y^2 = E[(Y - \mu_Y)^2] = E(Y^2) - [E(Y)]^2


Please memorize the variance formula above: it comes in handy! For example, if the
probability of loan default (PD) is a Bernoulli trial, what is the variance of PD?
We can solve with E[PD^2] - (E[PD])^2.
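In the Bernoulli case, this formula produces a closed form. Since a 0/1 variable satisfies PD^2 = PD, a short derivation (a standard result, shown here for convenience rather than taken from the original notes) is:

    \sigma^2_{PD} = E[PD^2] - (E[PD])^2 = p - p^2 = p(1 - p)

where p is the probability of default.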


Example: Variance of a single six-sided die


For example, what is the variance of a single six-sided die? First, we need to solve for the
expected value of X-squared, E[X2]. This is given by:

E[X^2] = \frac{1}{6}(1^2) + \frac{1}{6}(2^2) + \frac{1}{6}(3^2) + \frac{1}{6}(4^2) + \frac{1}{6}(5^2) + \frac{1}{6}(6^2) = \frac{91}{6}
Then, we need to square the expected value of X, [E(X)]2. The expected value of a single six-sided
die is 3.5 (the average outcome). So, the variance of a single six-sided die is given by:

\text{Variance}(X) = E(X^2) - [E(X)]^2 = \frac{91}{6} - (3.5)^2 \approx 2.92

Here is the same derivation of the variance of a single six-sided die (which has a uniform
distribution) in tabular format:

What is the variance of the total of two six-sided dice cast together? It is simply the
Variance(X) plus the Variance(Y), or about 5.83. The reason we can simply add them
together is that they are independent random variables.
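As a quick check of both the E(X^2) - [E(X)]^2 formula and the additivity claim, here is a minimal sketch in Python (exact fractions instead of Excel, purely illustrative):

    from fractions import Fraction

    # Variance of a fair six-sided die via var(X) = E[X^2] - (E[X])^2.
    p = Fraction(1, 6)                          # each face equally likely
    outcomes = range(1, 7)

    e_x = sum(p * x for x in outcomes)          # E[X]   = 3.5
    e_x2 = sum(p * x * x for x in outcomes)     # E[X^2] = 91/6
    var_one_die = e_x2 - e_x ** 2               # 35/12, about 2.92

    # Two independent dice: the variances simply add.
    var_two_dice = 2 * var_one_die              # 35/6, about 5.83
    print(float(var_one_die), float(var_two_dice))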

Sample Variance:
The unbiased estimate of the sample variance is given by:

s_x^2 = \frac{1}{k-1}\sum_{i=1}^{k} (y_i - \bar{Y})^2


Properties of variance

1. \sigma^2_{\text{constant}} = 0

2a. \sigma^2_{X+Y} = \sigma^2_X + \sigma^2_Y only if independent

2b. \sigma^2_{X-Y} = \sigma^2_X + \sigma^2_Y only if independent

3. \sigma^2_{X+b} = \sigma^2_X

4. \sigma^2_{aX} = a^2\,\sigma^2_X

5. \sigma^2_{aX+b} = a^2\,\sigma^2_X

6. \sigma^2_{aX+bY} = a^2\,\sigma^2_X + b^2\,\sigma^2_Y only if independent

7. \sigma^2_X = E(X^2) - [E(X)]^2

Standard deviation:
Standard deviation is given by:
\sigma_Y = \sqrt{\text{var}(Y)} = \sqrt{E[(Y - \mu_Y)^2]} = \sqrt{\sum_i (y_i - \mu_Y)^2\,p_i}

As variance = standard deviation^2, standard deviation = Square Root[variance]

Sample Standard Deviation:


The unbiased estimate of the sample standard deviation is given by:

s_X = \sqrt{\frac{1}{k-1}\sum_{i=1}^{k} (y_i - \bar{Y})^2}

This is merely the square root of the sample variance. This formula is important because
this is the technically precise way to calculate volatility.


Define, calculate, and interpret the skewness, and kurtosis of a


distribution.
Skewness (asymmetry)
Skewness refers to whether a distribution is symmetrical. An asymmetrical distribution is
skewed, either positively (to the right) or negatively (to the left) skewed. The measure of relative
skewness is given by the equation below, where zero indicates symmetry (no skewness):

\text{Skewness} = \frac{E[(X - \mu)^3]}{\sigma^3}

For example, the gamma distribution has positive skew (skew > 0):

[Figure: Gamma distribution PDFs illustrating positive (right) skew, plotted for (alpha = 1, beta = 1), (alpha = 2, beta = 0.5), and (alpha = 4, beta = 0.25).]

Skewness is a measure of asymmetry


If a distribution is symmetrical, mean = median = mode. If a distribution has positive
skew, the mean > median > mode. If a distribution has negative skew, the mean <
median < mode.

Kurtosis
Kurtosis measures the degree of peakedness of the distribution, and consequently the
heaviness of the tails. A value of three (3) indicates normal peakedness. The normal
distribution has kurtosis of 3, such that excess kurtosis equals (kurtosis - 3).

\text{Kurtosis} = \frac{E[(X - \mu)^4]}{\sigma^4}

Note that technically skew and kurtosis are not, respectively, equal to the third and fourth
moments; rather they are functions of the third and fourth moments.


A normal distribution has relative skewness of zero and kurtosis of three (or the same
idea put another way: excess kurtosis of zero). Relative skewness > 0 indicates positive
skewness (a longer right tail) and relative skewness < 0 indicates negative skewness (a
longer left tail). Kurtosis greater than three (>3), which is the same thing as saying
excess kurtosis > 0, indicates high peaks and fat tails (leptokurtic). Kurtosis less than
three (<3), which is the same thing as saying excess kurtosis < 0, indicates lower peaks.
Kurtosis is a measure of tail weight (heavy, normal, or light-tailed) and peakedness:
kurtosis > 3.0 (or excess kurtosis > 0) implies heavy-tails.
Financial asset returns are typically considered leptokurtic (i.e., heavy or fat- tailed)
For example, the logistic distribution exhibits leptokurtosis (heavy-tails; kurtosis > 3.0):

[Figure: Logistic distribution PDFs exhibiting heavy tails (excess kurtosis > 0), plotted for (alpha = 0, beta = 1), (alpha = 2, beta = 1), and (alpha = 0, beta = 3), compared against N(0,1).]

Univariate versus multivariate probability density functions


A single-variable (univariate) probability distribution is concerned with only a single random
variable; e.g., the roll of a die, the default of a single obligor. A multivariate probability density
function concerns the outcome of an experiment with more than one random variable. This
includes, in the simplest case, two variables (i.e., a bivariate distribution).


             Univariate              Bivariate
Density      f(x) = P(X = x)         f(x,y) = P(X = x, Y = y)
Cumulative   F(x) = P(X <= x)        F(x,y) = P(X <= x, Y <= y)

Describe joint, marginal, and conditional probability functions.


Stock & Watson illustrate with two variables:

The age of the computer (A), a Bernoulli such that the computer is old (0) or new (1)

The number of times the computer crashes (M)

Marginal probability functions


A marginal (or unconditional) probability is the simple case: it is the probability that does
not depend on a prior event or prior information. The marginal probability is also called the
unconditional probability. It is just another name for the probability distribution (Stock).
\Pr(Y = y) = \sum_{i=1}^{l} \Pr(X = x_i, Y = y)

For example, Pr(A = 1) = 0.50.

Joint distribution of computer age (A: old = 0, new = 1) and number of crashes (M):

             M=0     M=1     M=2     M=3     M=4     Total
Old (A=0)    0.35    0.065   0.05    0.025   0.01    0.50
New (A=1)    0.45    0.035   0.01    0.005   0.00    0.50
Total        0.80    0.10    0.06    0.03    0.01    1.00

Joint probability functions


The joint probability is the probability that the random variables (in this case, both random
variables) take on certain values simultaneously.

\Pr(X = x, Y = y)

For example, Pr(A = 0, M = 0) = 0.35 (see the joint distribution table above).


Conditional probability functions


The conditional probability is the probability of an outcome given (i.e., conditional on) another
outcome.

\Pr(Y = y \mid X = x) = \frac{\Pr(X = x, Y = y)}{\Pr(X = x)}

For example, Pr(M = 0 | A = 0) = 0.35 / 0.50 = 0.70.

(Again, refer to the joint distribution table above.)

Conditional probability = Joint Probability/Marginal Probability


What is the probability of B occurring, given that A has already occurred?

P(B \mid A) = \frac{P(A \cap B)}{P(A)}, \qquad \text{equivalently} \qquad P(A)\,P(B \mid A) = P(A \cap B)

Conditional and unconditional expectation


An unconditional expectation is the expected value of the variable without any restrictions (or
lacking any prior information).
A conditional expectation is an expected value for the variable conditional on prior
information or some restriction (e.g., the value of a correlated variable). The conditional
expectation of Y, conditional on X = x, is given by:

E(Y \mid X = x)
The conditional variance of Y, conditional on X=x, is given by:

\text{var}(Y \mid X = x)
The two-variable regression is an important conditional expectation. In this case, we say
the expected Y is conditional on X:

E(Y \mid X_i) = B_1 + B_2 X_i

For Example: Joint Distributions


For example, consider two stocks. Assume that both Stock (S) and Stock (T) can each only reach
three price levels. Stock (S) can achieve: $10, $15, or $20. Stock (T) can achieve: $15, $20, or $30.
Historically, assume we witnessed 26 outcomes and they were distributed as follows.
Note S is one of {$10, $15, $20} and T is one of {$15, $20, $30}:

           T=$15    T=$20    T=$30    Total
S=$10      0        3        3        6
S=$15      2        4        6        12
S=$20      2        3        3        8
Total      4        10       12       26

Example: marginal (unconditional) probability


The unconditional probability of the outcome where S=$20 = 8/26 because there are eight
events out of 26 total events that produce S=$20. The unconditional probability P(S=20) = 8/26

Example: Joint probability


A joint probability is the probability that both random variables will have a certain outcome.
Here the joint probability P(S=$20, T=$30) = 3/26.

Example: Conditional probability


Instead we can ask a conditional probability question: What is the probability that S=$20 given
that T=$20? The probability that S=$20 conditional on the knowledge that T=$20 is 3/10
because among the 10 events that produce T=$20, three are S=$20.

P(S=\$20 \mid T=\$20) = \frac{P(S=\$20,\,T=\$20)}{P(T=\$20)} = \frac{3}{10}

In summary:

The unconditional probability P(S=20) = 8/26

The conditional probability P(S=20 | T=20) = 3/10

The joint probability P(S=20,T=30) = 3/26
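All three quantities can be read straight off the count table. Here is a minimal sketch in Python (numpy is an assumption; the notes themselves work in Excel) that reproduces them:

    import numpy as np

    # Historical counts of the 26 outcomes: rows are S = $10, $15, $20; columns are T = $15, $20, $30.
    counts = np.array([[0, 3, 3],
                       [2, 4, 6],
                       [2, 3, 3]])
    total = counts.sum()                                   # 26 observed events

    p_s20_marginal  = counts[2, :].sum() / total           # unconditional P(S=20) = 8/26
    p_s20_t30_joint = counts[2, 2] / total                 # joint P(S=20, T=30)   = 3/26
    p_s20_given_t20 = counts[2, 1] / counts[:, 1].sum()    # conditional P(S=20 | T=20) = 3/10
    print(p_s20_marginal, p_s20_t30_joint, p_s20_given_t20)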


Explain the difference between statistical independence and statistical


dependence.
X and Y are independent if the conditional distribution of Y given X equals the marginal
distribution of Y. Since independence implies Pr(Y=y | X=x) = Pr(Y=y):

\Pr(Y = y \mid X = x) = \frac{\Pr(X = x, Y = y)}{\Pr(X = x)} = \Pr(Y = y)

The most useful test of statistical independence is given by:

\Pr(X = x, Y = y) = \Pr(X = x)\,\Pr(Y = y)


X and Y are independent if their joint distribution is equal to the product of their
marginal distributions.
Statistical independence is when the value taken by one variable has no effect on the value
taken by the other variable. If the variables are independent, their joint probability will equal
the product of their marginal probabilities. If they are not independent, they are dependent.
For example, when rolling two dice, the second will be independent of the first.

This independence implies that the probability of rolling double-sixes is equal to the product
of P(rolling one six) and P(rolling one six). If two dice are independent, then P(first roll = 6,
second roll = 6) = P(rolling a six) × P(rolling a six). And, indeed: 1/36 = (1/6) × (1/6).


Calculate the mean and variance of sums of random variables.


Mean

E(a + bX + cY) = a + b\mu_X + c\mu_Y
Variance
In regard to the sum of correlated variables, the variance of correlated variables is given by the
following (note the two expressions; the second merely substitutes the covariance with the
product of correlation and volatilities. Please make sure you are comfortable with this
substitution).

\sigma^2_{X+Y} = \sigma^2_X + \sigma^2_Y + 2\sigma_{XY}, \quad \text{and given that } \sigma_{XY} = \rho_{XY}\,\sigma_X\,\sigma_Y:

\sigma^2_{X+Y} = \sigma^2_X + \sigma^2_Y + 2\rho\,\sigma_X\,\sigma_Y

In regard to the difference between correlated variables, the variance is given by:

\sigma^2_{X-Y} = \sigma^2_X + \sigma^2_Y - 2\sigma_{XY} = \sigma^2_X + \sigma^2_Y - 2\rho\,\sigma_X\,\sigma_Y
Variance with constants (a) and (b)
Variance of sum includes covariance (X,Y):

\text{variance}(aX + bY) = a^2\sigma^2_X + 2ab\,\sigma_{XY} + b^2\sigma^2_Y

If X and Y are independent, the covariance term drops out and this simplifies to:

\text{variance}(X + Y) = \sigma^2_X + \sigma^2_Y
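The formula can be sanity-checked by simulation. The sketch below (Python/numpy, with made-up volatilities, correlation, and weights; none of these numbers come from the notes) compares the analytic variance of aX + bY against the variance of a large simulated sample:

    import numpy as np

    # Check var(aX + bY) = a^2*var(X) + 2ab*cov(X,Y) + b^2*var(Y) for correlated X, Y.
    sigma_x, sigma_y, rho = 0.20, 0.10, 0.30        # hypothetical volatilities and correlation
    a, b = 0.6, 0.4                                 # hypothetical constants (e.g., weights)

    cov_xy = rho * sigma_x * sigma_y
    analytic = a**2 * sigma_x**2 + 2*a*b*cov_xy + b**2 * sigma_y**2

    rng = np.random.default_rng(1)                  # arbitrary seed
    cov_matrix = [[sigma_x**2, cov_xy], [cov_xy, sigma_y**2]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=500_000).T
    simulated = np.var(a * x + b * y)
    print(analytic, simulated)                      # the two should agree closely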


Describe the key properties of the normal, standard normal, multivariate


normal, Chi-squared, Student t, and F distributions.
Normal distribution

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

[Figure: the familiar bell-shaped normal density.]

Key properties of the normal:

Symmetrical around mean; skew = 0

Parsimony: Only requires (is fully described by) two parameters: mean and variance

Summation stability: a linear combination (function) of two normally distributed random


variables is itself normally distributed

Kurtosis = 3 (excess kurtosis = 0)

The normal distribution is commonplace for at least three reasons:

The central limit theorem (CLT) says that the sampling distribution of sample means tends
to be normal (i.e., converges toward a normally shaped distribution) regardless of the
shape of the underlying distribution; this explains much of the popularity of the normal
distribution.

The normal is economical (elegant) because it only requires two parameters (mean
and variance). The standard normal is even more economical: it requires no
parameters.

The normal is tractable: it is easy to manipulate (especially in regard to closed-form


equations like the Black-Scholes)


Standard normal distribution


A normal distribution is fully specified by two parameters, mean and variance (or standard
deviation). We can transform a normal into a unit or standardized variable:

z = \frac{X - \mu}{\sigma}

Standard normal has mean = 0 and variance = 1.

No parameters required!

This unit or standardized variable is normally distributed with zero mean and variance of
one (1.0). Its standard deviation is also one (variance = 1.0 and standard deviation = 1.0). This is
written as: variable Z is approximately (asymptotically) normally distributed: Z ~ N(0,1).

Standard normal distribution: Critical Z values:


Key locations on the normal distribution are noted below. In the FRM curriculum, the choice of
one-tailed 5% significance and 1% significance (i.e., 95% and 99% confidence) is common, so
please pay particular attention to the yellow highlights:

% of all (two-tailed)    % to the left (one-tailed)    Critical value
~68%                     ~34%                          1.0
~90%                     ~5.0%                         1.645 (~1.65)
~95%                     ~2.5%                         1.96
~98%                     ~1.0%                         2.327 (~2.33)
~99%                     ~0.5%                         2.58

Memorize two common critical values: 1.65 and 2.33. These correspond to confidence
levels, respectively, of 95% and 99% for a one-tailed test. For VAR, the one-tailed test is
relevant because we are concerned only about losses (left-tail) not gains (right-tail).

Multivariate normal distributions


The normal can be generalized to a joint distribution
of normals; e.g., the bivariate normal distribution.
Properties include:
1. If X and Y are bivariate normal, then aX + bY is normal;

any linear combination is normal


2. If a set of variables has a multivariate normal distribution,

the marginal distribution of each is normal


3. If variables with a multivariate normal distribution have covariances that equal zero,

then the variables are independent


Chi-squared distribution

[Figure: Chi-square densities for k = 2, k = 5, and k = 29 degrees of freedom.]
For the chi-square distribution, we observe a sample variance and compare to hypothetical
population variance. This variable has a chi-square distribution with (n-1) d.f.:

\chi^2 = \frac{s^2}{\sigma^2}(n - 1) \sim \chi^2_{n-1}

Chi-squared distribution is the sum of m squared independent standard normal random
variables. Properties of the chi-squared distribution include:

Nonnegative (>0)

Skewed right, but as d.f. increases it approaches normal

Expected value (mean) = k, where k = degrees of freedom

Variance = 2k, where k = degrees of freedom

The sum of two independent chi-square variables is also a chi-squared variable

Chi-squared distribution: For example (Google's stock return variance)

Google's sample variance over 30 days is 0.0263%. We can test the hypothesis that the
population variance (Google's "true" variance) is 0.02%. The chi-square variable = 38.14:

Sample variance (30 days)    0.0263%
Degrees of freedom (d.f.)    29
Population variance?         0.0200%
Chi-square variable          38.14      = 0.0263% / 0.02% * 29
=CHIDIST() = p value         11.93%
Area under curve (1 - p)     88.07%     @ 29 d.f., Pr[0.10] = 39.0875

With 29 degrees of freedom (d.f.), 38.14 corresponds to a p-value slightly greater than 10% (it lies
to the left of the 10% critical value, 39.09, on the lookup table). Therefore, we can reject the null
with only 88% confidence; i.e., at conventional confidence levels we fail to reject, and we accept
that the true variance may be 0.02%.
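If you prefer code to the lookup table, the same test can be sketched in Python with scipy (an assumption; the notes use Excel's CHIDIST). The inputs are the Google figures from the table above:

    from scipy.stats import chi2

    sample_var, hypothesized_var, n = 0.000263, 0.0002, 30   # 0.0263%, 0.02%, 30 days
    df = n - 1

    chi_square_stat = sample_var / hypothesized_var * df     # about 38.14
    p_value = chi2.sf(chi_square_stat, df)                    # right-tail area, about 11.9%
    print(chi_square_stat, p_value)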


Student's t distribution

[Figure: Student's t density versus the normal, for 2 and 20 degrees of freedom.]

The Student's t distribution (t distribution) is among the most commonly used distributions. As
the degrees of freedom (d.f.) increase, the t-distribution converges with the normal
distribution. It is similar to the normal, except it exhibits slightly heavier tails (the lower the d.f.,
the heavier the tails). The Student's t variable is given by:

t = \frac{\bar{X} - \mu}{s_X / \sqrt{n}}

Properties of the t-distribution:

Like the normal, it is symmetrical

Like the standard normal, it has mean of zero (mean = 0)

Its variance = k/(k-2) where k = degrees of freedom. Note, as k increases, the variance
approaches 1.0. Therefore, as k increases, the t-distribution approximates the
standard normal distribution.

Always slightly heavy-tailed (kurtosis > 3.0), but converges to the normal. The Student's t is
not, however, considered a truly heavy-tailed distribution.

In practice, the Student's t is the most commonly used distribution. When we test the
significance of regression coefficients, the central limit theorem (CLT) justifies the
normal distribution (because the coefficients are effectively sample means). But we
rarely know the population variance, such that the Student's t is the appropriate
distribution.
When the d.f. is large (e.g., a sample over ~30), as the Student's t approximates the
normal, we can use the normal as a proxy. In the assigned Stock & Watson, the sample
sizes are large (e.g., 420 students), so they tend to use the normal.


Student's t distribution: For example

For example, Google's average periodic return over a ten-day sample period was +0.02% with a
sample standard deviation of 1.54%. Here are the statistics:

Sample Mean                     0.02%
Sample Std Dev                  1.54%
Days (n)                        10
Confidence                      95%
Significance (1 - confidence)   5%
Critical t                      2.262
Lower limit                     -1.08%
Upper limit                     1.12%

The sample mean is a random variable. If we know the population variance, we assume the
sample mean is normally distributed. But if we do not know the population variance (typically
the case!), the sample mean is a random variable following a students t distribution.
In the Google example above, we can use this to construct a confidence (random) interval:

\bar{X} \pm t\,\frac{s}{\sqrt{n}}

We need the critical (lookup) t value. The critical t value is a function of:

Degrees of freedom (d.f.); e.g., 10-1 =9 in this example, and

Significance; e.g., 1-95% confidence = 5% in this example

The 95% confidence interval can be computed. The upper limit is given by:

\bar{X} + (2.262)\,\frac{1.54\%}{\sqrt{10}} \approx +1.12\%

And the lower limit is given by:

\bar{X} - (2.262)\,\frac{1.54\%}{\sqrt{10}} \approx -1.08\%

Please make sure you can take a sample standard deviation, compute the critical t value
and construct the confidence interval.
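The interval above can also be reproduced with a short Python/scipy sketch (scipy is an assumption; the original works from the lookup table):

    from math import sqrt
    from scipy.stats import t

    mean, s, n = 0.0002, 0.0154, 10          # +0.02% mean, 1.54% std dev, 10 days
    t_crit = t.ppf(0.975, df=n - 1)          # about 2.262 (95% two-tailed, 9 d.f.)

    half_width = t_crit * s / sqrt(n)
    print(mean - half_width, mean + half_width)   # roughly -1.08% and +1.12%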


Both the normal (Z) and Student's t (t) distributions characterize the sampling distribution of
the sample mean. The difference is that the normal is used when we know the population
variance; the Student's t is used when we must rely on the sample variance. In practice, we don't
know the population variance, so the Student's t is typically appropriate.

Z = \frac{\bar{X} - \mu}{\sigma_{\bar{X}}}, \qquad t = \frac{\bar{X} - \mu}{S_X / \sqrt{n}}

F-Distribution

[Figure: F distribution densities for (19, 19) and (9, 9) degrees of freedom.]

The F distribution is also called the variance ratio distribution (it may be helpful to think of it as
the variance ratio!). The F ratio is the ratio of sample variances, with the greater sample variance
in the numerator:

F = \frac{s_x^2}{s_y^2}
Properties of F distribution:

Nonnegative (>0)

Skewed right

Like the chi-square distribution, as d.f. increases, approaches normal

The square of t-distributed r.v. with k d.f. has an F distribution with 1,k d.f.

As n increases, m × F(m,n) approaches the chi-square distribution with m d.f.: m \cdot F(m,n) \sim \chi^2_m

www.bionicturtle.com

FRM 2011 QUANTITATIVE ANALYSIS 22

F-Distribution: For example


For example, based on two 10-day samples, we calculated the sample variances of Google and
Yahoo. Google's variance was 0.0237% and Yahoo's was 0.0084%. The F ratio, therefore, is 2.82
(divide the higher variance by the lower variance; the F ratio must be greater than, or equal to, 1.0).

              GOOG        YHOO
=VAR()        0.0237%     0.0084%
=COUNT()      10          10
F ratio       2.82
Confidence    90%
Significance  10%
=FINV()       2.44

At 10% significance, with (10-1) and (10-1) degrees of freedom, the critical F value is 2.44.
Because our F ratio of 2.82 is greater than (>) 2.44, we reject the null (i.e., that the population
variances are the same). We conclude the population variances are different.
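A minimal Python/scipy sketch of the same variance-ratio test (scipy stands in for Excel's FINV; the inputs are the sample variances above):

    from scipy.stats import f

    var_goog, var_yhoo, n = 0.000237, 0.000084, 10                # two 10-day sample variances
    f_ratio = max(var_goog, var_yhoo) / min(var_goog, var_yhoo)   # about 2.82

    f_crit = f.ppf(0.90, dfn=n - 1, dfd=n - 1)                    # about 2.44 at 10% significance
    p_value = f.sf(f_ratio, dfn=n - 1, dfd=n - 1)                 # right-tail probability
    print(f_ratio, f_crit, p_value)                               # reject the null if f_ratio > f_crit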

Moments of a distribution
The k-th moment about the mean (\mu) is given by:

k\text{-th moment} = \frac{\sum_{i=1}^{n} (x_i - \mu)^k}{n}

In this way, the difference of each data point from the mean is raised to a power (k = 1, 2, 3,
and 4). These are the four moments of the distribution:

If k = 1, it refers to the first moment (about zero): the mean.

If k = 2, it refers to the second moment about the mean: the variance.

If k = 3, it refers to the third moment about the mean: skewness.

If k = 4, it refers to the fourth moment about the mean: tail density and peakedness (kurtosis).


Define and describe random sampling and what is meant by i.i.d.


A random sample is a sample of random variables that are independent and identically
distributed (i.i.d.)

Independent: not (auto-) correlated.
Identical: same mean, same variance (homoskedastic).

Independent and identically distributed (i.i.d.) variables:

Each random variable has the same (identical) probability distribution (PDF/PMF, CDF).

Each random variable is drawn independently of the others; no serial correlation or autocorrelation.

The concept of independent and identically distributed (i.i.d.) variables is a key
assumption we often encounter: to scale volatility by the square root of time requires
i.i.d. returns. If returns are not i.i.d., then scaling volatility by the square root of time
will give an incorrect answer.
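Here is a minimal sketch of that square-root-of-time rule, assuming i.i.d. returns and a hypothetical 1% daily volatility (both numbers are illustrative, not from the notes):

    from math import sqrt

    daily_vol = 0.01                 # hypothetical 1% daily volatility
    trading_days = 252               # common annualization convention

    # Valid only if returns are i.i.d.; otherwise this scaling is wrong.
    annual_vol = daily_vol * sqrt(trading_days)   # about 15.9%
    print(annual_vol)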

Define, calculate, and interpret the mean and variance of the sample
average.
The expected value of the sample mean is given by:

E(\bar{Y}) = \frac{1}{n}\sum_{i=1}^{n} E(Y_i) = \mu_Y
The variance of the sample mean is given by:

\text{variance}(\bar{Y}) = \sigma^2_{\bar{Y}} = \frac{\sigma_Y^2}{n}, \qquad \text{Std Dev}(\bar{Y}) = \sigma_{\bar{Y}} = \frac{\sigma_Y}{\sqrt{n}}


We expect the sample mean to equal the population mean


The sample mean is denoted by \bar{Y}. The expected value of the sample mean is, as you might
expect, the population mean:

E(\bar{Y}) = \mu_{\bar{Y}} = \mu_Y

This formula says: we expect the average of our sample will equal the average of the
population (the over-bar signifies the sample mean; the Greek mu signifies the population mean).

Sampling distribution of the sample mean


If either: (i) the population is infinite and random sampling, or (ii) finite population and
sampling with replacement, the variance of the sampling distribution of means is:

\sigma^2_{\bar{Y}} = E[(\bar{Y} - \mu_Y)^2] = \frac{\sigma_Y^2}{n}

This says: the variance of the sample mean is equal to the population variance divided by the
sample size. For example, the (population) variance of a single six-sided die is 2.92. If we roll
three dice (i.e., sampling with replacement), then the variance of the sampling distribution =
2.92 / 3 ≈ 0.97.
If the population is of size (N), if the sample size n \le N, and if sampling is conducted without
replacement, then the variance of the sampling distribution of means is given by:

\sigma^2_{\bar{Y}} = \frac{\sigma_Y^2}{n} \cdot \frac{N - n}{N - 1}

Standard error is the standard deviation of the sample mean


The standard error is the standard deviation of the sampling distribution of the estimator,
and the sampling distribution of an estimator is a probability (frequency distribution) of the
estimator (i.e., a distribution of the set of values of the estimator obtained from all possible
same-size samples from a given population). For a sample mean (per the central limit theorem!),
the variance of the estimator is the population variance divided by sample size. The
standard error is the square root of this variance; the standard error is a standard deviation:

se = \sqrt{\frac{\sigma_Y^2}{n}} = \frac{\sigma_Y}{\sqrt{n}}


If the population is distributed with mean \mu and variance \sigma^2 but the distribution is not a
normal distribution, then the standardized variable given by Z below is asymptotically
normal; i.e., as (n) approaches infinity the distribution becomes normal.

Z = \frac{\bar{Y} - \mu_Y}{se} \sim N(0,1), \qquad se = \frac{\sigma_Y}{\sqrt{n}}
The denominator is the standard error: which is simply the name for the standard
deviation of sampling distribution.

Describe, interpret, and apply the Law of Large Numbers and the Central
Limit Theorem.
In brief:

Law of large numbers: under general conditions, the sample mean (\bar{Y}) will be near the
population mean.

Central limit theorem (CLT): As the sample size increases, regardless of the underlying
distribution, the sampling distributions approximates (tends toward) normal

Central limit theorem (CLT)


We assume a population with a known mean and finite variance, but not necessarily a normal
distribution (we may not know the distribution!). Random samples of size (n) are then
drawn from the population. The expected value of each random variable is the population's
mean. Further, the variance of each random variable is equal to the population's variance divided
by n (note: this is equivalent to saying the standard deviation of each random variable is equal to
the population's standard deviation divided by the square root of n).
The central limit theorem says that this random variable (i.e., of sample size n, drawn from the
population) is itself normally distributed, regardless of the shape of the underlying
population. Given a population described by any probability distribution having mean (\mu) and
finite variance (\sigma^2), the distribution of the sample mean computed from samples (where each
sample equals size n) will be approximately normal. Generally, if the size of the sample is at least
30 (n \ge 30), then we can assume the sample mean is approximately normal.


[Diagram: the individual draws need not be normal, but the sample mean (and sum) follows a normal distribution, provided the variance is finite.]

Each sample has a sample mean. There are many sample means. The sample means have
variation: a sampling distribution. The central limit theorem (CLT) says the sampling
distribution of sample means is asymptotically normal.

Summary of central limit theorem (CLT):

We assume a population with a known mean and finite variance, but not necessarily a
normal distribution.

Random samples (size n) drawn from the population.

The expected value of each random variable is the population mean

The distribution of the sample mean computed from samples (where each sample equals
size n) will be approximately (asymptotically) normal.

The variance of each random variable is equal to population variance divided by n


(equivalently, the standard deviation is equal to the population standard deviation
divided by the square root of n).
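The points above are easy to see by simulation. The sketch below (Python/numpy, with an exponential population chosen only because it is clearly non-normal) draws 10,000 samples of size n = 30 and inspects the resulting sample means:

    import numpy as np

    rng = np.random.default_rng(7)                 # arbitrary seed
    pop_mean, n, trials = 1.0, 30, 10_000          # exponential(1): mean 1, variance 1

    sample_means = rng.exponential(pop_mean, size=(trials, n)).mean(axis=1)

    print(sample_means.mean())                     # near 1.0: the LLN at work
    print(sample_means.std(ddof=1))                # near 1/sqrt(30), about 0.183: sigma/sqrt(n)
    # Roughly 95% of sample means fall within +/- 2 standard errors, as approximate normality implies.
    print(np.mean(np.abs(sample_means - pop_mean) < 2 / np.sqrt(n)))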

Sample Statistics and Sampling Distributions


When we draw from (or take) a sample, the sample is a random variable with its own
characteristics. The standard deviation of a sampling distribution is called the
standard error. The mean of the sample or the sample mean is a random variable defined by:

\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}


Stock, Chapter 2:

Review of Statistics
In this chapter

Describe statistical inference, including estimation & hypothesis testing.


Describe and interpret estimators of the sample mean and their properties.
Describe and interpret the least squares estimator.
Define and interpret critical t-values.
Define, calculate and interpret a confidence interval.
Describe the properties of point estimators:
Distinguish between unbiased and biased estimators
Define an efficient estimator and consistent estimator
Explain and apply the process of hypothesis testing:
Define and interpret the null hypothesis and the alternative hypothesis
Distinguish between one-sided and two-sided hypotheses
Describe the confidence interval approach to hypothesis testing
Describe the test of significance approach to hypothesis testing
Define, calculate and interpret type I and type II errors
Define and interpret the p value
Define, calculate, and interpret the sample variance, sample standard deviation,
and standard error.
Define, calculate, and interpret confidence intervals for the population mean.
Perform and interpret hypothesis tests for the difference between two means.
Define, describe, apply, and interpret the t-statistic when the sample size is small.
Interpret scatterplots.
Define, describe, and interpret the sample covariance and correlation.

Describe the concept of statistical inference, including estimation and


hypothesis testing.
Statistical inference is the process of generalizing from the sample value to the population value.

A random sample is obtained.

An estimate is calculated from the sample (a.k.a., a sample statistic). For example, sample
mean, sample variance, sample skew, sample kurtosis.

In addition to the estimate itself (e.g., sample mean), we estimate the sampling error or
sampling variation.

Next, we conduct hypothesis testing by either: (i) confidence interval, (ii) test of
significance, or (iii) p value.


Statistical inference is the process of inferring facts about a population (i.e., the entire group)
based on an examination of a sample (i.e., a small part of the population). The process of
obtaining samples, and therefore sample estimators or statistics, is called sampling.

[Diagram: state the null hypothesis (the "straw man"; e.g., H0: B2 = 0), then test it by one of three routes: the confidence interval, the test of significance, or the p value.]

Population parameters
A population is considered known or understood when we know the probability distribution
function. If X is normally distributed, we say that the population is normally distributed (or, that
we have a normal population). If X is binomially distributed, we say that the population is
binomially distributed (or, that we have a binomial population.)

[Diagram: the population has parameters; the sample has statistics.]

The population is the entire group under study. The population is often unknowable.
The population size is denoted by a capital N.

The population (of which there is typically one) has parameters; e.g., the population
mean or the population variance. A parameter is a quantity in the f(x) distribution,
such as the mean, the standard deviation, or (p) in the case of the binomial distribution,
that helps describe the distribution. Quantities that appear in f(x), such as the mean (\mu) and
the standard deviation (\sigma), are called population parameters.

The sample is a subset of the population. For practical purposes, we draw a sample
(from the population) in order to make inferences about the population. The sample size
is denoted with small n

From the sample (of which there are many) we calculate estimates from estimators or
statistics; e.g., the sample mean or the sample variance. Estimators (statistics) are the
recipes for the best guesses about the true population parameters.

Estimators (statistics) versus parameters


In the context of linear regression, the parameters are the slope and intercept associated with
the population regression function (PRF); i.e., the true slope and true intercept. The
estimators are the formulas that produce the estimated slope and intercept coefficients associated
with the sample regression function (SRF). In short, we estimate slope and intercept (the
estimates) in the sample regression function, hoping to infer the true, unobserved population
slope and intercept (the parameters).

Describe and interpret estimators of the sample mean and their


properties.
The sample mean, \bar{Y}, is the best linear unbiased estimator (BLUE). In the Stock & Watson
example, the average (mean) wage among 200 people is $22.64:

Sample Mean                  $22.64
Sample Standard Deviation    $18.14
Sample size (n)              200
Standard Error               1.28
H0: Population Mean =        $20.00
Test t statistic             2.06
p value                      4.09%

Please note:

The average wage of (n = ) 200 observations is $22.64

The standard deviation of this sample is $18.14

The standard error of the sample mean is $1.28 because $18.14/SQRT(200) = $1.28

The degrees of freedom (d.f.) in this case are 199 = 200 - 1
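These figures can be reproduced with a minimal Python sketch (the normal approximation is used because n = 200 is large, so the p value differs only trivially from the 4.09% shown; scipy is an assumption, not part of the notes):

    from math import sqrt
    from scipy.stats import norm

    mean, s, n, null_mean = 22.64, 18.14, 200, 20.00   # Stock & Watson wage example
    se = s / sqrt(n)                                    # about 1.28

    t_stat = (mean - null_mean) / se                    # about 2.06
    p_value = 2 * norm.sf(abs(t_stat))                  # two-tailed, roughly 4%
    print(se, t_stat, p_value)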


Describe and interpret the least squares estimator.


The estimator (m) that minimizes the sum of squared gaps (Y_i - m)^2 is called the least squares
estimator:

\sum_{i=1}^{n} (Y_i - m)^2

Define and interpret critical t-values.


The t-statistic or t-ratio is given by:

t = \frac{\bar{Y} - \mu_{Y,0}}{SE(\bar{Y})}

The critical t-value or lookup t-value is the t-value for which the test just rejects the null
hypothesis at a given significance level. For example:

95% two-tailed (2T) critical t-value with 20 d.f. is 2.086

Significance test: is t-statistic > critical (lookup) t?

The critical t-values bound a region within the students distribution that is a specific
percentage (90%? 95%? 99%?) of the total area under the students t distribution curve. The
students t distribution with (n-1) degrees of freedom (d.f.) has a confidence interval given by:

\bar{Y} - t\,\frac{s_Y}{\sqrt{n}} \;\le\; \mu_Y \;\le\; \bar{Y} + t\,\frac{s_Y}{\sqrt{n}}

For example: critical t


If the (small) sample size is 20, then the 95% two-tailed critical t is 2.093. That is because the
degrees of freedom are 19 (d.f. = n - 1), and if we review the lookup table on the following page
(corresponds to Gujarati A-2) under the column for 0.025 one-tailed / 0.05 two-tailed and the row
for 19 d.f., we find the cell value = 2.093. Therefore, given 19 d.f., 95% of the area under the
Student's t distribution is bounded by +/- 2.093. Specifically, P(-2.093 \le t \le 2.093) = 95%.
Please note, further, that because the distribution is symmetrical (skew = 0), 5% shared between
both tails implies 2.5% in the left tail.


Student's t Lookup Table

Excel function: = TINV(two-tailed probability [larger #], d.f.)

One-tail:    0.25     0.10     0.05     0.025    0.01     0.005    0.001
Two-tail:    0.50     0.20     0.10     0.05     0.02     0.01     0.002
d.f.
 1           1.000    3.078    6.314    12.706   31.821   63.657   318.309
 2           0.816    1.886    2.920    4.303    6.965    9.925    22.327
 3           0.765    1.638    2.353    3.182    4.541    5.841    10.215
 4           0.741    1.533    2.132    2.776    3.747    4.604    7.173
 5           0.727    1.476    2.015    2.571    3.365    4.032    5.893
 6           0.718    1.440    1.943    2.447    3.143    3.707    5.208
 7           0.711    1.415    1.895    2.365    2.998    3.499    4.785
 8           0.706    1.397    1.860    2.306    2.896    3.355    4.501
 9           0.703    1.383    1.833    2.262    2.821    3.250    4.297
10           0.700    1.372    1.812    2.228    2.764    3.169    4.144
11           0.697    1.363    1.796    2.201    2.718    3.106    4.025
12           0.695    1.356    1.782    2.179    2.681    3.055    3.930
13           0.694    1.350    1.771    2.160    2.650    3.012    3.852
14           0.692    1.345    1.761    2.145    2.624    2.977    3.787
15           0.691    1.341    1.753    2.131    2.602    2.947    3.733
16           0.690    1.337    1.746    2.120    2.583    2.921    3.686
17           0.689    1.333    1.740    2.110    2.567    2.898    3.646
18           0.688    1.330    1.734    2.101    2.552    2.878    3.610
19           0.688    1.328    1.729    2.093    2.539    2.861    3.579
20           0.687    1.325    1.725    2.086    2.528    2.845    3.552
21           0.686    1.323    1.721    2.080    2.518    2.831    3.527
22           0.686    1.321    1.717    2.074    2.508    2.819    3.505
23           0.685    1.319    1.714    2.069    2.500    2.807    3.485
24           0.685    1.318    1.711    2.064    2.492    2.797    3.467
25           0.684    1.316    1.708    2.060    2.485    2.787    3.450
26           0.684    1.315    1.706    2.056    2.479    2.779    3.435
27           0.684    1.314    1.703    2.052    2.473    2.771    3.421
28           0.683    1.313    1.701    2.048    2.467    2.763    3.408
29           0.683    1.311    1.699    2.045    2.462    2.756    3.396
30           0.683    1.310    1.697    2.042    2.457    2.750    3.385

Notice the "sweet spot" of the table: critical values less than three (< 3.0). For confidences less
than 99% and d.f. > 13, the critical t is always less than 3.0. So, for example, a computed t of 7 or
13 will generally be significant. Keep this in mind because, in many cases, you do not need to
refer to the lookup table if the computed t is large; you can simply reject the null.


Define, calculate and interpret a confidence interval.


The confidence interval uses the product of [standard error × critical (lookup) t]. In the Stock
& Watson example, the confidence interval is given by 22.64 +/- (1.28)(1.972), because 1.28 is the
standard error and 1.972 (approximately the critical Z of 1.96) is the critical t value associated
with 95% two-tailed confidence:

Sample Mean             $22.64
Sample Std Deviation    $18.14
Sample size (n)         200
Standard Error          1.28
Confidence              95%
Critical t              1.972
Lower limit             $20.11
Upper limit             $25.17

95\% \text{ CI for } \mu_Y = \bar{Y} \pm t\,SE(\bar{Y}) = 22.64 \pm (1.972)(1.28)

Confidence Intervals: Another example with a sample of 28 P/E ratios


Assume we have price-to-earnings ratios (P/E ratios) of 28 NYSE companies:

Mean                     23.25
Variance                 90.13
Std Dev                  9.49
Count                    28
d.f.                     27
Confidence (1 - alpha)   95%
Significance (alpha)     5%
Critical t               2.052
Standard error           1.794
Lower limit              19.6     = 23.25 - (2.052)(1.794)
Upper limit              26.9     = 23.25 + (2.052)(1.794)
Hypothesis               18.5
t value                  2.65     = (23.25 - 18.5) / 1.794
p value                  1.3%
Reject null with         98.7%

The confidence coefficient is selected by the user; e.g., 95% (0.95) or 99% (0.99).
The significance = 1 - confidence coefficient.


To construct a confidence interval with the dataset above:

Determine degrees of freedom (d.f.): d.f. = sample size - 1. In this case, 28 - 1 = 27 d.f.

Select confidence. In this case, confidence coefficient = 0.95 = 95%

We are constructing an interval, so we need the critical t value for 5% significance with
two-tails.

The critical t value is equal to 2.052. That's the value with 27 d.f. and either 2.5% one-tailed
significance or 5% two-tailed significance (see how they are the same, provided the
distribution is symmetrical?).

The standard error is equal to the sample standard deviation divided by the square root
of the sample size (not d.f.!). In this case, 9.49/SQRT(28) ≈ 1.794.

The lower limit of the confidence interval is given by: the sample mean minus the
critical t (2.052) multiplied by the standard error (9.49/SQRT[28]).

The upper limit of the confidence interval is given by: the sample mean plus the
critical t (2.052) multiplied by the standard error (9.49/SQRT[28]).

\bar{X} - t\,\frac{s_X}{\sqrt{n}} \;\le\; \mu \;\le\; \bar{X} + t\,\frac{s_X}{\sqrt{n}}

23.25 - 2.052\,\frac{9.49}{\sqrt{28}} \;\le\; \mu \;\le\; 23.25 + 2.052\,\frac{9.49}{\sqrt{28}}

This confidence interval is a random interval. Why? Because it will vary randomly with
each sample, whereas we assume the population mean is static.
We don't say "the probability is 95% that the true population mean lies within
this interval." That implies the true mean is variable. Instead, we say the
probability is 95% that the random interval contains the true mean. See how the
population mean is trusted to be static while the interval varies?
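The hypothesis-test rows of the P/E table (t value 2.65, p value 1.3%) can also be reproduced with a short scipy sketch (scipy is an assumption; the notes compute these in Excel):

    from math import sqrt
    from scipy.stats import t

    mean, s, n, hypothesis = 23.25, 9.49, 28, 18.5   # the P/E example above
    se = s / sqrt(n)                                 # about 1.794

    t_stat = (mean - hypothesis) / se                # about 2.65
    p_value = 2 * t.sf(abs(t_stat), df=n - 1)        # two-tailed, about 1.3%
    print(t_stat, p_value)                           # reject the null with ~98.7% confidence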


Describe the properties of point estimators:

An estimator is a function of a sample of data to be drawn randomly from a population.

An estimate is the numerical value of the estimator when it is actually computed using
data from a specific sample.

The key properties of point estimators include:

Linearity: estimator is a linear function of sample observations. For example, the sample
mean is a linear function of the observations.

Unbiasedness: the average or expected value of the estimator is equal to the true value
of the parameter.

Minimum variance: the variance of the estimator is smaller than any competing
estimator. Note: an estimator can have minimum variance yet be biased.

Efficiency: among the set of unbiased estimators, the estimator with the minimum
variance is the efficient estimator (i.e., it has the smallest variance among unbiased
estimators).

Best linear unbiased estimator (BLUE): the estimator that combines three properties: (i) linear,
(ii) unbiased, and (iii) minimum variance.

Consistency: an estimator is consistent if, as the sample size increases, it approaches


(converges on) the true value of the parameter

Distinguish between unbiased and biased estimators


An estimator is unbiased if:

E(\hat{Y}) = \mu_Y
Otherwise the estimator is biased.
If the expected value of the estimator is the population parameter, the estimator is
unbiased. If, in repeated applications of a method, the mean value of the estimators
coincides with the true parameter value, that estimator is called an unbiased estimator.
Unbiasedness is a repeated-sampling property: if we draw several samples of size (n)
from a population and compute the unbiased sample statistic for each sample, the
average of those statistics will tend to approach (converge on) the population parameter.


Define an efficient estimator and consistent estimator


An efficient estimate is both unbiased (i.e., the mean or expectation of the statistic is equal to
the parameter) and its variance is smaller than the alternatives (i.e., all other things being equal,
we would prefer a smaller variance). A statement of the error or precision of an estimate is
often called its reliability

Efficient: among unbiased estimators, the efficient estimator has the smallest variance.

Consistent: consistency is a property of the estimator as the sample size increases.

Efficient: unbiased, with the smallest variance among unbiased estimators:

\text{variance}(\hat{Y}) < \text{variance}(\tilde{Y})

Consistent: as the sample size increases, the estimator approaches (converges on) the true
parameter value; as n \to \infty, E[estimator] = parameter:

\hat{Y} \xrightarrow{\;p\;} \mu_Y

Explain and apply the process of hypothesis testing:


Define and interpret the null hypothesis and the alternative hypothesis
Distinguish between one-sided and two-sided hypotheses
Describe the confidence interval approach to hypothesis testing
Describe the test of significance approach to hypothesis testing
Define, calculate and interpret type I and type II errors
Define and interpret the p value


Define and interpret the null hypothesis and the alternative hypothesis
Please note the null must contain the equal sign (=). The null hypothesis, denoted by H0, is
tested against the alternative hypothesis, which is denoted by H1 or sometimes HA. For a
two-sided test of the population mean:

H_0: E(Y) = \mu_{Y,0}
H_1: E(Y) \neq \mu_{Y,0}

For example, with a hypothesized mean of $20.00:

H_0: E(Y) = \$20
H_1: E(Y) \neq \$20

Often, we test for the significance of the intercept or a partial slope coefficient in a linear
regression. Typically, in this case, our null hypothesis is: "the slope is zero" or "there is no
correlation between X and Y" or "the regression coefficients jointly are not significant." In
which case, if we reject the null, we are finding the statistic to be significant which, in this case,
means significantly different from zero.

Statistical significance implies our null hypothesis (i.e., the parameter equals zero) was
rejected. We conclude the parameter is nonzero. For example, a significant slope
estimate means we rejected the null hypothesis that the true slope is zero.


Distinguish between one-sided and two-sided hypotheses

Your default assumption should be a two-sided hypothesis. If unsure, assume two-sided.
Here is a one-sided null hypothesis:

H_0: E(Y) \le \mu_{Y,0}
H_1: E(Y) > \mu_{Y,0}

Specifically, the one-sided null hypothesis is that the population average wage is less than
or equal to $20.00:

H_0: E(Y) \le \$20
H_1: E(Y) > \$20

The null hypothesis always includes the equal sign (=), regardless! The null cannot include
only less than (<) or greater than (>).


Describe the confidence interval approach to hypothesis testing

In the confidence interval approach, instead of computing the test statistic, we define the
confidence interval as a function of our confidence level; i.e., higher confidence implies a
wider interval. Then we simply ascertain whether the null hypothesized value is within the
interval (within the acceptance region).

90\% \text{ CI for } \mu_Y = \bar{Y} \pm 1.64\,SE(\bar{Y})
95\% \text{ CI for } \mu_Y = \bar{Y} \pm 1.96\,SE(\bar{Y})
99\% \text{ CI for } \mu_Y = \bar{Y} \pm 2.58\,SE(\bar{Y})


Describe the test of significance approach to hypothesis testing

In the significance approach, instead of defining the confidence interval, we compute the
standardized distance, in standard deviations, from the observed mean to the null hypothesis:
this is the test statistic (or computed t value). We compare it to the critical (or lookup) value.
If the test statistic is greater than the critical (lookup) value, then we reject the null.

Reject H0 at 90% if |t_act| > 1.64
Reject H0 at 95% if |t_act| > 1.96
Reject H0 at 99% if |t_act| > 2.58


Define, calculate and interpret type I and type II errors

If we reject a hypothesis which is actually true, we have committed a Type I error. If, on the
other hand, we accept a hypothesis that should have been rejected, we have committed a
Type II error.

Type I error = significance level = \alpha = Pr[reject H0 | H0 is true]; i.e., to reject a true hypothesis.
Type II error = \beta = Pr[accept H0 | H0 is false]; i.e., to accept a false hypothesis.

Type I and Type II errors: for example

Suppose we want to hire a portfolio manager who has produced an average return of +8%
versus an index that returned +7%. We conduct a statistical test to determine whether the
excess +1% is due to luck or alpha (skill). We set a 95% confidence level for our test. In
technical parlance, our null hypothesis is that the manager adds no skill (i.e., the expected
return is 7%).

Under the circumstances, a Type I error is the following: we decide that the excess is significant
and the manager adds value, but actually the out-performance was random (he did not add skill).
In technical terms, we mistakenly rejected the null. A Type II error is the following: we decide
the excess is random, so to our thinking the out-performance was random, but actually it was
not random and he did add value. In technical terms, we falsely accepted the null.


Define and interpret the p value

The p-value is the exact significance level: the lowest significance level at which the null can
be rejected. Equivalently, we can reject the null with (1-p)% confidence.

The p-value is an abbreviation that stands for "probability-value." Suppose our hypothesis is
that a population mean is 10; another way of saying this is our null hypothesis is H0: mean =
10 and our alternative hypothesis is H1: mean ≠ 10. Suppose we conduct a two-tailed test, given
the results of a sample drawn from the population, and the test produces a p-value of 0.03.
This means that we can reject the null hypothesis with 97% confidence; in other words, we can
be fairly confident that the true population mean is not 10.

Our example was a two-tailed test, but recall we have three possible tests:

The parameter is greater than (>) the stated value (right-tailed test), or

The parameter is less than (<) the stated value (left-tailed test), or

The parameter is either greater than or less than (≠) the stated value (two-tailed test).

Small p-values provide evidence for rejecting the null hypothesis in favor of the alternative
hypothesis, and large p-values provide evidence for not rejecting the null hypothesis in favor of
the alternative hypothesis.

Keep in mind a subtle point about the p-value and rejecting the null. It is a "soft" rejection.
Rather than accept the alternative, we "fail to reject" the null. Further, if we reject the null, we are
merely rejecting the null in favor of the alternative.
The analogy is to a jury verdict: the jury does not return a verdict of "innocent"; rather, it
returns a verdict of "not guilty."

For a two-sided test:

p\text{-value} = \Pr_{H_0}\left[\,|Z| > |t^{act}|\,\right] = 2\left(1 - \Phi(|t^{act}|)\right)


Define, calculate, and interpret the sample variance, sample standard


deviation, and standard error.

s_Y^2 = \frac{1}{n-1}\sum_{i=1}^{n} (Y_i - \bar{Y})^2

s_Y = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n} (Y_i - \bar{Y})^2}

SE(\bar{Y}) = \hat{\sigma}_{\bar{Y}} = \frac{s_Y}{\sqrt{n}}

Define, calculate, and interpret confidence intervals for the population


mean.

90\% \text{ CI for } \mu_Y = \bar{Y} \pm 1.64\,SE(\bar{Y})
95\% \text{ CI for } \mu_Y = \bar{Y} \pm 1.96\,SE(\bar{Y})
99\% \text{ CI for } \mu_Y = \bar{Y} \pm 2.58\,SE(\bar{Y})

Perform and interpret hypothesis tests for the difference between two
means
Test statistic for comparing two means:

t = [ (Ȳ_m − Ȳ_w) − d_0 ] / SE(Ȳ_m − Ȳ_w)

Define, describe, apply, and interpret the t-statistic when the sample size
is small.
If the sample size is small, the t-statistic has a Student's t distribution with (n−1) degrees of freedom:

t = (Ȳ − μ_Y,0) / √(s_Y² / n)
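For instance, a minimal Python sketch of these formulas, using a small hypothetical sample of returns (the data and the hypothesized mean of 7.0 are assumptions for illustration only):

import math

y = [7.2, 6.8, 7.9, 7.1, 6.5, 7.6]                  # assumed sample
n = len(y)
y_bar = sum(y) / n
s2 = sum((yi - y_bar) ** 2 for yi in y) / (n - 1)   # sample variance (divisor n-1)
s = math.sqrt(s2)                                   # sample standard deviation
se = s / math.sqrt(n)                               # standard error of the sample mean
ci_95 = (y_bar - 1.96 * se, y_bar + 1.96 * se)      # 95% confidence interval
t_stat = (y_bar - 7.0) / se                         # t-statistic for H0: mean = 7.0
print(y_bar, s2, se, ci_95, t_stat)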


Interpret scatterplots.
The scattergram is a plot of the dependent variable (on the Y axis) against the independent
(explanatory) variable (on the X axis). In Stock and Watson, the explanatory variable is the
student-teacher ratio (STR). The dependent variable is the test score:

[Figure: Test Scores versus Student-Teacher Ratio (Stock & Watson Figure 5.2). Y-axis: test scores (600 to 720); X-axis: student-teacher ratio (10 to 30).]

Define, describe, and interpret the sample covariance and correlation.


Covariance is the average cross-product. Sample covariance multiplies the sum of cross-products by 1/(n−1) rather than 1/n:

s_XY = [1/(n−1)] Σ_{i=1..n} (X_i − X̄)(Y_i − Ȳ)

Sample correlation is sample covariance divided by the product of sample standard deviations:

r_XY = s_XY / (S_X × S_Y)

σ_XY = cov(X,Y) = E[(X − μ_X)(Y − μ_Y)]


Covariance: For example


For a very simple example, consider three (X,Y) pairs: {(3,5), (2,4), (4,6)}:

X          Y          (X − X̄)(Y − Ȳ)
3          5          0.0
2          4          1.0
4          6          1.0
Avg = 3    Avg = 5    Avg = 2.0/3 ≈ 0.67

StdDev(X) = SQRT(0.67)    StdDev(Y) = SQRT(0.67)    Correlation = 1.0

Please note:

Average X = (3+2+4)/3 = 3.0. Average Y = (5+4+6)/3 = 5.0

The first cross-product = (3 − 3)*(5 − 5) = 0.0

The sum of cross-products = 0 + 2 + 1 = 2.0

The population covariance = [sum of cross-products] / n = 2.0 / 3 = 0.67

The sample covariance = [sum of cross-products] / (n- 1) = 2.0 / 2 = 1.0

Properties of covariance

1. If X & Y are independent, σ_XY = cov(X,Y) = 0
2. cov(a + bX, c + dY) = b·d·cov(X,Y)
3. cov(X,X) = var(X). In notation, σ_XX = σ_X²
4. If X & Y are not independent:
   σ²(X+Y) = σ_X² + σ_Y² + 2σ_XY
   σ²(X−Y) = σ_X² + σ_Y² − 2σ_XY
Note that a variable's covariance with itself is its variance. Keeping this in mind, we
realize that the diagonal in a covariance matrix is populated with variances.


Correlation Coefficient
The correlation coefficient is the covariance (X,Y) divided by the product of each variable's
standard deviation. The correlation coefficient translates covariance into a unitless metric
that runs from −1.0 to +1.0:

ρ_XY = cov(X,Y) / [StandardDev(X) × StandardDev(Y)]

σ_XY = ρ_XY × σ_X × σ_Y
Memorize this relationship between the covariance, the correlation coefficient, and the
standard deviations. It has high testability.
On the next page we illustrate the application of the variance theorems and the correlation
coefficient.
Please walk through this example so you understand the calculations.
The example refers to two products, Coke (X) and Pepsi (Y).
We (somehow) can generate growth projections for both products. For both Coke (X) and Pepsi
(Y), we have three scenarios (bad, medium, and good). Probabilities are assigned to each
growth scenario.
In regard to Coke:

Coke has a 20% chance of growing 3%,

a 60% chance of growing 9%, and

a 20% chance of growing 12%.

In regard to Pepsi,

Pepsi has a 20% chance of growing 5%,

a 60% chance of growing 7%, and

a 20% chance of growing 9%.

Finally, we know these outcomes are not independent. We want to calculate the correlation
coefficient.


Probability   Coke (X)   Pepsi (Y)   pX       pY       XY     pXY      X²     Y²     pX²     pY²
20%           3          5           0.6      1.0      15     3.0      9      25     1.8     5.0
60%           9          7           5.4      4.2      63     37.8     81     49     48.6    29.4
20%           12         9           2.4      1.8      108    21.6     144    81     28.8    16.2
Sum (=1.0)                           E(X)=8.4 E(Y)=7.0        E(XY)=62.4             E(X²)=79.2 E(Y²)=50.6

Key formula: Covariance of X,Y = E(XY) − E(X)E(Y) = 62.4 − (8.4)(7.0) = 3.6
VAR(X) = E[X²] − [E(X)]² = 79.2 − 70.56 = 8.64;  STDEVP(X) = 2.939
VAR(Y) = E[Y²] − [E(Y)]² = 50.6 − 49.00 = 1.60;  STDEVP(Y) = 1.265
Correlation = COV / (STD × STD) = 3.6 / (2.939 × 1.265) = 0.9682

The calculation of expected values is required: E(X), E(Y), E(XY), E(X2) and E(Y2). Make sure you
can replicate the following two steps:

The covariance is equal to E(XY) − E(X)E(Y): 3.6 = 62.4 − (8.4)(7.0)

The correlation coefficient (ρ) is equal to the Cov(X,Y) divided by the product of the
standard deviations: ρ_XY ≈ 97% = 3.6 / (2.94 × 1.26)
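The same calculation can be replicated with a short Python sketch (the scenario probabilities and growth rates are the ones used above):

p = [0.20, 0.60, 0.20]       # scenario probabilities
x = [3, 9, 12]               # Coke growth (%)
y = [5, 7, 9]                # Pepsi growth (%)

e_x  = sum(pi * xi for pi, xi in zip(p, x))                 # 8.4
e_y  = sum(pi * yi for pi, yi in zip(p, y))                 # 7.0
e_xy = sum(pi * xi * yi for pi, xi, yi in zip(p, x, y))     # 62.4
cov   = e_xy - e_x * e_y                                    # 3.6
var_x = sum(pi * xi ** 2 for pi, xi in zip(p, x)) - e_x ** 2   # 8.64
var_y = sum(pi * yi ** 2 for pi, yi in zip(p, y)) - e_y ** 2   # 1.60
rho = cov / (var_x ** 0.5 * var_y ** 0.5)                   # ~0.968
print(e_x, e_y, cov, rho)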


Key properties of correlation:

Correlation has the same sign (+/-) as covariance

Correlation measures the linear relationship between two variables

Between -1.0 and +1.0, inclusive

The correlation is a unit-less metric

Zero covariance implies zero correlation (but the converse is not necessarily true: zero correlation does not imply independence. For example, Y = X^2 is a nonlinear dependence with zero linear correlation)

Correlation (or dependence) is not causation. For example, in a basket credit default
swap, the correlation (dependence) between the obligors is a key input. But we do not
assume there is mutual causation (e.g., that one default causes another). Rather, more
likely, different obligors are similarly sensitive to economic conditions. So, economic
deterioration may be the external cause that all obligors have in common.
Consequently, their defaults exhibit dependence. But the causation is not internal.
Further, note that (linear) correlation is a special case of dependence. Dependence is
more general and includes non-linear relationships.

Sample mean
Sample mean is the sum of observations divided by the number of observations:
X̄ = (1/n) Σ_{i=1..n} X_i

Variance
A population variance is given by:

σ_x² = (1/n) Σ_{i=1..n} (X_i − X̄)²

The sample variance is divided by (n-1):

s_x² = [1/(n−1)] Σ_{i=1..n} (X_i − X̄)²

For example: population versus sample variance


Assume the following series of four numbers: 10, 12, 14, and 16. The average of the series is
(10+12+14+16) ÷ 4 = 13. For the population variance, in the numerator we want to sum the
squared differences. The population variance is given by [(10−13)² + (12−13)² + (14−13)² +
(16−13)²] ÷ 4 = 20 ÷ 4 = 5. The sample variance has the same numerator but (4−1) = 3 for the
denominator: 20 ÷ 3 ≈ 6.7. The standard deviation is the square root of the variance. The
population standard deviation is the square root of 5 ≈ 2.24 and the sample standard deviation is
the square root of 6.7 ≈ 2.6.
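A minimal Python sketch that replicates this example:

x = [10, 12, 14, 16]
n = len(x)
mean = sum(x) / n                            # 13.0
ss = sum((xi - mean) ** 2 for xi in x)       # 20.0 (sum of squared deviations)
pop_var, sample_var = ss / n, ss / (n - 1)   # 5.0 and ~6.67
print(pop_var ** 0.5, sample_var ** 0.5)     # ~2.24 and ~2.58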


Covariance
Covariance is the average cross-product:

σ_XY = (1/n) Σ (X_i − X̄)(Y_i − Ȳ)

Sample covariance is given by:

s_XY = [1/(n−1)] Σ (X_i − X̄)(Y_i − Ȳ)

Correlation coefficient
Correlation coefficient is given by:

ρ_XY = cov(X,Y) / [StdDev(X) × StdDev(Y)]

Sample correlation coefficient is given by:

r_XY = s_XY / (S_X × S_Y)

Skewness
Skewness is given by:

Skewness = E[(X − μ)³] / σ³

Sample skewness is given by:

Sample skewness = Σ (X − X̄)³ / [(N − 1) × S³]

Kurtosis
Kurtosis is given by:

Kurtosis = E[(X − μ)⁴] / σ⁴

Sample kurtosis is given by:

Sample kurtosis = Σ (X − X̄)⁴ / [(N − 1) × S⁴]


Stock, Chapter 4:

Linear Regression
with one regressor
In this chapter

Explain how regression analysis in econometrics measures the relationship between


dependent and independent variables.
Define and interpret a population regression function, regression coefficients,
parameters, slope and the intercept.
Define and interpret the stochastic error term (or noise component).
Define and interpret a sample regression function, regression coefficients,
parameters, slope and the intercept.
Describe the key properties of a linear regression.
Describe the method and assumptions of ordinary least squares for estimation of
parameters:
Define and interpret the explained sum of squares, the total sum of squares,
and the residual sum of squares
Interpret the results of an ordinary least squares regression

What is Econometrics?
Econometrics is a social science that applies tools (economic theory, mathematics and statistical
inference) to the analysis of economic phenomena. Econometrics consists of the application of
mathematical statistics to economic data to lend empirical support to the models constructed
by mathematical economics.

Methodology of econometrics

Create a statement of theory or hypothesis

Collect data: time-series, cross-sectional, or pooled (combination of time-series and


cross-sectional)

Specify the (pure) mathematical model: a linear function with parameters (but without
an error term)

Specify the statistical model: adds the random error term

Estimate the parameters of the chosen econometric model: we are likely to use ordinary
least squares (OLS) approach to estimate parameters


Check for model adequacy: model specification testing

Test the model's hypothesis

Use the model for prediction or forecasting

[Flowchart: Create theory (hypothesis) → Collect data → Specify mathematical model → Specify statistical (econometric) model → Estimate parameters → Test model specification → Test hypothesis → Use model to predict or forecast]

Note:

The pure mathematical model, although of prime interest to the mathematical
economist, is of limited appeal to the econometrician, for such a model assumes an
exact, or deterministic, relationship between the two variables.

The difference between the mathematical and statistical model is the random error
term (u in the econometric equation below). The statistical (or empirical) econometric
model adds the random error term (u):

Y_i = B_0 + B_1·X_i + u_i


Three different data types used in empirical analysis


Three types of data for empirical analysis:

Time series - returns over time for an individual asset

Cross-sectional - average return across assets on a given day

Pooled (combination of time series and cross-sectional) - returns over time for a
combination of assets; and

Panel data (a.k.a., longitudinal or micropanel) data is a special type of pooled data in
which the cross-sectional unit (e.g., family, company) is surveyed over time.

For example, we often characterize a portfolio with a matrix. In such a matrix, the assets are
given in the rows and the period returns (e.g., days/months/years) are given in the columns:

[Matrix illustration: rows are assets (Asset #1 through Asset #4) and columns are time periods (2006, 2007, 2008). Reading across a row (returns over time) exposes auto/serial correlation; reading down a column (returns across assets on a given date) exposes cross-sectional (spatial) correlation.]
For such a matrix portfolio, we can examine the data in at least three ways:

Returns over time for an individual asset (time series)

Average return across assets on a given day (cross-sectional or spatial)

Returns over time for a combination of assets (pooled)

Time series: returns over time for an individual asset. Example: returns on a single asset from Jan. through Mar. 2009.

Cross-sectional: average return across assets on a given day. Example: returns for a business/family on a given day.

Pooled: returns over time for a combination of assets. Includes panel data (a.k.a. longitudinal, micropanel).


Explain how regression analysis in econometrics measures the


relationship between dependent and independent variables.
A linear regression may have one or more of the following objectives:

To estimate the (conditional) mean, or average, value of the dependent variable

Test hypotheses about the nature of the dependence

To predict or forecast the dependent variable

One or more of the above

Correlation (dependence) is not causation. Further, linear correlation is a specific type of


dependence, which is a more general relationship (e.g., non-linear relationship).
In Stock and Watson, the authors regress student test scores (dependent variable) against class
size (independent variable):

TestScore = β_0 + β_1 × ClassSize + other factors

Y_i = β_0 + β_1·X_i + u_i

where Y_i is the dependent (regressand) variable and X_i is the independent (regressor) variable.

Define and interpret a population regression function, regression


coefficients, parameters, slope and the intercept.

Y_i = β_0 + β_1·X_i + u_i


Define and interpret the stochastic error term (or noise component).
The error term contains all the other factors aside from (X) that determine the value of the
dependent variable (Y) for a specific observation.

Y_i = β_0 + β_1·X_i + u_i

The stochastic error term is a random variable. Its value cannot be a priori determined.

May (probably) contain variables not explicit in the model

Even if all variables are included, there will still be some randomness

The error may also include measurement error

Ockham's razor: a model is a simplification of reality. We don't necessarily want to
include every explanatory variable

Define and interpret a sample regression function, regression


coefficients, parameters, slope and the intercept.
In theory, there is one unknowable population and one set of unknowable parameters (B1, B2).
But there are many samples; each sample produces its own SRF, an estimator (statistic) that yields estimates of the stochastic PRF.
Stochastic PRF:                          Y_i = B_0 + B_1·X_i + u_i
Sample regression function (SRF):        Ŷ_i = b_0 + b_1·X_i
Stochastic sample regression function:   Y_i = b_0 + b_1·X_i + e_i

Each sample produces its own scatterplot. Through this sample scatterplot, we can plot a sample
regression line (SRL). The sample regression function (SRF) characterizes this line; the SRF is
analogous to the PRF, but for each sample.

B0 = intercept = regression coefficient

B1 = slope = regression coefficient

e(i) = the residual term

Note the correspondence between error term and the residual. As we specify the model,
we ex ante anticipate an error; after we analyze the observations, we ex post observe
residuals.


Unlike the PRF which is presumed to be stable (unobserved), the SRF varies with each
sample. So, we expect to get different SRF. There is no single correct SRF!

[Chart: Each sample returns a different SRF (sampling variation). Two samples (Sample #1 and Sample #2) produce two different fitted regression lines.]

Describe the key properties of a linear regression.


It is okay if the regression function is non-linear in variables, but it must be linear in
parameters:

E(Y) = B_0 + B_1²·X_i     (linear variable, nonlinear parameter)

E(Y) = B_0 + B_1·X_i²     (nonlinear variable, linear parameter)


Describe the method and assumptions of ordinary least squares for


estimation of parameters:
The process of ordinary least squares estimation seeks to achieve the minimum value for the
residual sum of squares (squared residuals = e^2).

Purposes of the regression:

Estimate the (conditional) mean of the dependent variable

Test hypotheses about the nature of the dependence

Forecast the mean value of the dependent variable

(Correlation/dependence is not causation.)

OLS assumptions:

The conditional distribution of u(i) given X(i) has a mean of zero

[X(i), Y(i)] are independent and identically distributed (i.i.d.)

Large outliers are unlikely

Define and interpret the explained sum of squares, the total sum of
squares, and the residual sum of squares
We can break the regression equation into three parts:

Explained sum of squares (ESS),

Sum of squared residuals (SSR, a.k.a. RSS), and

Total sum of squares (TSS).

The explained sum of squares (ESS) is the squared distance between the predicted Y and the
mean of Y:
ESS = Σ_{i=1..n} (Ŷ_i − Ȳ)²


The sum of squared residuals (SSR) is the summation of each squared deviation between
the observed (actual) Y and the predicted Y:
SSR = Σ_{i=1..n} (Y_i − Ŷ_i)²

The sum of squared residuals (SSR) is the sum of the squared residual terms. It is directly related to the
standard error of the regression (SER):

SSR = Σ_{i=1..n} (Y_i − Ŷ_i)² = Σ û_i² = SER² × (n − 2)

Equivalently:

SER² = Σ e_i² / (n − 2)   and   SER = √[ Σ e_i² / (n − 2) ]

The ordinary least square (OLS) approach minimizes the SSR.


The SSR and the standard error of regression (SER) are directly related; the SER is the
standard deviation of the Y values around the regression line.
The standard error of the regression (SER) is a function of the sum of squared residual (SSR):

SER = √[ SSR / (n − 2) ] = √[ Σ e_i² / (n − 2) ]

Note the use of (n−2) instead of (n) in the denominator. Division by this smaller
number—(n−2) instead of (n)—produces an unbiased estimate.
(n−2) is used because the two-variable regression has (n−2) degrees of freedom (d.f.):
computing the slope and intercept estimates consumes two degrees of freedom.
If k = the number of explanatory variables plus the intercept (e.g., 2 if one explanatory
variable; 3 if two explanatory variables), then SER = SQRT[SSR/(n-k)].
If k = the number of slope coefficients (excluding the intercept), then similarly, SER =
SQRT[SSR/(n-k -1)]


Interpret the results of an ordinary least squares regression


In the Stock & Watson example, the authors regress TestScore against the Student-teacher ratio
(STR):

[Figure: Test Scores versus Student-Teacher Ratio. Y-axis: test scores (600 to 720); X-axis: student-teacher ratio (10 to 30).]
The regression function, with standard error, is given by:

TestScore-hat = 698.9 − 2.28 × STR
                (9.47)   (0.48)     [standard errors in parentheses]

The regression results are given by:

                            B(1) [slope]    B(0) [intercept]
Regression coefficients     -2.28           698.93
Standard errors, SE()       0.48            9.47
R^2, SER                    0.05            18.58
F, d.f.                     22.58           418.00
ESS, RSS                    7,794           144,315

Please note:

Both the slope and the intercept are significant at 95%, at least. The test statistics are
73.8 for the intercept (698.9/9.47) and −4.75 for the slope (−2.28/0.48). For example, given the
very high absolute test statistic for the slope, its p-value is approximately zero.

The coefficient of determination (R^2) is 0.05, which is equal to 7,794/(7,794 + 144,315)

The degrees of freedom are n − 2 = 420 − 2 = 418
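A minimal Python sketch of a two-variable OLS regression that produces the quantities discussed above (slope, intercept, R^2, SER); the six (STR, TestScore) observations below are hypothetical and are not the Stock & Watson data set:

def ols(x, y):
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
         sum((xi - x_bar) ** 2 for xi in x)                # slope
    b0 = y_bar - b1 * x_bar                                # intercept
    y_hat = [b0 + b1 * xi for xi in x]
    ssr = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # sum of squared residuals
    ess = sum((yh - y_bar) ** 2 for yh in y_hat)           # explained sum of squares
    tss = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares
    r2 = ess / tss
    ser = (ssr / (n - 2)) ** 0.5                           # standard error of the regression
    return b0, b1, r2, ser

print(ols([20, 22, 19, 25, 18, 23], [660, 650, 665, 640, 670, 648]))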


Stock, Chapter 5:

Single Regression:
Hypothesis Tests
In this chapter

Define, calculate, and interpret confidence intervals for regression coefficients.


Define and interpret hypothesis tests about regression coefficients.
Define and differentiate between homoskedasticity and heteroskedasticity.
Describe the implications of homoskedasticity and heteroskedasticity.

Define, calculate, and interpret confidence intervals for regression


coefficients.

Upper/lower limit = regression coefficient ± [standard error × critical value @ c%]

In the example from Stock and Watson, the lower limit = 680.4 = 698.9 − 9.47 × 1.96


               Coefficient    SE      95% CI Lower    95% CI Upper
Intercept      698.9          9.47    680.4           717.5
Slope (B1)     -2.28          0.48    -3.2            -1.3


Construct, perform, and interpret hypothesis tests and confidence


intervals for a single coefficient in a multiple regression.
In the Stock and Watson example,

STR (Student/Teacher ratio) is the independent variable

TestScore (Students test score ) is the dependent variable

The sample regression function (SRF) is given by:

TestScore-hat = 698.9 − 2.28 × STR
                (9.47)   (0.48)

Since 1.96 is the critical value associated with two-tailed 95% confidence, the 95%
confidence interval is given by:

95% CI = β̂_1 ± 1.96 × SE(β̂_1)

B0 (intercept):
Lower limit = 698.9 − 9.47 × 1.96 = 680.4
Upper limit = 698.9 + 9.47 × 1.96 = 717.5

B1 (STR):
Lower limit = −2.28 − 0.48 × 1.96 = −3.2
Upper limit = −2.28 + 0.48 × 1.96 = −1.3

Define and interpret hypothesis tests about regression coefficients.


The key idea here is that the regression coefficient (the estimator or sample statistic) is a
random variable that follows a Student's t distribution (because we do not know the population
variance; otherwise it would be the normal):

(b1 − B1) / se(b1) = [regression coefficient − null hypothesis value (0)] / se(regression coefficient) ~ t(n−2)

Test of hypothesis for the slope (b1)


To test the hypothesis that the regression coefficient (b1) is equal to some specified value (β*),
we use the fact that the statistic

test statistic t = (b1 − β*) / se(b1)


This has a Student's t distribution with n − 2 degrees of freedom because two coefficients
(slope and intercept) are estimated.
Using the same example:

TestScore-hat = 698.9 − 2.28 × STR
                (9.47)   (0.48)

STR: t statistic = |(−2.28 − 0)/0.48| = 4.75;  p value (2-tail) ≈ 0%

Define and differentiate between homoskedasticity and


heteroskedasticity.
The error term u(i) is homoskedastic if the variance of the conditional distribution of u(i) given
X(i) is constant for i = 1,,n and in particular does not depend on X(i).
Otherwise the error term is heteroskedastic.

Describe the implications of homoskedasticity and heteroskedasticity.


Implications of homoskedasticity: the OLS estimators remain unbiased and asymptotically
normal.
If the errors are heteroskedastic, we can use heteroskedasticity-robust standard errors.


Stock: Chapter 6:

Linear Regression
with Multiple
Regressors
In this chapter

Define, interpret, and discuss methods for addressing omitted variable bias.
Distinguish between simple and multiple regression.
Define and interpret the slope coefficient in a multiple regression.
Describe homoskedasticity and heteroskedasticity in a multiple regression.
Describe and discuss the OLS estimator in a multiple regression.
Define, calculate, and interpret measures of fit in multiple regression.
Explain the assumptions of the multiple linear regression model.
Explain the concept of imperfect and perfect multicollinearity and their
implications.

Define, interpret, and discuss methods for addressing omitted variable


bias.
Omitted variable bias occurs if both:
1. Omitted variable is correlated with the included regressor, and
2. Omitted variable is a determinant of the dependent variable

Distinguish between simple [single] and multiple regression.


Multiple regression model extends the single variable regression model:

Single regressor:    Y_i = β_0 + β_1·X_1i + u_i

Multiple regressors: Y_i = β_0 + β_1·X_1i + β_2·X_2i + … + β_k·X_ki + u_i,   i = 1, …, n


Define and interpret the slope coefficient in a multiple regression.


The B(1) slope coefficient, for example, is the effect on Y of a unit change in X(1) if we hold the
other independent variables, X(2) ., constant.

Y_i = β_0 + β_1·X_1i + β_2·X_2i + … + β_k·X_ki + u_i,   i = 1, …, n

β_2 is the effect on Y(i) of a unit change in X(2) if we hold constant X(1), X(3), …, X(k)

Describe homoskedasticity and heterosckedasticity in a multiple


regression.
If the variance of [u(i) | X(1i), …, X(ki)] is constant for i = 1, …, n, then the model is homoskedastic.
Otherwise, the model is heteroskedastic.

Describe and discuss the OLS estimator in a multiple regression.

TestScore-hat = 686 − 1.10 × STR − 0.65 × PctEL

The −1.10 is the OLS estimate of the coefficient on the student-teacher ratio (B1).


Define, calculate, and interpret measures of fit in multiple regression.


Standard error of regression (SER)
Standard error of regression (SER) estimates the standard deviation of the error term u(i). In
this way, the SER is a measure of spread of the distribution of Y around the regression line. In a
multiple regression, the SER is given by:

SER = √[ SSR / (n − k − 1) ]

Where (k) is the number of slope coefficients; e.g., in the case of a two-variable regression, k = 1.
For the standard error of the regression (SER), the denominator is n − [# of slope coefficients] − 1,
which equals n − [# of coefficients including the intercept].

Coefficient of determination (R^2)


The coefficient of determination is the fraction of the sample variance of Y(i) explained by (or
predicted by) the independent variables

R² = ESS / TSS = 1 − SSR / TSS

Adjusted R^2
The unadjusted R^2 will tend to increase as additional independent variables are added.
However, this does not necessarily reflect a better fitted model.
The adjusted R^2 is a modified version of the R^2 that does not necessarily increase when a new
independent variable is added. Adjusted R^2 is given by:

Adjusted R² = 1 − [ (n − 1) / (n − k − 1) ] × (SSR / TSS) = 1 − s²_û / s²_Y
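A minimal Python sketch of these measures of fit; the SSR, TSS, n and k values below are hypothetical inputs, not the Stock & Watson results:

n, k = 420, 2                      # observations; k = number of slope coefficients (assumed)
ssr, tss = 87_000, 152_000         # hypothetical sums of squares
r2 = 1 - ssr / tss
adj_r2 = 1 - (n - 1) / (n - k - 1) * (ssr / tss)
ser = (ssr / (n - k - 1)) ** 0.5   # standard error of the regression
print(r2, adj_r2, ser)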


Explain the assumptions of the multiple linear regression model.


1. Conditional distribution of u(i) given X(1i), X(2i),,X(ki) has mean of zero
2. X(1i), X(2i), X(ki), Y(i) are independent and identically distributed (i.i.d.)
3. Large outliers are unlikely
4. No perfect collinearity

Explain the concept of imperfect and perfect multicollinearity and their


implications.
Imperfect multicollinearity does not prevent estimation of the regression, nor does it imply a
logical problem with the choice of independent variables (i.e., regressor).

However, imperfect multicollinearity does mean that one or more of the regression
coefficients could be estimated imprecisely


Stock, Chapter 7:

Hypothesis Tests
and Confidence
Intervals in
Multiple Regression
In this chapter

Construct, perform, and interpret hypothesis tests and confidence intervals for a
single coefficient in a multiple regression.
Construct, perform, and interpret hypothesis tests and confidence intervals for
multiple coefficients in a multiple regression.
Define and interpret the F-statistic.
Define, calculate, and interpret the homoskedasticity-only F-statistic.
Describe and interpret tests of single restrictions involving multiple coefficients.
Define and interpret confidence sets for multiple coefficients.
Define and discuss omitted variable bias in multiple regressions.
Interpret the R2 and adjusted-R2 in a multiple regression.

Construct, perform, and interpret hypothesis tests and confidence


intervals for a single coefficient in a multiple regression.
The Stock & Watson example adds an additional independent variable (regressor). Under this
three variable regression, Test Scores (dependent) are a function of the Student/Teacher ratio
(STR) and the Percentage of English learners in district (PctEL).


STR = Student/Teacher ratio


PctEL = Percentage (%) of English learners in district

TestScore-hat = 686 − 1.10 × STR − 0.65 × PctEL
                (7.41)  (0.38)      (0.04)      [standard errors in parentheses]

STR:   t statistic = (−1.10 − 0)/0.38 = −2.90;   p value (2-tail) = 0.40%
PctEL: t statistic = (−0.65 − 0)/0.04 = −16.52;  p value (2-tail) ≈ 0.0%

STR 95% CI:   lower limit = −1.10 − 0.38×1.96 = −1.85;  upper limit = −1.10 + 0.38×1.96 = −0.35
PctEL 95% CI: lower limit = −0.65 − 0.04×1.96 = −0.73;  upper limit = −0.65 + 0.04×1.96 = −0.57

Construct, perform, and interpret hypothesis tests and confidence


intervals for multiple coefficients in a multiple regression.
The overall regression F-statistic tests the joint hypothesis that all slope coefficients are zero

Define and interpret the F-statistic.


F-statistic is used to test joint hypothesis about regression coefficients.
F = (1/2) × [ t₁² + t₂² − 2·ρ̂_{t1,t2}·t₁·t₂ ] / [ 1 − ρ̂²_{t1,t2} ]


Define, calculate, and interpret the homoskedasticity-only F-statistic.


If the error term is homoskedastic, the F-statistic can be written in terms of the improvement in
the fit of the regression as measured either by the sum of squared residuals or by the regression
R^2.

F = [ (SSR_restricted − SSR_unrestricted) / q ] / [ SSR_unrestricted / (n − k_unrestricted − 1) ]
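A minimal Python sketch of the homoskedasticity-only F-statistic; the restricted and unrestricted SSR values are hypothetical:

ssr_restricted, ssr_unrestricted = 150_000, 144_000   # assumed sums of squared residuals
q, n, k_unrestricted = 2, 420, 2                      # restrictions, observations, slope coefficients
f_stat = ((ssr_restricted - ssr_unrestricted) / q) / \
         (ssr_unrestricted / (n - k_unrestricted - 1))
print(f_stat)    # compare to the F(q, n - k - 1) critical value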

Describe and interpret tests of single restrictions involving multiple


coefficients.
Approach #1: Test the restrictions directly
Approach #2: Transform the regression

Define and interpret confidence sets for multiple coefficients.


Confidence ellipse characterizes a confidence set for two coefficients; this is the two-dimension
analog to the confidence interval:

[Chart: confidence ellipse for two coefficients; vertical axis: coefficient on Expn (B2), horizontal axis: coefficient on STR (B1).]


Define and discuss omitted variable bias in multiple regressions.


Omitted variable bias: an omitted determinant of Y (the dependent variable) is correlated with
at least one of the regressor (independent variables).

Interpret the R2 and adjusted-R2 in a multiple regression.


There are four pitfalls to watch in using the R^2 or adjusted R^2:
1. An increase in the R^2 or adjusted R^2 does not necessarily imply that an added variable

is statistically significant
2. A high R^2 or adjusted R^2 does not mean the regressors are a true cause of the

dependent variable
3. A high R^2 or adjusted R^2 does not mean there is no omitted variable bias
4. A high R^2 or adjusted R^2 does not necessarily mean you have the most appropriate set

of regressors, nor does a low R^2 or adjusted R^2 necessarily mean you have an
inappropriate set of regressors


Rachev, Menn, and

Fabozzi, Chapter 2:

Discrete
Probability
Distributions
In this chapter

Describe the key properties of the Bernoulli distribution, Binomial distribution, and
Poisson distribution, and identify common occurrences of each distribution.
Identify the distribution functions of Binomial and Poisson distributions for various
parameter values.

Describe the key properties of the Bernoulli distribution, Binomial


distribution, and Poisson distribution, and identify common occurrences
of each distribution.
Bernoulli
A random variable X is called Bernoulli distributed with parameter (p) if it has only two
possible outcomes, often encoded as 1 (success or survival) or 0 (failure or
default), and if the probability for realizing 1 equals p and the probability for 0
equals 1 p. The classic example for a Bernoulli-distributed random variable is the default
event of a company.
A Bernoulli variable is discrete and has two possible outcomes:

X = 1 if C defaults in I;  X = 0 otherwise


Binomial
A binomial distributed random variable is the sum of (n) independent and identically distributed
(i.i.d.) Bernoulli-distributed random variables. The probability of observing (k) successes is
given by:

P(Y = k) = C(n, k) × p^k × (1 − p)^(n−k),   where C(n, k) = n! / [ (n − k)! k! ]
Poisson
The Poisson distribution depends upon only one parameter, lambda , and can be interpreted as
an approximation to the binomial distribution. A Poisson-distributed random variable is usually
used to describe the random number of events occurring over a certain time interval. The
lambda parameter () indicates the rate of occurrence of the random events; i.e., it tells us
how many events occur on average per unit of time.
In the Poisson distribution, the random number of events that occur during an interval of time,
(e.g., losses/ year, failures/ day) is given by:

P(N = k) = (λ^k / k!) × e^(−λ)

               Normal      Binomial      Poisson
Mean           μ           np            λ
Variance       σ²          npq           λ
Standard Dev.  σ           √(npq)        √λ

In Poisson, lambda is both the expected value (the mean) and the variance!
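A minimal Python sketch of the two probability functions; the portfolio of 50 credits with a 2% default probability is a hypothetical example (note how the Poisson with lambda = np approximates the binomial):

from math import comb, exp, factorial

n, p, k = 50, 0.02, 3                                 # assumed: 50 credits, 2% PD, k = 3 defaults
p_binomial = comb(n, k) * p ** k * (1 - p) ** (n - k)

lam = n * p                                           # Poisson approximation, lambda = np = 1.0
p_poisson = exp(-lam) * lam ** k / factorial(k)

print(p_binomial, p_poisson)   # binomial mean = np = 1.0, variance = npq = 0.98
                               # Poisson mean = variance = lambda = 1.0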


Identify common occurrences of the Bernoulli distribution, Binomial


distribution, and Poisson distribution.
The Bernoulli is used to characterize default; consequently, the binomial is used to characterize a
portfolio of credits. In finance, the Poisson distribution is often used, as a generic stochastic
process, to model the time of default in some credit risk models.

Bernoulli: default (0/1)
Binomial: basket of credits; basket CDS; BET
Poisson: operational loss frequency

Identify the distribution functions of Binomial and Poisson distributions


for various parameter values.

[Chart: Poisson, binomial, and normal distribution functions plotted together for comparable parameter values.]


[Chart: Binomial distributions for different values of p: p = 20%, p = 50%, p = 80%.]

[Chart: Poisson distributions for different lambdas: lambda = 5, lambda = 10, lambda = 20.]


Rachev, Menn, and

Fabozzi, Chapter 3:

Continuous
Probability
Distributions
In this chapter

Describe the key properties of Normal, Exponential, Weibull, Gamma, Beta, Chi
squared, Students t, Lognormal, Logistic and Extreme Value distributions.
Explain the summation stability of normal distributions.
Describe the hazard rate of an exponentially distributed random variable.
Explain the relationship between exponential and Poisson distributions.
Explain why the generalized Pareto distribution is commonly used to model
operational risk events.
Explain the concept of mixtures of distributions.

Describe the key properties of Normal, Exponential, Weibull, Gamma,


Beta, Chisquared, Students t, Lognormal, Logistic and Extreme Value
distributions.
Normal
Characteristics of the normal distribution include:

The middle of the distribution, mu (), is the mean (and median). This first moment is
also called the location

Standard deviation and variance are measures of dispersion (a.k.a., shape). Variance is
the second-moment; typically, variance is denoted by sigma-squared such that standard
deviation is sigma.

The distribution is symmetric around . In other words, the normal has skew = 0

The normal has kurtosis = 3 or excess kurtosis = 0


Properties of normal distribution:

Location-scale invariance: Imagine random variable X, which is normally distributed
with the parameters μ and σ. Now consider random variable Y, which is a linear function
of X: Y = aX + b. In general, the distribution of Y might substantially differ from the
distribution of X, but in the case where X is normally distributed, the random variable Y
is again normally distributed, with parameters [mean = a·μ + b] and [standard deviation =
|a|·σ]. Specifically, we do not leave the class of normal distributions if we
multiply the random variable by a factor or shift the random variable.

Summation stability: If you take the sum of several independent random variables,
which are all normally distributed with mean (i) and standard deviation (i), then the
sum will be normally distributed again.

The normal distribution possesses a domain of attraction. The central limit theorem
(CLT) states thatunder certain technical conditionsthe distribution of a large sum of
random variables behaves necessarily like a normal distribution.

The normal distribution is not the only class of probability distributions having a domain
of attraction. Actually three classes of distributions have this property: they are called
stable distributions.

Exponential
The exponential distribution is popular in queuing theory. It is used to model the time we have
to wait until a certain event takes place. According to the text, examples include the time
until the next client enters the store, the time until a certain company defaults or the time until
some machine has a defect.

f(x) = λ·e^(−λx),   λ = 1/β,   x ≥ 0

The exponential density is non-zero only for x ≥ 0:

[Chart: Exponential density with parameter 0.5.]


Weibull
Weibull is a generalized exponential distribution; i.e., the exponential is a special case of the
Weibull where the alpha parameter equals 1.0.

F(x) = 1 − e^( −(x/β)^α ),   x ≥ 0

[Chart: Weibull densities for (alpha=0.5, beta=1), (alpha=2, beta=1), and (alpha=2, beta=2).]

The main difference between the exponential distribution and the Weibull is that, under the
Weibull, the default intensity depends upon the point in time t under consideration. This allows
us to model the aging effect or teething troubles:

For α > 1—also called the light-tailed case—the default intensity is monotonically increasing
with increasing time, which is useful for modeling the aging effect as it happens for machines:
the default intensity of a 20-year-old machine is higher than that of a 2-year-old machine.

For α < 1—the heavy-tailed case—the default intensity decreases with increasing time. That
means we have the effect of teething troubles, a figurative explanation for the effect that, after
some trouble at the beginning, things work well, as it is known from new cars. The credit spread
on non-investment-grade corporate bonds provides a good example: credit spreads usually
decline with maturity. The credit spread reflects the default intensity and, thus, we have the
effect of teething troubles. If the company survives the next two years, it will survive for a
longer time as well, which explains the decreasing credit spread.

For α = 1, the Weibull distribution reduces to an exponential distribution with parameter β.


Gamma distribution
The family of Gamma distributions forms a two parameter probability distribution family with
the density function (pdf) given by:

f(x) = [ 1 / (β^α Γ(α)) ] · x^(α−1) · e^(−x/β),   x ≥ 0
[Chart: Gamma densities for (alpha=1, beta=1), (alpha=2, beta=0.5), and (alpha=4, beta=0.25).]
The Gamma distribution is related to:

For alpha = 1, Gamma distribution becomes exponential distribution

For alpha = k/2 and beta = 2, Gamma distribution becomes Chi-square distribution

Beta distribution
The beta distribution has two parameters: alpha (center) and beta (shape). The beta
distribution is very flexible, and popular for modeling recovery rates.

[Chart: Beta densities (popular for recovery rates) for (alpha=0.6, beta=0.6), (alpha=1, beta=5), (alpha=2, beta=4), and (alpha=2, beta=1.5).]


Example of Beta Distribution


The beta distribution is often used to model recovery rates. Here are two examples: one beta
distribution to model a junior class of debt (i.e., lower mean recovery) and another for a senior
class of debt (i.e., lower loss given default):
                 Junior    Senior
alpha (center)   2.0       4.0
beta (shape)     6.0       3.3
Mean recovery    25%       55%

[Chart: Beta densities for the junior and senior examples; X-axis: recovery (residual value) from 0% to 100%.]

Lognormal
The lognormal is common in finance: if an asset return (r) is normally distributed, the
continuously compounded future asset price level (or ratio of prices; i.e., the wealth ratio) is
lognormal. Expressed in reverse, if a variable is lognormal, its natural log is normal.

[Chart: Lognormal density: non-zero (positive support), positive skew, heavy right tail.]


Logistic
A logistic distribution has heavy tails:

[Chart: Logistic densities for (alpha=0, beta=1), (alpha=2, beta=1), and (alpha=0, beta=3), compared against N(0,1).]

Extreme Value Theory


Measures of central tendency and dispersion (variance, volatility) are impacted more by
observations near the mean than by outliers. The problem is that, typically, we are concerned with
outliers; we want to size the likelihood and magnitude of low frequency, high severity (LFHS)
events. Extreme value theory (EVT) solves this problem by fitting a separate distribution to
the extreme loss tail. EVT uses only the tail of the distribution, not the entire dataset.

In applying extreme value theory (EVT), two general approaches are:

Block maxima (BM). The classic approach

Peaks over threshold (POT). The modern approach that is often preferred.


Block maxima
The dataset is parsed into (m) identical, consecutive and non-overlapping periods called blocks.
The length of the block should be greater than the periodicity; e.g., if the returns are daily, blocks
should be weekly or more. Block maxima partitions the set into time-based intervals. It requires
that observations be identically and independently (i.i.d.) distributed.

Generalized extreme value (GEV) fits block maxima


The Generalized extreme value (GEV) distribution is given by:

H_ξ(y) = exp[ −(1 + ξy)^(−1/ξ) ]   if ξ ≠ 0
H_ξ(y) = exp( −e^(−y) )            if ξ = 0

The (xi) parameter is the tail index; it represents the fatness of the tails. In this expression, a
lower tail index corresponds to fatter tails.

[Chart: Generalized extreme value (GEV) density.]

Per the (unassigned) Jorion reading on EVT, the key things to know here are that (1) among
the three classes of GEV distributions (Gumbel, Frechet, and Weibull), we only care
about the Frechet because it fits fat-tailed distributions, and (2) the shape parameter
determines the fatness of the tails (higher shape = fatter tails).


Peaks over threshold (POT)


Peaks over threshold (POT) collects the dataset of losses above (or in excess of) some threshold.

The cumulative distribution function here refers to the probability that the excess loss (i.e., the
loss, X, in excess of the threshold, u, is less than some value, y, conditional on the loss exceeding
the threshold):

F_u(y) = P( X − u ≤ y | X > u )

[Chart: loss distribution with threshold u; POT collects the losses in excess of u.]

Peaks over threshold (POT):

G_{ξ,β}(x) = 1 − (1 + ξx/β)^(−1/ξ)   if ξ ≠ 0
G_{ξ,β}(x) = 1 − exp(−x/β)           if ξ = 0


[Chart: Generalized Pareto distribution (GPD) density.]

Block maxima is: time-based (i.e., blocks of time), traditional, less sophisticated, more
restrictive in its assumptions (i.i.d.)
Peaks over threshold (POT) is: more modern, has at least three variations (semiparametric; unconditional parametric; and conditional parametric), is more flexible

EVT Highlights:
Both GEV and GPD are parametric distributions used to model heavy-tails.
GEV (Block Maxima)

Has three parameters: location, scale and tail index

If tail > 0: Frechet

GPD (peaks over threshold, POT)

Has two parameters: scale and tail (or shape)

But must select threshold (u)

Explain the summation stability of normal distributions.


The sum of independent normally distributed random variables is also normally distributed


Describe the hazard rate of an exponentially distributed random variable.


In credit risk modeling, the parameter λ = 1/β is interpreted as a hazard rate or default intensity.

f(x) = (1/β)·e^(−x/β),   F(x) = 1 − e^(−x/β),   x ≥ 0

Equivalently, with λ = 1/β:   f(x) = λ·e^(−λx),   F(x) = 1 − e^(−λx)

Explain the relationship between exponential and Poisson distributions.


The Poisson distribution counts the number of discrete events in a fixed time period; it is related
to the exponential distribution, which measures the time between arrivals of the events. If
events occur in time as a Poisson process with parameter , the time between events are
distributed as an exponential random variable with parameter . For example (from the learning
XLS):

                                   Scenario A    Scenario B
Avg. events / day (lambda)         6             4
Number of hours / day              24            24
Average per hour                   0.250         0.167

Poisson
Events / day (x)                   6             2
P[X = x]                           16.1%         14.7%
(The Poisson distribution gives the probability that exactly 6 [or 2] events (losses) will occur in one day.)

Exponential
Hours (t)                          1             12
Average events over t              0.25          2.00
Days / hour                        0.042         0.042
P[Y > t]                           77.9%         13.5%
P[Y < t], CDF                      22.1%         86.5%

Alternative
Days / event                       0.17          0.25
Hours / event                      4.00          6.00
P[Y > t]                           77.9%         13.5%
P[Y < t], CDF                      22.1%         86.5%
(The exponential distribution gives the probability that the next loss will occur within the next 1 hour [Scenario A] or 12 hours [Scenario B].)
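A minimal Python sketch replicating the first column (lambda = 6 events per day):

from math import exp, factorial

lam_day = 6.0
p_exactly_6 = exp(-lam_day) * lam_day ** 6 / factorial(6)   # Poisson: ~16.1%

lam_hour = lam_day / 24                  # 0.25 events per hour
p_wait_over_1h = exp(-lam_hour * 1)      # exponential: P[next event later than 1 hour] ~ 77.9%
p_within_1h = 1 - p_wait_over_1h         # ~22.1%
print(p_exactly_6, p_wait_over_1h, p_within_1h)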


Explain why the generalized Pareto distribution is commonly used to


model operational risk events.
The generalized Pareto distribution (GPD) models the distribution of so-called peaks over
threshold. The GPD is the limiting distribution of excesses above a threshold (the peaks-over-threshold model). Possible applications are in the field of operational risk, where we are
concerned about losses above a certain threshold.
For severity tails, empirical distributions are rarely sufficient (there is rarely enough data!).

Explain the concept of mixtures of distributions.


If two normal distributions with the same mean (but different variances) are combined (mixed), the
mixture distribution exhibits leptokurtosis (heavy tails). More generally, mixtures are infinitely flexible.

[Chart: Two normal densities (Normal 1, Normal 2) and their mixture.]


Jorion, Chapter 12:

Monte Carlo
Methods
In this chapter

Describe how to simulate a price path using a geometric Brownian motion model.
Describe how to simulate various distributions using the inverse transform method.
Describe the bootstrap method.
Explain how simulations can be used for computing VaR and pricing options.
Describe the relationship between the number of Monte Carlo replications and the
standard error of the estimated values.
Describe and identify simulation acceleration techniques.
Explain how to simulate correlated random variables using Cholesky factorization.
Describe deterministic simulations.
Discuss the drawbacks and limitations of simulation procedures.

Describe how to simulate a price path using a geometric Brownian motion


(GBM) model.

1. Specify a random process (e.g., GBM for a stock)
2. Run trials (10 or 1 MM), each a function of a random variable
3. For all trials, calculate the terminal (at horizon) asset (or portfolio) value
4. Sort outcomes, best to worst; quantiles (e.g., the 1%ile) are VaRs

Geometric Brownian motion (GBM) is a continuous-time stochastic process in which the randomly
varying quantity (in our example, the asset value) fluctuates over time (in our example, time is
the variable parameter). The deterministic progression of the asset's value is the drift; the
random component is the shock. GBM can therefore be represented as drift + shock, as shown below.


dS_t = μ·S_t·dt + σ·S_t·dz is the infinitesimal (continuous) representation of the GBM

ΔS = S_(t−1) × ( μ·Δt + σ·ε·√Δt ) is the discrete representation of the GBM

GBM models a deterministic drift plus a stochastic random shock.

The above shows the shock and drift progression of the asset. The asset drifts upward with the
expected return μ over the time interval Δt, but the drift is also impacted by shocks from the
random variable ε (a standard normal draw), scaled by the volatility σ. Because variance scales
with time Δt, volatility scales with the square root of time, √Δt.

[Chart: 10-day GBM simulation (40 trials); simulated price paths starting at $10.00, day 1 through day 10.]

The expected drift is the deterministic component and the shock is the random component in this
stock price process simulation. The simulated terminal values form an empirical distribution
(rather than a parametric one); the Monte Carlo simulation therefore produces an empirical
distribution of future values from which the VaR can be read directly.
GBM assumes constant volatility (generally a weakness), unlike GARCH(1,1), which models
time-varying volatility.
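A minimal Python sketch of the procedure (specify GBM, run trials, sort terminal values, read off the quantile); the starting price, drift, volatility, horizon and trial count are assumptions for illustration:

import numpy as np

np.random.seed(42)
s0, mu, sigma = 10.0, 0.0, 0.30            # assumed start price, annual drift, annual volatility
days, trials, dt = 10, 10_000, 1 / 252

eps = np.random.standard_normal((trials, days))
steps = 1 + mu * dt + sigma * np.sqrt(dt) * eps    # discrete GBM: dS/S = mu*dt + sigma*eps*sqrt(dt)
terminal = s0 * steps.prod(axis=1)                 # terminal value of each trial

pnl = terminal - s0
var_99 = -np.percentile(pnl, 1)                    # 99% VaR = 1st percentile simulated loss
print(var_99)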


Describe how to simulate various distributions using the inverse


transform method.
The inverse transform method translates a random number (under a uniform distribution) into
a cumulative standard normal distribution:

Random    CDF: NORMSINV()    pdf: NORMDIST()
0.10      -1.282             0.18
0.15      -1.036             0.23
0.20      -0.842             0.28
0.25      -0.674             0.32
0.30      -0.524             0.35
0.35      -0.385             0.37
0.40      -0.253             0.39
0.45      -0.126             0.40
0.50       0.000             0.40

A random variable is generated between 0 and 1. In Excel, the function is =RAND(). This uniform
draw is then mapped through the inverse of the standard normal CDF; e.g., a uniform draw of 0.45
corresponds to -0.126 because NORMSINV(0.45) = -0.126.
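A minimal Python sketch of the inverse transform method (the Python analog of =NORMSINV(RAND()) in Excel):

import numpy as np
from scipy.stats import norm

u = np.random.uniform(size=5)      # uniform [0,1) draws, like =RAND()
z = norm.ppf(u)                    # inverse standard normal CDF, like =NORMSINV()
print(np.column_stack([u, z]))     # e.g., norm.ppf(0.45) ~ -0.126, norm.ppf(0.40) ~ -0.253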


Describe the bootstrap method.


The bootstrap method is a subclass of (type of) historical simulation (like HS). In regular
historical simulation, current portfolio weights are applied to the historical sample (e.g., 250
trading days). The bootstrap differs because it is with replacement: a historical period (i.e., a
vector of daily returns on a given day in the historical window) is selected at random. This
becomes the simulated vector of returns for day T+1. Then, for day T+2 simulation, a daily
vector is again selected from the window; it is with replacement because each simulated day
can select from the entire historical sample. Unlike historical simulation—which runs the
current portfolio through the single historical sample—the bootstrap randomizes the historical
sample and therefore can generate many historically-informed samples.
The advantages of the bootstrap include: can model fat-tails (like HS); by generating
repeated samples, we can ascertain estimate precision. Limitations, according to Jorion,
include: for small sample sizes, the bootstrapped distribution may be a poor
approximation of the actual one.

The bootstrap randomizes the historical date but keeps the same indexed returns within each date (this preserves cross-sectional correlations).


Monte Carlo versus bootstrapping


Monte Carlo simulation generates a distribution of returns and/or asset price paths by the use of
random numbers. Bootstrapping randomizes the selection of actual historical returns.

Both Monte Carlo and bootstrapping generate a hypothetical future scenario set and determine VaR by a lookup: what is the 95th or 99th worst simulated loss? Neither incorporates autocorrelation (at least, basic Monte Carlo does not).

Monte Carlo:
Algorithm describes the return path (e.g., GBM)
Randomizes the return
Correlation must be modeled
Uses parametric assumptions
Model risk

Bootstrapping:
Retrieves a set (vector) of actual historical returns
Randomizes the historical date
Built-in correlation
No distributional assumption (does not assume normality)
Needs lots of data

Monte Carlo advantages include:

Powerful & flexible

Able to account for a range of risks (e.g., price risk, volatility risk, and nonlinear
exposures)

Can be extended to longer horizons (important for credit risk measurement)

Can measure operational risk.

However, Monte Carlo simulation can be expensive and time-consuming to run,


including: costly computing power and costly expertise (human capital).

Bootstrapping advantages include:

Simple to implement

Naturally incorporates spatial (cross-sectional) correlation

Automatically captures non-normality in price changes (i.e., does not impose a


parametric distributional assumption)


Explain how simulations can be used for computing VaR and pricing
options.
Value at Risk (VaR)
Once a price path has been generated, we can build a portfolio distribution at the end of the
selected horizon:
1. Choose a stochastic process and parameters
2. Generate a pseudo-sequence of variables from which prices are computed
3. Calculate the value of the asset (or portfolio) under this particular sequence of prices at
the target horizon
4. Repeat steps 2 and 3 as many times as needed
This process creates a distribution of values. We can sort the observations and tabulate the
expected value, E(F_T), and the quantile, Q(F_T, c), which is the value exceeded in c times 10,000
replications. Value at risk (VaR) relative to the mean is then:

VaR(c, T) = E(F_T) − Q(F_T, c)


Pricing options
Options can be priced under the risk-neutral valuation method by using Monte Carlo simulation:
1. Choose a process with drift equal to riskless rate (mu = r)
2. Simulate prices to the horizon
3. Calculate the payoff of the stock option (or derivative) at maturity
4. Repeat steps as often as needed
The current value of the derivative is obtained by discounting at the risk free rate and averaging
across all experiments:

f_t = E*[ e^(−rT) × F(S_T) ]
This formula means that each future simulated price, F(St), is discounted at the risk-free rate;
i.e., to solve for the present value. Then the average of those values is the expected value, or
value of the option. The Monte Carlo method has several advantages. It can be applied in
many situations, including options with so-called price-dependent paths (i.e., where the value
depends on the particular path) and options with atypical payoff patterns. Also, it is powerful
and flexible enough to handle varieties of options. With one notable exception: it cannot
accurately price options where the holder can exercise early (e.g., American-style options).

www.bionicturtle.com

FRM 2011 QUANTITATIVE ANALYSIS 91

Describe the relationship between the number of Monte Carlo


replications and the standard error of the estimated values.
The relationship between the number of replications and precision (i.e., the standard error of
estimated values) is not linear: to increase the precision by 10X requires approximately 100X
more replications. The standard error of the sample standard deviation:

SE ( )

1
SE ( )

2T

1
2T

Therefore to increase VaR precision by (1/T) requires a multiple of about T2 the number of
replications; e.g., x 10 precision needs x 100.

= 10^2 =
100x
replications
se() = 1/10
reduce se() for
better precision

Describe and identify simulation acceleration techniques.


Because an increase in precision requires exponentially more replications, acceleration
techniques are used:

Antithetic variable technique: changes the sign of the random samples. Appropriate
when the original distribution is symmetric. Creates twice as many replications at little
additional cost.

Control variates technique: attempts to increase accuracy by reducing sample variance


instead of by increasing sample size (the traditional approach).

Importance sampling technique (Jorion calls this the most effective acceleration
technique): attempts to sample along more important paths

Stratified sampling technique: partitions the simulation region into two zones.

www.bionicturtle.com

FRM 2011 QUANTITATIVE ANALYSIS 92

Explain how to simulate correlated random variables using Cholesky


factorization.
Cholesky factorization
By virtue of the inverse transform method, we can use =NORMSINV(RAND()) to create standard
random normal variables. The RAND() function is a uniform distribution bounded by [0,1]. The
NORMSINV() translates the random number into the z-value that corresponds to the probability
given by a cumulative distribution. For example, =NORMSINV(5%) returns -1.645 because 5% of
the area under a normal curve lies to the left of - 1.645 standard deviations.
But no realistic asset or portfolio contains only one risk factor. To model several risk factors, we
could simply generate multiple random variables. Put more technically, the realistic modeling
scenario is a multivariate distribution function that models multiple random variables. But the
problem with this approach, if we just stop there, is that correlations are not included. What we
really want to do is simulate random variables but in such a way that we capture or reflect the
correlations between the variables. In short, we want random but correlated variables.
The typical way to incorporate the correlation structure is by way of a Cholesky factorization (or
decomposition) . There are four steps:
1. The covariance matrix. This contains the implied correlation structure; in fact, a
covariance matrix can itself be decomposed into a correlation matrix and a volatility
vector.
2. The covariance matrix (R) will be decomposed into a lower-triangle matrix (L) and an
upper-triangle matrix (U). Note they are mirrors of each other: both have identical
diagonals; their zero elements and nonzero elements are merely "flipped."
3. Given that R = LU, we can solve for all of the matrix elements: a, b, c (the diagonal) and x,
y, z. That is by definition: a Cholesky decomposition is the solution
that produces two triangular matrices whose product is the original (covariance) matrix.
4. Given the solution for the matrix elements, we can calculate the product of the triangular
matrices to ensure the product does equal the original covariance matrix (i.e., does LU =
R?). Note, in Excel a single array formula can be used with =MMULT().
The lower triangle (L) is the result of the Cholesky decomposition. It is the thing we can use to
simulate random variables, and it is itself "informed" by our covariance matrix.


Correlated random variables


The following transforms two independent random variables into correlated random variables:

ε₁ = η₁
ε₂ = ρ·η₁ + (1 − ρ²)^(1/2)·η₂

η₁, η₂ : independent random variables
ρ      : correlation coefficient
ε₁, ε₂ : correlated random variables
[Charts: scatter of correlated random variables, and two correlated simulated price time series.]

Snapshot from the learning spreadsheet (correlation = 0.75; Series #1: mean 1%, volatility 10%; Series #2: mean 1%, volatility 10%):

N(0,1)     Correlated N(0,1)    Series #1    Series #2
 2.06       1.26                $10.00       $10.00
 0.52      (0.73)               $10.62       $9.37
 1.51       0.99                $12.34       $10.39
(1.44)      0.48                $10.68       $11.00

If the variables are uncorrelated, randomization can be performed independently for each variable. Generally, however, variables are correlated. To account for this correlation, we start with a set of independent variables (\eta), which are then transformed into correlated variables (\epsilon). In a two-variable setting, we construct the following:


\epsilon_1 = \eta_1

\epsilon_2 = \rho\,\eta_1 + (1 - \rho^2)^{1/2}\,\eta_2

where \rho is the correlation coefficient between the variables (\epsilon). This is a transformation of two independent random variables into correlated random variables. After the transformation, \epsilon_1 and \epsilon_2 are random variables that have the necessary correlation. The first random variable is retained (\epsilon_1 = \eta_1) and the second is transformed (recast) into a random variable that is correlated with the first.

Describe deterministic simulations.


Quasi Monte Carlo (QMC) a.k.a. deterministic simulation
Instead of drawing independent samples, the deterministic scheme systematically fills the space
left by previous numbers in the series.

Advantage: the standard error shrinks at a rate of 1/K rather than 1/\sqrt{K}.

Disadvantage: since the draws are not independent, accuracy cannot be easily determined.

Monte Carlo simulations methods generate independent, pseudorandom points that attempt to
fill an N-dimensional space, where N is the number of variables driving the price of securities.
Researchers now realize that the sequence of points does not have to be chosen randomly. In a
deterministic scheme, the draws (or trials) are not entirely random. Instead of random trials,
this scheme fills space left by previous numbers in the series.
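A minimal sketch of the idea (hypothetical, not from the reading): a Halton (van der Corput) low-discrepancy sequence systematically fills the unit interval, whereas pseudorandom draws can cluster and leave gaps.

```python
import random

def halton(n, base=2):
    """First n points of the Halton (van der Corput) sequence in the given base."""
    points = []
    for i in range(1, n + 1):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        points.append(x)
    return points

deterministic = halton(16, base=2)                     # systematically fills [0, 1]
pseudorandom = [random.random() for _ in range(16)]    # may cluster / leave gaps

print(sorted(round(p, 4) for p in deterministic))
print(sorted(round(p, 4) for p in pseudorandom))
```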

Scenario Simulation
The first step consists of using principal-component analysis to reduce the dimensionality of the
problem; i.e., to use the handful of factors, among many, that are most important.
The second step consists of building scenarios for each of these factors, approximating a normal
distribution by a binomial distribution with a small number of states.


Discuss the drawbacks and limitations of simulation procedures.


Simulation Methods are flexible

Either postulate a stochastic process or resample historical data

All use full valuation on the target date

However

More prone to model risk: need to pre-specify the distribution

Much slower and less transparent than analytical methods

Sampling variation (more precision requires vastly greater number of replications)

The tradeoff is speed vs. accuracy


A key drawback of the Monte Carlo method is the computational requirements; a large number
of replications are typically required (e.g., thousands of trials are not unusual).
Simulations inevitably generate sampling variability, or variations in summary statistics due to
the limited number of replications. More replications lead to more precise estimates but take
longer to estimate.


Hull, Chapter 21:

Estimating
Volatilities and
Correlations
In this chapter

Discuss how historical data and various weighting schemes can be used in
estimating volatility.
Describe the exponentially weighted moving average (EWMA) model for estimating
volatility and its properties.
Estimate volatility using the EWMA model.
Describe the generalized auto regressive conditional heteroscedasticity
[GARCH(p,q)] model for estimating volatility and its properties.
Estimate volatility using the GARCH(p,q) model.
Explain mean reversion and how it is captured in the GARCH(1,1) model.
Discuss how the parameters of the GARCH(1,1) and the EWMA models are estimated
using maximum likelihood methods.
Explain how GARCH models perform in volatility forecasting.
Discuss how correlations and covariances are calculated, and explain the
consistency condition for covariances.

Discuss how historical data and various weighting schemes can be used in
estimating volatility.
Take two steps to compute historical (not implied) volatility:
1. Compute the series of periodic (e.g., daily) returns,
2. Choose a weighting scheme (to translate a series into a single metric)


Compute the series of periodic returns (e.g., 1 period = 1 day)


Assume that one period equals one day. You can either compute the continuously compounded daily return or the simple percentage change. If S_{i-1} is yesterday's price and S_i is today's price, the continuously compounded return (u_i) is given by:

u_i = \ln\!\left(\frac{S_i}{S_{i-1}}\right)

The simple percentage change is given by:

u_i = \frac{S_i - S_{i-1}}{S_{i-1}}

John Hull uses the simple percentage change but Linda Allen uses the log return (continuously compounded) because log returns are time consistent. Hull's method is not incorrect; rather, it is an acceptable approximation for short (daily) periods.

Choose a weighting scheme


The series can be either un-weighted (each return is equally weighted) or weighted. A weighted
scheme puts more weight on recent returns because they tend to be more relevant.

Un-weighted (or equally weighted) scheme


The un-weighted (which is really equally-weighted) variance is a standard historical variance.
In this case, the variance is given by:

\sigma_n^2 = \frac{1}{m-1}\sum_{i=1}^{m}\left(u_{n-i} - \bar{u}\right)^2

where:
\sigma_n^2 = variance rate per day
m = the most recent m observations
\bar{u} = the mean (average) of the daily returns (u_i)

Hull, for practical purposes, makes the following two simplifying assumptions:

The average daily return is assumed to be zero: \bar{u} = 0

The denominator (m-1) is replaced with m

This produces a simplified version of the standard (un-weighted) variance:

\sigma_n^2 = \frac{1}{m}\sum_{i=1}^{m} u_{n-i}^2


This simplified version replaces (m-1) with (m) in the denominator. (m-1) produces an
unbiased estimator and (m) produces a maximum likelihood estimator.
How can there be two different formulas for sample variance? Recall (from Gujarati) these
are estimators: recipes intended to produce estimates of the true population variance.
There can be different recipes; although many will have undesirable properties.
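A short sketch of the two steps (the price series is hypothetical): compute log returns, then apply the simplified un-weighted estimator (zero mean, divide by m).

```python
import math

# Hypothetical daily closing prices
prices = [100.0, 101.2, 100.5, 102.0, 101.1, 103.3]

# Step 1: continuously compounded daily returns, u_i = ln(S_i / S_{i-1})
returns = [math.log(s1 / s0) for s0, s1 in zip(prices, prices[1:])]

# Step 2: simplified un-weighted variance (assume mean return = 0, divide by m rather than m-1)
m = len(returns)
variance = sum(u ** 2 for u in returns) / m
volatility = math.sqrt(variance)

print(f"daily variance   = {variance:.6f}")
print(f"daily volatility = {volatility:.4%}")
```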

The weighted scheme (a better approach, generally)


The standard approach gives equal weight to each return. But to forecast, it is better to give
greater weight to more recent data. A generic model for this approach is given by a weighted
moving average:

\sigma_n^2 = \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

The alpha () parameters are simply weights; the sum of the alpha () parameters must equal
one because they are weights.
We can now add another factor to the model: the long-run average variance rate. The idea is that the variance is mean regressing: think of the variance as having a gravitational pull toward its long-run average. We add another term to the equation above, in order to capture the long-run average variance. The added term is the weighted long-run variance:

\sigma_n^2 = \gamma V_L + \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

The added term is gamma (\gamma, the weight) multiplied by the long-run variance (V_L), because the long-run variance, like the other factors, is weighted.
This is known as an ARCH(m) model. Often omega (\omega = \gamma V_L) replaces the first term. So here is a reformatted ARCH(m) model:

\sigma_n^2 = \omega + \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

Summary: Un-weighted versus weighted


Un-weighted scheme:

\sigma_n^2 = \frac{1}{m}\sum_{i=1}^{m} u_{n-i}^2

Weighted scheme (weights must sum to one):

\sigma_n^2 = \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

Describe the exponentially weighted moving average (EWMA) model for estimating volatility and its properties. Estimate volatility using the EWMA model.
In exponentially weighted moving average (EWMA), the weights decline (in constant proportion,
given by lambda).
Exponentially weighted moving average (EWMA):

\sigma_n^2 = (1-\lambda)\lambda^0 u_{n-1}^2 + (1-\lambda)\lambda^1 u_{n-2}^2 + (1-\lambda)\lambda^2 u_{n-3}^2 + \cdots

The ratio between any two consecutive weights is a constant, lambda.

Recursive version of EWMA (the infinite series elegantly reduces to the recursive form):

\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2

RiskMetrics™ is a branded EWMA:

\sigma_n^2 = 0.94\,\sigma_{n-1}^2 + 0.06\,u_{n-1}^2


EWMA is a special case of GARCH (1,1). Here is how we get from GARCH (1,1) to EWMA:

GARCH(1,1): \sigma_t^2 = a + b\,r_{t-1,t}^2 + c\,\sigma_{t-1}^2

Then we let a = 0 and (b + c) = 1, such that the above equation simplifies to:

GARCH(1,1): \sigma_t^2 = b\,r_{t-1,t}^2 + (1-b)\,\sigma_{t-1}^2

This is now equivalent to the formula for the exponentially weighted moving average (EWMA):

EWMA: \sigma_t^2 = b\,r_{t-1,t}^2 + (1-b)\,\sigma_{t-1}^2, i.e., \sigma_t^2 = \lambda\,\sigma_{t-1}^2 + (1-\lambda)\,r_{t-1,t}^2
In EWMA, the lambda parameter now determines the decay: a lambda that is close to one
(high lambda) exhibits slow decay.

RiskMetricsTM Approach
RiskMetrics is a branded form of the exponentially weighted moving average (EWMA) approach:

h_t = \lambda\,h_{t-1} + (1-\lambda)\,r_{t-1}^2
The optimal (theoretical) lambda varies by asset class, but the overall optimal parameter used
by RiskMetrics has been 0.94. In practice, RiskMetrics only uses one decay factor for all series:

0.94 for daily data

0.97 for monthly data (month defined as 25 trading days)

Technically, the daily and monthly models are inconsistent. However, they are both easy to use,
they approximate the behavior of actual data quite well, and they are robust to misspecification.
GARCH (1, 1), EWMA and RiskMetrics are each parametric and recursive.
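A minimal sketch of the recursive EWMA update with the RiskMetrics daily lambda of 0.94 (the price series and the seed variance are hypothetical):

```python
import math

lam = 0.94                      # RiskMetrics daily decay factor
prices = [100.0, 99.0, 100.5, 101.0, 99.5]
variance = 0.0001               # assumed seed variance (1% daily volatility)

for s_prev, s_now in zip(prices, prices[1:]):
    u = math.log(s_now / s_prev)                    # most recent daily return
    variance = lam * variance + (1 - lam) * u ** 2  # sigma_n^2 = lambda*sigma_{n-1}^2 + (1-lambda)*u_{n-1}^2
    print(f"return {u:+.4%} -> updated volatility {math.sqrt(variance):.4%}")
```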


Describe the generalized autoregressive conditional heteroscedasticity (GARCH(p,q)) model for estimating volatility and its properties. Estimate volatility using the GARCH(p,q) model.
EWMA is a special case of GARCH(1,1) where gamma = 0 and (alpha + beta = 1)

\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2
GARCH (1,1) is the weighted sum of a long run-variance (weight = gamma), the most recent
squared-return (weight = alpha), and the most recent variance (weight = beta)

\sigma_n^2 = \gamma V_L + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2

The mean reversion term (\gamma V_L) is the product of a weight (gamma) and the long-run (unconditional) variance. If gamma = 0, GARCH(1,1) reduces to EWMA.

The term \beta\,\sigma_{n-1}^2 is the product of beta and the most recent variance; beta is analogous to lambda in EWMA. The term \alpha\, u_{n-1}^2 is the product of the alpha weight and the most recent squared return; alpha is analogous to (1 - lambda) in EWMA.


In the volatility practice bag (learning spreadsheet 2.b.6), we illustrate and compare the
calculation of EWMA to GARCH(1,1):

                                        Example 1      Example 2
beta (b) or lambda                      0.860          0.898        In both, most weight to the lag variance
1 - lambda (if EWMA: lambda only)       0.140          0.102        In EWMA, only two weights
sum of weights                          1.00           1.00
If GARCH (1,1): alpha, beta, & gamma
omega (w)                               0.00000200     0.00000176   omega = gamma * long-run variance
alpha (a)                               0.130          0.063        weight to lag return
alpha + beta (a+b)                      0.9900         0.9602       persistence of GARCH
gamma                                   0.010          0.040        weight to L.R. var = 1 - alpha - beta
sum of weights                          1.000          1.000
Long-term variance                      0.00020        0.00004      omega / (1 - alpha - beta)
Long-term volatility                    1.4142%        0.6650%      square root of long-term variance

Updated volatility estimate: assumptions
Last volatility                         1.60%          0.60%
Last variance                           0.000256       0.000036
Yesterday's price                       10.00          10.00
Today's price                           9.90           10.21
Last return                             -1.0%          2.0%

EWMA
Updated variance                        0.000235       0.000074     lambda*lag variance + (1-lambda)*lag return^2
Updated volatility                      1.53%          0.86%

GARCH (1,1)
Updated variance                        0.000236       0.000060     omega + beta*lag variance + alpha*lag return^2
Updated volatility                      1.53%          0.77%

GARCH (1,1) forecast
Number of days (t)                      10             10
Forecast variance                       0.00023218     0.00005463   L.R. var + (alpha+beta)^t * (variance - L.R. var)
Forecast volatility                     1.524%         0.739%


Explain mean reversion and how it is captured in the GARCH(1,1) model.


Compared to EWMA, GARCH(1,1) has an extra term (omega). This extra term is the weighted long-run average variance; i.e., the product of the long-run average variance and its weight (gamma):

\sigma_n^2 = \omega + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2

We can solve for the long-run average variance as a function of omega and the weights (alpha, beta):

V_L = \frac{\omega}{1 - \alpha - \beta}

Discuss how the parameters of the GARCH(1,1) and the EWMA models are
estimated using maximum likelihood methods.
In maximum likelihood methods we choose parameters that maximize the likelihood of the
observations occurring.

Max Likelihood Estimation (MLE) for GARCH(1,1)


Avg return              0.001
mu                      0.0006
Std dev (returns)       0.006
Omega                   0.0000
Alpha                   0.0001
Beta                    0.8221
mu * 1000               0.646
alpha                   0.000
persistence             0.822
variance * 10000        0.363
Log likelihood value:   110.94
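A sketch of the MLE idea under simplifying assumptions (zero mean return, hypothetical return series): choose omega, alpha and beta to maximize the log-likelihood, i.e. the sum of [-ln(v_i) - u_i^2 / v_i], here by minimizing its negative with scipy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
u = rng.normal(0, 0.01, size=1000)     # hypothetical daily returns (zero mean assumed)

def neg_log_likelihood(params, returns):
    omega, alpha, beta = params
    v = np.empty_like(returns)
    v[0] = returns.var()                                   # seed the variance series
    for t in range(1, len(returns)):
        v[t] = omega + alpha * returns[t - 1] ** 2 + beta * v[t - 1]
    return -np.sum(-np.log(v) - returns ** 2 / v)          # negative log-likelihood

result = minimize(
    neg_log_likelihood,
    x0=[1e-6, 0.05, 0.90],                                 # starting guesses
    args=(u,),
    bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)],
    method="L-BFGS-B",
)
omega, alpha, beta = result.x
print(omega, alpha, beta, "persistence =", alpha + beta)
```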

Explain how GARCH models perform in volatility forecasting.


The forecasted variance rate, (k) days forward, is given by:

E[\sigma_{n+k}^2] = V_L + (\alpha + \beta)^k\,(\sigma_n^2 - V_L)

Equivalently, the expected future variance rate, (t) periods forward, is given by:

E[\sigma_{n+t}^2] = V_L + (\alpha + \beta)^t\,(\sigma_n^2 - V_L)


For example, assume that a current volatility estimate (period n) is given by the following
GARCH (1, 1) equation:

\sigma_n^2 = 0.00008 + (0.1)(4\%)^2 + (0.7)(0.0016)


In this example, alpha is the weight (0.1) assigned to the previous squared return (the previous
return was 4%), beta is the weight (0.7) assigned to the previous variance (0.0016).
What is the expected future volatility, in ten days (n + 10)?
First, solve for the long-run variance. It is not 0.00008; this term is the product of the variance
and its weight. Since the weight must be 0.2 (= 1 - 0.1 -0.7), the long run variance = 0.0004.

V_L = \frac{0.00008}{1 - 0.1 - 0.7} = 0.0004

Second, we need the current variance (period n). That is almost given to us above:

\sigma_n^2 = 0.00008 + 0.00016 + 0.00112 = 0.00136


Now we can apply the formula to solve for the expected future variance rate:

E[\sigma_{n+10}^2] = 0.0004 + (0.1 + 0.7)^{10}\,(0.00136 - 0.0004) = 0.0005031
This is the expected variance rate, so the expected volatility is approximately 2.24%. Notice how
this works: the current volatility is about 3.69% and the long-run volatility is 2%. The 10-day
forward projection fades the current rate nearer to the long-run rate.
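The arithmetic above can be checked with a few lines (a sketch; the parameter values are those used in the example):

```python
omega, alpha, beta = 0.00008, 0.1, 0.7
prev_return, prev_variance = 0.04, 0.0016

long_run_variance = omega / (1 - alpha - beta)                               # 0.0004
current_variance = omega + alpha * prev_return ** 2 + beta * prev_variance  # 0.00136

t = 10
forecast_variance = long_run_variance + (alpha + beta) ** t * (current_variance - long_run_variance)
print(long_run_variance, current_variance, forecast_variance)   # 0.0004, 0.00136, ~0.0005031
print(f"forecast volatility ~ {forecast_variance ** 0.5:.2%}")  # ~2.24%
```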

Discuss how correlations and covariances are calculated, and explain the
consistency condition for covariances.
Correlations play a key role in the calculation of value at risk (VaR). We can use similar methods
to EWMA for volatility. In this case, an updated covariance estimate is a weighted sum of

The recent covariance; weighted by lambda

The recent cross-product; weighted by (1-lambda)

\text{cov}_n = \lambda\,\text{cov}_{n-1} + (1-\lambda)\,x_{n-1}\,y_{n-1}
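A one-step sketch of the covariance update (hypothetical returns, lambda = 0.94):

```python
lam = 0.94
prev_cov = 0.000050        # yesterday's covariance estimate (hypothetical)
x, y = -0.012, 0.008       # most recent returns on the two assets

# cov_n = lambda*cov_{n-1} + (1-lambda)*x_{n-1}*y_{n-1}
updated_cov = lam * prev_cov + (1 - lam) * x * y
print(updated_cov)
```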


Allen, Boudoukh, and

Saunders, Chapter 2:

Quantifying Volatility in
VaR Models
In this chapter

Discuss how asset return distributions tend to deviate from the normal distribution.
Explain potential reasons for the existence of fat tails in a return distribution and
discuss the implications fat tails have on analysis of return distributions.
Distinguish between conditional and unconditional distributions.
Discuss the implications regime switching has on quantifying volatility.
Explain the various approaches for estimating VaR.
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: Historic simulation
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: Historical standard deviation
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: Exponential smoothing
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: GARCH approach
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: Multivariate density estimation
Compare, contrast and calculate parametric and non-parametric approaches for
estimating conditional volatility, including: Hybrid methods
Explain the process of return aggregation in the context of volatility forecasting
methods.
Explain how implied volatility can be used to predict future volatility and discuss its
advantages and disadvantages.
Explain the implications of mean reversion in returns and return volatility for
forecasting VaR over long time horizons.
Discuss the effects non-synchronous data has on estimating correlation and describe
approaches that mitigate the impact of non-synchronous data on risk estimates.
Discuss the use of backtesting for comparing VaR results using different volatility
estimation approaches and the desirable attributes of VaR estimates.


Key terms
Risk varies over time. Models often assume a normal (Gaussian) distribution (normality) with
constant volatility from period to period. But actual returns are non-normal and volatility varies
over time (volatility is time-varying or non-constant). Therefore, it is hard to use parametric
approaches to random returns; in technical terms, it is hard to find robust distributional
assumptions for stochastic asset returns

Conditional parameter (e.g., conditional volatility): a parameter such as variance that


depends on (is conditional on) circumstances or prior information. A conditional
parameter, by definition, changes over time.

Persistence: In EWMA, the lambda parameter (\lambda). In GARCH (1,1), the sum of the alpha (\alpha) and beta (\beta) parameters. High persistence implies slow decay toward the long-run average variance.

Autoregressive: Recursive. A parameter (today's variance) is a function of itself (yesterday's variance).

Heteroskedastic: Variance changes over time (homoskedastic = constant variance).

Leptokurtosis: a fat-tailed distribution where relatively more observations are near the
middle and in the fat tails (kurtosis > 3)

How to Estimate Volatility


Take two steps to compute historical (not implied) volatility:
1. Compute the series of periodic (e.g., daily) returns,
2. Choose a weighting scheme (to translate a series into a single metric)

Compute the series of periodic returns (e.g., 1 period = 1 day)


Assume that one period equals one day. You can either compute the continuously compounded daily return or the simple percentage change. If S_{i-1} is yesterday's price and S_i is today's price, the continuously compounded return is:

u_i = \ln\!\left(\frac{S_i}{S_{i-1}}\right)

The simple percentage return is given by:

u_i = \frac{S_i - S_{i-1}}{S_{i-1}}

Linda Allen contrasts three periodic returns (i.e., continuously compounded, simple percentage change, and absolute level change). She argues continuously compounded returns must be used when computing VaR because they are time consistent (except for interest rate-related variables, which use the absolute level change).


Choose a weighting scheme


The series can be either un-weighted (each return is equally weighted) or weighted. A weighted
scheme puts more weight on recent returns because they tend to be more relevant.

The standard un-weighted (or equally weighted) scheme


The un-weighted (which is really equally-weighted) variance is a standard historical variance.
In this case, the variance is given by:
\sigma_n^2 = \frac{1}{m-1}\sum_{i=1}^{m}\left(u_{n-i} - \bar{u}\right)^2

where:
\sigma_n^2 = variance rate per day
m = the most recent m observations
\bar{u} = the mean (average) of the daily returns (u_i)

For practical purposes, the above equation is often simplified with the following assumptions:

The average daily return is assumed to be zero: \bar{u} = 0

The denominator (m-1) is replaced with m

This produces a simplified version of the standard (un-weighted) variance:

\sigma_n^2 = \frac{1}{m}\sum_{i=1}^{m} u_{n-i}^2

This simplified version replaces (m-1) with (m) in the denominator. (m-1) produces an
unbiased estimator and (m) produces a maximum likelihood estimator.

The weighted scheme (a better approach, generally)


The standard approach gives equal weight to each return. But for forecasting purposes, it is better to give greater weight to more recent data. A generic model for this approach is given by a weighted moving average:

\sigma_n^2 = \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

The alpha (\alpha) parameters are simply weights; the sum of the alpha (\alpha) parameters must equal one because they are weights. We can now add another factor to the model: the long-run average variance rate. The idea here is that the variance is mean regressing: think of the variance as having a gravitational pull toward its long-run average. We add another term to the equation above, in order to capture the long-run average variance. The added term is the weighted long-run variance:


\sigma_n^2 = \gamma V_L + \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

The added term is gamma (\gamma, the weight) multiplied by the long-run variance (V_L), because the long-run variance, like the other factors, is weighted. Replacing the first term with omega (\omega = \gamma V_L) gives the reformatted ARCH(m) model:

\sigma_n^2 = \omega + \sum_{i=1}^{m} \alpha_i\, u_{n-i}^2

Stochastic behavior of returns


Risk measurement (VaR) concerns the tail of a distribution, where losses occur. We want to
impose a mathematical curve (a distributional assumption) on asset returns so we can
estimate losses. The parametric approach uses parameters (i.e., a formula with parameters) to
make a distributional assumption but actual returns rarely conform to the distribution curve. A
parametric distribution plots a curve (e.g., the normal bell-shaped curve) that approximates a
range of outcomes but actual returns are not so well-behaved: they rarely cooperate.


Value at Risk (VaR) 2 asset, relative vs. absolute


Know how to compute two-asset portfolio variance & scale portfolio volatility to derive VaR:
Inputs
Trading days/year                          252
Initial portfolio value (W)                $100
VaR time horizon (h, days)                 10
VaR confidence interval                    95%
Asset A: volatility (per year)             10.0%
Asset A: expected return (per year)        12.0%
Asset A: portfolio weight (w)              50%
Asset B: volatility (per year)             20.0%
Asset B: expected return (per year)        25.0%
Asset B: portfolio weight (1-w)            50%
Correlation (A,B)                          0.30
Autocorrelation (h-1, h)                   0.25     If independent, = 0. Mean-reverting returns = negative

Outputs (annual)
Covariance (A,B)                           0.0060
Portfolio variance                         0.0155
Expected portfolio return (per year)       18.5%
Portfolio volatility (per year)            12.4%

Outputs (period = h days)
Expected periodic return (u)               0.73%
Std deviation (h), i.i.d.                  2.48%
Scaling factor                             15.78    Don't need to know this; used for AR(1)
Std deviation (h), autocorrelation         3.12%    Standard deviation if autocorrelation
Normal deviate (critical z value)          1.64
Expected future value                      100.73
Relative VaR, i.i.d.                       $4.08    Doesn't include the mean return
Absolute VaR, i.i.d.                       $3.35    Includes return; i.e., loss from zero
Relative VaR, AR(1)                        $5.12    The corresponding VaRs, if autocorrelation is incorporated
Absolute VaR, AR(1)                        $4.39    Note VaR is higher!
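The outputs above can be reproduced with a short script (a sketch following the spreadsheet's inputs; the AR(1) scaling factor uses one common formula, h + 2*sum of (h-i)*rho^i, which reproduces the 15.78 shown):

```python
import math

# Inputs (per the table above)
days_per_year, W, h = 252, 100.0, 10
vol_a, ret_a, w_a = 0.10, 0.12, 0.50
vol_b, ret_b, w_b = 0.20, 0.25, 0.50
rho, autocorr = 0.30, 0.25
z = 1.645                                   # normal deviate for 95% confidence

# Annual portfolio statistics
cov_ab = rho * vol_a * vol_b                                                  # 0.0060
port_var = (w_a * vol_a) ** 2 + (w_b * vol_b) ** 2 + 2 * w_a * w_b * cov_ab  # 0.0155
port_ret = w_a * ret_a + w_b * ret_b                                         # 18.5%
port_vol = math.sqrt(port_var)                                               # ~12.4%

# Scale to the h-day horizon
periodic_ret = port_ret * h / days_per_year                                  # ~0.73%
sigma_h_iid = port_vol * math.sqrt(h / days_per_year)                        # ~2.48%

# AR(1) scaling: h-day variance = daily variance * [h + 2 * sum_{i=1}^{h-1} (h-i)*rho^i]
scaling = h + 2 * sum((h - i) * autocorr ** i for i in range(1, h))          # ~15.78
sigma_daily = port_vol / math.sqrt(days_per_year)
sigma_h_ar1 = sigma_daily * math.sqrt(scaling)                               # ~3.12%

print(f"Relative VaR, i.i.d.: {z * sigma_h_iid * W:.2f}")                    # ~4.08
print(f"Absolute VaR, i.i.d.: {(z * sigma_h_iid - periodic_ret) * W:.2f}")   # ~3.35
print(f"Relative VaR, AR(1) : {z * sigma_h_ar1 * W:.2f}")                    # ~5.13
print(f"Absolute VaR, AR(1) : {(z * sigma_h_ar1 - periodic_ret) * W:.2f}")   # ~4.39
```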


Discuss how asset return distributions tend to deviate from the normal
distribution.
Compared to a normal (bell-shaped) distribution, actual asset returns tend to be:

Fat-tailed (a.k.a., heavy tailed): A fat-tailed distribution is characterized by having


more probability weight (observations) in its tails relative to the normal distribution.

Skewed: A skewed distribution refers, in this context of financial returns, to the observation that declines in asset prices are more severe than increases. This is in contrast to the symmetry that is built into the normal distribution.

Unstable: the parameters (e.g., mean, volatility) vary over time due to variability in
market conditions.

NORMAL RETURNS              ACTUAL FINANCIAL RETURNS
Symmetrical                 Skewed
Normal tails                Fat-tailed (leptokurtosis)
Stable                      Unstable (time-varying)

Interest rate distributions are not constant over time


Ten years of interest rate data were collected (1982-1993). The distribution plots the daily change
in the three-month treasury rate. The average change is approximately zero, but the probability
mass is greater at both tails. It is also greater at the mean; i.e., the actual mean occurs more
frequently than predicted by the normal distribution.

[Histogram: daily changes in the three-month Treasury rate versus the normal curve. The annotations identify the four moments: 1st moment = mean (location), 2nd moment = variance (scale), 3rd moment = skew, 4th moment = kurtosis. Actual returns are: 1. Skewed, 2. Fat-tailed (kurtosis > 3), 3. Unstable.]


Explain potential reasons for the existence of fat tails in a return distribution and discuss the implications fat tails have on analysis of return distributions.
A distribution is unconditional if tomorrow's distribution is the same as today's distribution. But
fat tails could be explained by a conditional distribution: a distribution that changes over time.
Two things can change in a normal distribution: mean and volatility. Therefore, we can explain
fat tails in two ways:

Conditional mean is time-varying; but this is unlikely given the assumption that
markets are efficient

Conditional volatility is time-varying; Allen says this is the more likely explanation!

For example, the normal distribution might say: -10% at the 95th percentile. If the tails are fat, the expected VaR loss is understated!
Explain how outliers can really be indications that the volatility varies with time.
We observe that actual financial returns tend to exhibit fat-tails. Jorion (like Allen et al) offers
two possible explanations:

The true distribution is stationary. Therefore, fat-tails reflect the true distribution but the
normal distribution is not appropriate

The true distribution changes over time (it is time-varying). In this case, outliers can in
reality reflect a time-varying volatility.


Distinguish between conditional and unconditional distributions.


An unconditional distribution is the same regardless of market or economic conditions; for
this reason, it is likely to be unrealistic.
A conditional distribution is not always the same: it differs depending (is conditional) on some economic, market, or other state. It is measured by parameters such as its conditional mean, conditional standard deviation (conditional volatility), conditional skew, and conditional kurtosis.

Discuss the implications regime switching has on quantifying volatility.


A typical example is a regime-switching volatility model: the regime (state) switches from low to high volatility, but is never in between. A distribution is regime-switching if it changes from high to low volatility (and back).
The problem: a risk manager may assume (and measure) an unconditional volatility but the
distribution is actually regime switching. In this case, the distribution is conditional (i.e., it
depends on conditions) and might be normal but regime-switching; e.g., volatility is 10% during a
low-volatility regime and 20% during a high-volatility regime but during both regimes, the
distribution may be normal. However, the risk manager may incorrectly assume a single 15%
unconditional volatility. But in this case, the unconditional volatility is likely to exhibit fat
tails because it does not account for the regime switching.


Explain the various approaches for estimating VaR.


Volatility versus Value at Risk (VaR)
Volatility is an input into our (parametric) value at risk (VaR):

\text{VaR}_{\$} = W \cdot z \cdot \sigma \qquad \text{VaR}_{\%} = z \cdot \sigma

Linda Allen's historical-based approaches
The common attribute to all the approaches within this class is their use of historical time series
data in order to determine the shape of the conditional distribution.

Parametric approach. The parametric approach imposes a specific distributional


assumption on conditional asset returns. A representative member of this class of models
is the conditional (log) normal case with time-varying volatility, where volatility is
estimated from recent past data.

Nonparametric approach. This approach uses historical data directly, without


imposing a specific set of distributional assumptions. Historical simulation is the
simplest and most prominent representative of this class of models.

Implied volatility based approach.


This approach uses derivative pricing models and current derivative prices in order to impute
an implied volatility without having to resort to historical data. The use of implied volatility


obtained from the Black-Scholes option pricing model as a predictor of future volatility is the
most prominent representative of this class of models.

Jorions Value at Risk (VaR) typology


Please note that Jorions taxonomy approaches from the perspective of local versus full
valuation. In that approach, local valuation tends to associate with parametric approaches:

Risk Measurement
Local valuation

Full valuation

Linear models

Nonlinear models

Historical
Simulation

Full Covariance
matrix

Gamma

Monte Carlo
Simulation

Factor Models

Convexity

Diagonal Models


Value at Risk (VaR)
  Parametric: delta normal
  Non-parametric: historical simulation, bootstrap, Monte Carlo
  Hybrid (semi-parametric): HS + EWMA
  EVT: peaks-over-threshold, POT (GPD); block maxima (GEV)


Volatility
  Implied volatility
  Equally weighted returns, or un-weighted (STDEV)
  More weight to recent returns: GARCH(1,1), EWMA, MDE (more weight to similar states!)


Historical approaches
An historical-based approach can be non-parametric, parametric, or hybrid (both). Non-parametric approaches use a historical dataset directly (historical simulation, HS, is the most common). Parametric approaches impose a specific distributional assumption (this includes historical standard deviation and exponential smoothing).

Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HISTORIC SIMULATION
Historical simulation is easy: we only need to determine the lookback window. The problem is
that, for small samples, the extreme percentiles (e.g., the worst one percent) are less precise.
Historical simulation effectively throws out useful information.

Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HISTORICAL STANDARD DEVIATION
Historical standard deviation is the simplest and most common way to estimate or predict future volatility. Given a history of an asset's continuously compounded rate of returns, we take a specific window of the K most recent returns.
This standard deviation is called a moving average (MA) by Jorion. The estimate requires a window of fixed length; e.g., 30 or 60 trading days. If we observe returns (r_t) over M days, the volatility estimate is constructed from a moving average (MA):

\sigma_t^2 = \frac{1}{M}\sum_{i=1}^{M} r_{t-i}^2

Each day, the forecast is updated by adding the most recent day and dropping the furthest day. In a simple moving average, all weights on past returns are equal and set to (1/M). Note that raw returns are used instead of returns around the mean (i.e., the expected mean is assumed to be zero). This is common over short time intervals, where it makes little difference to the volatility estimate.
For example, assume the previous four daily returns for a stock are 6% (n-1), 5% (n-2), 4% (n-3) and 3% (n-4). What is the current volatility estimate, applying the moving average, given that our short trailing window is only four days (m = 4)? If we square each return, the series is 0.0036, 0.0025, 0.0016 and 0.0009. If we sum this series of squared returns, we get 0.0086. Divide by 4 (since m = 4) and we get 0.00215. That's the moving average variance, such that the moving average volatility is about 4.64%.


The above example illustrates a key weakness of the moving average (MA): since all returns are weighted equally, the trend does not matter. In the example above, notice that volatility is trending down, but the MA does not reflect this trend in any way. We could reverse the order of the historical series and the MA estimation would produce the same result.
The moving average (MA) series is simple but has two drawbacks

The MA series ignores the order of the observations. Older observations may no
longer be relevant, but they receive the same weight.

The MA series has a so-called ghosting feature: data points are dropped arbitrarily due
to length of the window.

Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility
Including: GARCH APPROACH
Including: EXPONENTIAL SMOOTHING (EWMA)
Exponential smoothing (conditional parametric)
Modern methods place more weight on recent information. Both EWMA and GARCH place
more weight on recent information. Further, as EWMA is a special case of GARCH, both EWMA
and GARCH employ exponential smoothing.

GARCH (p, q) and in particular GARCH (1, 1)


GARCH (p, q) is a general autoregressive conditional heteroskedastic model. Key aspects
include:

Autoregressive (AR): tomorrow's variance (or volatility) is a regressed function of today's variance; it regresses on itself.

Conditional (C): tomorrow's variance depends on (is conditional on) the most recent variance. An unconditional variance would not depend on today's variance.

Heteroskedastic (H): variances are not constant; they fluctuate over time.

GARCH regresses on lagged or historical terms. The lagged terms are either variances or squared returns. The generic GARCH (p, q) model regresses on (p) squared returns and (q) variances. Therefore, GARCH (1, 1) lags or regresses on last period's squared return (i.e., just 1 return) and last period's variance (i.e., just 1 variance).
GARCH (1, 1) is given by the following equation:

\sigma_t^2 = a + b\,r_{t-1,t}^2 + c\,\sigma_{t-1}^2


The same GARCH (1, 1) formula can be given with Greek parameters. Hull writes the same GARCH equation as:

\sigma_n^2 = \gamma V_L + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2

The first term (\gamma V_L) is important because V_L is the long-run average variance. Therefore, (\gamma V_L) is a product: it is the weighted long-run average variance. The GARCH (1, 1) model solves for the conditional variance as a function of three variables (previous variance, previous squared return, and long-run variance):

h_t = \alpha_0 + \alpha_1\, r_{t-1}^2 + \beta\, h_{t-1}

where:
h_t (or \sigma_t^2) = conditional variance (i.e., what we are solving for)
a, \alpha_0 or \omega = the weighted long-run (average) variance, \gamma V_L
h_{t-1} (or \sigma_{t-1}^2) = previous variance
r_{t-1}^2 (or r_{t-1,t}^2) = previous squared return

Persistence is a feature embedded in the GARCH model.


In the above formulas, persistence is (b + c) or (\alpha_1 + \beta). Persistence refers to
how quickly (or slowly) the variance reverts or decays toward its long-run average.
High persistence equates to slow decay and slow regression toward the mean; low
persistence equates to rapid decay and quick reversion to the mean.
A persistence of 1.0 implies no mean reversion. A persistence of less than 1.0 implies reversion
to the mean, where a lower persistence implies greater reversion to the mean.
As above, the sum of the weights assigned to the lagged variance and lagged squared
return is persistence (b+c = persistence). A high persistence (greater than zero but less
than one) implies slow reversion to the mean.
But if the weights assigned to the lagged variance and lagged squared return are greater
than one, the model is non-stationary. If (b+c) is greater than 1 (if b+c > 1) the model is
non-stationary and, according to Hull, unstable. In which case, EWMA is preferred.
Linda Allen says about GARCH (1, 1):
GARCH is both compact (i.e., relatively simple) and remarkably accurate. GARCH models
predominate in scholarly research. Many variations of the GARCH model have been attempted,
but few have improved on the original.
The drawback of the GARCH model is its nonlinearity [sic]


For example: Solve for long-run variance in GARCH (1,1)


Consider the GARCH (1, 1) equation below:

\sigma_n^2 = 0.2 + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2

Assume that:

the alpha parameter = 0.2,

the beta parameter = 0.7, and

the omega parameter = 0.2 (the first term in the equation).

Note that omega is 0.2, but don't mistake omega (0.2) for the long-run variance! Omega is the product of gamma and the long-run variance. So, if alpha + beta = 0.9, then gamma must be 0.1. Given that omega is 0.2, we know that the long-run variance must be 2.0 (0.2 / 0.1 = 2.0).

EWMA
EWMA is a special case of GARCH (1,1) and GARCH(1,1) is a generalized case of EWMA. The
salient difference is that GARCH includes the additional term for mean reversion and EWMA
lacks a mean reversion. Here is how we get from GARCH (1,1) to EWMA:

GARCH(1,1): \sigma_t^2 = a + b\,r_{t-1,t}^2 + c\,\sigma_{t-1}^2

Then we let a = 0 and (b + c) = 1, such that the above equation simplifies to:

GARCH(1,1): \sigma_t^2 = b\,r_{t-1,t}^2 + (1-b)\,\sigma_{t-1}^2

This is now equivalent to the formula for the exponentially weighted moving average (EWMA):

EWMA: \sigma_t^2 = b\,r_{t-1,t}^2 + (1-b)\,\sigma_{t-1}^2, i.e., \sigma_t^2 = \lambda\,\sigma_{t-1}^2 + (1-\lambda)\,r_{t-1,t}^2
In EWMA, the lambda parameter now determines the decay: a lambda that is close to one
(high lambda) exhibits slow decay.


The RiskMetricsTM Approach


RiskMetrics is a branded form of the exponentially weighted moving average (EWMA) approach:

h_t = \lambda\,h_{t-1} + (1-\lambda)\,r_{t-1}^2
The optimal (theoretical) lambda varies by asset class, but the overall optimal parameter used
by RiskMetrics has been 0.94. In practice, RiskMetrics only uses one decay factor for all series:

0.94 for daily data

0.97 for monthly data (month defined as 25 trading days)

Technically, the daily and monthly models are inconsistent. However, they are both easy to use,
they approximate the behavior of actual data quite well, and they are robust to misspecification.
Note: GARCH (1, 1), EWMA and RiskMetrics are each parametric and recursive.

\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2

Recursive EWMA
EWMA is (technically) an infinite series, but the infinite series elegantly reduces to a recursive form:

\sigma_n^2 = (1-\lambda)\lambda^0 u_{n-1}^2 + (1-\lambda)\lambda^1 u_{n-2}^2 + (1-\lambda)\lambda^2 u_{n-3}^2 + \cdots

\sigma_n^2 = \lambda\,\sigma_{n-1}^2 + (1-\lambda)\,u_{n-1}^2

\sigma_n^2 = 0.94\,\sigma_{n-1}^2 + 0.06\,u_{n-1}^2 \quad \text{(RiskMetrics, daily)}


Advantages and Disadvantages of MA (i.e., STDEV) vs. GARCH

GARCH estimations can provide estimates that are more accurate than MA:

Jorion's moving average (MA) = Allen's STDEV        GARCH
Ghosting feature                                    More recent observations are assigned greater weights
Trend information is not incorporated               A term is added to incorporate reversion to the mean

Except Linda Allen warns: GARCH (1,1) needs more parameters and may pose greater MODEL RISK (chases a moving target) when forecasting out-of-sample.

Graphical summary of the parametric methods that assign more weight to recent returns (GARCH & EWMA)


Summary Tips:
GARCH (1, 1) is generalized RiskMetrics; and, conversely, RiskMetrics is a restricted case of GARCH (1,1) where a = 0 and (b + c) = 1. GARCH (1, 1) is given by:

\sigma_n^2 = \gamma V_L + \alpha\, u_{n-1}^2 + \beta\, \sigma_{n-1}^2

The three parameters are weights and therefore must sum to one:

\gamma + \alpha + \beta = 1

Be careful about the first term in the GARCH (1, 1) equation: omega (\omega) = gamma (\gamma) * (average long-run variance). If you are asked for the variance, you may need to divide out the weight in order to compute the average variance.
Determine when and whether a GARCH or EWMA model should be used in volatility
estimation
In practice, variance rates tend to be mean reverting; therefore, the GARCH (1, 1) model is theoretically superior to (more appealing than) the EWMA model. Remember, that's the big
difference: GARCH adds the parameter that weights the long-run average and therefore it
incorporates mean reversion.
GARCH (1, 1) is preferred unless the first parameter is negative (which is implied if alpha
+ beta > 1). In this case, GARCH (1,1) is unstable and EWMA is preferred.

Explain how the GARCH estimations can provide forecasts that are more accurate.
The moving average computes variance based on a trailing window of observations; e.g., the
previous ten days, the previous 100 days.
There are two problems with moving average (MA):

Ghosting feature: volatility shocks (sudden increases) are abruptly incorporated into the
MA metric and then, when the trailing window passes, they are abruptly dropped from
the calculation. Due to this the MA metric will shift in relation to the chosen window
length

Trend information is not incorporated

GARCH estimates improve on these weaknesses in two ways:

More recent observations are assigned greater weights. This overcomes ghosting
because a volatility shock will immediately impact the estimate but its influence will fade
gradually as time passes

A term is added to incorporate reversion to the mean


Explain how persistence is related to the reversion to the mean.


Given the GARCH (1, 1) equation:

h_t = \alpha_0 + \alpha_1\, r_{t-1}^2 + \beta\, h_{t-1}

Persistence is given by:

\text{persistence} = \alpha_1 + \beta

GARCH (1, 1) is unstable if the persistence > 1. A persistence of 1.0 indicates no mean reversion.
A low persistence (e.g., 0.6) indicates rapid decay and high reversion to the mean.
GARCH (1, 1) has three weights assigned to three factors. Persistence is the sum of the
weights assigned to both the lagged variance and lagged squared return. The other
weight is assigned to the long-run variance.
If P = persistence and G = weight assigned to long-run variance, then P+G = 1.
Therefore, if P (persistence) is high, then G (mean reversion) is low: the persistent series
is not strongly mean reverting; it exhibits slow decay toward the mean.
If P is low, then G must be high: the impersistent series does strongly mean revert; it
exhibits rapid decay toward the mean.
The average, unconditional variance in the GARCH (1, 1) model is given by:

V_L = \frac{\alpha_0}{1 - \alpha_1 - \beta}


Compare and contrast the use of historic simulation, multivariate density estimation, and hybrid methods for volatility forecasting.

Nonparametric Volatility Forecasting

Historical Simulation (HS): sort the returns and look up the worst; if n = 100, for the 95th percentile look between the bottom 5th and 6th returns.

Multivariate density estimation (MDE): like ARCH(m), but the weights are based on a function of the [current vs. historical] state; if the state (e.g., n-50 days ago) resembles the state today, that squared return receives a heavy weight.

Hybrid (HS & EWMA): sort the returns (like HS), but weight them, with greater weight to recent returns (like EWMA).

                            Advantages                                         Disadvantages
Historical simulation       Easiest to implement (simple, convenient)          Uses data inefficiently (much data is not used)
Multivariate density        Very flexible: weights are a function of the       Onerous model: weighting scheme; conditioning
estimation                  state (e.g., economic context such as interest     variables; number of observations. Data intensive.
                            rates), not constant
Hybrid approach             Unlike the HS approach, better incorporates        Requires model assumptions; e.g., number of
                            more recent information                            observations


Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: MULTIVARIATE DENSITY ESTIMATION

Multivariate Density Estimation (MDE)


The key feature of multivariate density estimation is that the weights (assigned to historical squared returns) are not a constant function of time. Rather, the current state (as parameterized by a state vector) is compared to the historical state: the more similar the states (current versus historical period), the greater the assigned weight. The relative weighting is determined by the kernel function:

\sigma_t^2 = \sum_{i=1}^{m} \omega(x_{t-i})\, u_{t-i}^2

where \omega(\cdot) is the kernel function and x_{t-i} is the vector describing the economic state at time t-i. Instead of weighting the squared returns by time, MDE weights them by their proximity to the current state.

Compare EWMA to MDE:

Both assign weights to historical squared returns (squared returns = variance


approximation);

Where EWMA assigns the weight as an exponentially declining function of time (i.e., the
nearer to today, the greater the weight), MDE assigns the weight based on the nature of
the historical period (i.e., the more similar to the historical state, the greater the weight)
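A toy sketch of the idea under assumed choices (a Gaussian kernel on a single state variable, e.g. the level of interest rates); everything here is hypothetical and is only meant to show that the weights depend on state similarity rather than on time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250
returns = rng.normal(0, 0.01, size=n)          # hypothetical daily returns
rates = rng.uniform(0.02, 0.06, size=n)        # hypothetical state variable (interest rate level)
current_rate = 0.05                            # today's state

# Gaussian kernel: weight each past squared return by how similar its state is to today's state
bandwidth = 0.005
kernel = np.exp(-0.5 * ((rates - current_rate) / bandwidth) ** 2)
weights = kernel / kernel.sum()                # weights sum to one

mde_variance = np.sum(weights * returns ** 2)
print(f"MDE conditional volatility ~ {np.sqrt(mde_variance):.4%}")
```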


Compare, contrast and calculate parametric and non-parametric approaches for estimating conditional volatility, including: HYBRID METHODS
The hybrid approach is a variation on historical simulation (HS). Consider the ten (10) illustrative returns below. In simple HS, the returns are sorted from best-to-worst (or worst-to-best) and the quantile determines the VaR. Simple HS amounts to giving equal weight to each return (last column). Given 10 returns, the worst return (-31.8%) earns a 10% weight under simple HS.

Sorted     Periods    Hybrid     Cum'l Hybrid    Compare
Return     Ago        Weight     Weight          to HS
-31.8%      7          8.16%       8.16%          10%
-28.8%      9          6.61%      14.77%          20%
-25.5%      6          9.07%      23.83%          30%
-22.3%     10          5.95%      29.78%          40%
  5.7%      1         15.35%      45.14%          50%
  6.1%      2         13.82%      58.95%          60%
  6.5%      3         12.44%      71.39%          70%
  6.9%      4         11.19%      82.58%          80%
 12.1%      5         10.07%      92.66%          90%
 60.6%      8          7.34%     100.00%         100%

However, under the hybrid approach, the EWMA weighting scheme is instead applied. Since the
worst return happened seven (7) periods ago, the weight applied is given by the following,
assuming a lambda of 0.9 (90%):
Weight (7 periods prior) = 90%^(7-1)*(1-90%)/(1-90%^10) = 8.16%
Note that because the return happened further in the past, the weight is below the 10% that is
assigned under simple HS.
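The hybrid weights in the table can be reproduced with a few lines (a sketch; lambda = 0.90 and K = 10 as in the example):

```python
lam, K = 0.90, 10

def hybrid_weight(periods_ago, lam=lam, K=K):
    # EWMA-style weight, normalized so the K weights sum to one
    return lam ** (periods_ago - 1) * (1 - lam) / (1 - lam ** K)

sorted_returns = [-0.318, -0.288, -0.255, -0.223, 0.057, 0.061, 0.065, 0.069, 0.121, 0.606]
periods_ago    = [7, 9, 6, 10, 1, 2, 3, 4, 5, 8]

cumulative = 0.0
for r, k in zip(sorted_returns, periods_ago):
    w = hybrid_weight(k)
    cumulative += w
    print(f"{r:+7.1%}  {k:2d} periods ago  weight {w:6.2%}  cumulative {cumulative:7.2%}")
```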
[Chart: cumulative hybrid weights versus cumulative (equal) HS weights across the ten sorted returns.]

The hybrid method using Google (GOOG) stock prices and returns:


Google (GOOG)                          Sorted worst returns                  Cumulative weight
Date         Close     Return          Period   Return    Days ago           HS        Weight    Hybrid
6/24/2009    409.29     0.89%          1        -5.90%    76                 1.0%      0.2%      0.2%
6/23/2009    405.68    -0.41%          2        -5.50%    94                 2.0%      0.1%      0.3%
6/22/2009    407.35    -3.08%          3        -4.85%    86                 3.0%      0.1%      0.4%
6/19/2009    420.09     1.45%          4        -4.29%    90                 4.0%      0.1%      0.5%
6/18/2009    414.06    -0.27%          5        -4.25%    78                 5.0%      0.2%      0.7%
6/17/2009    415.16    -0.20%          6        -3.35%    47                 6.0%      0.6%      1.3%
6/16/2009    416.00    -0.18%          7        -3.26%    81                 7.0%      0.2%      1.4%
6/15/2009    416.77    -1.92%          8        -3.08%    3                  8.0%      3.7%      5.1%
6/12/2009    424.84    -0.97%          9        -3.01%    88                 9.0%      0.1%      5.2%
6/11/2009    429.00    -0.84%          10       -2.64%    55                 10.0%     0.4%      5.7%

In this case:

Sample includes 100 returns (n=100)

We are solving for the 95th percentile (95%) value at risk (VaR)

For the hybrid approach, lambda = 0.96

Sorted returns are shown in the purple column

The HS 95% VaR = ~ 4.25% because it is the fifth-worst return (actually, the quantile can
be determined in more than one way)

However, the hybrid approach returns a 95% VaR of 3.08% because the worst returns
that inform the dataset tend to be further in the past (i.e., days ago = 76, 94, 86, 90).
Due to this, the individual weights are generally less than 1%.

Explain the process of return aggregation in the context of volatility forecasting methods.
The question is: how do we compute VAR for a portfolio which consists of several positions.
The first approach is the variance-covariance approach: if we make (parametric) assumptions
about the covariances between each position, then we extend the parametric approach to the
entire portfolio. The problem with this approach is that correlations tend to increase (or change)
during stressful market events; portfolio VAR may underestimate VAR in such circumstances.
The second approach is to extend the historical simulation (HS) approach to the portfolio:
apply today's weights to yesterday's returns. In other words, what would have happened if we
held this portfolio in the past?


The third approach is to combine these two approaches: aggregate the simulated returns and
then apply a parametric (normal) distributional assumption to the aggregated portfolio.
The first approach (variance-covariance) requires the dubious assumption of normality for the positions inside the portfolio. The text says the third approach is gaining in popularity and is justified by the law of large numbers: even if the components (positions) in the portfolio are not normally distributed, the aggregated portfolio will converge toward normality.

Explain how implied volatility can be used to predict future volatility


To impute volatility is to derive volatility (to reverse-engineer it, really) from the observed market price of the asset. A typical example uses the Black-Scholes option pricing model to compute the implied volatility of a stock option; i.e., option traders will average at-the-money implied volatility from traded puts and calls.

The advantages of implied volatility are:

Truly predictive (reflects the market's forward-looking consensus)

Does not require, nor is restrained by, historical distribution patterns


The shortcomings (or disadvantages) of implied volatility include:

Model-dependent

Options on the same underlying asset may trade at different implied volatilities; e.g., the volatility smile/smirk

Stochastic volatility; i.e., the model assumes constant volatility, but volatility tends to change over time

Limited availability, because it requires a traded (market-set) price


Explain how to use option prices to derive forecasts of volatilities


This requires that a market mechanism (e.g., an exchange) can provide a market price for the
option. If a market price can be observed, then instead of solving for the price of an option, we
use an option pricing model (OPM) to reveal the implied (implicit) volatility. We solve (goal
seek) for the volatility that produces a model price equal to the market price:

c_{\text{market}} = f(\sigma_{\text{ISD}})
Where the implied standard deviation (ISD) is the volatility input into an option pricing model
(OPM). Similarly, implied correlations can also be recovered (reverse-engineered) from
options on multiple assets. According to Jorion, ISD is a superior approach to volatility estimation. He says, "Whenever possible, VAR should use implied parameters" [i.e., ISD or market-implied volatility].
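A sketch of the goal-seek idea (hypothetical option inputs): price a European call with Black-Scholes and solve for the volatility that equates the model price to an observed market price.

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm.cdf(d1) - K * math.exp(-r * T) * norm.cdf(d2)

# Hypothetical inputs: spot, strike, maturity (years), risk-free rate, observed market price
S, K, T, r, market_price = 100.0, 100.0, 0.5, 0.02, 6.50

# Goal seek: find the sigma such that the model price equals the market price
implied_vol = brentq(lambda sigma: bs_call(S, K, T, r, sigma) - market_price, 1e-4, 5.0)
print(f"implied volatility ~ {implied_vol:.2%}")
```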

Explain the implications of mean reversion in returns and return volatility


The key idea refers to the application of the square root rule (S.R.R. says that variance scales
directly with time such that the volatility scales directly with the square root of time). The
square root rule, while mathematically convenient, doesn't really work in practice because it
requires that normally distributed returns are independent and identically distributed (i.i.d.).
What I mean is, we use it on the exam, but in practice, when applying the square root rule to
scaling delta normal VaR/volatility, we should be sensitive to the likely error introduced.
Allen gives two scenarios that each illustrate violations in the use of the square root rule to
scale volatility over time:

If mean reversion ...        Then the square root rule ...
In returns                   Overstates the long-run volatility
In return volatility         If current volatility > long-run volatility, overstates;
                             if current volatility < long-run volatility, understates

For FRM purposes, three definitions of mean reversion are used:

Mean reversion in the asset dynamics. The price/return tends towards a long-run
level; e.g., interest rate reverts to 5%, equity log return reverts to +8%

Mean reversion in variance. Variance reverts toward a long-run level; e.g., volatility
reverts to a long-run average of 20%. We can also refer to this as negative
autocorrelation, but it's a little trickier. Negative autocorrelation refers to the fact that a
high variance is likely to be followed in time by a low variance. The reason it's tricky is
due to short/long timeframes: the current volatility may be high relative to the long run
mean, but it may be "sticky" or cluster in the short-term (positive autocorrelation) yet, in
the longer term it may revert to the long run mean. So, there can be a mix of (short-term)
positive and negative autocorrelation on the way being pulled toward the long run mean.


Autoregression in the time series. The current estimate (variance) is informed by (is a function of) the previous value; e.g., in GARCH(1,1) and the exponentially weighted moving average (EWMA), the variance is a function of the previous variance.

Square root rule


The simplest approach to extending the horizon is to use the square root rule

\sigma(r_{t,t+J}) = \sigma(r_{t,t+1})\,\sqrt{J}

J-period VaR = \sqrt{J} \times \text{1-period VaR}

For example, if the 1-period VAR is $10, then the 2-period VAR is $14.14 ($10 x square root of 2)
and the 5-period VAR is $22.36 ($10 x square root of 5).
The square-root-rule: under the two assumptions below, VaR scales with the square root
of time. Extend one-period VaR to J-period VAR by multiplying by the square root of J.
The square root rule (i.e., variance is linear with time) only applies under restrictive i.i.d. assumptions.
The square-root rule for extending the time horizon requires two key assumptions:

Random-walk (acceptable)

Constant volatility (unlikely)
