Risk Measures

The bank assigns to every customer a default probability (DP), a loss fraction called the loss given default (LGD), which describes the fraction of the loan’s exposure expected to be lost in case of default, and the exposure at default (EAD), i.e., the amount subject to be lost in the considered time period. The loss of any obligor is then defined by a loss variable

L˜ = EAD × LGD × L   with   L = 1_D,   P(D) = DP,     (1.1)

where D denotes the event that the obligor defaults in a certain period of time (most often one year), and P(D) denotes the probability of D.

Although we will not go too much into technical details, we should mention here that underlying our
model is some probability space (Ω, F, P), consisting of a sample space Ω, a σ-algebra F, and a probability
measure P. The elements of F are the measurable events of the model, and intuitively it makes sense to
claim that the event of default should be measurable.

Moreover, it is common to identify F with the information available, and the information whether an obligor defaults or survives should be included in the set of measurable events.

Now, in this setting it is very natural to define the expected loss (EL) of any customer as the expectation of its corresponding loss variable L˜, namely

EL = E[L˜] = EAD × LGD × P(D) = EAD × LGD × DP,     (1.2)

because the expectation of any Bernoulli random variable, like 1_D, is its event probability.
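
As a quick illustration, the following sketch computes the EL of a single loan under representation (1.2); the function name and the parameter values are purely illustrative.

def expected_loss(ead: float, lgd: float, dp: float) -> float:
    # EL = EAD x LGD x DP for a single obligor, as in (1.2)
    return ead * lgd * dp

# A 1,000,000 exposure with a 45% loss given default and a 50 bp default
# probability gives an EL of 2,250.
el = expected_loss(ead=1_000_000, lgd=0.45, dp=0.005)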

Assumptions:

For obtaining representation (1.2) of the EL, we need some additional assumptions on the constituents
of Formula (1.1), for example, the assumption that EAD and LGD are constant values. This is not
necessarily the case under all circumstances. There are various situations in which, for example, the EAD
has to be modeled as a random variable due to uncertainties in amortization, usage, and other drivers of
EAD up to the chosen planning horizon.

In such cases the EL is still given by Equation (1.2) if one can assume that the exposure, the loss given
default, and the default event D are independent and EAD and LGD are the expectations of some
underlying random variables. But even the independence assumption is questionable and in general
very much simplifying.

Altogether one can say that (1.2) is the simplest representation formula for the expected loss, and
that the more simplifying assumptions are dropped, the further one moves away from closed and easy
formulas like (1.2).

However, for now we should not be bothered about the independence assumption on which (1.2) is
based: The basic concept of expected loss is the same, no matter whether the constituents of formula (1.1) are
independent or not.

Equation (1.2) is just a convenient way to write the EL in the first case.
Our convention from now on is that the EAD is always a deterministic (i.e., nonrandom) quantity,
whereas the severity (SEV) of loss in case of default will be considered as a random variable with
expectation given by the LGD of the respective facility. For reasons of simplicity we assume in this
chapter that the severity is independent of the variable L in (1.1).

The Default Probability


The task of assigning a default probability to every customer in the bank’s credit portfolio is far from
being easy. There are essentially two approaches to default probabilities:

1. Calibration of default probabilities from market data.


The most famous representative of this type of default probabilities is the concept of Expected
Default Frequencies (EDF) from KMV Corporation. We will describe the KMV-Model in Section
1.2.3 and in Chapter 3.
Another method for calibrating default probabilities from market data is based on credit spreads
of traded products bearing credit risk, e.g., corporate bonds and credit derivatives (for example,
credit default swaps; see the chapter on credit derivatives).

2. Calibration of default probabilities from ratings.


In this approach, default probabilities are associated with ratings, and ratings are assigned to
customers either by external rating agencies like Moody’s Investors Service, Standard & Poor’s
(S&P), or Fitch, or by bank-internal rating methodologies. Because ratings will not be discussed
in detail in this book, we will only briefly explain some basics about ratings. An excellent
treatment of this topic can be found in a survey paper by Crouhy et al.

Ratings
Basically, ratings describe the creditworthiness of customers. Both quantitative and qualitative
information is used to evaluate a client. In practice, the rating procedure is often based more on the
judgement and experience of the rating analyst than on pure mathematical procedures with strictly
defined outcomes. It turns out that in the US and Canada, most issuers of public debt are rated by at
least two of the three main rating agencies Moody’s, S&P, and Fitch.

Their reports on corporate bond defaults are publicly available, either by asking at their local offices for
the respective reports or conveniently via web access; see www.moodys.com,
www.standardandpoors.com, www.fitchratings.com.

The natural candidates for assigning a rating to a customer are the credit analysts of the bank. Hereby
they have to consider many different drivers of the considered firm’s economic future:
• Future earnings and cashflows,
• debt, short- and long-term liabilities, and financial obligations,
• capital structure (e.g., leverage),
• liquidity of the firm’s assets,
• situation (e.g., political, social, etc.) of the firm’s home country,
• situation of the market (e.g., industry), in which the company has its main activities,
• management quality, company structure, etc.
Calibration of Default Probabilities to Ratings
The process of assigning a default probability to a rating is called a calibration. In this paragraph we will
demonstrate how such a calibration works. The end product of a calibration of default probabilities to
ratings is a mapping

Rating → DP, e.g., {AAA, AA, ..., C} → [0, 1], R → DP(R),


such that to every rating R a certain default probability DP(R) is assigned.

[Table on page 17 of the source: historic default frequencies per rating class, not reproduced here]

Now, an important observation is that for the best ratings no defaults at all have been observed. This is not
as surprising as it looks at first sight: For example, rating class Aaa is often calibrated with a default
probability of 2 bps (“bp” stands for “basis point” and means 0.01%), essentially meaning that one expects
an Aaa-default on average twice in 10,000 years.

Calibration based on the table on page 17


STEPS

Step 1:

Calculate the mean m(R) and the standard deviation of the historic default frequencies observed for each rating class R.

Step 2:

Next, we plot the mean values m(R) in a coordinate system, where the x-axis refers to the rating
classes (here numbered from 1 (Aaa) to 16 (B3)). One can see in the chart in Figure 1.1 that on a
logarithmic scale the mean default frequencies m(R) can be fitted by a regression line. Here we should
add the comment that there is strong evidence from various empirical default studies that default
frequencies grow exponentially with decreasing creditworthiness. For this reason we have chosen an
exponential fit (linear on a logarithmic scale). Using standard regression theory, or simply any
software providing basic statistical functions, one easily obtains the following exponential function
fitting our data:

DP(x) = 3 × 10^(-5) · e^(0.5075 x)     (x = 1, ..., 16).

Step 3:

As a last step, we use our regression equation for the estimation of default probabilities DP(x) assigned
to rating classes x ranging from 1 to 16. Figure 1.1 shows our result, which we now call a calibration of
default probabilities to Moody’s ratings. Note that based on our regression even the best rating Aaa has
a small but positive default probability. Moreover, our hope is that our regression analysis has smoothed
out sampling errors from the historically observed data.
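
To make the three steps concrete, here is a small Python sketch of how such a calibration could be reproduced. It is illustrative only: the fitting function expects the historic mean default frequencies m(R) from the table (not reproduced here) as input, the rating labels assume Moody’s standard 16-class scale from Aaa to B3, and the coefficients 3 × 10^(-5) and 0.5075 are the ones quoted above.

import numpy as np

# Moody's rating classes, numbered 1 (Aaa) to 16 (B3) as in Step 2.
RATINGS = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3", "Baa1",
           "Baa2", "Baa3", "Ba1", "Ba2", "Ba3", "B1", "B2", "B3"]

def exponential_fit(mean_freqs):
    # Fit DP(x) = a * exp(b * x) by linear regression on a log scale.
    # mean_freqs holds the historic mean default frequencies m(R), ordered
    # from best (x = 1) to worst (x = 16); classes with no observed
    # defaults are excluded, since log(0) is undefined.
    x = np.arange(1, len(mean_freqs) + 1, dtype=float)
    m = np.asarray(mean_freqs, dtype=float)
    mask = m > 0
    b, log_a = np.polyfit(x[mask], np.log(m[mask]), deg=1)
    return np.exp(log_a), b

def calibrated_dp(x, a=3e-5, b=0.5075):
    # Step 3: evaluate the calibrated curve DP(x) = 3e-5 * exp(0.5075 * x).
    return a * np.exp(b * x)

# Mapping R -> DP(R): roughly 0.5 bp for Aaa and roughly 10% for B3.
dp_map = {rating: calibrated_dp(i + 1) for i, rating in enumerate(RATINGS)}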

The Exposure at Default


The EAD is the quantity in Equation (1.2) specifying the exposure the bank has to its borrower. In
general, the exposure consists of two major parts, the outstandings and the commitments. The
outstandings refer to the portion of the exposure already drawn by the obligor.

The commitments can be divided into two portions, undrawn and drawn, in the time before default. The
total amount of commitments is the exposure the bank has promised to lend to the obligor at her or his
request. Historical default experience shows that obligors tend to draw on committed lines of credit in
times of financial distress. Therefore, the commitment is also subject to loss in case of the obligor’s
default, but only the drawn (prior to default) amount of the commitments will actually contribute to the
loss on the loan. The EAD is accordingly written as

EAD = OUTST + γ × COMM,

where OUTST denotes the outstandings and COMM the commitments of the loan, and γ is the expected
portion of the commitments likely to be drawn prior to default.

More precisely, γ is the expectation of the random variable capturing the uncertain part of the EAD,
namely the utilization of the undrawn part of the commitments.

In case of covenants allowing the bank to close committed lines triggered by some early default
indication, it is really a question of timing whether the bank picks up such indications early enough to
react before the customer has drawn on her or his committed lines.

For off-balance sheet transactions there are two approaches: For the foundation approach the
committee proposes to define the EAD on commitments and revolving credits as 75% of the off-balance
sheet amount of the exposure. For example, for a committed line of one billion Euro with current
outstandings of 600 million, the EAD would be equal to 600 + 75% × 400 = 900 million Euro.
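
As a worked illustration of the representation above, the following sketch reproduces the foundation-approach example, applying the 75% factor to the undrawn part of the line; the function name and the split into drawn and undrawn portions are assumptions made for illustration.

# Sketch of EAD = OUTST + gamma * COMM applied to the undrawn part of a
# committed line, as in the foundation-approach example above; names and
# the drawn/undrawn split are illustrative assumptions.
def exposure_at_default(outstandings: float, undrawn_commitments: float,
                        gamma: float = 0.75) -> float:
    return outstandings + gamma * undrawn_commitments

# Committed line of 1,000 million Euro with 600 million already drawn:
# EAD = 600 + 75% x 400 = 900 million Euro.
ead = exposure_at_default(outstandings=600.0, undrawn_commitments=400.0)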

The Loss Given Default


The LGD of a transaction is more or less determined by “1 minus recovery rate”, i.e., the LGD quantifies
the portion of loss the bank will really suffer in case of default. The estimation of such loss quotes is far
from being straightforward, because recovery rates depend on many driving factors, for example on the
quality of collateral (securities, mortgages, guarantees, etc.) and on the seniority of the bank’s claim on
the borrower’s assets. This is the reason behind our convention to consider the loss given default as a
random variable describing the severity of the loss of a facility type in case of default. The notion LGD
then refers to the expectation of the severity.
A bank-external source for recovery data comes from the rating agencies. For example Moody’s [95]
provides recovery values of defaulted bonds, hereby distinguishing between different seniorities.
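
A minimal sketch of the “1 minus recovery rate” relation follows; the 70% recovery value is purely illustrative.

# LGD as "1 minus recovery rate"; the severity is then a random variable
# whose expectation equals this LGD. The 70% recovery is illustrative only.
def loss_given_default(recovery_rate: float) -> float:
    return 1.0 - recovery_rate

lgd = loss_given_default(0.70)  # a claim recovering 70% has an LGD of 30%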

Unfortunately many banks do not have good internal data for estimating recovery rates. In fact,
although LGD is a key driver of the EL, there has been, in comparison with other risk drivers like the DP,
little progress made in moving towards a sophisticated calibration.

However, one can expect that in a few years LGD databases will have significantly improved, such that
more accurate estimates of the LGD for certain banking products can be made.

Unexpected Loss
At the beginning of this chapter we introduced the EL of a transaction as an insurance or loss reserve in
order to cover losses the bank expects from historical default experience. But holding capital as a
cushion against expected losses is not enough. In fact, the bank should in addition to the expected loss
reserve also save money for covering unexpected losses exceeding the average experienced losses from
past history.
As a measure of the magnitude of the deviation of losses from the EL, the standard deviation of the loss
variable L˜ as defined in (1.1) is a natural choice. For obvious reasons, this quantity is called the
Unexpected Loss (UL), defined by

UL = √(V[L˜]) = √(E[(L˜ − EL)²]).

Proposition. Under the assumption that the severity SEV and the default event D are uncorrelated, the
unexpected loss of a loan is given by

UL = EAD × √(DP × VAR[SEV] + DP × (1 − DP) × LGD²).
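
A short sketch of the proposition’s formula, assuming a deterministic EAD and a severity with mean LGD and variance VAR[SEV]; all parameter values are illustrative.

import math

# UL for a single loan per the proposition above; the numbers are
# illustrative, and the EL uses representation (1.2).
def unexpected_loss(ead: float, lgd: float, dp: float, var_sev: float) -> float:
    # UL = EAD * sqrt(DP * VAR[SEV] + DP * (1 - DP) * LGD^2)
    return ead * math.sqrt(dp * var_sev + dp * (1.0 - dp) * lgd ** 2)

# EAD = 1,000,000, LGD = 45%, DP = 50 bp, VAR[SEV] = 0.04:
# EL = 2,250 (see the earlier sketch), UL is roughly 34,700.
ul = unexpected_loss(1_000_000, 0.45, 0.005, 0.04)
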
An Overview of Today’s Industry Models
In the last five years, several industry models for measuring credit portfolio risk have been developed.
Besides the main commercial models we find in large international banks various so-called internal
models, which in most cases are more or less inspired by the well-known commercial products. For most
of the industry models it is easy to find some kind of technical documentation describing the
mathematical framework of the model and giving some idea about the underlying data and the
calibration of the model to the data. An exception is KMV’s Portfolio Manager™, where most of the
documentation is proprietary or confidential.
