
Using Best Practices to Determine a Best Reserve Estimate

Paul J. Struzzieri, FCAS
Paul R. Hussian, FCAS

Milliman & Robertson, Inc.
Two Pennsylvania Plaza
Suite 1552
New York, NY 10121
(212) 279-7166

Using Best Practices to Determine a Best Reserve Estimate


ABSTRACT

Currently, many actuaries produce a range of reasonable reserve estimates for IBNR loss and loss adjustment expense. NAIC Issue Paper No. 55 would effectively eliminate the possibility of booking any amount except management's "best estimate" within this range. The term "best estimate" has not been defined in either the issue paper or the actuarial literature. We propose to define the best estimate by describing a set of best practices -- many already found in the actuarial literature -- that a reserving actuary should follow, while minimizing the number of arbitrary judgments. Central to this paper is the recently introduced Generalized Cape Cod method. Many of these best practices have been shown to lead to minimum bias results. The best estimate, therefore, will be the outcome of following the framework contained in this paper.

Keywords:


Reserving

IBNR

Generalized Cape Cod

Chain Ladder

Bornhuetter-Ferguson

INTRODUCTION AND OVERVIEW

In Issue Paper No. 55 [10], the NAIC addresses the issue of recording a point estimate of unpaid loss and loss adjustment expense (LAE). The key element of this issue paper is the requirement that company management book its best estimate of the loss and LAE reserves. Presumably, management will want to consider basing its best estimate on its actuary's best estimate.

Currently, actuaries produce a range of reasonable reserve estimates. As long as management records loss and LAE reserves within this range, most actuaries would not object. Issue Paper No. 55 would effectively eliminate the possibility of booking any amount except management's "best estimate" within this range. In the instances where no point within the range is more probable than any other point, the midpoint of the range would be accrued.

The term "best estimate" has not been defined in either the issue paper or the actuarial literature. The Casualty Actuarial Society Statement of Principles Regarding Property and Casualty Loss and Loss Adjustment Expense Reserves refers to a "most appropriate reserve" within a range of actuarially sound estimates. This estimate is dependent on (1) the relative likelihood of estimates within the range and (2) the financial reporting context in which the reserve will be presented.

We propose to define the best estimate by describing a set of best practices that a reserving actuary should follow. A best estimate would result from incorporating certain best practices already found in the actuarial literature and minimizing the number of arbitrary judgments that the actuary might otherwise make. Many of these best practices have been shown to lead to minimum bias results. The use of these techniques, then, would lead to more likely, or more probable, estimates.

The best estimate, therefore, will be the outcome of following the framework contained in
this paper, rather than selecting a best estimate from a range of results. Since there will
always be an element of judgment within this framework, a range can then be determined
by varying some of the underlying judgments and assumptions.

We hope this paper will promote dialogue among property/casualty actuaries in which
best practices can be defined and refined periodically.

DESCRIPTION OF RESERVING ENVIRONMENT

The reserving practices described below are not intended to apply to all situations. For example, we do not expect that all of these best practices will apply to start-up insurance companies or to situations where reserves are being analyzed using publicly available information (e.g. from Schedule P). These practices do apply to the scenario where (1) the data are within the control of the insurance company, (2) there is sufficient consistency within the data, and (3) there is sufficient history within the data, i.e. there is enough information about development in the tail. Best practices for tail factors and ULAE reserves are beyond the scope of this paper. The situation where the data are not consistent is dealt with separately in the Appendix to this paper.

In this environment, the following data is available for each line of business or
homogeneous business segment:

1. paid and incurred loss development triangles, gross and net of reinsurance;
2. paid and incurred allocated loss adjustment expense (ALAE) development
triangles;
3. triangles of salvage and subrogation (S&S) or other recoveries;
4. claim count triangles, preferably number reported, number open, number
closed with indemnity payment (CWIP), and number closed without payment
(CWOP);
5. several exposure bases, including number of policies issued, premiums earned
and ratemaking exposure units;
6. historical profiles of policies, exposures and premiums by limit of liability,
deductible, classification, state or territory and other categories that have the
effect of changing the risk profile of the book;
7. information on changes in the rate of settlement of claims, case reserving
practices, underwriting, loss control or risk management (if any);

8. summaries of ceded reinsurance programs, showing historical excess of loss and quota share retentions, company participation in excess layers, and the like.

When beginning a loss reserve study, the actuary has a wide array of tools and methods
from which to choose. One school of thought says to use several methods, and average
all of the methods to get to the selected result. However, some of these methods may be
more biased or more variable than others. A better practice would be to exclude these
methods from the average. The selected result would then be less biased and/or have less
variance.

By the time that reserving actuaries have developed their best estimate, they should have
already addressed several questions, including the following:

When should loss development methods be used? When should exposure-based methods be used?

What is the best way to weigh together different methods?

What is the best way to determine the expected (a priori) ultimate losses?

What items should be considered when selecting the best exposure base?

What exposure base is generally best for estimating ultimate losses?

What is the best method for determining ALAE reserves?

What is the best method for determining S&S reserves?

What are some of the best practices for picking development patterns?

What are some of the best practices for selecting ultimate losses?

What is a practical method for determining a reasonable range around the best
estimate?

BLENDING LOSS DEVELOPMENT AND EXPECTED LOSS ESTIMATES

Before discussing our best practices, some definitions are in order. An "exposure base" is defined as a measure that is known or accurately estimated in advance and that varies directly with the quantity being estimated. Payroll, sales, and car-years are well known examples of exposure bases used for ratemaking purposes. They are equally useful for reserving purposes. A "leading indicator" is a measure that is not known in advance, but is directly correlated with the quantity being measured. Leading indicators alert the actuary to the estimated quantity's possible realized value earlier than if the actuary relied on observed experience alone. For reserving purposes, leading indicators can also serve as exposure bases for losses. The only difference is that the ultimate value of the leading indicator also needs to be estimated. For example, reported claim counts are a leading indicator of ultimate losses and, therefore, could be considered an exposure base for ultimate losses. This is discussed further in the BEST EXPOSURE BASE section.

The terms "a priori" and "expected" are used interchangeably to refer to any estimate of ultimate loss (or other quantity being projected) that is based on an exposure measure. For example, multiplying the calendar year premium by the actuary's expected loss ratio results in an exposure-based estimate.

This leads to the first best practice: Whenever an appropriate exposure base has been
identified, the actuary should rely on a loss reserving method that mixes the loss
development, or chain ladder, method with exposure-based expected loss methods. The
most common of these blended methods in use are the Bornhuetter-Ferguson (BF) [2] and
Cape Cod (CC) methods.

The actuarial literature is full of examples of why the loss development result alone should not be considered the best estimate. Stanard [12] used simulation techniques to measure the expected value and the variance of the prediction error of several loss reserving methods, including the loss development method and expected loss-based methods. [Prediction error, or bias, is the difference between our estimate of an unknown value, e.g. ultimate loss, and the actual realized value of that quantity. The prediction error is therefore an unknown random variable. We are also interested in the expected squared prediction error and the variance of the prediction error for any method; these are the same if the method is unbiased. The best methods have low-mean, low-variance prediction errors.]

Stanard found significantly higher prediction errors (both mean and variance) when using the loss development method. Based upon the results of the simulation model, Stanard concluded that the loss development method is clearly inferior to the methods which give weight to expected losses.

Murphy [9] came to the same conclusions, although he found that loss development
techniques could be improved upon by varying the averaging methods used to select link
ratios.

Besides these studies which demonstrate that blended methods reduce prediction errors,
there are several other reasons why we believe such methods are superior. First, these
techniques are easy to apply. Second, blending in expected losses is intuitively appealing.
The less mature the loss experience, the more the weight assigned to the expected losses.
As many actuaries using the loss development method have discovered, early
development is unstable, not a useful predictor of ultimate losses, and will understate
ultimate losses when the current evaluation is less than average and overstate when the
current evaluation is greater than average.

Murphy and Patrik [11] make similar observations.

A third advantage cited by Patrik is that future loss emergence predicted by these methods
is correlated with an exposure measure (instead of with past loss emergence). The
advantage is that external information can be incorporated; the exposure measure can be
adjusted to reflect expected changes in rate level adequacy (if premiums are used) or
changes in the distribution of business by class, territory, limit, etc. (if ratemaking
exposures are used).

Fourth, an expected loss method can make use of loss information from all of the years in order to project any given year. Robertson [12], in his review of Stanard, discusses this advantage of the CC method. Mack [8] makes a similar observation; the loss development method uses only one data point from one accident year to project a given year, implicitly assuming that other years do not provide statistically useful information.

We believe that the use of a blended method is a best practice in all but the most extreme
situations. As an example of such a situation, the loss development method may be
sufficient for fast-reporting lines of business where there is very little variation in loss
development factors by accident year. Even then, it is good practice to give weight to
exposure-based, expected loss methods.

THE BEST WEIGHTS:


BORNHUETTER-FERGUSON & ALTERNATIVE BORNHUETTER-FERGUSON

Given that the best practice is to weight together loss development and exposure-based
methods, what are the best weights to use? The popular Bornhuetter-Ferguson (BF)
method uses weights related to the size of the loss development factor:

L = D \times \frac{1}{LDF} + A \times \left( 1 - \frac{1}{LDF} \right)

(Equation 1)

where

L = estimated ultimate loss
D = loss development estimate
A = expected loss estimate
LDF = loss development factor.
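To make the weighting concrete, the following is a minimal Python sketch of Equation 1 for a single accident year; the function name and sample figures are ours, not the paper's.

```python
def bf_ultimate(reported_loss, ldf, a_priori_ultimate):
    """Blend the chain ladder and expected loss estimates using the
    Bornhuetter-Ferguson weights of Equation 1 for one accident year."""
    dev_estimate = reported_loss * ldf   # D, the loss development estimate
    weight_d = 1.0 / ldf                 # BF weight on D (the percent of ultimate reported)
    return weight_d * dev_estimate + (1.0 - weight_d) * a_priori_ultimate

# With $500 reported, an LDF of 2.0 and an a priori ultimate of $1,200:
# 0.5 x 1,000 + 0.5 x 1,200 = 1,100.
print(bf_ultimate(500.0, 2.0, 1200.0))   # 1100.0
```

Note that the weighted development term reduces to the reported losses themselves (D/LDF = reported), recovering the familiar "reported plus expected unreported" form of the BF method.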

Gluck [5] demonstrated that the BF weights are optimal (they result in the minimum
variance of the prediction error) subject to certain constraints. Gluck also demonstrated
that the BF weights are a specific case of the more general best weights: those that are
inversely proportional to the variance of the prediction errors of D and A. Gluck refers to
this general case as "alternative BF" weights. In the more specific BF case, the assumption is made that the LDF is proportional to the prediction error variance of the loss development method, i.e. the error using D is likely to be larger the more immature the data.
Gluck states that alternative BF weights are particularly useful when the LDF is less than
unity, for example in a line such as automobile physical damage. The BF weights should
not be used in this case because they will result in L being outside of the range of D and
A. (D would be given greater than 100% weight.) There is still uncertainty surrounding
the loss development method when the LDF is less than unity, so it should be given a
weight less than 100%.

Gluck also states that, even if the LDF is greater than unity, alternative BF weights may
be more appropriate if the actuary believes that the LDF approaches unity faster than the
uncertainty surrounding D is eliminated. For instance, the LDF may be close to 1.0, but
the incurred losses to date may include a large proportion of case reserves. The BF could
assign too much weight to D in this case.

How should alternative BF weights be calculated? Gluck states that any other reasonable
proxy for the variance of the loss development method may be used. In an example in his
paper, Gluck uses the paid LDF in place of the incurred LDF to derive the weights. (Note
that incurred LDFs are still used to derive D.)

Based on Gluck's work, we believe the use of the BF weights, or alternative BF weights, is a best actuarial practice for weighting together D and A. Within this discussion of best practices, when we refer to the "BF method," we are referring to the weighting scheme used to blend the loss development result with the expected loss. As it is commonly applied, the BF method also refers to the method of selecting the expected losses, A, where A is often determined arbitrarily. We will approach the subject of the best practices for estimating A in the next section. There are many practices found in the literature, other than BF, that will enable us to identify a best expected ultimate loss estimate.

THE BEST A PRIORI ESTIMATE: THE GENERALIZED CAPE COD METHOD

The Traditional Cape Cod Method

The Cape Cod (CC) method was described by both Stanard and Buhlmann [3]. It is, therefore, also referred to as the Stanard-Buhlmann method. In Stanard's original presentation of the method, the exposure is assumed to be constant from year to year. Exhibit 1 shows an example of the CC method with changing exposure.

In this presentation of the CC method, the exposure is separated into two components:

1. the exposure expected to correspond to the reported losses, or "reported exposure";
2. the component expected to correspond to the unreported losses.

The reported exposure is calculated by multiplying the exposure by the percentage of ultimate losses reported, or the inverse of the loss development factor to ultimate, for each accident year. As Stanard describes the method, the CC averages, then adjusts the reported losses. The reported losses for all years combined are divided by the reported exposures for all years combined to derive an expected ultimate loss-to-exposure ratio. We will refer to this ratio throughout this paper as the expected ultimate loss ratio, even though the exposures may be quantities other than premiums (e.g. use of ratemaking exposure units would result in an expected ultimate pure premium). The expected ultimate loss ratio is then applied to the unreported exposure for each year to derive an estimate of IBNR reserves.

A zero loss trend implicitly underlies Exhibit 1. Exhibit 2 shows how an actuary would use the CC method if he or she believed that losses increased over time at an average annual trend rate of 7.0%. The reported losses in column (4) have been adjusted to 1997 loss levels. The all-years-combined expected ultimate loss ratio in column (9), therefore, is also stated at a 1997 level. The loss ratios in column (10), accordingly, have been detrended by 7.0% per year before being multiplied by the unreported exposure for each accident year. Under the CC method, the expected loss ratio estimate, before detrending, is the same for each accident year.

The CC formula for the expected ultimate loss ratio for all accident years can be written
another way:

\mathrm{Exp}(LR) = \frac{\sum_i (R_i \times LDF_i / E_i) \times (E_i / LDF_i)}{\sum_i (E_i / LDF_i)}

(Equation 2)

where

Exp(LR) = expected loss ratio estimate
R_i = reported trended losses for accident year i
LDF_i = loss development factor to ultimate for accident year i
E_i = exposures for accident year i.

Therefore, the expected loss ratio estimate used in the CC method can best be viewed as the weighted average of the trended, developed ultimate loss ratios for each year shown in Column (8), where the weights are based on a two-dimensional weighting scheme: they are proportional to exposures and inversely proportional to development factors or, equivalently, are equal to the reported exposures in column (6) of Exhibit 2. Accident years with greater exposure levels get more weight than accident years with lower exposure levels. In addition, loss experience from more mature accident years gets more weight (in proportion to the percentage of ultimate reported at each maturity) since older years are closer to ultimate. Note that Equation 2 simplifies to the ratio of reported losses to reported exposures for all years combined.

A Modification to the Traditional Cape Cod Method

Gluck developed a modification to the CC method which introduced a third dimension to the a priori weighting scheme: decay. This modification is a significant departure from the traditional CC method in that the a priori loss ratio, trended to a common accident year basis, varies for each accident year. Gluck refers to this as the Generalized Cape Cod method (GCC).

In estimating the expected ultimate loss ratio for a given accident year, weight is assigned
to all accident years in inverse proportion to the distance from the accident year in
question. This is accomplished through the use of a decay factor, F, which varies
between 0 and 1. Using notation similar to Glucks, the GCC formula for the expected
ultimate loss ratio in accident year i is:

\mathrm{Exp}(LR_i) = \frac{\sum_j (R_j \times LDF_j / E_j) \times (E_j / LDF_j) \times F^{|i-j|}}{\sum_j (E_j / LDF_j) \times F^{|i-j|}}

(Equation 3)

where

Exp(LR_i) = expected loss ratio estimate for accident year i
F = decay factor (0 <= F <= 1)
R_j = reported trended losses for accident year j
LDF_j = loss development factor to ultimate for accident year j
E_j = exposures for accident year j.

Exhibit 3 shows an example of the GCC method and Exhibit 4 shows the weighting
scheme for calculating the a priori loss ratios (for accident years 1994 and 1997), using a
decay rate of 0.75 and a trend rate of 7.0%. A decay factor of 0.75 gives less weight to
the accident years that are not immediately surrounding the accident year being
calculated. In fact, as F approaches 0, the GCC method result approaches the loss
development method result. As F approaches 1, the GCC method result approaches the
CC method result. Therefore, the traditional loss development and CC methods can be
viewed as special cases of the GCC method.
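A sketch of the same calculation with Gluck's decay weight added follows; the function name and data values are ours, not the paper's.

```python
def gcc_elr(i, reported, ldfs, exposures, trend, decay):
    """A priori loss ratio for accident year i under the Generalized
    Cape Cod (Equation 3). decay is F in [0, 1]: near 0 the result
    approaches the chain ladder loss ratio for year i; near 1 it
    approaches the traditional all-years Cape Cod ratio."""
    n = len(reported)
    to_latest = [(1.0 + trend) ** (n - 1 - j) for j in range(n)]
    num = den = 0.0
    for j in range(n):
        weight = (exposures[j] / ldfs[j]) * decay ** abs(i - j)   # decayed reported exposure
        lr_j = reported[j] * to_latest[j] * ldfs[j] / exposures[j]  # trended developed loss ratio
        num += lr_j * weight
        den += weight
    return num / den

# Year-specific a priori loss ratios with F = 0.75 and 7% trend (illustrative data).
rep, fac, exp_ = [800.0, 600.0, 300.0], [1.1, 1.4, 2.5], [1000.0, 1000.0, 1100.0]
print([round(gcc_elr(i, rep, fac, exp_, 0.07, 0.75), 3) for i in range(3)])
```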

As the actuary's confidence in the loss development method increases, smaller decay factors should be used. On the other hand, for a line of business such as casualty umbrella, a decay factor close to 1.0 may be appropriate. In an appendix to his paper, Gluck describes a method for using the variances within the loss triangle (in both the development and trend directions) to determine the best decay factor, F.

Gluck demonstrated that the BF weights are optimal (they produce the minimum variance estimate) subject to certain constraints. Among these constraints is the assumption that the expected ultimate losses are known. This constraint is obviously not met in practice. Gluck has demonstrated that, if the expected ultimate loss ratio is determined as outlined in the GCC method, then the BF weights are still optimal.

There are some situations when it is prudent not to use the GCC method to calculate
expected ultimate losses. For example, there may be situations when it is desirable to use
external data or the actuarys own a priori expectations to derive the a priori loss ratio,
such as for high excess layers where there are zero reported losses.

In most other situations, we believe it is a best practice to use the GCC method to
determine the a priori ultimate losses, because:

It uses information from all accident years in estimating any one year,
thereby eliminating the need to arbitrarily determine how to select the
a priori loss ratio.

It gives more weight to surrounding years in determining a particular year's expected loss. This is appropriate because (1) insurance is subject to underwriting cycles, (2) pricing and underwriting changes are implemented gradually due to regulatory and other business constraints, and (3) the imprecision in using one year to estimate another likely increases with the distance between them.

It gives less weight to immature years and low volume years, reducing the prediction error.

Gluck points out that the GCC a priori loss ratio estimate is optimal -- produces the minimum bias linear estimate -- under certain conditions.

It allows the actuary to systematically reflect internal or external changes, such as trend, coverage changes, underwriting changes, etc., through exposure base adjustments. (See BEST EXPOSURE BASE section.)

The GCC method is a generalized case of the loss development and traditional CC methods. These other methods can be handled within the GCC framework.

BEST EXPOSURE BASE

When projecting losses, actuaries often have a choice of exposure bases. These typically
would include earned premium, ratemaking exposure units and ultimate claim counts.
The selection of the best exposure base should be based on the following considerations:

1. Does there exist a leading indicator of the quantity being projected? If so,
then this exposure base will provide more information than other bases and
should be considered a best exposure base.

2. Does the exposure base require any adjustments before it can be used in the
GCC method? All other things being equal, it is best to choose an exposure
base that requires the fewest adjustments.

Leading Indicators

Premiums, and ratemaking exposures upon which premiums are based, are truly a priori
measures of exposure to loss. In general, they are determined for each risk in advance,
before the first claim is incurred. To the extent possible, the exposures should be
adjusted for changes in the underlying components of the loss process as described in the
next section. While proper adjustment of the exposures underlying the loss process is
necessary, it is not sufficient. A pure exposure-based approach can never model the
randomness of insurance claims. In addition, there is significant parameter risk involved
in using premiums or ratemaking exposures since ratemaking is not an exact science.

Therefore, we believe that the best exposure base should incorporate the latest available
information. Reported claim counts, in general, emerge more quickly than losses and are
leading indicators of the ultimate loss experience in most lines of business. However,
ultimate claim counts themselves are not known with certainty and must be estimated. In
keeping with the best practices, the GCC method should be used to estimate ultimate
claim counts. Exhibit 5 provides an example.

Adjustment of Exposure Base

Littmann [7] describes an exposure-based approach to modeling the loss process. That
paper developed a model of the expected change in losses by measuring the changes in
the following items:

Number of risks;

Average size of risks;

Policy limiting factors (limits, deductibles and net retentions);

Class of business (including territory or state);

Underwriting standards (including loss control and risk management initiatives);

Claims adjustment procedures (changes in philosophy, treatment of incidents);

Inflation;

External factors (e.g. benefit level changes).

Breaking down the loss process into these components is a useful application for the GCC
method. The GCC exposure base may need to be adjusted for changes in any of the listed
factors. For example, if ratemaking exposure units are used as the exposure base, they
should first be converted to a base class-equivalent basis to account for any changes in the
class mix.
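As an illustration of the base class conversion (the helper function and relativities below are hypothetical, not from Littmann or the exhibits):

```python
def base_class_exposure(exposure_by_class, relativity_by_class):
    """Restate raw ratemaking exposures as base class equivalents so a
    shifting class mix does not distort the loss-to-exposure ratio.
    relativity_by_class holds assumed pure premium relativities to the
    base class (base class = 1.0)."""
    return sum(units * relativity_by_class[cls]
               for cls, units in exposure_by_class.items())

# A book shifting toward class B (relativity 1.50) shows rising
# base-class-equivalent exposure even with total units unchanged.
print(base_class_exposure({"A": 700.0, "B": 300.0}, {"A": 1.0, "B": 1.5}))  # 1150.0
```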

Alternatively, in this example, losses can be adjusted to reflect the change in distribution of exposures by classification. This adjustment to losses should be treated in the same manner as trend adjustments. That is, before expected ultimate losses are calculated for each accident year, this adjustment would have to be reversed so that each year's losses are at their actual, as opposed to common, level. This reversal is done in Column (10) of Exhibit 3.

When modeling these changes, the question to ask is: "Does this change affect the expected loss-to-exposure ratio?" If the answer is "no," then neither the exposure base nor the losses need to be adjusted to a common level. For example, if the exposure base is premium and the policy limits profile has changed dramatically, this change would not be expected to affect the expected loss ratio. As long as the increased limits factors are assumed to be correctly priced, any increase (or decrease) in loss severity would be offset by a corresponding change in premiums.

If the answer to the above question is "yes," or if the increased limits factors were not correctly priced, then either the exposure base or the losses (or both) need to be adjusted to a common level to reflect the anticipated changes. In the previous example, if the exposure base instead was ratemaking units or claim counts, then there would be a change in the expected loss per exposure.

Table 1 summarizes the adjustments that need to be made to three possible exposure
bases (premiums, ratemaking exposures and claim counts) in response to changes in each
of the items listed above, plus some other situations.

Table 1
Adjustments to GCC Exposure Bases

Changes in               Exposure Base   Adjustment
Number of Risks          Premium         None
                         R/M Expos       None
                         Claims          None
Size of Risks            Premium         None
                         R/M Expos       Adjust for pure premium differences by size of risk
                         Claims          Adjust for severity differences by size of risk
Limits, Deductibles      Premium         None
                         R/M Expos       Adjust for severity differences by limit/deduct.
                         Claims          Adjust for severity differences by limit/deduct.
Class, State Mix         Premium         None
                         R/M Expos       Adjust for pure premium differences by class/state
                         Claims          Adjust for severity differences by class/state
Underwriting Standards   Premium         Adjust for pure premium differences
                         R/M Expos       Adjust for pure premium differences
                         Claims          Adjust for severity differences
Claims Adjustment        Premium         Adjust for pure premium differences
                         R/M Expos       Adjust for pure premium differences
                         Claims          Adjust for severity differences
Inflation-Sensitive      Premium         Adjust for exposure trend
Exposure Base            R/M Expos       Adjust for exposure trend
                         Claims          None
Rate Level Changes       Premium         Adjust to common rate level
                         R/M Expos       None
                         Claims          None

Note: R/M Expos = ratemaking exposures. Claims = claim counts.

In all cases when premiums are used as the exposure base, they should be adjusted to a
common rate level, and premiums and inflation-sensitive exposure bases should also be
adjusted for exposure trend. Furthermore, losses should always be adjusted to a common
severity and/or frequency level -- only severity if claim counts are used as the exposure
base, and also frequency if premiums or ratemaking exposures are used -- to reflect (1)
inflation during the experience period and (2) external factors such as benefit level
changes. (In the exhibits, it is assumed that exposures were already properly adjusted,
and losses are adjusted to the basis of the most recent year via trend factors.)

All else equal, the exposure base requiring the fewest adjustments is the best choice
because additional adjustments add imprecision to the process. In most situations, this
would be the ultimate claim counts. Premiums have certain advantages when the mix of
business by class, territory or limit has changed as long as these items are priced
correctly. There is also the danger, in using premiums, of being unable to measure
changes other than manual rate changes, e.g. changes in the average schedule debit or
credit.

It should be noted that the GCC calculation will produce the same results if the exposures
are adjusted to the level of the most recent year, the oldest year, or any year in between.
The results are identical because only the relative exposure levels from year to year
matter.

An Illustration

In stable environments, we have found ultimate claim counts to be the best exposure base
for losses. In addition to being leading indicators of losses, they generally require the
fewest adjustments.

A comparison of Exhibits 6 and 7 illustrates this point. In Exhibit 6, ultimate claim counts are used as the exposure base for estimating ultimate losses. Losses are adjusted for severity trend only. Note that the ultimate claims were derived in Exhibit 5 using a GCC approach.

In Exhibit 7, ratemaking exposure units are used as the exposure base for ultimate losses.
In this situation, adjustments must be made for both severity and frequency trends. In this
example, however, it is assumed that frequency trend is zero based on external indices.

Ultimate losses in Exhibit 6 are less than the ultimate losses in Exhibit 7 because the ratemaking exposure unit-based approach fails to recognize changes in claim frequency. Although it may be reasonable to assume that frequency trend is zero, the experience in Exhibit 5 indicates a decreasing frequency trend. Ultimate claim counts that are determined by a GCC method are more accurate than any externally derived or fitted frequency trend. Remember that the purpose of the trend adjustment is to put historical years on a common level, and not to predict prospective trends.

Alternatively, the actuary could build in a negative frequency trend based on the actual
claims experience. This is equivalent, however, to using the ultimate claim counts as the
exposure base in the GCC method (Exhibit 6), but requires some extra steps.

Layered GCC Methods

This concept can be extended by layering GCC methods, using the results of one application as the exposure base for another application. The steps should be as follows (a sketch in code follows the list):

1. Use ratemaking exposure units (or policy counts or on-level premium) as the
exposure base for projecting ultimate reported claim counts.
2. Use ultimate reported claim counts from the GCC as the exposure base for
projecting ultimate losses.
3. Use ultimate losses (from the GCC) as the exposure base for projecting
ultimate ALAE.
4. Use ultimate losses (from the GCC) as the exposure base for projecting
ultimate salvage and subrogation (S&S).
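The sketch below shows the layering mechanics, reusing the gcc_elr helper from the earlier sketch and blending with BF weights; all trend rates, decay factors and data values are illustrative.

```python
def gcc_ultimates(reported, ldfs, exposures, trend, decay):
    """GCC a priori estimates blended with the chain ladder via BF
    weights (Equations 1 and 3); assumes gcc_elr from the earlier
    sketch is in scope."""
    n = len(reported)
    to_latest = [(1.0 + trend) ** (n - 1 - i) for i in range(n)]
    ults = []
    for i in range(n):
        elr = gcc_elr(i, reported, ldfs, exposures, trend, decay)
        a_priori = elr / to_latest[i] * exposures[i]   # detrended expected ultimate
        ults.append(reported[i] + a_priori * (1.0 - 1.0 / ldfs[i]))
    return ults

# Layering: each application's output becomes the next one's exposure base.
counts = gcc_ultimates([90.0, 80.0, 50.0], [1.05, 1.2, 1.8],
                       [1000.0, 1000.0, 1100.0], 0.0, 0.75)
losses = gcc_ultimates([800.0, 600.0, 300.0], [1.1, 1.4, 2.5], counts, 0.07, 0.75)
alae = gcc_ultimates([80.0, 55.0, 25.0], [1.2, 1.6, 3.0], losses, 0.0, 0.75)
```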

If the ratio of claims closed with indemnity payment (CWIP) to total closed claims is
changing materially, then ultimate reported claim counts are inappropriate for estimating
ultimate losses. Instead, the actuary may want to use ultimate CWIPs. (This and other
data inconsistencies, along with a method for deriving ultimate CWIPs, are explored in
the Appendix.)

Exhibit 8 shows the GCC method as applied to reported ALAE. In this example, the
GCC method derives a priori ultimate ALAE-to-loss ratios. The trend factor is used to
reflect any anticipated changes in this ratio. In the example, the ratio is expected to
decline in 1995.

Potential causes include coverage or operational changes such as including ALAE within policy limits or decreased reliance on outside adjusters. An ALAE ratio trend factor of 0.80 is used for accident years 1993 and 1994 to adjust these years to a 1997 level in Columns (8) and (9). In Column (10), the a priori ALAE ratios are adjusted back to 1993 and 1994 levels for purposes of calculating unreported ALAE reserves.

BEST LOSS DEVELOPMENT PATTERNS

It is generally preferable to analyze losses separately from allocated loss adjustment


expenses (ALAE), salvage and subrogation (S&S), and other quantities that vary with
losses for the same reasons an actuary would separately analyze losses by line of
business. Each of these quantities has its own, unique development pattern. To the
extent that S&S recovery efforts have been increasing, for example, the S&S
development pattern would be changing over time. If losses were reviewed net of S&S,
the changes in recovery efforts might not be observable. It would be safe to combine
losses, ALAE and recoveries if the actuary (1) observes these items developing
consistently and (2) expects that the relative mix of losses, ALAE and recoveries to
remain stable over time.

To the extent allowed by the data, then, a best development pattern should be selected for
each of the following:

reported claim counts

incurred loss

paid loss

paid ALAE

incurred ALAE

S&S recoveries

Specific considerations for each of these patterns are addressed separately. In addition, best practices for selecting link ratio averages, which apply to all of the above patterns, are addressed in the last section.

Reported Claim Counts

Reported claim counts are defined as the sum of claims closed with indemnity payment (CWIP), closed without payment (CWOP), and open claims. These are all claims reported to the insurer by its insureds.

Reported claim counts are generally the easiest patterns to select, for many of the reasons that we are proposing using claim counts as the best exposure base for ultimate losses. The patterns are often stable and consistent, approach ultimate more quickly than losses, and are generally unaffected by insurer operational changes other than incident claim reporting (addressed in the Appendix). They are a function of the insureds' and plaintiffs' actions and societal trends, not the insurer's internal procedures. In statistical terms, the variance of the bias is expected to be less for claim counts than for dollars of loss, resulting in greater credibility.

The actuary should also perform tests to determine whether the ratio of CWIP claims to
closed claims (CWIP ratio) is changing for accident years at similar maturities. If CWIP
ratios are relatively constant, then using ultimate reported claim counts as the exposure
base in the GCC method is appropriate. If CWIP ratios are changing materially, then the
actuary should consider using ultimate CWIPs as the exposure base, because reported
claim counts would no longer represent the true exposure to loss. In this case, a method
for deriving ultimate CWIPs is needed. Such a method is described in the Appendix.

Incurred Loss

One incurred loss development pattern should be identified as being the best. Unlike
reported claim counts, incurred losses are materially affected by changes in insurer
operational procedures. The best pattern should be determined after testing and adjusting
for these operational changes. For purposes of determining a best estimate, the actuary
would eliminate from consideration all other incurred development patterns. High and
low estimates can be derived by selecting slower or faster development patterns.

The following issues should be considered when selecting the best incurred pattern:

case reserving changes

changing net retentions (net data)

changing policy limits profile (gross data)

underwriting changes

changes in claim count definition (incident claims).

These issues are explored in the Appendix.

Paid Loss

In addition to the incurred development pattern, one paid loss development pattern should
be identified as being the best. Many of the proposed best practices described for
incurred development also apply to paid development, such as:

changing net retentions (net data)

changing policy limits profile (gross data)

underwriting changes.

For paid loss development, the actuary should also perform diagnostic tests to determine whether there have been changes in claims settlement rates. These tests are also discussed in the Appendix.

ALAE

Before selecting the best ALAE patterns, the actuary should question the claims manager
about changes that would impact ALAE development, or the ratio of ALAE to loss.
Examples of such events include:

1. Shift of claim handling responsibilities between outside adjusters and inside adjusters;
2. Changes in claim settlement philosophy;
3. Changes in the way in which lawyers and adjusters are compensated; for example, are fees billed on a monthly basis, quarterly basis, upfront, at the end of the suit, etc.;
4. Changes in the claims settlement rate (may also affect paid ALAE development); and
5. Changes in case reserve adequacy (may also affect incurred ALAE development).

Salvage & Subrogation Recoveries

S&S development patterns generally lag the paid loss pattern, since the loss must be paid
before a recovery can be sought. If S&S development data beyond a given maturity is
thin, then the paid loss development may be lagged to estimate the S&S pattern. To be
conservative, the actuary might consider using the paid loss development without any lag.

Best Link Ratio Averages

The reader may skip this section without loss of continuity. The ideas expressed in this
section have been previously worked out in the actuarial literature, notably by Stanard,
Peck [12], Mack and Murphy. We believe it is useful to consolidate these ideas into a
framework for selecting best link ratio averages. These ideas apply to all development
patterns.

Based on the cited authors work, the actuary may choose from among the following link
ratio methods:

straight average development (SAD)

weighted average development (WAD)

least squares (LS)

geometric average development (GAD).

The SAD, WAD and LS methods are all weighted averages of the observed link ratios
between two maturities. The difference is in the weights: SAD applies equal weights to
the observed link ratios, WAD applies weights in proportion to observed losses, and LS
applies weights in proportion to the square of the observed losses. GAD differs from the
other methods in that it is not a linear average of the observed link ratios; it is the nth
root of the product of n observed link ratios.
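The four averages can be computed directly from the paired losses at consecutive maturities; the following is a minimal sketch with illustrative data.

```python
import math

def link_ratio_averages(x, y):
    """Four candidate averages of the link ratios y/x for one development
    interval; x and y are paired losses at the earlier and later maturity
    for each accident year observed at both."""
    ratios = [yi / xi for xi, yi in zip(x, y)]
    sad = sum(ratios) / len(ratios)                  # equal weight per observed ratio
    wad = sum(y) / sum(x)                            # weights proportional to x
    ls = (sum(xi * yi for xi, yi in zip(x, y))
          / sum(xi * xi for xi in x))                # weights proportional to x squared
    gad = math.prod(ratios) ** (1.0 / len(ratios))   # n-th root of the product
    return {"SAD": sad, "WAD": wad, "LS": ls, "GAD": gad}

print(link_ratio_averages([100.0, 200.0, 400.0], [150.0, 260.0, 500.0]))
```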

Earlier work by Hachemeister and Stanard [6] indicated that WAD is superior to SAD, as the latter is likely to produce substantial additional bias. Peck, in his review of Stanard, confirmed that both GAD and LS are superior to SAD. Murphy replicated Stanard's simulation, incorporating GAD into the results, and reached the same conclusion. His results indicate that WAD, GAD and LS are superior to SAD.

Based on these authors' work, it appears that a best practice should be not to use SAD.

According to Mack and Murphy, the best average to use depends on the underlying process that we assume is generating losses (or ALAE, counts, etc.). For example, if the loss at a subsequent maturity depends on the loss at the previous maturity, then a statistical model of the following form may be assumed:

y = bx + e

where

y = the observed loss at the subsequent maturity
x = the observed loss at the current maturity (x is therefore a constant)
b = the unknown constant link ratio
e = a random error term with an expected value of zero.

Mack and Murphy prove that if the above process is the true process generating losses,
then the best linear unbiased estimator of b is LS. (In general, the best weights to use are
those which are inversely proportional to the variances of the link ratios.)

The loss generating models which correspond to SAD, WAD and GAD being the best are as follows. LS is also repeated for the reader's convenience.

If loss generating model is:    Best average is:
y = bx + ex                     SAD
y = bx + e(x)^(1/2)             WAD
y = bx + e                      LS
y = bxe                         GAD*

*For GAD, the expected value of the error term is 1.

The actuary can determine which model applies using ideas presented by Mack. Mack
describes residual plots which can be constructed to help determine whether SAD, WAD
or LS is the true loss process. A GAD plot can similarly be constructed.
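A simplified residual check in the spirit of Mack's plots (this is our shorthand, not Mack's exact procedure) scales the residuals differently for each model and looks for roughly constant spread:

```python
def scaled_residuals(x, y, b, k):
    """Residuals (y - b*x) / x**k for one development interval.
    k = 1 corresponds to the SAD model, 0.5 to WAD, 0 to LS; plotted
    against x, the correct k should show roughly constant spread."""
    return [(yi - b * xi) / xi ** k for xi, yi in zip(x, y)]

x, y = [100.0, 200.0, 400.0], [150.0, 260.0, 500.0]
b = sum(y) / sum(x)                    # WAD estimate of the link ratio
print(scaled_residuals(x, y, b, 0.5))  # inspect the spread under the WAD model
```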

The actuary can also consider the following features of the models to judgmentally
determine which averages to use:

SAD assumes that the variance of the realized loss at the next observed
maturity is proportional to the square of the loss at the current maturity.
Equivalently, SAD assumes that the variance of the realized link ratio is
constant for all observed losses at the current maturity.

WAD assumes that the variance of the realized loss at the next observed
maturity is proportional to the loss at the current maturity. Equivalently,
WAD assumes that the variance of the realized link ratio is lower the greater
the observed loss at the current maturity.

LS assumes that the variance of the realized loss at the next observed maturity
is constant, for all observed losses at the current maturity. Equivalently, LS
assumes that the variance of the realized link ratio is lower the greater the
observed loss at the current maturity.

GAD assumes that the variance of the realized loss at the next observed
maturity is proportional to the square of the loss at the current maturity times
the unknown link ratio. GAD assumes that the variance of the realized link
ratio is constant for all observed losses at the current maturity.

The actuary can compare the above to his or her preconceived beliefs to decide on the best averages to use. In practice, use of plots of the type described by Mack may be the exception rather than the rule. We believe that WAD is the best approach because it has been shown to consistently produce among the best results and it is simple to apply.

Final Thoughts on Loss Development Factors

In addition to choosing the averaging method, the actuary must also choose the number of
years to include in the link ratio averages. Many of the best practices followed earlier
may help determine the years that should be discarded from averages, if any. Aside from
this, it is generally considered appropriate to use averages of recent data points.

Finally, a good thought experiment to perform at the end, after the actuary has selected all
the best patterns, is to justify to an imaginary company executive the actuarial decisions
regarding development that the actuary has made, using the business changes
implemented by the insurer and discovered by the actuary as support. If an actuary cannot
describe a business reason behind all decisions regarding the development patterns, then
the actuary should reconsider making the change until he/she can support it in such terms.
For example: "I selected slower development patterns than indicated by the data because net retentions have increased significantly and case reserves were lowered two years ago."

SELECTED ULTIMATE LOSSES

The Statement of Principles Regarding Property and Casualty Loss and Loss Adjustment Expense Reserves states that ordinarily the actuary will examine the indications of more than one method. The various indications that are produced by following the practices outlined in this paper can be considered different methods. For example, paid and incurred variations should be considered different approaches. Additionally, the use of different exposure bases within the GCC method can be considered distinct methods.

Unless there is an obvious reason to eliminate one or the other, both paid and incurred
methods should be given weight when selecting ultimate losses. Keeping with the best
practices expressed in this paper, the weights assigned to the paid and incurred methods
should be inversely proportional to the variance of their ultimates.

Earlier in this paper, we discussed considerations for determining the best exposure base. There may be situations where two or more exposure bases appear to be equally applicable. If several GCC methods are used, each with a different exposure base, then more weight should be given to the method with less variable developed ultimate loss ratios (in Column 7). Ultimate loss ratios that vary minimally from year to year indicate that the actuary has successfully adjusted losses and exposures for trends, distributional shifts and other explainable changes. Ideally, if losses and exposures are perfectly adjusted, then the developed ultimate loss ratios for each year should fluctuate randomly around some mean value.

POINT ESTIMATES VS. RANGES

Once the best estimate is determined, a range can then be determined by varying key
parameters within this framework. For example, alternate loss development patterns, tail
factors, trend rates, decay rates and exposure bases can be used to produce reasonable
high and low reserve estimates. Great care should be exercised to ensure that too wide a range is not created by compounding various optimistic (or pessimistic) assumptions.

The key idea, though, is to use the GCC framework to generate the range. This is a
departure from the more common situation where ranges are created by using the results
of various methods.
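One way to operationalize this is to perturb one parameter at a time around the selected values; the sketch below assumes the gcc_ultimates helper from the earlier sketch and uses illustrative perturbation sizes.

```python
def reserve_range(base_decay, base_trend, reported, ldfs, exposures):
    """Vary decay and trend one at a time around the selected values,
    avoiding the compounding of all-optimistic or all-pessimistic picks;
    assumes gcc_ultimates from the earlier sketch is in scope."""
    def total_reserve(decay, trend):
        ults = gcc_ultimates(reported, ldfs, exposures, trend, decay)
        return sum(ults) - sum(reported)

    estimates = [total_reserve(base_decay, base_trend)]
    estimates += [total_reserve(d, base_trend)
                  for d in (base_decay - 0.15, base_decay + 0.15)]
    estimates += [total_reserve(base_decay, t)
                  for t in (base_trend - 0.02, base_trend + 0.02)]
    return min(estimates), max(estimates)

# Example (illustrative data):
# reserve_range(0.75, 0.07, [800.0, 600.0, 300.0], [1.1, 1.4, 2.5],
#               [1000.0, 1000.0, 1100.0])
```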

CONCLUSION

Table 2 summarizes the proposed best practices discussed in this paper.

Table 2
Proposed Best Practices

Problem or Issue: When should loss development methods be used? When should exposure-based methods be used?
Best Practice: Whenever an appropriate exposure base has been identified, the actuary should rely on a loss reserving method that mixes loss development methods with exposure-based expected loss methods.

Problem or Issue: What is the best way to weigh together different methods?
Best Practice: The best weights used to average the results of the loss development and expected loss methods are Bornhuetter-Ferguson weights (or alternative BF weights).

Problem or Issue: What is the best way to determine the expected ultimate loss ratio?
Best Practice: The expected ultimate loss ratio should be determined using the Generalized Cape Cod (GCC) method described by Gluck. The expected ultimate loss ratio is based on a combination of factors, namely: (a) maturity of data, (b) volume of data and (c) decay.

Problem or Issue: What items should be considered when selecting the best exposure base for the GCC method?
Best Practice: (1) The exposure base must be a leading indicator for the quantity being projected. (2) The exposure base requiring the fewest adjustments is most likely the one with the least distortion.

Problem or Issue: What exposure base is generally best for projecting ultimate losses in the GCC method?
Best Practice: As long as there is a stable environment, ultimate reported claim counts are generally the best exposure base when projecting ultimate losses using the GCC method.

Problem or Issue: What is the best method for determining ALAE reserves?
Best Practice: GCC methods should also be applied to ALAE using the ultimate losses (from the loss GCC method) as the exposure base.

Problem or Issue: What is the best method for determining S&S reserves?
Best Practice: GCC methods should also be applied to S&S using the ultimate losses (from the loss GCC method) as the exposure base.

Problem or Issue: What are some of the best practices for picking development factors?
Best Practice: Select one best claim count pattern, one incurred pattern, one paid, one ALAE and one S&S (after adjusting for any operational changes). Use weighted averages to determine link ratios.

Problem or Issue: What are some best practices for selecting ultimate losses?
Best Practice: Give weight to both paid and incurred GCC methods. If one exposure base has not been identified as the best, then select averages of GCC methods based on various exposure bases.

Problem or Issue: What is a practical method for determining a range around the best estimate?
Best Practice: A practical way to determine the reasonable range of reserves is to select reasonable alternative parameters (e.g. tail factors, development factors, trend rates, decay rates or exposure bases) and substitute these parameters into the selected methodology, e.g. Generalized Cape Cod.

The GCC method provides a framework to handle all of the best practices identified in Table 2. It is flexible and effective for nearly all types of insurers, including primary, excess, reinsurance and international. The GCC method should be an actuary's central reserving technique.

This paper is not meant to be exhaustive. It is hoped that the ideas expressed will lead to
further dialogue among property/casualty actuaries and refinement of what are considered
best reserving practices. Suggested areas of further study could include:


best practices for tail factors;

best practices for ULAE reserving.

APPENDIX

BEST PRACTICES WHEN THE DATA ARE INCONSISTENT

When the data have not been consistent, for example due to operational changes made by the insurer, the actuary may need to adjust the data to remove distortions caused by factors other than random noise.

Reported Claim Counts

There is a distortion which should be accounted for when selecting the best claim count pattern: that arising from "incident" claims. These are events which have occurred between the insured and a third party, but have not yet given rise to a claim. The insured notifies the insurer of the incident and, depending on the insurer's data procedures, the incident is coded as a reported claim immediately or not until the claim is actually reported by the insured.

The important consideration here is consistency. If incident claims have always been counted as claims by the insurer, and these are a relatively constant mix of all claims, then there is no distortion. However, if the insurer recently revised its treatment of such incidents, e.g. only in recent years were they coded as claims, then they should not be included in the claim count triangle until they turn into real claims. The best pattern should be selected on this basis.

Claims Closed With Indemnity Payment (CWIP)

The actuary should also perform tests to determine whether the ratio of CWIP claims to
closed claims (CWIP ratio) is changing over time for accident years at similar maturities.
If CWIP ratios are relatively constant, then using ultimate reported claim counts as the
exposure base in the GCC method is appropriate. If CWIP ratios are changing materially,
then the actuary should consider using ultimate CWIPs as the exposure base. In this case,
a method for deriving ultimate CWIPs is needed.

We do not recommend applying a development method to CWIP claims, as the main advantage of using reported claim counts, namely that they develop to ultimate more quickly than losses, does not apply to CWIP development. In addition, ultimate reported claims serve as the upper bound on ultimate CWIP claims. With development methods, ultimate CWIP claims can be projected to be greater than the ultimate number of reported claims. Instead, the ultimate reported counts should be used as the basis in any method for deriving ultimate CWIP counts.

An example of such a method is shown in Exhibits 9 to 11. In Exhibit 9, the number of closed claims is subtracted from the ultimate reported claim counts (estimated earlier in Exhibit 5) to derive the total number of claims unpaid -- including open and unreported -- for each accident year as of each evaluation date. Based on the historical information, we estimate for each interval (1) the rates at which the total unpaid claims will be disposed, and (2) the percentage of those claims that will close with indemnity payment.

The first item, or disposal ratios, are derived in Exhibit 10 by dividing the triangle of
claims closing in each interval by a triangle of unpaid claims (open + IBNR) at the
beginning of each interval. The second item, or in-period CWIP ratios, are derived in
Exhibit 11 by dividing the triangle of CWIPs in each interval by the triangle of all claims
closing during the interval. By applying both of these factors to the unpaid claims for
each accident year, one can estimate the number of CWIP claims in each future interval,
and, therefore, the ultimate number of CWIP claims; this is shown in Exhibit 11. The
ultimate CWIPs so derived are certain to be less than the ultimate reported claims.
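The following is a minimal sketch of the projection for a single accident year; the disposal ratios, in-period CWIP ratios and counts are illustrative stand-ins for the Exhibit 10 and 11 quantities.

```python
def project_ultimate_cwip(ult_reported, closed_to_date, cwip_to_date,
                          disposal_ratios, in_period_cwip_ratios):
    """Project ultimate CWIP counts for one accident year by running the
    remaining unpaid claims (open + IBNR) through the future intervals.
    disposal_ratios[k] = fraction of unpaid claims closing in interval k;
    in_period_cwip_ratios[k] = fraction of those closings with payment."""
    unpaid = ult_reported - closed_to_date
    cwip = cwip_to_date
    for dr, cr in zip(disposal_ratios, in_period_cwip_ratios):
        closing = unpaid * dr
        cwip += closing * cr
        unpaid -= closing
    return cwip  # bounded above by ult_reported by construction

# Example: 1,000 ultimate reported, 600 closed to date (420 with payment),
# two future intervals (the last disposal ratio of 1.0 closes the tail).
print(project_ultimate_cwip(1000.0, 600.0, 420.0, [0.6, 1.0], [0.55, 0.50]))  # 632.0
```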

The ultimate CWIP ratios in Exhibit 11 are declining, alerting the actuary that ultimate CWIPs should serve as the exposure base for the ultimate loss GCC method. (Note that the actuary could have determined this simply by examining the declining CWIP ratios in Exhibit 11, Section (A).) If the actuary were to use ultimate reported claims as the exposure base, the GCC method could overstate ultimate losses, since an important inconsistency in the data -- the fact that the percentage of claims closing with indemnity payment is declining -- would be hidden from the actuary.

Incurred Losses

The following issues should be considered when selecting the best incurred pattern(s):

case reserving changes

changing net retentions (net data)

changing policy limits profile (gross data)

underwriting changes

changes in claim count definition (incident claims)

Case Reserving Changes

The first step in the process is to perform certain diagnostic tests of the data. Based on these tests, the actuary would draw conclusions as to whether there have been changes in the level of average relative case reserve adequacy (CRA) and, if so, adjust the data to put all years onto the current year's basis. These diagnostics and the corresponding adjustments are described by Berquist and Sherman [1] and are not repeated here.

Changes in CRA are most often a calendar year phenomenon, rather than an accident year or policy year phenomenon. For example, when a claims department implements a new case reserving procedure, it generally applies to all open cases from all prior accident years, as well as future accident years. This means it affects all accident/policy years at different maturities and to differing extents. The CRA adjustments, if any, should reflect this.

If the actuary determines that changes in CRA are observed in the data, then a best
practice should be to contact the claims personnel of the insurer to confirm that
operational changes have occurred. The questions asked by the actuary should address
the year in which changes in CRA occurred, and whether it was a calendar year or
accident year implementation; this should then be confirmed for consistency with the
data.

The information being provided by the data and by the claims department may be
inconsistent, however. For example, if the data indicates weakening CRA in recent
calendar years, the actuary may determine after interviewing claims personnel that there
have been no changes in case reserving procedures. Instead, underwriting changes,
declining net retentions, or declining policy limits may be the cause behind declining
average claim sizes on open claims, rather than specific claims department actions. In
this case, the CRA diagnostics will give a false reading of weakening case reserves. The
actuary should consider tempering the CRA diagnostics and adjustments when claims
personnel do not corroborate phenomena observed in the data.

This will alert the actuary of the need to interview appropriate personnel to discuss
changes in net retentions, policy limits or underwriting procedures, to verify the observed
changes in relative case reserves.

Net Retentions & Policy Limits Profile

This is a policy year or reinsurance underwriting year phenomenon rather than an accident
year or calendar year phenomenon. Changing net retentions affect the net development
patterns, and changing policy limits impact direct development. Declines in these values
should serve to shorten development patterns, and similarly increases will lengthen the
development. The practitioner should also be aware of significant changes to deductibles.

Even if net retentions or the mix of policy limits is not changing in nominal terms, the
real retentions and policy limits are declining due to claim cost trend. Although an
actuary could construct statistical methods to adjust for this, they are beyond the scope of
this paper. The actuary should make a determination whether he/she believes this effect
to be material, based on severity trends in the line of business, the average severity in the
book of business, and the limits written by the insurer. The closer the limits are to the
average severity, the greater the dampening effect will be, i.e. recent accident years will
develop more quickly than older accident years.

We believe it is a best practice to request the net retention and policy limits information at the same time as the loss and exposure data, which may eliminate the need to discuss these changes with the insurer. The actuary should request detailed information about the insurer's ceded reinsurance program -- including excess of loss and quota share retentions by line of business -- for as many years as are available. The requested policy limits information should show written premium, or policy counts, by policy year and limit for as many years as are available.

The actuary may choose to select different development patterns for different groups of
accident years, depending on the extent of the changes in retentions and policy limits.
This may be a judgmental determination rather than a statistical construction.

Underwriting Changes

After discussing the claims issues with the insurer and examining the retention and limits profiles, the actuary may conclude that neither of these provides verification of the observed CRA changes in the data. In this case, changes in underwriting should be explored. For practical purposes, the term "underwriting" would also include risk management, loss control or engineering changes.

This is often a policy year, not a calendar year, phenomenon. If there have been underwriting changes, and the book of business has significantly turned over in recent years, then it may not be possible to measure the relative adequacy of the reserves in recent accident years to those from older accident years. If claims personnel indicate no changes in case reserving, the actuary should consider not making any adjustments to the data and assume that all years have consistent relative CRA.

Different patterns by year may still be appropriate, however, if the changing mix of types of insureds due to underwriting results in shorter or longer development patterns in recent years. This effect may be observed in the actual link ratios, or may have to be judgmentally accounted for. Discussions with underwriters may provide enlightenment here.

Incident Claims

If the definition of a "claim count" has changed due to the changing treatment of incident claims, then Berquist-Sherman type adjustments due to changes in case reserve adequacy may not be possible, because there will appear to be more (or fewer) open claims for recent accident years. This will give the illusion of changing case reserve adequacy, as incident claims are usually reserved at some low value. Such incident claims should be removed from the data as described earlier, to allow meaningful CRA diagnostics and adjustments. If they are not, then CRA methods cannot be used.

Paid Losses

For paid loss development, the actuary should first perform diagnostic tests to determine whether there have been changes in claims settlement rates (CSR). These diagnostics are also described by Berquist and Sherman. The actuary would select paid development factors that reflect his or her beliefs about changes in CSR by adjusting the data to the current year's basis. For purposes of determining a best estimate, the actuary would eliminate from consideration all other paid development patterns.

The CSR diagnostics determine if there have been changes in the rate at which claims are
settled by expressing closed claims at a given maturity as a percentage of ultimate
reported claims, separately by accident year. Alternatively, CWIPs may be related to
ultimate CWIPs. The choice of method is dependent on whether CWIP ratios have been
changing over the course of the historical data period. This was examined by the actuary
as part of the incurred loss development best practices described earlier.
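A sketch of the diagnostic follows; the triangle layout and figures are illustrative.

```python
def csr_diagnostic(closed_triangle, ultimate_reported):
    """Closed claim counts at each maturity as a percentage of ultimate
    reported claims, by accident year; a drift down (or up) the columns
    signals changing settlement rates. closed_triangle[i][k] holds closed
    counts for accident year i at maturity k."""
    return [[round(c / ult, 3) for c in row]
            for row, ult in zip(closed_triangle, ultimate_reported)]

closed = [[40.0, 70.0, 90.0],   # oldest year: three maturities observed
          [45.0, 72.0],
          [55.0]]
print(csr_diagnostic(closed, [100.0, 100.0, 100.0]))
```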

If CWIP ratios have not materially changed, then the actuary may use closed claims and ultimate reported claims to measure changes in CSR. Otherwise, CWIPs and ultimate CWIPs should be used. The actuary would have derived ultimate CWIPs as the exposure base for the GCC method after determining that CWIP ratios were changing.

Before performing CSR diagnostics, the actuary should remove incident claims from the
data as described earlier.

REFERENCES
[1] Berquist, James R., and Sherman, Richard E., "Loss Reserve Adequacy Testing: A Comprehensive, Systematic Approach," PCAS LXIV, 1977, p. 123.
[2] Bornhuetter, Ronald L., and Ferguson, Ronald E., "The Actuary and IBNR," PCAS LIX, 1972, p. 181.
[3] Buhlmann, Hans, "Estimation of IBNR Reserves by the Methods Chain Ladder, Cape Cod, and Complementary Loss Ratio," 1983, unpublished.
[4] Casualty Actuarial Society, Statement of Principles Regarding Property and Casualty Loss and Loss Adjustment Expense Liabilities, 1988.
[5] Gluck, Spencer, "Balancing Development and Trend in Loss Reserve Analyses," to be published in 1998 as part of the Proceedings of the CAS, Volume LXXXIV.
[6] Hachemeister, Charles A., and Stanard, James N., "IBNR Claim Count Estimation with Static Lag Functions," XIIth ASTIN Colloquium, International Actuarial Association, Portimao, Portugal, 1975.
[7] Littmann, Mark W., "Loss Estimation: The Exposure Approach," Evaluating Insurance Company Liabilities, Casualty Actuarial Society Discussion Paper Program, 1988, p. 315.
[8] Mack, Thomas, "Measuring the Variability of Chain Ladder Reserve Estimates," Casualty Actuarial Society Forum, Spring 1994, Volume 1, p. 101.
[9] Murphy, Daniel M., "Unbiased Loss Development Factors," Casualty Actuarial Society Forum, Spring 1994, Volume 1, p. 183.
[10] National Association of Insurance Commissioners, Statutory Codification Project Issue Paper: Unpaid Claims, Losses and Loss Adjustment Expenses, 1996.
[11] Patrik, Gary S., "Reinsurance," Ch. 6 in: Foundations of Casualty Actuarial Science, New York: Casualty Actuarial Society, 1990, pp. 277-369.
[12] Stanard, James N., "A Simulation Test of Prediction Errors of Loss Reserve Estimation Techniques," PCAS LXXII, 1985, p. 124. Including discussion of paper: Robertson, John P., PCAS LXXII, 1985, p. 149, and Peck, Edward F., PCAS LXXXII, 1995, p. 104.
