OPERATIONAL RISK: A SURVEY
BY IMAD A. MOOSA
Operational risk has, in a relatively short period of time, risen from non-recognition
to prominence as the culprit for spectacular corporate collapses. This paper surveys the
mushrooming literature on the subject, covering the definition, classification, characteris-
tics, modeling and management of operational risk. It is concluded that operational risk is a
controversial topic that will generate a significant amount of research in the years to come.
I. INTRODUCTION
Operational risk is the risk of losses arising from the materialization of a wide vari-
ety of events including fraud, theft, computer hacking, loss of key staff members,
lawsuits, loss of information, terrorism, vandalism and natural disasters. It has
been receiving increasingly significant attention from the media, regulators and
business executives, as financial scandals keep on surfacing (for example, Enron
and Parmalat) and because operational loss events have become the major cause of
spectacular business failures (for example, Barings Bank and Long-Term Capital
Management). The trend towards greater dependence on technology, more inten-
sive competition, and globalization has left the corporate world more exposed to
operational risk than ever before. With particular reference to the banking industry,
Buchelt and Unteregger (2004) argue that the risk of fraud and external events
(such as natural disasters) has been around ever since the beginning of banking
but it is technological progress that has boosted the potential of operational risk.
Likewise, Halperin (2001) argues that “operational risk has traditionally occupied
a netherworld below market and credit risk” but “headline-grabbing financial fi-
ascos, decentralized control, the surge in e-commerce and the emergence of new
products and business lines have raised its profile”. 1
The detrimental consequences of exposure to operational risk cannot be over-
stated. Blunden (2003) argues that operational risk is as likely to bring a company
to its knees as a market collapse, and in many cases it is clearly within management
1
Market risk results from fluctuations in financial prices, whereas credit risk arises because of the
possibility of default by borrowers. Operational risk, therefore, results from almost everything else. To
emphasize the importance of operational risk relative to market risk, Parsley (1996, p 74) wonders “what
is the use of having state-of-the-art market risk measurement tools if one rogue trader can bankrupt
your institution in a matter of weeks”. On similar lines, Kingsley et al. (1998) argue that “the value at
risk, risk scenario analysis and risk-adjusted performance measures, on which senior managers now
rely in much of the financial industry, are potentially misleading if they ignore operational risk”.
control, but it is not fully understood or exploited. 2 While market risk has tradition-
ally caught the attention of financial institutions, operational risk is increasingly
being taken more seriously, perhaps as being more detrimental than market risk
and credit risk. Studies of large operational loss events in the U.S. by Cummins
et al. (2006) and Wei (2006) show that a bank (or a financial institution in general)
can suffer a market value decline in the days surrounding the announcement of a
large loss that is significantly larger than the loss itself.
Many reasons have led to the increasing significance of operational risk but,
broadly speaking, some recent developments are conducive to the materialization
of operational loss events. These include the growth of e-commerce, mergers and
consolidations, the use of automated technology, the growing use of outsourcing
arrangements, and the increasing complexity of financial assets and trading pro-
cedures. The top three items on Ong’s (2002) updated list of the “top 10 reasons why
so many people are interested in operational risk” are (i) it is sexy, hot and com-
pletely nebulous; (ii) people think they have already conquered both market risk
and credit risk; and (iii) operational risk is a convenient catch-all “garbage dump”
for all kinds of possible risks. The greater interest of the regulators in operational
risk (enshrined in the Basel II Accord) can be attributed to the changing risk profile
of the financial services sector, which has resulted from the growth in e-business
activity and reliance on technology. The Basel Committee on Banking Supervision
(BCBS, 1999) expresses the view that operational risk is “sufficiently important
for banks to devote the necessary resources to quantify”.
The objective of this paper is to survey the recent literature on operational risk.
The starting point is to define operational risk, which is not straightforward; the
criteria for classifying operational risk are then discussed. The controversy concerning
the distinguishing features of operational risk is examined next, before we move on to
a discussion of the importance of, and the problems associated with, operational
risk modeling. Two more sections are devoted to the classification of operational
risk models and the examination of selected relevant empirical work. Measuring
regulatory capital against operational risk is discussed next, before moving on to
the topic of operational risk management. The final section offers some concluding
remarks.
as a separate risk category, comprising types of risk that could not be classified
as either credit risk or market risk. Hence, it was (and still is) rather tempting to
define operational risk negatively as any risk that is not market risk or credit risk.
Rao and Dev (2006) argue that “it was not uncommon, five years ago, to consider
OR as a residual”, and that “everything other than credit risk or market risk was,
by default, OR”.
However, a negative definition of operational risk as a residual item
is difficult to work with, in the sense that it cannot be used for the purpose of
operational risk measurement. Buchelt and Unteregger (2004) agree with this
view, asserting that the negative definition of operational risk is hardly suitable for
identifying its scope precisely, although it indicates (to a certain extent) what might
be meant. However, Medova and Kyriacou (2001) are convinced that thinking of
operational risk as “everything not covered by exposure to credit and market risk”
remains prevalent amongst practitioners. This view is also held by Jameson (1998)
who indicated that the definition most frequently given in telephone interviews was
“every risk source that lies outside the areas covered by market risk and credit risk”.
Viewing operational risk as a residual probably reflects both the lack of understanding
of it and its diversity.
Early definitions of operational risk appeared in the published literature of ma-
jor international banks and other bodies in the 1990s before the Basel Commit-
tee adopted its official definition that is currently used for regulatory purposes. 3
The Group of Thirty (1993) defined operational risk as “uncertainty related to
losses resulting from inadequate systems or controls, human error or manage-
ment”. The Commonwealth Bank of Australia (1999) came up with the broad
definition that operational risk is “all risks other than credit and market risk, which
could cause volatility of revenues, expenses and the value of the Bank’s busi-
ness”. Another early definition of operational risk emerged in a seminar at the Federal
Reserve Bank of New York when Shepheard-Walwyn and Litterman (1998) char-
acterized operational risk as “a general term that applies to all the risk failures
that influence the volatility of the firm’s cost structure as opposed to its revenue
structure”. Note, however, the sharp difference between the last two definitions:
in the definition of the Commonwealth Bank of Australia, operational risk im-
pinges upon both the revenue and cost sides of the business, but in the definition
of Shepheard-Walwyn and Litterman it affects the cost side only. This contrast
gives rise to the question of whether operational risk is one-sided, which is a controver-
sial issue that will be examined in detail later on. Finally, an early definition
that identifies internal and external sources of operational risk has been put for-
ward by Crouchy et al. (1998) who suggested that operational risk is “the risk
that external events, or deficiencies in internal controls or information systems,
will result in a loss, whether the loss is anticipated to some extent or entirely
unexpected”.
3
That is, for the purpose of calculating the regulatory capital against operational risk as stipulated by
Pillar 1 of the Basel II Accord.
4
Writing less than ten years ago, Webb (1999) argued that there was no consensus in the industry on
a precise definition of operational risk and that such a consensus was unlikely to emerge in the near
future.
5
Operational risk is a broader term than operations risk, the latter pertaining to the operational risk
associated with value-driving operations such as foreign exchange trading and settlement.
6
The BCBS (2001b) classifies losses into “would be included”, “should be included”, and “would not
be included”. The first category includes costs incurred to fix an operational risk problem, payments to
third parties and write-downs. The second category includes near misses, latent losses and contingent
losses. The third category includes the costs of improvement in controls, preventive action and quality
assurance, and investment in new systems.
in the banking industry”. He also describes the definition as being “opaque” and
“open-ended”, because it fails to specify the component factors of operational risk
or its relation to other forms of risk. The definition, according to Hadjiemmanuil
(2003), leaves unanswered many questions concerning the exact range of loss
events that can be attributed to operational failures. Thirlwell (2002) argues that
the BCBS’s definition represents a “measurable view of operational risk if you are
trying to come up with something which is quantifiable, but not good if you think
about what causes banks to fail”. 7
Vinella and Jin (2005) come up with yet another definition of operational risk,
of which (they claim) the BCBS’s definition is a special case. They define opera-
tional risk as “the risk that the operation will fail to meet one or more operational
performance targets, where the operation can be people, technology, processes,
information and the infrastructure supporting business activities”. They argue that
the BCBS’s definition is a special case of their “generalized” definition when the
failure to meet an operational performance target results in a direct monetary loss.
Again, there is no specific mention of the role of external factors in this definition.
Is the definition of operational risk such a critical issue that it triggers so much
disagreement? One view is that to measure something, we must first define it. But
Lam (2003) argues against being “fussy” about the definition of operational risk as
it does not serve any purpose as far as operational risk management is concerned.
This is why the first step in his “ten steps to operational risk management” is
“define it and move on”. Lam’s underlying argument is that “many institutions do
not get off the ground because too much time is spent trying to come up with the
perfect definition of operational risk”. The problem, however, lies in the concept of
a “perfect definition”. One thing that we know is that we have to choose between
comprehensiveness (idealism) and pragmatism, with the latter seemingly the better
choice. 8
would include people risk, process risk, system (or technology) risk and external
risk. For instance, external risk includes external fraud (such as external money
laundering), natural disasters (such as floods) and non-natural disasters (such as
arson).
An alternative to the cause as a criterion for classifying operational risk is to
use event type. One perceived advantage of an event-based classification is that it
makes the operational risk manager’s task easier, as losses can be considered to ma-
terialize in an event. The BCBS has developed a matrix of seven broad categories
of loss events that are further broken down into sub-categories and related activity
examples. The categories include internal fraud (such as embezzlement); exter-
nal fraud (such as forgery); employment practices and workplace safety (such
as discrimination); clients, products and business practices (such as money
laundering); damage to physical assets (such as terrorism); business disruption and
system failure (such as power outages); and execution, delivery and process man-
agement (such as missing legal documents). This classification is similar to the
typology of hazards used by the insurance industry.
Peccia (2003) suggests that a classification based on causes is prone to errors
and misunderstanding and that a more appropriate schema is the classification
of losses by the area of impact on the results, as the ultimate objective is to
explain the volatility of earnings arising from the direct impact of losses on the
financial results. The problem is that the causes and effects of operational loss
events are often confused. Operational risk types, such as human risk and system
risk, constitute the cause (not the outcome) of risk, as the latter is the monetary
consequence. However, it will be argued later that classifying loss events by cause
rather than consequence makes it easier to distinguish operational loss events from
market and credit loss events.
that this activity involves operational risk? The answer is simple: this activity gen-
erates fee income. Hence, it is bizarre to claim that operational risk is not taken
for profit.
The argument that operational risk leads to loss or no-loss situations can be
demonstrated to be invalid by considering examples from outside the world of
business. People accept exposure to loss or no-loss situations because they
believe that some risks are worth taking, given the potential reward. We take
planes although we might find ourselves in loss or no-loss situations when the
plane is hijacked or, for some reason, it loses its tail fin (almost certainly a loss
outcome in the second case). People still work for banks, knowing that they may
find themselves in loss or no-loss situations when armed robbers take hold of a
bank. We still take cruise ships, exposing ourselves to situations where we might
find ourselves stranded in the middle of the ocean, contemplating the possibility
of being eaten by sharks. The same applies to people practicing extreme sports,
and just think about those who run with the bulls in the narrow streets of a small
Spanish town every summer (lunacy or thrill seeking, involving risk-reward trade
off?). In all cases, it is a matter of choice: we deliberately take on risk for the sake
of potential reward, and in this sense risk cannot be one-sided. Being in a loss
or no-loss situation is the materialization of the unfavorable outcome or the bad
side of risk. This argument is even more compelling with respect to the business
world. Firms take on operational risk in their day-to-day operations because these
operations generate income. Increasing the size of operations and going into new
operations lead to more operational risk and more return. Is this not a genuine
risk-return trade-off?
Turning to the proposition that operational risk is idiosyncratic (in the sense that
when it hits one firm, it does not spread to other firms), Lewis and Lantsman (2005)
describe operational risk as being idiosyncratic because “the risk of loss tends to
be uncorrelated with general market forces”. This, the argument goes, is not a
characteristic of market risk and credit risk: a market downturn affects all firms,
and a default by the customers of one firm affects its ability to meet its obligations
to other firms. Danielsson et al. (2001) use the proposition that operational risk
is idiosyncratic to criticize the Basel II Accord, arguing that there is no need to
regulate operational risk because it is idiosyncratic.
The view that operational risk is idiosyncratic is rather strange, because it implies
the following. When a bank incurs losses from loan default or market downturn, its
ability to meet its obligations to other banks will be affected, but this is not the case
when a bank incurs losses because of the unauthorized activities of a rogue trader.
Does this mean that other banks were not affected by the spectacular operational
failures of Barings Bank in 1995 and Long-Term Capital Management in 1998? 10
And what about the 1974 (operational) failure of Bankhaus Herstatt, which has
led to the establishment of the Basel Committee on Banking Supervision whose
10
The operational failure of Long-Term Capital Management attracted the attention of the Federal
Reserve because of concern about its systemic effects.
11
Moosa (2007a) argues that operational risk will not be idiosyncratic in the presence of groupthink.
regulatory capital than under the other two approaches (the basic indicators ap-
proach and the standardized approach). Indeed, it is arguable that one advantage
of operational risk modeling is that the resulting models allow the firm to meet the
regulatory requirements.
Not everyone is so enthusiastic about the relevance of operational risk modeling
to operational risk management, however. Rebonato (2007, p xvi) argues that
“although the quantitative approach remains the high route to risk management, a
lot of very effective risk management can be done with a much simpler approach”,
describing the latter as being “a measurable and meaningful approximation to the
quantitatively correct answer”. In particular, Rebonato is skeptical about the ability
of risk managers to move from the probabilistic assessment of risk to decisions.
He also argues against the use of internal models for the purpose of meeting
regulatory requirements. Likewise, Herring (2002) argues that operational risk
models are insufficiently reliable to justify replicating, for operational risk, the
approach used with market risk. This point has also been made by the Shadow Financial
Regulatory Committee (2001), Altman and Saunders (2001) and Llewellyn (2001).
Currie (2004) lists the potentially unintended consequences arising from the use
of operational risk models for practical risk management purposes, including (i)
false reliance, (ii) management of the model rather than reality, (iii) misdirected
focus, (iv) misdirected resources, (v) discouragement of the “whistle-blowers”,
and (vi) blissful ignorance.
It is perhaps the case that objections to the use of the quantitative approach to
operational risk management are motivated by the problems encountered by any
endeavor to model operational risk. Referring to the modeling techniques sug-
gested by the Basel Committee’s advanced measurement approach, Davis (2005,
p 1) argues that the implementation of this approach “could easily turn into a night-
mare”. Hughes (2005) expresses the view that “the challenge on the operational
risk side has turned out to be vastly more complex and elusive than originally envis-
aged”. To start with, finding a proper and universally accepted definition for this
kind of risk is problematic, as we have seen. But even if an acceptable definition
were available, there are other serious problems.
A serious problem is that of data availability (or rather unavailability). Muzzy
(2003) highlights this problem by arguing that “anyone venturing into operational
risk management has quickly learned that the process is doomed without robust
data”. This is because there is in general an absence of reliable internal operational
loss data, but that is not all. Publicly-available operational loss data pose unique
modeling challenges, the most important of which is that not all losses are reported
in public (which means that they will not appear on publicly-available external
databases). 13 de Fontnouvelle et al. (2006) argue that if the probability that an
operational loss is reported increases as the loss amount increases, there will be a
13
External operational loss databases report operational loss events that make it to the media, but these
represent a tiny fraction of the total number of operational loss events experienced by a typical bank.
Credit card fraud is probably the most frequent loss event experienced by banks (see, for example,
BCBS, 2003a), but these are not reported in the media, because most of them are not high-profile
events.
14
See Moosa (2007b) on internal and external operational loss databases.
15
The issues of scaling and appropriateness are dealt with by Na et al. (2005) and Pezier (2003). Na
et al. propose a scaling mechanism that can be used to mix internal and external data. Pezier, however,
casts considerable doubt on the usefulness of external operational loss data by wondering about the
relevance of an exceptional loss incurred by an Indian broker to the operational risk distribution of an
asset manager based in Manhattan.
by empirical evidence”. Kaiser and Kohne (2006) argue that this assumption is
particularly troublesome because a simple summation of high percentile VARs
implies the simultaneous occurrence of several worst-case scenarios. The prob-
lem, however, is that it is difficult to assess the level of correlation between different
risk types and/or business units because of the lack of historical data. Powojowski
et al. (2002) express the view that although some correlation exists between oper-
ational losses, modeling this correlation is not an easy task. This problem invites
subjectivity and bias if banks wish to minimize their regulatory capital against
operational risk.
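To see why the summation assumption matters, a minimal simulation can be used; the lognormal parameters and the correlation of 0.3 below are arbitrary illustrations rather than calibrated values. Whenever dependence between two loss streams is less than perfect, the sum of their stand-alone 99.9% VaRs exceeds the 99.9% VaR of their combined loss, which is the implicit conservatism referred to above.

```python
# A minimal simulation (hypothetical parameters) illustrating why simply adding
# high-percentile VaRs across risk types amounts to assuming that worst cases
# occur simultaneously: the sum of stand-alone VaRs exceeds the VaR of the sum
# whenever the two loss streams are less than perfectly correlated.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two correlated standard normal drivers (correlation 0.3, chosen arbitrarily).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n)

# Map the drivers to lognormal annual losses for two hypothetical risk types.
loss_a = np.exp(12.0 + 1.5 * z[:, 0])
loss_b = np.exp(11.5 + 2.0 * z[:, 1])

def var(x, q=0.999):
    return np.quantile(x, q)

sum_of_vars = var(loss_a) + var(loss_b)   # simple summation across risk types
var_of_sum = var(loss_a + loss_b)         # firm-wide VaR allowing for dependence

print(f"Sum of 99.9% VaRs : {sum_of_vars:,.0f}")
print(f"99.9% VaR of sum  : {var_of_sum:,.0f}")
print(f"Diversification   : {1 - var_of_sum / sum_of_vars:.1%}")
```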
control and reliability analysis, which is rather similar to causal networks, is used
widely to evaluate manufacturing processes. In connectivity analysis, the emphasis
is on the connections between the components of the process.
The second approach is the factor approach, whereby an attempt is made to
identify the significant determinants of operational risk, either at the firm level
or lower levels (individual business lines or processes). Hence, operational risk is
estimated as

$$OR = \alpha + \sum_{i=1}^{m} \beta_i F_i + \varepsilon \qquad (1)$$

where $F_i$ is risk factor $i$. The factor approach covers risk indicators, CAPM-like
models and predictive models. In the risk indicators approach, a regression-based
technique is used to identify risk factors such as the volume of operations, audit
ratings and employee turnover. CAPM-like models, which are also known as arbi-
trage pricing models and economic pricing models, are used to relate the volatility
of returns to operational risk factors. In predictive models, discriminant analysis
and similar techniques are used to identify the factors that lead to operational
losses.
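As a rough sketch of how the risk-indicators variant of equation (1) might be estimated in practice, the following regression uses ordinary least squares on simulated data; the indicator names (volume of operations, audit rating, staff turnover) follow the examples in the text, but the numbers and coefficients are purely illustrative and not drawn from any of the studies cited.

```python
# A rough sketch of the risk-indicators variant of the factor approach in
# equation (1): regress an operational loss measure on candidate risk factors.
# Data and coefficients are simulated/illustrative, not taken from the survey.
import numpy as np

rng = np.random.default_rng(1)
n = 120  # e.g. monthly observations

# Hypothetical risk indicators F_i: volume of operations, audit rating, staff turnover.
volume = rng.normal(100, 10, n)
audit_rating = rng.integers(1, 6, n).astype(float)
turnover = rng.normal(0.05, 0.01, n)

# Simulated operational losses driven by the factors plus noise.
losses = 5.0 + 0.8 * volume + 3.0 * audit_rating + 200.0 * turnover + rng.normal(0, 5, n)

# OLS estimation of OR = alpha + sum_i beta_i F_i + eps.
X = np.column_stack([np.ones(n), volume, audit_rating, turnover])
beta, *_ = np.linalg.lstsq(X, losses, rcond=None)

for name, b in zip(["alpha", "volume", "audit_rating", "turnover"], beta):
    print(f"{name:>13s}: {b: .3f}")
```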
The third approach is the actuarial approach, whose focus is the loss distribu-
tion associated with operational risk. Wei (2007) argues that the actuarial approach
seems to be the natural choice to quantify operational risk by estimating the fre-
quency and severity distributions separately. 16 This approach, which will be de-
scribed in more detail later on, covers the following techniques: (i) the empirical
loss distributions technique, (ii) the parameterized explicit distributions approach,
and (iii) the extreme value theory (EVT).
16
Wei (2007) argues for the actuarial approach in preference to other approaches. For example, he
suggested that Bayesian networks (proposed by Alexander (2003a), Cruz (2003b), and Giudici and
Bilotta (2004)) introduce subjectivity and that copula-based models (proposed by Bee (2005) and
Embrechts et al. (2003)) require abundant data.
$$\cdots + \sum_{i=1}^{4} \gamma_{i,t} FF_{i,t} + \sum_{i=1}^{3} \pi_{i,t} R_{i,t} + \varepsilon_t \qquad (2)$$

where $r_t$ and $r_{t-1}$ are the monthly current and lagged equity returns; $x_{i,t}$ ($i =
1, 2, \ldots, 22$) is the first difference of the 22 variables used to represent credit
risk, interest rate risk, exchange rate risk and market risk; $FF_{i,t}$ represents the three
Fama-French (1993) factors as well as a momentum factor; and $R_{i,t}$ represents
three alternative industry factors measured as the average monthly return for each
industry sector. The residual term from equation (2) is taken to be a measure
of operational risk. The coefficients were estimated using a rolling window of
50 months to yield results indicating that the ratio of the residual (operational risk)
to total stock return is 17.7%, with considerable monthly variance. This finding
suggests that financial firms have considerable levels of residual operational risk
exposure that has been left relatively unmanaged.
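The mechanics of this residual-based measure can be sketched as follows. The 50-month rolling window follows the description above, but the data are simulated and the handful of regressors merely stand in for the 22 risk variables plus the Fama-French, momentum and industry factors; the residual-share statistic is likewise only a rough stand-in for the ratio reported in the text.

```python
# A sketch of the residual-based measure of operational risk described above:
# estimate the factor regression over a rolling 50-month window and treat the
# unexplained part of the return as "operational". Data and regressors are
# placeholders, not the variables used in the underlying study.
import numpy as np

rng = np.random.default_rng(2)
T, window = 300, 50

r = rng.normal(0.01, 0.05, T)                      # equity returns (simulated)
X = np.column_stack([
    np.ones(T),
    np.concatenate([[0.0], r[:-1]]),               # lagged return
    rng.normal(size=(T, 3)),                       # stand-ins for the risk factors
])

residual_share = []
for t in range(window, T):
    Xw, rw = X[t - window:t], r[t - window:t]
    beta, *_ = np.linalg.lstsq(Xw, rw, rcond=None)
    resid = r[t] - X[t] @ beta                     # unexplained part of the return
    residual_share.append(abs(resid) / max(abs(r[t]), 1e-8))

print(f"Average residual share of the return: {np.mean(residual_share):.1%}")
```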
de Fontnouvelle et al. (2006) address the problem of sample selection bias using
an econometric model in which the truncation point for each loss (that is, the value
below which the loss is not reported) is modeled as an unobserved random variable.
By using two external operational loss databases to estimate the loss distribution
and the capital charge, they conclude that the regulatory capital held
against operational risk often exceeds that held against market risk. They also
conclude that supplementing internal data with external data on extremely large
events could result in a significant improvement in operational risk models.
de Fontnouvelle et al. (2004) used loss data covering six large internationally-
active banks as part of the BCBS’s (2003a) operational risk loss data exercise to
find out if the regularities in the loss data make consistent modeling of operational
losses possible. Their results turned out to be consistent with the publicly reported
operational risk capital estimates produced by banks’ internal economic capital
models. Moscadelli (2005) analyzed data from the BCBS’s exercise, performing
a thorough comparison of traditional full-data analyses and extreme value meth-
ods for estimating loss severity distributions. He found that extreme value theory
outperformed the traditional methods in all of the eight business lines proposed by
the BCBS. He also found the severity distribution to be very heavy-tailed and that
a substantial difference exists in loss severity across business lines. In a similar
study, Wei (2007) utilized data from the OpVar database to estimate the aggregate
tail operational risk exposure, implementing a Bayesian approach to estimate the
frequency distribution, while estimating the severity distribution by introducing a
covariate. He concluded that “the main driving force of the capital requirement is
the tail distribution and the size of a bank”.
In another study, Wei (2003) examined operational risk in the insurance industry.
By using data from the OpVar operational loss database, he found results indicating
that operational loss events have a significantly negative effect on the market value
of the affected firms and that the effect of operational losses goes beyond the
firm experiencing the loss event. The conclusion derived from this study is that
“the significant damage of market values of both the insurers and the insurance
industry caused by operational losses should provide an incentive for operational
risk management in the U.S. insurance industry”.
In a more recent study, Wei (2006) examined the impact of operational loss
events on the market value of announcing and non-announcing U.S. financial
institutions using data from the OpVar database. The results reveal significantly
negative impact of the announcement of operational losses on stock prices. He
also found the declines in market value to be of a larger magnitude than
the operational losses causing them, which supports the conjecture put forward
by Cummins et al. (2006). A significant contagion effect was also detected. By
using data from the same source, Cummins et al. (2006) conducted an event study
of the impact of operational loss events on the market values of U.S. banks and
insurance companies, obtaining similar results to those obtained by Wei (2006).
They found losses to be proportionately larger for institutions with higher Tobin’s
Q ratios, which implies that operational losses are more serious for firms with
strong growth prospects.
banks’ activities into eight business lines: corporate finance, trading and sales,
retail banking, commercial banking, payment and settlement, agency services,
asset management and retail brokerage. Within each business line, gross income
is taken to be a proxy for the scale of the business operation and hence a likely
measure of the extent of operational risk (as in the BIA). 17 The capital charge for
each business line is calculated by multiplying gross income by a factor (β) that is
assigned to each business line. The total capital charge is calculated as a three-year
average of the simple sum of capital charges of individual business lines in each
year. Hence

$$K = \frac{\sum_{t=1}^{3} \max\left(\sum_{j=1}^{8} \beta_j Y_{j,t},\, 0\right)}{3} \qquad (4)$$

where $\beta_j$ is set by the Basel Committee to relate the level of required capital to
the level of gross income for business line $j$, and $Y_{j,t}$ is the gross income of
business line $j$ in year $t$.
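A minimal sketch of the calculation in equation (4) follows. The beta factors are those prescribed for the Basel II standardized approach, while the gross income figures (and the subset of business lines used) are purely hypothetical.

```python
# A small sketch of the standardized-approach calculation in equation (4):
# per-business-line gross income times beta, summed within each year, floored
# at zero, and averaged over three years. Gross income figures are hypothetical.
BETA = {
    "corporate finance": 0.18, "trading and sales": 0.18, "retail banking": 0.12,
    "commercial banking": 0.15, "payment and settlement": 0.18,
    "agency services": 0.15, "asset management": 0.12, "retail brokerage": 0.12,
}

# Hypothetical gross income (in millions) per business line for three years.
gross_income = [
    {"retail banking": 400, "trading and sales": 150, "commercial banking": 250},
    {"retail banking": 420, "trading and sales": -80, "commercial banking": 260},
    {"retail banking": 430, "trading and sales": 200, "commercial banking": 270},
]

yearly = [
    max(sum(BETA[line] * gi for line, gi in year.items()), 0.0)
    for year in gross_income
]
capital_charge = sum(yearly) / 3
print(f"Standardized-approach capital charge: {capital_charge:.1f}")
```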
The BCBS (2004a) suggests that if banks move from the BIA along a continuum
towards the AMA, they will be rewarded with a lower capital charge. 18 The reg-
ulatory capital requirement under the AMA is calculated by using the bank’s internal operational
risk model. One of the objectives of the Basel II Accord is to align regulatory
capital with the economic capital determined by the banks’ internal models, which
can be achieved by using the AMA. 19 Under this approach, banks must quantify
operational risk capital requirements for seven types of risk and eight business
lines, a total of 56 separate cells, where a cell is a combination of business line
and event type. These estimates are aggregated to obtain a total operational risk
capital requirement for the bank as a whole, thus ignoring correlation.
The problem is that it is not quite clear what the AMA comprises. For example,
Chapelle et al. (2004) define the AMA as encompassing “all measurement tech-
niques that lead to a precise measurement of the exposure of each business line of
a financial institution to each category of operational loss event”. It is sometimes
described as encompassing three versions: the loss distribution approach (LDA),
17
Using gross income as an indicator of operational risk has been criticized. For example, Herring
(2002) argues that gross income has only a tenuous link to operational risk, but Dowd (2003) argues that
it is the least bad option. Although the BCBS (2001c) has suggested other indicators (such as annual
average assets, annual settlement throughput and total funds under management), the Basel II document
(BCBS, 2004a) defines regulatory capital in terms of gross income only. When an indicator other than
income is used, this is sometimes known as the alternative standardized approach (Moosa, 2007b).
18
This is indeed a problematical feature of Basel II because only large, internationally-active banks
will be allowed to use the AMA. One reason why the U.S. has decided to delay the implementation
of Basel II is complaints by small U.S. banks that the Accord would put them in a weak competitive
position relative to larger banks.
19
Economic capital is the amount of capital that a firm (or a unit within the firm) must hold to protect
itself with a chosen level of certainty (confidence level) against insolvency due to unexpected losses
over a given period of time (for example, one year). Regulatory capital, on the other hand, is the
capital prescribed by the regulator. Economic capital is typically determined by an internal model of
the underlying firm.
the scenario-based approach (SBA) and the scorecard approach (SCA). The basis
of classification here is the nature of the data required to implement the procedure:
while the LDA depends on historical data (hence, it is backward-looking), the
other two approaches are forward-looking because hypothetical futuristic data is
collected from “expert opinion” via scenario analysis and scorecards. For example,
Andres and van der Brink (2004) list the three approaches as separate versions of
the AMA and go on to illustrate a scenario-based AMA. Likewise, Kuhn and Neu
(2004) describe the AMA as being dependent on internal or external data or expert
knowledge, meaning that they are separate approaches.
On the other hand, it is sometimes claimed that the scenario-based and score-
card approaches are not really separate versions of the AMA, but rather means for
collecting data to supplement the historical data used with the LDA. For exam-
ple, Currie (2004) describes the AMA as involving the estimation of unexpected
losses based on a combination of internal and external data, scenario analysis and
bank-specific environment and internal controls. Reference to scenario analysis
and internal controls implies that the scenario-based approach and the scorecard
approach are used to collect data to supplement the internal and external data used
in the LDA. Likewise, Kalyvas et al. (2006, pp 123–124) argue that the AMA
measurement system must take into account internal data, external data, scenario
analysis, and internal controls and business environment factors. Haubenstock and
Hardin (2003) outline the steps involved in the LDA, which is used to calculate
the capital charge from internal and external data. Then they list some additional
steps, including the development of scenarios for stress testing and incorporating
scorecards and risk indicators. The implication here is that the scenario-based ap-
proach and the scorecard approach are used to adjust the capital charge calculated
by using the LDA. Reynolds and Syer (2003) mention, as separate approaches, the
IMA, LDA and SCA, but not the SBA, and the same idea is expressed by Kuhn and
Neu (2005). This is in contrast with Fujii (2005) who explains how the “scenario-
based advanced management approach (AMA) provides solutions to some of the
problems [of the LDA]”. Chapelle et al. (2004) argue that while the AMA could
encompass any proprietary model, the most popular AMA methodology is by far
the LDA. 20
The Basel Committee seems to accept the two possibilities of regarding scenario-
based analysis as a separate version of the AMA and a means of collecting sup-
plementary data for the LDA. In BCBS (2003c), it is stated that scenario-based
analysis may be used as an input or may form the basis of an operational risk analyt-
ical framework, particularly when internal data, external data and the assessment
of the business environment and internal controls are inadequate (that is, when the
LDA and SCA are unimplementable). But only the view that the SBA and SCA are
used as supplementary procedures is expressed in the 2001 working paper on the
regulatory treatment of operational risk (BCBS, 2001c). This document describes a
20
It is not clear how the LDA is by far the most popular methodology, given that it is initially unavailable
for regulatory purposes and that it is extremely difficult to implement.
sound (operational) risk management system as involving the use of internal data,
relevant external data, scenario analysis and factors reflecting the business envi-
ronment and internal control system. This is Currie’s description of the AMA and
also that of Giudici (2004) who interprets this statement as implying that the AMA
should take into account internal and external data, scenario-based expert opinion
and causal factors reflecting the business environment and control systems. Alter-
natively, Chernobai and Rachev (2004) argue that the Basel Committee (BCBS,
2001d) suggests five methodologies for the measurement of regulatory capital:
(i) the basic indicators approach, (ii) the standardized approach, (iii) the internal
measurement approach, (iv) the scorecard approach, and (v) the loss distribution
approach. But no matter whether the scorecard and scenario based approaches are
separate versions of the AMA or just a means for collecting supplementary data,
they both suffer from the problem of subjectivity and bias because the data are
collected from the so-called “expert opinion”. Rebonato (2007, p 45) argues that
if an expert is held responsible when things go badly under his watch, but not
correspondingly rewarded if things turn out to be better than expected, it is not
difficult to imagine in which direction his predictions will be biased.
In BCBS (2001b), two versions of the AMA are proposed, the LDA and the
internal measurement approach (IMA), which is the same classification used by
Kalyvas et al. (2006). 21 The difference between the two approaches is that the
IMA is used to estimate unexpected loss by relating it to expected loss, whereas the
LDA is used to estimate unexpected loss from the total loss distribution. The BCBS
(2001b) makes it clear that the loss distribution approach “will not be available at
the outset of the New Basel Capital Accord”. Initially, the AMA will take the form
of the IMA, under which the capital charge for cell ij will be calculated as
$$K = \sum_{i=1}^{8} \sum_{j=1}^{7} \gamma_{ij} E_{ij} P_{ij} L_{ij} \qquad (5)$$
21
Alexander (2003b) argues that the IMA is rooted in the LDA, in the sense that it provides an analytical
solution whereas the LDA uses Monte Carlo simulation. Likewise, Frachot et al. (2001) view the IMA
as “an attempt to mimic LDA through a simplified, easy-to-implement way”.
22
The parameter γ will be provided by the regulator for each business line/event type. E is a measure of
exposure to operational risk, which the Basel Committee will standardise on the basis of the individual
bank data. P and L, which will be provided by banks on the basis of internal models, are respectively
the probability of occurrence of a loss event and the proportion of the exposure that will be lost if and
when a loss event materializes.
$$K = \sum_{i=1}^{8} \sum_{j=1}^{7} \gamma_{ij} E_{ij} P_{ij} L_{ij} R_{ij} \qquad (6)$$

where $R$ is the risk profile index (equal to 1 for the industry). For a bank with a fat-tailed
distribution, $R > 1$, and vice versa.
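The IMA calculation in equations (5) and (6) can be sketched as follows, with made-up values for the gamma factors, exposures, probabilities, loss rates and risk profile indices; the cell labels are illustrative only.

```python
# A minimal sketch of the IMA calculation in equations (5) and (6): the capital
# charge is the sum over business line / event type cells of gamma * E * P * L,
# optionally scaled by the risk profile index R. All numbers are made up.
cells = [
    # (gamma, exposure E, probability of loss event P, loss given event L, R)
    (1.2, 1_000.0, 0.020, 0.10, 1.0),   # e.g. retail banking / external fraud
    (0.9, 5_000.0, 0.005, 0.30, 1.3),   # e.g. trading and sales / internal fraud
    (1.5, 2_000.0, 0.010, 0.05, 0.8),
]

k_eq5 = sum(g * e * p * l for g, e, p, l, _ in cells)        # equation (5)
k_eq6 = sum(g * e * p * l * r for g, e, p, l, r in cells)    # equation (6)

print(f"IMA capital charge, equation (5): {k_eq5:.2f}")
print(f"IMA capital charge, equation (6): {k_eq6:.2f}")
```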
The loss distribution approach is described by the BCBS (2001b) as being a
“more advanced version of the internal methodology”, but the Basel Committee
makes it clear that this approach will not be used at this stage. If and when (if at
all) it is used, two provisions are designed to make it easier to implement: (i)
correlations will not be considered, and (ii) the structure of the business lines and
event types will be determined by the bank itself. 23 This approach is different
from the IMA in that it allows a direct estimation of unexpected losses without
specifying the gamma factor.
Under the LDA, the total loss distribution, from which the capital charge is cal-
culated, is obtained by combining (by using Monte Carlo simulations) the loss fre-
quency distribution and the loss severity distribution. The distributions are selected
and parameterized on the basis of historical data and sometimes supplemented by
scenario analysis and expert opinion. Typically, the choice falls on the Poisson dis-
tribution for frequency and some thick tail distribution (such as the lognormal and
gamma distributions) for severity. The capital charge for cell ij is then calculated
as being equal to the unexpected loss, which is the difference between the 99.9th
percentile of the total loss distribution and its mean, EL (the expected loss). This definition appears
to be what is embodied in the Basel II Accord as long as the underlying bank can
demonstrate that it has adequately provided for expected losses. This is because
one of the quantitative standards that the users of the AMA must satisfy is that
regulatory capital must be calculated as the sum of the expected loss and unexpected
loss unless it can be demonstrated that the expected loss is adequately captured in
the internal business practices, in which case regulatory capital is meant to cover
the unexpected loss only. 24
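A minimal sketch of the LDA for a single business line/event type cell, assuming (as is typical) a Poisson frequency distribution and a lognormal severity distribution, might look as follows; all parameters are arbitrary illustrations rather than calibrated values.

```python
# A sketch of the LDA for one cell: simulate the annual number of losses from a
# Poisson distribution and loss severities from a lognormal distribution, build
# the aggregate annual loss distribution by Monte Carlo, and take the 99.9th
# percentile minus the mean (unexpected loss) as the capital charge.
import numpy as np

rng = np.random.default_rng(3)
n_years = 100_000
lam = 25              # expected number of losses per year (Poisson)
mu, sigma = 9.0, 2.0  # lognormal severity parameters (arbitrary)

counts = rng.poisson(lam, n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, c).sum() for c in counts])

expected_loss = annual_loss.mean()
percentile_999 = np.quantile(annual_loss, 0.999)
capital_charge = percentile_999 - expected_loss   # unexpected loss

print(f"Expected loss     : {expected_loss:,.0f}")
print(f"99.9th percentile : {percentile_999:,.0f}")
print(f"Capital charge    : {capital_charge:,.0f}")
```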
23
One of the proclaimed advantages of the AMA is that it results in lower capital charges than the basic
indicators approach and standardized approach. Failure to allow for the effect of correlation produces
higher capital charges than otherwise. There seems to be some contradiction here unless there are other
reasons why the AMA would produce lower capital charges. This issue is discussed in detail by Moosa
(2007c), who concludes that the subjectivity of the AMA is the most likely reason for this outcome.
24
Frachot, Moudoulaud and Roncalli (2004) argue that there is ambiguity about the definition of
the capital charge, hence suggesting two other definitions: (i) the 99.9th percentile of the total loss
If correlation among risk categories is assumed to be perfect (that is, losses occur
at the same time) the capital charge for the whole firm is calculated by adding up
the individual capital charges for each risk type/business line combination. This is
what will be done initially if and when the LDA is adopted for regulatory purposes
(although it can be used for the calculation of economic capital). On the other
extreme, the assumption of zero correlation among risk categories (that is, they are
independent of each other) means that the firm-wide capital charge is calculated
by compounding all distribution pairs into a single loss distribution for the firm.
This is done by calculating the total loss produced by each iteration of the Monte
Carlo simulations.
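The two extreme assumptions can be illustrated with a simple simulation of two hypothetical cells: under perfect correlation the stand-alone charges are added, whereas under independence the per-iteration losses are added first and the charge is read off the compounded distribution. The distributional parameters are arbitrary.

```python
# Two hypothetical cells: add the stand-alone capital charges (perfect
# correlation) versus compounding the per-iteration losses into a single
# firm-wide distribution (zero correlation). Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(4)
n_years = 100_000

def simulate_cell(lam, mu, sigma):
    counts = rng.poisson(lam, n_years)
    return np.array([rng.lognormal(mu, sigma, c).sum() for c in counts])

cell_1 = simulate_cell(lam=25, mu=9.0, sigma=2.0)
cell_2 = simulate_cell(lam=10, mu=10.0, sigma=1.8)

def charge(losses):
    return np.quantile(losses, 0.999) - losses.mean()

perfect_corr = charge(cell_1) + charge(cell_2)   # add the stand-alone charges
independence = charge(cell_1 + cell_2)           # compound, then take the charge

print(f"Perfect correlation : {perfect_corr:,.0f}")
print(f"Zero correlation    : {independence:,.0f}")
```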
In between the two extremes of assuming perfect correlation and zero correlation
is the alternative of allowing for the explicit modeling of correlations between the
occurrences of loss events. This indeed is the most difficult procedure. The problem
here is that correlation, which is a simple form of the first moment of the joint
density of two random variables, does not capture all forms of dependence between
the two variables (it is a measure of linear association between the two variables).
Another problem with correlation is that it varies over time. This is why it is more
appropriate for this purpose to employ the copula, which is used to combine two
or more distributions to obtain a joint distribution with a prespecified form of
dependence. 25
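As an illustration of the idea, the following sketch uses a Gaussian copula, chosen here purely for simplicity, to impose a prespecified dependence on two arbitrary loss marginals (a lognormal and a gamma distribution); the dependence parameter and marginal parameters are assumptions.

```python
# A minimal sketch of using a (Gaussian) copula to impose a prespecified
# dependence structure on two loss distributions while keeping their marginals
# intact. The copula choice and all parameters are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 100_000
rho = 0.4  # dependence parameter of the copula (assumed)

# Step 1: correlated normals -> uniforms (the copula part).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

# Step 2: map the uniforms through the inverse CDFs of the chosen marginals.
loss_1 = stats.lognorm.ppf(u[:, 0], s=2.0, scale=np.exp(9.0))
loss_2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=5_000.0)

total = loss_1 + loss_2
print(f"99.9% VaR of the joint loss: {np.quantile(total, 0.999):,.0f}")
print(f"Empirical rank correlation : {stats.spearmanr(loss_1, loss_2)[0]:.2f}")
```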
In general, the capital charge (that is, regulatory capital or the regulatory
capital requirement) is calculated from the total loss distribution by using the
concept of value at risk (VAR). However, the use of the concept of value
at risk to measure operational risk capital charges has not escaped criticism.
For example, Hubner et al. (2003) argue against using a “VAR-like figure”
to measure operational risk, pointing out that although VAR models have been
developed for operational risk, questions remain about the interpretation of the
results. Another problem is that VAR figures provide an indication of the amount
of risk but not of its form (for example, legal risk as opposed to technology risk).
Moreover, the estimates of VAR can vary substantially with the underlying model.
For example, Kalyvas and Sfetsos (2006), who consider the issue of whether
the application of “innovative internal models” reduces regulatory capital, find
that the use of extreme value theory produces a lower estimate of VAR than the
variance-covariance, historical simulation and conditional historical simulation
methods.
distribution, and (ii) a definition that considers only losses above a threshold. Evidence for this ambi-
guity is provided by Wei (2007) who makes it explicit that “banks’ capital charge should be equal to
at least 99.9% quantile of their entire annual aggregate loss distribution in excess of expected losses”.
It seems that Wei has missed the qualifying statement “unless it can be demonstrated that the expected
loss is adequately captured in the internal business practices”.
25
Rosenberg and Schuermann (2006) show how the copula can be used for the purpose of integrated
risk management by constructing a joint distribution for market, credit and operational risk. The power
of the copula, they argue, lies in its ability to capture a rich dependence structure. However, Wei (2007)
argues that a drawback of copula-based models is the data requirement. For a discussion of the pros
and cons of copulas relative to correlation, see Moosa (2007b).
Some doubts have been raised about the use of the 99.9th percentile to mea-
sure value at risk, which is recommended by the Basel Committee. For example,
Alexander (2003b) argues that the parameters of the total loss distribution cannot
be estimated precisely because the operational loss data are incomplete, unreliable
and/or subjective. This makes the estimation of risk at the 99.9th percentile im-
plausible. Alexander argues that regulators should ask themselves very seriously
if it is sensible to measure the capital charge on the 99.9th percentile. Even worse,
Rebonato (2007) argues that the 99.9th percentile is a meaningless concept.
While Chernobai et al. (2006) argue that “all statistical approaches become
somewhat ad hoc in the presence of incomplete data”, they suggest four alterna-
tive approaches to the estimation of the frequency and severity distributions by
distinguishing between censored and truncated data. Data are censored when the
number of observations that fall in a given set is known, but the specific values of the
observations are unknown. Data are said to be truncated when observations that fall
in a given set are excluded. Thus, censored data affect the estimation of the severity
distribution, not the frequency distribution, whereas both are affected by truncated
data. The evidence on the effect of truncated data is mixed. Moscadelli et al. (2005)
highlight the potential drawbacks of neglecting the existence of thresholds in the
measurement process, suggesting that one way to circumvent this problem is to
reconstruct the shape of the lower part of the distribution by fitting the collected
data and extrapolating down to zero. Mignola and Ugoccioni (2007), on the other
hand, argue that neglecting events below the loss data collection threshold does
not lead to large errors in the aggregated expected loss quantiles and unexpected
loss for threshold values up to fairly large percentiles of the severity distribution.
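The effect of a collection threshold on severity estimation can be illustrated with a simple maximum likelihood exercise on simulated lognormal losses: the truncated likelihood divides the density by the probability of exceeding the threshold, whereas a naive fit to the recorded losses ignores the threshold altogether. All parameters below are arbitrary and the exercise is only a sketch of the issue discussed above.

```python
# Losses below a collection threshold never reach the database, so fitting the
# severity distribution naively to recorded losses biases the parameters. The
# truncated likelihood corrects for the missing lower tail.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)
mu_true, sigma_true, threshold = 9.0, 2.0, 20_000.0

losses = rng.lognormal(mu_true, sigma_true, 50_000)
recorded = losses[losses >= threshold]          # only these reach the database

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    # Truncated log-likelihood: log f(x) - log P(X >= threshold).
    return -(dist.logpdf(recorded) - dist.logsf(threshold)).sum()

fit = optimize.minimize(neg_loglik, x0=[np.log(recorded.mean()), 1.0],
                        method="Nelder-Mead")
naive_sigma, _, naive_scale = stats.lognorm.fit(recorded, floc=0)

print(f"True (mu, sigma)             : ({mu_true:.2f}, {sigma_true:.2f})")
print(f"Truncated-likelihood estimate: ({fit.x[0]:.2f}, {fit.x[1]:.2f})")
print(f"Naive fit (ignores threshold): ({np.log(naive_scale):.2f}, {naive_sigma:.2f})")
```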
Generally speaking, operational risk is much more difficult to quantify than
market risk and credit risk, which have much more well-behaved loss distributions
compared with operational risk. But one may ask the question why it is that op-
erational risk is more difficult to measure than credit risk, given that the concepts
on which measurement is based (the concepts of loss frequency and loss severity)
are equivalent to the concepts of default frequency and loss given default. de
Koker (2006) points out that despite these similarities, operational risk is difficult
to measure because of two characteristics of operational risk that we discussed
earlier in this paper: the absence of a good proxy for operational risk exposure and
the fat-tail characteristic of the loss distribution. 26
Insurance has always been used to mitigate various kinds of operational risk,
such as the risk of fire (damage to physical assets). Insurance companies have
been lobbying regulators to accept the idea of replacing (at least in part) regulatory
capital with insurance. Currently, a wide variety of insurance products (policies) are
available to banks, which include peril-specific products (such as computer crime
cover) and multi-peril products (such as the all-risk operational risk insurance), as
well as the traditional deposit insurance. There are, however, doubts about the role
of insurance in operational risk management. To start with, banks are (financially)
too big for insurance companies, which means that they cannot use insurance
effectively to cover all elements of operational risk. Cruz (2003a) identifies other
pitfalls with insurance for operational risk, including the following: (i) the limiting
conditions and exclusion clauses, which may impede payment in the event of
failure; (ii) delays in payment, which could result in serious damage to the claimant;
and (iii) the difficulty of determining the true economic value of insurance in the
absence of sufficient and appropriate data.
Brandts (2005) casts doubt on the ability of insurance to provide a “perfect
hedge” for operational risk, arguing that insurance compensation is often subject
to a range of limitations and exceptions. Specifically, he identifies three problems
(risks) with insurance: (i) the payment uncertainty resulting from mismatches in
the actual risk exposure and the insurance coverage; (ii) delayed payment, which
may result in additional losses; and (iii) the problem of counterparty risk resulting
from the possibility of default by the insurance company.
Young and Ashby (2003) are skeptical about the ability of insurance products
to go far enough in the current operational risk environment. The BCBS (2001b)
has expressed doubts about the effectiveness of insurance products, stating that “it
is clear that the market for insurance of operational risk is still developing”. And
although Basel II allows banks using the AMA to take account of the risk mitigating
impact of insurance in their regulatory capital calculations, some strict conditions
must be satisfied. In general, regulators have a problem with the proposition that
regulatory capital can be replaced (at least partially) with insurance. This is mainly
because regulators are skeptical about the feasibility of immediate payouts (which
is not what insurance companies are known for). There is also fear about the ability
of the insurers to get off the hook (completely or partially) through some dubious
clauses in the insurance policy.
A controversial issue is the claim that insurance is a key tool of risk trans-
fer, which Kaiser and Kohne (2006), for example, make explicit by stating that
“banks transfer risks by buying insurance policies”. However, taking insurance
does not really amount to risk transfer because the insured would still be ex-
posed to risk. Risk transfer in the strict sense occurs only if a firm outsources
the underlying activity to the insurer, which does not sound like a good idea. Without
that, insurance provides financial cover, should risk assumption lead to losses.
Taking insurance, therefore, is not risk transfer but rather (external) risk financing
through the insurance company as an alternative to financing it through capital and
reserves.
Confusion between risk transfer and risk financing is quite conspicuous in the
BCBS (2003b) paper on operational risk transfer across financial sectors. The paper
makes it explicit that “banks already transfer operational risk through insurance”
(p 6) but then shifts to the use of phrases like “finance those losses”. On one page
(p 7), it is first stated that “the firm has used insurance to transfer some of the risk
of internal fraud loss”, but in the following paragraph the word “transfer” is no
longer used. Instead, it is stated that “the insurance policy provides benefits that act
as a form of contingent capital in the event of an insured loss”, while referring to
“catastrophic coverage to finance low-frequency, high-severity losses”. Actually,
the graphical illustration of the use of insurance to cover operational risk does
not use the word “transfer” at all. Instead, the title of Graph 1 is “financing of
fraud losses over a one-year period, no insurance”, whereas the title of Graph 2 is
“financing of fraud losses over a one-year period, with insurance”. So, is it transfer
or financing? Logic and pure common sense tell us that it is financing, not transfer.
The authors of the BCBS paper try to stick to the customary term of risk transfer,
but it is sometimes quite obvious that this is the wrong term to use, in which case
they shift to the correct term of “risk financing”. 27
X. CONCLUDING REMARKS
There is much more disagreement than agreement amongst academics and profes-
sionals about the concept of operational risk as well as its causes, consequences,
characteristics and management. While there is a consensus on the views that
operational risk is diverse and that it is difficult to measure, there are lingering
disagreements about the definition of operational risk, its classification and what it
should and should not include. A large number of definitions have been suggested,
ranging from those that are hardly informative to those that look more like descrip-
tions than definitions, and from those that are very narrow to those that encompass
anything that is not related to market risk and credit risk. Strangely perhaps, oper-
ational risk is the only risk type that has an official regulatory definition, the Basel
Committee’s definition (market risk and credit risk do not have official definitions,
perhaps because they are straightforward). But this official definition, motivated
by regulatory pragmatism rather than comprehensiveness, has been criticized by
those who think it is too narrow and those who think it is too broad. Controversy
has also arisen about the criteria of classifying operational risk, whether opera-
tional loss events should be classified according to cause (people or systems), event
(internal fraud or external fraud) or consequence (asset write-down or fines).
27
As a compromise, it may be possible to argue that insurance can be used to transfer the financial
effects of an operational loss event because the firm buying the insurance still experiences the event.
The term “risk financing” is more appropriate because the very basic principles of risk management
tell us that risk can be dealt with in a number of ways, including risk assumption, risk avoidance,
risk transfer, risk reduction and risk financing. Hence, we are talking about risk transfer versus risk
financing, which is more appropriate than talking about the transfer of risk versus the transfer of the
financial effects of a loss event.
Operational risk is not understood very well, and it seems that there is disagree-
ment about its proclaimed features. Some of the controversial issues pertain to the
proclaimed features that it is one-sided, idiosyncratic, indistinguishable from
other risks, and transferable via insurance. This paper presented strong ar-
guments against what seems to be the conventional wisdom, expressing the views
that operational risk is not one-sided, is not idiosyncratic, is not indistinguishable
from other risks, and that it is not transferable via insurance.
There are also controversies about why and how operational risk should be mod-
eled and measured. But the most controversial issue is whether or not operational
risk should be regulated, as required by the Basel II Accord. To start with, there
is disagreement about the need for bank regulation in general, which is based on
the indisputable fact that banks command special importance in the domestic and
world economy and hence that avoiding bank failures should be an objective of the reg-
ulators. However, there is significant skepticism about the role of regulation as a
means of achieving financial stability. For example, Kaufman and Scott (2000)
argue that regulatory actions have been double-edged, if not counterproductive.
Koehn and Santomero (1980) suggest that regulation does not necessarily accom-
plish the declared objective of reducing the probability of bank failure and that
a case could be put forward for the proposition that the opposite result can be
expected. Benston and Kaufman (1996) assert that most of the arguments that are
frequently used to support special regulation for banks are not supported by either
theory or empirical evidence. They also share the view that an unregulated system
tends to achieve an optimal allocation of resources. When it comes to Basel II as a
form of bank regulation, Barth et al. (2006) conclude that Basel II is some sort of
“one size fits all” kind of regulation, which they seem to be very skeptical about.
Their empirical results reveal that raising regulatory capital bears no relation to
the degree of development of the banking system, the efficiency of banks and the
possibility of experiencing a crisis.
Risk-based regulation (including Basel II) has been criticized severely. Daniels-
son et al. (2002) demonstrate that, in the presence of risk regulation, prices and
liquidity are lower, whereas volatility is higher, particularly during crises. They
attribute this finding to the underlying assumption of the regulator that asset returns
are exogenous, which fails to take into account the feedback effect of trading de-
cisions on prices. Danielsson (2003) argues that while the notion that bank capital
be risk sensitive is intuitively appealing, the actual implementation (in the form
of Basel II) may boost financial risk for individual banks and the banking system
as a whole. Danielsson and Zigrand (2003) use a simple equilibrium model to
demonstrate “what happens when you regulate risk”, showing that even if regu-
lation lowers systemic risk (provided that not too many firms are left out by the
regulatory regime, which is what will happen under Basel II), this can only be
accomplished at the cost of significant side effects.
The management of operational risk (and financial risk in general), as envis-
aged by Basel II, has been criticized by Rebonato (2007) on the grounds of dif-
ferences between regulators and risk managers. While regulators are concerned
about catastrophic events (represented by the 99.9th percentile of the loss distri-
bution), there is more to risk management than rare events because banks are also
concerned about the daily risk-return trade-off. Risk managers, therefore, should
not do things the same way as the regulators. If we accept the logic of this argument,
the proclaimed novelty of the Basel II Accord, that of aligning regulatory capital
with economic capital, turns out not to be such a good idea after all. Regulatory capital is supposed
to protect banks from catastrophic events, whereas economic capital is what is
needed to run banks efficiently. Even more important, the argument goes, regu-
lators should not force banks to devote resources to the development of internal
models to calculate “numbers of dubious meaning” for regulatory purposes. The
recommendation is: keep it simple or let banks decide whether or not they want to
develop internal models. It would take a lot of convincing to get people to dispute
the validity of this view.
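To illustrate the gap between the two perspectives discussed above, the following sketch is a purely illustrative Monte Carlo simulation of a compound Poisson-lognormal loss distribution. The frequency and severity parameters are assumptions chosen for demonstration, not figures used by the author or prescribed by Basel II. It contrasts the 99.9th percentile of annual losses that regulatory capital targets with the lower percentiles a risk manager might monitor in the daily risk-return trade-off.

```python
# Minimal Monte Carlo sketch of a loss distribution approach (LDA).
# The Poisson frequency and lognormal severity parameters are illustrative
# assumptions, not calibrated values.
import numpy as np

rng = np.random.default_rng(seed=42)

n_years = 100_000          # number of simulated one-year horizons
lam = 25.0                 # assumed mean number of loss events per year
mu, sigma = 10.0, 2.0      # assumed lognormal severity parameters (log scale)

annual_losses = np.empty(n_years)
counts = rng.poisson(lam, size=n_years)
for i, n in enumerate(counts):
    # Aggregate annual loss = sum of n simulated lognormal severities
    annual_losses[i] = rng.lognormal(mu, sigma, size=n).sum() if n else 0.0

# Regulatory capital under the AMA is pegged to the 99.9th percentile,
# whereas a risk manager may care more about typical-year outcomes.
for q in (50.0, 95.0, 99.9):
    value = np.percentile(annual_losses, q)
    print(f"{q:5.1f}th percentile of annual loss: {value:,.0f}")
```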
Having gone through a somewhat detailed discussion of various aspects of operational
risk, we reach the inevitable conclusion that operational risk is truly a controversial
topic, one that has led to the emergence of a new strand of research that did not exist
some ten years ago. This survey, it is hoped, will serve as a concise introduction
to the topic as more and more academics and practitioners develop a taste for it.
XI. REFERENCES
Alexander, C. 2003a. “Managing Operational Risks with Bayesian Networks.”
Pp. 285–294 in Operational Risk: Regulation, Analysis and Management, ed.
C. Alexander. London: Prentice Hall-Financial Times.
Alexander, C. 2003b. “Statistical Models of the Operational Loss.” Pp. 129–170 in
Operational Risk: Regulation, Analysis and Management, ed. C. Alexander.
London: Prentice Hall-Financial Times.
Allen, L. and T. G. Bali. 2004. “Cyclicality in Catastrophic and Operational Risk
Measurements.” Unpublished paper, City University of New York.
Allen, L. and T. G. Bali. 2007. “Cyclicality in Catastrophic and Operational Risk
Measurements.” Journal of Banking and Finance 31:1191–1235.
Altman, E. and A. Saunders. 2001. “Credit Ratings and the BIS Reform Agenda.”
Unpublished paper, New York University.
Anders, U. and G. J. van den Brink. 2004. “Implementing a Basel II Scenario-
Based AMA for Operational Risk.” Pp. 343–368 in The Basel Handbook, ed.
K. Ong. London: Risk Books.
Barth, J., G. Caprio, and R. Levine. 2006. Rethinking Bank Regulation: Till Angels
Govern. New York: Cambridge University Press.
BCBS. 2001a. Basel II: The New Basel Capital Accord-Second Consultative Paper.
Basel: Bank for International Settlements.
BCBS. 2001b. Operational Risk: Supporting Document to the New Basel Accord.
Basel: Bank for International Settlements.
BCBS. 2001c. Working Paper on the Regulatory Treatment of Operational Risk.
Basel: Bank for International Settlements.
BCBS. 2003a. The 2002 Data Collection Exercise for Operational Risk: Summary
of the Data Collected. Basel: Bank for International Settlements.
BCBS. 2003b. Operational Risk Transfer Across Financial Sectors. Basel: Bank
for International Settlements.
BCBS. 2003c. Supervisory Guidance on Operational Risk: Advanced Measure-
ment Approaches for Regulatory Capital. Basel: Bank for International Settle-
ments.
BCBS. 2003d. Sound Practices for the Management of Operational Risk. Basel:
Bank for International Settlements.
BCBS. 2004a. Basel II: International Convergence of Capital Measurement and
Capital Standards: A Revised Framework. Basel: Bank for International Settle-
ments.
BCBS. 2004b. Bank Failures in Mature Economies. Basel: Bank for International
Settlements.
Bee, M. 2005. “Copula-Based Multivariate Models with Applications to Risk
Management and Insurance.” Unpublished paper, Università degli Studi di
Trento.
Bee, M. 2006. “Estimating the Parameters in the Loss Distribution Approach: How
can we Deal with Truncated Data?” Pp. 123–144 in The Advanced Measurement
Approach to Operational Risk, ed. E. Davis. London: Risk Books.
Benston, G. J. and G. G. Kaufman. 1996. “The Appropriate Role of Bank Regu-
lation.” Economic Journal 106:688–697.
Blunden, T. 2003. “Scoreboard Approaches.” Pp. 229–240 in Operational Risk:
Regulation, Analysis and Management, ed. C. Alexander. London: Prentice
Hall-Financial Times.
Böcker, K. and C. Klüppelberg. 2005. “Operational VaR: A Closed-Form Ap-
proximation.” Risk December:90–93.
Bolton, N. and J. Berkey. 2005. “Aligning Basel II Operational Risk and Sarbanes-
Oxley 404 Projects.” Pp. 237–246 in Operational Risk: Practical Approaches
to Implementation, ed. E. Davis. London: Risk Books.
Brandts, S. 2005. “Reducing Risk Through Insurance.” Pp. 305–314 in Opera-
tional Risk: Practical Approaches to Implementation, ed. E. Davis. London:
Risk Books.
Buchelt, R. and S. Unteregger. 2004. “Cultural Risk and Risk Culture: Op-
erational Risk after Basel II, Financial Stability Report 6.” http://www.
oenb.at/en/img/fsr 06 cultural risk tcm16–9495.pdf.
Buchmuller, P., M. Haas, B. Rummel, and K. Stickelmann. 2006. “AMA Imple-
mentation in Germany: Results of BaFin’s and Bundesbank’s Industry Survey.”
Pp. 295–336 in The Advanced Measurement Approach to Operational Risk, ed.
E. Davis. London: Risk Books.
Cagan, P. 2001. “Seizing the Tail of the Dragon.” FOW/Operational Risk July:18–
23.
Chapelle, A., Y. Crama, G. Hubner, and J. P. Peters. 2004. “Basel II and Operational
Risk: Implications for Risk Measurement and Management in the Financial
Sector.” Unpublished paper, National Bank of Belgium.
Lewis, C. M. and Y. Lantsman. 2005. “What is a Fair Price to Transfer the Risk
of Unauthorised Trading? A Case Study on Operational Risk.” Pp. 315–356
in Operational Risk: Practical Approaches to Implementation, ed. E. Davis.
London: Risk Books.
Llewellyn, D. T. ed. 2001. Bumps on the Road to Basel: An Anthology of Basel 2.
London: Centre for the Study of Financial Innovation.
Lopez, J. A. 2002. “What is Operational Risk?” Federal Reserve Bank of San
Francisco Economic Letter January.
Medova, E. A. and M. N. Kyriacou. 2001. “Extremes in Operational Risk Man-
agement.” Unpublished paper, University of Cambridge.
Mestchian, P. 2003. “Operational Risk Management: The Solution is in the Prob-
lem.” Pp. 3–14 in Advances in Operational Risk: Firm-wide Issues for Financial
Institutions. London: Risk Books.
Metcalfe, R. 2003. “Operational Risk: The Empiricists Strike Back.”
Pp. 435–446 in Modern Risk Management: A History, ed. P. Field. London:
Risk Books.
Mignola, G. and R. Ugoccioni. 2007. “Effect of Data Collection Threshold in the
Loss Distribution.” Journal of Operational Risk 1 Winter:35–47.
Milligan, J. 2004. “Prioritizing Operational Risk.” Banking Strategies 80:67.
Moosa, I. A. 2007a. “Misconceptions about Operational Risk.” Journal of Oper-
ational Risk Winter:97–104.
Moosa, I. A. 2007b. Operational Risk Management. London: Palgrave.
Moosa, I. A. 2007c. “A Critique of the Advanced Measurement Approach to Reg-
ulatory Capital Against Operational Risk.” Unpublished paper, Monash Uni-
versity.
Moscadelli, M. 2005. “The Modelling of Operational Risk: Experience with the
Analysis of the Data Collected by the Basel Committee.” Pp. 39–106 in Oper-
ational Risk: Practical Approaches to Implementation, ed. E. Davis. London:
Risk Books.
Moscadelli, M., A. Chernobai, and S. Rachev. 2005. “Treatment of Incomplete
Data in the Field of Operational Risk: The Effects on Parameter Estimates, EL
and UL Figures.” Operational Risk June:33–50.
Muzzy, L. 2003. “The Pitfalls of Gathering Operational Risk Data.” RMA Journal
85:58–62.
Na, H. S., L. C. Miranda, J. Van Den Berg, and M. Leipoldt. 2005. “Data Scal-
ing for Operational Risk Modelling.” ERIM Report Series ERS-2005–092-LIS,
December.
Ong, M. 2002. “The Alpha, Beta and Gamma of Operational Risk.” RMA Journal
85:34.
Parsley, M. 1996. “Risk Management’s Final Frontier.” Euromoney September:74–
75.
Peccia, A. 2003. “Using Operational Risk Models to Manage Operational Risk.”
Pp. 363–384 in Operational Risk: Regulation, Analysis and Management, ed.
C. Alexander. London: Prentice Hall-Financial Times.
Peccia, A. 2004. “An Operational Risk Ratings Model Approach to Better Mea-
surement and Management of Operational Risk.” In The Basel Handbook,
ed. K. Ong. London: Risk Books.
Pezier, J. 2003. “A Constructive Review of the Basel Proposals on Operational
Risk.” Pp. 49–73 in Operational Risk: Regulation, Analysis and Management,
ed. C. Alexander. London: Prentice Hall-Financial Times.
Postlewaite, A. and X. Vives. 1987. “Bank Runs as an Equilibrium Phenomenon.”
Journal of Political Economy 95:485–491.
Powojowski, M., D. Reynolds, and H. J. H. Tuenter. 2002. “Dependent Events and
Operational Risk.” Algo Research Quarterly 5:65–73.
Rao, V. and A. Dev. 2006. “Operational Risk: Some Issues in Basel II AMA
Implementation in US Financial Institutions.” Pp. 273–294 in The Advanced
Measurement Approach to Operational Risk, ed. E. Davis. London: Risk Books.
Rebonato, R. 2007. “The Plight of the Fortune Tellers: Thoughts on the Quantitative
Measurement of Financial Risk.” Unpublished manuscript.
Reynolds, D. and D. Syer. 2003. “A General Simulation Framework for Op-
erational Loss Distributions.” Pp. 193–214 in Operational Risk: Regulation,
Analysis and Management, ed. C. Alexander. London: Prentice Hall-Financial
Times.
Robert Morris Associates, British Bankers’ Association and International Swaps
and Derivatives Association. 1999. Operational Risk: The Next Frontier.
Philadelphia: RMA.
Rosenberg, J. V. and T. Schuermann. 2006. “A General Approach to Integrated Risk
Management with Skewed, Fat-Tailed Risks.” Journal of Financial Economics
79:569–614.
Shadow Financial Regulatory Committee. 2001. “The Basel Committee’s Revised
Capital Accord Proposal.” Statement No 169, February.
Shepheard-Walwyn, T. and R. Litterman. 1998. “Building a Coherent Risk Mea-
surement and Capital Optimisation Model for Financial Firms.” Federal Reserve
Bank of New York Economic Policy Review October:171–182.
Smithson, C. and P. Song. 2000. “Quantifying Operational Risk.” Risk July:50–52.
Thirlwell, J. 2002. “Operational Risk: The Banks and the Regulators Struggle.”
Balance Sheet 10:28–31.
Tripe, D. 2000. “Pricing Operational Risk.” Unpublished paper, Massey University.
Turing, D. 2003. “The Legal and Regulatory View of Operational Risk.”
Pp. 253–266 in Advances in Operational Risk: Firm-wide Issues for Financial
Institutions (second edition). London: Risk Books.
Vinella, P. and J. Jin. 2005. “A Foundation for KPI and KRI.” Pp. 157–168 in Op-
erational Risk: Practical Approaches to Implementation, ed. E. Davis. London:
Risk Books.
Webb, A. 1999. “Controlling Operational Risk.” Derivatives Strategy 4:17–21.
Wei, R. 2003. “Operational Risk in the Insurance Industry.” Unpublished paper,
University of Pennsylvania.
Wei, R. 2006. An Empirical Investigation of Operational Risk in the United States
Financial Sectors. PhD dissertation, University of Pennsylvania, AAT 3211165.