Operational Risk Management


The term Operational Risk Management (ORM) is defined as a continual cyclic process which includes risk assessment, risk decision making, and implementation of risk controls, resulting in the acceptance, mitigation, or avoidance of risk. ORM is the oversight of operational risk, including the risk of loss resulting from inadequate or failed internal processes and systems, from human factors, or from external events.

Four Principles of ORM


The U.S. Department of Defense summarizes the principles of ORM as follows: [1]

• Accept risk when benefits outweigh the cost.
• Accept no unnecessary risk.
• Anticipate and manage risk by planning.
• Make risk decisions at the right level.

Three Levels of ORM


In Depth
In-depth risk management is used before a project is implemented, when there is
plenty of time to plan and prepare. Examples of in-depth methods include
training, drafting instructions and requirements, and acquiring personal protective
equipment.
Deliberate
Deliberate risk management is used at routine periods throughout the implementation
of a project or process. Examples include quality assurance, on-the-job training,
safety briefs, performance reviews, and safety checks.
Time Critical
Time critical risk management is used during operational exercises or the execution
of tasks. It is defined as the effective use of all available resources by individuals,
crews, and teams to safely and effectively accomplish the mission or task using
risk management concepts when time and resources are limited. Examples of
tools used include execution checklists and change management. This requires a
high degree of situational awareness.[1]

ORM Process


In Depth

The International Organization for Standardization defines the risk management process
in a four-step model: [2]

1. Establish context
2. Risk assessment
   o Risk identification
   o Risk analysis
   o Risk evaluation
3. Risk treatment
4. Monitor and review

This process is cyclic, as any change to the situation (such as the operating environment or
the needs of the unit) requires re-evaluation, beginning again at step one.

Deliberate

The U.S. Department of Defense summarizes the deliberate level of the ORM process in a
five-step model: [1]

1. Identify hazards
2. Assess hazards
3. Make risk decisions
4. Implement controls
5. Supervise (and watch for changes)

Time Critical

The U.S. Navy summarizes the time critical risk management process in a four-step
model:[3]

1. Assess the situation.

The three conditions of the Assess step are task loading, additive conditions, and human
factors.

• Task loading refers to the negative effect of increased tasking on performance of
the tasks.
• Additive conditions refers to having a situational awareness of the cumulative effect
of variables (conditions, etc.).
• Human factors refers to the limitations of the ability of the human body and mind
to adapt to the work environment (e.g. stress, fatigue, impairment, lapses of
attention, confusion, and willful violations of regulations).

2. Balance your resources.

This refers to balancing resources in three ways:

• Balancing resources and options available. This means evaluating and leveraging
all the informational, labor, equipment, and material resources available.
• Balancing resources versus hazards. This means estimating how well prepared
you are to safely accomplish a task and making a judgment call.
• Balancing individual versus team effort. This means observing individual risk
warning signs. It also means observing how well the team is communicating, how
well each member knows the role they are supposed to play, and the stress level and
participation level of each team member.

3. Communicate risks and intentions.

• Communicate hazards and intentions.
• Communicate to the right people.
• Use the right communication style. Asking questions is a technique for opening the
lines of communication. A direct and forceful style of communication gets a
specific result in a specific situation.

4. Do and debrief. (Take action and monitor for change.)

This is accomplished in three different phases:

• Mission Completion is a point where the exercise can be evaluated and reviewed
in full.
• Execute and Gauge Risk involves managing change and risk while an exercise is
in progress.
• Future Performance Improvements refers to preparing a "lessons learned" for the
next team that plans or executes a task.

Benefits of ORM


1. Reduction of operational loss.
2. Lower compliance/auditing costs.
3. Early detection of unlawful activities.
4. Reduced exposure to future risks.
Chief Operational Risk Officer
The role of the Chief Operational Risk Officer (CORO) continues to evolve and gain
importance. In addition to being responsible for setting up a robust Operational Risk
Management function at companies, the role also plays an important part in increasing
awareness of the benefits of sound operational risk management.


ORM Software


The impact of the Enron failure and the implementation of the Sarbanes-Oxley Act have
led several software development companies to create enterprise-wide software
packages to manage risk. These software systems allow financial audits to be executed
at lower cost.

Forrester Research has identified 115 Governance, Risk and Compliance vendors that
cover operational risk management projects.

Definition
The Basel Committee defines operational risk as:

"The risk of loss resulting from inadequate or failed internal processes, people and
systems or from external events."

However, the Basel Committee recognizes that operational risk is a term that has a
variety of meanings and therefore, for internal purposes, banks are permitted to adopt
their own definitions of operational risk, provided the minimum elements in the
Committee's definition are included.

Scope exclusions


The Basel II definition of operational risk excludes, for example, strategic risk - the risk
of a loss arising from a poor strategic business decision.

Other risk terms are seen as potential consequences of operational risk events. For
example, reputational risk (damage to an organization through loss of its reputation or
standing) can arise as a consequence (or impact) of operational failures - as well as from
other events.

Basel II event type categories


The following lists the official Basel II defined event types with some examples for each
category:

1. Internal Fraud - misappropriation of assets, tax evasion, intentional mismarking of
positions, bribery
2. External Fraud - theft of information, hacking damage, third-party theft and
forgery
3. Employment Practices and Workplace Safety - discrimination, workers'
compensation, employee health and safety
4. Clients, Products, & Business Practices - market manipulation, antitrust, improper
trade, product defects, fiduciary breaches, account churning
5. Damage to Physical Assets - natural disasters, terrorism, vandalism
6. Business Disruption & Systems Failures - utility disruptions, software failures,
hardware failures
7. Execution, Delivery, & Process Management - data entry errors, accounting
errors, failed mandatory reporting, negligent loss of client assets
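
For loss data collection, the categories above are typically used as tags on individual loss events. The snippet below is a minimal sketch of the taxonomy as a data structure; the record layout is a hypothetical illustration, not a Basel requirement.

from enum import Enum

class BaselEventType(Enum):
    """Basel II level-1 operational risk event types (as listed above)."""
    INTERNAL_FRAUD = 1
    EXTERNAL_FRAUD = 2
    EMPLOYMENT_PRACTICES_AND_WORKPLACE_SAFETY = 3
    CLIENTS_PRODUCTS_AND_BUSINESS_PRACTICES = 4
    DAMAGE_TO_PHYSICAL_ASSETS = 5
    BUSINESS_DISRUPTION_AND_SYSTEMS_FAILURES = 6
    EXECUTION_DELIVERY_AND_PROCESS_MANAGEMENT = 7

# Hypothetical loss record tagged with an event type for later aggregation.
loss_event = {"amount": 125_000.0, "event_type": BaselEventType.EXTERNAL_FRAUD}
print(loss_event["event_type"].name, loss_event["amount"])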

Difficulties
It is relatively straightforward for an organization to set and observe specific, measurable
levels of market risk and credit risk. By contrast, it is relatively difficult to identify or
assess levels of operational risk and its many sources. Historically, organizations have
accepted operational risk as an unavoidable cost of doing business.

Methods of operational risk management


Basel II and the supervisory bodies of various countries have prescribed soundness
standards for Operational Risk Management for banks and similar financial
institutions. To complement these standards, Basel II gives guidance on three broad
methods of capital calculation for Operational Risk:

• Basic Indicator Approach - based on annual revenue of the Financial Institution (a
simple sketch of this calculation follows this list)
• Standardized Approach - based on annual revenue of each of the broad business
lines of the Financial Institution
• Advanced Measurement Approaches - based on the internally developed risk
measurement framework of the bank, adhering to the standards prescribed
(methods include IMA, LDA, scenario-based, scorecard, etc.)
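
Under the Basic Indicator Approach, the Basel II capital charge is commonly described as a fixed percentage (alpha, 15%) of the average positive annual gross income over the previous three years, with negative or zero years excluded. The snippet below is a minimal sketch of that calculation under those assumptions; the figures are hypothetical.

def basic_indicator_capital(gross_income_3y, alpha=0.15):
    """Basel II Basic Indicator Approach (simplified sketch).

    Capital charge = alpha * average of the positive annual gross income
    figures over the previous three years; years with zero or negative
    gross income are excluded from both numerator and denominator.
    """
    positive = [gi for gi in gross_income_3y if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# Hypothetical gross income (in millions) for the last three years:
print(round(basic_indicator_capital([120.0, -10.0, 150.0]), 2))  # 0.15 * (120 + 150) / 2 = 20.25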

The Operational Risk Management framework should include identification, measurement,
monitoring, reporting, control and mitigation frameworks for Operational Risk.

Which Measurement Method?


Many methods of measuring an enterprise's operational and business
risk have emerged:

• the proxy, analog or surrogate method
• the earnings volatility method
• the loss modelling method
• the direct estimation or scenario method

The first three consistently run into data problems that reduce either their
effectiveness or, at the least, their freedom from the influence of subjective
judgment. The first two are nonetheless useful as a rough approximation or
triangulation of a firm's risk capital requirement.

Each is discussed briefly below.

The proxy, analog or surrogate method

This high level or "top-down" method is often used by large companies


comprising of several divisions operating independent businesses. The
stages of the method are broadly as follows:

1. Identify public companies that are similar ("analogs") to the entity's different
divisions, either as they are or as they "should be".
2. Identify the capital and other key financial variables of these analog companies
and analyse the relationship (including by regression) between the capital and
the other variables.
3. Use this relationship to calculate the amount of capital the division should hold
on a "stand-alone" basis according to its key financial variables.

Undertaking the same approach at the group level and comparing the group results
with the sum of the divisional results would allow an estimate of the diversification
benefits gained by the group.
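
As an illustration of steps 2 and 3, the sketch below regresses capital on two key financial variables across a handful of hypothetical analog companies and then applies the fitted relationship to a division. All company data, the choice of explanatory variables, and the resulting coefficients are assumptions made up for the example.

import numpy as np

# Hypothetical analog companies: [revenue, total assets] and observed capital
# (all figures illustrative, e.g. in millions).
X = np.array([
    [400.0, 1200.0],
    [650.0, 2100.0],
    [300.0,  900.0],
    [820.0, 2600.0],
])
capital = np.array([48.0, 80.0, 35.0, 98.0])

# Fit capital ~ b0 + b1*revenue + b2*assets by ordinary least squares.
design = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(design, capital, rcond=None)

# Apply the fitted relationship to a division's own key financials
# to obtain a "stand-alone" capital estimate.
division = np.array([1.0, 500.0, 1500.0])  # [intercept term, revenue, assets]
standalone_capital = division @ coeffs
print(round(float(standalone_capital), 1))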

The essential problems with this method are:


1. the level of subjective judgment required in selecting the analog
companies
2. the level of judgment required in forming the explanatory
algorithm
3. the shortage of data history available to support this analysis
4. the difficulty in using this method to project the capital
requirements for a future innovative structure where analogs
may not exist
5. the separation between the risk capital measure and the actual
risk management activities in the entity, with a subsequent lack of
incentive to promote risk management

There have also been instances where analog choice and data
"cleaning" have been undertaken to achieve desired results - results
that have subsequently diverged from expected levels with
movements in equity markets. Does one then choose different analogs
to achieve a desired outcome?

The earnings volatility method

This method considers the statistical variability in the earnings of the company, or of
its divisions, and either empirically calculates the unexpected loss at a certain
confidence level or, more likely, fits a standard statistical distribution to the available
data and analytically calculates the unexpected loss at a certain confidence level.
Before the analysis is undertaken, the earnings data needs to be adjusted for risks
other than operational risk (depending on the definition chosen). Even under the
broad definition, this would involve adjusting for the full effect of credit and market risks.
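
As a minimal sketch of the analytical variant, the snippet below fits a normal distribution to a short series of hypothetical adjusted earnings and reads off the unexpected loss at a chosen confidence level. The data, the 99% confidence level, and the choice of a normal distribution are all illustrative assumptions.

from statistics import NormalDist, mean, stdev

# Hypothetical annual earnings for a division, already adjusted to strip out
# credit and market risk effects (illustrative figures, e.g. in millions).
adjusted_earnings = [210.0, 185.0, 240.0, 160.0, 225.0, 195.0, 170.0, 230.0]

mu = mean(adjusted_earnings)
sigma = stdev(adjusted_earnings)

# Fit a normal distribution and read off the downside earnings level at the
# chosen confidence; the unexpected loss is the shortfall below the mean.
confidence = 0.99
worst_case = NormalDist(mu, sigma).inv_cdf(1 - confidence)
unexpected_loss = mu - worst_case
print(round(unexpected_loss, 1))  # capital buffer implied by earnings volatility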

The same method can be applied to volatility of asset values, including to the market
capitalisation of the company, but this is not readily possible for its divisions.

The essential problems with this method are:

1. the difficulty in finding sufficient consistent data either to perform
the empirical analysis or to fit a standard statistical distribution
with confidence
2. the level of judgment required in cleaning the data for the effects
of other risk types, and in adjusting it for inflation, volume growth,
changes in company structure, etc.
3. the judgment required that the relatively short time-span of the
data effectively captures the full range of experience for which
risk capital provides a buffer
4. the backward-looking nature of the reliance on historical data,
which tends to make this approach difficult to apply to strategic
changes in company structure or business
5. the separation between the risk capital measure and the actual
risk management activities in the entity, with a subsequent lack of
incentive to promote risk management

The loss modelling method

This method collects actual loss data and uses it to derive empirical
distributions for its risks. These empirical risk distributions are then
used to calculate an unexpected loss amount that needs to be covered
by a capital buffer. The unexpected loss can theoretically be calculated
to any desired target confidence level.

This is typically thought of as a "bottom-up" method, but it can be applied at any
level of detail, with the loss types defined narrowly or broadly.
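
As a rough sketch of how collected loss data can feed an unexpected-loss calculation, the snippet below bootstraps annual aggregate losses from a small hypothetical loss history (Poisson-style frequency, severities resampled from the observed losses) and reads the 99.9% quantile. The data, the simulation approach, and the confidence level are illustrative assumptions rather than prescriptions from the text.

import math
import random
import statistics

random.seed(7)

# Hypothetical loss history: individual loss severities (in millions) observed
# over a number of years.
observed_losses = [0.4, 1.2, 0.1, 3.5, 0.7, 0.2, 5.1, 0.3, 0.9, 2.2, 0.5, 1.8]
years_observed = 4
annual_frequency = len(observed_losses) / years_observed  # average losses per year

def poisson_sample(lam):
    """Simple Knuth-style Poisson sampler (standard library only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss():
    """One simulated year: Poisson loss count, severities resampled
    (bootstrapped) from the empirical loss data."""
    count = poisson_sample(annual_frequency)
    return sum(random.choice(observed_losses) for _ in range(count))

annual_losses = sorted(simulate_annual_loss() for _ in range(100_000))
expected_loss = statistics.mean(annual_losses)
var_999 = annual_losses[int(0.999 * len(annual_losses)) - 1]  # 99.9% quantile
unexpected_loss = var_999 - expected_loss
print(round(expected_loss, 2), round(var_999, 2), round(unexpected_loss, 2))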

This method is intuitively attractive as it endeavours to anchor itself in objective loss
data. It is also promoted by the January 2001 Basel proposals, in conjunction with
their narrow definition of operational risk. However, it faces a number of critical
problems, particularly if an entity has the goal of a broad measure of risk:

1. the need to collect sufficient data to cover the range of
experience that a risk capital buffer would be expected to cover
(including, for example, the range of political events that might
threaten an industry or company)
2. the assumption that the past is a good predictor of the future
(particularly for a company that evolves to a different business
mix or management style)
3. the need to capture full economic losses (including opportunity
losses) if the company wishes to pursue a holistic broad loss
measure
4. the need to collect loss data on a consistent basis (such as
before or after the mitigating impact of insurance)
5. the need to have a consistent and clear demarcation between
different risk types to ensure that losses are appropriately
grouped (and to avoid double counting)
6. the difficulty in using this method to project the capital
requirements for a future innovative structure or business strategy
where loss data may not exist
7. the delay between impact on the risk capital measure and the
actual risk management activities in the entity, with a subsequent
reduction of incentive to promote risk management

A method proposed by several to overcome the insufficiency of loss data is to collect
industry loss data, scaled to the size of the entity (a simple scaling sketch follows
below). Although it is clearly beneficial to learn from the experience of
others, this supplementary method has additional problems:

1. the need to ensure that a consistent treatment and grouping of
losses occurs despite their different sources, with many publicly
sourced losses relying on news reports
2. the need to analyse the losses sufficiently to nominate the
appropriate scaling factors to be used to make them relevant to
other entities
3. the problem that even total industry experience for a period does
not ensure that the data covers the range of experience that a
risk capital buffer would be expected to cover (as the industry
shared the same or a similar economic and political
environment)

The layer of judgment required to deal with these problems overshadows the
apparently objective nature of this method.
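
As promised above, here is a minimal sketch of the scaling idea: an external (industry) loss is adjusted to the reporting entity's size using a revenue ratio raised to a scaling exponent. Both the exponent and the figures are assumptions chosen purely for illustration; in practice the scaling factors must come from analysing the losses themselves (problem 2 above).

def scale_external_loss(external_loss, external_revenue, internal_revenue, exponent=0.5):
    """Scale an industry loss to the reporting entity's size.

    The revenue ratio and the exponent are illustrative assumptions; the
    appropriate factors have to be nominated from analysis of the losses.
    """
    return external_loss * (internal_revenue / external_revenue) ** exponent

# Hypothetical example: a 50m industry loss at a firm with 8bn revenue,
# scaled to an entity with 2bn revenue.
print(round(scale_external_loss(50.0, 8_000.0, 2_000.0), 2))  # 25.0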

The direct estimation method

The direct estimation method relies on collaborative line-manager judgments to
estimate a risk distribution for the risks they run. It explicitly incorporates a layer of
subjective judgment based on available loss data and other relevant factors, but
these subjective judgments are generally of lower significance than the judgments
involved in the other measurement methods.

It also provides a forward-looking quantification of risk, with the effects of changes
in business mix, strategy, or structure readily included in the direct estimation
judgment process.

The direct estimation method is covered in detail in subsequent pages, but it
basically involves selecting a risk distribution shape that is appropriate for the risk
(generally allocated by default to the risk category) and then anchoring this risk
distribution shape with a quantification of the impact of one or more scenarios
(which can include actual risk incidents or near misses). This estimated risk
distribution can be refined based on subsequent experience (or as appropriate loss
data becomes available).
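
Purely as an illustration of the anchoring step, the sketch below fits a lognormal distribution shape to two hypothetical scenario judgments (a "1-in-10-year" and a "1-in-100-year" loss) and then reads an implied extreme loss off the anchored distribution. The distribution shape, the scenario percentiles, and the figures are all assumptions for the example, not prescriptions from the text.

from math import exp, log
from statistics import NormalDist

def lognormal_from_scenarios(loss_1_in_10, loss_1_in_100):
    """Anchor a lognormal severity shape on two scenario estimates:
    a 1-in-10-year loss (90th percentile) and a 1-in-100-year loss
    (99th percentile). Returns the (mu, sigma) parameters."""
    z10 = NormalDist().inv_cdf(0.90)
    z100 = NormalDist().inv_cdf(0.99)
    sigma = (log(loss_1_in_100) - log(loss_1_in_10)) / (z100 - z10)
    mu = log(loss_1_in_10) - sigma * z10
    return mu, sigma

# Hypothetical scenario judgments from line managers (in millions):
mu, sigma = lognormal_from_scenarios(loss_1_in_10=2.0, loss_1_in_100=15.0)

# Read the implied 1-in-1000-year loss from the anchored distribution,
# and refine the parameters later as actual loss experience accumulates.
z1000 = NormalDist().inv_cdf(0.999)
print(round(exp(mu + sigma * z1000), 1))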

The detail at which risks are estimated (or losses grouped in the loss
modelling method) is at the organisation's discretion - the level of
detail required depends on the level of detail sought in the subsequent
risk/return numbers. This may well be detailed but need not be.
Some granularity or level of detail has to be chosen and at any level
some bundling of risk types is inevitable. For example, the granularity
might be the risk of flood to a state's branch network or it might be the
risk of all natural disasters to the organisation as a whole - either way
the risk encompasses a range of possible risk events. The decision
needs to be made on the level of discrimination sought for the results.
Does the organisation wish to consider pricing, or a possible
contribution to shareholder value, at the level of an individual region?
Probably not initially.

Direct estimation does not require everyone to duplicate estimates of the same risks.
Common risks can be estimated by the appropriate experts who can also define
which of the available indicators would be the best proxies of the risk's incidence.
Staff numbers may be suitable for some risks, the number of certain types of
transactions for others, and so on.

Many broadly defined operational risks do have large "intangible" or non-cash losses
that are difficult to estimate. But these seem to become quantifiable after a disaster,
particularly for public companies with a sharemarket measure. In most cases it would
be appropriate to consider the size of the risks before they hit. Hard? - yes. Precise? -
no. But precise to the degree of precision needed - yes. And are such judgments
about these risks already explicitly needed for the company's risk management? -
most certainly.
