EWS Report


2.1 Qualitative and quantitative approaches to risk assessment

 Risk Assessment
According to the UNISDR (United Nations International Strategy for Disaster Reduction), risk
assessment is a process for determining the probability of losses through the analysis of potential
hazards and the evaluation of existing conditions of vulnerability that may threaten or harm
people, property, livelihoods, and the environment. It is made up of three processes: risk
identification, risk analysis, and risk evaluation.
o Importance of risk assessment

Conducting risk assessments can also provide a framework to determine the effectiveness of
disaster risk management, risk prevention, and/or risk mitigation.
The process of risk assessment requires a structured approach. It requires transparency,
opening up assumptions and options to challenge, discussion and review. Through this
approach, we can better understand hazards, their nature, their possible consequences and
their weight on society.
Risk assessments also provide a valuable opportunity to prepare for likely damage, losses, and
even deaths. The assessment will also highlight the significant and necessary actions that are
most effective in mitigating or reducing the impacts on individuals, communities, and
government.
o Examples: catastrophe risk and the insurance industry

A good example of the importance of risk assessment is the experience of the insurance
industry, which has been transformed by the adoption of increasingly rigorous risk assessment
and modelling processes over the last 30 years. The lessons of this experience are relevant to
policymakers and practitioners working in government.
The insurance industry's catastrophe (rare event) risk assessment was originally based entirely
on historical experience or assumptions.
In 1984, Don Friedman published a paper that formed the template for modelling insurance
catastrophe risk over the following 30 years. This template breaks the problem into hazard,
exposure, vulnerability and financial loss processes.
The first US hurricane model built to this template was produced by the reinsurance broker E.W.
Blanch in 1987, followed by a US earthquake model in 1988.
Reinsurance brokers and reinsurers also led the field in the EU; however, the early 1990s saw the
rise of three major catastrophe modelling firms, which still dominated the industry in 2016. These
models were stochastic (random) models, based on synthetic event sets made up of many
thousands of events that attempt to represent the range of possible events with their associated
probabilities.
The models required knowledge not only of what properties were insured and their value but
also of their location, construction type and occupation.
Engineering principles augmented by historical loss analysis attempted to understand the
relationship between the event's manifestation at a particular location (e.g., peak ground
acceleration, peak gust speed and maximum flood depth) and its likely damage. From this, an
overall damage estimate for any given property portfolio could be calculated for each of the
synthetic events. If the probability of each synthetic event is then applied, we can understand the
distribution of loss to the overall portfolio, for example what the annual average loss is and how
big a loss from that hazard type can be expected every 5, 10, 20, 50 and 100 years.
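To make the mechanics of the previous paragraph concrete, the sketch below shows how an annual average loss and simple return-period losses could be derived from a synthetic event set once each event has a portfolio loss and an annual rate attached. The figures are hypothetical and not taken from any vendor model.

```python
# Minimal sketch: deriving annual average loss (AAL) and return-period losses
# from a synthetic event set. Event rates and losses are illustrative only.

events = [
    # (annual rate of occurrence, portfolio loss in EUR)
    (0.10, 2_000_000),
    (0.02, 15_000_000),
    (0.01, 28_000_000),
    (0.002, 60_000_000),
]

# Annual average loss: sum of (rate x loss) over all synthetic events.
aal = sum(rate * loss for rate, loss in events)

# Simple occurrence exceedance view: the annual rate of seeing a loss
# at least as large as each event's loss.
events_sorted = sorted(events, key=lambda e: e[1], reverse=True)
exceedance = []
cumulative_rate = 0.0
for rate, loss in events_sorted:
    cumulative_rate += rate
    exceedance.append((loss, cumulative_rate))

print(f"Annual average loss: EUR {aal:,.0f}")
for loss, rate in exceedance:
    print(f"Loss >= EUR {loss:,.0f} expected about once every {1 / rate:,.0f} years")
```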
Decisions could be made based on ‘objective fact’, not subjective opinion. Underwriters now
had much more information to appropriately rate individual policies and to decide how much
total risk they could accept across their portfolio and how much to lay off. The concept of
risk/return entered the market. Firms began to clearly define their risk appetite to ensure
appropriate levels of financial security and then seek to maximise return within that appetite.
It has not been a painless process. Initially, many saw the models as a panacea to the market’s
problems. There was a tendency by those unaware of the complexity of the models to believe
the results. Arguably, the models were oversold and overbought: the vendors sold the models
on their technical capabilities and the buyers bought them seeking certainty, but neither
publicly faced up to the inherent uncertainty within the models, despite growing pains in the
process. However, this information has transformed the industry. Twenty years ago the most
technical reinsurance broker had perhaps 3 % of staff engaged in risk analytics, whereas now
this has become 25 % to 30 %. Chief risk officers were virtually unknown in the insurance
industry 20 years ago; now they are embedded. The models became a mechanism to raise
debate above vague opinion to a discussion of the veracity of assumptions within the model.
The models’ data requirements led to a massive increase in the quality and quantity of data
captured, leading in turn to improved models. Knowledge of catastrophe risk has grown
immeasurably; firms have become smarter, more financially robust and therefore more likely to
meet their claim obligations. Whilst such modelling originally applied to catastrophe risk only, it
has been extended to cover man-made hazards such as terrorism and more esoteric risk such
as pandemic. Indeed, the EU's Solvency II directive (Directive 2009/138/EC), an insurance regulatory
regime, requires firms to understand all the risks they face, insurance and non-insurance (e.g.,
market risk, counterparty risk and operational risk), with the carrot that if they can demonstrate
that they can adequately model their risks, then they may be allowed to use the capital
requirement implied by their model rather than the standard formula. Regulators rather smartly
realise that any firm willing and able to demonstrate such capacity and understanding is less
likely to fail.
o Key elements of risk assessment

Whilst the insurance industry is a special case, others are noticing that the same methods can
be used to manage risks to governments, cities and communities. They can drive not only a
better understanding of the risks that society faces but also a means to determine and justify
appropriate risk planning and risk management strategies, as well as public policy and investment
decisions.
Indeed, it can be argued that the process of risk assessment and modelling is more important
than the results obtained. Risk assessment does not need to be as complex as a full stochastic
model to add real value. Similarly, it is a common misunderstanding that a lack of good-quality,
homogeneous data invalidates risk assessment. Any risk assessment methodology requires
assumptions to be brought to light and so opened to challenge. Assumptions can then be
reviewed, compared and stressed, identifying areas of inconsistency, illogicality, sensitivity and
where further research should be concentrated.
The key steps in risk assessment are the following.
• Identify the hazards which might affect the system or environment being studied. A
brainstorming session to identify all potential hazards should be held at an initial stage. It is
important to think beyond events or combinations of events that have occurred in order to
consider those that may occur.
• Assess the likelihood or probability that hazards might occur: inputs to this process include
history, modelling, experience, corporate memory, science, experimentation and testing. In
practice, events with a very low probability (e.g. a meteor strike) are ignored, focusing on
those more likely to occur and that can be prevented, managed or mitigated.
• Determine the exposure to the hazard, i.e. who or what is at risk.
• Estimate the vulnerability of the exposed entity to that hazard in order to calculate the physical
or financial impact upon that entity should the event occur. This may be obtained by a review of
historical events, engineering approaches and/or expert opinion and may include the ability of
the system to respond after the event so as to mitigate the loss.
• Estimate the potential financial and/or social consequences of events of different magnitudes.
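As a simple illustration of how the steps listed above combine, the following sketch multiplies the annual probability of each hazard by the value exposed and a vulnerability factor to give an expected annual impact. All hazard names and numbers are invented for the example.

```python
# Illustrative only: combining hazard likelihood, exposure and vulnerability
# into an expected annual impact, one row per identified hazard.

hazards = {
    # hazard: (annual probability, exposed value in EUR, vulnerability 0-1)
    "river flood":     (0.05, 40_000_000, 0.20),
    "earthquake":      (0.01, 40_000_000, 0.35),
    "industrial fire": (0.02, 10_000_000, 0.50),
}

for name, (p, exposure, vulnerability) in hazards.items():
    expected_impact = p * exposure * vulnerability
    print(f"{name:15s} expected annual impact: EUR {expected_impact:,.0f}")
```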
o Risk tolerance

The likelihood of the hazard and its consequences needs to be compared with the norms of
tolerability/acceptability criteria that society or an organization has formulated. If these criteria
are met, the next step would be to manage the risk so that it is at least kept within these criteria
and ideally lowered with continuous improvement.
If not met, the next step would be risk reduction by either reducing exposure to the hazard or by
reducing vulnerability by preventative measures or financial hedging, typically through traditional
indemnity insurance that pays upon proof of loss, but also increasingly through parametric
insurance that pays upon proof of a defined event occurring. Insurance-like products can also
be obtained from the financial markets by means of catastrophe or resilience bonds.
In industry, reducing event likelihood is normally the preferred method, since this dimension is
amenable to improving reliability and enhancing the protective measures available. In many
cases, these can be tested, so are therefore often a dominant feature of risk reduction.
Estimating the potential severity of the hazard is harder and often leaves much to expert
opinion. If risk cannot be credibly reduced in industry, it may lead to the cessation of an activity.
Ideally, a hazard would be completely avoided: a fundamental step in the design of inherently
safer processes. However, for natural hazards and climate risk, where hazard likelihood
reduction is often impossible, it is necessary to work on exposure and vulnerability instead. Building
codes, for example the EU standard Eurocodes, encourage appropriate resilience in design and
construction and can include ‘build back better’ after an event. Spatial planning and the
delineation of hazard zones of various levels can promote development in areas less exposed
to risk.
The insurance mechanism can be used to encourage appropriate risk behaviours, penalising
poor construction, maintenance or location with reduced cover or higher premiums, and rewarding
mitigation measures (e.g. retrofitting roof ties in tropical cyclone-exposed areas or installing
irrigation systems for crops) with premium reductions.

 Risk identification process


Risk identification is the process that is used to find, recognize, and describe the risks that could
affect the achievement of objectives. The purpose of risk identification is to reveal what, where,
when, why, and how something could affect a company’s ability to operate.
o Importance of risk identification

In all risk assessment methods, the failure to include 'atypical' scenarios will present
problems. Examples include the major fire and explosion at Buncefield (December 2005) and
the tsunami that inundated the Fukushima nuclear power station (March 2011). Identification of
all potential hazards is absolutely fundamental in ensuring success.
The United Kingdom Health and Safety Executive has identified and reviewed almost 40 hazard
identification methods. The scope and depth of the study are important and should be relevant to
the purpose and the needs of the users of the assessment. It is necessary to identify all hazards so that
a proper risk assessment may be made. When we are open to considering potential deviations
we need to make sure that we are open-minded enough to consider all possibilities even when
they may seem to be remote.
It is important to consider all potential hazards, natural and man-made, and their possible
interactions and consequences. The process should not be limited to events known to have
happened in the past, but also to consider what could happen.
Methods in use greatly depend on the experience of the persons carrying out the study. This is
normally a team activity; the composition of the team is important and it should be drawn from
persons familiar with the technology or natural phenomena and the location being considered.
Techniques adopted range from relatively unstructured 'brainstorming' through to the more
structured 'what if' analysis.
o What if

This is a form of structured team brainstorming. Once the team understands the process or
system being assessed and the kind of risks (potential exposures and vulnerabilities), each
discrete part or step is examined to identify things that can go wrong and to estimate their
possible consequences. In order to carry this out successfully, we must stress the need for the
team to be properly qualified and to have a full set of data relating to the system being studied.
This would include operating instructions, process flow sheets, physical and hazardous
properties of the materials involved, potentially exposed persons, environment or assets,
protective systems. Most users will simply estimate the likelihood and severity of consequences
in a similar way to that used in risk matrix applications. A brainstorming exercise has the side
benefit of encouraging a wide participation in the risk identification and assessment process,
increasing ownership of the ultimate conclusions.
o Failure mode and effect analysis (FMEA)

FMEA is a rigorous, step-by-step process to discover everything that could go wrong in a task or
process, the potential consequences of those failures and what can be done to prevent them
from happening. In this way, it can be used in risk assessment in industry. As shown in Figure
2.1, it comprises a systemised group of activities designed to:
• recognise and evaluate the potential failure of a process or equipment and their effects;
• identify actions which could eliminate or reduce the chance of potential failure;
• document the process.
It captures:
• the failure mode, i.e., what could go wrong;
• the effect analysis, i.e., how it would happen, how likely it is to go wrong and how bad it would
be.

FMEA is a tool used to provide a structured process to understand and thereby minimise risk.
The three distinct assessments for each of the three strands of this methodology, detection
availability, occurrence probability and severity, are each given a rating: D, P and S,
respectively. Risk ranking is calculated by multiplying these factors to give a single rating D x P
x S. A risk matrix may be used to illustrate this process.
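A minimal sketch of the D x P x S ranking described above is given below; the 1-10 rating scale and the example failure modes are assumptions for illustration, not part of any particular FMEA standard.

```python
# FMEA sketch: risk priority number = detection (D) x probability (P) x severity (S).
# Ratings use an assumed 1-10 scale; failure modes are illustrative.

failure_modes = [
    # (description, D, P, S)
    ("cooling pump seal leak",      4, 6, 7),
    ("level sensor gives no alarm", 8, 3, 9),
    ("relief valve stuck closed",   7, 2, 10),
]

ranked = sorted(
    ((desc, d * p * s) for desc, d, p, s in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```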

o Hazard and operability study (HAZOP)

This step involves establishing the probability that a risk event might occur and the potential
outcome of each event. The technique of HAZOP has been used and developed since the
1970s for identifying potential hazards and operability problems caused by ‘deviations’ from the
design intent of a part of a production process or a procedure for new and existing operations.
The technique is most associated with identifying hazardous deviations from the desired state,
but it also greatly assists the operability of a process. In this mode it is very helpful when writing
operating procedures and job safety analysis.
Processes and procedures all have a design intent, which is the desired normal state in which
operations proceed efficiently and products are made safely. With this in mind, equipment
is designed and constructed, which, when it is all assembled and working together, will achieve
the desired state. In order to achieve this, each item of equipment will need to consistently
function as designed. This is known as the ‘design intent’ for that particular item or section of the
process.
Each part of this design intent specifies a ‘parameter’ of interest. For example, for a pump this
could be flow, temperature or pressure. With a list of ‘parameters’ of interest, we can then apply
‘guide words’ to show deviations from the design intent. Interesting deviations from the design
intent in the case of our cooling facility could include less or no flow of water, high temperature
or low (or high) pressure. When these deviations are agreed, all the causes associated with
them are listed. For example, for no or less flow, causes will include pump failure, power failure,
line blockage, etc. The possible hazardous consequences can now be addressed, usually in a
qualitative manner without significant calculation or modelling. In the example, for a line blockage
these might be an overheating pump or a loss of cooling to the process, leading to
high-temperature problems with the product. These simple principles of the method form part of a
study normally carried out by a team that includes designers, production engineers, technology
specialists and, very importantly, operators. The study is recorded in a chart as in the study
record. A decision can then be made about any available safeguards or extra ones that might
be needed, based on the severity or importance of the consequence. The HAZOP methodology
is perhaps the most widely used aid to loss prevention in industry. The reasons for this can be
summarised as follows:
• it is easy to learn;
• it can be easily adapted to almost all the operations carried out within process industries;
• no special level of academic qualification is required.
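The guide-word approach described in this section can be sketched as the systematic pairing of design-intent parameters with deviations. The word lists below are typical examples only, and the empty cause, consequence and safeguard fields are placeholders for what a real HAZOP team would record.

```python
# HAZOP sketch: generate candidate deviations by combining parameters of the
# design intent with guide words. Lists are illustrative, not exhaustive.

parameters = ["flow", "temperature", "pressure"]
guide_words = ["no", "less", "more", "reverse"]

study_record = []
for parameter in parameters:
    for guide_word in guide_words:
        deviation = f"{guide_word} {parameter}"
        study_record.append({
            "deviation": deviation,
            "causes": [],        # e.g. pump failure, power failure, line blockage
            "consequences": [],  # filled in by the HAZOP team
            "safeguards": [],
        })

print(f"{len(study_record)} deviations to review, e.g. '{study_record[1]['deviation']}'")
```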
• Risk analysis methodologies
o Types of risk analysis

Risk analysis is a complex field requiring specialist knowledge and expertise but also common
sense. It is not just a pure scientific field but will necessarily include judgements over issues
such as risk appetite and risk management strategy. It is vital that the process be as
comprehensive, consistent, transparent and accessible as possible. If a risk cannot be properly
understood or explained, then it is difficult if not impossible for policymakers, companies and
individuals to make rational choices.
Currently, there is no universally agreed risk analysis method applied to all phenomena and
uses; rather, the methods used are determined by a variety of users, such as industrial and
transport companies, regulators and insurers. They are selected on the basis of their perceived
relevance, utility and available resources. For example, a method adopted in industry may not
be suitable in the field of natural hazards. Legal requirements may also dictate the degree of
study as well as such factors as the ‘allowable’ threat to the community. This last matter is
common in ‘deterministic’ risk analysis where the requirement may be that there is no credible
risk for a community in the location of an industrial operation. Deterministic methods consider
the consequences of defined events or combinations of events but do not necessarily consider
the probability of these events or guarantee that all possible events are captured within the
deterministic event set. Often this is the starting point for risk analysis. At the other extreme,
stochastic or probabilistic analysis attempts to capture all possible outcomes with their
probabilities; clearly coming with a much higher data and analytical requirement and, if correct,
forming the basis for a sophisticated risk assessment process.
o Deterministic methods

Deterministic methods seek to consider the impact of defined risk events and thereby prove that
consequences are either manageable or capable of being managed. They may be appropriate
where a full stochastic model is impossible due to a lack of data; providing real value whilst a
more robust framework is constructed. Risk standards may be set at national and international
level and, if fully complied with, are believed to prevent a hazard that could impact the
community. This is akin to the managing of risk in the aviation industry, where adherence to
strict rules on the design and operation of aircraft and flights has produced a very safe industry.
The same approach to rule-based operations exists in some countries and companies. How are
deterministic events framed? For example, to check the safety of an installation against a
severe flood, severity is assessed according to the worst recently seen, the worst seen in the
last 20 years or the worst that may be expected every 100 years based on current climatic
conditions and current upstream land use. A different choice of event will have a different
outcome and potentially a very different conclusion about manageability. Can we ensure that all
deterministic events used in risk assessment across hazards are broadly equivalent in
probability? If not, assessments and conclusions may be skewed.
In recent times there has been a shift from a totally rule-based system to one where an element
of qualitative, semi-quantitative and quantitative risk assessment (QRA) may influence
decisions. But deterministic risk assessment is also carried out as a reality check for more
complex stochastic models and to test factors that may not be adequately modelled within these
models. For example, over the past 20 years the insurance industry has enthusiastically
embraced advanced risk assessment techniques, but deterministic assessment of the form 'if
this happens, this is the consequence’ is still required by regulators. They may be referred to as:
• a scenario test, where a defined event or series of events is postulated and the consequences
assessed; • a stress test, where pre-agreed assumptions of risk, for example implied within a
business plan (e.g. interest rate assumptions), are stressed and challenged to determine their
impact on results and company sustainability; • a reverse stress test, where events or
combinations of events are postulated that could cause insolvency of the firm if unhedged.
Scenario, stress and reverse stress tests may be informed by science and modelling or expert
opinion, or both, and often an assessment of probability will be estimated. Insurance regulators
often focus on a 0.5 % probability level as a benchmark, i.e. the worst that may be expected
every 200 years. If stress and scenario tests give numbers for an estimated 1-in-200 event that
the stochastic model says could happen, say, every 10 years, then this casts doubt on the
assumptions within the model or on the test itself; both can then be assessed and challenged.
Similarly, the framing of multievent reverse stress tests may challenge assumptions about
dependency and correlation within the model. Realistically, deterministic methods are not 100 %
reliable, taking as they do only a subset of potential events, but their practical performance in
preventing hazards from impacting communities is as good as, and in some cases even better than,
that of other methods. If properly presented they can be clear, transparent and understandable. The process
of developing deterministic stress and scenario sets can also be a means to engage a range of
experts and stakeholders in the risk analysis process, gaining buy-in to the process. Whether
rules and standards derived from such tests work may depend on the risk culture of the region
or firm where the risk is managed. Some risk cultures have a highly disciplined approach to
rules, whereas others allow or apparently tolerate a degree of flexibility. Furthermore, the effort
required to create, maintain and check for compliance where technical standards are concerned
is considerable and may be beyond the capacity of those entrusted with enforcement.
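To illustrate the stress-test idea discussed above, the sketch below perturbs a planned assumption and reports the impact on a simple result. The business-plan figures and the 20 % stress factor are invented for the example and do not come from any regulatory standard.

```python
# Deterministic stress test sketch: perturb a planned assumption and compare
# the resulting outcome with the business plan. All figures are invented.

planned_premium = 100_000_000   # EUR, planned annual premium income
planned_loss_ratio = 0.65       # planned claims as a share of premium
expenses = 30_000_000           # EUR, fixed expenses

def underwriting_result(premium: float, loss_ratio: float) -> float:
    """Premium minus claims minus expenses."""
    return premium - premium * loss_ratio - expenses

base = underwriting_result(planned_premium, planned_loss_ratio)

# Stress: claims 20 % worse than planned (loss ratio 0.65 -> 0.78).
stressed = underwriting_result(planned_premium, planned_loss_ratio * 1.20)

print(f"Planned result:  EUR {base:,.0f}")
print(f"Stressed result: EUR {stressed:,.0f} (impact EUR {stressed - base:,.0f})")
```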
o Semi-quantitative risk analysis

Semi-quantitative risk analysis seeks to categorise risks by comparative scores rather than by
explicit probability and financial or other measurable consequences. It is thus more rigorous
than a purely qualitative approach but falls short of a full comprehensive quantitative risk
analysis. But rather like deterministic methods, it can complement a full stochastic risk analysis
by inserting a reality check. Semi-quantitative methods can be used to illustrate comparative
risk and consequences in an accessible way to users of the information. Indeed, some output
from complex stochastic models may be presented in forms similar to that used in semi-
quantitative risk analysis, e.g., risk matrices and traffic light rating systems (for example where
red is severe risk, orange is medium risk, yellow is low risk and green is very low risk).
A risk matrix is a means to communicate a semi-quantitative risk assessment: a combination of
two dimensions of risk, severity and likelihood, which allows a simple visual comparison of
different risks. Severity can be considered for any unwanted consequence such as fire,
explosion, toxic release, impact of natural hazards (e.g. floods and tsunamis) with their effects
on workers and the community, environmental damage, property damage or asset loss. A
severity scale from minor to catastrophic can be estimated or calculated, perhaps informed by
some form of model. Normal risk matrices usually have between four and six levels of severity
covering this range with a similar number of probability scales. There is no universally adopted
set of descriptions for these levels, so stakeholders can make a logical selection based on the
purpose of the risk assessment being carried out. The example depicted in Figure 2.2, below, is
designed for risk assessment by a chemical production company and is based on effects on
people. Similar matrices can be produced for environmental damage, property or capital loss.
In this illustrative example the severity scale is defined as:
• insignificant: minor injury, quick recovery;
• minor: disabling injury;
• moderate: single fatality;
• major: 2 -10 fatalities;
• severe: 11 or more fatalities.

Similarly, the likelihood scale is defined as:


• rare: no globally reported event of this scale — all industries and technologies;
• unlikely: has occurred but not related to this industry sector;
• possible: has occurred in this company but not in this technology;
• likely: has occurred in this location — specific protection identified and applied;
• almost certain: has occurred in this location — no specific protection identified and applied.

When plotted in the matrix (Figure 2.2), a link may be provided to rank particular risks or to
categorise them into tolerable (in green), intermediate (in yellow and orange) or intolerable (in
red) bands. A risk which has severe consequences and is estimated to be ‘likely’ would clearly
fall into the intolerable band. A risk which has minor consequences and a 'rare' likelihood
would be in the tolerable band. For risks which appear in the intolerable
band, the user will need to decide what is done with the result. There are choices to be made,
either to reduce the severity of the consequence or the receptor vulnerability and/or to reduce
the event’s likelihood. All may require changes to the hazardous process. Many users would
also require intermediate risks to be investigated and reduced if practicable. Some users apply
numerical values to the likelihood and/or severity axes of the matrix. This produces a ‘calibrated’
matrix.
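The banding logic of such a matrix can be sketched as a simple lookup. The additive calibration and band boundaries below are invented for illustration; in practice they would be agreed by the stakeholders of the assessment.

```python
# Risk matrix sketch: map (severity, likelihood) to a tolerability band.
# The scoring and band boundaries are illustrative, not a standard.

severities = ["insignificant", "minor", "moderate", "major", "severe"]
likelihoods = ["rare", "unlikely", "possible", "likely", "almost certain"]

def band(severity: str, likelihood: str) -> str:
    s = severities.index(severity)    # 0 (lowest) .. 4 (highest)
    l = likelihoods.index(likelihood)
    score = s + l                     # simple additive calibration
    if score <= 2:
        return "tolerable"
    if score <= 5:
        return "intermediate"
    return "intolerable"

print(band("severe", "likely"))   # intolerable
print(band("minor", "rare"))      # tolerable
```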

o Probabilistic risk analysis

This method originated in the Cold War nuclear arms race, later adopted by the civil nuclear
industry. It typically attempts to associate probability distributions to frequency and severity
elements of hazards and then run many thousands of simulated events or years in order to
assess the likelihood of loss at different levels. The method is often called Monte Carlo
modelling after the gaming tables of the principality’s casinos. These methods have been widely
adopted by the insurance industry, particularly where problems are too complicated to be
represented by simple formulae, including catastrophic natural hazard risks.
A commonly used generic term for these methods is QRA or probabilistic or stochastic risk
modelling. Today it is frequently used by industry and regulators to determine individual and
societal risks from industries which present a severe hazard consequence to workers, the
community and the environment. EU legislation such as the Seveso III directive (Directive
2012/18/EU) requires risks to be mapped and managed to a tolerable level. These industrial
requirements have resulted in the emergence of organisations, specialists and consultants who
typically use specially designed software models. The use of probabilistic methods is spreading
from the industrial field to others, for example flood defence planning in the Netherlands.
Stochastic risk modelling has been wholeheartedly embraced by the re/insurance industry over
the past 30 years, particularly for natural catastrophes, though increasingly for all types of risks.
EU solvency II regulation (Directive 2009/138/EC), a manifestation of the advisory insurance
core principles for regulators set by the International Association of Insurance Supervisors in
Basel (IAIS, 2015), allows companies to substitute some or all of their regulatory capital
calculation with their own risk models if approved by their regulator and subject to common
European rules. The main advantage of a quantitative method is that it considers frequency and
severity together in a more comprehensive and complex way than other methods. The main
problem is that it can be very difficult to obtain data on risks: hazard, exposure, vulnerability and
consequential severity. If it is difficult to understand and represent the characteristics of a single
risk then it is even harder to understand their interdependencies. There is inevitably a high level
of subjectivity in the assumptions driving an ‘objective’ quantitative analysis. A paper by
Apostolakis (2004) on QRA gives a coherent argument for appropriate review and critique of
model assumptions. The level of uncertainty inherent in the model may not always be apparent
or appreciated by the ultimate user, but the results of a fully quantitative analysis, if properly
presented, enhance risk understanding for all stakeholders. Often the process of building a
probabilistic model is as valuable as the results of the model, forcing a structured view of what is
known, unknown and uncertain and bringing assumptions that may otherwise be unspoken into
the open and thereby challenging them. Typically for a full stochastic model, severities for each
peril would be compared for different probability levels, often expressed as a return period; the
inverse of annual probability, i.e. how many years would be expected to pass before a loss of a
given size occurred. Figure 2.5 gives an example of output of such a model, here showing the
size of individual loss for two different perils with return periods of up to the worst that may be
expected every 500 years. Note that a return period is a commonly used form of probability
notation. A 1-in-200 year loss is the worst loss that can be expected every 200 years, i.e. a loss
with a return period of 200 years. A return period is the inverse of probability; a 1-in-200 year
event has a 0.5 % probability (1/200). We can see that, for example, every 100 years the worst
tropical cyclone loss we can expect is over EUR 28 million compared to the worst earthquake
loss we can expect every 100 years of EUR 10 million. In fact, a tropical cyclone gives rise to
significantly higher economic loss than an earthquake, up until the 1-in-450-year probability
level. But which is the most dangerous? At more likely event probabilities the tropical cyclone is
much more damaging, but at very remote probabilities it is the earthquake. Notice too the very
significant differences in loss estimate for the probability buckets used in the National risk
register for civil emergencies report (United Kingdom Cabinet Office 2015) risk matrix example
in Figure 2.4. The national risk register looks at the probability of an event occurring in a 5-year
period, but compares the 1-in-40-year loss to the 1-in-400-year loss, broadly equivalent to the
1-in-200 to 1-in-2 000 5-year bucket: the loss for both perils at these probability levels is very
different. Terms like '1-in-100 storm' or '1-in-100 flood' are often used in the popular press, but
it is important to define what is meant by these terms. Is this the worst flood that can be
expected every 100 years in that town, valley, region or country? It is also important not just to
look at the probability of single events as per Figure 2.5, an occurrence exceedance probability
curve, but also annual aggregate loss from hazards of that type, i.e. an annual aggregate
exceedance probability curve. For a given return period the aggregate exceedance probability
value will clearly be greater than or at least equal to the occurrence exceedance probability — the 1
in 200 worst aggregate exceedance probability could be a year of one mega event or a year of
five smaller ones that are individually unexceptional but cumulatively significant. The models
can be used to compare the outcome of different strategies to manage and mitigate risk. The
cost and benefit of different solutions can be compared, and so an optimal strategy rationalised.
An anonymised insurance example is shown in Figure 2.6. Figure 2.6 compares 10 reinsurance
hedging options to manage insurance risk against two measures, one of risk and one of return.
On the horizontal axis we have the risk measure: the worst result that we may expect every 100
years, while on the vertical axis we have the return measure, or rather its inverse here, the cost
of each hedging option. Ideally we would be to the top left of the chart: low risk but low cost. The
‘do nothing’ option is the black triangle at the top right: high risk (a EUR 70 million 1-in-100 year
loss) but zero additional cost. The nine reinsurance hedging options fall into two clusters on the
chart. The purple diamond option to the extreme left has the least risk, reducing the 1-in-100
loss to EUR 30 million, but at an annual cost of EUR 2.25 million. The other two options in that
cluster cost more and offer less benefit so can be ignored. The best option of the middle group
is the purple square, reducing the 1-in-100 loss to EUR 55 million but at an annual average cost
of EUR 1.75 million. Again, this option clearly offers the best risk return characteristics of all the
others in the middle group, so the others in that group may be discounted. Therefore, from 10
options including the ‘do nothing’, option we have a shortlist of three: • black triangle: high risk
(EUR 70 million 1-in-100 loss), zero cost; • purple square: medium risk (EUR 55 million 1-in-100
loss), medium cost (EUR1.75million); • purple diamond: lowest risk (EUR 30 million 1-in-100
loss), highest cost (EUR2.25million). Which to pick depends on the risk appetite of the firm. If
they are uncomfortable with the unhedged risk then the purple diamond seems to offer much
better protection than the purple square option for comparatively little additional cost. Similar
methods can be used to compare options for, say, managing flood risk in a particular location
and/or process risk for a particular plant. The same metrics can be used to look at and compare
different perils and combinations of perils. The methods make no moral judgements but allow
the cost of a particular strategy to be compared against the reduction in risk as defined by a
specific risk measure. It is at this point that more subjective, political decisions can be made on
an informed, objective basis. An example of a comparative peril analysis for a European city is
outlined in a paper by Grünthal et al. (2006) on the city of Cologne. It must always be
remembered that models advise, not decide. Such charts and analyses should not be
considered definitive assessments; like any model they are based upon a set of defined
assumptions.
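A minimal Monte Carlo sketch of the occurrence versus aggregate exceedance distinction discussed above is shown below. The event frequency, the lognormal severity parameters and the crude frequency approximation are arbitrary assumptions chosen only so the example runs; they are not calibrated to any peril.

```python
# Monte Carlo sketch: simulate many years of losses for one peril and read off
# occurrence (largest single loss) and aggregate (annual total) exceedance
# values at chosen return periods. All parameters are arbitrary.
import random

random.seed(42)
N_YEARS = 50_000
EVENT_RATE = 0.8          # mean number of events per year
MU, SIGMA = 15.0, 1.2     # lognormal severity parameters (log-EUR)

occurrence, aggregate = [], []
for _ in range(N_YEARS):
    # Crude approximation of a Poisson count via 10 Bernoulli trials.
    n_events = sum(1 for _ in range(10) if random.random() < EVENT_RATE / 10)
    losses = [random.lognormvariate(MU, SIGMA) for _ in range(n_events)]
    occurrence.append(max(losses, default=0.0))
    aggregate.append(sum(losses))

def return_period_loss(samples, years):
    """Loss exceeded on average once every `years` years."""
    ranked = sorted(samples, reverse=True)
    return ranked[int(len(samples) / years)]

for rp in (100, 200):
    oep = return_period_loss(occurrence, rp)
    aep = return_period_loss(aggregate, rp)
    print(f"1-in-{rp}: occurrence EUR {oep:,.0f}, aggregate EUR {aep:,.0f}")
```

For any return period, the aggregate value printed will be greater than or equal to the occurrence value, which is the relationship described in the text.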

 Conclusions
The process of risk assessment acts as a catalyst to improve risk understanding and so to
encourage a process of proactive risk management. An early adopter of these methods, the
global catastrophe insurance and reinsurance industry has been transformed by the process
and has become more technically adept, more engaged with science and more financially
secure, providing more resilience for society. Similarly, the manufacturing and process
industries have embraced structured risk identification and assessment techniques to improve
the safety of the manufacturing process and the safety of the consumer. Disaster risk
assessment requires a combination of skills, knowledge and data that will not be held within one
firm, one industry, one institution, one discipline, one country, or necessarily one region. Risk
assessment requires input from a variety of experts in order to identify potential hazards, those
that could occur as well as those in the historical record. Rigorous approaches to risk
assessment require scientific modelling and a precise understanding of risk and probability.
Scientific models can be compared in order to challenge the underlying assumptions of each
and lead to better, more transparent decisions. As risk assessments get more quantitative,
scientific, and technical, it is important that policymakers are able to interpret them. The
assumptions within models must be transparent, and qualitative risk assessment (such as
deterministic scenario impacts or risk matrices) can be useful and complementary to stochastic
modelling. It is important that policymakers can demonstrate that appropriate expertise and rigor
has been engaged to found risk management decisions firmly. The practitioner lies at the centre
of the many opportunities for partnerships in disaster risk assessment. In order to think beyond
accepted ways of working and challenge ingrained assumptions, links between other
practitioners in familiar fields as well as other sectors and industries and academia are
extremely valuable.
The risk assessment process is structured and covers risk identification, hazard assessment,
determining exposure and understanding vulnerability. Depending on the objective of risk
assessment and data availability, risk assessment methods can range in formalization and rigor.
These range from more subjective, scenario-based deterministic models, through semi-quantitative
risk analyses such as risk matrices, to fully quantitative risk assessment: probabilistic or stochastic risk
modelling. The more qualitative approaches to risk add value through the process of developing
a framework to capture subjective risk perception and serve as a starting point for a discussion
about assumptions and risk recognition engaging a wide variety of experts and stakeholders in
the process. They also provide a means to reality check more theoretical models. Probabilistic
and stochastic analyses provide the potential to perform cost/benefit or risk/ return analysis,
creating an objective basis for decision making. Rigorous quantitative approaches to risk
assessment and probabilistic analysis raise awareness of the need for further scientific input
and the requirement for the transfer of knowledge and engagement between science and
practitioners. Risk assessment and analysis provides a framework to weigh decisions, and risk
models provide an objective basis against which policy decisions can be made and justified.
However, it is important that the limitations of modelling are recognized and the inherent uncertainty
is understood. The ability to compare and challenge assumptions, as well as to require
evidence-based analysis, is essential. Risk perception is subjective, but practitioners have
valuable information in the fields of data, methodologies and models that further solidify
frameworks through which hazards can be understood and compared in an objective fashion.
Innovation is required to meet the challenges of lack of data and partial information in risk
identification and modelling. Creative approaches can be used to capture and challenge
assumptions, implicitly or explicitly made, and so test them against available data and defined
stresses.
No model is perfect. New scientific input can improve and challenge models, testing sensitivity
to prior assumptions and so leading to a greater understanding of disaster events, which in turn leads
to safer companies, communities and countries. A deeper understanding of the quantitative and
qualitative approaches to risk management can help innovate ways of thinking about subjective
public risk perception, and risk assessment frameworks can develop a more objective
understanding of risk and risk-informed decision making. Risk assessment and associated
modelling contain inherent uncertainty and are never fully complete. It is important to innovate in
areas where hazards are less well known and less capable of anticipation; truly "unknown unknowns"
and “known unknowns” must be considered. Similarly assumptions held for “known knowns”
should be continuously challenged and tested as new information arises.
2.2. Current and innovative methods to define exposure

 What is exposure
Exposure represents the people and assets at risk of potential loss or that may suffer damage due
to hazard impact. It covers several dimensions like the physical (e.g. building stock and
infrastructure), the social (e.g. humans and communities) and the economic dimensions.
Exposure, together with vulnerability and hazard, is used to measure disaster risk. It is reported that
exposure has been trending upwards over the past several decades, resulting in an overall
increase in risk observed worldwide, and that trends need to be better quantified to be able to
address risk reduction measures. Particular attention to understanding exposure is required for
the formulation of policies and actions to reduce disaster risk.
As highlighted by the Sendai Framework for Disaster Risk Reduction, ‘Policies and practices for
disaster risk management should be based on an understanding of disaster risk in all its
dimensions of vulnerability, capacity, exposure of persons and assets, hazard characteristics
and the environment. Such knowledge can be leveraged for the purpose of pre-disaster risk
assessment, for prevention and mitigation and for the development and implementation of
appropriate preparedness and effective response to disasters.'
Exposure is a necessary, but not sufficient, determinant of risk. According to available global
statistics, least developed countries represent 11 % of the population exposed to hazards but
account for 53 % of casualties, while the most developed countries account for 1.8 % of all
victims (Peduzzi et al., 2009) with a population exposure of 15 %. These figures show that
similar exposures with contrasting levels of development, of land- use planning and of mitigation
measures lead to drastically different tolls of casualties. Hence it is possible to be exposed, but
not vulnerable; however, it is necessary to also be exposed to be vulnerable to an extreme
event.
Due to its multidimensional nature, exposure is highly dynamic, varying across spatial and
temporal scales: depending on the spatial basic units at which the risk assessment is
performed, exposure can be characterised at different spatial scales (e.g. at the level of
individual buildings or administrative units).
Population demographics and mobility, economic development and structural changes in
society transform exposure over time. The quantification of exposure is challenging because
of its interdependent and dynamic dimensions. The tools and methods for defining exposure
need to consider the dynamic nature of exposure, which evolves over time as a result of often
unplanned urbanisation, demographic changes, modifications in building practice and other
socioeconomic, institutional and environmental factors (World Bank, GFDRR, 2014). Various
alternative or complementary tools and methods are followed to collect exposure-related data;
they include rolling census and digital in situ field surveys. When the amount, spatial coverage
and/or quality of the information collected on the ground is insufficient for populating exposure
databases, the common practice is then to infer characteristics on exposed assets from several
indicators, called proxies. Exposure modelling also has a key role to play in risk assessment,
especially for large-scale disaster risk models (regional to global risk modelling (De Bono and
Chatenoux, 2015; De Bono and Mora, 2014)). Among the different tools for collecting
information on exposure, Earth observation represents an invaluable source of up-to-date
information on the extent and nature of the built-up environments, ranging from the city level
(using very high spatial resolution data) to the global level (using global coverage of satellite
data) (Deichmann et al., 2011; Dell'Acqua et al., 2013; Ehrlich and Tenerelli, 2013). Besides,
change-detection techniques based on satellite images can provide timely information about
changes to the built-up environment (Bouziani et al., 2010). The choice of the approach
determines the resolution (spatial detail) and the extent (spatial coverage) of the collected
exposure data. It also influences the quality of the collected information. Despite the general
conceptual and theoretical understanding of disaster exposure and the drivers for its dynamic
variability, few countries have developed multihazard exposure databases to support policy
formulation and disaster risk- reduction research. Existing exposure databases are often hazard
specific (earthquakes, floods and cyclones), sector specific (infrastructure and economic) or
target specific (social, ecosystems and cultural) (Basher et al., 2015). They are often static,
offering one-time views of the baseline situation, and cannot be easily integrated with
vulnerability analysis. This chapter reviews the current initiatives for defining and mapping
exposure at the EU and global levels. It places emphasis on remote sensing-based products
developed for physical and population exposure mapping. Innovative approaches based on
probabilistic models for generating dynamic exposure databases are also presented together
with a number of concrete recommendations for priority areas in exposure research. The
broader aspects of exposure, including environment (e.g. ecosystem services) and agriculture
(e.g. crops, supply chains and infrastructures), deserve to be addressed in a dedicated future
chapter and will not be covered by the current review.

 Why do we need exposure


There is a high demand for exposure data by the communities that address disaster risk
reduction (DRR). National governments and local authorities need to implement DRR
programmes; the insurance community needs to set premiums and manage their aggregate
exposures; civil society and the aid community need to identify the regions of the world that
most urgently require DRR measures (Ehrlich and Tenerelli, 2013). Effective adaptation and
disaster risk management (DRM) strategies and practices depend on a rigorous understanding
of the dimensions of exposure (i.e. physical and economic) as well as a proper assessment of
changes and uncertainties in those dimensions (Cardona et al., 2012).
Risk models require detailed exposure data (e.g. with information on buildings, roads and other
public assets) to produce as outputs risk metrics such as the annual expected loss and the
probable maximum loss (see Chapter 2.4). For instance, catastrophe models commonly used
by the insurance industry include an exposure module, which represents either a building of
specific interest, a dwelling representative of the average construction type in a given area or an
entire portfolio of buildings with different characteristics
(e.g. an entire city). The characteristics may include physical characteristics like building height,
occupancy rate, usage (private or public, e.g. commercial, industrial, etc.), construction type (e.g.
wood or concrete) and age, and also non-physical characteristics like the replacement cost
which is needed for calculating the loss at a certain location (Michel-Kerjan et al., 2013).
Besides, insurance companies need to assess and model the business interruption that
represents a major part of the total economic loss. To quantify loss due to business interruption,
exposure databases need to include information on building contents and business information
for different types of properties (Rose and Huyck 2016). These industry exposure databases are
often proprietary and use heterogeneous taxonomies and classification systems which hinder
efforts of merging independently developed datasets (GFDRR, 2014). However, the Oasis
(OASIS, 2016) community and the recently established Insurance Development Forum are
dedicating special efforts to exposure data harmonisation, sourcing, structuring and
maintenance at the global level. Moreover, an initiative led by Perils is offering de facto
standard industry exposure databases for property across Europe at an aggregated spatial level
(PERILS, 2016). If the aim is to know whether a particular feature is likely to be affected or not
by a certain level of hazard, then it is enough to simply identify the location of that feature (e.g.
building location and building footprint) or group of features (e.g. building stock). Whereas if the
purpose is to understand the potential economic impacts or human loss, then other attributes of
the feature or group of features need to be described (e.g. the type of construction materials,
population density and the replacement value). Exposure databases detailed to single building
units are seldom available for disaster modellers. Instead, the exposure data are more often
available in an aggregated level for larger spatial units related to arbitrary areal subdivision of
the settlements, census block, postal codes, city blocks or more regular gridded subdivision. A
spatial unit may contain a statistical summary of building information such as average size and
average height, density or even relative distribution of building types (Ehrlich and Tenerelli,
2013). For optimal results the choice of the attribute and its granularity should be aligned with
the scale and the purpose of the risk assessment. To a certain extent, the requirements in terms
of granularity also depend on the peril being modelled: e.g. flood models require detailed
information on the location and building type. By contrast, windstorm models arguably need to
be less precise. Detailed gust speeds will not be known at a precise location level but rather
estimated on a broader spatial scale. There are clearly many attributes that can be attached to
exposure data. Developing such databases requires a multidisciplinary team of construction
engineers, economists, demographers and statisticians. In recent years, several exposure
datasets with regional or global coverage have attempted to generate detailed building
inventories and compile exposure data despite the challenges related to the heterogeneous
mapping schemas, the different typologies and the varying resolutions. In the following sections,
we review the existing initiatives at EU and global levels that have made a first step in
overcoming these obstacles either i) by using exposure proxies such as land-use and land-cover
products, ii) by using Earth observation technologies for mapping human settlements and
population or iii) by integrating existing information from different acquisition techniques, scales
and accuracies for characterising the assets at risk and for describing the building stock. We
purposely limit the review here to large-scale exposure datasets that have a spatial component
(i.e. associated with a geographic location) and that are open, hence ensuring replicability and a
better understanding of risk (World Bank, 2014).
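As a sketch of what such an exposure database might hold, the record structure and the grid-cell aggregation below are illustrative assumptions, not a published taxonomy; attribute names and values are invented for the example.

```python
# Exposure database sketch: individual building records aggregated to a grid
# cell with summary statistics. Attribute names and values are illustrative.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Building:
    grid_cell: str           # spatial unit identifier
    construction: str        # e.g. "wood", "concrete", "masonry"
    storeys: int
    occupancy: str           # e.g. "residential", "commercial"
    replacement_cost: float  # EUR

buildings = [
    Building("cell_042", "concrete", 5, "residential", 1_200_000),
    Building("cell_042", "masonry", 2, "commercial", 800_000),
    Building("cell_042", "wood", 1, "residential", 250_000),
]

total_value = sum(b.replacement_cost for b in buildings)
mix = Counter(b.construction for b in buildings)
print(f"cell_042: {len(buildings)} buildings, EUR {total_value:,.0f}, mix {dict(mix)}")
```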
 Land-cover and land-use products as proxies to exposure
These products outline areas with different uses, including ‘industrial’, ‘commercial’ and
‘residential’ classes, as well as non-impervious areas (e.g. green spaces).
Land-use and land-cover (LU/LC) information products that are usually derived from remote
sensing data may provide information on buildings and thus on exposure.
Some products may also describe the building density. LU/LC maps provide valuable
information on infrastructure such as roads. The spatial characteristics of LU/LC maps are
influenced by the minimum mapping unit, which refers to the smallest area entity to be
mapped.
o European land-use and land-cover products

The currently available EU-wide and global LU/LC products have minimum mapping units
ranging from 0.01 ha (e.g. the European Settlement Map (ESM)) to 100 ha (e.g. MODIS
land cover). At the EU level, the Corine Land Cover is the only harmonised European land-cover
dataset available since 1990. It comprises 44 thematic classes, with minimum mapping units of
25 ha for status layers and 5 ha for change layers. From 1990 until 2012, four such inventories were
produced and complemented by change layers, and they have been used for several applications like indicator
development, LU/LC change analysis (Manakos and Braun, 2014) and flood risk assessment
within the EU context (Lugeri et al., 2010). However, its limitations in terms of spatial resolution
do not allow the conversion of land- cover classes into accurate, physical exposure maps. To
complement LU/LC maps, detailed inventories of infrastructures are essential for assessing
risks to infrastructures as well as for managing emergency situations. In 2015, a geographical
database of infrastructure in Europe was developed, including transport networks, power plants
and industry locations (Marin Herrera et al., 2015). The database was successfully used in a
comprehensive multihazard and multisector risk assessment for Europe under climate change
(Forzieri et al., 2015). The Urban Atlas is another pan-European LU/LC product describing, in a
consistent way, all major European cities’ agglomerations with more than 100 000 inhabitants.
The current specifications of the Urban Atlas fulfil the condition of a minimum mapping unit of
0.25 ha, allowing the capture of urban, built-up areas in sufficient thematic and geographic detail
(Montero et al., 2014). The Urban Atlas cities are mapped in 20 classes, of which 17 are urban
classes. It is a major initiative dealing with the monitoring of urban sprawl in Europe, designed to
capture urban land use, including low- density urban fabric, and in this way it offers a far more
accurate picture of exposure in urban landscapes than the Corine Land Cover does. Despite its
accuracy and relevance for risk modelling, the main limitation of this product is its spatial
coverage, as it is restricted to large urban zones and their surroundings (more than 100 000
inhabitants). Currently, the continental map of built-up areas with the highest resolution so far
produced is the ESM (Florczyk et al., 2016). The ESM (Figure 2.7) is a 10 metre x 10 metre
raster map expressing the proportion of the remote sensing image pixel area covered by
buildings; it was produced in 2013-2014. It was developed jointly by two services of the European
Commission (JRC and REGIO). The ESM is distributed as a building density product at both 10
metre x 10 metre and 100 metre x 100 metre resolutions, each supporting specific types of
applications. For a pan-European risk assessment (Haque et al. 2016), the coarser (100 metre)
resolution is sufficient, whereas the 10 metre product would be necessary for local to regional
risk assessment.
o Global land-use and land-cover products

A number of global land- cover products covering different time periods and different spatial
resolutions have been created from remote sensing, e.g. MODIS (Friedl et al., 2010), Africover,
GLC-SHARE of 2014 (Latham et al., 2014), GLC2000 (Fritz et al., 2010), IGBP (Loveland et al.,
2000) and GlobCover (Arino et al., 2012). Many of these products are based on coarse
resolution sensors, e.g. GLC2000 is at 1 km, MODIS is at 500 metre and GlobCover is at 300
metre resolution, which hampers the potential to provide accurate exposure data that can
directly feed into risk assessment models. The first high- resolution (30 metres) global land -
cover product is the GlobeLand30, which comprises 10 types of land cover including artificial
surfaces for years 2000 and 2010 (Chen et al., 2015). However, the ‘artificial surfaces’ class
consists of urban areas, roads, rural cottages and mines impeding the straightforward
conversion of the data into physical exposure maps. The Global Urban Footprint describing
built-up areas is being developed by the German Aerospace Centre and is based on the
analysis of radar and optical satellite data. The project intends to cover the extent of the large
urbanised areas of megacities for four time slices, 1975, 1990, 2000 and 2010, at a spatial
resolution of 12 metres x 12 metres (Esch et al., 2012). Once available, this dataset will allow
effective comparative analyses of urban risks and their dynamics among different regions of the
world. The global human settlement layer (GHSL) is the first global, fine-scale, multitemporal,
open dataset on the physical characteristics of human settlements. It was produced in the
framework of the GHSL project, which is supported by the European Commission. The data
have been released on the JRC open data catalogue (Global Human Settlement Layer, 2016).
The main product, GHS Built-up, is a multitemporal built-up grid (builtup classes: 1975, 1990,
2000 and 2014 ), which has been produced at high resolution (approximately 38 metre x 38
metre). The GHS Built-up grid was obtained from the processing of the global Landsat archived
data in the last 40 years in order to understand global human settlement evolution. The target
information collected by the GHSL project is the built-up structure or building aggregated in
built-up areas and then settlements according to explicit spatial composition laws. They are the
primary sign and empirical evidences of human presence on the global surface that are
observable by current remote sensing platforms. As opposed to standard remote sensing
practices based on urban land cover or impervious surface notions, the GHSL semantic
approach is continuously quantitative and centred around the presence of buildings and their
spatial patterns (Pesaresi et al., 2015; Pesaresi et al., 2013). This makes the GHSL perfectly
suitable for describing the physical exposure and its changes over time at a fine spatial
resolution (Pesaresi et al., 2016).

 Status of population exposure at the EU and global levels


Population exposure has both a static and a dynamic component. The static component relates to the number of inhabitants per mapping unit and their characteristics, whereas the dynamic component refers to their demography and their activity patterns, which capture the movement of population through space and time. Population
distribution can be expressed as either the absolute number of people per mapping unit or as
population density. Census data are commonly used for enumerating population and for making
projections concerning population growth. Census data may also contain other relevant
characteristics that are used in risk assessment, such as information on age, gender, income,
education and migration. For large-scale analysis, census data are costly, seldom available in large parts of the world, and often outdated or unreliable. Remote sensing, combined with dasymetric mapping, represents an interesting alternative for large-scale mapping of human exposure. Dasymetric mapping consists in disaggregating population figures reported for coarse source zones into a finer set of zones using ancillary geographical data such as LU/LC. A minimal sketch of this operation is given below.
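The sketch below illustrates the basic dasymetric logic with hypothetical data: population totals reported for two coarse zones are redistributed to fine cells in proportion to an ancillary built-up weight. The zone identifiers, weights and totals are purely illustrative and are not taken from any of the datasets discussed here.

# Dasymetric disaggregation sketch: split each source-zone population total
# across its fine cells in proportion to an ancillary weight (here, built-up area).
# Zone identifiers, weights and totals are hypothetical.
from collections import defaultdict

def dasymetric(cell_zone, cell_weight, zone_population):
    """Return population per fine cell, preserving each source zone's total."""
    zone_weight_sum = defaultdict(float)
    for cell, zone in cell_zone.items():
        zone_weight_sum[zone] += cell_weight[cell]
    cell_pop = {}
    for cell, zone in cell_zone.items():
        share = cell_weight[cell] / zone_weight_sum[zone] if zone_weight_sum[zone] else 0.0
        cell_pop[cell] = zone_population[zone] * share
    return cell_pop

cell_zone = {"c1": "A", "c2": "A", "c3": "B", "c4": "B"}    # fine cell -> source zone
cell_weight = {"c1": 0.8, "c2": 0.2, "c3": 0.5, "c4": 0.5}  # e.g. built-up proportion
zone_population = {"A": 1000, "B": 400}                     # census counts per zone
print(dasymetric(cell_zone, cell_weight, zone_population))
# {'c1': 800.0, 'c2': 200.0, 'c3': 200.0, 'c4': 200.0}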
o EU-wide population grids

At the EU level, a European population grid with a spatial resolution of 100 metres x 100 metres
was produced (Batista e Silva et al., 2013). The method involved dasymetric mapping
techniques with a resident population reported at the commune level for the year 2011 and a
refined version of the Corine Land Cover as the main input sources. The data are publicly distributed on the geographic information system of the Commission, following the standardised 1 km x 1 km grid net and the INSPIRE specifications. A new population grid at 10 metres has recently been produced for the whole European territory, which builds on the ESM at 10 metres, used as a proxy for the distribution of the residential population, and on 2011 census data (Freire et al., 2015a). The layer has been produced upon request of the European Commission and will soon be made freely available and downloadable online. Figure 2.8 shows an example of potential uses of the 10-metre-resolution, EU-wide ESM map for modelling day and night population distribution in volcanic risk assessment; a simplified sketch of such a day/night redistribution is given below.
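The following sketch is purely illustrative and is not the method behind Figure 2.8: it assumes hypothetical residential (night-time) counts and workplace shares per cell, and it shifts a fixed fraction of residents to workplace cells to approximate a daytime distribution.

# Illustrative day/night population redistribution (hypothetical data and shares,
# not the actual method used for Figure 2.8): a fraction of the resident (night)
# population is moved from home cells to workplace cells to approximate daytime exposure.

night_pop = {"c1": 500.0, "c2": 300.0, "c3": 200.0}   # residents per cell (night-time)
workplace_share = {"c1": 0.1, "c2": 0.3, "c3": 0.6}   # assumed share of jobs in each cell
commuter_fraction = 0.4                               # assumed share of residents at work by day

commuters = sum(night_pop.values()) * commuter_fraction
day_pop = {
    cell: night_pop[cell] * (1 - commuter_fraction) + commuters * workplace_share[cell]
    for cell in night_pop
}
print(day_pop)   # totals are preserved: sum(day_pop) == sum(night_pop)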
o Global human exposure

Global distribution of population, in terms of counts or density per unit area, is considered the primary source of information for exposure assessment (Pittore et al., 2016). Global population data are available from the LandScan Global Population Database (Dobson et al., 2000), which provides information on the average population over 24 hours on a 1 km resolution grid. The LandScan data have annual updates and are widely used despite being a commercial product.
Although LandScan is reproduced annually and the methods are constantly revised, the annual
improvements made to the model and the underlying spatial variables advise against any
comparison of versions. Other global human exposure datasets include the Gridded Population of the World (GPWv4), developed by SEDAC, which provides population estimates at a spatial resolution of approximately 1 km at the equator. For GPWv4, population input data are collected at the most detailed spatial resolution available from the results of the 2010 round of censuses, which occurred between 2005 and 2014. The input data are extrapolated to produce population estimates for the years 2000, 2005, 2010, 2015 and 2020 (Neumann et al., 2015); a minimal sketch of such an extrapolation is given after this paragraph. WorldPop is another open initiative providing estimated population counts at a spatial resolution of 100 metres x 100 metres through the integration of census surveys, high-resolution maps and satellite data (Lloyd et al., 2017). Within the WorldPop project, population counts and densities are being produced for 2000-2020; the available data currently cover mainly America, Asia and Africa.
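As a simple illustration of the kind of extrapolation described above (not the actual GPWv4 procedure, whose details are documented by SEDAC), an exponential growth rate estimated from two census counts can be used to project a zone's population to the target years; all figures below are hypothetical.

# Hypothetical example of projecting census counts to target years with an
# exponential growth rate; an illustration only, not the GPWv4 method.
import math

def project(pop_t0, pop_t1, year0, year1, target):
    """Project population to the target year using the growth rate between two censuses."""
    rate = math.log(pop_t1 / pop_t0) / (year1 - year0)   # annual exponential growth rate
    return pop_t1 * math.exp(rate * (target - year1))

census_2005, census_2014 = 12000.0, 15000.0              # two census counts for one zone
for year in (2000, 2005, 2010, 2015, 2020):
    print(year, round(project(census_2005, census_2014, 2005, 2014, year)))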
Recently, within the framework of the GHSL, an improved global layer called GHS-POP, which
maps the distribution of the population with unprecedented spatio-temporal coverage and detail,
has been released. The data have been produced from the best available global built-up layer
(GHS-BU) and from census geospatial data derived from GPWv4. The population grids
correspond to residential-based population estimates in built-up areas and not ‘residential
population’ or ‘population in their place of residence’, for which consideration of land use would
be required (Freire et al., 2015b). The multitemporal data are available free of charge at a spatial resolution of 250 metres x 250 metres for 1975, 1990, 2000 and 2015. They have already been used successfully in the context of global risk assessment for the analysis of the increase in population exposure to coastal hazards over the last 40 years (Pesaresi et al., 2016).

 Exposure data describing the building stock


Several exposure databases attempt to characterise the assets at risk by including physical
exposure information. The latter is often derived from the integration of a large variety of
possible exposure information sources using different modelling approaches. We review here
the existing initiatives that describe the building stock through a variety of attributes (e.g. height,
construction material and replacement value).
o EU-wide building inventory databases

The European Union’s seventh framework programme for research and technological
development (FP7) project, the NERA (Network of European Research Infrastructure for
Earthquake Risk Assessment and Mitigation), initiated the development of a European building
inventory database to feed into the Global Exposure Database (GED) (see Chapter 2.2.5.2).
The database builds upon the outcomes of the NERIES project (Network of Research
Infrastructures for European Seismology) to compile building inventory data for many European
countries and Turkey (Erdik et al., 2010). The European building inventory is a database that
describes the number and area of different European building typologies within each cell of a
grid, with a resolution of at least 30 arc seconds (approximately 1 km2 at the equator) for use in
the seismic risk assessment of European buildings (Crowley et al., 2012). The database
includes building/dwelling counts and a number of attributes that are compatible with the Global
earthquake model’s basic building taxonomy (i.e. material, lateral load, number of storeys, date
of construction, structural irregularity, occupancy class, etc.). This inventory contains useful information for seismic risk assessment and for the estimation of economic loss at the EU level. A minimal sketch of what a record of such a gridded inventory might look like is given below.
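The sketch below shows one hypothetical way to represent a single grid cell of a building inventory with GEM-style taxonomy attributes. The field names and values are illustrative and do not reproduce the actual NERA or GED schema.

# Hypothetical record for one 30-arc-second grid cell of a building inventory,
# with GEM-style taxonomy attributes (illustrative schema, not the NERA/GED format).
from dataclasses import dataclass, field
from typing import List

@dataclass
class BuildingTypology:
    material: str             # e.g. "unreinforced masonry"
    lateral_load_system: str  # e.g. "wall"
    storeys: str              # e.g. "1-3"
    occupancy: str            # e.g. "residential"
    count: int                # buildings of this typology in the cell
    floor_area_m2: float      # total built area of this typology in the cell

@dataclass
class GridCell:
    cell_id: str
    lon: float
    lat: float
    typologies: List[BuildingTypology] = field(default_factory=list)

cell = GridCell("E045N215", 12.49, 41.90, [
    BuildingTypology("unreinforced masonry", "wall", "1-3", "residential", 420, 63000.0),
    BuildingTypology("reinforced concrete", "infilled frame", "4-7", "residential", 150, 97500.0),
])
print(sum(t.count for t in cell.typologies), "buildings in cell", cell.cell_id)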
o Global building inventory databases

The prompt assessment of global earthquakes for response (PAGER) (Jaiswal et al., 2010), the
GED for GAR 2013 (GEG-2013) and the GED for the Global earthquake model (GED4GEM)
are examples of global exposure databases that specifically include physical exposure
information. On a country-by-country level, the PAGER (Jaiswal et al., 2010) contains estimates
of the distribution of building types categorised by material, lateral force resisting system and
occupancy type (residential or non-residential, urban or rural). The database draws on and
harmonises numerous sources: (1) United Nations statistics, (2) the United Nations habitat’s
demographic and health survey database, (3) national housing censuses, (4) the world housing
encyclopaedia project and (5) other literature. PAGER provides a powerful basis for inferring
structural types globally. The database is freely available for public use, subject to peer review,
scrutiny and open enhancement. The GEG-2013 (De Bono and Chatenoux, 2015) is a global
exposure dataset at 5 km spatial resolution which integrates population and country-specific
building typology, use and value. It has been developed for the global risk assessment 2013
with the primary aim of assessing the risk of economic loss as a consequence of natural
hazards at a global scale. The development of GEG-2013 is based on a top-down, or 'downscaling', approach, in which information on socioeconomic indicators, building types and capital stock at the national scale is transposed onto a regular grid, using geographic population and gross domestic product distribution models as proxies; a minimal numerical sketch of this downscaling is given below. GEG-2013 is limited in some important ways: i) it was fundamentally constructed using national indicators that were successively disaggregated onto a 5 × 5 km grid; and ii) the capital stock in each cell is distributed on the basis of the number of persons living in that cell and does not take into account the real value of the assets in the cell. The data can be downloaded from the GAR risk data platform.
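As a purely numerical illustration of such a top-down approach (hypothetical figures, not the actual GEG-2013 processing), a national capital stock can be spread over grid cells in proportion to the population living in each cell:

# Top-down ('downscaling') sketch with hypothetical figures: a national capital
# stock is distributed over grid cells in proportion to each cell's population.
national_capital_stock = 5.0e9                          # assumed total asset value (currency units)
cell_population = {"g1": 2000, "g2": 500, "g3": 7500}   # population per 5 km x 5 km cell

total_population = sum(cell_population.values())
cell_capital = {
    cell: national_capital_stock * pop / total_population
    for cell, pop in cell_population.items()
}
print(cell_capital)   # e.g. g3 receives 75% of the national stock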
The GED4GEM is a spatial inventory of exposed assets for the purposes of catastrophe modelling
and loss estimation (Dell’Acqua et al., 2013; Gamba et al., 2012). It provides information about
two main assets at risk: residential population and residential buildings. Potentially, it can also
include information about non-residential population and buildings, although the amount of
information for these two additional assets is currently quite limited. In general, the GED is
divided into four different levels, which are populated from different data sources and use
different techniques. Each level has a different geographical scale at which the data it contains are statistically consistent, as well as a different level of completeness. Each level is thus appropriate for a different use:
• Level 0 — A representation of the population and buildings on a 30-arc-second grid, with information about the buildings coming from statistics available at the country level. The building distribution is thus the same for each element of the grid belonging to a given country, with only a binary distinction between 'rural' and 'urban' areas.
• Level 1 — A representation of population and buildings on a 30-arc-second grid, with information about the buildings derived from subnational statistics (e.g. for regions, states, provinces or municipalities, depending on the country).
• Level 2 — A representation in which each element of the same 30-arc-second grid includes enough information to be consistent by itself, and no distribution over a larger geographical scale is used. This corresponds to the situation in which building counts are obtained not by disaggregating a distribution available for a wider area onto the elements of the grid but by aggregating building-level data available for the area of interest.
• Level 3 — A representation at the single-building level, including all the possible information about each building, such as structural, occupancy and economic variables.
The first version of the GED contains aggregate information on population and on the number, built area and reconstruction cost of residential and non-residential buildings at a 1 km resolution. Detailed datasets on single buildings are available for a selected number of areas and will increase over time. A small sketch of how such a multilevel structure might be represented is given below.
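The sketch below is one hypothetical way to tag cell records with a GED-style level and keep the most detailed record available for each cell; it illustrates the idea of the hierarchy and is not the actual GED schema.

# Illustrative (not the actual GED schema): cell records tagged with the GED
# level of their building information, and a helper that keeps the most
# detailed level available for each cell.
from enum import IntEnum

class GEDLevel(IntEnum):
    COUNTRY_STATS = 0      # Level 0: national statistics, rural/urban split only
    SUBNATIONAL_STATS = 1  # Level 1: subnational statistics
    CELL_CONSISTENT = 2    # Level 2: aggregated building-level data per cell
    PER_BUILDING = 3       # Level 3: individual buildings

records = [
    {"cell": "E045N215", "level": GEDLevel.COUNTRY_STATS, "buildings": 410},
    {"cell": "E045N215", "level": GEDLevel.CELL_CONSISTENT, "buildings": 388},
    {"cell": "E046N215", "level": GEDLevel.SUBNATIONAL_STATS, "buildings": 120},
]

best = {}
for rec in records:   # keep the most detailed record per cell
    cur = best.get(rec["cell"])
    if cur is None or rec["level"] > cur["level"]:
        best[rec["cell"]] = rec
print({cell: (r["level"].name, r["buildings"]) for cell, r in best.items()})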

 Future trends in exposure mapping: towards dynamic exposure modelling


The review of existing initiatives for defining and mapping exposure shows that there is a clear
trend towards the use of satellite data in combination with statistical modelling (top-down and
bottom-up approaches) for constructing exposure data: remotely sensed data sourcing for exposure is particularly useful in low-income and emerging economies, which lack well-established data
collection resources, frameworks and agencies. These economies are often also undergoing
rapid urbanisation with dramatically changing exposure concentrations over short periods of
time. In parallel, over the last 5 years, the field of risk assessment has been increasingly driven
by open data and open-source modelling, as highlighted in the report Understanding risk in an
evolving world (GFDRR, 2014).
Open data initiatives such as the Humanitarian OpenStreetMap Team have contributed significantly to the collection of exposure data in vulnerable countries: in a little over a year,
more than 160 000 individual buildings were mapped through crowdsourcing and in situ
surveys. At present, one of the most challenging aspects of exposure modelling is to implement
multihazard exposure models through dynamic, scalable frameworks. The dynamic nature of
such frameworks in this context reflects the need to explicitly account for both the time variability
of the exposed assets and the constant evolution of their representation in the model, which is
seldom complete and exhaustive.
In a dynamic, multiresolution exposure model, two basic types of entities should therefore
coexist: atomic data and statistical (aggregated) models. Atomic data refer to physical structures
such as buildings or bridges that have been analysed individually and possibly not fully
enumerated. Statistical models are aggregated descriptions defined over specific geographic
boundaries and possibly influenced by atomic data. Atomic data and statistical models are
closely related and mutually interactive, with both having geometric properties and attributes.
Compound models accommodating both atomic data and statistical models would be able to
optimally exploit direct, in situ information obtained from specialised surveys, even if not
complete and exhaustive, by constraining a set of statistical distributions describing the assets’
attributes at the atomic level (e.g. material properties for a single building) or at the aggregation
boundary level (for instance the expected number of storeys of different building types based on
empirical observations in a city district). At the atomic level, this can be obtained, for instance, by modelling the (in)dependence relationships among different assets' attributes and with external covariates (e.g. geographical location, altitude, terrain slope, etc.). An example of a
probabilistic information integration approach is given in Pittore and Wieland (2013), where
Bayesian networks are proposed for their sound treatment of uncertainties and for the possibility
of seamless merging of different data sources, including legacy data, expert judgement and
data-mining-based inferences. Because of the increasingly large variety of possible exposure information sources, including sparse and incomplete data available at small-scale resolution, the flexible integration of existing information at different scales and accuracies must be confronted so that available information is not discarded. To exploit the full capabilities of the available information in combined spatio-temporal approaches, a database is needed that allows one to model and query complex data types composed of multiple spatial and temporal dimensions. Information extracted from a satellite image or sampled manually in situ shows different degrees of quality in terms of reliability and accuracy. Therefore, a probabilistic framework for information integration, updating and refinement is required, as exemplified in Pittore and Wieland (2013). During monitoring activities, the resulting information model continuously evolves, and a dynamic exposure database should be able to track an object's evolution over space and time while accounting for its identity, i.e. the lifespan of the object.
In this regard, Pittore et al. (2015) propose a novel approach to prioritise exposure data collection based on available information and additional constraints. They use the concept of focus maps (Pittore, 2015), which combine different information layers into a single raster representing the probability of a point being selected for surveying, conditional on the sampling probability of each of the other layers. Based on a focus map, a set of sampling points is generated and suitably routed on the existing road network. This allows further optimisation of the overall data collection by including additional survey constraints in the routing algorithm that drives the in situ data collection phase. Iteratively repeating this process allows for efficient model updating, which can be optimised to fit the available time and resources. A minimal sketch of focus-map-based sampling is given below.
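The following sketch uses hypothetical layers and weights (it is an illustration of the idea, not the implementation of Pittore et al.): a few normalised information layers are combined into a single focus map, and survey locations are then drawn with probability proportional to the map values.

# Focus-map sketch with hypothetical layers: combine normalised information
# layers into one probability raster and sample survey cells proportionally to it.
# Illustration of the concept only, not the implementation of Pittore et al. (2015).
import numpy as np

rng = np.random.default_rng(42)
shape = (50, 50)                                   # toy 50 x 50 grid
built_up = rng.random(shape)                       # e.g. built-up density layer
hazard = rng.random(shape)                         # e.g. hazard intensity layer
data_age = rng.random(shape)                       # e.g. how outdated the existing data are

focus = built_up * hazard * data_age               # combine layers (simple product weighting)
focus /= focus.sum()                               # normalise to a probability map

n_points = 20
flat_index = rng.choice(focus.size, size=n_points, replace=False, p=focus.ravel())
rows, cols = np.unravel_index(flat_index, shape)   # grid coordinates of survey candidates
print(list(zip(rows.tolist(), cols.tolist()))[:5])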

 Conclusions
The increasing availability of detailed and harmonised hazard datasets is calling for parallel
efforts in the production of standardised multihazard exposure information for disaster risk
models. GEDs can be a possible solution for harmonisation and for moving beyond single-
hazard databases. Several recommendations can be distilled from this overview and are
provided here to develop a roadmap towards the effective implementation of global, dynamic
exposure databases. Finally, exposure data collection should be regarded as a continuous process that sustains the ongoing re-evaluation of risks and enables effective DRR.
Partnership
Authoritative and non-authoritative sources should be integrated in order to ensure quality standards and compliance with disaster risk-reduction purposes. Within this context, it
becomes important to harvest data from crowd-sourced information and exploit volunteered
geographic information to augment authoritative sources and involve communities and experts,
especially in data-poor countries.
Knowledge
There is a need for quality assessment and an analysis of the uncertainty in the exposure data in order to avoid error propagation. Quantification of exposure data uncertainty is useful for decomposing the total uncertainty in the risk assessment into the individual uncertainties associated with the risk components (exposure uncertainty compared to that of hazard and
vulnerability). In addition, the communication of uncertainty to the users of the exposure
databases is also essential to ensure local understanding and trust in the data.
Innovation
Data and (statistical) models have to coexist in a statistically sound framework in order to
overcome the impracticality of having a complete and fully enumerated global dynamic
exposure database. The flexible integration of existing information at different scales and accuracies must be addressed so that available information is not discarded. In this regard, rapid, large-scale data collection based on remote sensing should be fully exploited and complemented, whenever possible, by information collected in situ using suitable sampling methodologies.
