Reservoir Model Design
A Practitioner's Guide
Second Edition

Philip Ringrose • Mark Bentley

Philip Ringrose
Equinor ASA
Norwegian University of Science and Technology
Trondheim, Norway

Mark Bentley
TRACS International Limited
Heriot-Watt University
Aberdeen and Edinburgh, Scotland, UK
Cover figure: Multiscale geological bodies and associated erosion, Lower Antelope Canyon, Arizona, USA. Photograph by Jonas Bruneau. © EAGE, reproduced with permission of the European Association of Geoscientists and Engineers.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book is about the design and construction of subsurface reservoir models.
In the early days of the oil industry, oil and gas production was essentially an
engineering activity, dominated by disciplines related to chemical and mechan-
ical engineering. Three-dimensional (3D) geological reservoir modelling was
non-existent, and petroleum geologists were mostly concerned with the inter-
pretation of wire-line well logs and with the correlation of geological units
between wells. Two important technological developments – computing and
seismic imaging – stimulated the growth of reservoir modelling, with computa-
tional methods being applied to mapping, volumetrics and reservoir simulation.
Geological or ‘static’ reservoir modelling was given further impetus from the
development of geostatistical techniques, allowing the reservoir modeller to
estimate inter-well reservoir properties and hence make statistical predictions.
3D reservoir modelling has now become the norm.
Reservoir models in themselves do not generate prospects or development
scenarios, increase productivity or reduce risk and uncertainty. They are
simply tools which reflect our ability to achieve these goals. Advances in
reservoir modelling cannot therefore be compared to technology
breakthroughs such as 4D seismic imaging, horizontal drilling or EOR
methods. Reservoir models are, however, invaluable tools for
integrating knowledge and for making strategic decisions.
As the world moves into the energy transition and oil and gas production is
likely to decline, energy and transportation will be increasingly powered by
renewable or low-emissions technologies. In this new era, CO2 disposal and
subsurface energy storage will become increasingly important and will make
new demands of subsurface modelling. The fluids used and stored in the
subsurface may change but the value of porous rock reservoirs will remain.
We will never cease to need to know how these fluids move in the subsurface,
and modelling of subsurface reservoirs will remain a tool for building this
understanding and making useful forecasts.
***
The explosion of reservoir modelling software packages and associated
geostatistical methods has created high expectations but also led to periodic
disappointments. This has given birth to the oft-quoted mantra “all models are wrong”.
This book emerged from a series of industry and academic courses given
by the authors and aimed at guiding the reservoir modeller through the pitfalls
and benefits of reservoir modelling, in the search for a reservoir model design
that is useful for forecasting. Reservoir modelling and simulation software
packages often come with guidance about which buttons to press and menus
to use for each operation, but less advice on the objectives and limitations of
the model algorithms. The result is that whereas much time is devoted to
model building, the outcomes of the models often fall short of their objectives.
Our central contention in this book is that problems with reservoir modelling
tend not to stem from hardware limitations or lack of software skills but from the
approach taken to the modelling – the model design. It is essential to think
through the design and to build fit-for-purpose models that meet the requirements
of the intended use. All models are not wrong, but in many cases models are used
to answer questions which they were simply not designed to answer.
We cannot hope to cover all the possible model designs and approaches, and
we have avoided as much as possible reference to specific software modelling
packages. Our aim is to share our experience and present a generic approach to
reservoir model design. Our design approach is geologically based – partly
because of our inherent bias as geoscientists – but mainly because subsurface
reservoirs are composed of rocks. The pore space which houses the ‘black gold’
of the oil age, the ‘golden age’ of gas or the fluids disposed and stored in the
new ‘age of sustainability’, has been constructed by geological processes – the
deposition of sandstone grains and clay layers, processes of carbonate cemen-
tation and dissolution, and the mechanics of fracturing and folding. Good
reservoir model design is therefore founded on good geological interpretation.
There is always a balance between probability (the outcomes of stochastic
processes) and determinism (outcomes controlled by limiting conditions). We
develop the argument that deterministic controls rooted in an understanding of
geological processes are the key to good model design. The use of probabilis-
tic methods in reservoir modelling without these geological controls is a poor
basis for decision making, whereas an intelligent balance between determin-
ism and probability offers a promising path.
We also discuss the decision-making process involved in reservoir
modelling. Human beings are notoriously bad at making good judgements –
a theme widely discussed in the social sciences and behavioural psychology.
The same applies to reservoir modelling – how do you know you have a fit-
for-purpose reservoir model? There are many possible responses, but most
commonly there is a tendency to trust the outcome of a reservoir modelling
process without appreciating the inherent uncertainties.
We hope this book will prove to be a useful guide to practitioners and
students of subsurface reservoir modelling in the fields of petroleum geosci-
ence, environmental geoscience, CO2 and energy storage and reservoir
engineering – an introduction to the complex, fascinating, rapidly-evolving
and multi-disciplinary field of subsurface reservoir modelling.
This book offers practical advice and ready-to-use tips on the design and
construction of reservoir models. This subject is variously referred to as
‘geological reservoir modelling,’ ‘static reservoir modelling’, ‘3D reservoir
modelling’ or ‘geomodelling’, and our starting point is very much the geol-
ogy. However, the end point is fundamentally the engineering representation
of the subsurface and in this sense these notes will deal with the important
interface with the world of dynamic, simulation modelling, with which static
modelling is intrinsically integrated.
Our central argument is that whether models succeed in their goals is generally determined by the higher-level issue of model design – building models which are fit for the purpose at hand.
Based on our own experiences and those of colleagues and groups we have
worked with on courses and in reviews, we distinguish five root causes which
commonly determine modelling success or failure:
1. Establishing the model purpose
Why are we logged on in the first place and what do we mean by ‘fit-for-purpose’?
2. Building a 3D architecture with appropriate modelling elements
The fluid-dependent choice on the level of detail required in a model
3. Understanding determinism and probability
Our expectations of geostatistical algorithms
4. Model scaling
Reconciling the jump between the scale of data and the cellular resolution of our models, and how to represent pore-scale fluid flow reasonably at a reservoir scale
5. Uncertainty-handling
Moving from single to multi-models, and where the design becomes subject to bias
Strategies for addressing these underlying issues will be dealt with in the
following chapters under the thematic headings of model purpose, the rock
model, the property model, upscaling flow properties and uncertainty-
handling.
Design in General
There are many more examples, however, of office blocks and accommodation units that are unattractive and plagued by design faults and inefficiencies – the carbuncles that should never have been built.
This architectural analogy gives us a useful setting for considering the
more exclusive art of constructing models of the subsurface.
Norman Foster building, 30 St. Mary Axe (Photograph from Foster and Blaser (1993) – reproduced with kind permission from Springer Science + Business Media B.V.)
The Petter Dass Museum, Alstahaug, Norway (© Petter Dass-museum, reproduced with permission)
Contents
1 Model Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Modelling for Comfort? . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Models for Visualisation Alone . . . . . . . . . . . . . . . . . . . . 2
1.3 Models for Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Models as a Front End to Simulation . . . . . . . . . . . . . . . . . 4
1.5 Models for Well Planning . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Models for Seismic Modelling . . . . . . . . . . . . . . . . . . . . . 5
1.7 Models for IOR/EOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Models for Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.9 The Fit-for-Purpose Model . . . . . . . . . . . . . . . . . . . . . . . . 8
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 The Rock Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 The Rock Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Model Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 The Model Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.1 Structural Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.2 Stratigraphic Data . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Model Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.1 Reservoir Models Not Geological Models . . . . . . . 20
2.4.2 Building Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.3 Model Element Types . . . . . . . . . . . . . . . . . . . . . . 20
2.4.4 How Much Heterogeneity to Include? ‘Flora’s Rule’ . . . . . . . . 23
2.5 Determinism and Probability . . . . . . . . . . . . . . . . . . . . . . 28
2.5.1 Balance Between Determinism and Probability . . . . 29
2.5.2 Different Generic Approaches . . . . . . . . . . . . . . . . 31
2.5.3 Forms of Deterministic Control . . . . . . . . . . . . . . . 32
2.6 Essential Geostatistics . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.6.1 Key Geostatistical Concepts . . . . . . . . . . . . . . . . . 34
2.6.2 Intuitive Geostatistics . . . . . . . . . . . . . . . . . . . . . . 39
2.7 Algorithm Choice and Control . . . . . . . . . . . . . . . . . . . . . 43
2.7.1 Object Modelling . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.7.2 Pixel-Based Modelling . . . . . . . . . . . . . . . . . . . . . 47
2.7.3 Texture-Based Modelling . . . . . . . . . . . . . . . . . . . 50
2.7.4 Algorithms Compared . . . . . . . . . . . . . . . . . . . . . . 51
Nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
1 Model Purpose
A reservoir engineer and a geoscientist establish model purpose against an outcrop analogue
Keywords: Design · Reservoir modelling · Simulation · Energy transition · Fit-for-purpose · Volumetrics · Well planning · Forecasting · Development planning

1.1 Modelling for Comfort?

There are two broad schools of thought on the purpose of models:

1. To provide a 3D, digital representation of a hydrocarbon reservoir, which can be built and maintained as new data becomes available, and used to support on-going lifecycle needs such as volumetric updates, well planning and, via reservoir simulation, production forecasting.
2. That there is little value in maintaining a single ‘field model’. Instead, we should build and maintain a field database, from which several fit-for-purpose models can be built quickly to support specific decisions.

The first approach seems attractive, especially if a large amount of effort is invested in the first build prior to a major investment decision. However, the ‘all-singing, all-dancing’ full-field approach tends to result in large, detailed models, generally working at the limit of the available software/hardware and which are cumbersome to update and difficult to pass hand-to-hand as people move between jobs. Significant effort can be invested simply in the on-going maintenance of these models, to the point that the need for the model ceases to be questioned and the purpose of the model is no longer apparent. In the worst case, the maintenance of the model becomes a job in itself and the investment in modelling technology may just be satisfying an urge for technical rigour in the lead up to a business decision – simply ‘modelling for comfort’ (Bentley 2015).

We argue that the route to happiness lies with the second approach: building fit-for-purpose models which are equally capable of creating comfort or discomfort around a business decision. Choosing this approach immediately raises the question “what purpose?”, as the model design will vary according to that purpose. To illustrate this, the following sections look at contrasting purposes of reservoir modelling, and the distinctive design of the models associated with these differing contexts.

1.2 Models for Visualisation Alone

Simply being able to visualise the reservoir in 3D was identified early in the development of modelling tools as a potential benefit of reservoir modelling. Simply having a 3D box in which to view the available data is beneficial in itself.

This is the most intangible benefit of modelling, as there is no output other than a richer mental impression of the subsurface, which is difficult to measure. However, most people benefit from 3D visualisation (Fig. 1.1), consciously or unconsciously, particularly where cross-disciplinary issues are involved. Some common examples are:

• To show the geophysicist the 3D structural model based on their seismic interpretations. Do they like it? Does it make geological sense? Have seismic artefacts been inadvertently included?
• To show the petrophysicist (well-log specialist) the 3D property model based on the well-log data supplied in 1D. Has the 3D property modelling been appropriate or have features been introduced which are inconsistent with the 1D knowledge along the wells, e.g. correlations and geological or petrophysical trends?
• To show the reservoir engineer the geo-model grid, which will be the basis for subsequent flow modelling. Is it usable? Does it conflict with prior perceptions of reservoir unit continuity?
• To show the well engineer what you are trying to target in 3D with the complex well path you have just planned. Does it look feasible?
• To show the asset team how a conceptual reservoir model sketched on a piece of paper actually transforms into a 3D volume.
• To show the manager or investment fund holder what the subsurface resource actually looks like. That oil and gas do not come from a ‘hole in the ground’ but from a heterogeneous pore-system requiring significant technical skills to access, and a different set of skills when we come to consider storing or permanently disposing of fluids in the subsurface.

Getting a strong shared understanding of the subsurface concept tends to generate useful discussions on commercial risks and uncertainties, and looking at models or data in 3D often facilitates this process. The value of visualisation alone is the improved understanding it gives.

If this is a prime purpose then the model need not be complex – it depends on the audience. In many cases, the model is effectively a 3D visual database and many of the steps described in the subsequent chapters in this book are not required to achieve the desired understanding.

1.3 Models for Volumes

Knowing how much oil and gas is in place or how much fluid can be stored or disposed of is usually one of the first goals of reservoir modelling. This may be done using a simple map-based approach, but the industry has now migrated to 3D software packages, which is appropriate given that volumetrics are intrinsically a 3D property. The tradition of calculating volumes from 2D maps was a necessary simplification, no longer required.

3D mapping to support volumetrics should be quick, and is ideal for quickly screening uncertainties for their impact on volumetrics, as in the case shown in Fig. 1.2, where the volumetric sensitivity to fluid contact uncertainties is being tested as part of a quick asset evaluation.

Fig. 1.2 Two models for different fluid contact scenarios built specifically for volumetrics

Models designed for this purpose can be relatively coarse, containing only the outline fault pattern required to define discrete blocks and the gross layering in which the volumes will be reported. The reservoir properties involved (e.g. porosity and net-to-gross) are statistically additive (see Chap. 3 for further discussion), which means cell sizes can be large. There is no requirement to create permeability models and, if this is for quick screening only, it may be sufficient to run 3D volumes for gross rock volume only, combining the remaining reservoir properties on spreadsheets.

Models designed for volumetrics should be coarse and fast.
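To make ‘coarse and fast’ concrete, the sketch below shows the kind of contact-uncertainty screening such a model supports. It is a minimal, hypothetical example – the cell geometry, property values and fluid constants are invented for illustration and are not taken from the case in Fig. 1.2:

```python
import numpy as np

# Hypothetical coarse-grid volumetric screening: one value per cell.
# Properties like net-to-gross and porosity are statistically additive,
# so large cells and simple multiplication are acceptable here.
bulk_volume = np.array([4.0e6, 5.2e6, 3.1e6])    # cell bulk volumes (m3)
cell_top = np.array([2310.0, 2335.0, 2360.0])    # cell top depths (m TVDSS)
cell_base = np.array([2340.0, 2365.0, 2390.0])   # cell base depths (m TVDSS)
ntg = np.array([0.75, 0.60, 0.55])               # net-to-gross per cell
porosity = np.array([0.24, 0.21, 0.18])          # porosity per cell
sw = np.array([0.25, 0.30, 0.45])                # water saturation per cell

def stoiip_mstb(owc: float, bo: float = 1.2) -> float:
    """Stock-tank oil in place (thousand barrels) for a given contact depth."""
    # Fraction of each cell above the oil-water contact (simple linear cut).
    frac_above = np.clip((owc - cell_top) / (cell_base - cell_top), 0.0, 1.0)
    grv = bulk_volume * frac_above                # gross rock volume (m3)
    hcpv = grv * ntg * porosity * (1.0 - sw)      # hydrocarbon pore volume (m3)
    return hcpv.sum() / bo * 6.2898 / 1.0e3       # m3 -> thousand barrels

# Screen the volumetric sensitivity to the contact uncertainty (cf. Fig. 1.2).
for owc in (2350.0, 2365.0, 2380.0):
    print(f"OWC {owc} m: STOIIP ~ {stoiip_mstb(owc):,.0f} Mstb")
```

The point of the sketch is the shape of the workflow, not the numbers: a handful of additive properties per coarse cell, re-run in seconds per contact scenario.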
1.4 Models as a Front End to Simulation

The majority of reservoir models, however, are built for input to flow simulators. To be successful, such models have to capture the essential permeability heterogeneity which will impact on reservoir performance. If the static models fail to capture this, the subsequent simulation forecasts may be useless.

The requirement for capturing connected permeability usually means finer scale modelling is required because permeability is a multiplicative rather than an additive property. Unlike models for volumetrics, the scope for simple averaging of detailed heterogeneity is limited. Issues of grid geometry and cell shape are also more pressing for flow models (Fig. 1.3); strategies for dealing with this are discussed in Chap. 4.

At this point it is sufficient to simply appreciate that taking a static geological model through to simulation automatically requires additional design, with a focus on permeability architecture.

Fig. 1.3 Rock model (a) and property model (b) designed for reservoir simulation for development planning (c)
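The additive/multiplicative distinction can be illustrated with a toy averaging calculation – a hedged sketch with invented layer values, not a prescription from this chapter. Arithmetic averaging of porosity over fine layers gives a meaningful coarse value, whereas the candidate averages for permeability differ by orders of magnitude depending on the assumed flow direction:

```python
import numpy as np

# Toy stack of fine layers within one coarse cell (values invented).
porosity = np.array([0.25, 0.22, 0.05, 0.24])   # additive property
perm = np.array([800.0, 500.0, 0.1, 650.0])     # mD; multiplicative property

phi_coarse = porosity.mean()                    # arithmetic mean is adequate
k_arith = perm.mean()                           # upper bound: flow along layers
k_harm = len(perm) / np.sum(1.0 / perm)         # lower bound: flow across layers
k_geom = np.exp(np.log(perm).mean())            # often quoted for random mixtures

print(f"porosity: {phi_coarse:.3f}")
print(f"perm arithmetic {k_arith:.1f} mD, geometric {k_geom:.1f} mD, "
      f"harmonic {k_harm:.2f} mD")
# The three permeability averages differ by orders of magnitude: the thin
# 0.1 mD bed barely moves the arithmetic mean but dominates the harmonic
# mean - which is why permeability architecture, not averaging alone, must
# be captured before simulation.
```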
1.5 Models for Well Planning

If the purpose of the modelling exercise is to assist well planning and geosteering, the model may require no more than a top structure map, nearby well ties and seismic attribute maps. Wells may also be planned using simulation models, allowing for alternative well designs to be tested against likely productivity.

It is generally preferable to design well paths in reservoir models which capture all factors likely to impact a fairly costly investment decision. Most geoscience software packages have good well design functionality allowing for accurate well-path definition in a high resolution static model. Figure 1.4 shows an example model for a proposed horizontal well, the trajectory of which has been optimised to access oil volumes (HCIIP) by careful geo-steering with reference to expected stratigraphic and structural surfaces.

Some thought is required around the determinism-probability issue referred to in the prologue and explored further in Chap. 2 because, although there are many possible statistical simulations of a reservoir, there will only be one final well path. It is therefore only reasonable to target the wells at the more deterministic features in the model – features that are placed in 3D by the modeller and determined by the underlying reservoir concept. These typically include fault blocks, key stratigraphic rock units, and high porosity features which are well determined, such as channel belts or seismic amplitude ‘sweet spots’. It is wrong to target wells at highly stochastic model features, such as a simulated random channel, stochastic porosity highs or small-scale probabilistic bodies (Fig. 1.5).

The dictum is that wells should only target highly probable features; this means well prognoses (and geosteering plans) can only be confidently conducted on models designed to be largely deterministic. If we try and drill a probabilistic rock body, we will probably be disappointed.

Having designed the well path it can be useful to monitor the actual well path (real-time updates) by incrementally reading in the well deviation file to follow the progress of the ‘actual’ well vs. the ‘planned’ well, including uncertainty ranges. Using visualisation, it is easier to understand surprises as they occur, particularly during geosteering (e.g. Fig. 1.4).
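As a small illustration of the ‘actual vs. planned’ monitoring idea, the sketch below compares an incoming deviation survey against the planned trajectory. The arrays and the tolerance are hypothetical, and a real workflow would compute positions from the full deviation survey (e.g. by minimum curvature) rather than take them as given:

```python
import numpy as np

# Planned and actual well paths as (MD, x, y, TVD) samples - invented numbers.
planned = np.array([[0, 0.0, 0.0, 0.0],
                    [1000, 150.0, 40.0, 950.0],
                    [2000, 600.0, 160.0, 1750.0],
                    [3000, 1300.0, 350.0, 2150.0]], dtype=float)
actual = np.array([[0, 0.0, 0.0, 0.0],
                   [1000, 140.0, 55.0, 960.0],
                   [2000, 570.0, 185.0, 1775.0]], dtype=float)  # still drilling

def offset_from_plan(planned: np.ndarray, actual: np.ndarray) -> np.ndarray:
    """3D distance between actual and planned positions at each surveyed MD."""
    md = actual[:, 0]
    plan_xyz = np.column_stack([np.interp(md, planned[:, 0], planned[:, i])
                                for i in (1, 2, 3)])
    return np.linalg.norm(actual[:, 1:] - plan_xyz, axis=1)

offsets = offset_from_plan(planned, actual)
for md, d in zip(actual[:, 0], offsets):
    flag = "  <-- outside 25 m tolerance, review geosteering" if d > 25 else ""
    print(f"MD {md:6.0f} m: {d:5.1f} m from plan{flag}")
```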
1.6 Models for Seismic Modelling

Over the last few decades, geophysical imaging has led to great improvements in reservoir characterisation – better seismic imaging allows us to ‘see’ progressively more of the subsurface. However, an image based on sonic wave reflections requires translation into rock and fluid properties. Geological reservoir models are therefore important as a priori input to Quantitative Interpretation (QI) seismic studies.

This may be as simple as providing the layering framework for routine seismic inversion, or as complex as using Bayesian probabilistic rock and fluid prediction to merge seismic and well data. The nature of the required input model varies according to the QI process being followed – this needs to be discussed with the geophysicist.

In the example shown here (Fig. 1.6), a reservoir model (top) has been passed through to the simulation stage to predict the acoustic impedance change to be expected on a 4D seismic survey (middle). The actual time-lapse (4D) image from seismic (bottom) is then compared to the synthetic acoustic impedance change, and the simulation is history-matched to achieve a fit.

Fig. 1.6 Reservoir modelling in support of seismic interpretation: (a) rock model; (b) forecast of acoustic impedance change between seismic surveys; (c) 4D seismic difference cube to which the reservoir simulation was matched (Bentley and Hartung 2001). (Redrawn from Bentley and Hartung 2001, © EAGE, reproduced with kind permission of EAGE Publications B.V., The Netherlands)

If input to geophysical analysis is the key issue, the focus of the model design shifts to the properties relevant to geophysical modelling, notably models of velocity and density changes. There is, in this case, no need to pursue the intricacies of high resolution permeability architecture, and simpler (coarser) model designs may therefore be appropriate.
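The quantity compared in Fig. 1.6 is acoustic impedance, AI = ρ·Vp. A minimal sketch of a synthetic 4D impedance change follows; all rock-physics numbers are invented, and a real study would use a calibrated rock-physics model (e.g. Gassmann fluid substitution) rather than hand-set velocities and densities:

```python
import numpy as np

def acoustic_impedance(density: np.ndarray, vp: np.ndarray) -> np.ndarray:
    """Acoustic impedance AI = density * P-wave velocity."""
    return density * vp

# One model column, base vs monitor survey (invented values): water
# replacing gas slightly stiffens and densifies the reservoir interval.
rho_base = np.array([2.30, 2.10, 2.12, 2.35])     # g/cc per layer
vp_base = np.array([3400., 2900., 2950., 3500.])  # m/s per layer
rho_mon = np.array([2.30, 2.18, 2.20, 2.35])
vp_mon = np.array([3400., 3050., 3100., 3500.])

ai_base = acoustic_impedance(rho_base, vp_base)
ai_mon = acoustic_impedance(rho_mon, vp_mon)
pct_change = 100.0 * (ai_mon - ai_base) / ai_base

for i, dp in enumerate(pct_change):
    print(f"layer {i}: dAI = {dp:+5.1f} %")
# Layers with a few percent AI increase are where a 4D difference anomaly
# would be predicted; comparing this synthetic against the observed 4D cube
# is the basis of the history-match loop described above.
```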
1.7 Models for IOR/EOR

Efforts to extract maximum possible volumes from oil and gas reservoirs usually fall under the banner of Improved Oil Recovery (IOR) or Enhanced Oil Recovery (EOR). IOR tends to include all options including novel well design solutions, use of time-lapse seismic and secondary or tertiary flooding methods (water-based or gas-based injection strategies), while EOR generally implies tertiary flooding methods, i.e. something more advanced than primary depletion or secondary waterflood. CO2 flooding and Water Alternating Gas (WAG) injection schemes are typical EOR methods. We will use IOR to encompass all the options.

We started by arguing that there is little value in ‘fit-for-all-purposes’ detailed full-field models. However, IOR schemes generally require very detailed models to give very accurate answers, such as “exactly how much more oil will I recover if I start a gas injection scheme?” This requires detail, but not necessarily at a full-field scale. Many IOR solutions are best solved using detailed sector or near-well models, with relatively simple and coarse full-field grids to handle the reservoir management.

Figure 1.7 shows an example IOR model (Brandsæter et al. 2001). Gas injection was simulated in a high-resolution sector model with fine layering (metre-thick cells) and various fault scenarios for a gas condensate field with difficult fluid phase behaviour. The insights from this IOR sector model were then used to constrain the coarse-grid full-field reservoir management model.
1.8 Models for Storage

The growing interest in CO2 storage as a means of controlling greenhouse gas emissions brings a new challenge for reservoir modelling. Here there is a need for both initial scoping models (for capacity assessment) and for more detailed models to understand injection strategies and to assess long-term storage integrity. Some of the issues are similar – find the good permeability zones, identify important flow barriers and pressure compartments – but other issues are rather different, such as understanding formation response to elevated pressures and geochemical reactions due to CO2 dissolved in brine. CO2 is also normally compressed into the liquid or dense phase to be stored at depths of c. 1–3 km, so that understanding fluid behaviour is also an important factor. CO2 storage generally requires the assessment of quite large aquifer/reservoir volumes and the caprock system – presenting significant challenges for grid resolution and the level of detail required.

An example geological model for CO2 storage is shown in Fig. 1.8 from the In Salah CO2 injection project in Algeria (Ringrose et al. 2011). Here CO2, removed from several CO2-rich gas fields, has been stored in the down-flank aquifer of a producing gas field. Injection wells were placed on the basis of a seismic porosity inversion, and analysis of seismic and well data was used to monitor the injection performance and verify the integrity of the storage site. Geological models at a range of scales were required, from near-wellbore models of flow behaviour to large-scale models of the geomechanical response.

1.9 The Fit-for-Purpose Model

Given the variety of models described above, we argue that it is best to abandon the notion of a single, all-knowing, all-purpose, full-field model, and replace this with the idea of flexible, faster models based on thoughtful model design, tailored to answer specific questions at hand. Such models have a short shelf life and are built with specific ends in mind, i.e. there is a clear model purpose. The design of these models is informed by that purpose, as the contrast between the models illustrated in this chapter has shown.

With a fit-for-purpose mindset, the long-term handover items between geoscientists are not a set of 3D property models, but the underlying building blocks from which those models were created, notably the reservoir database (which should remain updated and ‘clean’) and the reservoir concept, which should be clear and explicit, to the point that it can be sketched.

It is also often practical to hand over some aspects of the model build, such as a fault model if the software in use allows this to be updated easily, or workflows and macros, if these can be understood and edited readily. The pre-existing model outputs (property models, rock models, volume summaries, etc.) are best archived.

The rest of this book develops this theme in more detail – how to achieve a design which addresses the model purpose whilst representing the essential features of the geological architecture (Fig. 1.9).

When setting about a reservoir modelling project, an overall workflow is required and this should be decided up-front before significant modelling effort is expended. There is no ‘correct’ workflow, because the actual steps to be taken are an output of the fit-for-purpose design. However, it may be useful to refer to a general workflow (Fig. 1.10) which represents the main steps outlined in this book (see also Cannon 2018 for a description of the standard workflow).
Outcrop view and model representation of the Urgonian Limestone at the Gorges de la Nesque, Provence (hot colours = high permeability)
Fig. 2.2 Capturing the reservoir concept for a tidal reservoir outcrop in the Lajas Formation in Patagonia (top image) in terms of an analogue depositional environment from the Hugli delta in Bangladesh, or a generic block diagram sketch. (Block diagram adapted from McIlroy et al. 2005)
Fig. 2.3 Capturing the reservoir concept in a simple sketch showing the architectural arrangement of distinct reservoir
elements observed in core from a Triassic fluvial reservoir
geometry) and stratigraphic inputs (to define internal layering).

The main point we wish to consider here is: what are the structural and stratigraphic issues that a modeller should be aware of when thinking through a model design? These are discussed below.

2.3.1 Structural Data

Building a fault model tends to be one of the more time-consuming and manual steps in a modelling workflow, and is therefore commonly done with each new generation of seismic interpretation. In the absence of new seismic, a fault model may be passed on between users and adopted simply to avoid the inefficiency of repeating the manual fault-building.

Such an inherited fault framework therefore requires quality control. The principal question is whether the fault model reflects the seismic interpretation only or whether it has been adjusted with reference to a conceptual structural interpretation. We recommend the latter, as a direct expression of a seismic interpretation will tend to be a conservative representation of the fault architecture – it is limited to the resolution of the data.

Facets of an incomplete structural interpretation (Fig. 2.4) are:

• Fault networks tend to be incomplete, e.g. faults may be missing in areas of poor seismic quality.
• Faults may not be joined (under-linked networks) due to lack of sufficient seismic resolution in areas of fault intersections.
• Horizon interpretations may stop short of faults due to poorer seismic resolution around the fault zone.
• Horizon interpretations may be extended down fault planes or unconformities, i.e. the fault or unconformity is not identified independently from each horizon, or not identified at all.
• Faults may be interpreted on seismic noise (artefacts).

Fig. 2.4 Incomplete structural interpretations, inappropriate for modelling. Highlighted areas pick out well-surface misties, disconnections in the fault network and unlikely surface geometries around faults, presumably interpreted or gridded without reference to the fault planes
Although models made from such ‘raw’ seismic interpretations are honest reflections of that data, the structural representations are incomplete and a structural interpretation should be overlain on the seismic outputs as part of the model design. To achieve this, a workflow similar to that shown in Fig. 2.5 is recommended.

Rather than start with a gridded framework constructed directly from seismic interpretation, the structural build should start with the raw, depth-converted seismic picks and the fault sticks. This is preferable to starting with horizon grids, as these will have been gridded without access to the final 3D fault network. Working with pre-gridded surfaces means the starting inputs are smoothed, not only within-surface but more importantly around faults, the latter tending to have systematically reduced fault displacements.

A more rigorous structural model workflow is to:

1. Determine the structural concept – are faults expected to die out laterally or to link? Are en echelon faults separated by relay ramps? Are there small, possibly sub-seismic connecting faults? Make a sketch of the desired fault architecture to clarify.
2. Input the fault sticks and grid them as fault planes (Fig. 2.5a).
3. Link faults into a network consistent with the concept (1, above; also Fig. 2.5b).
4. Import depth-converted horizon picks as points and remove spurious points, e.g. those erroneously picked along fault planes rather than stratigraphic surfaces (Fig. 2.5c), to avoid the structural errors shown in Fig. 2.4. A minimal sketch of this filtering step is given after the figure caption below.
5. Edit the fault network to ensure optimal positioning relative to the raw picks; this may be an iterative process with the geophysicist, particularly if potentially spurious picks are identified.
6. Grid surfaces against the edited fault network (Fig. 2.5d).
7. Iterate steps 2–6 to capture alternative potential fault networks and the associated structural uncertainties.
Fig. 2.5 A structural build based on fault sticks from seismic (a), converted into a linked fault system (b), integrated with depth-converted horizon picks (c) to yield a conceptually acceptable structural framework which honours all inputs (d). The workflow can equally well be followed using time data, then converted to depth using a 3D velocity model. The key feature of this workflow is the avoidance of intermediate surface gridding steps which are made independently of the final interpreted fault network. Example from the Douglas Field, East Irish Sea. (Bentley and Elliott 2008)
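Step 4 of the workflow – removing horizon picks erroneously interpreted along a fault plane – can be pictured as a simple distance test. The sketch below is schematic (the plane geometry and the 10 m threshold are invented; commercial packages handle this interactively):

```python
import numpy as np

# Depth-converted horizon picks as (x, y, z) points - invented coordinates.
picks = np.array([[100., 200., 2410.],
                  [150., 210., 2412.],
                  [198., 205., 2450.],   # suspicious: sits close to the fault
                  [260., 215., 2418.]])

# Local planar approximation of a gridded fault surface: point + unit normal.
fault_point = np.array([200., 200., 2440.])
fault_normal = np.array([0.9, 0.0, 0.436])
fault_normal = fault_normal / np.linalg.norm(fault_normal)

def distance_to_plane(points: np.ndarray, p0: np.ndarray, n: np.ndarray):
    """Perpendicular distance of each point to the plane through p0 with normal n."""
    return np.abs((points - p0) @ n)

dist = distance_to_plane(picks, fault_point, fault_normal)
keep = dist > 10.0   # picks within 10 m of the fault plane are flagged as spurious
print("distances to fault plane (m):", np.round(dist, 1))
print("picks kept for surface gridding:\n", picks[keep])
```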
2.3.2 Stratigraphic Data

There are two main considerations in the selection of stratigraphic inputs to the geological framework model: correlation and hierarchy.

2.3.2.1 Correlation
In the subsurface, correlation usually begins with markers picked from well data – well picks. Important information also comes from correlation surfaces picked from seismic data. Numerous correlation picks may have been defined in the interpretation of well data and these picks may have their origins in lithological, biostratigraphical or chronostratigraphical correlations – all of these being influenced directly or indirectly by sequence stratigraphy (see for example Van Wagoner et al. 1990; Van Wagoner and Bertram 1995). If multiple stratigraphic correlations are available these may produce surfaces which spatially cross, such as timelines crossing lithostratigraphic correlations. Not all these surfaces are needed in reservoir modelling. A choice is therefore required and, as with the structural framework, the selection of surfaces should be made with reference to the conceptual sketch, which is in turn driven by the model purpose.

For models built for exploration purposes or for in-place resource calculations only, correlation lines which track the distribution of reservoir properties are ideal, as the geostatistical simulation of properties will follow the 3D grid stratigraphy, which is in turn influenced by the zone correlation. For static models which are to go forward to dynamic modelling, the best correlation lines for modelling are those which most closely reflect paths of fluid-flow during production or injection.

The choice of correlation surfaces hugely influences the resulting model architecture, as illustrated in Fig. 2.6. The example is for a shoreface system, broadly cleaning upwards. A ‘geological’ solution would identify time-lines and correlate genetic parasequences (Fig. 2.6a). In this case, for the water injection scenario illustrated, the flow in the highly permeable upper part of the sequence will simply follow the high permeability pathway, which may or may not cross timelines. In the lower part of the sequence, however, the low kv/kh ratio associated with lower and middle shorefaces will deflect flow lines from the horizontal, will result in poorer macroscopic sweep and will predict unswept intervals in the left-hand well. A lithostratigraphic interpretation (Fig. 2.6b) will predict an uneven floodfront, but with a different production profile and a prediction that all zones will ultimately be swept. The correct model representation is the one which captures how the fluid will ‘see’ the reservoir, and this depends on the permeability architecture, and an idea of how sensitive the reservoir fluids are to heterogeneity (guidance on this is given in Sect. 2.4.4).

An excellent example of the same issue from a lacustrine reservoir system is given by Ainsworth et al. (1999).
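The kv/kh effect invoked above can be made concrete with a worked average – a hedged sketch using invented layer values. For flow along continuous layers the effective horizontal permeability tends toward the thickness-weighted arithmetic mean, while flow across the layering tends toward the harmonic mean, so interbedded lower–middle shoreface intervals acquire a low kv/kh ratio:

```python
import numpy as np

# Lower-middle shoreface interval: alternating sands and heterolithics
# (thicknesses and permeabilities invented for illustration).
thickness = np.array([0.5, 0.2, 0.6, 0.3, 0.4])   # m
perm = np.array([120.0, 0.5, 150.0, 0.8, 100.0])  # mD

kh = np.average(perm, weights=thickness)           # arithmetic (bed-parallel flow)
kv = thickness.sum() / np.sum(thickness / perm)    # harmonic (bed-normal flow)

print(f"kh ~ {kh:.1f} mD, kv ~ {kv:.2f} mD, kv/kh ~ {kv / kh:.4f}")
# A kv/kh far below unity deflects flow lines from the horizontal and drives
# the poorer macroscopic sweep predicted by the chronostratigraphic
# correlation case (Fig. 2.6a).
```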
2.3.2.2 Use of Hierarchy
Different correlation schemes have different influences on the issue of hierarchy, as the stratigraphy of most reservoir systems is inherently hierarchical (Campbell 1967) and for reservoir modelling this impacts on the representation of length scales, an issue which is central to rock and property geostatistics. For a sequence stratigraphic correlation scheme, a low-stand systems tract may have a length-scale of tens of kilometres and contain stacked parasequences with a length-scale of kilometres (e.g. Fig. 2.6a). These parasequences in turn act as the container for individual reservoir elements with dimensions of tens to hundreds of metres. Lower levels of the hierarchy are always associated with shorter length scales.
Fig. 2.6 Alternative (a) chronostratigraphic and (b) lithostratigraphic correlations of the same sand observations in three wells; the choice of correlation lines directly influences flow lines in a dynamic model simulation. Note: the chronostratigraphic correlation invokes an additional hierarchical level in the stratigraphy
The reservoir model should aim to capture the levels in the stratigraphic hierarchy which influence the spatial distribution of significant heterogeneities (determining ‘significance’ will be discussed in Sect. 2.4.4). Bounding surfaces within the hierarchy (e.g. flooding surfaces in a marginal marine interval) may or may not act as flow barriers – so they may represent important model elements in themselves or they may merely control the distribution of model elements within that hierarchy. This applies to structural model elements as well as the more familiar sedimentological elements, as features such as fracture density can be controlled by mechanical stratigraphy – implicitly related to the stratigraphic hierarchy.

Particular care is required if a sequence stratigraphic approach has been used to correlate. The method yields a framework of timelines, often based on picking the most shaley parts of non-reservoir intervals, so a rock model for an interval between two flooding surfaces should contain a shaley portion at both the top and the base of the interval. The probabilistic aspects of the subsequent geostatistical modelling, however, can easily degrade the correlatable nature of the flooding surfaces, with inter-well shales becoming scattered incorrectly throughout the zone. This particular case can be resolved by applying deterministic trends; this will be pursued further in Sect. 2.7.4.

In general, hierarchical order is intrinsic to reservoirs and some degree of hierarchy is usually captured in standard software workflows, but not necessarily all.
The modeller is required to work out if the default hierarchy is sufficient to capture the required concept. If not, the workflow should be modified, most commonly by applying logical operations. An example of this is illustrated in Fig. 2.7, from a reservoir model in which the first two hierarchical levels were captured by the default software workflow: tying layering to seismic horizons (first level), then infilled by conformant sub-seismic stratigraphy (second level, Fig. 2.7a). An additional hierarchical level was required because an important permeability heterogeneity existed between lithofacies types within a particular model element (the main channels). The chosen solution was to build the channel model using channel objects and to create a separate, in this case probabilistic, model which contained the information about the distribution of the two lithofacies elements. The two rock models were then combined using a logical property model operation, which imposed the texture of the fine-scale lithofacies heterogeneity, but only within the relevant channels. Effectively this created a third hierarchical level within the model.

One way or another hierarchy can be represented, but only rarely by using default model workflows.
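The combination step described above can be pictured with simple array logic – a schematic sketch in which two random stand-in models take the place of the object-based and pixel-based models of Fig. 2.7. The fine-scale lithofacies texture is imposed only where the channel model says ‘channel’, creating the third hierarchical level:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
nz, ny, nx = 20, 50, 50   # toy grid dimensions

# Level 2: channel object model (1 = channel, 0 = background) - stand-in.
channels = (rng.random((nz, ny, nx)) < 0.3).astype(int)

# Level 3: probabilistic lithofacies model of the channel fill
# (2 = cross-bedded sand, 3 = conglomerate, 4 = mud drape) - stand-in.
litho = rng.choice([2, 3, 4], size=(nz, ny, nx), p=[0.6, 0.25, 0.15])

# Logical combination: impose the fine-scale texture only inside channels.
combined = np.where(channels == 1, litho, 0)

codes, counts = np.unique(combined, return_counts=True)
for c, n in zip(codes, counts):
    name = {0: "background", 2: "x-bedded sst", 3: "conglomerate", 4: "mud drape"}[c]
    print(f"facies {c} ({name}): {100 * n / combined.size:.1f} % of cells")
```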
2.4 Model Elements

Having established a structural/stratigraphic model framework, we can now return to the model concept and consider how to fill the model.

Typical lithofacies elements may be coarse sandstones, mudstones or grainstones, and will generally be defined from core and/or log data (e.g. Fig. 2.8).

2.4.3.2 Genetic Elements
In reservoir modelling, genetic elements are components of a sedimentary sequence which are related by a depositional process. These include the rock bodies which typical modelling packages are most readily designed to incorporate, such as channels, sheet sands or heterolithics. These usually comprise several lithofacies; for example, a fluvial channel might include conglomeratic, cross-bedded sandstone and mudstone lithofacies. Figure 2.9 shows an example for a shoreface in which genetic elements are identified from core and log data.

Fig. 2.9 Genetic units: lithofacies types grouped into genetic depositional elements – channel, upper shoreface (USF) and lower shoreface (LSF). (Image courtesy of Simon Smith)
2.4.3.3 Stratigraphic Elements
For models based on a sequence stratigraphic framework, the fine-scale components of the scheme may be the predominant model elements. These may be parasequence sets organised within a larger-scale sequence-based framework, e.g. Fig. 2.10.

2.4.3.4 Diagenetic Elements
Diagenetic elements commonly overprint lithofacies types, may cross major stratigraphic boundaries and are often the predominant feature of carbonate reservoir models. Typical diagenetic elements could be zones of meteoric flushing, dolomitisation or de-dolomitisation (Fig. 2.11).

2.4.3.5 Structural Elements
Assuming a definition of model elements as three-dimensional features, structural model elements should be considered when the properties of a reservoir volume are dominated by structural rather than sedimentological or stratigraphic aspects. Fault damage zones are important volumetric structural elements (e.g. Fig. 2.12), as are mechanical layers (strata-bound fracture sets) with properties driven by small-scale jointing or cementation. These are discussed further in Sect. 6.7.

2.4.3.6 Exotic Elements
The list of potential model elements matches the diversity of reservoir types. Reservoirs in volcanic rocks are a good example (Fig. 2.13), in which the key model elements may be zones of differential cooling and hence differential fracture density.
Fig. 2.11 Diagenetic elements in a carbonate build-up, where reservoir property contrasts are driven by differential development of dolomitisation

Fig. 2.12 (sketch) Fault-bounded zones (Zones 1 and 2) penetrated by wells A, B and C, with the OWC, fault damage zones and minor faulting (possibly open) in poorer sands
The important point about using the term ‘model element’ is to stimulate broad thinking about the model concept, a thought process which runs across the reservoir geological sub-disciplines (stratigraphy, sedimentology, structural geology, even volcanology). For avoidance of doubt, the main difference between the model framework and the model elements is that 2D features are used to define the model framework (faults, unconformities, sequence boundaries, simple bounding surfaces) whereas it is 3D model elements which fill the volumes within that framework.

Having defined the framework and identified the candidate elements, the next question is how much detail to carry explicitly into the modelling process. Everything that can be identified need not be modelled.

2.4.4 How Much Heterogeneity to Include? ‘Flora’s Rule’

The answer to this fundamental question depends on the interaction of geology and flow physics. The key criteria for distinguishing which model elements are required for the model build are:

1. The identification of potential elements from a large number of ‘candidates’;
Fig. 2.14 Six candidate model elements and one non-reservoir element identified from core and log data – channel fill, gravel lag, floodplain heterolithic, middle shoreface sand, lower shoreface heterolithics and mudstone – clustered into three (a shoreface trend, a channel trend and a heterolithic cluster) on a k/phi cross plot of permeability (0.1–1000 mD, log scale) against porosity (0–40%). How do we decide if it is acceptable to lump these together?
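One quick, hedged way into the lumping question posed by Fig. 2.14 is to compare the candidate elements on the log-permeability scale: elements whose statistics overlap within roughly a decade may be candidates for grouping, subject to the flow-physics criteria discussed below. The sample values here are invented for illustration:

```python
import numpy as np

# Candidate elements with sampled permeabilities (mD) - invented values.
samples = {
    "channel fill": [300, 520, 800, 410],
    "gravel lag": [900, 1500, 700],
    "middle shoreface sand": [80, 150, 220],
    "floodplain heterolithic": [2.0, 8.0, 4.0],
    "lower shoreface heterolithics": [1.0, 6.0, 12.0],
    "mudstone": [0.01, 0.05],
}

# Compare elements on the log10(k) scale used in the k/phi cross plot.
for name, k in samples.items():
    logk = np.log10(k)
    print(f"{name:30s} mean log10(k) = {logk.mean():5.2f} "
          f"(spread {logk.max() - logk.min():.2f} decades)")
# Elements whose mean log10(k) sits within ~1 decade of each other (e.g. the
# two heterolithic elements) are candidates for lumping into one cluster;
# mudstone sits several decades lower and stays a separate (non-)reservoir
# element. Flow physics, not statistics alone, makes the final call.
```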
The outcome of this line of argument is that some reservoirs may not require complex 3D reservoir models at all (Fig. 2.16). Gas-charged reservoirs require high degrees of permeability heterogeneity in order to justify a complex modelling exercise – they often deplete as simple tanks. Fault compartments and active aquifers may stimulate heterogeneous flow production in gas fields, but even in this case the model required to capture key fault blocks can be quite coarse. By contrast, heavy oil fields under water or steam injection or CO2 storage schemes are highly susceptible to minor heterogeneities, and benefit from detailed modelling. The difficulty here lies in assessing the architecture of these heterogeneities, which can often be on a very fine, poorly-sampled scale.

The decision as to which candidate elements to include in a model is therefore not primarily a geological one. Geological and petrophysical analyses are required to define the degree of permeability variation and to determine the spatial architecture, but it is the fluid type and the production or injection process which determine the level of geological detail needed in the reservoir model and hence the selection of ‘model elements’.

Beyond the considerations above, there remains the question of whether a significant heterogeneity should be modelled explicitly or captured implicitly in a well-selected average. This comes down to length scales, specifically the difference between the length scale of the significant uncertainty and the length scale of the question being addressed in the modelling exercise (Fig. 2.17).
Heterogeneity length scales are an established concept and will be discussed fully in Chap. 4 in the context of model scales and the REV (the Representative Elementary Volume). Here it suffices to say that the length scale is an assessment of the distance over which the heterogeneity varies: systems with high point-to-point variations over short distances are said to have short length scales; variations over long distances are said to have long length scales. The concept of questions having scales may be less commonly considered, but holds true for most subsurface decisions; every model has a purpose, the purpose relates to a question which is being asked, and that question generally has a scale. For infill drilling, the scale of the question is the proposed well spacing, typically a few hundred metres and typically shorter onshore than offshore. For gas depletion the scale of the question is the size of the reservoir compartment being blown down, which may be the full field. For CO2 storage schemes it is the size of the storage complex. For well-test interpretations the scale is much smaller – typically the radius of investigation of the well test.

The reason the comparison is important is that the subsurface has a dispersive effect on fluid flow, which is also rate-dependent. This is illustrated in Fig. 2.18, which shows a simple cross-sectional flow experiment involving water injecting into an oil reservoir. The homogeneous flow model (a) shows simple piston-like displacement, distorted only by the effect of gravity. The heterogeneous flow model (b) clearly shows the sensitivity to the underlying permeability heterogeneity, but the shape of the flood front is not significantly impacted as the length of the heterogeneity is small compared to the length scale of the question, which is the well spacing (in this experiment this is the width of the model). The model is literally ‘so heterogeneous, it is homogeneous’. Only as the length scale of the heterogeneity (H) approaches the length scale of the development question (Q) does the floodfront start to deviate from the simple homogeneous case (the lower two sets of figures).

In all cases in Fig. 2.18b–d the heterogeneity is significant in terms of impacting flow.
Fig. 2.18 The impact of contrasting heterogeneity length scales vs. a fixed length scale of the development question (the well spacing): left-hand models are static permeability, right-hand models are paired water injection simulations, injecting from the left; ornament is water saturation. (a) homogeneous permeability model; (b) heterogeneous model with very short length scales (H << Q); (c) & (d) models in which the heterogeneity length scales approach that of the scale of the development question (where H ~ Q). Images produced courtesy of Rayyan Hussein and Ed Stephens
Fig. 2.19 How much detail needs to be incorporated in a static modelling exercise? Assuming the ultimate purpose is flow modelling, it depends on the magnitude and the length scale of the contrasts, the reservoir fluid and production mechanism, and the scale of the question the model is addressing. (Decision chart: Significant? No – ignore. Yes – compare the heterogeneity length scale (H) with the question length scale (Q): explicitly model where H ≈ Q; implicitly average where H << Q)
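The logic of Fig. 2.19 is compact enough to state directly as a function – a literal transcription of the decision chart, with an arbitrary factor-of-ten cutoff standing in for the ‘H << Q’ judgement:

```python
def heterogeneity_decision(significant: bool, h: float, q: float) -> str:
    """Fig. 2.19 logic: include, average, or ignore a heterogeneity.

    h: heterogeneity length scale; q: length scale of the question,
    e.g. well spacing for infill drilling. The 10x cutoff is illustrative.
    """
    if not significant:
        return "ignore"
    if h < q / 10.0:   # H << Q: 'so heterogeneous it is homogeneous'
        return "average implicitly (well-chosen effective property)"
    return "model explicitly (H approaches Q)"

# Example: 50 m shale drapes vs. a 700 m infill well spacing.
print(heterogeneity_decision(significant=True, h=50.0, q=700.0))
# Example: 400 m channel belts vs. the same question scale.
print(heterogeneity_decision(significant=True, h=400.0, q=700.0))
```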
Fig. 2.20 A naïve example of an expectation from geostatistical forecasting – the final mapped result simply illustrates
where the wells are
Fig. 2.21 Different rock body types as an illustration of the deterministic/probabilistic spectrum
1. Correlatable bodies (Fig. 2.21, left). These are largely determined by correlation choices between wells, e.g. sand observations are made in two wells and interpreted as occurrences of the same extensive sand unit and are correlated. This is a deterministic choice, not an outcome of a probabilistic algorithm. The resulting body is not a 100% determined ‘fact’, however, as the interpretation of continuity between the wells is just that – an interpretation. At a distance from the wells, the sand body has a probabilistic component in its interpretation, if not its algorithmic emplacement.
2. Non-correlated bodies (Fig. 2.21, centre). These are bodies encountered in one well only. At the well, their presence is determined. At increasing distances from the well, the location of the sand body is progressively less well determined, and is eventually controlled almost solely by the roll of the dice. These bodies are each partly deterministic and partly probabilistic.
3. Probabilistic bodies (Fig. 2.21, right). These are the bodies not encountered by wells, the position of which will be chosen by a probabilistic algorithm. Even these, however, are not 100% probabilistic, as their appearance in the model is not a completely random surprise. Deterministic constraints will have been placed on the algorithm to make sure bodies are not unrealistically large or small, and are appropriately numerous.

So, if everything is a mixture of determinism and probability, what’s the problem? The issue is that although any reservoir model is rightfully a blend of deterministic and probabilistic processes, the richness of the blend is a choice of the user, so this is an issue of model design. Some models are highly deterministic, some are highly probabilistic, and which end of the spectrum a model sits at influences the uses to which it can be put. A single, highly probabilistic model is not suitable for planning one well (probabilistic rock bodies will probably not be encountered as prognosed). A highly deterministic model may be inappropriate, however, for simulations of reservoirs with small rock bodies and little well data. Furthermore, different modellers might approach the same reservoir with more deterministic or more probabilistic mindsets.

The balance of probability and determinism in a model is therefore a subtle issue, and needs to be understood and controlled as part of the model design. We suggest here that greater happiness is generally to be found in models built with strong deterministic control, as the deterministic inputs are the direct carrier of the reservoir concept.
2.5.2 Different Generic Approaches

To emphasise the importance of user choice in the approach to determinism and probability, two approaches to model design are summarised graphically. The first is a data-driven approach to modelling (Fig. 2.22). In this case, the model process starts with an analysis of the data, from which statistical guidelines can be drawn. These guidelines are input to a rich statistical model of the reservoir which in turn informs a geostatistical algorithm. The outcome of the algorithm is a model, from which a forecast emerges.

This is the approach which most closely resembles the default path in reservoir modelling, resulting from the linear workflow of a standard reservoir modelling software package, as outlined in Fig. 1.10.

The limit of a simple data-driven approach such as this is that there is a reliance on the rich geostatistical algorithm to generate the desired model outcome. This in turn relies on the statistical content of the underlying data set, yet for most of our reservoirs, the underlying data set is statistically insufficient. This is a critical issue and distinguishes reservoir modelling from other types of geostatistical modelling in earth sciences such as mining and soil science. In the latter cases, there is often a much richer underlying data set, which can yield clear statistical guidance for a model build. In reservoir modelling we are typically dealing with more sparse data, an exception being direct conditioning of the reservoir model by high quality 3D seismic data (e.g. Doyen 2007).

An alternative is to take a more concept-driven, top-down approach (Fig. 2.23). In this case, the modelling still starts with an analysis of the data, but the analysis is used to generate alternative conceptual models for the reservoir. The reservoir concept should honour the data at its geographic location but, as the dataset is generally statistically insufficient, the full 3D model should not be limited to it. The model build is strongly concept-driven, has a strong deterministic component, and less reliance is placed on geostatistical algorithms to produce the desired product. The final outcome is not a single forecast, but a set of forecasts based on the uncertainties associated with the underlying reservoir concepts.

The difference between the data- and concept-driven approaches described above is the expectation they place on the geostatistical algorithm in the context of data insufficiency. The result is a greater emphasis on deterministic model aspects, which therefore need some more consideration.
Fig. 2.22 The data-driven approach to reservoir modelling
Fig. 2.23 The concept-driven approach to reservoir modelling
2.5.3 Forms of Deterministic Control

The deterministic controls on a model can be seen as a toolbox of options with which to realise an architectural concept in a reservoir model. These will be discussed further in the last section of this chapter, and are introduced below.

2.5.3.1 Faulting
With the exception of some (relatively) specialist structural modelling packages, large scale structural features are strongly deterministic in a reservoir model. Thought is required as to whether the structural framework is to be geophysically or geologically led, that is, are only features resolvable on seismic to be included, or will features be included which are kinematically likely to occur in terms of rock deformation. This in itself is a model design choice, introduced in the discussion on model frameworks (Sect. 2.3), and the choice will be imposed deterministically.

2.5.3.2 Correlation and Layering
The correlation framework (Sect. 2.3) is deterministic, as is any imposed hierarchy. The probabilistic algorithms work entirely within this framework – layer boundaries are not moved in common software packages. Ultimately the flowlines in any simulation will be influenced by the fine layering scheme and this is all set deterministically.

2.5.3.3 Choice of Algorithm
There are no hard rules as to which geostatistical algorithm gives the ‘correct’ result, yet the choice of pixel-based or object-based modelling approaches will have a profound effect on the model outcome (Sect. 2.7). The best solution is the algorithm or combination of algorithms which most closely reflect the desired reservoir concept, and this is a deterministic choice.

2.5.3.4 Boundary Conditions for Probabilistic Algorithms
All algorithms work within limits, which will be given by arbitrary default values unless imposed. These limits include correlation lengths, object dimensions and statistical success criteria (Sect. 2.6). In the context of the concept-driven logic described above, these limits need to be deterministically chosen rather than a simple consequence of the spread of a statistically insufficient well data set.

2.5.3.5 Seismic Conditioning
The great hope for detailed deterministic control is exceptionally good seismic data. This hope is often forlorn, as even good quality seismic data is not generally resolved at the level of detail required for a reservoir model. All is not lost, however, and it is useful to distinguish between hard and soft conditioning.

Hard conditioning is applicable in cases where extremely high quality seismic, sufficiently resolved at the scale of interest, can be used to directly define the architecture in a reservoir model. An example of this is seismic geobodies in cases where the geobodies are believed to directly represent important model elements. Some good examples of this have emerged from deepwater clastic environments, but in many of these cases detailed investigation (or more drilling) ends up showing that reservoir pay extends sub-seismically, or that the geobody is itself a composite, heterogeneous feature.

Soft conditioning is the more generally useful approach for rock modelling, where information from seismic is used to give a general guide to the probabilistic algorithms (Fig. 2.24). In this case, the link between the input from seismic and the probabilistic algorithm may be as simple as a correlation coefficient. It is the level of the coefficient which is now the deterministic control; and the decision to use seismic as either a hard or soft conditioning tool is also a deterministic one.
model outcome (Sect. 2.7). The best solution is One way of viewing the role of seismic in
the algorithm or combination of algorithms which reservoir modelling is to adapt the frequency/
most closely reflect the desired reservoir concept, amplitude plot familiar from geophysics
and this is a deterministic choice. (Fig. 2.25). These plots are used to show the
frequency content of a seismic data set and can
2.5.3.4 Boundary Conditions be used to indicate how improved seismic acqui-
for Probabilistic Algorithms sition and processing can extend the frequency
All algorithms work within limits, which will be content towards the ends of the spectrum. Fine-
given by arbitrary default values unless imposed. scale reservoir detail sits below the resolution of
These limits include correlation lengths, object the seismic data. The low end of the frequency
spectrum – the large-scale layering – is also beyond the range of the seismic sample unless broadband technologies are in use (e.g. Soubaras and Dowle 2010). Even in the case of broadband seismic, there is a requirement to construct a low-frequency 'earth model' to support seismic inversion work.

Fig. 2.24 Deterministic model control in the form of seismic 'soft' conditioning of a rock model. Upper image: AI volume rendered into cells. Lower image: best reservoir properties (red, yellow) preferentially guided by high AI values (image courtesy of Simon Smith)

[Fig. 2.25 The frequency/amplitude plot relabelled with axes 'scale of heterogeneity' and 'content', spanning the subsurface reservoir model, the seismic data and the outcrop analogue]

The plot is a convenient backdrop for arranging the components of a reservoir model, and the frequency/amplitude axes can be alternatively labelled as 'scale of heterogeneity' and 'content'. The reservoir itself exists on all scales and is represented by the full 'rectangle of truth', which is only partially covered by seismic data. The missing areas are completed by the framework model at the low frequency end, by core- and log-scale detail at the high frequency end, and by interpretation in between these data scales.
The only full-frequency data set is a well-exposed outcropping reservoir analogue, as it is only in the field that the reservoir can be accessed on all scales. Well-facilitated excursions to outcrop analogues are thereby conveniently justified (an often neglected activity, the value of which is summarised in Howell et al. 2014).
Is all the detail necessary? Here we can refer back to Flora's Rule and the model purpose, which will inform us how much of the full spectrum is required to be modelled in any particular case. In terms of seismic conditioning, it is only in the case where the portion required for modelling exactly matches the blue area in Fig. 2.25 that we
can confidently apply hard conditioning using geobodies in the reservoir model, and this happy coincidence is rarely the case.
With the above considered, there can be some logic as to the way in which deterministic control is applied to a model, and establishing this is part of the model design process. The probabilistic aspects of the model should be clear, to the point where the modeller can state whether the design is strongly deterministic or strongly probabilistic and identify where the deterministic and probabilistic components sit.
Both components are implicitly required in any model and it is argued here that the road to happiness lies with strong deterministic control. The outcome from the probabilistic components of the model should be largely predictable, and should be a clear reflection of the input data combined with the deterministic constraints imposed on the algorithms.
Disappointment occurs if the modeller expects the probabilistic aspects of the software to take on the role of model determination, and fill in the knowledge and interpretational gaps in their own mind.
So how to approach geostatistics? The first step is to ensure we have a firm understanding of the underlying concepts behind the algorithms – just the essentials.

2.6 Essential Geostatistics

The subject is easily confounded by complex geostatistical terminology which is difficult to translate into the modelling process. Take for example this quotation from the excellent but fairly theoretical treatment of geostatistics by Isaaks and Srivastava (1989):

    in an ideal theoretical world the sill is either the stationary infinite variance of the random function or the dispersion variance of data volumes within the volume of the study area.

The problem for many of us is that we don't work in an ideal theoretical world and are unfamiliar with the concepts and terminology that are used in statistical theory. This section therefore aims to extract just those statistical concepts which are essential for an intuitive understanding of what happens in the statistical engines of reservoir modelling packages.

2.6.1 Key Geostatistical Concepts

A central concept in geostatistics which must be understood is that of variance. Variance, σ², is a measure of the average difference between individual values and the mean of the dataset they come from. It is a measure of the spread of the dataset:

σ² = Σ(x_i − μ)² / N    (2.2)
where N is the number of points in the data set, x_i are the individual values and μ is their mean.
The degree to which two data sets, x and y, vary together is measured by the correlation coefficient, ρ:

ρ = [(1/N) Σ (x_i − μ_x)(y_i − μ_y)] / (σ_x σ_y)    (2.3)

where:
N = number of points in the data set
x_i, y_i = values of points in the two data sets
μ_x, μ_y = mean values of the two data sets, and
σ_x, σ_y = standard deviations of the two data sets (the square root of the variance)

If the outcome of the above function is positive then higher values of x tend to occur with higher values of y, and the data sets are said to be 'positively correlated'. If the outcome is ρ = 1 then the relationship between x and y is a simple straight line. A negative outcome means high values of one data set correlate with low values of the other: 'negative correlation'. A zero result indicates no correlation.
Note that correlation coefficients assume the data sets are both linear. For example, two data sets which have a log-linear relationship might have a very strong relationship but still display a poor correlation coefficient. Of course, a coefficient can still be calculated if the log-normal data set, e.g. permeability, is first converted to a linear form by taking the logarithm of the data.
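To make the linearisation point concrete, the following minimal Python sketch (not from the original text; the porosity–permeability values are invented for illustration) evaluates Eq. 2.3 for a roughly log-linear data set, before and after taking logarithms of permeability:

    import numpy as np

    # Synthetic porosity (fraction) and permeability (mD) pairs - illustrative only.
    phi = np.array([0.12, 0.15, 0.18, 0.21, 0.24, 0.27])
    perm = np.array([5.0, 15.0, 60.0, 180.0, 600.0, 2000.0])  # roughly log-linear in phi

    def correlation(x, y):
        # Eq. 2.3: covariance normalised by the two standard deviations.
        n = len(x)
        cov = np.sum((x - x.mean()) * (y - y.mean())) / n
        return cov / (x.std() * y.std())  # np.std uses the population (1/N) form

    print(round(correlation(phi, perm), 3))            # weaker: the relationship is log-linear
    print(round(correlation(phi, np.log10(perm)), 3))  # close to 1.0 after linearising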
Correlation between datasets (e.g. porosity versus permeability) is typically entered into reservoir modelling packages as a value between 0 and 1, in which values of 0.7 or higher generally indicate a strong relationship. The value is commonly described as the 'dependency'.
Correlation coefficients reflect the variation of values within a dataset, but say nothing about how those values vary spatially. For reservoir modelling we need to express the spatial variation of parameters, and the central concept controlling this is the variogram.
The variogram captures the relationship between the difference in values between pairs of data points and the distance separating those points. Numerically, this is expressed as the averaged squared differences between the pairs of data in the data set, given by the empirical variogram function, which is most simply expressed as:

2γ = (1/N) Σ (z_i − z_j)²    (2.4)

where z_i and z_j are pairs of points in the dataset.
For convenience we generally use the semivariogram function:

γ = (1/2N) Σ (z_i − z_j)²    (2.5)

The semivariogram function can be calculated for all pairs of points in a data set, whether or not they are regularly spaced, and can therefore be used to describe the relationship between data points from, for example, irregularly scattered wells.
The results of variogram calculations can be represented graphically as a scatterplot (Fig. 2.26a) to establish the relationship between the separation distance (known as the 'lag') and the average γ value for pairs of points which are that distance apart. The data is grouped into distance bins to do the averaging; hence only one value appears for any given lag in Fig. 2.26b.
A more formal definition of semi-variance, incorporating the separation distance term, is given by:

γ(h) = ½ E{[Z(x + h) − Z(x)]²}    (2.6)

where:
E = the expectation (or mean)
Z(x) = the value at a point in space
Z(x + h) = the value at a separation distance, h (the lag)

Generally, γ increases as a function of separation distance. Where there is some relationship between the values in a spatial dataset, γ shows smaller values for points which are closer together in space, and which are therefore more likely to be similar in value.
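As an illustration of how the empirical semivariogram of Eq. 2.5 can be computed and binned by lag, consider the following sketch (a minimal numpy-only implementation; the scattered 'well' locations and values are synthetic):

    import numpy as np

    def semivariogram(coords, values, bin_width, max_lag):
        # Empirical semivariogram (Eq. 2.5), binned by separation distance (Fig. 2.26b).
        # coords: (N, 2) point locations; values: (N,) property values.
        n = len(values)
        i, j = np.triu_indices(n, k=1)               # all unique pairs of points
        h = np.linalg.norm(coords[i] - coords[j], axis=1)
        gamma_pairs = 0.5 * (values[i] - values[j]) ** 2

        edges = np.arange(0.0, max_lag + bin_width, bin_width)
        lags, gammas = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (h >= lo) & (h < hi)
            if mask.any():                           # only report bins containing pairs
                lags.append(0.5 * (lo + hi))
                gammas.append(gamma_pairs[mask].mean())
        return np.array(lags), np.array(gammas)

    # Example with irregularly scattered 'wells'
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 1000, size=(50, 2))      # metres
    values = np.sin(coords[:, 0] / 300) + 0.1 * rng.standard_normal(50)
    lags, gammas = semivariogram(coords, values, bin_width=100, max_lag=800)
    print(np.round(gammas, 3))                       # gamma typically increases with lag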
Fig. 2.26 The variogram: a systematic change in variance between data points with increasing distance between those points; left: the raw γ values plotted versus the lag; right: the data binned, averaged and a line-fit applied
Fig. 2.28 Contrasting reservoir models built from a common data set, but modelled with different ranges: short range to
the left (short length scale heterogeneity); long range to the right (long length scale heterogeneity)
for the moment the connection to make is the link between the variogram range and heterogeneity length scales. Short ranges correspond to short length-scale heterogeneities, long ranges to long length-scale heterogeneities.
There are several standard functions which can be given to semivariogram models, and which appear as options in reservoir modelling software packages. Six types are illustrated in Fig. 2.29. Three are options present in most modelling packages (spherical, exponential and gaussian); the spherical model is probably the most widely used, and is appropriate for most purposes.
The power law semivariogram describes data sets which continue to get more dissimilar with distance. A simple example would be depth points on a tilted surface, or a vertical variogram through a data set with a porosity/depth trend. The power law semivariogram has no sill.
It should also be appreciated that, in general, sedimentary rock systems often display a 'hole effect' when data is analysed vertically (Fig. 2.29, lower right). This is a feature of any rock system that shows cyclicity (Jensen et al. 1995), where the γ value decreases as the repeating bedform is encountered. In practice this is generally not required for the vertical definition of layers in a reservoir model, as the layers are usually created deterministically from log data, or introduced using vertical trends (Sect. 2.7).
A final important feature of variograms is that they can vary with direction. The spatial variation represented by the variogram model can be orientated on any geographic axis, N-S, E-W, etc. This has an important application to modelling in sedimentary rocks, where a trend can be estimated based on the depositional environment. For example, reservoir properties may be more strongly correlated along a channel direction, or along the strike of a shoreface. This directional control on spatial correlation leads to anisotropic variograms. Anisotropy is imposed on the reservoir model by indicating the direction of preferred continuity and the strength of the contrast between the maximum and minimum continuity directions, usually represented as an oriented ellipse.
Anisotropic correlation can also occur in the vertical plane, controlled by features such as sedimentary bedding. In most reservoir systems vertical length scales are shorter than horizontal length scales because sedimentary systems tend to be layered. It is generally much easier to calculate vertical variograms directly from subsurface data, because continuous data come from logs in sub-vertical wells. Vertical length scales in layered systems are short, and vertical variograms therefore tend to have short ranges, often less than that set by default in software packages.
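The standard model functions are simple closed forms. The sketch below is an illustration, not a vendor implementation; note that the practical-range convention for the exponential and gaussian models varies between packages:

    import numpy as np

    def spherical(h, sill, a):
        # Spherical model: reaches the sill exactly at the range a.
        h = np.minimum(np.asarray(h, dtype=float), a)
        return sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

    def exponential(h, sill, a):
        # Exponential model: approaches the sill asymptotically (practical range ~a).
        return sill * (1.0 - np.exp(-3.0 * np.asarray(h) / a))

    def gaussian(h, sill, a):
        # Gaussian model: very smooth behaviour near the origin.
        return sill * (1.0 - np.exp(-3.0 * (np.asarray(h) / a) ** 2))

    def power_law(h, slope, exponent=1.5):
        # Power-law model: no sill - keeps rising, as for a porosity/depth trend.
        return slope * np.asarray(h) ** exponent

    h = np.array([0.0, 200.0, 400.0, 800.0, 1600.0])
    print(spherical(h, sill=1.0, a=800.0))  # flattens at the sill beyond h = 800
    print(power_law(h, slope=1e-4))         # unbounded increase with lag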
[Fig. 2.29 Standard semivariogram model types, each plotted as γ versus distance (lag)]
Horizontal variograms are likely to have much longer ranges, and may not reach the sill at the scale of the reservoir model. This is illustrated conceptually in Fig. 2.30, based on work by Deutsch (2002).
As vertical semivariograms can be easier to estimate, one approach for geostatistical analysis is to measure the vertical correlation (from well data) and then estimate the likely horizontal semivariogram using a vertical/horizontal anisotropy ratio based on a general knowledge of sedimentary systems. Considerable care should be taken if this is attempted, particularly to ensure that the vertical semivariograms are sampled within distinct (deterministic) zones. Deutsch has estimated ranges of typical anisotropy ratios by sedimentary environment (Table 2.1) and these offer a general guideline.
The figures in Table 2.1 summarise important and useful contrasts but also reflect the residual uncertainty in predicting the range of a horizontal semivariogram model, which is the key parameter required in any model build using variograms. Attempts can be made to derive horizontal semivariograms directly from any data set, but the dataset is only a sample, most likely an imperfect one. This is a particular issue for well data, especially sparse offshore well data, which is generally statistically insufficient for the model purpose. For many datasets, the horizontal variogram is difficult to estimate, and the modeller is therefore often required to choose a variogram model 'believed' to be representative of the system being modelled. This is the intuitive leap, and it is an intuitive application of geostatistics that we advocate.
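As a worked illustration of the anisotropy-ratio approach (the numbers below are placeholders, not values from Table 2.1):

    # Estimating a horizontal variogram range from a measured vertical range using
    # a horizontal:vertical anisotropy ratio (illustrative only; see Table 2.1 for
    # ranges typical of each depositional environment).
    vertical_range_m = 2.0      # e.g. fitted to log data within one zone
    anisotropy_ratio = 50.0     # horizontal:vertical, environment-dependent
    horizontal_range_m = vertical_range_m * anisotropy_ratio
    print(horizontal_range_m)   # 100.0 m - a starting point, to be sense-checked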
Fig. 2.32 Figure 2.31 converted to greyscale (left), and pixelated (right)
[Figure: semivariograms (γ versus lag) computed along individual pixel rows 13, 23, 33 and 43 of the pixelated image]
The example illustrates the difference between the variogram (geostatistical variance) and reservoir heterogeneity (our concept of the variation). In particular, the example highlights the role of averaging in the construction of variograms. Individual transects over the image vary widely, and there are many parts of the sand system which are not well represented by the final averaged variogram. The variogram is in a sense quite crude, and the application of variograms to either rock or property modelling assumes it is reasonable to convert actual spatial variation to a representative average and then apply this average over a wide area. Using sparse well data as a starting point this is a big assumption, and its validity depends on the architectural concept we have for the reservoir. The concept is not a statistical measure; hence the need to make an intuitive connection between the reservoir concept and the geostatistical tools we use to generate reservoir heterogeneity.
[Figure: semivariograms (γ versus lag) along pixel columns 8, 28 and 54, and the averaged variogram of the image plotted as variance versus lag (pixels)]
The intuitive leap in geostatistical reservoir modelling is therefore to repeat this exercise for an analogue of the reservoir being modelled and use the resulting variogram to guide the geostatistical model, assuming the application of an average variogram model is valid. The basic steps are as follows:
[Figure: averaged variogram of the pixelated image with a fitted model, plotted as variance versus lag (pixels); fitted range – 32 pixels]
1. Select (or imagine) an outcrop analogue.
2. Choose the rock model elements which appropriately characterise the reservoir.
3. Sketch their spatial distribution (the architectural concept sketch) guided by a suitable analogue dataset.
4. Estimate appropriate variogram ranges for individual elements (with different variogram ranges for the horizontal and vertical directions).
5. Estimate the anisotropy in the horizontal plane.
6. Input these estimates directly to a variogram-based algorithm if pixel-based techniques are selected (see next section).
7. Carry through the same logic for modelling reservoir properties, if variogram-based algorithms are chosen.
The approach above offers an intuitive route to the selection of the key input parameters for a geostatistical rock model. The approach is concept-based and deterministically steers the probabilistic algorithm which will populate the 3D grid; a sketch of how the estimates from steps 4–6 might be collected is given below.
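One minimal way to collect the estimates from steps 4–6 before entering them in a modelling package is sketched here; the element names, ranges and azimuths are purely illustrative, and real packages each have their own input dialogs:

    # Illustrative bundle of variogram parameters per model element (steps 4-6).
    # All numbers are placeholders, to be replaced by analogue-based estimates.
    variogram_inputs = {
        "channel_sand": {
            "model": "spherical",
            "range_major_m": 1500.0,   # along the depositional grain
            "range_minor_m": 400.0,    # across the grain
            "range_vertical_m": 3.0,
            "azimuth_deg": 20.0,       # orientation of the major axis
        },
        "overbank_fines": {
            "model": "exponential",
            "range_major_m": 600.0,
            "range_minor_m": 300.0,
            "range_vertical_m": 1.5,
            "azimuth_deg": 20.0,
        },
    }
    for element, p in variogram_inputs.items():
        aniso = p["range_major_m"] / p["range_minor_m"]
        print(element, "horizontal anisotropy", round(aniso, 2))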
There are some generalities to bear in mind:

• There should be greater variance across the grain of a sedimentary system than along it (represented by the shorter E-W range for the example above).
• Highly heterogeneous systems, e.g. glacial clastic systems, should have short ranges and are relatively isotropic in (x, y).
• Shoreface systems generally have long ranges, at least for their reservoir properties, and the maximum ranges will tend to be along the strike of the system.
• In high net fluvial systems, local coarse-grained components (if justifiably extracted as model elements) may have very short ranges, often only a nugget effect.
• In carbonate systems, it needs to be clear whether the heterogeneity is driven by diagenetic or depositional elements, or a blend of both; the single-step variography described above may not be sufficient to capture this (see Chap. 6 for suggested solutions).

Often these generalities may not be apparent from a statistical analysis of the well data, but they make intuitive sense. The outcome of an 'intuitive' variogram model should of course be sense-checked for consistency against the well data – any significant discrepancy should prompt a re-evaluation of either the concept or the approach to the handling of the data, e.g. the choice of model elements. However, this intuitive approach to geostatistical reservoir modelling is recommended in preference to simple conditioning of the variogram model to the well data.
[Figure caption fragment: grey-scale image is 22.5 × 22.5 cm; pixelated grey-scale image is 55 × 55 pixels]
1. Form geological concepts and decide whether rock modelling is required;
2. Select the model elements;
3. Set the balance between determinism and probability;
4. Intuitively set parameters to guide the geostatistical modelling process, consistent with the architectural concepts.

The next step is to select an algorithm and decide what controls are required to move beyond the default settings that all software packages offer. Algorithms can be broadly grouped into four:

• Object modelling, placing bodies with discrete shapes into 3D space, for which another model element, or group of elements, has been defined as the background.
• Pixel-based methods, using indicator variograms to create the model architecture by assigning the model element type on a cell-by-cell basis. The indicator variogram is simply a variogram that has been adapted for discrete rather than continuous variables. There are several variants of pixel modelling including Sequential Indicator Simulation (SIS), indicator kriging and various facies trend or facies belt methods which attempt to capture gradational lateral facies changes.
• Texture-based methods, using training images to recreate the desired architecture. Although this has been experimented with since the early days of reservoir modelling, it has only recently 'come of age' through the development of practical multi-point statistical (MPS) algorithms (Strebelle 2002).
• Process-based methods, in which an architecture is built up stratigraphically, often using 2D grids to define the upper and lower surfaces of a rock body or layer.

The pros and cons of these algorithms, including some common pitfalls, are explored below.

2.7.1 Object Modelling

Object modelling uses various adaptations of the 'marked point process' (Holden et al. 1998). A position in the 3D volume, the marked point, is selected at random, and a point defined within a modelled 3D body (ellipse, half moon, channel etc.) is attached to that point in space. The main inputs for object modelling are an upscaled element log, a shape template and a set of geometrical parameter distributions such as width, orientation and body thickness, derived from outcrop data (e.g. Fig. 2.37).
The algorithm works by implanting objects with dimensions selected from the prescribed input distributions and then rejecting objects which do not satisfy the well constraints (in statistics, the 'prior model'). For example, a channel object which is placed in 3D but intersects a well without a channel observed at that location is rejected. This process continues iteratively until a target proportion of each element is achieved, constrained by the expected total volume fraction of the object, e.g. 30% channels. Objects that do not intersect the wells are also simulated if needed to achieve the specified element proportions. However, spatial trends of element abundance or changing body thickness are not automatically honoured because most algorithms assume stationarity (no interwell trends). Erosional, or intersection, rules are applied so that an object with highest preservation potential can replace previously simulated objects (Fig. 2.38).
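The accept/reject logic can be illustrated with a deliberately simplified one-dimensional sketch (illustrative only: real implementations work in 3D, force 'must-intersect' wells to be honoured and apply erosion rules, all omitted here):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1D object placement with rejection against well observations.
    # Wells are positions (m) flagged True where a channel is observed in the log.
    wells = {200.0: True, 500.0: False, 800.0: True}
    model_length, target_fraction = 1000.0, 0.30

    accepted = []          # (centre, width) of accepted channel objects
    covered = 0.0
    while covered / model_length < target_fraction and len(accepted) < 1000:
        centre = rng.uniform(0, model_length)   # the 'marked point'
        width = rng.uniform(50, 150)            # drawn from the input distribution
        lo, hi = centre - width / 2, centre + width / 2
        # Reject any object that intersects a well where no channel is observed
        if any(lo <= pos <= hi and not has_channel
               for pos, has_channel in wells.items()):
            continue
        accepted.append((centre, width))
        # Track net coverage on a fine grid (overlaps counted once)
        grid = np.linspace(0, model_length, 1001)
        mask = np.zeros_like(grid, dtype=bool)
        for c, w in accepted:
            mask |= (grid >= c - w / 2) & (grid <= c + w / 2)
        covered = mask.mean() * model_length

    print(len(accepted), "objects accepted;",
          round(covered / model_length, 2), "net fraction")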
There are issues of concern with object modelling which require user control and awareness of the algorithm limitations. Firstly, it is important to appreciate that the algorithm can generate bodies that cross multiple wells if intervals of the requisite element appear at the right depth intervals in the wells. That is, the algorithm can generate probabilistic correlations without user guidance – something that may or may not be desirable. Some algorithms allow the user to control multiple well intersections of the same object, but this is not commonplace.
Secondly, the distribution of objects at the wells does not influence the distribution of inter-well objects, because of the assumption of stationarity in the algorithm. Channel morphologies are particularly hard to control because trend maps only influence the location of the 'marked point' for the channel object and not the rest of the body, which generally extends throughout the model.
A key issue with object modelling, therefore, is that things can easily go awry in the inter-well area. Figure 2.39 shows an example of 'funnelling', in which the algorithm has found it difficult to position channel bodies without intersecting wells with no channel observations; the channels have therefore been preferentially funnelled into the inter-well area. Again, some intuitive geological sense is required to control and if necessary reject model outcomes. The issue illustrated in Fig. 2.39 can easily be exposed by making a net sand map of the interval and looking for bulls-eyes around the wells.
Thirdly, the element proportions of the final model do not necessarily give guidance as to the quality of the model. Many users compare the element ('facies') proportions of the model with those seen in the wells as a quantitative check on the result, but matching the well intersections is the main statistical objective of the algorithm, so there is a circular logic to this type of QC.
[Key to the 'Moray Field' cross-sections: heterolithics, channel, sheetflood; vertical scale bar 10 m, horizontal scale bar 1000 m, vertical exaggeration ×50]
Fig. 2.38 Cross section through the 'Moray Field' model, an outcrop-based model through Triassic fluvial clastics in NE Scotland. Figures 2.38, 2.40, 2.41 and 2.42 follow the same section line through models which are conditioned to the same well data, with the same choice of elements, differing only in the selection of rock modelling algorithm
[Figure key: channel sand, overbank, shale, heterolithics; scale: 135 metres]
The key things to check are the degree of 'well match', the spatial distributions and the total element proportions (together). Repeated mismatches or anomalous patterns point to inconsistencies between wells, geometries and element proportions.
Moreover, it is highly unlikely that the element proportions seen in the wells truly represent the distribution in the subsurface, as the wells dramatically under-sample the reservoir. It is always useful to check the model element distributions against the well proportions, and the differences should be explained, but differences should be expected. The 'right' element proportion is the one which matches the underlying concept.
The following list of observations and tips provides a useful checklist for compiling body geometries in object modelling:
1. Do not rely on the default geometries.
2. Recall that thickness distributions have to be customised for the reservoir. Depending on the software package in use, the upscaled facies parameter can be based on the measured thickness from deviated wells – not necessarily the stratigraphic thicknesses.
3. Spend some time customising the datasets and collating your own data from analogues. There are a number of excellent data sources available to support this; they do not provide instant answers but do give good guidance on realistic preserved body geometries.
4. The obvious object shape to select for a given element is not always the best one to use. Channels are a good example of this, as the architecture of a channel belt is sometimes better constructed using ellipse- or crescent-shaped objects rather than channel objects per se, which are often more representative of the shape of the rivers that formed the channel belts rather than the preserved belt geometries of nested barforms. Also, these body shapes are less extensive than the channel shapes, rarely go all the way through a model area, and so reflect the trend map inputs more closely and are less prone to the 'bull's eye' effect.
5. There may be large differences between the geometry of a modern feature and that preserved in the rock record. The example above of channels is a case in point; carbonate reservoirs offer a more extreme example, as any original depositional architecture may have been completely overprinted by subsequent diagenetic effects. Differential compaction effects (sands vs. muds and coals) may also change the vertical geometry of the original sediment body.
6. It is important to distinguish uncertainty from variability. Uncertainty about the most
appropriate analogue may result in a wide spread of geometrical options. It is incorrect, however, to combine different analogue datasets and so create spuriously large amounts of variation. It is better to make two scenarios using different data sets and then quantify the differences between them.
7. Get as much information as possible from the well and seismic data sets. Do well correlations constrain the geometries that can be used? Is there useful information in the seismic?
8. We will never know what geometries are correct. The best we can do is to use our conceptual models of the reservoir to select a series of different analogues that span a plausible range of geological uncertainty and quantify the impact. This is pursued further in Chap. 5.

2.7.2 Pixel-Based Modelling

Pixel-based modelling is a fundamentally different approach, based on assigning properties using geostatistical algorithms on a cell-by-cell basis, rather than by implanting objects in 3D. It can be achieved using a number of algorithms, the commonest of which are summarised below.
Kriging is the most basic form of interpolation used in geostatistics, formalised by the French mathematician Georges Matheron (Matheron 1963) building on the work of the South African mining engineer Danie Krige. The technique is applicable to property modelling (next section), but rock models can also be made using an adaptation of the algorithm called indicator kriging.
The kriging algorithm attempts to minimise the estimation error at each point in the model grid. This means the most likely element at each location is estimated using the well data and the variogram model – there is no random sampling. Models made with indicator kriging typically show smooth trends away from the wells, and the wells themselves are often highly visible as 'bulls-eyes', especially in very sparse data sets. These models will have different element proportions to the wells because the algorithm does not attempt to match those proportions to the frequency distribution at the wells. Indicator kriging can be useful for capturing lateral trends if these are well represented in the well data set, and if a high degree of correlation between wells is desired.
In general, it is a poor method for representing reservoir heterogeneity because the lateral heterogeneity in the resulting model is too heavily influenced by the well spacing. For fields with dense, regularly-spaced wells and relatively long correlation lengths in the parameter being modelled, it may still be useful. Figure 2.40 shows an example of indicator kriging applied to the Moray data set – it is first and foremost an interpolation tool.
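The estimation principle can be sketched in a few lines of Python for the simple-kriging case (a toy 1D illustration with an assumed spherical covariance model, not the indicator variant used for rock modelling):

    import numpy as np

    def spherical_cov(h, sill=1.0, a=500.0):
        # Covariance from a spherical variogram model: C(h) = sill - gamma(h).
        h = np.minimum(np.abs(h), a)
        gamma = sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return sill - gamma

    def simple_kriging(xs, zs, x0, mean):
        # Simple kriging estimate at x0 from data (xs, zs), with a known global mean.
        C = spherical_cov(np.subtract.outer(xs, xs))   # data-to-data covariances
        c0 = spherical_cov(np.asarray(xs) - x0)        # data-to-target covariances
        w = np.linalg.solve(C, c0)                     # kriging weights
        return mean + w @ (np.asarray(zs) - mean)

    # Interpolating a property between three 'wells' along a 1D section
    xs, zs = [0.0, 400.0, 900.0], [0.22, 0.18, 0.25]
    print(round(simple_kriging(xs, zs, x0=200.0, mean=0.20), 4))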
Sequential Gaussian Simulation (SGS) is most commonly used for modelling continuous petrophysical properties (Sect. 3.4), but one variant, Sequential Indicator Simulation (SIS), is quite commonly used for rock modelling (Journel and Alabert 1990). SIS builds on the underlying geostatistical method of kriging, but then introduces heterogeneity using a sequential stochastic method to draw Gaussian realisations
using an indicator transform. The indicator is used to transform a continuous distribution to a discrete distribution (e.g. element 1 vs. element 2).
When applied to rock modelling, SIS will generally assume the reservoir shows no lateral or vertical trends of element distribution – the principle of stationarity again – although trends can be superimposed on the simulation (see the important comment on trends at the end of this section).
Models built with SIS should, by definition, honour the input element proportions from wells, and each geostatistical realisation will differ when different random seeds are used. Only when large ranges or trends are introduced will an SIS model realisation differ statistically from the input well data.
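The indicator transform itself is trivial to illustrate (the shale-volume log and cut-off below are invented for illustration):

    import numpy as np

    # Indicator transform: a continuous log is converted to discrete element codes
    # before simulation. Threshold and codes are illustrative.
    vshale = np.array([0.05, 0.12, 0.65, 0.80, 0.30, 0.08])
    indicator = (vshale < 0.4).astype(int)   # 1 = 'sand' element, 0 = 'shale' element
    print(indicator)                          # [1 1 0 0 1 1]
    # Global proportions from the wells, which SIS realisations should honour
    print(round(indicator.mean(), 2))         # sand fraction ~ 0.67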
The main limitation with such pixel-based methods is that it is difficult to build architectures with well-defined margins and discrete shapes, because the geostatistical algorithms tend to create smoothly-varying fields (e.g. Fig. 2.41). Pixel-based methods tend to generate models with limited linear trends, controlled instead by the principal axes of the variogram. Where the rock units have discrete, well-defined geometries or they have a range of orientations (e.g. radial patterns), object-based methods are preferable to SIS.
SIS is useful where the reservoir elements do not have discrete geometries, either because they have irregular shapes or variable sizes. SIS also gives good models in reservoirs with many closely-spaced wells and many well-to-well correlations. The method is more robust than object modelling for handling complex well-conditioning cases, and the funnelling effect is avoided. The method also avoids the bulls-eyes around wells which are common in indicator kriging.
The algorithm can be used to create correlations by adjusting the variogram range to be greater than the well spacing. In the example in Fig. 2.42, correlated shales (shown in blue) have been modelled using SIS. These correlations contain a probabilistic component, will vary from realisation to realisation, and will not necessarily create 100% presence of the modelled element between wells. Depending on the underlying concept, this may be desirable.
When using the SIS method as commonly applied in commercial packages, we need to be aware of the following:
1. Reservoir data is generally statistically insufficient and rarely enough to derive meaningful experimental variograms. This means that the variogram used in the SIS modelling must be derived by intuitive reasoning (see previous section).
2. The range of the variogram is not the same as the element body size. The range is related to the maximum body size, and actual simulated bodies can have sizes anywhere along the slope of the variogram function. The range should therefore always be set larger than your expected average body size – as a rule of thumb, twice the size.
3. The choice of the type of kriging used to start the process off can have a big effect. For simple kriging a universal mean is used and
the algorithm assumes stationarity. For ordinary kriging the mean is estimated locally throughout the model, which consequently allows lateral trends to be captured. Ordinary kriging works well with large numbers of wells and well-defined trends, but can produce unusual results with small data sets.
4. Some packages allow the user to specify local azimuths for the variogram. This information can come from the underlying architectural concept and can be a useful way of avoiding the regular linear striping which is typical of indicator models, especially those conditioned to only a small number of wells.
5. Variograms make average statements about spatial correlation in a given direction. This places a limit on the textures that can be modelled; variogram modelling is often described as a two-point process. The impact of this simplification is illustrated in Fig. 2.43, in which the image from Sect. 2.6.2 is revisited. In that case, the exhaustive, true dataset was available, and a variogram analysis of that data yielded length scales (variogram ranges) and a horizontal anisotropy ratio of 1.45:1. Yet if those same ranges and ratio are fed back into an SIS run, the channelised image does not reappear (Fig. 2.43, left-hand image). Instinctively, the anisotropy should be more pronounced, perhaps 10:1, but feeding this ratio through an SIS run will also not reproduce the original (Fig. 2.43, right-hand image). In short: the SIS algorithm is robust but has limitations; an alternative is required if the desire is to reproduce a model similar to the centre image in Fig. 2.43.
The facies trend simulation algorithm is a modified version of SIS which attempts to honour a logical lateral arrangement of elements, for example an upper shoreface passing laterally into a lower shoreface and then into mudstone. Figure 2.44 shows an example applied to the Moray data set. The facies trend approach, because it uses SIS, gives a more heterogeneous pattern than indicator kriging and does not suffer from the problem of well bulls-eyes. The latter is because the well data is honoured at the well position, but not necessarily in the area local to the well.
The user can specify stacking patterns, directions, angles and the degree of inter-fingering. The approach can be useful, but it is often very hard to get the desired inter-fingering throughout the model. The best applications tend to be shoreface environments, where the logical sequence of elements, from upper to lower shoreface, transitions on a large scale.
Fig. 2.43 Attempts to recreate a complex image using SIS. Centre: the original image; left: an SIS realisation based on the measured spatial correlation and average anisotropy in the centre image; right: an alternative run with increased anisotropy. The complexity of the image is beyond the reach of a standard variogram workflow
Similar modelling effects can also be achieved by the manual application of trends (see below).

2.7.3 Texture-Based Modelling

An alternative approach to the averaging of variography is the emergence of algorithms which aim to honour texture directly. Although there are parallels with very early techniques such as simulated annealing (Yarus and Chambers 1994, Chapter 1 by Srivastava), the approach has become more widely available through the multi-point statistics (MPS) technique (Strebelle 2002; Caers 2003).
The approach starts with a pre-existing training image, a cellular model or a scanned image, which is analysed for textural content. Using a geometric template, the frequencies of instances of a model element occurring next to similar and different elements are recorded, as is their relative position (to the west, the east, diagonally etc.). As the cellular model framework is sequentially filled, the record of textural content in the training image is referred to in order to determine the likelihood of a particular cell having a particular model content, given the content of the surrounding cells (Fig. 2.45).
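A deliberately tiny sketch of the pattern-counting idea behind Fig. 2.45 is given below (a four-cell cross template on a toy 2D training image; real MPS implementations use much larger templates, multiple grids and efficient search trees):

    import numpy as np
    from collections import Counter

    # Toy MPS-style pattern counting: record how often each neighbourhood
    # pattern occurs in a training image (codes: 0 = mud, 1 = sand).
    training = np.array([
        [0, 0, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 1, 0],
    ])

    counts = Counter()
    rows, cols = training.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Template: the four lateral/vertical neighbours -> value of the centre cell
            pattern = (training[r - 1, c], training[r + 1, c],
                       training[r, c - 1], training[r, c + 1])
            counts[(pattern, training[r, c])] += 1

    # Conditional probability of 'sand' given a fully sand-bounded neighbourhood
    pattern = (1, 1, 1, 1)
    total = sum(n for (p, _), n in counts.items() if p == pattern)
    sand = counts.get((pattern, 1), 0)
    print(sand, "/", total if total else "no such pattern in training image")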
Although the approach is pixel-based, the key step forward is the emphasis on potentially complex texture rather than the relatively simple 'two-point' statistics of variography. The prime limitation of variogram-based approaches – the need to derive simple rules for average spatial correlation – is therefore surmounted by modelling instead an average texture.
An example is shown in Fig. 2.46 – a cross-sectional model based on a sketch of a single bed in a dryland river system. The sketch was hand-drawn, drafted neatly, scanned and input to an MPS algorithm, resulting in the digital model illustrated.
MPS is arguably the most desirable algorithm for building 3D reservoir architectures, because architecture itself is a heterogeneous textural feature and MPS is designed to model these textures directly. In spite of this, there are two reasons why MPS is not necessarily the algorithm of choice:
1. A training image is required, and this is a 3D architectural interpretation in itself. MPS models are therefore not as 'instantaneous' as the simpler pixel-based techniques such as SIS, and require more pre-work. The Moray Field example shown in Fig. 2.47 was built using a training data set which was itself extracted from a model combining object- and SIS-based architectural elements. The MPS algorithm does not 'work alone.'
2. The additional effort of generating and checking a training image may not be required in order to generate the desired architecture; 2-point geostatistics or object modelling may be sufficient.
Despite the above, the technique can provide very realistic-looking architectures which overcome both the simplistic textures of older pixel-based techniques and the simplistic shapes and sometimes unrealistic architectures produced by object modelling.

2.7.4 Algorithms Compared

A comparison of the techniques above is shown in Fig. 2.48. These are the same images shown in Figs. 2.38, 2.40, 2.41, 2.44 and 2.47, but now placed together and shown with less vertical exaggeration. All models are based on the same underlying data set and all are architectural arrangements of the same three elements of an ephemeral braided fluvial system from the 'Moray Field' training set.
There is no technical error in the construction of the models in Fig. 2.48 and each would pass a statistical QC, yet the images are clearly very different, and subsequent flow modelling would yield very different results.
Fig. 2.46 MPS model based on a sketch of a bed in a dryland fluvial system: (a) graphic log through a type section of the bed; (b) conceptual sketch of the arrangement of elements within the bed over 50 m, used as a training image; (c) MPS realization of the bed over 400 m based on the training image (model courtesy of Tom Buckle and Rhona Hutton)
[Fig. 2.48 panel labels: MPS, Objects, Kriging, Truncated Gaussian, SIS]
Fig. 2.48 True-scale comparison of five techniques applied to the Moray Field outcrop-based model
Which is correct? We argue it is the one which best matches the underlying conceptual interpretation drawn before the modelling began – the sketch.

2.7.5 Process-Based Modelling

The section above compared modelling algorithms widely available in software packages used in the E&P industry. There are other approaches, however, and these can be considered for use in addition to the standard workflow if the approaches above fail to produce the desired architectures.
Figure 2.49 shows the model outcome from a process-based model for a meandering fluvial system. The image is a map view of a 3D model built stratigraphically, with the modelling algorithm depositing layers guided by process sedimentological parameters. These parameters can be varied to produce different architectures, as shown in Fig. 2.50, which compares variations in aggradation rate with preserved net sand volumes. The process can be tuned not only to understand key 'tipping' points for architectural connectivity but also to provide a range of architectures for forward-modelling.
Models such as those in Figs. 2.49 and 2.50 can be used to provide a template for modelling in standard software packages. This can be done by using the models as training images for MPS or, as in the case in Figs. 2.51 and 2.52, by mapping the process model grid onto a standard 3D modelling grid for onward property modelling (Da Pra et al. 2017).
Process models can become extremely elaborate as more realistic parameters are introduced to the algorithm. The example in Fig. 2.53 is a 3D process model mimicking the evolution of the Nile Delta, generated by solving the interaction between bathymetry, hydrodynamics and sediment transport.

Fig. 2.49 Map view of a process-based fluvial model (left), mimicking a meandering fluvial system (right). Image courtesy of Tom Buckle
Fig. 2.51 Process-based models for a fluvial system; four contrasting map views of the models (Da Pra et al. 2017).
Image reproduced courtesy of the EAGE
Fig. 2.52 Cross-sections through the fluvial model after translating the process-based model onto a standard modelling grid (upper image) and infilling with the desired reservoir elements (lower image) (Da Pra et al. 2017). Image reproduced courtesy of the EAGE
Fig. 2.53 Plan view of a process-based model of the Nile Delta, generated by solving the interaction between bathymetry, hydrodynamics and sediment transport. Image reproduced courtesy of TUDelft
Having discussed the pros and cons of the algorithms, the final consideration is therefore how to overlay deterministic control. In statistical terms, this is about overcoming the stationarity that probabilistic algorithms assume as a default. Stationarity is a prerequisite for the algorithms and assumes that elements are randomly but homogeneously distributed in the inter-well space. This is at odds with geological systems, in which elements are heterogeneously distributed and show significant non-stationarity: they are commonly clustered and show patterns in their distribution. Non-stationarity is the geological norm; indeed, Walther's Law – the principle that vertical sequences can be used to predict lateral sequences – is a statement of non-stationarity.
Deterministic trends are therefore required, whether to build a model using object- or pixel-based techniques, or to build a training image for a texture-based technique.

2.7.6.1 Vertical Trends
Sedimentary systems typically show vertical organisation of elements which can be observed in core and on logs and examined quantitatively
in the data-handling areas of modelling packages. Any such vertical trends are typically switched off by default – the assumption of stationarity.
As a first assumption, observed trends in the form of vertical probability curves should be switched on, unless there are compelling reasons not to use them. More significantly, these trends can be manually adjusted to help realise an architectural concept perhaps only partly captured in the raw well data.
Figure 2.54 shows an edited vertical element distribution which represents a concept of a depositional system becoming sand-prone upwards. This is a simple pattern, common in sedimentary sequences, but will not be integrated in the modelling process by default.
Thought is required when adjusting these profiles because the model is being consciously steered away from the statistics of the well data. Unless the well data is a perfect statistical sample of the reservoir (rarely the case and never provable) this is not a problem, but the modeller should be aware that hydrocarbon volumes are effectively being adjusted up and down away from well control. The adjustments therefore require justification, which comes, as ever, from the underlying conceptual model.
Unless the well data is a perfect statistical sample 3. The 2D map is hand-edited to represent the
of the reservoir (rarely the case and never prov- desired concept, with most attention being
able) this is not a problem, but the modeller paid to the most poorly sampled areas (in the
should be aware that hydrocarbon volumes are example shown, the trend grid is also
smoothed – the level of detail in the trend map for a model with the trends removed, with all other
should match the scale of the sand distribution model variables unchanged. Stationarity is over-
concept). come and the concept dominates the modelling.
4. The trend map is input to, in this case, an SIS The source of the trend can be an extension of
algorithm for rock modelling. the underlying data, as in the example above, or a
5. As a check, the interval average net-to-gross is less data-independent concept based on a regional
backed-out of the model as a map and com- model, or a trend surface derived from seismic
pared with the concept. The map shows more attributes – the ‘soft conditioning’ described in
heterogeneity because the variogram ranges Sect. 2.5.
have been set low and the model has been
tied to the actual well observations; the desired
2.7.6.3 3D Probability Volumes
deterministic trends, however, clearly control
The 3D architecture can be directly conditioned
the overall pattern.
using a 3D volume – a natural extension of the
The influence of the trend on the model is process above. The conditioning volume can be
profound in this case as the concept is for the built in a modelling exercise as a combination of
sand system to finger eastwards into a poorly the horizontal/vertical trends described above, or
drilled, mud-dominated environment. The fluid derived from a 3D data source, typically a seismic
volumes in the trended case are half that calculated volume.
Seismic conditioning directly in 3D creates some issues:
1. The volume needs QC. It is generally easier to check simpler data elements, so if the desired trends are separately captured in 2D trend surfaces and vertical proportion curves, then combination into a 3D trend volume is not necessary.
2. If conditioning to a 3D seismic volume, the resolution of the model framework needs to be consistent with the intervals the seismic attribute is derived from. For example, if the parameter being conditioned is the sand content within a 25 m thick interval, it must be assumed that the seismic data from which the seismic attribute is derived is also coming from that 25 m interval. This is unlikely to be the case for a simple amplitude extraction, and a better approach is to condition from inverted seismic data. The questions to ask are therefore: what was the seismic inversion process inverting for (was it indeed the sand content), and was the earth model used for the inversion the same one the reservoir model is being built on?
3. If the criteria for using 3D seismic data (2, above) are met, can a probabilistic seismic inversion be called upon? This is the ideal input to condition to.
4. If the criteria in point 2, above, are not met, the seismic can still be used for soft conditioning, but will be more artefact-free and easier to QC if applied as a 2D trend. The noisier the data, the softer the conditioning will need to be, i.e. the lower the correlation coefficient.

2.7.7 Alternative Rock Modelling Methods – A Comparison

An example is given below, to which alternative algorithms have been applied. The case is taken from a fluvio-deltaic reservoir – the Franken Field – based on a type log with a well-defined conceptual geological model (Fig. 2.56). The main reservoir is the Shelley, which divides into a clearly fluvial Lower Shelley characterised by sheetfloods, and an Upper Shelley, the sedimentology of which is less clear and can be viewed as either a lower coastal plain or a river-dominated delta.
Rock model realisations have been built from element distributions in 19 wells. Cross-sections taken at the same location through the models are illustrated in Figs. 2.57, 2.58 and 2.59 for a 2-, 4- and 7-interval correlation, respectively. The examples within each layering scheme explore object vs. pixel (SIS) modelling and the default model criteria (stationarity maintained) vs. the use of deterministic trends (stationarity overwritten). The models contrast greatly and the following observations can be made:
1. The more heavily subdivided models are naturally more 'stripy'. This is partly due to the 'binning' of element well picks into zones, which starts to break down stationarity by picking up any systematic vertical organisation of the elements, irrespective of the algorithm chosen and without separate application of vertical trends.
2. The stripy architecture is further enhanced in the 7-zone model because the layering is based on a flooding surface model, the unit boundaries for which are preferentially picked on shales. The unit boundaries are therefore shale-rich by design and prone to generating correlatable shales if the shale dimension is big enough (for object modelling) or the shale variogram range long enough (for SIS).
3. Across all frameworks, the object-based models are consistently more 'lumpy' and the SIS-based models consistently more 'spotty', a consequence of the different underlying principles of the algorithms.
4. The untrended object model for the two-zone realisation is the one most dominated by stationarity, and looks the least realistic geologically.
5. The addition of deterministic trends, both vertical and lateral, creates more ordered, less random-looking models, as the assumption of stationarity is overridden by the conceptual model.
Any of the above models presented in Figs. 2.57, 2.58 and 2.59 could be offered as a 'best guess', and could be supported at least superficially with an appropriate story line.
Fig. 2.56 The Franken Field reservoir – type log and proposed depositional environment analogues. The model elements are shown on the right-hand coloured logs, one of which is associated with a delta interpretation for the Upper Shelley, the other with an alternative coastal plain model for the Upper Shelley. Sands are marked in yellow, muds in blue, intermediate lithologies in intermediate colours. (Image courtesy of Simon Smith)
Presenting multiple models using different layering schemes and alternative algorithms also appears thorough, and in a peer review it would be hard to know which of these models is a 'good' or 'bad' representation of the reservoir. However, a number of the models were made quickly using system defaults and have little substance; stationarity (within
Fig. 2.57 The Franken Field. Top image: the two zone subdivision; middle image: object model (no trends applied, stationarity maintained); bottom image: trended object model

Fig. 2.58 The Franken Field. Top image: the four zone subdivision; middle image: pixel (SIS) model (no trends applied, stationarity maintained); bottom image: trended pixel (SIS) model
2.8 Summary

In this section we have offered an overview of approaches to rock modelling and reviewed a range of geostatistically-based methods, whilst holding the balance between probability and determinism, and the primacy of the underlying concept, as the core issues. Reservoir modelling is not simply a process of applying numerical tools to the available dataset – there is always an element of subjective design involved – an intuitive leap.
Overall, the rock model must make geological sense, and to summarise this we offer a brief resume of practical things which can be done to check the quality of the rock model – the QC process.

2.8.1 Rock Model QC

• Make architectural sketches along depositional strike and dip showing the key features of the conceptual models. During the model build, switch the model display to stratigraphic (simbox) view to remove the structural deformation. How do the models compare with the sketches?
• Watch out for the individual well matches reported by the software package – well by well. These are more useful and diagnostic than the overall 'facies' proportions. Anomalous wells point to weaknesses in the model execution.
• Make element proportion maps for each element in each zone and check these against well data and the overall concept. This is an important check on the inter-well probabilistic process.
• Assuming trends have been applied, check the statistics of the modelled element distribution against those for the well data alone; they are highly unlikely to be the same because of the undersampling of the wells, but the differences should be explicable in terms of any applied trends and the spatial locations of the wells.
• Make net sand isochore maps for each zone without wells posted; imposed trends should be visible and the well locations should not be (no bulls-eyes around wells).
• Make full use of the visualisation tools, especially the ability to scroll through the model vertically, layer by layer, to look for anomalous geometries, e.g. spikes and pinch-outs.
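A fragment of this kind of QC can be scripted; the sketch below simply tabulates model-versus-well element proportions (all numbers are illustrative), leaving the geological explanation of the differences to the modeller:

    import numpy as np

    # QC sketch: compare element proportions in the model with those in the wells.
    # Differences are expected (wells under-sample); they should be explicable.
    elements = ["channel", "sheetflood", "heterolithics"]
    well_props = np.array([0.35, 0.25, 0.40])    # from upscaled well logs
    model_props = np.array([0.42, 0.28, 0.30])   # from the 3D grid (illustrative)

    for name, w, m in zip(elements, well_props, model_props):
        print(f"{name:14s} wells {w:.2f}  model {m:.2f}  diff {m - w:+.2f}")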
2.8.2 Synopsis – Rock Modelling Guidelines

The first decision to be made is whether or not a rock model is truly required. If rock modelling can add useful control on the desired distribution of reservoir properties, then it is needed. If the desired property distributions can be achieved directly by property modelling, then rock modelling is not necessary at all.
If it is decided that a rock model is required, it then needs some thought and design. The use of system default values is unlikely to be successful. This chapter has attempted to stress the following things:
1. The model concept needs to be formed before the modelling begins, otherwise the modeller is 'flying blind'. A simple way of checking your (or someone else's) grasp of the conceptual reservoir model is to make a sketch section of the reservoir, or request a sketch, showing the desired architecture. If you can sketch it, you can model it.
2. The model concept needs to be expressed in terms of the chosen model elements, the selection of which is based not only on a consideration of the heterogeneity, but with a view to the fluid type and production mechanism. Some fluid types are more sensitive to heterogeneity than others; if the fluid molecules do not sense the heterogeneity, there is no need to model it – reference 'Flora's Rule' as a handy rule of thumb.
3. Rock models are mixtures of deterministic and probabilistic inputs. Well data tend to be statistically insufficient, so attempts to extract statistical models from the well data alone are often not successful. The road to happiness therefore generally lies with strong deterministic control, as determinism is the most direct method of carrying the underlying reservoir concept into the model.
4. To achieve the desired reservoir architecture, the variogram model has a leading influence if pixel-based methods are employed; we suggest the variogram range and its spatial anisotropy are the most important geostatistical inputs to reconcile with the conceptual sketch (a variogram sketch follows this list).
5. To get a reasonable representation of the model concept it is generally necessary to impose trends (both vertical and lateral), irrespective of the chosen algorithm, unless a process-based modelling tool is used to forward-model the likely trends (see the vertical proportion curve sketch after this list).
6. Guide the geostatistical algorithms during a rock model build with an intuitive understanding of the relationship between the underlying reservoir concept and the geostatistical rules which steer the chosen algorithm.
7. It is unlikely that the element proportions in the model will match those seen in the wells – do not expect this to be the case; the data and the model are statistically different. This is discussed further in the next chapter.
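As a sketch of point 4, the variogram range and anisotropy can be written down explicitly and compared with length scales measured off the conceptual sketch. The function below is an anisotropic exponential variogram with separate ranges along depositional strike, dip and the vertical; all names and range values are illustrative assumptions, not the API of any modelling package.

import numpy as np

def exponential_variogram(h_strike, h_dip, h_vert,
                          r_strike=2000.0, r_dip=800.0, r_vert=5.0,
                          sill=1.0):
    """Anisotropic exponential variogram.

    h_* are lag distances (m) along the principal directions; r_* are
    the practical ranges, chosen to honour the conceptual sketch (e.g.
    elongate sand bodies: long range along strike, short vertically).
    """
    # Normalise each lag by its range, then combine into a single
    # anisotropy-corrected lag before applying the exponential model.
    h_eff = np.sqrt((h_strike / r_strike) ** 2 +
                    (h_dip / r_dip) ** 2 +
                    (h_vert / r_vert) ** 2)
    return sill * (1.0 - np.exp(-3.0 * h_eff))

# Two cells 500 m apart along strike are still partly correlated...
print(exponential_variogram(500.0, 0.0, 0.0))  # ~0.53 of the sill
# ...whereas 5 m of vertical separation is already close to the sill.
print(exponential_variogram(0.0, 0.0, 5.0))    # ~0.95 of the sill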
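For point 5, one common vertical trend input is a vertical proportion curve (VPC): per-layer element fractions computed from the upscaled well cells and supplied to the pixel-based algorithm as a prior. A minimal sketch, assuming hypothetical arrays of layer indices and element codes from the blocked wells:

import numpy as np

# Hypothetical upscaled well data: one record per well cell.
layer = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3])    # k-index of each cell
element = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])  # 1 = channel sand

n_layers = layer.max() + 1
vpc = np.zeros(n_layers)
for k in range(n_layers):
    in_layer = element[layer == k]
    vpc[k] = in_layer.mean() if in_layer.size else np.nan

# vpc[k] is the sand fraction for layer k, used as a vertical trend so
# that, e.g., an upward-decreasing sand profile seen in the wells is
# carried into the inter-well volume.
print(np.round(vpc, 2))  # per-layer sand fractions, e.g. 1.0, 0.67, 0.0, 0.33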
References

Aarnes I, van der Vegt H, Hauge R, Fjellvoll B, Nordahl K (2019) Using sedimentary process-based models as training images for multipoint facies simulations. Bull Can Petrol Geol 67:217–230
Ainsworth RB, Sanlung M, Duivenvoorden STC (1999) Correlation techniques, perforation strategies and recovery factors. An integrated 3-D reservoir modeling study, Sirikit Field, Thailand. Am Assoc Petrol Geol Bull 83(10):1535–1551
Bentley MR, Elliott AA (2008) Modelling flow along fault damage zones in a sandstone reservoir: an unconventional modelling technique using conventional modelling tools in the Douglas Field, Irish Sea, UK. SPE paper 113958 presented at the SPE Europec/EAGE conference and exhibition. Society of Petroleum Engineers (SPE). https://doi.org/10.2118/113958-MS
Caers J (2003) History matching under training-image-based geological model constraints. SPE J 8(3):218–226
Caers J (2011) Modeling uncertainty in the earth sciences. Wiley, Hoboken
Campbell CV (1967) Lamina, laminaset, bed, bedset. Sedimentology 8:7–26
Da Pra A, Anastasi P, Corradi C, Sprega G (2017) 3D process-based model integration in reservoir modeling – a real case application. Presented at the 79th EAGE Conference and Exhibition 2017, Paris, extended abstract
Deutsch CV (2002) Geostatistical reservoir modeling. Oxford University Press, Oxford, p 376
Doyen PM (2007) Seismic reservoir characterisation. EAGE Publications, Houten
Dubrule O, Damsleth E (2001) Achievements and challenges in petroleum geostatistics. Pet Geosci 7:1–7
Fielding CR, Crane RC (1987) An application of statistical modelling to the prediction of hydrocarbon recovery factors in fluvial reservoir sequences. Society of Economic Paleontologists and Mineralogists (SEPM) special publication, vol 39
Haldorsen HH, Damsleth E (1990) Stochastic modelling. J Petrol Technol 42:404–412
Holden L, Hauge R, Skare Ø, Skorstad A (1998) Modeling of fluvial reservoirs with object models. Math Geol 30(5):473–496
Howell J, Martinius A, Good T (2014) The application of outcrop analogues in geological modelling: a review, present status and future outlook. Geol Soc London Spec Publ 387:1–25
Isaaks EH, Srivastava RM (1989) Introduction to applied geostatistics. Oxford University Press, Oxford
Jensen JL, Corbett PWM, Pickup GE, Ringrose PS (1995) Permeability semivariograms, geological structure and flow performance. Math Geol 28(4):419–435
Journel AG, Alabert FG (1990) New method for reservoir mapping. J Petrol Technol 42:212–218
Matheron G (1963) Principles of geostatistics. Econ Geol 58:1246–1266
McIlroy D, Flint S, Howell J, Timms N (2005) Sedimentology of the tide-dominated Jurassic Lajas Formation, Neuquén Basin, Argentina. Geol Soc London Spec Publ 252:83–107
Pyrcz MJ, Deutsch CV (2014) Geostatistical reservoir modeling, 2nd edn. Oxford University Press, Oxford
Soubaras R, Dowle R (2010) Variable-depth streamer – a broadband marine solution. First Break 28:89–96
Strebelle S (2002) Conditional simulation of complex geological structures using multiple-point statistics. Math Geol 34(1):1–21
Van Wagoner JC, Bertram GT (eds) (1995) Sequence stratigraphy of foreland basin deposits. American Association of Petroleum Geologists (AAPG), AAPG Memoir 64:137–224
Van Wagoner JC, Mitchum RM, Campion KM, Rahmanian VD (1990) Siliciclastic sequence stratigraphy in well logs, cores, and outcrops. AAPG methods in exploration series, vol 7. American Association of Petroleum Geologists (AAPG)
Weber KJ, Van Geuns LC (1990) Framework for constructing clastic reservoir simulation models. J Petrol Technol 42:1248–1253
Yarus JM, Chambers RL (1994) Stochastic modeling and geostatistics: principles, methods, and case studies. AAPG computer applications in geology, vol 3. American Association of Petroleum Geologists (AAPG), p 379