New Technologies in the Oil and Gas Industry
. of DSA Ti/TiO2-RuO2-SnO2 were used, in a solution containing NaCl, which produced strong oxidizers such as HOCl and Cl2. These promoted degradation of BTEX and phenol at different flow rates. The Zn2+ was removed by electrodeposition or by the formation of Zn(OH)2 due to the increased pH.
Assessing the removal of oil from a synthetic emulsion by the electrocoagulation-flotation process, [45] observed the influence of the operating parameters (initial oil concentration, current density, electrode separation, pH and electrolyte concentration) on the rate of COD reduction. NaCl was added to increase the solution's conductivity. The initial pH of the emulsion was 8.7. The zeta potential had an average value of -75 mV, indicating emulsion stability. The authors found that the best removal conditions were a current density of 4.44 mA/cm2, a treatment time of 75 min, a distance between electrodes of 10 mm and an electrolyte (NaCl) concentration of 3 g/L.
Studying the treatment of a synthetic effluent and a real produced water sample for removal of oil by the Fenton process, electroflotation and a combination of these two, [64] first evaluated the Fenton and electroflotation processes individually and optimized their parameters before evaluating the combined process. The Fenton process, using Fe2+ and H2O2, achieved a peak oil removal of around 95% after 150 minutes and 50% removal after 57 minutes. The electroflotation process with the optimized voltage managed to remove 98% of the oil after 40 minutes. The combined process, using the optimized parameters of each individual process, achieved removal of 98% after 10 minutes and 50% after 1 to 3 minutes, proving to be much more efficient than either procedure alone.
[65] evaluated the removal of sulfate and COD from oil refinery wastewater using three types of electrodes: aluminum, stainless steel and iron. They investigated the effects of current density, electrode array, electrolysis time, initial pH and temperature for two wastewater samples with different concentrations of COD and sulfate. The experimental results showed that the aluminum anode-cathode pair was the most efficient in reducing both contaminants, demonstrating the technical feasibility of electrocoagulation as a reliable method for pre-treatment of contaminated refinery wastewater.
[66] showed in their experiments that the treatment of synthetic emulsified produced water by EF gave the best results with 60 Hz alternating current, initial pH 9, an electrolysis time of 3 minutes and an applied current intensity of 3 A. Tests on simulated produced water achieved high organic load removal efficiencies, reaching 99% removal of oil and grease, color and turbidity. Compared to flocculation trials using the jar test, EF proved highly efficient for treating produced water, removing oil and grease emulsions, color and turbidity with no addition of chemical reagents or pH adjustment, whereas the jar-test trials were not effective, consuming high amounts of aluminum sulfate while achieving low removal of those parameters. The main advantage of alternating current electrolysis compared with direct current is lower wear of the electrode mass: using the same assay conditions for both technologies over 60 minutes, alternating current oxidized 1.6 g of the aluminum electrode while direct current oxidized 3.4 g.
2. Variable frequency AC electroflocculation
In this work, we used variable frequency electroflocculation, which consists of using alternating current from the power grid at 60 Hz while varying the voltage and the frequency between 1 Hz and 120 Hz. This alternating current was generated by reconstituting the sinusoidal form of the input current in a conversion system with vector control, which generates pulses whose formation time (period) is adjusted by a programmable time base through a microprocessor system.
Electrolytic Treatment of Wastewater in the Oil Industry 15
This system triggers an oscillator to form a new waveform that has a peak residence time
large enough to have conduction at a given polarity. The evaluation of EF with alternating
current for the treatment of effluents from oil platforms can be of great importance to
develop treatment processes that are fast, efficient and cost-effective. The aim of this
experiment was to develop and evaluate in laboratory scale a variable frequency
electroflocculation (EF) system for the treatment of oily water generated in offshore oil
production and to compare the results against those produced using direct current.
2.1. Electroflocculation units
AC and DC electroflocculation units consisted of a glass electrolytic cell with capacity of 1
liter under magnetic stirring, in which an electrode was inserted vertically (monopolar in
parallel) in a honeycomb arrangement, made of seven interspersed aluminum plates. These
plates measured 10 cm long, 5 cm wide and 3 mm thick and were separated by spacers of 1
cm each. After a predetermined electrolysis period, we waited for 30 minutes for complete
flotation of the emulsion to occur. Through a tap at the bottom of the beaker, the treated
effluent was removed to assess the efficiency of electroflocculation (EF), which was done by
monitoring, in triplicate, the following parameters: pH, conductivity, turbidity and color.
2.1.1. DC unit
The DC electroflocculation unit used a voltage of up to 15 V. First, AC power (110/220 V) was applied to a potentiometer connected to a step-down transformer feeding the secondary-stage rectifier bridge, which supplied DC power to the electrodes through a polarity reversing switch connected to a meter showing voltage (V) and current (A). These readings guide the operator regarding the parameters of the honeycomb electrode array. Figure 4 shows the diagram of the experimental DC setup.
Figure 4. Schematic diagram of the experimental DC unit.
2.1.2. Variable frequency AC unit
The alternating current at a voltage of up to 15 V and variable frequency between 1 Hz and
120 Hz was obtained from a Weg CFW0800 AC/AC converter and a step-down transformer
(Tecnopeltron PLTN model 100/15). In this setup, the 60 Hz input power from the grid is converted to a variable frequency output of 1 to 120 Hz in order to obtain AC power at the desired level. As with the DC setup, there is a meter indicating the voltage (V) and current (A) to guide the operator.
Figure 5 shows a block diagram where the 60 Hz current from the grid feeds a frequency
converter with variable output from 1 Hz to 120 Hz, connected to a variable voltage step-
down transformer, thereby providing appropriate frequency and voltage to the electrode.
In the rectification step that occurs in the variable frequency converter, the power is
transformed into DC. Then the new direct current is treated in the oscillator module
which converts it into pulses with controlled width, forming a new AC waveform, with a
frequency that can vary between 1 Hz and 120 Hz depending on the level of feedback
(reference) from the load controller. Thus, it has a sinusoidal waveform where the period
varies with the load, to obtain the best performance at active power levels.
Figure 5. Schematic diagram of the experimental AC unit.
The electrode is the central element of the treatment, so the proper selection of its material is very important. The most common electrode materials for electroflocculation are aluminum and iron, since they are inexpensive, readily available and highly effective. In this experiment we used a honeycomb array of seven interspersed aluminum plates measuring 10 cm long, 5 cm wide and 3 mm thick. The plates were separated by spacers (1 cm thick each), allowing the distance between the electrodes to be varied.
The electrodes were connected to specific instruments to control and monitor the
current and voltage applied to the system, namely a frequency converter/regulator,
potentiometer, step-down transformer, voltmeter, ammeter, bridge rectifier and polarity
reversing switch.
Figure 6 shows an example of honeycomb aluminum electrodes.
Figure 6. Honeycomb aluminum electrodes containing eight plates.
2.2. Tests with real effluent using oily water from the oil industry
Through laboratory tests we noted that the real effluent obtained from an oil company did not have high salinity but had a high oil and grease content (60 g/L), so it was characterized not as produced water but rather as oily effluent.
We conducted three tests:
1. the first using the effluent as received;
2. the second adding 60 g/L of natural salt; and
3. the third adding 60 g/L of salt plus emulsifiers.
Table 1 shows the values obtained with the AC and DC electroflocculation processes with
the original effluent as received. The results of the AC setup were obtained with the
maximum current of the unit (i = 2.5 A), due to the low salinity. The voltage was 11 V. In the
case of direct current, the unit only reached a maximum of 1.6 A, so we added 1 g of salt to
obtain the same current intensity as the AC unit. By adding salt to the effluent in the DC
system, there is an improvement in removal efficiency.
The analysis of the oil and grease parameter (supernatant) was carried out separately, while the remaining parameters were analyzed in the subnatant phase of the effluent. The AC and DC electroflocculation tests were performed with the effluent containing an oil and grease content of 60 g/L.
There was an increase in pH during the final AC and DC tests, attributed to the generation of OH- ions during the water reduction step.
2H2O(l) + 2e- → H2(g) + 2OH-(aq) (5)
Although these ions are also used to form the coagulating agent, the remaining quantity
results in an increase of the pH value. This was also observed by [43, 67, 68].
Parameters Oily wastewater AC DC
pH 6.7 8.3 8.2
Turbidity (NTU) 840 2 2
Color (Abs. 400 nm) 0.46 0.02 0.01
Salinity (mg/L) 279 340 1210
Conductivity (µS/cm) 580 702 2238
TDS (mg/L) 408 498 1680
Phenols (mg/L) 0.5 < 0.1 < 0.1
Sulfides (mg/L) 2.8 < 0.5 < 0.5
Ammonia nitrogen (mg/L) 36.0 3.7 1.5
O&G (mg/L) 60000 18 22
Current (A) 0 2.5 2.5
Tension (V) 0 11.0 8.5
Legend: time = 15 min, Delectrode = 1 cm, vol. = 3 L, i = 3 A, f = 60 Hz (AC). Note: In the DC setup, 1 g of salt was added to increase the conductivity and the current intensity, because the unit was otherwise unable to reach the same current as with the AC setup.
Table 1. Data from the EF treatment process.
Despite the low conductivity of the effluent, there was high removal of pollutants and complete clarification after 15 minutes of treatment. The removal of ammonia, phenols and sulfides may have occurred through stripping by the gas phase (electroflotation).
Figure 7 shows the evolution of the tests of the raw effluent and after treatment with AC
and DC.
Figure 7. EF tests with real effluent with low salinity. Legend: a) raw effluent; b) effluent treated with AC
electroflocculation; c) effluent treated with DC electroflocculation; and d) comparison of raw effluent
with samples treated with AC and DC electroflocculation. Note: The tests performed with t = 15 min.
The test results using the effluent plus NaCl to bring the salinity to 60 g/L, to simulate produced water, are shown in Table 2.
Parameters Oily wastewater AC DC
pH 5.9 6.3 6.3
Turbidity (NTU) 1000 10 16
Color (Abs.) 0.63 0.03 0.05
Salinity (g/L) 50.8 46.0 47.2
Conductivity (mS/cm) 91.5 72.4 75.3
TDS (g/L) 64.9 59.6 59.6
Phenols (mg/L) 0.5 < 0.1 < 0.1
Sulfide (mg/L) 2.8 < 0.5 < 0.5
Ammonia nitrogen (mg/L) 36.0 2.9 2.7
O&G (mg/L) 60000 30 35
Mass electrode (g) - 0.16 0.23
Current (A) - 2.5 2.5
Tension (V) - 1.5 2.0
Data: t = 6 min, Delectrode = 1 cm, vol. = 3 L, i = 3 A, f = 60 Hz (AC).
Table 2. Results obtained during tests of AC and DC electroflocculation using effluent with high
salinity.
Since the electrolytic process involves corrosion of the electrode, according to Faraday's
laws, there is mass loss of the electrodes. We measured this electrode mass loss by the
weight difference before and after each test. High removal efficiencies were observed in tests
with the high-salinity effluent. Electrode mass consumption with alternating current was
33% lower than with direct current, with all other conditions the same. Low voltage was
applied during electrolysis in the tests to achieve high pollutant removal efficiency. In
previous tests, the low conductivity greatly increased the voltage required by the system,
leading to high energy consumption. This indicates that high conductivity greatly favors the
electrolytic process.
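The electrode dissolution referred to here can be checked against Faraday's laws; a minimal sketch for an aluminum anode, using the current and time of the DC run in Table 2 (the measured loss there, 0.23 g, exceeds this faradaic figure, which is often attributed to additional chemical side dissolution of aluminum):

```python
# Faraday's law: m = I * t * M / (z * F), the theoretical anode mass
# dissolved by a charge I*t, for Al3+ (z = 3 electrons, M = 26.98 g/mol).
F_CONST = 96485.0   # Faraday constant, C/mol

def faraday_mass_loss(current_a, time_s, molar_mass_g=26.98, z=3):
    return current_a * time_s * molar_mass_g / (z * F_CONST)

m = faraday_mass_loss(2.5, 6 * 60)   # i = 2.5 A, t = 6 min (Table 2, DC)
print(round(m, 3))                   # 0.084 g
```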
Figure 8 shows the evolution of the tests with high-salinity effluent using the EF technology
with alternating current and direct current. Note the total clarification after the tests without
filtration.
Table 3 shows the results of the high-salinity effluent treated with the AC and DC processes. As can be seen from Table 3, the high turbidity, color and O&G were almost completely removed (above 99%). The voltage applied to the electrodes was very low, resulting in high efficiency of the technique. The electrode mass consumed with DC was 31% higher than with AC.
A hypothesis for the lower electrode consumption with alternating current is that since DC only flows in one direction, there may be irregular wear on the plates because the attack of the current and the subsequent oxidation occur at the same preferential points of the electrode. In the case of AC, the cyclical energization retards the normal mechanisms of attack on the electrode and makes this attack more uniform, thus ensuring longer electrode life.
Figure 8. EF tests with high-salinity effluent (60 g/L NaCl). Legend: a) raw wastewater containing 60 g/L of O&G; b) raw wastewater being mixed with salt for EF testing; c) effluent treated with AC (left) and DC (right) electroflocculation.
Parameters Oily wastewater AC DC
pH 6.4 6.7 6.7
Turbidity (NTU) 11050 8 10
Color (Abs.) 7.57 0.03 0.04
Salinity (g/L) 50.8 47.5 47.8
Conductivity (mS/cm) 91.5 87.4 87.0
TDS (g/L) 64.9 61.9 61.5
Phenols (mg/L) 0.5 0.2 0.2
Sulfide (H2S) (mg/L) 2.8 1.2 1.0
Ammonia nitrogen (mg/L) 36 4 5
O&G (mg/L) 60000 32 30
Mass electrode (g) - 0.18 0.26
Current (A) - 2.5 2.5
Tension (V) - 1.5 2.0
Note: Data: vol. = 3 L, t = 6 min.
Table 3. Results of treating high-salinity effluent with AC and DC electrolysis.
Figure 9 shows the evolution of EF treatment of the high-salinity effluent during the stages of development using AC and DC.
Figure 9. Tests of EF with high-salinity effluent (60 g/L of salt). Processing steps: a) emulsified effluent; b) effluent undergoing electroflocculation with formation of supernatant sludge; c) treated effluent, with sludge formation; d) original effluent and after treatment with AC (left) and DC (right) electroflocculation.
3. Conclusions
In the present study, we could confirm that the EF process produces satisfactory results for
treatment of oily wastewater, allowing its discharge into water bodies or reinjection in oil
formations. The AC technology was highly effective, both with the original oily water as
received and with the simulated produced water after addition of salt.
Overall, the results confirm the potential of the technique, which, using simple and compact equipment, can be employed for the decontamination of organic compounds. The tests on oily water achieved high organic load removal efficiencies, reaching 99% removal of oil and grease, color and turbidity, along with high removal of phenols, ammonia and sulfides.
The biggest advantage of AC versus DC electroflocculation is the lower electrode wear with the former technique. Using the same testing conditions and a time of 6 minutes for both technologies, the reduction in electrode consumption with AC was above 30%. The AC electroflocculation technique thus seems to be a promising alternative for the treatment of oily wastewater from the oil industry.
4. Acronyms
A - Ampere
AC - Alternating Current
ACE Separator™ - Alternating Current Electrocoagulation separator
BOD - Biochemical Oxygen Demand
BTEX - Benzene, Toluene, Ethylbenzene and Xylenes
COD - Chemical Oxygen Demand
DC - Direct Current
Delectrode - Distance between electrodes
DSA - Dimensionally Stable Anode
Novel Formulation of Environmentally Friendly Oil Based Drilling Mud
[Table: calculated vs. measured mud densities and their error differences, e.g. for Jatropha, 8.326 ppg]
From the above table, the error differences between the calculated and measured densities all lie below 0.1; thus the readings obtained using the mud balance have high accuracy. It also showed that the denser the base oil, the higher the amount of barite needed to build the required mud weight.
4.2. Viscosity and gel strength results
Viscosity readings obtained from the experiment carried out on the rotary viscometer are contained in Table 3. The dial reading values (in lb/100 ft2) are tabulated against the viscometer speeds in RPM. Viscosity values are calculated with the standard equations:
Apparent viscosity (AV) = dial reading at 600 RPM / 2
Plastic viscosity (PV) = dial reading at 600 RPM − dial reading at 300 RPM
Dial speed (RPM) Diesel Algae Jatropha Moringa Canola
600 185 122 154 169 128
300 170 114 133 158 120
200 169 96 124 149 115
100 163 88 114 143 114
60 152 82 107 140 113
30 143 74 98 136 111
6 122 62 92 120 110
3 81 55 76 79 60
Table 3. Viscometer Readings for Diesel, Algae, Jatropha, Moringa and Canola OBMs
Rheological Properties Diesel Algae Jatropha Moringa Canola
Plastic Viscosity 15 8 21 11 8
Apparent Viscosity 92.5 61 77 84.5 64
Gel Strength 50/51 52/43 54/55 52/53 60/72
Table 4. Plastic Viscosities, Apparent Viscosities and Gel Strengths
Diesel OBM had the highest apparent viscosity, followed by Moringa, then Jatropha, Canola and Algae OBMs.
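The Table 4 values follow from the 600 and 300 RPM dial readings of Table 3 via the standard two-speed field equations; a short sketch (the yield point YP is not reported in Table 4 and is included here only as the standard companion quantity):

```python
# Standard API field equations from 600 and 300 RPM dial readings:
#   AV = theta600 / 2 (cP), PV = theta600 - theta300 (cP),
#   YP = theta300 - PV (lb/100 ft^2).
def rheology(theta600, theta300):
    pv = theta600 - theta300
    av = theta600 / 2
    yp = theta300 - pv
    return av, pv, yp

readings = {"Diesel": (185, 170), "Algae": (122, 114),
            "Jatropha": (154, 133), "Moringa": (169, 158),
            "Canola": (128, 120)}   # Table 3, 600/300 RPM rows
for mud, (t600, t300) in readings.items():
    av, pv, yp = rheology(t600, t300)
    print(f"{mud}: AV={av}, PV={pv}, YP={yp}")
# Diesel gives AV=92.5 and PV=15, matching Table 4.
```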
Figure 6. Viscometer Plot for Diesel OBM
Figure 7. Viscometer Plot for Jatropha OBM
Figure 8. Viscometer Plot for Moringa OBM
Figure 9. Viscometer Plot for Algae OBM
Figure 10. Viscometer Plot for Canola OBM
Figure 11. Combined viscometer plot for Diesel, Algae and Jatropha OBMs
It can be seen that the plots in Figures 6 to 11, generated from the dial readings of all the mud samples, are similar to the Bingham plastic model, indicating that the muds have similar rheological behaviour.
However, not all the lines of the plots are as straight as the Bingham plastic model. This can be explained by a number of factors, such as the possible presence of contaminants and the possibility of the fluid behaving like a different model, such as Herschel-Bulkley.
A Bingham plastic fluid will not flow until the shear stress exceeds a certain minimum value τy, known as the yield point (Bourgoyne et al., 1991). After the yield point has been exceeded, changes in shear stress are proportional to changes in shear rate, and the constant of proportionality is known as the plastic viscosity μp.
From the figures, the yield points of the different muds can be read off: the respective yield points are the intercepts on the vertical (shear stress) axes.
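The Bingham plastic relation described above can be written directly; a minimal sketch (the numeric inputs are illustrative, not values read from the plots):

```python
# Bingham plastic: no flow until the shear stress exceeds the yield
# point tau_y; beyond it, stress grows linearly with shear rate,
# with slope equal to the plastic viscosity mu_p.
def bingham_stress(shear_rate, tau_y, mu_p):
    return tau_y + mu_p * shear_rate

# The yield point is the shear-stress-axis intercept of the straight
# line (shear rate -> 0), as read off the viscometer plots:
print(bingham_stress(0, 155, 15))    # intercept equals tau_y -> 155
print(bingham_stress(10, 155, 15))   # stress at shear rate 10 -> 305
```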
For reduced friction during drilling, Algae OBM gives the best results, followed by Jatropha OBM, then Moringa OBM. This means Diesel OBM offers the greatest resistance to fluid flow. Algae, Jatropha, Moringa and Canola OBMs pose better prospects in the sense that their lower viscosities mean less resistance to fluid flow, which will in turn lead to reduced wear in the drill string [10].
4.3. Mud filtration results
The filtration tests were carried out at 350 kPa due to the low level of the gas in the cylinder.
The mud cakes obtained from the API filter press exhibited a slick, soft texture.
From Table 5 and Figures 12 to 15, we can infer that Diesel OBM had the highest rate of
filtration and spurt loss. Comparing this to a drilling scenario, this means that the mud cake
from Diesel OBM is the most porous, and the thickest.
From these inferences, we can see that Algae, Jatropha, Moringa and Canola OBMs are better
in filtration properties than Diesel OBM as inferred from thickness and filtration volumes.
Figure 12. Filtration Volumes for Diesel, Algae, Jatropha and Moringa OBMs
Figure 13. Filtration Volumes for Diesel, Jatropha and Canola OBMs
Figure 14. Mud Cake Thicknesses for Diesel, Algae, Canola OBMs
Figure 15. Mud Cake Thicknesses for Diesel, Jatropha and Canola OBMs
Filtration Properties Diesel Algae Jatropha Moringa Canola
Total Fluid Volume (ml) 6.9 6.2 6.3 7.2 6.0
Oil Volume (ml) 2.3 1.1 1.1 2.5 1.0
Water Volume (ml) 4.6 5.1 4.2 4.7 4.3
Cake Thickness (mm) 1.0 0.9 0.8 0.9 0.78
Table 5. Mud Filtration Results
Problems caused as a result of excessive cake thickness include [4]:
i. Tight spots in the hole that cause excessive drag.
ii. Increased surges and swabbing due to reduced annular clearance.
iii. Differential sticking of the drillstring due to increased contact area and rapid
development of sticking forces caused by higher filtration rate.
iv. Primary cementing difficulties due to inadequate displacement of filter cake.
v. Increased difficulty in running casing.
The problems resulting from excessive filtration volumes include [4]:
i. Formation damage due to filtrate and solids invasion, with the damaged zone too deep to be remedied by perforation or acidization. Damage may include precipitation of insoluble compounds, changes in wettability, changes in relative permeability to oil or gas, formation plugging with fines or solids, and swelling of in-situ clays.
ii. Invalid formation-fluid sampling test. Formation-fluid flow tests may give results for
the filtrate rather than for the reservoir fluids.
iii. Formation-evaluation difficulties caused by excessive filtrate invasion, poor
transmission of electrical properties through thick cakes, and potential mechanical
problems running and retrieving logging tools.
iv. Erroneous properties measured by logging tools (measuring filtrate altered properties
rather than reservoir fluid properties).
v. Oil and gas zones may be overlooked because the filtrate is flushing hydrocarbons
away from the wellbore, making detection more difficult.
4.4. Hydrogen ion potential results
Drilling muds are always treated to be alkaline (i.e., pH > 7). The pH will affect viscosity; bentonite is least affected if the pH is in the range of 7 to 9.5. Above this, the viscosity will increase and may give viscosities that are out of proportion for good drilling properties. For minimizing shale problems, a pH of 8.5 to 9.5 appears to give the best hole stability and control over mud properties. A high pH (10+) appears to cause shale problems.
The corrosion of metal is increased if it comes into contact with an acidic fluid. From this point
of view, the higher pH would be desirable to protect pipe and casing (Baker Hughes, 1995).
The pH values of all the samples meet a few of the requirements stated, but Diesel OBM, with a pH of less than 8.5, does not meet the specification. Algae, Jatropha, Moringa and Canola OBMs show better results since their pH values fall within this range.
Type of Oil DIESEL ALGAE JATROPHA MORINGA
pH Value 8 9 8.5 9
Table 6. pH Values
4.5. Results of cuttings carrying index
Only three drilling-fluid parameters are controllable to enhance the movement of drilled solids from the wellbore: annular velocity (AV), density (mud weight, MW), and viscosity. The Cuttings Carrying Index (CCI) is a measure of a drilling fluid's ability to transport drilled cuttings out of the hole. Higher CCIs mean better hole-cleaning capacity.
From Table 7, we can see that Jatropha OBM showed the best CCI results.
Diesel Jatropha Canola
CCI 15.901 19.067 17.846
Table 7. Cuttings Carrying Indices (CCIs)
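The chapter does not reproduce the CCI formula; a commonly used field form is CCI = K × AV × MW / 400,000, with AV the annular velocity (ft/min), MW the mud weight (ppg) and K the power-law consistency index (eq. cP). A minimal sketch under that assumption, with purely illustrative inputs:

```python
# Cuttings Carrying Index, assumed field form:
#   CCI = K * AV * MW / 400,000
# where CCI >= 1 is commonly read as adequate hole cleaning.
def cuttings_carrying_index(k_eq_cp, annular_velocity_ftmin, mud_weight_ppg):
    return k_eq_cp * annular_velocity_ftmin * mud_weight_ppg / 400_000

# Illustrative inputs (not values from this study):
print(cuttings_carrying_index(400, 100, 10))   # -> 1.0
```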
4.6. Pressure loss modeling results
The Bingham plastic model is the standard viscosity model used throughout the industry, and it can be made to fit high shear-rate viscosity data reasonably well. Plastic viscosity is generally associated with the viscosity of the base fluid and the number, size, and shape of solids in the slurry, while yield stress is associated with the tendency of the components to build a shear-resistant structure.
Diesel Jatropha Canola
Drill Pipe 829 277.39 250.65
Drill Collar 177.35 173.75 157.0
Drill Collar (Open) 161.35 158.15 142.9
Drill Pipe (Open) 14.1 13.81 12.48
Drill Pipe (Cased) 9.28 9.10 8.22
Total 1191.98 706.45 571.25
Table 8. Bingham Plastic Pressure Losses in Psi
It can be seen from the table that Jatropha and Canola OBMs gave better pressure-loss results than Diesel OBM as a result of their lower plastic viscosities, and hence their use during drilling activities should be encouraged.
4.7. Result of the toxicity measurements
Samples of 100 ml of each of the selected oils were exposed to both corn seeds and bean seeds, and the number of days the crops survived is indicated in Figure 16. The growth rate was also measured, i.e., the new length of the plant was measured at regular time intervals. In the graph of toxicity of the diesel-based mud, the reduced growth rate indicates when the leaves began to yellow, and the static zero values indicate when the plant died.
From the results indicated in Figure 16, it can be concluded that jatropha oil has a less harmful effect on plant growth than canola and diesel. Bean seeds were planted and, after one week, were exposed to 100 ml of either jatropha-formulated mud or diesel-formulated mud. The seeds exposed to jatropha mud survived for 18 days, while those exposed to diesel mud survived for 6 days and then withered. When the soil was checked, there was no sign of any living organisms in the diesel mud sample, while in the jatropha mud sample there were signs of living organisms such as earthworms and other small insects. This shows that the jatropha mud sample is environmentally safer for both plants and small animals than the diesel mud sample.
From Figure 17, it can be seen that the seeds exposed to jatropha had the highest number of days of survival, indicating its lower toxicity, while those exposed to diesel had the lowest number of days of survival, indicating its high toxicity. The toxicity of diesel can be traced to its high aromatic hydrocarbon content. Therefore, replacements for diesel should either eliminate or minimize the aromatic content, making the material non-toxic or less toxic. Biodegradation and bioaccumulation, however, depend on the molecular chemistry of the base fluids used. In general, green materials, i.e., plant materials containing oxygen within their structure, degrade more easily.
Figure 16. Comparison of Growth Rate Curve of Different Mud Types
4.8. Results of density variation with temperature
Densities were measured for the various samples at temperatures ranging from 30 °C to 80 °C and are summarized in Table 9.
Figure 17. Toxicity of different mud types
Temperature (°C) Diesel Jatropha Canola
30 10 10 10
40 10.1 10.05 10.05
50 10.17 10.1 10.05
60 10.2 10.15 10.1
70 10.2 10.15 10.15
80 10.25 10.2 10.17
Table 9. Density Changes in ppg at Varying Temperatures.
The mud samples were heated at constant pressure in an open system, hence the observed density increase.
At temperatures of 60 °C and 70 °C, the densities of Diesel and Jatropha OBMs were constant, while for Canola OBM this happened at a lower range, 40 °C to 50 °C. This is shown in Figure 18. This could be due to differences in the temperature and heat energy required to break bonds, which vary with the fluid properties (i.e., the continuous phases).
Figure 18. Density against Temperature (Diesel, Jatropha and Canola OBMs)
After the results were recorded, extrapolations were made and hypothetical values were derived for temperatures as high as 320 °C, to enhance the prediction using an Artificial Neural Network (ANN). These values are summarized in Table 10.
Temperature (°C) Diesel Jatropha Canola
30 10 10 10
40 10.1 10.05 10.05
50 10.17 10.1 10.05
60 10.2 10.15 10.1
70 10.2 10.15 10.15
80 10.25 10.2 10.17
90 10.31133 10.24333 10.20667
100 10.35648 10.2819 10.24095
110 10.40162 10.32048 10.27524
120 10.44676 10.35905 10.30952
130 10.4919 10.39762 10.34381
140 10.53705 10.43619 10.3781
150 10.58219 10.47476 10.41238
160 10.62733 10.51333 10.44667
170 10.67248 10.5519 10.48095
180 10.71762 10.59048 10.51524
190 10.76276 10.62905 10.54952
200 10.8079 10.66762 10.58381
210 10.85305 10.70619 10.6181
220 10.89819 10.74476 10.65238
230 10.94333 10.78333 10.68667
240 10.98848 10.8219 10.72095
250 11.03362 10.86048 10.75524
260 11.07876 10.89905 10.78952
270 11.1239 10.93762 10.82381
280 11.16905 10.97619 10.8581
290 11.21419 11.01476 10.89238
300 11.25933 11.05333 10.92667
310 11.30448 11.0919 10.96095
320 11.34962 11.13048 10.99524
Table 10. Hypothetical Temperature-Density Values (extrapolated from regression analysis).
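The extrapolated values in Table 10 are consistent with an ordinary least-squares straight line fitted to the six measured points of Table 9; a sketch for the Diesel column:

```python
# The Table 10 extrapolation is reproducible with a least-squares line
# fitted to the six measured Diesel points of Table 9.
temps = [30, 40, 50, 60, 70, 80]               # temperature, °C
rho = [10, 10.1, 10.17, 10.2, 10.2, 10.25]     # density, ppg (Diesel)

n = len(temps)
t_mean = sum(temps) / n
r_mean = sum(rho) / n
slope = (sum((t - t_mean) * (r - r_mean) for t, r in zip(temps, rho))
         / sum((t - t_mean) ** 2 for t in temps))
intercept = r_mean - slope * t_mean

def density(temp_c):
    """Extrapolated Diesel OBM density (ppg) at the given temperature."""
    return intercept + slope * temp_c

print(round(density(90), 5))    # 10.31133, the 90 °C Diesel entry in Table 10
```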
4.9. Results of neural networking
From the Artificial Neural Network Toolbox in MATLAB 2008a, the following results were obtained:
60% of the data were used for training the network, 20% for testing, and another 20% for
validation.
After training the network, the regression values returned are summarized in Table 11.
Diesel Jatropha Canola
Training 0.99999 0.99999 0.99995
Testing 0.99725 0.99056 0.99898
Validation 0.99706 0.98201 0.99328
All 0.99852 0.99414 0.99675
Table 11. Regression Values.
Since all regression values are close to unity, the network prediction can be considered successful.
The graphs of training, testing and validation are presented below:
The values were returned after performing five iterations for each network. This indicates that the Artificial Neural Network, after being trained and simulated, is a viable and feasible instrument for prediction.
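The "regression values" reported by the toolbox are correlation coefficients (R) between the experimental and estimated outputs; a sketch computing R for the first six Diesel rows of Table 12 (a subset, so the result will not equal the full-data figure in Table 11):

```python
import math

# First six Diesel rows of Table 12: experimental vs. estimated density.
exp_vals = [10, 10.1, 10.17, 10.2, 10.2, 10.25]
est_vals = [10.049, 10.1407, 10.1794, 10.2022, 10.2236, 10.24]

n = len(exp_vals)
mx = sum(exp_vals) / n
my = sum(est_vals) / n
# Pearson correlation: covariance over the product of standard deviations.
cov = sum((x - mx) * (y - my) for x, y in zip(exp_vals, est_vals))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in exp_vals)
                    * sum((y - my) ** 2 for y in est_vals))
print(round(r, 4))
```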
Figures 19 to 31 present the plots of Experimental data against Estimated (predicted) data
for training, testing and validation processes from MATLAB 2008.
Figure 19. Diesel OBM Validation values
Figure 20. Diesel OBM Test values
Figure 21. Diesel OBM Training values
Figure 22. Diesel OBM Overall values
Figure 23. Diesel OBM Overall values
Figure 24. Jatropha OBM Validation values
Figure 25. Jatropha OBM Test values
Figure 26. Jatropha OBM Training values
Figure 27. Jatropha OBM Overall values
Figure 28. Canola OBM Validation values
Figure 29. Canola OBM Test values
Figure 30. Canola OBM Training values
Figure 31. Canola OBM Overall values
We can see from Figures 19 to 31 that the data points all align closely with the straight identity line drawn across each plot. This validates the accuracy of the network predictions and gives rise to the high regression values (tending towards unity) presented in Table 11.
Errors, estimated values and experimental values are summarized in Tables 12 to 14.
Temperature (°C) Exp Values Est Values Errors
30 10 10.049 0.049
40 10.1 10.1407 0.0407
50 10.17 10.1794 0.0094
60 10.2 10.2022 0.0022
70 10.2 10.2236 0.0236
80 10.25 10.24 -0.01
90 10.31133 10.287 -0.02433
100 10.35648 10.3579 0.001424
110 10.40162 10.3904 -0.01122
120 10.44676 10.4222 -0.02456
130 10.4919 10.4835 -0.0084
140 10.53705 10.5204 -0.01665
150 10.58219 10.5455 -0.03669
160 10.62733 10.6133 -0.01403
170 10.67248 10.687 0.014524
180 10.71762 10.7202 0.002581
190 10.76276 10.7714 0.008638
200 10.8079 10.8335 0.025595
210 10.85305 10.8611 0.008052
220 10.89819 10.8991 0.00091
230 10.94333 10.9623 0.018967
240 10.98848 10.9955 0.007024
250 11.03362 11.0273 -0.00632
260 11.07876 11.085 0.006238
270 11.1239 11.1195 -0.0044
280 11.16905 11.1474 -0.02165
290 11.21419 11.2049 -0.00929
300 11.25933 11.2432 -0.01613
310 11.30448 11.2545 -0.04998
320 11.34962 11.2674 -0.08222
Table 12. Errors, Experimental Values, and Estimated Values for Diesel OBM
Temperature (°C) Exp Values Est Values Errors
30 10 10 0
40 10.05 10.05 0
50 10.1 10.0998 -0.0002
60 10.15 10.1485 -0.0015
70 10.15 10.2556 0.1056
80 10.2 10.3232 0.1232
90 10.24333 10.3143 0.070967
100 10.2819 10.2851 0.003195
110 10.32048 10.281 -0.03948
120 10.35905 10.3147 -0.04435
130 10.39762 10.3985 0.000881
140 10.43619 10.4526 0.01641
150 10.47476 10.4769 0.002138
160 10.51333 10.5126 -0.00073
170 10.5519 10.5544 0.002495
180 10.59048 10.5884 -0.00208
190 10.62905 10.63 0.000952
200 10.66762 10.6665 -0.00112
210 10.70619 10.7025 -0.00369
220 10.74476 10.741 -0.00376
230 10.78333 10.7559 -0.02743
240 10.8219 10.7655 -0.0564
250 10.86048 10.803 -0.05748
260 10.89905 10.8872 -0.01185
270 10.93762 10.9375 -0.00012
280 10.97619 10.9644 -0.01179
290 11.01476 11.0148 3.81E-05
300 11.05333 11.0533 -3.3E-05
310 11.0919 11.0747 -0.0172
320 11.13048 11.1305 2.38E-05
Table 13. Errors, Experimental Values, and Estimated Values for Jatropha OBM
Temperature (°C) Exp Values Est Values Errors
30 10 9.8841 -0.1159
40 10.05 10.0044 -0.0456
50 10.05 10.048 -0.002
60 10.1 10.0925 -0.0075
70 10.15 10.1449 -0.0051
80 10.17 10.1681 -0.0019
90 10.20667 10.1987 -0.00797
100 10.24095 10.2489 0.007948
110 10.27524 10.2745 -0.00074
120 10.30952 10.2972 -0.01232
130 10.34381 10.3445 0.00069
140 10.3781 10.377 -0.0011
150 10.41238 10.4003 -0.01208
160 10.44667 10.4539 0.007233
170 10.48095 10.4994 0.018448
180 10.51524 10.519 0.003762
190 10.54952 10.5537 0.004176
200 10.58381 10.5952 0.01139
210 10.6181 10.6145 -0.0036
220 10.65238 10.6444 -0.00798
230 10.68667 10.6888 0.002133
240 10.72095 10.7105 -0.01045
250 10.75524 10.7365 -0.01874
260 10.78952 10.7895 -2.4E-05
270 10.82381 10.8224 -0.00141
280 10.8581 10.8465 -0.0116
290 10.89238 10.8971 0.004719
300 10.92667 10.9337 0.007033
310 10.96095 10.945 -0.01595
320 10.99524 10.9562 -0.03904
Table 14. Errors, Experimental Values, and Estimated Values for Canola OBM
The minute errors encountered in the predictions further justify the claim that the ANN is a
trustworthy prediction tool.
The experimental outputs were then plotted against their corresponding temperature
values and fitted with a second-order polynomial trend line.
The equations derived are [7]:
Diesel OBM: ρ = 4×10⁻⁷T² + 0.004T + 9.915 (1)
Jatropha OBM: ρ = 7×10⁻⁷T² + 0.003T + 9.994 (2)
Canola OBM: ρ = −2×10⁻⁶T² + 0.004T + 9.827 (3)
where ρ is the mud density (ppg) and T the temperature (°C).
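Trend lines of this kind can be reproduced with an ordinary least-squares quadratic fit. The short sketch below applies NumPy's `polyfit` to densities generated from the same functional form (illustrative values, not the measured data) and recovers the coefficients:

```python
import numpy as np

# Temperatures (deg C) and illustrative densities (ppg) generated from a
# known quadratic, rho = a*T**2 + b*T + c, to demonstrate the fitting step.
T = np.arange(30, 330, 10, dtype=float)
rho = 4e-7 * T**2 + 0.004 * T + 9.915      # same form as Eq. (1)

a, b, c = np.polyfit(T, rho, 2)            # order-2 least-squares fit
print(f"rho(T) = {a:.1e}*T^2 + {b:.4f}*T + {c:.3f}")
```

`numpy.polyfit` returns the coefficients from the highest order down, so the result unpacks directly as (a, b, c) for a·T² + b·T + c.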
By comparing the networks created here with that of Osman and Aggour [12] (2003), we can
see that this work is technically viable for predicting mud densities at varying temperatures,
as the network developed in the course of this project showed regression values close to
those reported by Osman and Aggour [12].
The errors, percentage errors and average errors, compared with those of Osman and
Aggour [12], are relatively lower, supporting the accuracy of the newly modeled network.
Table 15 shows the regression values of Osman and Aggour [12] for oil based mud density
variations with temperature and pressure.
Training Testing Validation All
0.99978 0.99962 0.99979 0.9998
Table 15. Regression Values from Osman and Aggour [12]
Temperature (°C) Diesel Jatropha Canola
30 0.49 0 1.159
40 0.40297 0 0.453731
50 0.092429 0.00198 0.0199
60 0.021569 0.014778 0.074257
70 0.231373 1.040394 0.050246
80 0.097561 1.207843 0.018682
90 0.235986 0.692808 0.078054
100 0.013748 0.031076 0.077606
110 0.107859 0.382504 0.007183
120 0.235115 0.428105 0.119538
130 0.080107 0.008473 0.006675
140 0.157991 0.157237 0.010553
150 0.346719 0.020412 0.116025
160 0.132049 0.006975 0.069241
170 0.136087 0.023647 0.176011
180 0.024081 0.019604 0.035776
190 0.080259 0.00896 0.039587
200 0.23682 0.01049 0.107622
210 0.074195 0.03447 0.03386
220 0.008346 0.035012 0.074922
230 0.173317 0.254405 0.019963
240 0.06392 0.521209 0.097495
250 0.057271 0.529223 0.174223
260 0.056307 0.108703 0.000221
270 0.039597 0.001088 0.013022
280 0.193818 0.107419 0.106789
290 0.082846 0.000346 0.043324
300 0.143289 0.000302 0.064369
310 0.442092 0.155111 0.145538
320 0.724421 0.000214 0.355045
Table 16. Relative Deviations (%)
Table 17 compares the Average Absolute Percent Error (AAPE), the maximum relative
deviation (Ei) and the minimum Ei for the Diesel, Jatropha and Canola OBMs, as well as
the values from Osman and Aggour [12].
Diesel Jatropha Canola Osman et al
Minimum Ei 0.008346 0.000214 0.000221 0.102269
Maximum Ei 0.724421 1.207843 1.159 1.221067
AAPE 0.172738 0.193426 0.124949 0.36037
Table 17. Comparison of Maximum Ei, Minimum Ei, and AAPE
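The deviation measures in Tables 16 and 17 follow directly from the experimental and estimated values. A minimal sketch, using the first four rows of Table 12 as sample input:

```python
import numpy as np

# First rows of Table 12 (Diesel OBM): experimental vs ANN-estimated density.
exp_v = np.array([10.0, 10.1, 10.17, 10.2])
est_v = np.array([10.049, 10.1407, 10.1794, 10.2022])

# Relative deviation Ei (%) for each point, as tabulated in Table 16.
Ei = np.abs(est_v - exp_v) / exp_v * 100

# Average Absolute Percent Error over the points considered.
AAPE = Ei.mean()

print(Ei.round(6))   # first entry matches Table 16: 0.49 at T = 30 deg C
print(round(AAPE, 6))
```

The first entry reproduces the 0.49 tabulated for diesel at 30 °C in Table 16; the AAPE of Table 17 is simply the mean of the Ei column over all temperatures.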
5. Conclusion
The lower viscosities of jatropha, moringa and canola oil based muds (OBMs) make them
very attractive prospects for drilling activities.
The results of the tests carried out indicate that jatropha, moringa and canola OBMs have
great chances of being among the technically viable replacements of diesel OBMs. The
results also show that additive chemistry must be employed in the mud formulation, to
make them more technically feasible. In addition, the following conclusions were drawn:
1. From the viscosity test results, it can be inferred that the plastic viscosity of jatropha OBM
can be further reduced by adding an adequate concentration of thinner. This method
can also be used to reduce the gel strengths of jatropha, moringa and canola OBMs.
2. The formulated drilling fluids exhibited Bingham plastic behavior; in the pressure loss
modeling, canola OBM gave the best results, followed by jatropha OBM.
3. From the tests of temperature effects on density: the densities increased, became
constant at some point, and then began increasing again (the temperature points of
constant density varied between samples). The diesel OBM showed the highest
variation range, while the canola OBM showed the lowest.
4. The Artificial Neural Network works well for the prediction of scientific parameters,
given the minimal errors returned.
6. Limitations
1. The temperature-density tests were carried out at surface conditions, in an open
system and at constant pressure, due to the absence of a pressure unit; thus, the
equations developed are not guaranteed for down-hole circulating conditions.
2. During the temperature-density tests, it was observed that some of the mud particles
settled at the base of the containing vessel, which reduced the accuracy of the
readings.
3. The accuracy of the temperature-density readings is also reduced because of the use of
an analogue mud balance (calibrated to the nearest 0.1 ppg).
4. The mud samples were aged for only 24 hours; hence the feasibility of older muds
cannot be guaranteed.
7. Recommendations
1. This work should be extended to investigate the effect of temperature on other
properties of the formulated drilling fluids.
2. The temperature-density tests should also be carried out at varying pressures, to
simulate downhole conditions.
Author details
Adesina Fadairo, Churchill Ako, Abiodun Adeyemi and Anthony Ameloko
Department of Petroleum Engineering, Covenant University, Ota, Nigeria
Olugbenga Falode
Department of Petroleum Engineering, University of Ibadan, Nigeria
Acknowledgement
We wish to thank all members of staff of the Department of Petroleum Engineering,
Covenant University, Nigeria, for their technical support in carrying out this research work,
especially Mr Daramola. We also acknowledge the support of the Environmental Research
Group, Father-Heroes Forte Technology Nigeria, for their commitment.
8. References
[1] Yassin A., Kamis A., Mohamad O.A. (1991) Formulation of an Environmentally Safe Oil
Based Drilling Fluid, SPE 23001. Paper presented at the SPE Asia Pacific Conference held
in Perth, Western Australia, 4-7 November 1991.
[2] Terry Hemphill (1996) Prediction of Rheological Behavior of Ester-Based Drilling Fluids
Under Downhole Conditions, SPE 35330. Paper presented at the 1996 SPE
International Petroleum Conference and Exhibition of Mexico held in Villahermosa,
Tabasco, 5-7 March 1996.
[3] A.M. Ezzat and K.A. Al-Buraik, (1997) Environmentally Acceptable Drilling Fluids for
Offshore Saudi Arabia SPE 37718. Paper presented at the SPE Middle East Oil Show and
Conference held in Bahrain, 15-18 March, 1997.
[4] G. Sánchez; N. León; M. Esclapés; I. Galindo; A. Martínez; J. Bruzual; I. Siegert (1999)
Environmentally Safe Oil-Based Fluids for Drilling Activities, SPE 52739. Paper presented
at the SPE/EPA Exploration and Production Environmental Conference held in Austin,
Texas, 28 February - 3 March 1999.
[5] Xiaoqing H. and Lihui Z. (2009) Research on the Application of Environment Acceptable
Natural Macromolecule Based Drilling Fluids, SPE 123232. Paper presented at the SPE
Asia Pacific Health, Safety, Security and Environment Conference and Exhibition held
in Jakarta, Indonesia, 4-6 August 2009.
[6] Dosunmu A. and Ogunrinde J. (2010) Development of Environmentally Friendly Oil
Based Mud using Palm Oil and Groundnut Oil, SPE 140720. Paper presented at the 34th
Annual International Conference and Exhibition in Tinapa-Calabar, Nigeria, July 31 -
August 7, 2010.
[7] *Fadairo Adesina, Adeyemi Abiodun, Ameloko Anthony, Falode Olugbenga (2012)
Modelling the Effect of Temperature on Environmentally Safe Oil Based Drilling Mud
Using Artificial Neural Network Algorithm, Journal of Petroleum and Coal, 2012,
Volume 54, Issue 1.
[8] *Fadairo Adesina, Ameloko Anthony, Adeyemi Gbadegesin, Ogidigbo Esseoghene,
Airende Oyakhire (2012) Environmental Impact Evaluation of a Safe Drilling Mud,
SPE Middle East Health, Safety, Security, and Environment Conference and Exhibition held in
Abu Dhabi, UAE, 24 April 2012, SPE-152865-PP.
[9] Bourgoyne A.T., Millheim K.K., Chenevert M.E., Applied Drilling Engineering. SPE
Textbook Series, Vol. 2, 1991.
[10] Mitchell B., Advanced Oil Well Drilling Engineering Handbook, Mitchell Engineering,
10th edition, 1995.
[11] Baker Hughes Mud Engineering Handbook.
[12] Osman, E.A. and Aggour, M.A.: Determination of Drilling Mud Density Change with
Pressure and Temperature Made Simple and Accurate by ANN, paper SPE 81422
presented at the 2003 SPE Middle East Oil Show and Conference, Bahrain, 5-8 April.
Section 2
IT and Modeling
Chapter 4
Alternative Computing Platforms
to Implement the Seismic Migration
Sergio Abreo, Carlos Fajardo, William Salamanca and Ana Ramirez
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/51234
1. Introduction
High performance computing (HPC) fails to satisfy the computational requirements of Seismic
Migration (SM) algorithms, due to problems related to computation, data transfer
and data management [4]. Therefore, the execution times of these algorithms on state-of-the-art
clusters still remain on the order of months [1]. Since SM is one of the standard data
processing techniques used in seismic exploration, because it generates an accurate image
of the subsurface, oil companies are particularly interested in reducing the execution times of
SM algorithms.
The migration of the large datasets acquired today is extremely demanding,
and therefore the computation requires CPU clusters. The performance of CPU clusters had
been doubling every 24 months until 2000 (satisfying the SM demands), but since 2001 this
technology stopped accelerating due to three technological limitations known as the Power Wall,
the Memory Wall and the ILP Wall [10, 23]. This encouraged experts all over the world to find new
computing alternatives.
The devices that have been highlighted as a base for the alternative computing platforms are
FPGAs and GPGPUs. These technologies are the subject of major research in HPC since they
offer a better computing perspective [10, 14]. Different implementations of SM algorithms
have been developed using these alternative platforms [4, 12, 17, 21, 22, 29]. Results show
a reduction in the running times of SM algorithms, encouraging the combination of these
alternative platforms with traditional CPU clusters for a promising future in HPC for seismic
exploration.
This chapter gives an overview of HPC from a historical perspective, emphasizing
the technological aspects related to SM. Section two traces the evolution of
seismic migration together with the technology; section three summarizes the
most important aspects of CPU operation needed to understand the three technological
walls; section four presents the use of GPUs as a new HPC platform to implement SM;
section five shows the use of FPGAs as another HPC platform; section six discusses the
advantages and disadvantages of both technologies; and finally the chapter closes with
the acknowledgments.
At the end of this chapter, it is expected that the reader will have enough background to
compare platform specifications and be able to choose the most suitable platform for the
implementation of SM.
2. The technology behind the Seismic Migration¹
One of the first works related to the beginnings of Seismic Migration (SM) dates from 1914:
the construction of sonic reflection equipment to locate icebergs by the Canadian inventor
Reginald Fessenden. The importance of this work is that, together with the patent named
Methods and Apparatus for Locating Ore Bodies that he presented three years later (1917),
it gave the first guidelines on the use of reflection and refraction methods to locate
geological formations.
Later, in 1919, McCollum and J.G. Karchner made another important advance in this field,
receiving a patent for determining the contour of subsurface strata that was inspired by their
work on the detection of artillery blast waves during World War I [6]. But it was only in 1924
that a group led by the German scientist Ludger Mintrop located the first salt dome, the
Orchard dome in Fort Bend County, Texas [15].
The next year (1925), as a consequence of Fessenden's 1917 patent, Geophysical Research
Corporation (GRC) created a department dedicated to the design and construction of new
and improved seismographic instrumentation tools.
2.1. Acceptance period
Over the next three years (1926-1928) GRC tested and adjusted its new tools, but at that
time there was an air of skepticism due to the low reliability of the instruments. The method
was tested and many experiments were performed, but it was only between 1929 and 1932
that the method was finally accepted by the oil companies.
Subsequently, the oil companies began to invest large sums of money to improve it and gain a
technological advantage over their competitors. As the validity of reflection seismics became more
and more acceptable and its usage increased, doubts seemed to reenter the picture. Even though this
newfangled method appeared to work, in many places it produced extremely poor seismic records. These
were the so-called no-record areas [6].
It was only in 1936 that Frank Rieber recognized that the cause of this behavior was
steep folding, faulting, or synclinal and anticlinal responses [38], and he built an analog
device to model the waves of different geological strata. This discovery was really important
because it gave the method the solidity it needed at that moment; but then the bad news
arrived with the beginning of World War II, which stopped all the momentum of this
process.
¹ The main ideas of this section have been taken from the work of J. Bee Bednar in his paper A Brief History of Seismic
Migration.
2.2. World War II (1939-1945)
During World War II all efforts were focused on building war machines. In this period,
important developments were achieved which would be very useful to the future of the SM.
Some of these developments were made by Herman Wold, Norbert Wiener, Claude Shannon
and Norman Levinson at MIT, and established the fundamentals of numerical analysis
and finite differences, areas that would be very important in the future of seismic imaging.
For example, Shannon proposed a theorem for sampling analog signals into
discrete signals, and then developed the mathematical processing of discrete signals, starting
the digital revolution.
On the other hand, between 1940 and 1956 the first generation of computers appeared, which
used vacuum tubes (VT). The VT is an electronic device that can be used as an electrical switch
to implement logical and arithmetic operations. As VTs were large and consumed too much
energy, the first computers were huge and had high maintenance and operation costs. They
used magnetic drums for memory, their language was machine language (the lowest-level
programming language understood by computers), and information was introduced into
them through punched cards² [36].
One of the first computers, called ENIAC, was developed at the University of Pennsylvania in
1941 by John Mauchly and J. Presper Eckert [36]. This computer had the capacity to
perform 5000 additions and 300 multiplications per second (nowadays, a PC such as the Intel
Core i7 980 XE can perform 109 billion floating point operations per second [27]). In the
following years the EDVAC (1949) and the first commercial computer, the UNIVAC (1951),
were developed [36].
One final important event in this period was the general recognition that the subsurface
geological layers weren't completely flat (based on Rieber's work, previously mentioned),
leading the oil companies to develop the mathematical algorithms necessary to correctly locate
the position of the reflections and in this way strengthen the technique.
2.3. Second generation (1956-1963): Transistors
In this generation the VTs were replaced by transistors (invented in 1947). The transistor also
works as an electrical switch, but it is smaller and consumes less energy than the VT (see figure 1).
This brought a great advantage for computers because it made them smaller, faster, cheaper
and more energy-efficient.
Additionally, this generation of computers started to use a symbolic language called assembly
(see figure 2), and the first versions of high-level languages like COBOL
and FORTRAN were developed. This generation kept using the same input method (punched cards) but
changed the storage technology from magnetic drums to magnetic cores.
On the other hand, the SM in this period received a great contribution with J.G. Hagedoorn's
work, A Process of Seismic Reflection Interpretation [18]. In this work Hagedoorn
introduced a ruler-and-compass method for finding reflections as an envelope of equal
traveltime curves, based on Christiaan Huygens' principle.
Another important aspect of this period was that computational technology began to be used
in seismic data processing, such as the implementation made by the Texas Instruments company
in 1959 on the TIAC computer. The next year (1960), Alan Trorey of the Chevron company
developed one of the first computerized methods based on Kirchhoff, and in 1962 Harry
Mayne obtained a patent on the CMP stack [30]; these two major contributions would later
facilitate the full computational implementation of the SM.
² A punched card is an input method that introduces information through the identification of holes.
Figure 1. In order from left to right: three vacuum tubes, one transistor, two integrated circuits and one
integrated microprocessor.
Figure 2. Assembly language.
2.4. Third generation (1964-1971): Integrated circuits
In this generation the transistors were miniaturized and packaged in integrated circuits (IC)
(see figure 1). Initially an IC could contain fewer than 100 transistors, but nowadays they can
contain billions. With this change of technology, the new computers could
be faster, smaller, cheaper and more energy-efficient than the second
generation [36].
Additionally, the way the user introduced information to the computer also changed:
punched cards were replaced by the monitor, the keyboard and interfaces
based on operating systems (OS). The OS concept appeared for the first time, allowing this
new generation of computers to execute more than one task at the same time.
On the other hand, the SM also made significant progress in this period. In 1967, Sherwood
completed the development of Continuous Automatic Migration (CAM) on an IBM accounting
machine in San Francisco. The digital age may have been in its infancy, but there was no question that
it was now running full blast [6]. In 1970 and 1971, Jon Claerbout published two papers
focused on the use of second-order hyperbolic partial differential equations to perform the
imaging. Claerbout's work was largely centered on the use of finite differences, taking
advantage of the numerical analysis created during World War II, and allowed all these
developments to be applied on the computers of that time.
2.5. Fourth generation (1971- Present): Microprocessors
The microprocessor is an IC that works as a data processing unit, providing control of the
calculations. It can be seen as the computer's brain [36].
This generation is notable because every 24 months the number of transistors inside
an IC doubled, according to Moore's Law, which predicted this growth rate in 1975
[32]. This allowed computers with high processing capacity to develop
very fast (see table 1). In 1981 IBM produced the first home computer, in
1984 Apple introduced the Macintosh, and in 1985 the first Internet domain
(symbolics.com) was created. Over the following years computers kept reducing their size
and cost and, taking advantage of the Internet, spread into many areas.
Microprocessor Year Number of transistors Frequency Bits
Intel 4004 1971 2,300 700 KHz 4
Intel 8008 1972 3,300 800 KHz 8
Intel 8080 1974 4,500 2MHz 8
Intel 8086 1978 29,000 4MHz 16
Intel 80386 1980 275,000 40MHz 32
Intel Pentium 4 2000 42,000,000 3.8GHz 32
Intel Core i7 2008 731,000,000 3GHz 64
Table 1. Summary of the microprocessors evolution.
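The growth rate in Table 1 can be checked with a quick calculation: taking only its first and last rows, the implied doubling period comes out close to the 24 months cited above (a back-of-the-envelope estimate, not an exact law):

```python
import math

# First and last rows of Table 1.
t0_year, t0_count = 1971, 2_300          # Intel 4004
t1_year, t1_count = 2008, 731_000_000    # Intel Core i7

# Number of doublings needed to go from t0_count to t1_count transistors.
doublings = math.log2(t1_count / t0_count)

# Months elapsed per doubling over the 37-year span.
months_per_doubling = (t1_year - t0_year) * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

The result, roughly one doubling every 24 months, agrees with the rate Moore predicted in 1975.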
Moore's Law remained in force approximately until 2000; power dissipation problems
produced by the number of transistors inside a single chip, an effect known as the Power
Wall (PW), have been stagnating processing capacity ever since.
From that moment, new ideas began to surface to avoid this problem. Fabricating
processors with more than one data processing unit (see figure 3), units known as
cores, seemed to be a solution, but new problems arose: increases in cost,
power consumption and design complexity of the new processors. This is the reason that
even today processors fabricated with this idea have only 2, 4, 6, 8, or at best 16 cores [5].
Figure 4 shows the Moore's Law trend until 2000 and, after that date, the emergence of
multi-core processors.
2.6. HPC's birth
A new challenge for computing researchers was the implementation of computationally
expensive algorithms. For this purpose it was necessary to design new strategies for
optimal implementation on the most powerful computers. That group of implementation
strategies on supercomputers has become known as High Performance Computing (HPC).
HPC was born at Control Data Corporation (CDC) in 1960 with Seymour Cray, who in 1964
launched the first supercomputer, the CDC 6600 [24]. Later, in 1972, Cray left CDC
Figure 3. Intel multi-core processor die. Taken from [8]
Figure 4. CPU, GPU and FPGA Transistor Counts (1970-2015, logarithmic scale), marking
representative devices: Intel 4004, Motorola 6800, Pentium, Pentium 4 and Intel i7 (CPUs);
NV3 and NV15 (GPUs); Virtex, Virtex E, Virtex 4 and Virtex 7 (FPGAs).
to create his own company; four years later, in 1976, he developed the Cray-1, which worked
at 80 MHz and became the most famous supercomputer in history.
After that, HPC kept working with clusters of computers using the best processors
of each year. Each cluster was composed of a front-end station and nodes interconnected
through Ethernet (see figure 5). HPC evolution thus focused on increasing the number of
nodes per cluster, improving the interconnection network, and developing new software
tools to take advantage of the cluster. This strategy began to be used elsewhere in the world,
in machines like Fujitsu's Numerical Wind Tunnel, the Hitachi SR2201 and the Intel Paragon,
among others, giving very good results.
Figure 5. General diagram of a cluster: a front-end station connected through the Internet and
a router to the compute nodes, which communicate over wired or wireless Ethernet.
2.7. SM consolidation
On the other hand, the SM also received a great contribution from Jon Claerbout, who in
1973 formed the Stanford Exploration Project (SEP). The SEP, together with the GAC group
at MIT, built the bases for many consortia that would be formed later, such as the
Center for Wave Phenomena at the Colorado School of Mines. In 1978 R. H. Stolt presented a
different approach to SM, called migration by Fourier transform [41].
Compared with Claerbout's approach, Stolt migration is faster and can handle formation dips
up to 90 degrees but is limited to constant velocities, whereas Claerbout migration is
insensitive to velocity variations but can only handle formation dips up to 15 degrees.
In 1983, three papers about a new two-way solution to migration were published at the same
time; these works were by Whitmore; McMechan; and Baysal, Sherwood and
Kosloff. Additionally, in 1987 Bleistein published a great paper on Kirchhoff migration,
using a vastly improved approach to seismic amplitudes and phases. From that moment, the
SM could be done in space-time (x, t), frequency-space (f, x), wavenumber-time (k, t), or almost
any combination of these domains.
Once the mathematical principles of the SM were developed, the following efforts focused on
proposing efficient algorithms. Table 2 summarizes some of these contributions [6].
But perhaps one of the most important software developments between 1989 and 2004 for
seismic data processing was performed by Jack Cohen and John Stockwell at the Center for
Wave Phenomena (CWP) at the Colorado School of Mines. This tool was called Seismic Unix
(SU) and has been one of the most used and downloaded worldwide.
Moreover, the combination of computer clusters with efficient algorithms quickly began to bear
fruit, allowing pre-stack depth migrations in reasonable times. An important aspect
to note is that almost all the algorithms we use today were developed during that time and
have been implemented as the technology has allowed.
Year Author Contribution
1971 W. A. Schneider Tied diffraction stacking and Kirchhoff migration together.
1974 and 1975 Gardner et al. Clarified this further.
1978 Schneider Integral formulation of migration.
1978 Gazdag Phase shift method.
1984 Gazdag and Sguazzero Phase shift plus interpolation.
1988 Forel and Gardner A completely velocity-independent
migration technique.
1990 Stoffa The split step method.
Table 2. Development of efficient algorithms.
Over the years, seismic imaging placed increasing demands on the technology, as ever
more accurate subsurface images were needed. This led to the use of more
complex mathematical attributes. Additionally, it became necessary to process the large
volumes of data resulting from 3D and 4D seismic surveys and the use of multicomponent
geophones. All this, combined with the technological limitations of computers since 2000,
made HPC developers begin to seek new technological alternatives.
These alternatives should provide a solution to the three barriers currently faced by computers
(the Power Wall, the Memory Wall and the ILP Wall; these three barriers will be explained in detail below).
For that reason attention was focused on two emerging technologies that promised to
address the three barriers better than computers do [14]. These technologies were GPUs
and FPGAs.
The following sections explain how these two technologies have been facing the three
barriers, in order to map out the future of HPC in the area of seismic data processing.
3. Understanding the CPUs
This section first gives the reader an idea of the internal operation
of a general-purpose processing unit (CPU), assuming no previous knowledge.
Subsequently, we analyze the contribution made by the alternative
computing platforms to high performance computing.
Let's imagine a completely automated automobile assembly line, like the one shown in figure
6, which produces type A automobiles. The car parts are found in the warehouse
and taken onto the line, passing through the following stages: engine assembly, bodywork
assembly, engine drop, painting and quality tests. Finally we get the vehicle, which is carried to
the parking zone. Similarly, in seismic data processing, a computing system can access
the data stored in memory, adapt them, calculate the velocity model, and finally migrate them to
produce a subsurface image that is stored back in memory.
Now let's suppose that we want to modify the assembly line to make it produce type
B automobiles. It is necessary either to add some stages to the assembly line or to
modify the configuration of the existing modules. A CPU operates in like manner: it can
execute different kinds of algorithms, just as the assembly line can now produce different
kinds of automobiles. A CPU includes different kinds of modules that allow it to execute the
several instructions which make up the desired algorithm.
Figure 6. CPU analogy
Just as the CPU receives and delivers data, the assembly line receives parts and delivers
vehicles. But to assemble a particular type of car, the line has to be prepared. In this way, the
line can select the right parts from the warehouse, transport them and handle them suitably. In a
CPU, the program is responsible for indicating how to get each piece of data and how to manipulate
it to obtain the final result.
A program is composed of elementary processes called instructions. In the assembly line,
an instruction matches simple actions like bringing a part from the warehouse, adjusting a screw,
activating the painting machine, etc. In a CPU, an instruction executes an elementary operation
on the data, such as fetching data from memory or carrying out a sum or a multiplication.
3.1. CPU operation
In a CPU, the instructions are stored in memory and are executed sequentially, one by
one, as shown in Figure 6. To execute one instruction, the CPU must fetch it from memory,
decode (interpret) it, read the data required by the instruction, manipulate the data and finally
return the results to memory. This architecture is known as the Von Neumann architecture. Modern
CPUs have evolved from the original concept, but the operating principle is still the same.
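The fetch-decode-execute cycle described above can be sketched as a toy simulator. The three-instruction "ISA" below is invented purely for illustration; a real CPU decodes binary encodings, but the loop structure is the same:

```python
# Minimal Von Neumann machine: program and data share one memory space,
# and instructions are fetched, decoded and executed one by one.
def run(memory, program):
    pc = 0  # program counter
    while pc < len(program):
        op, a, b, dst = program[pc]          # fetch
        if op == "LOAD":                     # decode + execute
            memory[dst] = memory[a]
        elif op == "ADD":
            memory[dst] = memory[a] + memory[b]
        elif op == "MUL":
            memory[dst] = memory[a] * memory[b]
        pc += 1                              # advance to the next instruction
    return memory

# Compute memory[2] = memory[0] + memory[1], then square it into memory[3].
mem = run({0: 3, 1: 4}, [("ADD", 0, 1, 2), ("MUL", 2, 2, 3)])
print(mem[3])  # 49
```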
91 Alternative Computing Platforms to Implement the Seismic Migration
10 Will-be-set-by-IN-TECH
Nowadays almost any electronic device that we use is based on CPUs: our PCs, cell
phones, video games, electrical appliances, automobiles, digital clocks, etc.
As described above, the CPU is in charge of processing the data stored in memory as the program
indicates. In our assembly line we can identify several blocks that handle, transport and fit the
vehicle parts. In a CPU, this set of parts is known as the datapath. The main function of
the datapath is to temporarily store, transform and route the data on its track from the entrance
to the exit.
In the same way, in the assembly line we can find the main controller, which is in charge of
monitoring the whole process and harmoniously activating the units at each stage to execute the
requested assembly process. In a CPU we find the control unit, in charge of routing the data
in order through the functional units to execute each one of the instructions that the program
indicates, and thus perform the algorithm.
3.2. CPU performance
The performance of an assembly line can be measured as the time required to produce a
certain number of vehicles. In the same way, CPU performance is measured by how long
it takes to process some data. In the first place, the performance of the assembly line can be
limited by the speed of its machines, just as in the CPU the integrated circuit speed is
proportional to its performance. The second aspect that affects performance is how fast the
parts can be placed at the beginning of the assembly line. In the same way, the data transfer
rate between CPU and memory can limit its performance. CPU performance is related
to the execution time of an algorithm; for that reason we are going to analyze some aspects
that have slowed down CPU performance.
Let us analyze the assembly line operation to make the best use of its machines and increase
its performance. We can observe that the units at each stage can work simultaneously
on different vehicles; it is not necessary to finish one vehicle to start the next one (see
Figure 6). In the same way, the CPU can process several data items at the same time, provided that
it has been designed using the technique called pipelining, [19]. This technique segments the
execution process and allows the CPU to handle several data items in parallel. It is one of the
digital techniques developed to improve CPU speed, although developments
in this area have stalled. This phenomenon is one of the greatest performance improvement
constraints and is called the Instruction Level Parallelism (ILP) wall [23].
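The gain from pipelining can be sketched with a simple cycle count for a hypothetical 5-stage unit, assuming one cycle per stage and no stalls (real pipelines stall on hazards, so this is an upper bound):

```python
# Cycle counts for processing n items on a hypothetical 5-stage unit.
def sequential_cycles(n, stages=5):
    # without pipelining, an item must leave the last stage
    # before the next item enters the first one
    return n * stages

def pipelined_cycles(n, stages=5):
    # with pipelining, once the pipe is full one item completes per cycle
    return stages + (n - 1)

print(sequential_cycles(100), pipelined_cycles(100))  # 500 104
```

For long runs the pipelined throughput approaches one item per cycle, a speedup close to the number of stages.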
The ILP wall can be associated with a technique that tries to reorganize the assembly line machines
in order to improve performance. Additionally, there have been other ways
to improve performance; one of them is using new, more efficient machines that can
assemble the parts faster. These new machines may also be smaller, occupying less
space and therefore allowing more machines in the same area, increasing productivity. In
the CPU context, it is the improvement of integrated circuit manufacturing that
has allowed the deployment of smaller transistors. This supports the implementation of denser
systems (i.e. a greater number of transistors per chip) that are also faster (i.e. that can perform
more instructions per unit time).
The growth in the number of transistors in integrated circuits has faithfully obeyed
Moore's Law, which allows ever smaller transistors on a chip. As the transistors were becoming
smaller, their capacitances decreased, allowing shorter charge and discharge times. All
this allows higher working frequencies, and there is a direct relationship between the working
frequency of an integrated circuit and its power dissipation (given by equation 1, taken from [9]).
P = α C f Vdd²    (1)

where α represents the transistor density, f is the working frequency of the chip, Vdd is
the power supply voltage and C is the total capacitance. Nowadays the power dissipation
has grown a lot and has become an unmanageable problem with conventional cooling
techniques. This limitation on the growth of processors is known as the power wall, and it
has removed one of the major growth forces of CPUs.
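As a worked example of equation 1, the sketch below evaluates the dynamic power for illustrative values (not data for any real chip), showing that doubling the frequency doubles the dissipated power:

```python
# Dynamic power P = alpha * C * f * Vdd^2 (equation 1).
# All numeric values are illustrative assumptions.
def dynamic_power(alpha, C, f, vdd):
    return alpha * C * f * vdd ** 2

p1 = dynamic_power(alpha=0.1, C=1e-9, f=2e9, vdd=1.2)  # 2 GHz at 1.2 V
p2 = dynamic_power(alpha=0.1, C=1e-9, f=4e9, vdd=1.2)  # same chip at 4 GHz
print(p1, p2)  # the second value is exactly twice the first
```

Note the quadratic dependence on Vdd: lowering the supply voltage is the most effective lever, which is why frequency scaling stalled once voltages could no longer be reduced.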
3.3. Memory performance
Returning to our assembly line: we saw it with a lot of high-speed machines, but now its
performance could be limited by the second constraint on system performance. The speed
at which parts arrive at the assembly line may not be enough to keep all these machines
busy. If we do not have all the parts ready, we will be forced to stop until they become available
again. In the same way, faster CPUs require more data ready to be processed. The limitation
in today's communication channels to supply the data demanded by processing
units is called the memory wall.
This wall was initially addressed by improving the access paths between the warehouse and the
assembly line, which in a CPU means improving the transmission lines between memory and
CPU on the printed circuit board.
The second way the memory wall has been faced is more complex, but it takes into account the
whole process from the moment the manufacturer delivers the parts. Analyzing the features of the
warehouse parts, we can find different kinds. The part manufacturers have large deposits with many
parts ready to be used, but using a part from these deposits is expensive in time. On the other hand,
the warehouse next to the assembly line stores a smaller number of parts of different
kinds, because we must remember that the assembly line can produce different types of cars.
The advantage of this warehouse is that the parts reach the points of the production line
where they are needed faster. Some deposits are larger, but transportation from them to the line is
slower; in turn, the nearest deposit has a lower storage capacity. The line can improve its
performance by taking advantage of these deposit features.
Something similar occurs with the memory of a computing system. High-capacity memories, such as hard
disks, are read by blocks, and accessing one single data item requires transporting a large volume
of information, which is stored locally in RAM, the nearer deposit. Then
the single data item can be read and taken to the CPU. The memory organization in a modern
computing system is arranged hierarchically, as Figure 6 shows. Going even further in this organization,
the assembly line provides small storage spaces next to the machines that require specific
parts. In a CPU this is called the CPU cache.
The memory hierarchy creates local copies, on fast memories, of the data that a given algorithm
will handle, to speed up access. The challenge of such systems is to always have
the required data available in the fast memory, because otherwise the CPU must stop while
the data is fetched from main memory.
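The effect of the hierarchy can be sketched with a toy model: a small fast store in front of a slow main memory, with made-up access costs. Real caches use lines, sets and smarter replacement policies, so this shows only the principle:

```python
# Toy two-level memory: a small fast cache in front of slow main memory.
CACHE_COST, MAIN_COST = 1, 100   # illustrative access times, in cycles

def access_all(addresses, cache_size=4):
    cache, cycles = [], 0
    for a in addresses:
        if a in cache:
            cycles += CACHE_COST          # hit: served from the fast local copy
        else:
            cycles += MAIN_COST           # miss: must go to main memory
            cache.append(a)
            if len(cache) > cache_size:
                cache.pop(0)              # evict the oldest entry (FIFO)
    return cycles

# A loop that reuses four addresses pays the main-memory cost only once each:
print(access_all([0, 1, 2, 3] * 10))   # 4 misses + 36 hits = 436 cycles
```

Without the cache the same 40 accesses would cost 4000 cycles, which is why keeping the working set in fast memory matters so much.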
Currently, the three computing barriers are present and they are the main cause of the stagnation
into which CPU-based technology has fallen. Some solutions based on alternative
technologies, such as Graphics Processing Units (GPUs) and Field Programmable
Gate Arrays (FPGAs), have been proposed to mitigate these phenomena, [4, 22]. They seem to be
short- and mid-term solutions, respectively.
4. GPUs: A computing short term alternative
GPUs are a product of the gaming industry, which was born in the 1950s and started
a continuously growing market in the 1980s. Video games require the execution of computationally
intensive algorithms to display images but, unlike other applications such as seismic migration,
the interaction with the user requires execution in a very short time. Therefore, since the
1990s, specialized processors have been developed for this purpose. These processors are
called GPUs, and they have been widely commercialized in video game consoles and PC video
acceleration cards, [37].
GPUs are application-specific processors that reach high performance on the task they
were designed for, in the same way that the assembly line would outperform itself if it were
dedicated to assembling a small range of vehicle types. This is achieved because the unnecessary
machines can be eliminated and the freed area optimized; likewise, the availability,
storage and handling of the parts improve (see Figure 7).
Figure 7. GPU analogy
The task of a GPU is an iterative process that generates an image pixel by pixel (point by
point) from some data and input parameters. This allows processing in parallel each output
pixel, and therefore the GPU architecture is based on a set of processors that perform the same
algorithm for different output pixels. This is very similar to the sum of contributions made
in Kirchhoff migration, so this architecture is a good option for trying to speed up a
migration process.
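This per-pixel independence can be sketched in Python: each output sample is a pure function of shared, read-only inputs, so a map over pixel indices could be distributed across the GPU's array of processors. The trace data and the weighting function are made up for the example:

```python
# Each output pixel is an independent sum of input contributions, so all
# pixels can be computed in parallel. Here map() stands in for the GPU's
# array of processors; the data and weighting are invented for illustration.
traces = [1.0, 2.0, 3.0]          # shared, read-only input data

def compute_pixel(px):
    # every "processor" runs the same code on a different pixel index
    return sum(t / (1 + abs(px - i)) for i, t in enumerate(traces))

image = list(map(compute_pixel, range(4)))   # one task per output pixel
print(image[0], image[1])  # 3.0 4.0
```

Because no pixel reads another pixel's result, the loop order is irrelevant, which is exactly the property that makes the workload GPU-friendly.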
The memory organization is another relevant feature of the GPU architecture. It allows all
the processors to access information common to all the pixels being calculated, while each
processor can also access information particular to its own pixel.
Like our assembly line, the GPU task can be segmented in several stages, elementary
operations or specific graphics processing instructions. Initially GPUs could only be used
in graphical applications, and although their performance on these tasks was far superior to
a CPU's, their computing power could only be used for this purpose. For this reason the
GPU manufacturers made a first contribution to make them capable of executing any
algorithm: they made their devices more flexible, and they became General Purpose GPUs
(GPGPUs).
This made it possible to exploit the computing power of these devices in any algorithm, but it
was not an easy task for programmers. Writing an application required a deep understanding
of the GPGPU's instructions and architecture, so programmers were not very captivated.
Therefore, the second contribution, which definitely facilitated the use of these devices, was the
development in 2006 of programming models that do not require advanced knowledge of
GPUs, such as the Compute Unified Device Architecture (CUDA).
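To make the CUDA programming model concrete, the Python sketch below emulates its execution style: the same kernel body runs once per (block, thread) pair and selects its data element from those indices. This is only an illustration of the model; the kernel, sizes and data are invented for the example:

```python
# CUDA-style execution model emulated in Python: the same kernel body runs
# for every (block, thread) pair, and each instance picks its data element
# from its indices. All sizes are arbitrary illustrative values.
def saxpy_kernel(block_idx, thread_idx, block_dim, a, x, y, out):
    i = block_idx * block_dim + thread_idx   # global element index
    if i < len(x):                           # guard for out-of-range threads
        out[i] = a * x[i] + y[i]

n, block_dim = 10, 4
x = list(range(n)); y = [1.0] * n; out = [0.0] * n
grid_dim = (n + block_dim - 1) // block_dim  # enough blocks to cover n
for b in range(grid_dim):                    # the GPU runs these in parallel
    for t in range(block_dim):
        saxpy_kernel(b, t, block_dim, 2.0, x, y, out)
print(out[3])  # 2*3 + 1 = 7.0
```

In real CUDA the two loops disappear: the hardware launches all (block, thread) instances concurrently, and the kernel reads its indices from built-in variables.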
GPGPUs are definitely an evolved form of the CPU. Their memory management is highly
efficient, which has reduced the memory wall; and because of the number of processing cores
on a single chip, they have been forced to reduce their working speed and to operate at the limit
of the power wall. Their growth rate seems to remain stable, and in the mid-term they promise
to be the technology that drives high performance computing. But the three barriers are
still present, and this technology will soon have to face them.
5. FPGAs: Designing custom processors
Another alternative for accelerating the SM process is the FPGA (Field Programmable Gate
Array). These devices are widely used in many complex algorithm acceleration problems;
for this reason, in the last couple of years some traditional manufacturers of high performance
computing equipment began to include in their catalogs computer systems with
FPGAs.
To get an initial idea of this technology, let us imagine that the car assembly is going to be
amended as follows:
The assembly plant will produce only one type of vehicle at a time; the purpose is to
redesign the entire assembly plant in order to concentrate efforts and increase production.
After selecting the vehicle that will be produced, the functional units required to assemble
the vehicle are designed.
After designing all the functional units, the assembly line is designed. Both the assembly
line and the control unit will be a little simpler, because now they have fewer functional
units (remember that they now only have the functional units required to assemble a single
vehicle type).
In order to increase production, the assembly line will be replicated as many times as
possible; the number of replicas will be determined taking into account two aspects:
first, the speed with which the inputs can be placed at the beginning of the
assembly line, and second, the space available within the plant.
5.1. FPGA architecture
An FPGA consists mainly of three types of components (see Figure 8): Configurable Logic
Blocks (or Logic Array Blocks, depending on the vendor), reconfigurable interconnects and I/O
blocks.
Figure 8. FPGA components: a grid of Configurable Logic Blocks (CLBs) with input/output (E/S) blocks around the perimeter, linked by programmable interconnects.
In the analogy of the car assembly line, the Configurable Logic Blocks (CLBs) are the
raw material for the construction of each one of the functional units required to assemble the
car. In an FPGA the CLBs are used to build all the functional units required by a specific
algorithm; these units can range from simple logical functions (and, or, not) to complex
mathematical operations (logarithms, trigonometric functions, etc.). Therefore, a first step in
the implementation of an algorithm on an FPGA is the construction of each one of the
functional units required by the algorithm to be implemented (see Figure 9). The design of
these functional units is carried out by means of Hardware Description Languages (HDLs)
like VHDL or Verilog.
Designing a functional unit using HDLs is not simply writing code free of syntax errors;
the process corresponds more to the design of a digital electronic circuit [13], which requires a
good grounding in digital hardware. It must be remembered that this process consumes
more time than an implementation on a CPU or a GPU.
Figure 9. FPGA analogy.
Currently, FPGA providers (among which Xilinx [42] and Altera [2] stand out) offer
predesigned modules that facilitate the design. In this way, commonly used blocks (such as floating
point units) do not have to be designed from scratch for each new application; these modules are
offered as an HDL description (softcore modules) or physically implemented inside the FPGA
(hardcore modules).
The reconfigurable interconnects allow the CLBs and the functional units to be connected to
each other. In the analogy of the car assembly line, the reconfigurable interconnects can be
viewed as the conveyor belt on which the parts necessary for the assembly of the cars circulate.
Finally, the I/O blocks connect the pins of the device with the internal logic of
the FPGA. In the analogy of the car assembly line, the I/O blocks are the devices that place
the parts at the beginning of the conveyor belt and also the devices that bring the
cars out of the factory. The input-output blocks are responsible for controlling the flow of data
between (external) physical devices and the functional units in the interior of the FPGA. The
different companies design these blocks to support different digital communication
standards.
5.2. Algorithms that can be accelerated in an FPGA
Not all applications benefit in the same way from an FPGA, and due to the difficulty
of implementing an application on these devices, it is advisable to make some prior estimates
in order to determine the possibilities of speeding up a specific algorithm.
Applications that require processing large amounts of data with little or no data dependency³
are ideal for implementation on FPGAs. In addition, these applications must be limited
by computation and not by data transfer; that is to say, the number of mathematical
operations must be greater than the number of read and write operations, [20]. In this regard, the
SM has in its favor that it requires processing large amounts of data (on the order of tens of terabytes)
and performing billions of mathematical operations. However, migration also
has a large number of read and write instructions, which cause significant delays in the
computational process.
Furthermore, the accuracy and the data format are other influential aspects in the
performance of applications on FPGAs. With lower data accuracy (fewer bits to
represent the data), the performance of the application inside the FPGA increases; regarding
the data format, FPGAs achieve better performance with fixed-point numbers than with floating-point
numbers. The SM has traditionally been implemented using single-precision floating-point
numbers, [1, 21]. [14, 16] show that it is possible to implement some parts of the
SM using fixed point instead of floating point and produce images with very similar patterns
(visually identical), reducing computation times.
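The fixed-point idea can be illustrated with a small sketch: a real value is scaled by 2^frac_bits and stored as an integer, so arithmetic becomes cheap integer arithmetic, and the quantization error shrinks as more fractional bits are used. This shows the number-representation trade-off only; it is not the specific format used in [14, 16]:

```python
# Quantize a value to fixed point with a given number of fractional bits.
# Fewer bits mean cheaper FPGA arithmetic at the cost of precision.
def to_fixed(x, frac_bits):
    return round(x * (1 << frac_bits))       # store as a plain integer

def from_fixed(q, frac_bits):
    return q / (1 << frac_bits)              # recover the real value

v = 0.123456789
for bits in (8, 16, 24):
    err = abs(v - from_fixed(to_fixed(v, bits), bits))
    print(bits, err)                          # error shrinks as bits grow
```

The rounding error is bounded by 2^-(frac_bits+1), so choosing the bit width is a direct trade between image fidelity and hardware cost.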
FPGAs are at a disadvantage in terms of operating frequency when compared
with a CPU (10 times lower) or a GPGPU (5 times lower). For this reason, in order to accelerate
an application inside an FPGA, it must perform at least 10 times more computational
work per clock cycle than is performed in a CPU, [20]. In this regard, it is important that the
algorithm has a great potential for parallelization. In this aspect, it helps the SM that
the traces can be processed independently, since the shots are made at different times, [4],
which facilitates the parallelization of the process.
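The 10x rule can be turned into a quick sanity check. The clock rates and the CPU's operations per cycle below are illustrative assumptions, not measurements:

```python
# If an FPGA runs at roughly 1/10 the CPU clock, matching CPU throughput
# requires proportionally more work per cycle. Figures are illustrative.
cpu_freq, fpga_freq = 3.0e9, 0.3e9            # Hz (assumed 10x gap)
cpu_ops_per_cycle = 4                          # e.g. a 4-wide SIMD unit

# Operations per second must at least match:
required_parallel_ops = cpu_freq * cpu_ops_per_cycle / fpga_freq
print(required_parallel_ops)   # 40 parallel operations per FPGA cycle
```

The FPGA meets this target by instantiating many functional units that all fire on every clock cycle, which is only possible when the algorithm parallelizes well.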
5.3. Implementation of the SM on FPGAs
Since 2004, processes related to the construction of subsurface images have been implemented
on FPGAs. These implementations have made great contributions to the problems
of processing speed and memory.
5.3.1. Breaking the power wall
As mentioned above, the operating frequency of traditional computing devices has been
stalled for several years due to problems related to power dissipation. On the other hand,
to mitigate the problems associated with processing speed inside the FPGAs, modules have been
implemented that optimize the execution of expensive mathematical operations⁴.
For example, in [21] functional units were designed using the CORDIC method, [3], to perform
the square root, and in [17] Lee approximations, [28], are used to evaluate trigonometric functions.
Other research has addressed this problem from the perspective of number representation
[17]; the purpose is to change the single-precision (32-bit) floating-point format to
fixed-point format (this is not possible on either CPUs or GPUs) or to a floating-point format of
less than 32 bits, so that the mathematical operations can be carried out in less time.
³ Data dependency is a situation in which the instructions of a program require results of previous instructions that have not yet been completed.
⁴ Expensive mathematical operations are those that take more clock cycles to complete and consume more hardware resources; the expensive operations used in seismic processing include square roots, logarithms and trigonometric functions, among others.
All these investigations have reported significant reductions in the processing times of the SM
sections when compared with state-of-the-art CPUs.
5.3.2. Breaking the memory wall
Computing systems based on FPGAs have both on-board memory⁵ and on-chip memory⁶;
these two types of memory are faster than external memories. Some research has produced
implementations that reduce the delays caused by read and write operations
[22]; in these works, special memory architectures have been designed that
optimize the use of the different types of memory available to FPGAs. The intent is that the majority
of read and write operations be performed at the speed of the on-chip memory, because
it is the fastest; however, the challenge is to place all the data required by each instruction in
on-chip memory, because it is the one with the smallest capacity.
6. Wishlist
In spite of the great potential that both FPGAs and GPGPUs have to reduce the
computation times of seismic processes, their performance is currently being held back, because
the rate at which seismic data are transmitted from main memory to the computing
device (FPGA or GPU) is not enough to keep its computing potential 100% busy. The
communication between FPGAs or GPUs and the exterior can be carried out using a
large number of output interfaces; currently one of the most used is the Peripheral Component
Interconnect Express (PCIe) port, which transfers data between the computing device (FPGA or
GPU) and a CPU at speeds of 2 GB/s, [14], and this transfer rate cannot provide all the
data necessary to keep the computing device busy.
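The scale of the bottleneck can be checked with a quick calculation at the 2 GB/s PCIe rate cited from [14]; the dataset size below is an assumed round figure, not a measurement:

```python
# Time to move a seismic dataset over a 2 GB/s PCIe link.
data_bytes = 10e12                 # assume a 10 TB trace dataset
pcie_rate = 2e9                    # 2 GB/s transfer rate cited from [14]
transfer_s = data_bytes / pcie_rate
print(transfer_s, transfer_s / 3600)   # 5000 seconds, about 1.4 hours
```

If the accelerator can consume data faster than this, it simply idles while waiting on the link, so the effective speedup is bandwidth-bound rather than compute-bound.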
Currently, at the Universidad Industrial de Santander (Colombia), a PhD project is being
carried out that seeks to increase the transfer rate of seismic data between the main memory
and the FPGA using data compression.
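The idea behind that project can be sketched with a generic lossless compressor from the Python standard library: if the data compress by a factor r, the link behaves as if it were r times faster, ignoring (de)compression time. The synthetic data below are far more regular than real seismic traces, so the ratio is optimistic:

```python
import zlib

# If data compress by ratio r, the effective link bandwidth grows by ~r
# (ignoring compression/decompression time). The synthetic byte pattern
# below is made up and compresses much better than real traces would.
raw = bytes(int(127 + 100 * ((i % 50) / 50)) for i in range(100_000))
packed = zlib.compress(raw, level=6)
ratio = len(raw) / len(packed)
effective_rate = 2e9 * ratio       # a 2 GB/s link scaled by the ratio
print(ratio > 1.0)  # True
```

Whether this pays off in practice depends on achieving a compression ratio, on real traces, that outweighs the cost of decompressing inside the FPGA.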
In addition, there are currently two approaches to reducing the design time on FPGAs
(the main problem of this technology). The first strategy is to use C-to-HDL compilers [11, 25,
35, 39, 43], which generate a description in an HDL from C code, without having to perform
the design manually. The second strategy is to use high-level languages, the so-called
C-like languages such as Mitrion-C [31] or SystemC [26]; despite the need for them, these languages have not
yet achieved wide acceptance in the market, because their use still compromises the efficiency
of the designs. Research on C-to-HDL compilation and high-level languages is
active [29, 33, 34, 40, 44] and some results have been positive [7], but the possibility of accessing
the full potential of FPGAs from a high-level language is still in development.
On the other hand, it can be seen that the technological evolution of GPUs is similar
to the beginning of the PCs' evolution, when their progress was subject to Moore's law.
It is therefore expected that in the coming years new GPGPU families will continue to appear,
increasingly raiding many areas of HPC, but eventually the time will
come when this technology stagnates because of the same three barriers that stopped the
⁵ This is the memory available on the board that contains the FPGA.
⁶ These are blocks of memory inside the FPGA.
computers. When that time comes, it is expected that FPGAs will be technologically
mature and can take the HPC baton for the following years.
In the end, we believe that the future of HPC will be built on heterogeneous clusters
composed of these three technologies (CPUs, GPUs and FPGAs). These clusters will have a
special operating system (OS) that will take advantage of each of these technologies, reducing
the effect of the three barriers and getting the best performance in each application.
Acknowledgments
The authors wish to thank the Universidad Industrial de Santander (UIS), in particular the
research group in Connectivity and Processing of Signals (CPS), headed by Professor Jorge H.
Ramon S., who has unconditionally supported our research work. We also
thank the Instituto Colombiano del Petroleo (ICP) for their constant support in recent
years, especially Messrs. William Mauricio Agudelo Zambrano and Andres Eduardo
Calle Ochoa.
Author details
Sergio Abreo, Carlos Fajardo, William Salamanca and Ana Ramirez
Universidad Industrial de Santander, Colombia
7. References
[1] Abreo, S. & Ramirez, A. [2010]. Viabilidad de acelerar la migración sísmica 2D usando
un procesador específico implementado sobre un FPGA [The feasibility of speeding up
2D seismic migration using a specific processor on an FPGA], Ingeniería e investigación
30(1): 64-70.
[2] Altera [n.d.]. http://www.altera.com/. Reviewed: April 2012.
[3] Andraka, R. [1998]. A survey of CORDIC algorithms for FPGA based computers, Proceedings
of the 1998 ACM/SIGDA sixth international symposium on Field programmable gate arrays,
FPGA '98, ACM, New York, NY, USA, pp. 191-200.
URL: http://doi.acm.org/10.1145/275107.275139
[4] Araya-Polo, M., Cabezas, J., Hanzich, M., Pericas, M., Gelado, I., Shafiq, M., Morancho,
E., Navarro, N., Valero, M. & Ayguade, E. [2011]. Assessing Accelerator-Based HPC
Reverse Time Migration, Electronic Design 22(1): 147-162.
[5] Asanovic, K., Bodik, R., Catanzaro, B. C., Gebis, J. J., Husbands, P., Keutzer, K., Patterson,
D. A., Plishker, W. L., Shalf, J., Williams, S. W. & Yelick, K. A. [2006]. The landscape of
parallel computing research: A view from Berkeley, Technical Report UCB/EECS-2006-183,
EECS Department, University of California, Berkeley.
URL: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html
[6] Bednar, J. B. [2005]. A brief history of seismic migration, Geophysics 70(3): 3MJ-20MJ.
[7] Bier, J. & Eyre, J. [2010]. BDTI Study Certifies High-Level Synthesis
Flows for DSP-Centric FPGA Designs, Xcell Journal, Second Quarter, pp. 12-17.
[8] Bit-tech staff. [n.d.]. Intel Sandy Bridge review, http://www.bit-tech.net/.
Reviewed: April 2012.
[9] Brodtkorb, A. R., Dyken, C., Hagen, T. R. & Hjelmervik, J. M. [2010]. State-of-the-art in
heterogeneous computing, Scientific Programming 18: 1-33.
[10] Brodtkorb, A. R. [2010]. Scientific Computing on Heterogeneous Architectures, PhD thesis,
University of Oslo.
[11] C to Verilog: automating circuit design [n.d.]. http://www.c-to-verilog.com/.
Reviewed: June 2011.
[12] Cabezas, J., Araya-Polo, M., Gelado, I., Navarro, N., Morancho, E. & Cela, J. M. [2009].
High-Performance Reverse Time Migration on GPU, 2009 International Conference of the
Chilean Computer Science Society pp. 77-86.
[13] Chu, P. [2006]. RTL Hardware Design Using VHDL: Coding for Efficiency, Portability, and
Scalability, Wiley-IEEE Press.
[14] Clapp, R. G., Fu, H. & Lindtjorn, O. [2010]. Selecting the right hardware for reverse time
migration, The Leading Edge 29(1): 48.
URL: http://link.aip.org/link/LEEDFF/v29/i1/p48/s1&Agg=doi
[15] Cleveland, C. [2006]. Mintrop, Ludger, Encyclopedia of Earth.
[16] Fu, H., Osborne, W., Clapp, R. G., Mencer, O. & Luk, W. [2009]. Accelerating seismic
computations using customized number representations on FPGAs, EURASIP J. Embedded
Syst. 2009: 3:1-3:13.
URL: http://dx.doi.org/10.1155/2009/382983
[17] Fu, H., Osborne, W., Clapp, R. G. & Pell, O. [2008]. Accelerating Seismic Computations
on FPGAs From the Perspective of Number Representations, 70th EAGE Conference &
Exhibition (June 2008): 9-12.
[18] Hagedoorn, J. [1954]. A process of seismic reflection interpretation, E.J. Brill.
URL: http://books.google.es/books?id=U6FWAAAAMAAJ
[19] Harris, D. M. & Harris, S. L. [2009]. Digital Design and Computer Architecture, MK.
[20] Hauck, S. & DeHon, A. [2008]. Reconfigurable computing: The theory and practice of
FPGA-based computing, Elsevier - Morgan Kaufmann.
[21] He, C., Lu, M. & Sun, C. [2004]. Accelerating Seismic Migration Using FPGA-Based
Coprocessor Platform, 12th Annual IEEE Symposium on Field-Programmable Custom
Computing Machines pp. 207-216.
[22] He, C., Zhao, W. & Lu, M. [2005]. Time domain numerical simulation for transient waves
on reconfigurable coprocessor platform, Proceedings of the 13th Annual IEEE Symposium
on Field-Programmable Custom Computing Machines, IEEE Computer Society, pp. 127-136.
[23] Hennessy, J. L. & Patterson, D. A. [2006]. Computer Architecture: A Quantitative Approach,
4th edn, Morgan Kaufmann.
[24] IEEE Computer Society [n.d.]. Tribute to Seymour Cray, http://www.computer.org/portal/web/awards/seymourbio. Reviewed: April 2012.
[25] Impulse Accelerate Technologies [n.d.]. Impulse CoDeveloper C-to-FPGA tools, http://www.jacquardcomputing.com/roccc/. Reviewed: August 2011.
[26] Open SystemC Initiative [n.d.]. SystemC language, http://www.systemc.org/home/.
Reviewed: April 2012.
[27] Intel [2011]. Intel Core i7-3960X Processor Extreme Edition, http://en.wikipedia.org/wiki/FLOPS. Reviewed: April 2012.
[28] Lee, D.-U., Abdul Gaffar, A., Mencer, O. & Luk, W. [2005]. Optimizing hardware function
evaluation, IEEE Trans. Comput. 54: 1520-1531.
URL: http://dl.acm.org/citation.cfm?id=1098521.1098595
[29] Lindtjorn, O., Clapp, R. G., Pell, O. & Flynn, M. J. [2011]. Beyond traditional
microprocessors for geoscience high-performance computing applications, IEEE Micro pp. 41-49.
[30] Mayne, W. H. [1962]. Common reflection point horizontal data stacking techniques,
Geophysics 27(06): 927-938.
[31] Mitrionics [n.d.]. Mitrion-C language, http://www.mitrionics.com/. Reviewed:
April 2012.
[32] Moore, G. E. [1975]. Progress in digital integrated electronics, Electron Devices Meeting,
1975 International, Vol. 21, pp. 11-13.
[33] Necsulescu, P. I. [2011]. Automatic Generation of Hardware for Custom Instructions, PhD
thesis, Ottawa, Canada.
[34] Necsulescu, P. I. & Groza, V. [2011]. Automatic Generation of VHDL Hardware Code
from Data Flow Graphs, 6th IEEE International Symposium on Applied Computational
Intelligence and Informatics pp. 523-528.
[35] Nios II C-to-Hardware Acceleration Compiler [n.d.]. http://www.altera.com/.
Reviewed: June 2011.
[36] Onifade, A. [2004]. History of the computer, Conference of History of Electronics.
[37] Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E. & Phillips, J. C. [2008]. GPU
computing, Proceedings of the IEEE 96(5): 879-899.
[38] Rieber, F. [1936]. Visual presentation of elastic wave patterns under various structural
conditions, Geophysics 01(02): 196-218.
[39] Riverside Optimizing Compiler for Configurable Computing: ROCCC 2.0 [n.d.]. http://www.jacquardcomputing.com/roccc/. Reviewed: June 2011.
[40] Sánchez Fernández, R. [2010]. Compilación C a VHDL de códigos de bucles con reuso de datos
[C-to-VHDL compilation of loop codes with data reuse], Thesis, Universidad Politécnica de Cataluña.
[41] Stolt, R. H. [1978]. Migration by Fourier transform, Geophysics 43: 23-48.
[42] Xilinx [n.d.a]. http://www.xilinx.com/. Reviewed: April 2012.
[43] Xilinx [n.d.b]. High-Level Synthesis: AutoESL, http://www.xilinx.com/university/tools/autoesl/. Reviewed: April 2012.
[44] Yankova, Y., Bertels, K., Vassiliadis, S., Meeuws, R. & Virginia, A. [2007]. Automated
HDL Generation: Comparative Evaluation, 2007 IEEE International Symposium on Circuits
and Systems pp. 2750-2753.
102 New Technologies in the Oil and Gas Industry
Chapter 5
The Slug Flow Problem in Oil Industry and PI Level Control
Airam Sausen, Paulo Sausen and Mauricio de Campos
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/50711
1. Introduction
Slug flow is a multiphase flow pattern that occurs in the pipelines connecting wells on the seabed to production platforms at the surface in the oil industry. It is characterized by irregular flows and surges resulting from the accumulation of gas and liquid in any cross-section of a pipeline. This work addresses riser slugging, which, combined with or initiated by terrain slugging, is the most serious case of instability in oil/water-dominated systems [5, 15, 21].
The cyclic behavior of riser slugging, illustrated in Figure 1, can be divided into four phases: (i) Formation: gravity causes the liquid to accumulate at the low point of the pipeline-riser system, and the gas and liquid velocities are low enough to allow this accumulation; (ii) Production: the liquid blocks the gas flow and a continuous liquid slug is produced in the riser; as long as the hydrostatic head of the liquid in the riser increases faster than the pressure drop over the pipeline-riser system, the slug will continue to grow; (iii) Blowout: when the pressure drop over the riser overcomes the hydrostatic head of the liquid in the riser, the slug is pushed out of the system; (iv) Liquid fallback: after the majority of the liquid and gas has left the riser, the gas velocity is no longer high enough to drag the liquid upwards; the liquid starts flowing back down the riser and the accumulation of liquid starts over again [15].
Slug flow causes undesired consequences throughout oil production, such as: periods without liquid or gas production into the separator followed by very high liquid and gas rates when the liquid slug is being produced; emergency shutdown of the platform due to high liquid level in the separators; floods, corrosion and damage to process equipment; and high maintenance costs. One or all of these problems cause significant losses in the oil industry, the main one being economic, due to the reduction in oil production capacity [6, 8, 16-20].
Currently, control strategies are considered a promising solution to handle slug flow [4, 5, 7, 10, 15, 20]. One alternative for the implementation of control strategies is to use a mathematical model that represents the dynamics of slug flow in the pipeline-separator system.
Figure 1. Illustration of a slug cycle.
This chapter uses a dynamic model for a pipeline-separator system under slug flow, with 5 (five) coupled nonlinear Ordinary Differential Equations (ODEs), 6 (six) tuning parameters and more than 40 (forty) internal, geometric and transport equations [10, 13], denominated Sausen's model.
To carry out the simulation and implementation of control strategies in Sausen's model, it is first necessary to calculate its tuning parameters. For this procedure, data are used from a case study performed by [18] in OLGA, a commercial multiphase simulator widely used in the oil industry. Next, it is important to check how the main variables of the model change their behavior in response to a change in the model's tuning parameters. This testing, called sensitivity analysis, is an important tool for building mathematical models; moreover, it provides a better understanding of the dynamic behavior of the system for the later implementation of control strategies.
In this context, the sensitivity analysis shows that Sausen's model is an appropriate environment for applying different feedback control strategies, through simulations, to the slug problem in the oil industry. The model allows such strategies to be applied to the consequences of the slug, that is, at the oil or gas output valve of the separator, as well as to its causes, at the top riser valve, or in the integrated system, in other words, at more than one valve simultaneously.
Therefore, as part of the control strategies that can be used to avoid or minimize slug flow, this chapter presents the application of the error-squared Proportional Integral (PI) level control strategy in the methodology by bands [10], whose purpose is to damp the oscillatory load flow rates that occur in production separators. This strategy is compared with the conventional PI level control strategy [1], widely used in industrial processes, and with the PI level control strategy also in the methodology by bands.
The remainder of this chapter is organized as follows. Section 2 presents the equations of Sausen's model for a pipeline-separator system. Section 3 shows the simulation results of Sausen's model. Section 4 presents the control strategies used to avoid or minimize slug flow. Section 5 shows the simulation results and analysis of the control strategies applied to Sausen's model. Finally, Section 6 discusses the conclusions and future research directions.
2. The dynamic model
2.1. Introduction
This section presents a mathematical model for the pipeline-separator system illustrated in Figure 2, with biphasic (gas-liquid) flow. The model is the result of coupling the simplified dynamic model of Storkaas [15, 17, 20] with the model for a biphasic horizontal cylindrical separator [22]. The new model has been called Sausen's model.
The following subsections present the modelling assumptions, the model equations, and how the distribution of liquid and gas from the pipeline to the separator occurs. Finally, simulation results are presented for this model considering two settings: (i) opening of the Z valve at the top of the riser z = 12% (steady flow); and (ii) opening of the Z valve at the top of the riser z = 50% (slug flow).
Figure 2. Illustration of the pipeline-separator system with the slug formation.
2.2. Model assumptions
The assumptions of Sausen's model are presented as follows.
A1: Liquid dynamics in the upstream feed section of the pipeline are neglected, that is, the liquid velocity in this section is constant.
A2: It follows from assumption A1 that the gas volume is constant in the upstream feed section of the pipeline and that the volume variations due to the liquid level h_1(t) at the low point are neglected.
A3: There is only one dynamical state M_L(t) for the liquid holdup in the riser section. This state includes both the liquid in the riser and at the low-point section (with level h_1(t)).
A4: There are two dynamical states for the gas holdup (M_G1(t) and M_G2(t)), occupying the volumes V_G1 and V_G2(t), respectively. The gas volumes are related to each other by a pressure-flow relationship at the low point.
105 The Slug Flow Problem in Oil Industry and Pi Level Control
4 Will-be-set-by-IN-TECH
A5: A simplified valve equation describes the gas and liquid mixture leaving the system at the top of the riser.
A6: There is a stationary pressure balance over the riser (between the pressures P_1(t) and P_2(t)).
A7: There is no chemical reaction between the fluids (gas and liquid) in the pipeline.
A8: Each fluid consists of a single component in the separator.
A9: The portion of liquid mixed with the gas at the entrance of the separator is neglected.
A10: A simplified valve equation describes the gas and the liquid leaving the separator.
A11: The liquid is incompressible.
A12: The temperature is constant.
A13: The gas behaves ideally.
2.3. Model equations
Sausen's model is composed of 5 (five) ODEs based on mass conservation. Equations (1)-(3) represent the dynamics of the pipeline system and equations (4)-(5) represent the dynamics of the separator:
dM_L(t)/dt = m_L,in − m_L,out(t)    (1)

dM_G1(t)/dt = m_G,in − m_Gint(t)    (2)

dM_G2(t)/dt = m_Gint(t) − m_G,out(t)    (3)

dN(t)/dt = [m_L,out(t) − m_LS,out(t)] / (2 ρ_L H_4 √(r_s² − (r_s − N(t))²))    (4)

dP_G1(t)/dt = {α ρ_L [m_G,out(t) − m_GS,out(t)] + P_G1(t) [m_L,out(t) − m_LS,out(t)]} / {ρ_L [V_S − V_LS(t)]}    (5)
where: M_L(t) is the liquid mass at the low point in the pipeline (kg); M_G1(t) is the gas mass in the upstream feed section of the pipeline (kg); M_G2(t) is the gas mass at the top of the riser (kg); N(t) is the liquid level in the separator (m); P_G1(t) is the gas pressure in the separator (N/m²); dM_L(t)/dt, dM_G1(t)/dt, dM_G2(t)/dt, dN(t)/dt and dP_G1(t)/dt are their respective time derivatives; m_L,in is the liquid mass flowrate entering the upstream feed section of the pipeline (kg/s); m_G,in is the gas mass flowrate entering the upstream feed section of the pipeline (kg/s); m_L,out(t) is the liquid mass flowrate that leaves through the valve at the top of the riser and enters the separator (kg/s); m_G,out(t) is the gas mass flowrate that leaves through the valve at the top of the riser and enters the separator (kg/s); m_Gint(t) is the internal gas mass flowrate (kg/s); m_LS,out(t) is the liquid mass flowrate that leaves the separator through the valve Va_1 (kg/s); m_GS,out(t) is the gas mass flowrate that leaves the separator through the valve Va_2 (kg/s); r_s is the separator radius (m); H_4 is the separator length (m); ρ_L is the liquid density (kg/m³); V_S is the separator volume (m³); V_LS(t) is the liquid volume in the separator (m³); α = RT/M_G is a constant; R is the ideal gas constant (8314 J/(K·kmol)); T is the temperature (K); M_G is the gas molecular weight (kg/kmol).
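The five balance equations above can be collected into a single state-derivative function. The sketch below is a minimal illustration under stated assumptions, not the full Sausen model: the riser-side flowrates m_L,out, m_G,out and m_Gint are passed in as known numbers, whereas in the real model they come from the valve and entrainment equations of the following subsections; the expression used for the liquid volume V_LS of a partially filled horizontal cylinder is the standard circular-segment formula, which the chapter does not spell out. All names are mine.

```python
import math

def sausen_rhs(state, flows, rho_L, H4, r_s, V_S, alpha):
    """Right-hand side of the five mass-balance ODEs (1)-(5).

    state: (M_L, M_G1, M_G2, N, P_G1) -- masses [kg], level [m], pressure [N/m^2]
    flows: dict of instantaneous mass flowrates [kg/s]
    alpha: R*T/M_G, the ideal-gas constant of the model
    """
    M_L, M_G1, M_G2, N, P_G1 = state
    f = flows
    # Liquid volume in a horizontal cylinder (circular-segment formula; my assumption)
    V_LS = H4 * (r_s**2 * math.acos((r_s - N) / r_s)
                 - (r_s - N) * math.sqrt(2.0 * r_s * N - N**2))
    # Free-surface area 2*H4*sqrt(r_s^2 - (r_s - N)^2) drives the level dynamics (4)
    surface = 2.0 * H4 * math.sqrt(r_s**2 - (r_s - N)**2)
    dM_L = f["L_in"] - f["L_out"]                          # eq (1)
    dM_G1 = f["G_in"] - f["G_int"]                         # eq (2)
    dM_G2 = f["G_int"] - f["G_out"]                        # eq (3)
    dN = (f["L_out"] - f["LS_out"]) / (rho_L * surface)    # eq (4)
    dP_G1 = (alpha * rho_L * (f["G_out"] - f["GS_out"])
             + P_G1 * (f["L_out"] - f["LS_out"])) / (rho_L * (V_S - V_LS))  # eq (5)
    return (dM_L, dM_G1, dM_G2, dN, dP_G1)
```

In a balanced steady state (every inlet flow equal to the matching outlet flow), all five derivatives are zero, which is a quick sanity check on the implementation.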
The stationary pressure balance over the riser is assumed to be given by

P_1(t) − P_2(t) = g ρ̄(t) H_2 − ρ_L g h_1(t)
where: P_1(t) is the gas pressure in the upstream feed section of the pipeline (N/m²); P_2(t) is the gas pressure at the top of the riser (N/m²); g is the gravitational acceleration (9.81 m/s²); ρ̄(t) is the average mixture density in the riser (kg/m³); H_2 is the riser height (m); h_1(t) is the liquid level at the decline (m).
A simplified valve equation is used to describe the flow through the Z valve at the top of the riser, given by

m_mix,out(t) = z K_1 √(ρ_T(t) (P_2(t) − P_G1(t)))    (6)
where: z is the valve position (0-100%); K_1 is the valve constant and a tuning parameter; ρ_T(t) is the density upstream of the valve (kg/m³); P_G1(t) is the gas pressure in the separator (N/m²). It is possible to observe that the coupling between the pipeline and the separator occurs through a pressure relationship; in other words, the gas pressure in the separator P_G1(t) is the pressure downstream of the Z valve at the top of the riser, according to equation (6).
Considering the result shown in equation (6), it is also possible to obtain the liquid mass flowrate, given by

m_L,out(t) = α_L^m(t) m_mix,out(t)

and the gas mass flowrate, given by

m_G,out(t) = [1 − α_L^m(t)] m_mix,out(t)

that leave through the Z valve at the top of the riser, where α_L^m(t) is the liquid fraction upstream of the valve.
The liquid mass flowrate that leaves the separator is represented by the Va_1 valve equation, given by

m_LS,out(t) = z_L K_4 √(ρ_L [P_G1(t) + ρ_L g N(t) − P_OL2])    (7)

where: z_L is the liquid valve opening (0-100%); K_4 is the valve constant and a tuning parameter; P_OL2 is the downstream pressure after the Va_1 valve (N/m²).
The gas mass flowrate that leaves the separator is represented by the Va_2 valve equation, given by

m_GS,out(t) = z_G K_5 √(ρ_G(t) [P_G1(t) − P_G2])    (8)

where: z_G is the gas valve position (0-100%); K_5 is the valve constant and a tuning parameter; ρ_G(t) is the gas density (kg/m³); P_G2 is the downstream pressure after the Va_2 valve (N/m²).
The boundary conditions at the inlet (the inflows m_L,in and m_G,in) can either be constant or pressure-dependent. In this work they are constant and have been considered disturbances of the process. The most critical part of the model is the phase distribution and the phase velocities of the fluids in the pipeline-riser system. The gas velocity is based on an assumption of purely frictional pressure drop over the low point, and the liquid distribution is based on an entrainment model. Finally, the internal, geometric and transport equations for the pipeline system are found in [15, 17, 20].
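The valve relations (6), (7) and (8), and the split of the mixture into its liquid and gas parts, all share the same square-root form m = z·K·√(ρ·ΔP). They can be sketched as two small helpers (function names are mine, not from the chapter; clipping a non-positive pressure drop to zero flow is my own assumption, since the model text does not treat back-flow):

```python
import math

def valve_flow(z, K, rho, dP):
    """Simplified valve equation m = z * K * sqrt(rho * dP), eqs (6)-(8).

    z: valve opening (0..1); K: valve constant (tuning parameter);
    rho: upstream density [kg/m^3]; dP: pressure drop [N/m^2]."""
    if dP <= 0.0:                      # assumption: no back-flow in this sketch
        return 0.0
    return z * K * math.sqrt(rho * dP)

def split_mixture(m_mix, alpha_L):
    """Split the mixture flow of eq (6) into liquid and gas parts,
    alpha_L being the liquid fraction upstream of the Z valve."""
    return alpha_L * m_mix, (1.0 - alpha_L) * m_mix
```

For example, with the Table 1 pressures P_2 = 53.5e5 and P_G1 = 50e5, the driving pressure drop over the Z valve is 3.5e5 N/m².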
2.4. Displacement of the gas flow
The displacement of gas in the pipeline system occurs through a relationship between the gas mass flow and the pressure variation inside the pipeline. The acceleration of the gas phase has been neglected, so it is the pressure difference that makes the fluids flow up the pipeline. The equation is given by

ΔP(t) = P_1(t) − [P_2(t) + g ρ_L ᾱ_L(t) H_2]

where ᾱ_L(t) is the average liquid fraction in the riser.
Two situations are considered in the riser: (i) h_1(t) > H_1, in which case the liquid is blocking the low point and the internal gas mass flowrate m_Gint(t) is zero; (ii) h_1(t) < H_1, in which case the liquid is not blocking the low point, so the gas flows from V_G1 to V_G2(t) with an internal gas mass flowrate m_Gint(t) ≠ 0, where V_G1 is the gas volume in the upstream feed section of the pipeline (m³) and V_G2(t) is the gas volume at the top of the riser (m³).
From physical insight, the two most important parameters determining the gas flowrate are the pressure drop over the low point and the free area given by the relative liquid level

φ(t) = (H_1 − h_1(t)) / H_1

at the low point. This suggests that the gas transport could be described by a valve equation, where the pressure drop drives the gas through a valve with opening φ(t). Based on trial and error, the following valve equation has been proposed
m_Gint(t) = K_2 f(h_1(t)) √(ρ_G1(t) [P_1(t) − P_2(t) − g ρ_L ᾱ_L(t) H_2])    (9)

where: K_2 is the valve constant and a tuning parameter; f(h_1(t)) = A(t) φ(t), where A(t) is the cross-section area at the low point (m²); h_1(t) is the liquid level upstream at the decline (m); H_1 is the critical liquid level (m); ρ_G1(t) is the gas density in volume 1 (kg/m³). The internal gas mass flowrate from the volume V_G1 to the volume V_G2(t) can also be written as

m_Gint(t) = ρ_G1(t) v_G1(t) A(t)    (10)

where v_G1(t) is the gas velocity at the low point (m/s). Therefore, substituting equation (10) into equation (9), the gas velocity is found to be

v_G1(t) = K_2 φ(t) √([P_1(t) − P_2(t) − g ρ_L ᾱ_L(t) H_2] / ρ_G1(t)),  for h_1(t) < H_1,
v_G1(t) = 0,  for h_1(t) ≥ H_1.    (11)
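The switching between a blocked and an open low point in equation (11) is a guarded formula. A minimal sketch (the clipping of a non-positive driving pressure to zero flow is my own numerical guard, not part of the chapter):

```python
import math

def gas_velocity(h1, H1, K2, P1, P2, rho_L, alpha_L_bar, H2, rho_G1, g=9.81):
    """Gas velocity at the low point, eq (11).

    Returns 0 when the liquid level h1 blocks the low point (h1 >= H1)."""
    if h1 >= H1:
        return 0.0
    phi = (H1 - h1) / H1                          # relative free area at the low point
    dP = P1 - P2 - g * rho_L * alpha_L_bar * H2   # driving pressure drop
    if dP <= 0.0:                                 # guard: no driving pressure, no flow
        return 0.0
    return K2 * phi * math.sqrt(dP / rho_G1)
```

With the Table 1 pressures and a partially filled low point, the function returns a positive velocity; once h1 reaches the critical level H_1 it drops to zero, reproducing the blocked case.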
2.5. Entrainment equation
The distribution of liquid occurs through an entrainment equation. It is considered that the gas pushes the liquid up the riser; the volume fraction of liquid α_LT(t) that is leaving through the Z valve at the top of the riser is then modelled.
The volume fraction of liquid lies between two extremes: (i) when the liquid blocks the gas flow (v_G1 = 0), there is no gas flowing through the riser and α_LT(t) = α*_LT(t); in most cases there will be only gas leaving the riser, so α*_LT(t) = 0; however, eventually the entering liquid may cause the liquid to fill up the riser and α*_LT(t) will exceed zero; (ii) when the gas velocity is very high there is no slip between the phases, so α_LT(t) = ᾱ_L(t), where ᾱ_L(t) is the average liquid fraction in the riser.
The transition between these two extremes should be smooth and occurs as follows: when the liquid blocks the low point of the riser, the liquid fraction at the top is α_LT(t) = 0, and the amount of liquid in the riser keeps increasing until α_LT(t) > 0. At this moment the gas pressure and the gas velocity in the upstream feed section of the pipeline are very high, and entrainment occurs. This transition depends on a parameter q(t). The entrainment equation is given by

α_LT(t) = α*_LT(t) + [qⁿ(t) / (1 + qⁿ(t))] (ᾱ_L(t) − α*_LT(t))    (12)
where

q(t) = K_3 ρ_G1(t) v_G1²(t) / (ρ_L − ρ_G1(t))

and K_3 and n are tuning parameters of the model. The details of the modelling of equation (12) are found in Storkaas [15].
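Equation (12) interpolates smoothly between the two extremes through the weight qⁿ/(1+qⁿ). A minimal sketch (function and argument names are mine):

```python
def entrainment_fraction(alpha_star, alpha_bar, v_G1, rho_G1, rho_L, K3, n):
    """Liquid volume fraction leaving the top of the riser, eq (12).

    alpha_star: fraction in the no-gas-flow extreme;
    alpha_bar: no-slip average liquid fraction in the riser."""
    q = K3 * rho_G1 * v_G1**2 / (rho_L - rho_G1)
    w = q**n / (1.0 + q**n)          # smooth weight in [0, 1)
    return alpha_star + w * (alpha_bar - alpha_star)
```

At v_G1 = 0 the weight vanishes and the function returns α*_LT; for large v_G1 it approaches ᾱ_L, reproducing the two extremes described above.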
3. Simulation and analysis results of Sausen's model
This section presents the simulation results of Sausen's model for a pipeline-separator system. Initially the tuning parameters are calculated: K_1 in the Z valve equation (6), K_2 in the gas velocity equation (11), K_3 and n in the entrainment equation (12), K_4 in the Va_1 liquid valve equation (7), and K_5 in the Va_2 gas valve equation (8). The calculation of these tuning parameters depends on data available from a real system or an experimental loop, but a complete set of data is not found in the literature and is not provided by oil companies. Therefore, to calculate the tuning parameters of the dynamic model, the case study data of Storkaas [15], obtained with the commercial multiphase simulator OLGA [2], which accurately represents the pipeline system under slug flow [15], are used together with the data of a separator dimensioned from a tank in the literature [10]. In this case study the transition from steady flow to slug flow occurs at the valve opening z = 13% (i.e., z_crit = 13%). Table 1 presents the data for the simulation of the dynamic model and Table 2 presents the values of its tuning parameters.
The simulation results are now presented considering the Z valve opening z = 12%. Figure 3 shows the varying pressures P_1(t) in the upstream feed section and P_2(t) at the top of the riser. Figure 4 shows the dynamics of the liquid mass flowrate (up-left) and of the gas mass flowrate (down-left) entering the separator, and the dynamics of the liquid mass flowrate (up-right) and of the gas mass flowrate (down-right) leaving the separator. Figure 5 shows the dynamics of the liquid level (left) and of the gas pressure (right) in the separator. It is possible to observe in all these simulation results that the varying pressures induce oscillations, but because the valve position is less than z_crit, these oscillations eventually die out, characterizing steady flow in the pipeline-separator system.
Symbol/Value | Description | SI unit
m_L,in = 8.64 | Liquid mass flowrate into the system | kg/s
m_G,in = 0.362 | Gas mass flowrate into the system | kg/s
P_1(t) = 71.7 x 10^5 | Gas pressure in the upstream feed section of the pipeline | N/m²
P_2(t) = 53.5 x 10^5 | Gas pressure at the top of the riser | N/m²
r = 0.06 | Pipeline radius | m
H_2 = 300 | Height of riser | m
L_1 = 4300 | Length of horizontal pipeline | m
L_3 = 100 | Length of horizontal top section | m
H_4 = 4.5 | Length of separator | m
D_s = 1.5 | Diameter of separator | m
N_t = 0.75 | Liquid level | m
P_G1 = 50 x 10^5 | Pressure after the Z valve at the top of the riser | N/m²
P_OL2 = 49 x 10^5 | Pressure after the Va_1 liquid valve of the separator | N/m²
P_G2 = 49 x 10^5 | Pressure after the Va_2 gas valve of the separator | N/m²
Table 1. Data for the simulation of the dynamic model.
K_1 = 2.55 | K_2 = 0.005 | K_3 = 0.8619 | n = 1.2039 | K_4 = 0.002 | K_5 = 0.0003
Table 2. Model tuning parameters.
Figure 3. Varying pressures P_1 and P_2 [bar] in the pipeline system with z = 12% (steady flow).
The simulation results are now presented considering the Z valve opening z = 50%. Figure 6 shows the varying pressures throughout the pipeline system. Figure 7 presents the dynamics of the liquid mass flowrate (up-left) and of the gas mass flowrate (down-left) entering the separator, with peak mass flowrates of 14 kg/s for the liquid and 2 kg/s for the gas, and the dynamics of the liquid mass flowrate (up-right) and of the gas mass flowrate (down-right) leaving the separator. Figure 8 shows the dynamics of the liquid level (left) and of the gas pressure (right) in the separator. Finally, it is observed that in all these simulation results the varying pressures induce periodic oscillations, characterizing the slug flow that occurs in the pipeline-separator system. It is also shown that the slugs occur at intervals of 12 minutes.
Figure 4. Input liquid (up-left) and input gas (down-left) mass flowrates into the separator and output liquid (up-right) and output gas (down-right) mass flowrates from the separator with z = 12% (steady flow).
Figure 5. Liquid level (left) and gas pressure (right) in the separator with z = 12% (steady flow).
Figure 6. Varying pressures in the pipeline system with z = 50% (slug flow).
Figure 7. Input liquid (up-left) and gas (down-left) mass flowrates into the separator and output liquid (up-right) and gas (down-right) mass flowrates from the separator with z = 50% (slug flow).
Figure 8. Liquid level (left) and gas pressure (right) in the separator with z = 50% (slug flow).
4. Control strategies
A PI controller actuating on the oil output valve is the traditional method used in the oil industry to control the liquid level in production separators. If the controller is tuned to maintain a constant liquid level, the inflow variations will be transmitted to the separator output, causing instability in the downstream equipment. An ideal liquid level controller lets the level vary within a permitted range (i.e., a band) in order to make the outlet flow smoother; this response specification cannot be achieved by a conventional PI controller under a slug flow regime. Nunes [9] defined a so-called level control methodology by bands, which allows level oscillations within certain limits, i.e., the level can vary between the maximum and the minimum of a band, as in Figure 9, so that the output flowrate stays close to the average value of the input flowrate. This strategy does not use flow measurements and can be applied to any production separator.
In band control, when the level is within the band, the moving average of the control action of a slow PI controller is used, because reducing the controller's performance capacity allows a greater fluctuation of the liquid level within the separator. The moving average is calculated over a time interval, which should be greater than the period T of the slug flow. When the band limits are exceeded, the moving-average control action of the slow PI controller is switched to a fast-acting PI controller for some time, whose objective is to return the liquid level to within the band; once this is achieved, the control action again becomes the moving average. To avoid abrupt changes in the control action when switching between the modes of operation within and outside the band, it is suggested to use the average between the actions of the fast-acting PI controller and the moving average.

Figure 9. Band control diagram of Nunes.
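The switching logic just described can be sketched as a small controller class. This is my own minimal reading of the methodology by bands, not Nunes' implementation: a slow PI output is smoothed by a moving average while the level stays inside the band, and outside the band the sketch returns the average of the fast PI action and the moving average, the transition-smoothing variant suggested in the text.

```python
from collections import deque

class BandLevelControl:
    """Sketch of PI level control by bands: moving average of a slow PI
    inside the band, smoothed switch to a fast PI outside the band."""

    def __init__(self, low, high, window):
        self.low, self.high = low, high
        # window (in samples) should span more than the slug period T
        self.history = deque(maxlen=window)

    def step(self, level, u_slow, u_fast):
        """u_slow / u_fast: actions of the slow and fast PI controllers."""
        self.history.append(u_slow)
        avg = sum(self.history) / len(self.history)
        if self.low <= level <= self.high:
            return avg                    # inside the band: smoothed slow action
        return 0.5 * (avg + u_fast)       # outside: blend with the fast action
```

With the Section 5 band limits 0.55 m and 0.95 m, the controller returns the smoothed slow action for a level of 0.75 m and blends in the fast action when the level crosses a limit.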
Therefore, this chapter applies the PI level control strategy to Sausen's model considering 3 (three) methodologies: (1) the conventional PI level control strategy, in which the level shall remain fixed at the setpoint; (2) the PI level control strategy in the methodology by bands; (3) the error-squared PI level control strategy in the methodology by bands.
The error-squared controller [14] is a continuous nonlinear function whose gain increases with the error. Its gain is computed as

k_c(t) = k_1 + k_2NL |e(t)|

where k_1 is the linear part, k_2NL is the nonlinear part and e(t) is the tracking error. If k_2NL = 0 the controller is linear, but with k_2NL > 0 the function follows a squared law.
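The gain law is a one-liner; with the fast-band tunings used later in this chapter (k_1 = 0.15, k_2NL = 0.03), the effective gain grows linearly with the magnitude of the error:

```python
def error_squared_gain(k1, k2nl, error):
    """Error-squared controller gain k_c = k1 + k2nl * |e|, cf. [14]."""
    return k1 + k2nl * abs(error)
```

With k2nl = 0 this reduces to a plain linear PI gain, as noted above.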
In the literature the error-squared controller is suggested for liquid level control in production separators under load inflow variations. From the application of the error-squared controller to liquid level control in vessels, it is observed that small deviations from the setpoint result in very little change to the valve, leaving the output flow almost unchanged. On the other hand, large deviations are opposed by a much stronger control action, due to the larger error and the error-squared law, thereby preventing the level from rising too high in the vessel. The error-squared controller has the benefit of a steadier downstream flow rate under normal operation, with an improved response when compared to the conventional level control strategy [12].
For the implementation of the controllers, the PI control algorithm [1] in velocity form is used, whose equation is given by

Δu(t) = k_c Δe(t) + (k_c T_a / T_i) e(t)    (13)

where Δu(t) is the variation of the control action; k_c is the controller gain; Δe(t) is the variation of the tracking error; T_i is the integral time; T_a is the sampling period of the controller; and e(t) is the tracking error. It is considered that the valve dynamics are fast, i.e., the time for the valve opening to reach the value of the control action is short, which implies that the valve opening is the control action itself.
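Equation (13) is the incremental (velocity) form of the PI law: at each sample the controller adds Δu to the previous valve opening. A sketch under stated assumptions (the class name is mine, and saturating the opening to [0, 1] is my own addition, not part of equation (13)):

```python
class VelocityPI:
    """Incremental PI controller, eq (13): du = kc*de + (kc*Ta/Ti)*e."""

    def __init__(self, kc, Ti, Ta, u0=0.5):
        self.kc, self.Ti, self.Ta = kc, Ti, Ta
        self.u = u0              # current valve opening (0..1)
        self.e_prev = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        du = self.kc * (e - self.e_prev) + (self.kc * self.Ta / self.Ti) * e
        self.e_prev = e
        # The valve opening is the control action itself (saturation is my assumption)
        self.u = min(1.0, max(0.0, self.u + du))
        return self.u
```

Because only the increment is computed, switching between the slow and fast controllers of the band methodology is bumpless: each controller continues from the current valve opening.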
5. Simulation and analysis results of the control strategies
This section presents the simulation results of the control strategies using the computational tool Matlab. To implement the control by bands, a separator with length 4.5 m and diameter 1.5 m is used, following the standards used by [9]. The setpoint for the controller is 0.75 m (i.e., half the separator diameter), and the band is 0.2 m, the maximum permitted liquid level being 0.95 m and the minimum 0.55 m. The bands were defined following the works of [3, 9].
Initially, for the first simulation, the Z valve opening at the top of the riser is z = 20% (slug flow). To simulate the conventional PI level control strategy, the values used for the controller gain k_c and the integral time T_i are 10 and 1380 s, respectively, according to the heuristic method for tuning level controllers proposed by Campus et al. (2006). In the PI level control strategy in the methodology by bands, the level can float freely within the band limits in the separator. In this case, the slow-acting PI controller (i.e., within the band) uses controller gain k_c = 0.001 and integral time T_i = 100000 s, and the fast-acting PI controller (i.e., outside the band) uses k_c = 0.15 and T_i = 1000 s. In the error-squared PI level control strategy in the methodology by bands, the linear and nonlinear gains of the controller are computed following the methodology presented in [11], based on Lyapunov stability theory. In this case, the slow-acting error-squared PI level controller (i.e., within the band) uses k_c = 0.001, k_2NL = 0.000004 and T_i = 100000 s, and the fast-acting PI controller (i.e., outside the band) uses k_c = 0.15, k_2NL = 0.03 and T_i = 1000 s. The period for calculating the moving average of the PI controllers by bands was 1000 s.
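The controller tunings listed above can be collected in a single configuration structure; this is only a restatement of the values in the text (key names are mine), which is convenient when scripting the comparison of the three strategies:

```python
# Controller tunings used in this section
# (slow = inside the band, fast = outside the band)
TUNINGS = {
    "pi_conventional": {"kc": 10.0,  "Ti": 1380.0},
    "pi_bands_slow":   {"kc": 0.001, "Ti": 100000.0},
    "pi_bands_fast":   {"kc": 0.15,  "Ti": 1000.0},
    "err2_bands_slow": {"kc": 0.001, "k2nl": 0.000004, "Ti": 100000.0},
    "err2_bands_fast": {"kc": 0.15,  "k2nl": 0.03,     "Ti": 1000.0},
}
```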
Figure 10 (a) presents the liquid level variations N(t) in the separator considering the conventional PI level control strategy (dashed line) and the PI level control strategy in the methodology by bands (solid line), and Figure 10 (b) presents the liquid level variations N(t) considering the conventional PI level control strategy (dashed line) and the error-squared PI level control strategy in the methodology by bands (solid line). Figures 11 (a) and (b) show the liquid output flow rate variations m_LS,out(t) of the separator corresponding to the level controls presented in Figures 10 (a) and (b).
Figure 10. Liquid level variations N(t): (a) conventional PI level control strategy (dashed line) and PI level control strategy by bands (solid line); (b) conventional PI level control strategy (dashed line) and error-squared PI level control strategy by bands (solid line), z = 20%.
Figure 11. Liquid output flow rate variations m_LS,out(t): (a) conventional PI level control strategy (dashed line) and PI level control strategy by bands (solid line); (b) conventional PI level control strategy (dashed line) and error-squared PI level control strategy by bands (solid line), z = 20%.
Simulation results are presented next for other valve openings at the top of the riser, i.e., z = 20%, z = 25%, z = 30% and z = 35% (slug flow). Figure 12 (a) presents the liquid level variations N(t) in the separator considering the conventional PI level control strategy (dashed line) and the PI level control strategy in the methodology by bands (solid line), and Figure 12 (b) presents the liquid level variations N(t) considering the conventional PI level control strategy (dashed line) and the error-squared PI level control strategy in the methodology by bands (solid line). Figures 13 (a) and (b) show the liquid output flow rate variations of the separator corresponding to the level controls presented in Figures 12 (a) and (b).
Figure 12. Liquid level variations N(t): (a) conventional PI level control strategy (dashed line) and PI level control strategy by bands (solid line); (b) conventional PI level control strategy (dashed line) and error-squared PI level control strategy by bands (solid line), z = 20%.
Comparing the simulation results of the PI level control strategy and the error-squared PI level control strategy, both in the methodology by bands, it is observed that the second controller (Figures 10 and 12 (b)) respected the defined bands more strictly, i.e., 0.95 m (upper band) and 0.55 m (lower band), because it has a harder control action than the first controller (Figures 10 and 12 (a)). However, when the liquid level reached the band limits under the error-squared PI level control strategy, the liquid output flow rate showed slightly more oscillatory flows than those found for the PI level controller by bands, but
115 The Slug Flow Problem in Oil Industry and Pi Level Control
[Figure 13 graphic, panels (a) and (b): output flow rate [kg/s] vs. time [min]; legend: output flow rate variations with the conventional PI level controller and with the PI level controller by bands (a) / error-squared PI level controller by bands (b).]
Figure 13. Liquid output flow rate variations m_LS,out(t), (a) conventional PI level control strategy (dashed line) and PI-by-bands level control strategy (solid line), (b) conventional PI level control strategy (dashed line) and error-squared PI-by-bands level control strategy (solid line), z = 20%.
this difference is minimal, according to Figures 11 (a) and (b) and Figures 13 (a) and (b). For both controllers, the simulation results for the liquid output flow rate are better than those obtained with the conventional PI level control strategy. Considering the liquid output flow rate when the level is within the band, both processes (i.e., the PI level control strategy and the error-squared PI level control strategy, both in the methodology by bands) have similar trends.
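The difference between the control laws compared above can be sketched in a few lines. This is a minimal illustration, not the chapter's implementation: the gain, reset time and sampling values are placeholders, and only the 0.55 m and 0.95 m band limits come from the text.

```python
def band_error(level, low=0.55, high=0.95):
    """Error used by the by-bands strategies: zero while the level stays
    inside the band, distance to the nearest band limit outside it."""
    if level > high:
        return high - level   # negative error: level above the upper band
    if level < low:
        return low - level    # positive error: level below the lower band
    return 0.0

def pi_conventional(error, integral, kc, ti, dt):
    # Conventional PI law: u = Kc * (e + (1/Ti) * integral of e dt
    integral += error * dt
    return kc * (error + integral / ti), integral

def pi_error_squared(error, integral, kc, ti, dt):
    # Error-squared PI law: the proportional term is scaled by |e|, so the
    # action is soft near the band and much harder far away from it
    integral += error * dt
    return kc * (error * abs(error) + integral / ti), integral
```

For a small error the error-squared law reacts far more gently than the conventional one, which is why it lets the level float inside the band and filters the slug-flow oscillations out of the output flow rate.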
6. Conclusion
In this chapter, with the objective of reducing the oscillatory export flow rate caused by slug flow, three level control methodologies were implemented: (1) the conventional PI level control strategy; (2) the PI level control strategy in the methodology by bands; (3) the error-squared PI level control strategy in the methodology by bands.
The simulation results showed that the error-squared PI level control strategy in the methodology by bands presented the best results: compared with the conventional PI level control strategy, it reduced the flow fluctuations caused by slug flow; and compared with the PI level control strategy in the methodology by bands, it performed better, probably because it respected the defined bands more strictly.
As suggestions for future work, new control strategies can be implemented in an integrated system, i.e., with more than one valve operating simultaneously. Regarding the mathematical modeling of the process, it would be necessary to investigate a mathematical model with fewer parameters, along with the construction of an experimental platform, since data from a real process are difficult to obtain and are not provided by the oil industry.
Author details
Airam Sausen, Paulo Sausen, Mauricio de Campos
Master's Program in Mathematical Modeling (MMM), Group of Industrial Automation and Control,
Regional University of Northwestern Rio Grande do Sul State (UNIJUÍ), Ijuí, Brazil.
7. References
[1] Astrom, K. J. & Hagglund, T. [1995]. PID Controllers: Theory, Design, and Tuning, ISA, New York.
[2] Bendiksen, K., Malnes, D., Moe, R. & Nuland, S. [1991]. The dynamic two-fluid model OLGA: theory and application, SPE Production Engineering, pp. 171–180.
[3] de Campos, M. C. M., Costa, L. A., Torres, A. E. & Schmidt, D. C. [2008]. Advanced control of levels in separators of production platforms, I CICAP – Congresso de Instrumentação, Controle e Automação da Petrobrás, Rio de Janeiro. In Portuguese.
[4] Friedman, Y. Z. [1994]. Tuning of averaging level controller, Hydrocarbon Processing Journal.
[5] Godhavn, M. J., Mehrdad, F. P. & Fuchs, P. [2005]. New slug control strategies, tuning rules and experimental results, Journal of Process Control 15: 547–577.
[6] Havre, K. & Dalsmo, M. [2002]. Active feedback control as a solution to severe slugging, SPE Production and Facilities, SPE 79252, pp. 138–148.
[7] Havre, K., Stornes, K. & Stray, H. [2000]. Taming slug flow in pipelines, ABB Review, number 4, pp. 55–63.
[8] Havre, K. & Stray, H. [1999]. Stabilization of terrain induced slug flow in multiphase pipelines, Servomotet, Trondheim.
[9] Nunes, G. C. [2004]. Bands control: basic concepts and application in load oscillation damping in oil production platforms, Petróleo, Centro de Pesquisas (Cenpes), pp. 151–165. In Portuguese.
[10] Sausen, A. [2009]. Mathematical modeling of a pipeline-separator system under slug flow and level control considering an error-squared algorithm, PhD thesis, Norwegian University of Science and Technology, Brazil. In Portuguese.
[11] Sausen, A. & Barros, P. R. [2007a]. Lyapunov stability analysis of the error-squared controller, Dincon 2007 – 6th Brazilian Conference on Dynamics, Control and Their Applications, São José do Rio Preto, Brazil, pp. 1–8.
[12] Sausen, A. & Barros, P. R. [2007b]. Properties and Lyapunov stability of the error-squared controller, SSSC07 – 3rd IFAC Symposium on System, Structure and Control, Foz do Iguaçu, Brazil, pp. 1–6.
[13] Sausen, A. & Barros, P. R. [2008]. Modelo dinâmico simplificado para um sistema encanamento-riser-separador considerando um regime de fluxo com golfadas, Tendências em Matemática Aplicada e Computacional, pp. 341–350. In Portuguese.
[14] Shinskey, F. [1988]. Process Control Systems: Application, Design, and Adjustment, McGraw-Hill Book Company, New York.
[15] Storkaas, E. [2005]. Stabilizing control and controllability: control solutions to avoid slug flow in pipeline-riser systems, PhD thesis, Norwegian University of Science and Technology, Norway.
[16] Storkaas, E., Alstad, V. & Skogestad, S. [2001]. Stabilization of desired flow regimes in pipelines, AIChE Annual Meeting, paper 287d, Reno, Nevada.
[17] Storkaas, E. & Skogestad, S. [2002]. Stabilization of severe slugging based on a low-dimensional nonlinear model, AIChE Annual Meeting, paper 259e, Indianapolis, USA.
[18] Storkaas, E. & Skogestad, S. [2005]. Controllability analysis of an unstable, non-minimum phase process, 16th IFAC World Congress, Prague, pp. 1–6.
[19] Storkaas, E. & Skogestad, S. [2006]. Controllability analysis of two-phase pipeline-riser systems at riser slugging conditions, Control Engineering Practice, pp. 567–581.
[20] Storkaas, E., Skogestad, S. & Godhavn, J. M. [2003]. A low-dimensional dynamic model of severe slugging for control design and analysis, 11th International Conference on Multiphase Flow (Multiphase '03), San Remo, Italy, pp. 117–133.
[21] Tengesdal, J. O. [2002]. Investigation of self-lifting concept for severe slugging elimination in deep-water pipeline/riser systems, PhD thesis, The Pennsylvania State University, Pennsylvania.
[22] Thomas, P. [1999]. Simulation of Industrial Processes for Control Engineers, Butterworth-Heinemann.
Chapter 6
Integration of Seismic Information in Reservoir
Models: Global Stochastic Inversion
Hugo Caetano
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/52308
1. Introduction
The stochastic models for reservoir property characterization are a well-known and important tool for reserve management; reservoir quality and thickness also play a key role in deciding optimal well locations in any producing field. Today's production reservoirs are increasingly complex, and the majority of them are difficult to access (offshore), which, in technical and cost terms, translates into a lack of information.
In petroleum applications, stochastic modeling of the internal properties (porosity and permeability), lithofacies and sand bodies of reservoirs normally uses core and log data, which provide detailed reservoir parameters; spatially, however, these data are limited to a few subsurface locations, scarce and expensive, but reliable.
The models created with this lack of information carry a great level of uncertainty. It is in this category of models that one finds the Sequential Indicator Simulation, used for the morphological characterization of lithoclasses in [1], the Sequential Gaussian Simulation in [2] and, more recently, the Direct Sequential Simulation in [3].
The integration of different types of information in a unique and coherent stochastic model has been one of the most important, and still current, challenges of the geostatistical practice of modeling physical phenomena of natural resources; in order to make decisions regarding the development of well locations, geoscientists need to use all available data.
The recent trend of the scientific community in reservoir characterization research is to create models that integrate another kind of information (secondary or auxiliary) that is normally available: the seismic information.
The seismic data, which can cover the entire reservoir space, have a high uncertainty given the quality and the coarse vertical resolution of seismic, which varies from about 25 by 25 meters horizontally and 1 to 4 milliseconds vertically in 3-D seismic acquisitions. This sampling is much coarser than the data measured in wells, which vary from some centimeters to a few feet. It is important information nevertheless, but in almost all applications the seismic data cannot be linked directly to the well properties (lithology, porosity and permeability), and are difficult to use directly in the models one wishes to create.
Reservoir models based only on seismic information (3-D or 4-D) are normally limited to the structural information, derived from the major horizons and fault systems interpreted in the coarse seismic, and do not take into account the available well information related to the internal characteristics of the reservoir (porosity, permeability and water saturation). On the other side, the characterization of reservoir models based only on well information, like the recent geostatistical stochastic models, can be greatly improved by the integration of seismic information, which is normally available in the initial phases of prospecting and production.
The integration of these two types of information, with different spatial coverage and totally different uncertainty levels, is a challenge that even today confronts the scientific community linked to earth science modeling.
2. Objective
The main objective of this work is the development and implementation of a stochastic
model algorithm for seismic inversion to improve reservoir characterization.
The methods of integrating seismic data can be roughly divided in two approaches. The first relies on a statistical relationship between seismic data and internal properties, or lithofacies, to characterize local distributions of these properties at any location of the reservoir by using, for example, co-simulations as in [4,5]. Other approaches are posed in an inverse-problem framework, where the known seismic amplitudes are physically related to the unknown acoustic impedances (or porosity) by means of a convolution model; among them is the so-called geostatistical inversion in [6].
The major drawback of the direct models is that the correlations found at the well locations between the seismic and the internal properties (porosity and permeability) are normally low and sometimes spurious, condemning all models derived from them to a great uncertainty.
The great drawback of the recent deterministic inversion models is that they do not produce any measure of uncertainty, are not robust (showing a great dependency on the seismic quality) and have little reliability in almost all of the more complex reservoirs.
In 1993, Bortoli launched the embryo of what is considered a reliable alternative to the existing inverse models: the stochastic inversion. In this method the sequential Gaussian simulation is used to transform, in an iterative process, each of the N vertical columns of the seismic cube.
Since then, the geostatistical seismic inversion has been a commonly used technique to
incorporate seismic information in stochastic fine grids models.
Essentially, geostatistical inversion methods, as in [7-9], perform a sequential approach in two steps:
i. Acoustic impedance values are simulated in each trace (a column of a 3D grid), based on well data and the spatial continuity pattern revealed by the variograms;
ii. The acoustic impedance values are transformed, by convolution with a known estimated wavelet, into amplitudes, giving rise to a synthetic seismogram that can be compared with the real seismic.
The best simulated trace is retained, based on the match of an objective function (a function of the similarity between the real seismic trace and the seismogram), and another trace is visited to be simulated and transformed. The sequential process continues until all traces of acoustic impedances are simulated. At each step, as soon as the best transformed trace is accepted, the traces of simulated acoustic impedances are incorporated as real data for the next sequential simulation step. This can lead to artificially good matches in local areas where bad seismic quality prevails.
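The trace-by-trace scheme just described can be sketched as follows. The routine and all three callables (`simulate_trace`, `forward_trace`, `match`) are hypothetical placeholders, shown only to make the sequential accept-and-reuse logic concrete:

```python
def trace_by_trace_inversion(traces, real_seismic, wavelet, n_tries,
                             simulate_trace, forward_trace, match):
    """Sketch of sequential trace-by-trace geostatistical inversion;
    the callables stand in for the geostatistical simulator, the
    convolution and the objective function."""
    accepted = {}   # traces already inverted, reused as if they were real data
    for loc in traces:
        # simulate several candidate AI traces, conditioned on the wells
        # and on every previously accepted trace
        candidates = [simulate_trace(loc, accepted) for _ in range(n_tries)]
        # keep the candidate whose synthetic seismogram best matches
        # the real seismic at this location
        best = max(candidates,
                   key=lambda c: match(forward_trace(c, wavelet),
                                       real_seismic[loc]))
        accepted[loc] = best   # incorporated as hard data for the next traces
    return accepted
```

The `accepted[loc] = best` line is exactly the step criticized in the text: once a locally best trace is frozen as hard data, it can force artificially good matches around areas of bad seismic.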
The base idea of this research work is precisely to incorporate stochastic simulation and co-simulation methodologies to conceive and implement a model of global seismic inversion, creating uncertainty measures linked to areas with different seismic quality.
The use of geostatistics for the creation and transformation of images (acoustic impedances), and of genetic algorithms for the modification and generation of better images, allows the convergence of the inverse process.
The methodology is proposed based on a global perturbation, instead of trace-by-trace, to
reach the objective function of the match between synthetic seismogram and real seismic.
Using the sequential simulation and co-simulation approaches it creates several realizations
of the entire 3D cube of acoustic impedances that are simulated in a first step, instead of
individual traces or cells.
After the convolution, local areas of best fit of the different images are selected and merged into a secondary image used by a direct co-simulation in the next iteration. The iterative and convergent process continues until a given match with an objective function is reached. The imposed spatial dispersion and patterns of acoustic impedances are reproduced in the final acoustic impedance cube.
As the iterative process is based on global simulations and co-simulations of impedances, there is no local imposition of artificially good fits, i.e., areas of bad seismic tend to remain with bad match coefficients, unlike what happens in most trace-by-trace approaches.
At each iterative step one knows how close a given generated image is to the objective, through the global and local correlation coefficients between the transformed traces and the real seismic traces. These correlation coefficients of the different simulated images are used as the affinity criterion to create the next generation of images, until the process converges to a given predefined threshold.
In a last step, porosity images can be derived from the acoustic impedances obtained by seismic inversion, and the uncertainty derived from the seismic quality is assessed based on the quality of the match between the synthetic seismogram and the real seismic.
For the characterization of the reservoir in terms of facies distribution, several methods for the integration of seismic data in facies models have been proposed, several of which rely on the construction of a facies probability cube by calibration of the seismic data with wells. If only post-stack seismic data is considered, one typically inverts the seismic amplitudes into a 3D acoustic impedance cube, and then converts this impedance into a 3D facies probability using a calibration method of choice.
This facies probability can serve as input to several well-known geostatistical algorithms to create a facies realization, such as using the cube as a locally varying mean in indicator kriging, or using the tau model in [10] in a multi-point simulation to integrate the facies probability cube with the spatial continuity information provided by a training image.
While this provides satisfactory results in most cases, the resulting facies realization does not necessarily match the original seismic amplitudes from which the acoustic impedance was inverted. Indeed, if one were to forward-simulate, for example, a 1D convolution on a single facies realization, this procedure would not guarantee that the forward-simulated seismic matches the field amplitudes.
Another objective of this work is to present a geostatistical methodology, based on a multi-point technique, that generates facies realizations compatible with the field seismic amplitude data. This new procedure has two main advantages: it matches the field seismic amplitude data in a physical sense, not merely a probabilistic one, and it uses multi-point statistics, not just the two-point statistics (variogram).
3. Seismic inversion
The seismic data is the best source of spatially extensive measurement over the reservoir but, as explained previously, is very coarse vertically. From the wells we can use the acoustic impedance (AI), which is the result of multiplying the interval velocity by the layer density and can later be transformed into seismic amplitude; this is possible because the contrast between different geological materials is what drives the response of the seismic signal.
The advantages of working with AI instead of the recorded seismic data are that AI is a layer property rather than an interface property, and hence more like geology; it therefore has a more physical meaning, and during the inversion process all the well data is tied to the seismic data, giving a better understanding of the quality of both datasets.
So, the integration of the seismic amplitude data with the acoustic impedance data from wells requires some kind of transformation. There are two methods to transform one into the other: an inverse method and a direct, or forward, method.
Both of them use wavelets for the transformation. A wavelet can be summarized as a one-dimensional pulse, the link between seismic data and geology: a kind of impulse of energy created when a marine air gun or land dynamite source is released during the acquisition of seismic surveys.
The first method, the inverse process, transforms the amplitudes from seismic to acoustic impedance by removing the wavelet from the amplitudes and obtaining a model of the acoustic impedance. The problem of simultaneously inverting reservoir engineering and seismic data to estimate porosity and permeability involves complex processes, such as fluid flow through porous media and acoustic wave propagation, and cannot be solved by linear inversion methods as in [11]. On the other hand, the process of sending a wavelet into the earth and measuring the reflection is called the forward process. Here we need to know what is in the subsurface in terms of geological models, divided in layers, each characterized by its acoustic impedance. Forward modeling combines the sequence of differences in the acoustic impedances with a seismic pulse to obtain a synthetic amplitude trace.
In the forward model if we compare those sequences of synthetic amplitude variations with
the real ones, we can identify and quantify the differences and try to minimize them.
3.1. Synthetic seismogram
It is known that the propagation of waves or sounds through the earth can be explained by an elastic wave equation. One simple but powerful and commonly used approach for computing the seismic response of a certain earth model is the so-called convolutional model, as in [12-14], which can be derived from an acoustic approximation of the elastic equation.
This convolutional model (equation 1) creates a synthetic seismic trace Sy(t), that is the
result of convolving a seismic wavelet w(t) with a time series of reflectivity coefficients r(t),
with the addition of a noise component n(t), as follows:
Sy(t) = w(t) * r(t) + n(t) (1)
An even simpler assumption is to consider the noise component to be zero (equation 2), in
which case the synthetic seismic trace is simply the convolution of a seismic wavelet with
the earths reflectivity:
Sy(t) = w(t) * r(t) (2)
The reflection coefficient is one of the fundamental physical concepts in the seismic method. Basically, each reflection coefficient may be thought of as the response of the seismic wavelet to an acoustic impedance change within the earth, and represents the fraction of the incident energy that is reflected.
Mathematically, converting from acoustic impedance to reflectivity involves dividing the difference of the acoustic impedances by their sum. This gives the reflection coefficient at the boundary between the two layers.
Each layer can have a different rock acoustic impedance, which is a function of two porosity-dependent rock properties: bulk density and velocity (equation 3). For instance, the P-wave acoustic impedance is given by:
AI(t) = density(t) * P-Velocity(t) (3)
So the reflectivity coefficient (equation 4) can be computed from:

r_t = (AI_{t+1} - AI_t) / (AI_{t+1} + AI_t) (4)
where AI represents the acoustic impedance and the subscripts t and t+1 refer to two
subsequent layers in the stratigraphic column.
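Equations (2)–(4) can be put together in a short forward-modeling sketch. This is an illustration of the convolutional model, not the chapter's code; the impedance and wavelet values used in the comments are arbitrary:

```python
def reflectivity(ai):
    # Equation (4): r_t = (AI_{t+1} - AI_t) / (AI_{t+1} + AI_t)
    # for each pair of subsequent layers in the stratigraphic column
    return [(ai[t + 1] - ai[t]) / (ai[t + 1] + ai[t])
            for t in range(len(ai) - 1)]

def convolve(r, w):
    # Equation (2): noise-free synthetic trace Sy(t) = w(t) * r(t),
    # written as a discrete convolution of the reflectivity with the wavelet
    sy = [0.0] * (len(r) + len(w) - 1)
    for i, ri in enumerate(r):
        for j, wj in enumerate(w):
            sy[i + j] += ri * wj
    return sy

def synthetic_trace(ai, wavelet):
    # One AI column in, one synthetic amplitude trace out
    return convolve(reflectivity(ai), wavelet)
```

For example, two layers with impedances 4000 and 6000 give a single reflection coefficient of (6000 − 4000) / (6000 + 4000) = 0.2, which the convolution then spreads over the length of the wavelet.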
The wavelet is usually extracted from the seismic survey through deconvolution (the methodology of extracting the wavelet from the seismic signal). During the deconvolution, the wavelet extraction does not take into account the low frequency of the earth model, which represents the compaction of the porous media with depth; some seismic inversion algorithms and inverse modeling either ignore it or take into account only adjacent geological units, not the full vertical analysis.
In this work, this problem is resolved by having the algorithm that generates the acoustic impedance model consider the trends represented in the well log data, by initially creating a model conditioned only by the well data (AI).
3.2. Correlation coefficient comparison
Using the convolution algorithm described previously, one can forward-model an entire model (or cube, if one considers a model as a Cartesian grid) of AI, created using only the wells and their spatial continuity, into a synthetic seismic amplitude cube that can be compared with the real amplitude seismic cube. This comparison will give the mismatch between those two cubes of amplitudes.
The mismatch between the synthetic and the field amplitudes is calculated as a simple co-
located correlation coefficient as in [15-17]. Alternatively, this mismatch could be calculated
in the form of a least-square difference between both amplitude cubes.
In a first step, each of the N synthetic seismic amplitude cubes is divided into layers (figure 1). The choice of the number of layers is a tuning parameter of the algorithm.
The amount of mismatch between each pair of columns of the synthetic and the real
amplitude is calculated and stored in a new cube (figure 1), named the mismatch cube.
Essentially, this new cube provides information on which locations in the reservoir are
fitting the seismic and which need improvement.
Figure 1. Calculation of the mismatch cube.
A mismatch cube is calculated for each of the simulated models using their corresponding
synthetic seismic models.
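The co-located correlation per column can be sketched as follows. This is an illustrative implementation under the assumption that a cube is stored as nested lists indexed `cube[x][y][t]`; the function names are not from the original work:

```python
def column_correlation(synth_col, real_col):
    # Co-located correlation coefficient between one synthetic and one
    # real amplitude column: cov(x, y) / (sigma_x * sigma_y)
    n = len(synth_col)
    mx = sum(synth_col) / n
    my = sum(real_col) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(synth_col, real_col)) / n
    sx = (sum((x - mx) ** 2 for x in synth_col) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in real_col) / n) ** 0.5
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def mismatch_cube(synth, real):
    # One correlation value per (x, y) column of the two amplitude cubes,
    # indicating which locations fit the seismic and which need improvement
    return [[column_correlation(synth[x][y], real[x][y])
             for y in range(len(synth[x]))] for x in range(len(synth))]
```

Alternatively, as noted in the text, the same map could be built from least-square differences between the two cubes instead of correlations.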
3.3. Co-generation of acoustic images
The generation of an acoustic impedance model is the main objective of this work, but until an optimized one is created, it is submitted to a convergent genetic modification.
These images are generated through stochastic simulation, i.e., the generation of different models with the same probabilistic and spatial distribution. This means that if the parameters used for the creation of the models are too tight, the produced images are almost identical; if the parameters allow more freedom, the models are totally different.
Using this idea, the proposed methodology uses two approaches for image generation, implemented in two different programs. The first one is the DSS (Direct Sequential Simulation), used for the generation of the first unconditioned images; then, for the convergence, the Co-DSS (Direct Sequential Co-Simulation) is used, which takes soft or secondary data to condition the simulation, as in [3,18,19].
The second algorithm is Snesim (Single Normal Equation SIMulator), a multi-point simulation algorithm described in [20,21]. This version is used to create the first unconditional facies models. A second version, with the modification in [2], uses the influence of secondary data to create models conditioned to more information. Both these
programs use secondary data for the co-simulations, and this secondary or soft data is what makes the algorithm converge.
For the use of Direct Sequential Co-Simulation for the global transformation of images, as in [17], let us consider that one wishes to obtain a transformed image Zt(x) based on a set of Ni images Z1(x), Z2(x), ..., ZNi(x) with the same spatial dispersion statistics (e.g. correlogram or variogram and global histogram): C1(h), ρ1(h), Fz1(z).
Direct co-simulation of Zt(x), having Z1(x), Z2(x), ..., ZNi(x) as auxiliary variables, can be applied as in [3]. The collocated cokriging estimator (equation 5) of Zt(x) becomes:

[Zt(x0)]* - mt(x0) = Σα λα(x0) [Zt(xα) - mt(xα)] + Σ(i=1..Ni) λi(x0) [Zi(x0) - mi(x0)] (5)
Since the models ρi(h), i = 1, ..., Ni, and ρt(h) are the same, the application of the Markov approximation is, in this case, quite appropriate, i.e., the co-regionalization models are totally defined by the correlation coefficients ρt,i(0) between Zt(x) and Zi(x).
The affinity of the transformed image Zt(x) with the multiple images Zi(x) is determined by the correlation coefficients ρt,i(0). Hence, one can select the images whose characteristics we wish to preserve in the transformed image Zt(x). So, a local screening effect approximation can be made, by assuming that, to estimate Zt(x0) (equation 5), the collocated value Zi(x0) of the specific image Zi(x) with the highest correlation coefficient ρt,i(0) screens out the influence of the remaining collocated values Zj(x0), j ≠ i. Hence, equation 6 can be written with just one auxiliary variable, the best at location x0:
[Zt(x0)]* - mt(x0) = Σα λα(x0) [Zt(xα) - mt(xα)] + λi(x0) [Zi(x0) - mi(x0)] (6)
With the local screening effect, the Ni images Zi(x) give rise to just one auxiliary variable: the Ni images are merged into a single image based on the local correlation coefficient criterion. Basically, the correlation (or least mismatch) of each of the previously simulated images at each x0 is examined, and a single image is built by selecting the best value at each x0.
In practice, the algorithm chooses, at each x0 of the simulated acoustic impedance models, the best genes and the correlation that those genes have. Based on figure 2, the process is explained as follows.
The process starts by analyzing each of the mismatch cubes created previously, and at each position (x0) of the cubes it compares which one has the higher value of correlation, or the lower mismatch. In this example the correlation is used, so the points 1a and 1b are compared, and since it is 1a that has the higher value of correlation, that value is copied (1c) to the Best Correlation Cube.
At the same time, since it was the first simulation that had the better coefficient of correlation (CC) with the real seismic, another cube is created with the corresponding acoustic impedance (AI) value (1d). Next, the following set of values from the CC cubes is compared, and the process is the same: comparing the sets 2a and 2b, since in this case it is the second cube that has the higher value of CC, it is this value that is copied to the Best CC cube (2c). The same is done for the AI cube of this simulation, copying the acoustic impedance value to the Best AI cube (2d). This process continues until all the N simulated models have been analyzed, over all the values of the cubes. In this way two new cubes are created: one with the best genes from each acoustic impedance model generated previously (Best AI), and another with the confidence factor of each part (Best CC).
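The selection of the best genes just described can be sketched compactly. This is illustrative only: the function name is not from the original work, and the cubes are flattened to plain lists of per-location values for simplicity:

```python
def compose_best(cc_cubes, ai_cubes):
    """Merge N simulated models into Best CC / Best AI by keeping, at each
    location, the value from the simulation with the highest local
    correlation (steps 1a-1d and 2a-2d of figure 2)."""
    best_cc, best_ai = [], []
    for cell in range(len(cc_cubes[0])):
        # index of the simulation with the highest correlation at this cell
        winner = max(range(len(cc_cubes)), key=lambda s: cc_cubes[s][cell])
        best_cc.append(cc_cubes[winner][cell])   # confidence factor
        best_ai.append(ai_cubes[winner][cell])   # best gene
    return best_cc, best_ai
```

The two returned lists play the roles of the Best CC and Best AI cubes used as soft data and local co-regionalization model in the next iteration.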
Figure 2. Process of creating the Best Correlation Cube and the Best Acoustic Impedance cube.
Since the process is iterative, in the very first steps the secondary image (Best AI) does not have the spatial continuity pattern of the primary one (simulated AI), as it results from a composition of different parts of a set of simulated images. As the process continues, the secondary image tends to have the same spatial pattern as the images generated by co-simulation, because the correlation values become higher and the freedom of the co-simulation diminishes. Finally, these two new data sets are used as soft data for the next iteration.
4. Algorithm description using direct sequential simulation
The proposed stochastic inversion algorithm is built around the following key ideas: generation of stochastic images; transformation of the images into synthetic seismograms; choosing and keeping the best genes from each of the images; and then using them, through the genetic algorithm formalism and stochastic co-simulation, to create a new generation of images and drive the convergence of the process. It can be summarized in the following steps:
i. Generate a set of initial 3D images of acoustic impedances by using direct sequential simulation, generating entire 3D images rather than individual traces or cells;
ii. Create the synthetic seismogram of amplitudes, by convolving the reflectivity, derived from the acoustic impedances, with a known wavelet;
iii. Evaluate the match between the synthetic seismograms of the entire 3D image and the real seismic by computing, for example, local correlation coefficients;
iv. Rank the best images based on the match (e.g. the average value or a percentile of the correlation coefficients for the entire image). From them, select the best parts (the columns or horizons with the best correlation coefficient) of each image and compose one auxiliary image with the selected best parts for the next simulation step;
v. Generate a new set of images by direct sequential co-simulation, using the best local correlations as the local co-regionalization model, and return to step ii), starting an iterative process that ends when the match between the synthetic seismogram and the real seismic is considered satisfactory or when a given threshold of the objective function is reached;
vi. The last step of the process is the transformation of the optimized cube of acoustic impedance into internal reservoir characteristics.
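As a rough sketch of steps ii–v, the loop below uses NumPy with a toy stand-in for DSS/co-DSS (the `simulate` function is purely illustrative and is not a real geostatistical simulator); the forward model, per-column correlations and "best genes" composition follow the steps above:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(ai, wavelet):
    """Step ii: reflectivity from acoustic impedance, convolved with the wavelet."""
    rc = (ai[..., 1:] - ai[..., :-1]) / (ai[..., 1:] + ai[..., :-1])
    return np.apply_along_axis(lambda t: np.convolve(t, wavelet, mode="same"), -1, rc)

def local_cc(syn, real):
    """Step iii: correlation between synthetic and real seismic, per (i, j) column."""
    nx, ny = syn.shape[:2]
    cc = np.empty((nx, ny))
    for i in range(nx):
        for j in range(ny):
            cc[i, j] = np.corrcoef(syn[i, j], real[i, j])[0, 1]
    return cc

def compose_best(ai_stack, cc_stack):
    """Step iv: per column, keep the AI ('best genes') of the realization with
    the highest local correlation, plus that correlation value."""
    winner = np.argmax(cc_stack, axis=0)                           # (nx, ny)
    best_ai = np.take_along_axis(ai_stack, winner[None, :, :, None], axis=0)[0]
    best_cc = np.take_along_axis(cc_stack, winner[None, :, :], axis=0)[0]
    return best_ai, best_cc

def simulate(n, shape, soft=None):
    """Stand-in for DSS / co-DSS: unconditional Gaussian cubes, nudged towards
    the soft data when present (NOT a real geostatistical co-simulation)."""
    sims = rng.normal(7000.0, 1000.0, size=(n,) + shape)
    if soft is not None:
        sims = 0.7 * soft + 0.3 * sims
    return sims

# Toy run: 8 realizations of a 4 x 4 x 16 AI cube against a known "real" seismic.
shape, wavelet = (4, 4, 16), np.array([-0.5, 1.0, -0.5])
real_seis = forward_model(rng.normal(7000.0, 1000.0, size=shape), wavelet)

soft = None
for it in range(4):                                                # step v: iterate
    sims = simulate(8, shape, soft)
    ccs = np.stack([local_cc(forward_model(s, wavelet), real_seis) for s in sims])
    soft, best_cc = compose_best(sims, ccs)
    print(f"iteration {it}: mean best local cc = {best_cc.mean():.3f}")
```

In the real algorithm the composed `soft` cube and its correlations drive the direct sequential co-simulation of the next generation; here they only nudge the toy simulator.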
At each iterative step one knows how close a given generated image is to the objective, through the global and local correlation coefficients between the synthetic seismogram and the real seismic. These correlation coefficients of the different simulated images are used as the affinity criterion to create the next generation of images, until the process converges to a given predefined threshold. A simplified diagram is shown in figure 3.
Figure 3. Diagram of the proposed algorithm
Integration of Seismic Information in Reservoir Models: Global Stochastic Inversion 129
4.1. Case study
A case study of a Middle East reservoir is presented, but only a small part of the full reservoir is studied due to data confidentiality, and the coordinates presented are modified. The field, described in [22,23], is a carbonate reservoir whose depositional geology holds in some zones a strong internal geometry with clinoforms. These are sedimentary deposits with a sigmoidal or S shape. They can range in size from meters, like sand dunes, to kilometers, and can grow horizontally in response to sediment supply and physical limits on sediment accumulation.
This type of geological phenomenon increases the internal complexity of the reservoir, making the geophysical interpretation of the seismic very difficult, mainly when the acquisition resolution is very coarse. It also has a great impact on well oil production and on reservoir characterization and modeling.
From the calibration of the acoustic impedance data of the wells with the seismic, a wavelet was extracted. There were 19 wells in the study area, but only two were used, because only these two have the velocity log, without which the acoustic impedance data cannot be calculated. On those wells the acoustic impedance was determined, then calibrated with the seismic and finally upscaled to fit the seismic scale of 4 ms.
The convolution for well 11 is shown in figure 4.
Figure 4. Calculation of the AI, convolution and match with seismic of well 11.
As one can notice, there is a very good match between the synthetic amplitude (red line on the right side), calculated from the acoustic impedance (white line in the middle) of the wells, and the real seismic amplitude extracted from the seismic cube (cyan line on the right side).
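The convolutional model behind figure 4 can be reproduced in a few lines; the impedance values and the Ricker wavelet parameters below are illustrative, not the actual well 11 data:

```python
import numpy as np

def ricker(f, dt, length=0.02):
    """Ricker wavelet of peak frequency f (Hz) sampled every dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Acoustic impedance log already upscaled to the seismic sampling of 4 ms.
ai = np.array([6500.0, 6500.0, 7200.0, 7200.0, 8100.0, 7600.0, 7600.0, 9000.0])

# Reflection coefficients at each interface, then convolution with the wavelet.
rc = (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])
synthetic = np.convolve(rc, ricker(25.0, 0.004), mode="same")
```

The synthetic trace only responds where the impedance changes, which is why the match with the real amplitudes validates the calibration at the wells.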
4.2. Results
The first set of 32 images of acoustic impedances is generated with the direct sequential simulation conditioned to the well data (AI) and the chosen variogram model. In the first iteration, the generated acoustic models (figure 5) are not constrained to any soft data, so only the wells are reproduced. This causes a high variability between the synthetic models (figure 6) and a wide-ranging standard deviation when computed over all 32 simulations.
Figure 5. Acoustic Impedance model of simulation #1 (left) and #15 (right).
Figure 6. Corresponding Synthetic Seismic model of simulation #1 (left) and #15 (right).
As can be noticed, the two models have totally different spatial distributions, although the histograms and variograms are the same.
That difference in spatial distribution is visible in the correlation cubes between the synthetic models and the real seismic (figure 7).
Figure 7. Correlation cubes between the synthetic seismic model and the real seismic, of simulation #1
(left) and #15 (right).
Figure 8. Average (left) and Standard Deviation (right) of acoustic impedance of all 32 simulations of
this iteration.
To confirm this variability, the average cube and standard deviation (figure 8) of all 32 simulations were determined. The influence of the wells on the average of the unconditioned simulations is clearly visible, particularly around well 1, as the variogram reproduces the spatial variability of the well log.
As perceived in the average model, the small scale variability has disappeared; however, this cube can be considered the Low Frequency Model, because it reproduces the main trends represented in the well logs. Also noticeable is the practically constant value of the standard deviation, representing the variability of unconditional stochastic simulations, except for the small decline of values in the middle of the figure, which is the influence of well 1.
Still, these first 32 simulations were used to build the Best Correlation Cube and Best
Acoustic Impedance cube that will be used as soft data for the next iteration (figure 9).
Figure 9. Best Acoustic Impedance cube (left) and Best Correlation Cube (right) derived from first
iteration.
The delineation of the structural layers is visible in the Best Acoustic Impedance model (BAI), while the Best Correlation Cube (BCC) shows a selection of high correlation coefficient values.
This assembled acoustic impedance model has lost all the spatial distribution that the original acoustic impedance models had (figure 5), but this is a simple intermediate result and not the final one, since it will be used as secondary or soft data for the next iteration, which will impose the variogram spatial distribution and the global histogram of the wells.
The algorithm ran six iterations; the first was unconditioned to any secondary or soft data, and only the last five had the Best Correlation Cube and Best Acoustic Impedance cube imposed as secondary or soft data.
Since the algorithm always chooses the best genes from each iteration, the patterns present in the real seismic become more visible with each iteration, and the correlation becomes higher and more continuous in all cube positions.
The convergence of the process is inevitable until a local maximum correlation is attained (figure 10). This maximum does not represent the best that the original seismic can produce, but rather a simulation that the entire process has created. If one ran the same process with the same data but with a different seed for the generation of the acoustic impedance model, the result would differ, but not drastically.
Figure 10. Convergence progress of the algorithm.
The 32 images of acoustic impedances of iteration 5 (considering that the first iteration is called 0, because it is unconditioned to soft data) are generated with the direct sequential co-simulation conditioned to the well data (acoustic impedances), the chosen variogram model and the soft or secondary data (BAI and BCC) of the fourth iteration.
One can clearly see the fast convergence from iteration 0 to iteration 1, after which the process starts to stabilize. The algorithm chooses the parts that have the highest correlation values at the end of iteration 0, and after iteration 1 almost every simulation has its correlation values around 1, making the selection of each part of a simulation a very detailed event (sometimes decided in the third or fourth decimal place of the correlation value).
The final results obtained demonstrate that the conditional data imposes a very strong effect on all 32 simulations. The variability of different models generated with different seeds is now almost nonexistent (figures 11 and 12) and they look practically the same.
As can be noticed, the two models have an almost identical spatial distribution, but some differences can be found, since they are two independent realizations.
(Chart data from figure 10: the correlation rises over iterations 0 to 5, with values of approximately 0.08, 0.62, 0.80, 0.85, 0.87 and 0.88.)
Figure 11. Acoustic Impedance model of simulation #3 (left) and #28 (right).
Figure 12. Corresponding Synthetic Seismic model of simulation #3 (left) and #28 (right).
Those small differences are easier to distinguish in the correlation cubes between the synthetic models and the real seismic (figure 13), since the correlation coefficient is very sensitive to small variations in patterns.
In these examples the layer set sizes were big enough to show the differences.
Figure 13. Correlation cubes between the synthetic seismic model and the real seismic, of simulation 3
(left) and 28 (right).
To confirm this lack of variability, the average cube is almost equal to a single simulation (figure 14), as validated by the lower values of the standard deviation of all 32 simulations.
Figure 14. Average and standard deviation of acoustic impedance of all 32 simulations of this iteration.
In figure 14 the same color scale was used for the standard deviation, for comparison with the standard deviation of the first iteration model (figure 8).
The values of the standard deviation for the final iteration vary only from 0 to 6900, with an average of 1600, a reduction of almost 80% in variability compared with the variation between 0 and 11500 and the average of 8000 of the first iteration.
The influence of the wells is no longer visible because all the well data are integrated in the full model.
4.3. Remarks
In spite of the scarcity of the log data, the proposed method achieved extremely good results. In a case of data abundance, poorly calibrated data could compromise the quality of the convergence, and the influence of the uncalibrated wells would be noticeable in the final model.
In this case, both the synthetic seismic data and the acoustic impedance cube captured the main geologic features of this complex reservoir, as is noticeable in the correlation coefficients between the seismic and the synthetic amplitudes. The quality of the seismic data takes a minor role, since the method avoids the imposition of artificial correlations that happens in the standard methods.
Since the co-simulation of the impedances uses a local correlation coefficient, it is possible to compute the local uncertainty associated with the seismic acoustic impedances.
The uncertainty of the seismic acoustic impedance can be used to assess the uncertainty associated with the porosity model, as presented in [24].
Tests show that the final correlation coefficient varies by about 2% when the initial seed is changed, which means that other parameters, such as the number and size of the layers, can be optimized to produce better results. These results become harder to reach as the geology and the structural model grow more complex.
To handle different geology scenarios, such as channels or reservoirs with specific shapes, an adapted approach is proposed in the next part of this work.
5. Multipoint statistics
The objective of this work is to build a reservoir model with multiple alternatives, thereby
assessing uncertainty about the reservoir, integrating information from different sources
obtained at different resolutions:
- Well data, which is sparse but of high resolution, on the scale of a foot;
- Seismic data, which is exhaustive but of much lower resolution, on the scale of tens of feet in the vertical direction;
- Conceptual geological models, which can quantify reservoir heterogeneity from the layer scale to the basin scale.
This last point has been possible using a variogram in algorithms such as DSS and Co-DSS, previously described, which allow integration of well and seismic data using a pixel-based approach. Those variogram-based models are inadequate for integrating geological concepts, since the variogram is too limited in capturing complex geological heterogeneity and is a two-point statistic that poorly reflects the geologist's prior conceptual vision of the reservoir architecture, e.g., sand channels as in [20].
Integration of geological information beyond two-point variogram reproduction becomes critical in order to quantify heterogeneity more accurately and to assess realistically the uncertainty of the model description.
A solution was initiated by practitioners from the oil industry in Norway, where Boolean object-based algorithms were introduced in the late 1980s to simulate random geometry, as in [25,26]. These parametric shapes, such as sinusoidal channels or ellipsoidal lenses, are placed in the simulated volume. Through an iterative process each shape is changed, displaced or removed to fit the conditioning statistics and local data.
The simulated objects finally resemble the geologist's drawings or photographs of present-day depositions.
Once the enthusiasm passed, the limitations of object-based simulation algorithms became obvious: this iterative, perturbation-type algorithm for data conditioning did not converge in the presence of dense hard data and could not account for diverse data types, such as seismic used as soft data.
The inability to simulate continuous variables, and the time and CPU demands in large 3D cases, add to the drawbacks of the methodology.
Strebelle [20], following the works of Srivastava [27] and Caers [28], proposed an alternative approach, multipoint statistics, that combines the easy conditioning of pixel-based algorithms with the ability to reproduce shapes of object-based techniques, without relying on excessive CPU demand.
Multipoint statistics uses a training image instead of a variogram to account for geological
information. The training image describes the geometrical facies patterns believed to
represent the subsurface.
Training images do not need to carry any local information about the actual reservoir; they only reflect a prior geological/structural concept.
Basically, multipoint statistics consists of extracting patterns from the training image, and
anchoring them to local data, i.e. well logs and seismic data. Several training images
corresponding to alternative geological interpretations can be used to account for the
uncertainty about the reservoir architecture.
Training images, like the variogram, can be a questionable subject when the model to be generated has few hard data or when no pre-conceptual model has been studied by geologists or geomodelers.
Using this new tool, the next objective is to illustrate the multipoint statistical methodology applied to seismic inversion.
5.1. Seismic inversion using multipoint statistics
For the case study, the Stanford VI synthetic reservoir dataset described in [29] was used. This synthetic reservoir (figure 15) consists of meandering channels, a sand vs mud system, and each facies was populated with rock impedance using sequential simulation.
To mimic a seismic inversion, this high resolution rock impedance model was smoothed by a low-pass filter to obtain a typical acoustic impedance response and then convolved into an amplitude model that is considered the reference seismic amplitude.
Figure 15. Facies model (left), high resolution rock impedance (middle left), smoothed rock impedance (middle right) and forwarded seismic synthetic amplitude (right)
5.2. Probabilistic approach
For comparison of the methodology, the application of traditional probabilistic modeling on the Stanford VI acoustic impedance is presented, i.e. calibrating the acoustic impedance into a 3D facies probability cube and then using it as a soft data constraint for multiple-point geostatistical simulation.
Several techniques exist to calibrate one or more seismic attributes with well data, such as PCA, ANN, etc.; here a simple Bayes approach (equation 7) is used, as documented in [30]. In a Bayesian method one uses the histogram of impedance for each facies, denoted as:
P(AI | facies_f), f = 1, 2, …   (7)
Using Bayes' rule one can calculate the probability of each facies for given impedance values (equation 8):
P(facies_f | AI) = P(AI | facies_f) · P(facies_f) / P(AI)   (8)
where P(facies_f) is the global proportion of facies_f and P(AI) is derived from the histogram of the impedance values. The final result is a cube of probabilities for each facies type based on the acoustic impedance cube. In figure 16, one can see the result of the application of this method to the case study data.
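A minimal sketch of this Bayesian calibration (equation 8), with assumed two-facies histograms and proportions rather than the actual Stanford VI statistics:

```python
import numpy as np

# Per-facies impedance histograms, e.g. from the well logs (assumed values,
# two facies; the real case would use the calibrated histograms of [30]).
bins = np.array([5000.0, 6000.0, 7000.0, 8000.0, 9000.0])   # AI bin edges
p_ai_given_sand = np.array([0.05, 0.15, 0.45, 0.35])        # P(AI | sand)
p_ai_given_mud = np.array([0.40, 0.35, 0.20, 0.05])         # P(AI | mud)
p_sand, p_mud = 0.35, 0.65                                  # global proportions

def facies_prob(ai):
    """Equation 8: P(facies | AI) for every cell of an AI cube."""
    k = np.clip(np.digitize(ai, bins) - 1, 0, len(bins) - 2)
    num_sand = p_ai_given_sand[k] * p_sand
    num_mud = p_ai_given_mud[k] * p_mud
    p_ai = num_sand + num_mud            # P(AI), the Bayes normalizer
    return num_sand / p_ai, num_mud / p_ai

ai_cube = np.array([[5500.0, 7500.0], [6500.0, 8500.0]])
prob_sand, prob_mud = facies_prob(ai_cube)
# Low impedance cells favor mud; high impedance cells favor sand.
```

Applied cell by cell to the whole impedance cube, this yields the facies probability cubes of figure 16.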
This probability data can be used by different geostatistical algorithms to create a facies model. In our case the simulated facies are obtained with the multi-point method snesim, conditioned to the previously calculated probability cubes, to the training image and to the rotation and affinity cubes.
Figure 16. Result of the Bayesian approach for the calculus of the probabilities cubes for case study and
the simulated facies model.
To verify our hypothesis that this facies model does not match the field amplitude, the facies model is forward modeled to a synthetic amplitude dataset. We assumed the ideal situation where an exact forward model was available. Figure 17 confirms that the forward modeled amplitude does not match the field data. In fact, the co-located correlation coefficient is 0.20.
Figure 17. Real seismic amplitude (left) vs forward modeled amplitude derived from the simulated
facies model using a traditional probabilistic approach (right).
5.3. Algorithm description using multipoint statistics
For the case of multi-point statistics, the seismic inversion algorithm has gone through some changes, mainly caused by the simulation algorithm, which does not create acoustic impedance models directly, but facies models that can be populated with acoustic impedance values later.
So the method is initialized by simulating N facies models conditioned only to the wells and training images using snesim; in this case, additional channel azimuth and affinity constraints (as interpreted from seismic or geological understanding) are enforced (figure 18).
Figure 18. Initial simulations conditioned only to the training image and rotation/affinity.
The facies models are then populated with acoustic impedances and converted into synthetic seismograms, and the best genes from each of the images are chosen and kept (see figure 19). In the next step there is also a modification, since one cannot use the correlation values directly in the snesim simulator, so a modification of the probability perturbation method, as in [31], is used to create a probability for each facies.
This facies probability cube is then used as a soft constraint to generate the next set of N cubes. This is done using Journel's tau-model, as in [10], to integrate probabilistic information from various sources. The process converges like the previous algorithm, and it can be summarized in the following steps:
i. Generate a set of initial 3D images of facies by using snesim.
ii. Populate the facies models with acoustic impedances.
iii. Create the synthetic seismogram of amplitudes, by convolving the reflectivity, derived from the acoustic impedances, with a known wavelet.
iv. Evaluate the match between the synthetic seismograms of the entire 3D image and the real seismic by computing, for example, local correlation coefficients.
v. Rank the best images based on the match (e.g. the average value or a percentile of the correlation coefficients for the entire image). From them, select the best parts (the columns or horizons with the best correlation coefficient) of each image and compose one auxiliary image with the selected best parts for the next simulation step.
vi. Transform the cube with the best parts and the corresponding best values into a probability cube for each facies to be simulated.
vii. Generate a new set of images using the snesim adaptation to integrate secondary information, with the probability cubes used as the local co-regionalization model, and return to step ii), starting an iterative process that ends when the match between the synthetic seismogram and the real seismic is considered satisfactory or when a given threshold of the objective function is reached.
viii. The last step of the process can be the transformation of the optimized facies cube into acoustic impedance, porosity or any internal reservoir characteristic.
As noted in the previous algorithm, in step vi), in the very first steps of the iterative process the probability data may not have the spatial continuity pattern of the primary simulated facies, as it results from a composition of different parts of a set of simulated images. As the process continues, that soft secondary image (the probability cubes) tends to acquire the same spatial pattern as the images generated by co-simulation, i.e. the imposed training image altered by the affinity and azimuth.
5.4. Co-generation of acoustic images with multipoint statistics
The major change made in the adaptation to MPS is that the acoustic impedance values are estimated, or populated, in the simulated facies model. This can raise some reservations, but if one considers that the main objective is comparing the reflection coefficients between the real and the simulated seismic amplitude, those reflection coefficients are most accentuated at changes of terrain type, which can mark a channel or crevasse. Inside the channels the rock type does not vary much, which is visible in the seismic profiles by the presence of low amplitude seismic. So this difference in the algorithm can be considered a valid one.
The second difference is that, to co-simulate a facies model, one cannot use acoustic impedance values (a continuous variable) as soft data; so instead of choosing from the simulated acoustic models, the algorithm builds the Best Facies cube from the simulated facies models (figure 19).
Figure 19. Process of creating Best Mismatch cube and Best Facies cube.
As in the methodology for acoustic impedance models, the process starts by analyzing each of the mismatch cubes created previously and, at each position of the cubes (x0), comparing which has the higher correlation value.
Point set 1a and point set 1b are compared; of those, 1a has the higher correlation value, so it is copied (1c) to the Best Mismatch cube.
At the same time, since the first simulation showed a better correlation coefficient (CC) with the real seismic, another cube is created with the corresponding facies values (1d).
The process proceeds in the same way to the next set of values from the CC cubes, comparing sets 2a and 2b; in this case it is the second cube that has the higher cc value (or lower mismatch), so these values are copied to the Best Mismatch cube (2c). The same is done with the facies cube of this simulation, copying the facies values to the Best Facies cube (2d). This process continues until all N simulated models have been analyzed, over all the values of the cubes.
In this way two new cubes are created: one with the best genes from each previously generated facies model (Best Facies) and another with the confidence factor of each part (Best Mismatch). As noted above, the least mismatch cube can be seen as a summary of the least mismatch of all N realizations. The best facies cube can be seen as a facies model, combined from all N facies models, that best matches the seismic data. However, the best facies cube does not have the same geological concept as the training image and may have various artifacts, since it is constructed by copying several sets from independently generated facies models.
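The cell-by-cell scan described above can be sketched as follows (toy cubes, not the Stanford VI data):

```python
import numpy as np

def best_cubes(cc_cubes, facies_cubes):
    """At every position x0, keep the correlation of the realization with the
    highest local cc (Best Mismatch cube) and its facies value (Best Facies cube)."""
    cc = np.stack(cc_cubes)            # (n_sims, nx, ny, nz)
    fa = np.stack(facies_cubes)
    winner = np.argmax(cc, axis=0)     # index of the winning realization per cell
    best_cc = np.take_along_axis(cc, winner[None, ...], axis=0)[0]
    best_facies = np.take_along_axis(fa, winner[None, ...], axis=0)[0]
    return best_cc, best_facies

# Two realizations of a 1 x 1 x 2 cube: realization 0 wins at the first cell,
# realization 1 at the second.
cc0, cc1 = np.array([[[0.9, 0.1]]]), np.array([[[0.2, 0.8]]])
fa0, fa1 = np.array([[[1, 1]]]), np.array([[[0, 0]]])
best_cc, best_facies = best_cubes([cc0, cc1], [fa0, fa1])
# best_cc -> [[[0.9, 0.8]]], best_facies -> [[[1, 0]]]
```

Because neighboring cells can come from different realizations, the composite inherits exactly the artifacts the text describes.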
The third difference is that these two new cubes cannot be used directly for the co-simulation of the new generation of facies models, but the best facies cube can be used indirectly to improve the existing N facies models.
In order to do this, one uses a modification of the probability perturbation method as in [24]. Caers' method was developed to solve non-linear inverse problems under a prior model constraint. It allows the conditioning of stochastic simulations to any type of non-linear data. The principle of this method relies on perturbing the probability models used to generate a chain of realizations that converge to match any type of data.
In this methodology the unknown pre-posterior probability Prob(A_j | C) is modeled using a single parameter model (equation 9):
P(A_j | C) = P(I(u_j) = 1 | C) = (1 − r_C) · i_B^(0)(u_j) + r_C · P(A_j),  j = 1, …, N   (9)
where A is the unknown data, B is the well data and the previously simulated facies indicators, C is the best facies cube, r_C does not depend on u_j and lies in [0, 1], and i_B^(0)(u_j), j = 1, …, N, is an initial realization conditioned to the B-data only.
According to the methodology proposed in this work, equation 9 is adapted with r_C now representing the mismatch between the field and synthetic seismic, as summarized by the least-mismatch cube (correlation coefficient), and i_B^(0) corresponding to the presence of the facies with the least mismatch. This adaptation leads to the following equation 10:
P(A_j | C) = r_C(u_j) · I(u_j) + (1 − r_C(u_j)) · P(A_j),  j = 1, …, N   (10)
where r_C(u_j) is the correlation coefficient, in [0, 1], extracted from the least mismatch cube, I(u_j) is the sand indicator extracted from the best facies cube, with j = 1, …, N facies, and P(A_j) is the global proportion of the considered facies. This expression generates a facies probability cube which is a mixture of the global proportion and the best facies cube.
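Equation 10 reduces to a simple blend; a sketch with assumed values:

```python
import numpy as np

def facies_probability(cc, best_facies_indicator, global_prop):
    """Equation 10: blend the best-facies indicator with the global proportion,
    weighted by the local correlation used as a confidence in [0, 1]."""
    cc = np.clip(cc, 0.0, 1.0)
    return cc * best_facies_indicator + (1.0 - cc) * global_prop

# Where the synthetic matched well (cc = 0.9) and the best cube says sand (1),
# the sand probability is pulled towards 1; with no confidence (cc = 0) it
# falls back to the global sand proportion (0.35, an assumed value).
p = facies_probability(np.array([0.9, 0.0]), np.array([1, 1]), 0.35)
# p -> [0.935, 0.35]
```

This is what lets a continuous confidence measure steer a categorical facies simulation.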
This facies probability cube is then used as a soft constraint to generate the next set of N cubes. This is done using Journel's tau-model to integrate probabilistic information from various sources (see equation 11 and figure 20):
x = (b · c) / a   (11)
where:
x = [1 − P(A | B, C)] / P(A | B, C),  b = [1 − P(A | B)] / P(A | B),
c = [1 − P(A | C)] / P(A | C)  and  a = [1 − P(A)] / P(A)
Here a carries the information of the global facies proportion, b the influence of the training image, and c the conditioning of the soft probability cube.
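Under the permanence-of-ratios assumption (tau = 1), equation 11 combines the two conditional probabilities as follows; the probability values are illustrative only:

```python
def odds(p):
    """Distance (1 - p) / p used by the permanence-of-ratios model."""
    return (1.0 - p) / p

def tau_model(p_a, p_a_given_b, p_a_given_c):
    """Equation 11 with tau = 1: x = b * c / a, converted back to a probability."""
    a, b, c = odds(p_a), odds(p_a_given_b), odds(p_a_given_c)
    x = b * c / a
    return 1.0 / (1.0 + x)

# Global sand proportion 0.35; the training-image conditioning gives 0.60 and
# the soft probability cube gives 0.70 (illustrative values only).
p = tau_model(0.35, 0.60, 0.70)
```

Because both sources raise the probability above the global proportion, the combined value (13/15 ≈ 0.867) ends up above either conditional probability alone.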
Figure 20. Next generation of simulations conditioned to the training image, rotation/affinity and
probability cubes.
This iterative algorithm is run until the global mismatch between the synthetic seismic of the facies model and the field seismic data reaches an optimal minimum.
5.5. Results
To illustrate the method, 6 iterations with N = 30 simulations were computed on the Stanford VI dataset. Note that the availability of perfect geological information and a perfect forward model was assumed.
The algorithm converges, as shown by a systematic increase in the global correlation coefficient between the model and the amplitude data (figure 21). The facies model with the highest correlation out of the 30 models in the last iteration has a correlation of 0.88.
Figure 21. Convergence of the algorithm.
Figure 22. Comparison between the two methods
(Chart data from figure 21: over the iterations the average difference drops from about 60.49 to 24.17 while the global correlation rises from 0.17 to 0.88; the legend distinguishes Best Minimum Difference and Best Global Correlation.)
(Figure 22 compares, through forward modeling, the reference data facies cube and field seismic with the facies cubes of the proposed approach, correlation 0.88, and of the probabilistic approach, correlation 0.20, each converted to seismic amplitude.)
The final facies model (top left in figure 22) does not contain any artifacts, i.e. it reproduces well the geological continuity of the training image.
To check how well the seismic amplitude data is reproduced, the best facies model is forward modeled to synthetic seismic (see figure 22). Clearly the method matches the field seismic well, particularly when compared with the probabilistic approach.
In figure 23 some slices of the results are presented, where it is possible to see some similarities between the seismic response of the proposed approach and the reference data seismic amplitude.
Figure 23. Correlation slices
In summary, as stated previously, the local maximum correlation or local minimum mismatch depends on the parameters used to calculate the mismatches. The sensitivity to some of these parameters is analyzed further:
- Number of iterations
Most of the optimization is obtained in the first iteration; after this the process tends to stabilize (figure 24). Hence, without changing any of the other parameters, the number of iterations does not have a big influence on the optimization.
Figure 24 compares the values of correlation and average difference of the simulation model with the best result, for different numbers of simulated facies models N. These values are presented only for the first and last iterations.
For the first iteration the difference is not very noticeable, but in the last iteration the correlation value is higher as the number of simulations increases.
- Number of facies simulations (N) per iteration
This parameter can have a major influence on the optimization: with numerous simulations the process has a wide variety of possibilities to choose from and can build a more precise best-facies cube; but if the simulations are very alike, the choice of N will not matter.
Figure 24. Influence of different number of iterations (1 to 6) and simulations (20, 30 and 40).
- Criterion of convergence
The difference criterion is more precise than the correlation, since the correlation gives a value of the similarity between two sets of data, while the difference is more sensitive to different patterns or small variations in the patterns. But the correlations are needed to calculate the probabilities.
- Size of the columns
In the previous test the cubes were divided into 4 layers with 20 values per column. Since the correlation has some sensitivity to the number of data used, the results also change. For larger columns the convergence tends to be slower, since it is more difficult to match an entire column than to match piece-wise. In general we recommend that the number of layers be chosen according to the vertical resolution of the seismic data, i.e. for lower resolution fewer layers could be retained.
6. Conclusions
The proposed algorithm uses two different approaches to create a stochastic model of a reservoir property conditioned to both well and seismic data. Since these two data types are totally different, the process had to be an iterative one, using the maximum correlation or minimum mismatch as the objective function.
The first approach uses direct sequential co-simulation as the method of transforming 3D images in an iterative process. Hence it generates, at each iterative step, global acoustic impedance images with the same spatial pattern, without artificially imposing a good match in areas of low confidence in the seismic quality.
It means that in those areas the final images will reflect high uncertainty. This uncertainty of the seismic acoustic impedance is used to assess the uncertainty associated with the porosity model, by co-simulating the porosity model using the generated final acoustic impedance model and its correlation coefficient cube as soft or secondary data.
Another example is the capacity to improve the seismic resolution, when the acoustic impedance model is created below the original resolution of the seismic. This is possible because the well data, which are the main source for building the model, have a much higher vertical resolution than the seismic.
Computation time is no longer a drawback, since the algorithm has been implemented on a parallel computing platform [32]. The algorithm has also been implemented and benchmarked with several real reservoirs, e.g. [33].
The multipoint approach is a new application of the most recent stochastic image-generation methods and complements the first approach. The modifications made from the first approach to the second are not too complex, except for the population of the acoustic impedance cube; for this example, the channels were simply populated with previously simulated acoustic impedance values. To remove this workaround, an algorithm by [34] simulates any variable within the channel, making this step more realistic.
The program still needs to be improved, the algorithm can be optimized for real-case applications, and the lack of training images suitable for all kinds of examples is still a subject of research.
Author details
Hugo Caetano
Partex Oil and Gas, Portugal
Acknowledgement
This work was created based on research carried out at the Centro de Modelização de Reservatórios Petrolíferos (CMRP) of Instituto Superior Técnico (IST), Universidade Técnica de Lisboa (UTL), with sponsorship from Partex Oil and Gas, and at the Stanford Center for Reservoir Forecasting (SCRF), Stanford University, California, USA, with financial support from the Fulbright Foundation and Fundação Luso-Americana.
7. References
[1] Soares, A., 1998, Sequential Indicator with Correlation for Local Probabilities, Mathematical Geology, Vol. 30, No. 6, p. 761-765
[2] Deutsch, C. V.; Journel, A. G., 1998, GSLIB: Geostatistical Software Library and User's Guide, 2nd ed., Oxford University Press, New York
[3] Soares, A., 2001, Direct Sequential Simulation and Co-Simulation, Mathematical
Geology, Vol. 33, No. 8, p. 911-926
[4] Xu, W., Tran T.T., Srivastava, M., Journel A., 1992, Integrating Seismic data in reservoir
modeling: the collocated cokriging alternative. SPE #24742
[5] Doyen, P.M.; Psaila, D. E.; Strandenes, S., 1994, Bayesian Sequential Indicator
Simulation of Channel Sands from 3-D Seismic Data in The Oseberg Field, Norwegian
North Sea, SPE 28382.
[6] Dubrule, O., 1993, Introducing More Geology in Stochastic Reservoir Modeling, Geostatistics Tróia '92, Soares Ed., Vol. 1, pp. 351-370
[7] Bortolli, L., Alabert, F., Haas, A., Journel, A., 1993, Constraining Stochastic Images to Seismic Data, Geostatistics Tróia '92, Soares Ed., Vol. 1, pp. 325-338
[8] Haas, A., Dubrule, O., 1994, Geostatistical Inversion: A Sequential Method for Stochastic Reservoir Modelling Constrained by Seismic Data, First Break, 12, No. 11, 561-569
[9] Grijalba-Cuenca, A., Torres-Verdín, C., 2000, Geostatistical Inversion of 3D Seismic Data to Extrapolate Wireline Petrophysical Variables Laterally away from the Well, SPE 63283
[10] Journel, A. G., 2002, Combining Knowledge from Diverse Sources: an Alternative to
Traditional Data Independence Hypothesis, Mathematical Geology, v. 34, No. 5, July
2002
[11] Mantilla, A., 2002, Predicting Petrophysical Properties by Simultaneous Inversion of
Seismic and Reservoir Engineering Data, PhD thesis, Stanford University
[12] Tarantola, A., 1984, Inversion of Seismic Reflection Data in the Acoustic
Approximation Geophysics 49, 1259-1266
[13] Russell, B.H., 1988, Introduction to Seismic Inversion Methods, Society of Exploration
Geophysicists, 90 pp.
[14] Sheriff, R. E. and L. P. Geldart, 1995, Exploration Seismology, Cambridge University
Press, New York, 592 pp.
[15] Caetano, H., Caers, J., 2006, Constraining Multiple-point Facies Models to Seismic
Amplitude Data, SCRF 2006
[16] Caetano, H., Caers, J., 2007, Geostatistical Inversion Constrained to Multi-Point
Statistical Information Derived from Training Images, EAGE Petroleum
Geostatistics, Cascais, September 2007
[17] Soares, A., Diet, J.D., Guerreiro, L., 2007, Stochastic Inversion with a Global
Perturbation Method, EAGE Petroleum Geostatistics, Cascais, September 2007
[18] Carvalho, J., Soares, A., Bio, A., 2006, Classified Satellite Sensor Data Calibration with
Geostatistical Stochastic Simulation, International Journal of Remote Sensing, 27(16), p.
3375-3386.
[19] Mata-Lima, H., 2008, Reservoir Characterization with Iterative Direct Sequential Co-Simulation: Integrating Fluid Dynamic Data into Stochastic Model, Journal of Petroleum Science and Engineering, 62(3-4), p. 59-72.
[20] Strebelle, S., 2002, Conditional Simulation of Complex Geological Structures Using Multiple-Point Statistics, Mathematical Geology, v. 34, No. 1, January 2002.
[21] Strebelle, S., 2001, Sequential Simulation Drawing Structures from Training Images, PhD thesis, Stanford University
[22] Hulstrand R. F., Abou Choucha, M. K. A., Al Baker, S. M., 1985, Geological Model of
the Bu hasa / Shuaiba Reservoir, Society of Petroleum Engineers
[23] Guerreiro, L., Caetano, H., Maciel, C., Real, A., Silva, F., Al Shaikh, A., Al Shemsi, M. A.,
2007, Geostatistical Seismic Inversion Applied to a Carbonate Reservoir, EAGE
Petroleum Geostatistics, Cascais, September 2007
[24] Caeiro, H., 2007, Caracterização da Incerteza da Porosidade de um Reservatório Petrolífero, MSc Thesis, Instituto Superior Técnico
[25] Stoyan, D., Kendall, W., Mecke, J., 1987, Stochastic Geometry and Its Applications, Wiley, N.Y.
[26] Haldorsen, H., Damsleth, E., 1990, Stochastic Modeling, Journal of Petroleum Technology, April 1990, p. 404-412.
[27] Srivastava, R. M., 1992, Reservoir Characterization with Probability Field Simulation, SPE paper no. 24753
[28] Caers, J., Journel, A. G., 1998, Stochastic Reservoir Simulation Using Neural Networks Trained on Outcrop Data, SPE paper no. 49026
[29] Castro, S., Caers, J., Mukerji, T., 2005, The Stanford VI Reservoir, SCRF 2005, Stanford University
[30] Caumon, G., Journel, A., 2005, Early Uncertainty Assessment: Application to a
Hydrocarbon Reservoir Appraisal, Quantitative Geology and Geostatistics Volume 14,
Pages 551-557
[31] Caers, J., 2003, History matching under training-image based geological model
constraints, SPE Journal, v. 7: 218-226
[32] Vargas, H., Caetano, H., Filipe, M., 2007, Parallelization of Sequential Simulation
Procedures, EAGE Petroleum Geostatistics, Cascais, September 2007
[33] Guerreiro, L., Caetano, H., Maciel, C., Real, A., Silva, F., Al Shaikh, A., Ali Al Shemsi, M. A., Soares, A., 2007, Global Seismic Inversion - A New Approach to Integrate Seismic Information in the Stochastic Models, SPE 111305, 2007 SPE/EAGE Reservoir Characterization and Simulation Conference, Abu Dhabi
[34] Horta, A., Caeiro, M.H., Nunes, R., Soares, A., 2008, Simulation of Continuous Variables at Meander Structures: Application to Contaminated Sediments of a Lagoon
Section 3
Reservoir Management
Chapter 7
Transient Pressure and Pressure Derivative
Analysis for Non-Newtonian Fluids
Freddy Humberto Escobar
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/50415
1. Introduction
Conventional well test interpretation models do not work in reservoirs containing non-Newtonian fluids such as completion and stimulation treatment fluids (polymer solutions, foams, and drilling muds, the latter not strictly a reservoir fluid since the well is cleaned before testing to remove drilling invasion fluids, although it does obey the power law), as well as some paraffinic oils and heavy crude oils. Non-Newtonian fluids are generally classified as time independent, time dependent, and viscoelastic. Examples of the first class are the Bingham, pseudoplastic and dilatant fluids, Figure 1, which are those most commonly dealt with by petroleum engineers.
As a special kind of non-Newtonian fluid, Bingham fluids (or plastics) exhibit a finite yield stress at zero shear rate. There is no gross movement of fluid until the yield stress, τy, is exceeded; once it is, additional shear stress is required to increase the shear rate, i.e. the fluid then behaves as a Newtonian fluid. When the shear stress, τ, is plotted against the shear rate, γ̇, in Cartesian coordinates, these fluids plot as a straight line crossing the y axis at τ = τy. The characteristics of these fluids are defined by two constants: the yield stress, τy, which is the stress that must be exceeded for flow to begin, and the Bingham plastic coefficient, μB. The rheological equation for a Bingham plastic is:

\tau = \tau_y + \mu_B\, \dot{\gamma}
The Bingham plastic concept has been found to approximate closely many real fluids
existing in porous media, such as paraffinic oils, heavy oils, drilling muds and fracturing
fluids, which are suspensions of finely divided solids in liquids. Laboratory investigations
have indicated that the flow of heavy-oil in some fields has non-Newtonian behavior and
approaches the Bingham type.
(Figure 1 plots shear stress versus shear rate for a Bingham plastic fluid, a pseudoplastic fluid, a Newtonian fluid and a dilatant fluid, with the slope of each curve indicated.)
Figure 1. Schematic representation of time-independent fluids
Pseudoplastic and dilatant fluids have no yield point. For pseudoplastic fluids, the slope of shear stress versus shear rate decreases progressively and tends to become constant at high values of shear stress. The simplest model is the power law:

\tau = k\, \dot{\gamma}^{\,n}; \quad n < 1

k and n are constants which differ for each particular fluid: k measures the flow consistency and n measures the deviation from Newtonian behavior, for which k = μ and n = 1. Dilatant fluids are similar to pseudoplastic fluids except that the apparent viscosity increases as the shear stress increases. The power-law model also describes the behavior of dilatant fluids, but with n > 1.
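A minimal sketch of these time-independent rheological models (illustrative only; the parameter values below are made up):

```python
def bingham_stress(gamma_dot, tau_y, mu_B):
    # Bingham plastic: no flow until the yield stress tau_y is exceeded
    return tau_y + mu_B * gamma_dot

def power_law_stress(gamma_dot, k, n):
    # Power law: n < 1 pseudoplastic, n = 1 Newtonian, n > 1 dilatant
    return k * gamma_dot ** n

def apparent_viscosity(gamma_dot, k, n):
    # tau / gamma_dot = k * gamma_dot**(n - 1)
    return k * gamma_dot ** (n - 1)

# Pseudoplastic (n < 1): apparent viscosity falls as shear rate grows
print(apparent_viscosity(1.0, 0.5, 0.6))    # 0.5
print(apparent_viscosity(100.0, 0.5, 0.6))  # ≈ 0.0792

# Dilatant (n > 1): apparent viscosity rises with shear rate
print(apparent_viscosity(100.0, 0.5, 1.5) > apparent_viscosity(1.0, 0.5, 1.5))  # True
```

For n = 1 the power law collapses to Newton's law of viscosity with k playing the role of μ.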
Currently, unconventional reservoirs are among the most influential subjects in the oil industry. Shale reservoirs, coalbed gas, tight gas, gas hydrates, gas storage, geothermal energy, coal-to-gas conversion, in-situ gasification and heavy oil are considered unconventional resources. In the field of well testing, several analytical and numerical models taking into account Bingham, pseudoplastic and dilatant non-Newtonian behavior have been introduced in the literature to study their transient nature in porous media for better reservoir characterization. Most of them deal with fractured wells and homogeneous or double-porosity formations, with well test interpretation conducted via straight-line conventional analysis or type-curve matching; recently, some studies involving the pressure derivative have also been introduced.
When a treatment with a non-Newtonian fluid must be conducted in an oil-bearing formation, the treatment fluid comes in contact with the conventional oil, which possesses a Newtonian nature. This implies the definition of two media with entirely different mobilities. If a pressure test is run in such a system, interpreting the data through conventional straight-line methods may be erroneous and may not provide a way to verify the results obtained.
The purpose of this chapter is to provide the most up-to-date state of the art on transient analysis of non-Newtonian fluids and to present both conventional and modern methodologies for well test interpretation in reservoirs saturated with such fluids. Special attention is given to the use of the pressure and pressure derivative for both homogeneous and double-porosity formations.
2. Non-Newtonian fluids in transient pressure analysis
Non-Newtonian fluids are often used during various drilling, workover and enhanced oil recovery processes. Most of the fracturing fluids injected into reservoir-bearing formations exhibit non-Newtonian behavior, yet these fluids are often approximated by Newtonian fluid flow models. In the field of well testing, several analytical and numerical models taking into account Bingham and pseudoplastic non-Newtonian behavior have been introduced in the literature to study the transient nature of these fluids in porous media for better reservoir characterization. Most of them deal with fractured wells and homogeneous formations, with well test interpretation conducted via straight-line conventional analysis or type-curve matching. Only a few studies consider pressure derivative analysis. There is, however, a need for a more practical and accurate way of characterizing such systems.
Many studies in petroleum engineering, chemical engineering and rheology have focused on non-Newtonian fluid behavior through porous formations; among them we can name [6, 9, 10, 18, 20, 23]. Several numerical and analytical models have been proposed to study the transient behavior of non-Newtonian fluids in porous media. Since all of them were published before the eighties, when the pressure derivative concept did not yet exist, interpretation was conducted using either conventional analysis or type-curve matching.
It is worth recognizing that Ikoku has been the researcher who contributed the most to the modeling of non-Newtonian power-law fluids, as demonstrated in the works of [9,10,11,13]. All of these models were later used by other researchers for further development of test interpretation techniques. For instance, reference [24] presented a study of pressure fall-off behavior after the injection of a non-Newtonian power-law fluid. [14] presented a study using elliptical flow for transient analysis interpretation in polymer-flooding EOR, since polymer solutions also exhibit non-Newtonian rheological behavior such as in-situ shear-thinning and shear-thickening effects.
[25] used the pressure-derivative concept for well test analysis of non-Newtonian fluids for the first time, and later [12] presented the first extension of the TDS (Tiab's Direct Synthesis) technique, [21], to non-Newtonian fluids. [7] used type-curve matching for interpretation of pressure tests for non-Newtonian fluids in infinite systems with skin and wellbore storage effects. Recent applications of the derivative function to non-Newtonian system solutions are presented by [1] and [15], who applied the TDS technique to radial composite reservoirs with a non-Newtonian/Newtonian interface for pseudoplastic and dilatant systems, respectively.
As far as non-Newtonian fluid flow through naturally fractured reservoirs is concerned, only one study, presented by [19], is reported in the literature. He presented the analytical solution for the transient behavior of double-porosity infinite formations bearing a non-Newtonian pseudoplastic fluid; his analytical solution also considers wellbore storage effects and skin factor. [2] then used the analytical solution without wellbore storage and skin introduced by [19] to develop an interpretation technique using the pressure and pressure derivative, so that expressions to estimate the Warren and Root parameters [26] (dimensionless storage coefficient and interporosity flow parameter) were found and successfully tested with synthetic data.
3. Pseudoplastic infinite-acting radial flow regime in homogeneous
formations
Interpretation of pressure tests for non-Newtonian fluids is performed differently from that for conventional Newtonian fluids. During the radial flow regime, non-Newtonian fluids exhibit a pressure derivative curve which is not horizontal but rather inclined: as shown by [12], the smaller the value of n (flow behavior index), the more inclined the infinite-acting pressure derivative line, see Figure 2.
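This trend can be checked numerically, assuming the radial-flow slope relation α = (1 - n)/(3 - n) that appears later as Equation 15:

```python
def radial_slope(n):
    # Log-log slope of the infinite-acting pressure derivative line
    # for a power-law fluid; alpha = 0 recovers the flat Newtonian line
    return (1.0 - n) / (3.0 - n)

# Slope grows monotonically as the flow behavior index n decreases
for n in (1.0, 0.8, 0.6, 0.4, 0.2):
    print(n, round(radial_slope(n), 4))
# 1.0 -> 0.0, 0.8 -> 0.0909, 0.6 -> 0.1667, 0.4 -> 0.2308, 0.2 -> 0.2857
```

The values 0.1667 and 0.2308 match the α values of 0.17 and 0.23 quoted in the worked examples later in the chapter.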
A partial differential equation for radial flow of non-Newtonian fluids that follow a power-law relationship through porous media was proposed in [11]. Coupling the non-Newtonian Darcy's law with the continuity equation, the authors derived a rigorous partial differential equation:
\frac{\partial^2 P}{\partial r^2} + \frac{n}{r}\,\frac{\partial P}{\partial r} = \left(\frac{n\,\phi\,\mu_{eff}\,c_t}{k}\right)^{1/n}\left(\frac{\partial P}{\partial r}\right)^{(n-1)/n}\frac{\partial P}{\partial t} \qquad (1)
This equation is nonlinear. For analytical solutions, a linearized approximation was also
derived by [11]:
\frac{1}{r^{\,n}}\,\frac{\partial}{\partial r}\left(r^{\,n}\,\frac{\partial P}{\partial r}\right) = G\, r^{\,1-n}\,\frac{\partial P}{\partial t} \qquad (2)
where:

G = \frac{3792.188\, n\, \phi\, \mu_{eff}\, c_t}{k}\left(\frac{96681.605\, h}{q\, B}\right)^{1-n} \qquad (3)
and:

\mu_{eff} = \left(\frac{H}{12}\right)\left(9 + \frac{3}{n}\right)^{n}\left(1.59344\times 10^{-12}\, k\, \phi\right)^{(1-n)/2} \qquad (4)
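As a quick check of this reconstructed form of Equation 4, the sketch below evaluates it with the data of Example 1 later in the chapter (H = 20 cp·s^(n-1), n = 0.6, k = 100 md, φ = 20 %); it reproduces the effective viscosity of about 0.0646 cp·(s/ft)^(n-1) quoted there:

```python
def effective_viscosity(H, n, k_md, phi):
    # mu_eff = (H/12) * (9 + 3/n)**n * (1.59344e-12 * k * phi)**((1 - n)/2)
    # Field units: H in cp*s**(n-1), k in md; result in cp*(s/ft)**(n-1)
    return (H / 12.0) * (9.0 + 3.0 / n) ** n \
        * (1.59344e-12 * k_md * phi) ** ((1.0 - n) / 2.0)

print(round(effective_viscosity(20.0, 0.6, 100.0, 0.20), 5))  # ≈ 0.0646
```

For a Newtonian fluid (n = 1) the last factor collapses to 1 and μeff reduces to a constant viscosity.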
(Figure 2 shows log-log curves of (tD/CD)*PD' versus tD/CD for flow behavior indices n = 1, 0.8, 0.6, 0.4 and 0.2 at several values of CD e^(2s).)
Figure 2. Pressure derivative for a pseudoplastic non-Newtonian fluid in an infinite reservoir. After Reference [12]
The dimensionless quantities were also introduced by [10] as
P_{DNN} = \frac{k\, h\, \Delta P}{141.2\, q\, B\, \mu_{eff}}\left(\frac{96681.605\, h}{q\, B}\right)^{n-1} r_w^{\,n-1} \qquad (5)

t_{DNN} = \frac{t}{G\, r_w^{\,3-n}} \qquad (6)

P_{DN} = \frac{k\, h\, \Delta P}{141.2\, q\, B\, \mu} \qquad (7)

t_{DN} = \frac{0.0002637\, k\, t}{\phi\, \mu\, c_t\, r_w^2} \qquad (8)

r_D = \frac{r}{r_w} \qquad (9)
Where suffix N indicates Newtonian and suffix NN indicates non-Newtonian. The
dimensionless well pressure analytical solution in the Laplace space domain for the case of a
well producing a pseudoplastic non-Newtonian fluid at a constant rate from an infinite
reservoir is given in reference [11]:
\bar{P}_D(s) = \frac{K_{\nu}\!\left(\beta\sqrt{s}\; r_D^{\,1/\beta}\right)}{s\left[\sqrt{s}\, K_{1-\nu}\!\left(\beta\sqrt{s}\right) + C_D\, s\, K_{\nu}\!\left(\beta\sqrt{s}\right)\right]} \qquad (10)
where β = 2/(3-n) and ν = (1-n)/(3-n).
The dimensionless pressure derivative during the radial flow regime is governed by:

\left(t_D \ast P_D'\right)_{rNN} = 0.5\; t_{DNN}^{\;\alpha} \qquad (11)
[12] presented the following expression to estimate the permeability:

k = \left[\frac{0.5\, C(n)\, q\, \mu_{eff}}{2\, h\, r_w^{\,1-n}\left(t \ast \Delta P'\right)_r}\; t_r^{\,\alpha}\right]^{\frac{1}{1-\alpha}} \qquad (12)
where

C(n) = 0.1486\, n^2 - 0.178\, n + 0.3279
and n, the flow behavior index, may be found from the slope of the pressure derivative curve during the radial flow regime. [12] also introduced other expressions and correlations to find permeability, skin factor and wellbore storage coefficient using the maximum point (peak) found on the pressure derivative curve during wellbore storage effects; these are not shown here. The point of intersection between the early unit-slope line and the radial flow regime is used to estimate wellbore storage:
t_i = \frac{3.13\, q\, \mu_{eff}\, C\, e^{\,1.85(1-n)}}{k\, h\, r_w^{\,1-n}} \qquad (13)
Parameters in both Equations 12 and 13 are given in CGS (centimeters, grams, seconds) units.
[1] presented more practical expressions for the determination of both permeability and skin
factor:
k = \left\{\frac{70.6\, q\, B\, \mu_{eff}}{h\left(t \ast \Delta P'\right)_r}\left(\frac{96681.605\, h}{q\, B}\right)^{1-n}\left[\frac{0.0002637\, t_r}{n\, \phi\, c_t\, \mu_{eff}}\left(\frac{96681.605\, h}{q\, B}\right)^{n-1}\right]^{\alpha}\right\}^{\frac{1}{1-\alpha}} \qquad (14)
where α is the slope of the pressure derivative curve, defined by:
\alpha = \frac{1-n}{3-n} \qquad (15)
s = \frac{1}{2}\left(\frac{\Delta P_{rNN}}{\left(t \ast \Delta P'\right)_{rNN}} - \frac{1}{\alpha}\right)\left(\frac{t_{rNN}}{G\, r_w^{\,3-n}}\right)^{\alpha} \qquad (16)
4. Well pressure behavior in non-Newtonian/Newtonian interface
In many activities of the oil industry, engineers have to deal with completion and stimulation treatment fluids, such as polymer solutions and some heavy crude oils, which obey a non-Newtonian power-law behavior. When a treatment with a non-Newtonian fluid is conducted in an oil-bearing formation, the treatment fluid comes in contact with the conventional oil, which possesses a Newtonian nature. This implies the definition of two media with entirely different mobilities. If a pressure test is run in such a system, interpreting the data through conventional straight-line methods may be erroneous and may not provide a way to verify the results obtained. [13] therefore proposed a solution for the system sketched in Figure 3, which was solved numerically by [17].
[15] presented the pressure derivative behavior for this system for the first time, Figure 4. Notice in that plot that the pressure derivative shows an increasing slope as the flow behavior index decreases. Also, the derivative is flat during infinite-acting Newtonian behavior, as expected.
During the non-Newtonian region, region 1 in Figure 3, Equations 13 to 15 work well. For
the Newtonian region, region 2, the permeability and skin factor are estimated with the
equations presented by Tiab (1993) as:
(Figure 3 sketches a well at the center of a composite radial reservoir: region 1, the non-Newtonian fluid bank of radius ra(t) around the wellbore of radius rw, and region 2, the Newtonian region of constant viscosity extending to the external radius re.)
Figure 3. Composite non-Newtonian/Newtonian radial reservoir
(Figure 4 shows log-log curves of tD*PD' versus tDNN for n = 1, 0.8, 0.6, 0.4 and 0.2.)
Figure 4. Dimensionless pressure derivative behavior for ra = 200 ft. Non-Newtonian pseudoplastic case
k_2 = \frac{70.6\, q\, B\, \mu}{h\left(t \ast \Delta P'\right)_{r2}} \qquad (17)
s_2 = \frac{1}{2}\left[\frac{\Delta P_{r2}}{\left(t \ast \Delta P'\right)_{r2}} - \ln\left(\frac{k_2\, t_{r2}}{\phi\, \mu\, c_t\, r_w^2}\right) + 7.43\right] \qquad (18)
Suffix 2 denotes the Newtonian region.
[15] also found an expression to estimate the non-Newtonian permeability using the time of intersection of the non-Newtonian and Newtonian radial lines, tirN_NN:
k_{NN} = \left\{\frac{k_N\,\mu_{eff}}{\mu_N}\left(\frac{96681.605\, h}{q\, B}\right)^{1-n}\left[\frac{0.0002637\, t_{irN\_NN}}{n\, \phi\, c_t\, \mu_{eff}}\left(\frac{96681.605\, h}{q\, B}\right)^{n-1}\right]^{\alpha}\right\}^{\frac{1}{1-\alpha}} \qquad (19)
The radius of the injected non-Newtonian fluid bank is calculated using the following correlation (not valid for n = 1), obtained from reading the time at which the pressure derivative reaches its maximum value:
r_a = \left[\frac{t_{MAX}}{G\left(-0.2258731\, n^3 + 0.2734284\, n^2 - 0.56462\, n + 0.5178275\right)}\right]^{\frac{1}{3-n}} \qquad (20)
[13] found that the radius of the non-Newtonian fluid bank can also be found using the radius-of-investigation equation proposed by [10]:
r_a = \left[\Gamma\left(\frac{1}{3-n}\right)\frac{(3-n)^2\, t}{G}\right]^{\frac{1}{3-n}} \qquad (21)
where t is the end time of the straight line found on a non-Newtonian Cartesian graph of ΔP vs. t^((1-n)/(3-n)).
Later, [16] found that Equations 13, 14, 15 and 22 also work for dilatant systems, i.e. the case 1 < n < 2. The pressure derivative behavior is given in Figure 5. Notice that for this case the slope decreases as the flow behavior index increases. For the dilatant-Newtonian interface, the position of the front obeys the following equation:
r_a = \left[\frac{t_{e\_rNN}}{0.46811\, G\, e^{\,0.76241\, n}}\right]^{\frac{1}{3-n}} \qquad (22)
(Figure 5 shows log-log curves of tD*PD' versus tDNN for n = 1, 1.1, 1.2, 1.4, 1.5 and 1.6.)
Figure 5. Dimensionless pressure derivative behavior for ra = 200 ft. Non-Newtonian dilatant case
Example 1. A constant-rate injection test for a well in a closed reservoir was generated by [13] with the data given below. It is required to estimate the permeability and skin factor in each region and the radius of the injected non-Newtonian fluid bank.
PR = 2500 psi, re = 2625 ft, rw = 0.33 ft, h = 16.4 ft
φ = 20 %, k = 100 md, q = 300 BPD, B = 1.0 rb/STB
ct = 6.89x10^-6 1/psi, ra = 131.2 ft, H = 20 cp·s^(n-1), μN = 3 cp, n = 0.6
Solution. The log-log plot of pressure and pressure derivative against injection time is given
in Figure 6. Suffix 1 and 2 indicate the non-Newtonian and Newtonian regions, respectively.
From Figure 6 the following information was read:
tr1 = 0.3 hr, ΔPr1 = 541.54 psi, (t*ΔP')r1 = 105.45 psi
tr2 = 120 hr, ΔPr2 = 991.5 psi, (t*ΔP')r2 = 39.02 psi
tMAX = 1.3 hr, tirN_NN = 0.0008 hr
First, α is evaluated with Equation 15, giving 0.17, and a non-Newtonian effective fluid permeability of 100.4 md is found with Equation 14. Equation 4 is used to find an effective viscosity of 0.06465 cp·(s/ft)^(n-1). Then, the skin factor in the non-Newtonian region is found with Equation 16 to be 179.7.
(Figure 6 shows the log-log plot of ΔP and t*ΔP' versus injection time, annotated with the characteristic points listed above.)
Figure 6. Pressure and pressure derivative for example 1
A value of 6.228x10^-5 hr/ft^(3-n) was found for parameter G using Equation 3. This value is used in Equation 20 to find the distance from the well to the non-Newtonian fluid bank, which resulted in 120 ft.
Equations 17 and 18 were used to estimate the permeability and skin factor of the Newtonian zone, yielding 100 md and 4.5, respectively.
Using the time of 0.0008 hr, which corresponds to the intersection point formed between the non-Newtonian and Newtonian radial flow regime lines, in Equation 19, a non-Newtonian effective fluid permeability of 96 md is re-estimated. [13] obtained a permeability of the non-Newtonian zone of 101 md and ra = 116 ft from conventional analysis.
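The arithmetic of this example can be replayed with the reconstructed forms of Equations 14, 15, 17 and 18 (a sketch under those assumed reconstructions, not the author's code):

```python
import math

# Example 1 data (non-Newtonian region 1, Newtonian region 2)
q, B, h, rw = 300.0, 1.0, 16.4, 0.33          # BPD, rb/STB, ft, ft
phi, ct, n = 0.20, 6.89e-6, 0.6
mu_eff, mu_N = 0.06465, 3.0                    # cp*(s/ft)**(n-1), cp
tr1, dtPr1 = 0.3, 105.45                       # hr, psi (radial line, region 1)
tr2, dPr2, dtPr2 = 120.0, 991.5, 39.02         # hr, psi, psi (region 2)

alpha = (1.0 - n) / (3.0 - n)                  # Eq. 15 -> 1/6 for n = 0.6
ratio = (96681.605 * h / (q * B)) ** (1.0 - n)

# Non-Newtonian permeability, reconstructed Eq. 14
bracket = (0.0002637 * tr1 / (n * phi * ct * mu_eff) / ratio) ** alpha
k1 = (70.6 * q * B * mu_eff / (h * dtPr1) * ratio * bracket) ** (1.0 / (1.0 - alpha))
print(round(k1, 1))   # ≈ 100.4 md

# Newtonian-region permeability and skin, Eqs. 17 and 18 (Tiab)
k2 = 70.6 * q * B * mu_N / (h * dtPr2)
s2 = 0.5 * (dPr2 / dtPr2 - math.log(k2 * tr2 / (phi * mu_N * ct * rw ** 2)) + 7.43)
print(round(k2, 1))   # ≈ 99.3 md (the chapter rounds to 100 md)
print(round(s2, 2))
```

The skin factor comes out close to the 4.5 reported in the text; the small differences stem from rounding the values read off the derivative plot.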
5. Hydraulically fractured wells
[18] linearized the partial-differential equation for the problem of a well intercepted by a
vertical fracture. Their dimensionless pressure solution is given below:
P_D(t_D) = \frac{n\,(3-n)^{2\nu}\, t_D^{\,\nu}}{(1-n)(1-\nu)} \qquad (23)
where ν = (1-n)/(3-n).
[24] presented two interpretation methodologies, type-curve matching and conventional straight-line analysis, for characterization of fall-off tests in vertically fractured wells with a pseudoplastic fluid. They indicated that at early times a well-defined straight line with slope equal to 0.5 will be evident on log-log coordinates; then,
P_D = 2\left(\frac{t^{*}_{Dxf}}{n}\right)^{1/2} \qquad (24)
t^{*}_{Dxf} = \frac{0.0002637\, k\, t}{\phi\, \mu^{*}\, c_t\, x_f^2} \qquad (25)
where the characteristic viscosity, μ*, is given by:

\mu^{*} = \mu_{eff}\left(\frac{96681.605\, h}{q\, B}\right)^{n-1} \qquad (26)
And the derivative of Equation 24 is:
t^{*}_{Dxf} \ast P_D' = \left(\frac{t^{*}_{Dxf}}{n}\right)^{1/2} \qquad (27)
And the dimensionless fracture conductivity is:

c_{fD} = \frac{k_f\, w_f}{k\, x_f} \qquad (28)
[22] presented an expression relating the half-fracture length, xf, formation permeability, k, fracture conductivity, kf wf, and post-frac skin factor, s:

\frac{3.31739\, k}{k_f\, w_f} = \frac{1.92173}{x_f} - \frac{e^{s}}{r_w} \qquad (29)
However, there is no proof that Equation 29 works for non-Newtonian systems. Using Equation 23, [3] presented pressure and pressure derivative curves for vertical infinite-conductivity fractured wells, see Figure 7. They extended the TDS methodology, [21], to the systems under consideration. By using the intersection point, tLRi, of the pressure derivative during the linear flow regime, Equation 27, with the radial flow regime governing equation, Equation 11, an expression to obtain the half-fracture length is presented:
(Figure 7 shows the dimensionless pressure and pressure derivative curves versus tDxf, with the linear and radial flow regimes and the intersection point tLRi indicated.)
Figure 7. Dimensionless pressure and pressure derivative behavior for a vertical infinite-conductivity fractured well with a non-Newtonian pseudoplastic fluid with n = 0.5
x_f = 0.028783\,\sqrt{\frac{k\, t_{LRi}}{\phi\, \mu^{*}\, c_t}}\;\left(1.570796\right)^{\nu} \qquad (30)
where ν = (1-n)/(3-n).
The expression governing the late-time pseudosteady-state flow regime is:

t_D \ast P_D' = 2\pi\, t_{DA} \qquad (31)
The point of intersection of the pressure derivatives during linear flow and pseudosteady state (the mathematical development is not shown here) allows obtaining the well drainage area by means of the following expression:
A = \left[\frac{t_{iLPSS}}{0.0625}\left(\frac{1}{2\, n}\right)^{1/2}\right]^{\frac{2}{3-n}} \qquad (32)
(32)
Example 2. Fan (1998) presented data from a pressure test conducted in a hydraulically fractured well with the information given below. Pressure and pressure derivative data for this test are reported in Figure 8.
n = 0.4, h = 70 ft, k = 0.65 md, q = 507.5 BPD
φ = 10 %, B = 1 rb/STB, μ* = 0.00065 cp, ct = 0.00001 psi^-1
rw = 0.26 ft, H = 20 cp·s^(n-1)
(Figure 8 shows the log-log plot of ΔP and t*ΔP' versus time, annotated with tr = 0.7217 hr, ΔPr = 762 psi, (t*ΔP')r = 522.06 psi and tLRi = 0.4495 hr.)
Figure 8. Pressure and pressure derivative for example 2
Solution. The following information was read from the pressure and pressure derivative
plot, Figure 8,
tLRi = 0.4495 hr, tr = 0.7217 hr, ΔPr = 762 psi, (t*ΔP')r = 522.06 psi
Using Equation 15, a value of 0.23 is found for α. Reservoir permeability, skin factor and half-fracture length were estimated with Equations 14, 16 and 30; their respective values are 0.65 md, -13.9 and 771 ft. Reservoir permeability and half-fracture length were re-estimated by simulating the test, providing values of 0.65 md and 776 ft, respectively; therefore, the absolute errors for these calculations are less than 0.06 % and 0.5 %. A G value of 0.001241 hr/ft^(3-n) was found with Equation 3.
A fracture conductivity of 868.5 md-ft was calculated using Equation 29. It is important to
clarify that this equation is valid for the Newtonian case. This value was used in Equation 28
to find a dimensionless fracture conductivity of 1.73.
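The fracture conductivity arithmetic can be verified with the reconstructed form of Equation 29 and the definition in Equation 28 (a sketch; the sign convention on the skin term is assumed):

```python
import math

# Example 2 values
k, xf, rw, s = 0.65, 771.0, 0.26, -13.9   # md, ft, ft, post-frac skin

# Reconstructed Eq. 29 (Newtonian relation) solved for kf*wf
kf_wf = 3.31739 * k / (1.92173 / xf - math.exp(s) / rw)
cfD = kf_wf / (k * xf)                     # Eq. 28
print(round(kf_wf, 1))   # ≈ 866 md-ft (the chapter reports 868.5)
print(round(cfD, 2))     # ≈ 1.73
```

Because e^s is negligible for such a strongly negative skin, cfD tends toward 3.31739/1.92173 ≈ 1.73, matching the quoted value.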
6. Finite-homogeneous reservoirs
For the cases of bounded and constant-pressure reservoirs, [8] presented the solutions to
Equation 1. The initial and boundary conditions for the first case are:
P_{DNN}(r_D, 0) = 0 \qquad (33)

\left(\frac{\partial P_{DNN}}{\partial r_D}\right)_{r_D = 1} = -1 \quad \text{for } t_{DNN} > 0 \qquad (34)
\left(\frac{\partial P_{DNN}}{\partial r_D}\right)_{r_D = r_{eD}} = 0 \qquad (35)
The analytical solution in the Laplace space domain for the closed reservoirs under constant-
rate case is given as:
\bar{P}_D(s) = \frac{K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right) + I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right)}{s^{3/2}\left[I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) K_{1-\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right) - K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) I_{1-\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right)\right]} \qquad (36)
For the case of constant-pressure external boundary, the boundary condition given by
Equation 35 is changed to:
P_{DNN}(r_{eD}, t_{DNN}) = 0 \qquad (37)
And the analytical solution for such case is:
\bar{P}_D(s) = \frac{I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right) - K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right)}{s^{3/2}\left[I_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) K_{1-\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right) + K_{\nu}\!\left(\frac{2}{3-n}\sqrt{s}\, r_{eD}^{\frac{3-n}{2}}\right) I_{1-\nu}\!\left(\frac{2}{3-n}\sqrt{s}\right)\right]} \qquad (38)
Using the solution provided by [8], [4] presented pressure and pressure derivative plots for these cases, as shown in Figures 9 and 10. In these plots it is seen that for closed systems, in both the pseudoplastic and dilatant cases, the late-time pressure derivative always displays a unit-slope line, as for Newtonian fluids. For open (constant-pressure) boundaries, the late-time pressure derivative decreases in both the dilatant and pseudoplastic cases, again as for Newtonian behavior.
[4] rewrote Equation 6 based on the reservoir drainage area, so that:

t_{DA} = \frac{t}{G\, r_e^{\,3-n}} \qquad (39)
[4] combined Equations 11, 31 and 39 to develop an analytical expression to find well
drainage area,
A = \frac{1}{4}\left(\frac{t_{rpiNN}}{G}\right)^{\frac{2}{3-n}} \qquad (40)

where trpiNN is the intersection point formed by the straight lines of the radial and pseudosteady-state flow regimes. The above equation is multiplied by \left(t^{(1/\alpha - 1)}\right)^{1/(3-n)} as a correction factor, which is valid for both dilatant and pseudoplastic non-Newtonian fluids.
(Figure 9 shows the dimensionless pressure and pressure derivative curves versus tD for open and closed boundaries.)
Figure 9. Dimensionless pressure and pressure derivative behavior in closed and open boundary systems for a non-Newtonian pseudoplastic fluid with n = 0.5, re = 2000 ft
(Figure 10 shows the dimensionless pressure and pressure derivative curves versus tD for open and closed boundaries.)
Figure 10. Dimensionless pressure and pressure derivative behavior in closed and open boundary systems for a non-Newtonian dilatant fluid with n = 1.5, re = 2000 ft
No analytical pressure-derivative expression exists for open-boundary systems; therefore, for pseudoplastic fluids the following correlation was also developed [4]:
$$t_{DA_{NN}}=0.003n^{2}+0.0337n+0.052 \qquad (41)$$
Equating Equation 41 to Equation 39 and solving for the reservoir drainage area gives:
$$A=\left[\frac{t_{rsiNN}}{G\left(0.003n^{2}+0.0337n+0.052\right)}\right]^{\frac{2}{3-n}} \qquad (42)$$
For dilatant fluids the correlation found is:
$$t_{DA_{NN}}=0.9175n^{3}-3.7505n^{2}+5.1777n-2.2913 \qquad (43)$$
In a similar fashion as for the pseudoplastic case,
$$A=\left[\frac{t_{rsiNN}}{G\left(0.9175n^{3}-3.7505n^{2}+5.1777n-2.2913\right)}\right]^{\frac{2}{3-n}} \qquad (44)$$
$t_{rsiNN}$ in Equations 42 and 44 is the intersection point formed by the radial-flow straight line and the negative unit-slope line drawn tangentially to the steady-state flow regime.
Example 3. [4] presented a synthetic example to determine the well drainage area. Pressure
and pressure derivative data are provided in Figure 11 and other relevant information is
given below:
n = 0.5   h = 16.4 ft   k = 350 md   q = 300 BPD
φ = 5 %   Bo = 1 rb/STB   μeff = 0.014833 cp   ct = 0.0000689 psi⁻¹
rw = 0.33 ft   H = 20 cp·s^{n−1}   re = 2000 ft   Pi = 2500 psi
Solution. From Figure 11, the intersection point, trpiNN, of the radial and pseudosteady-state straight lines is 60 hr, which is used in Equation 40 to provide a well drainage area of 275 acres. Notice that this reservoir has an external radius of 2000 ft, which represents an area of 288 acres. This allows obtaining an absolute error of 2.33 %.
[Figure: log-log plot of ΔP and t*ΔP′ (psi) versus t (hrs), with the intersection trpiNN = 60 hr marked]
Figure 11. Pressure and pressure derivative for example 3
7. Heterogeneous reservoirs
In the well interpretation area of the Petroleum Engineering discipline, a homogeneous reservoir is conceived to possess a single porous matrix, while a heterogeneous reservoir has a porous matrix and either vugs or fractures. A common term used for heterogeneous systems is naturally-fractured reservoirs. However, this term is not recommended since the fractures may result from either a mechanical process or a chemical process (matrix dissolution). Therefore, a more convenient term used in this book is double-porosity systems, in which the well is fed by the fractures and the fractures are fed by the matrix. By the same token, in a double-permeability system the well is fed by both fractures and matrix, and the fractures are also fed by the matrix. This last one, however, has little application in the oil industry.
The governing well pressure solution in the Laplacian domain for a double-porosity system
with a non-Newtonian fluid excluding wellbore storage and skin effects was provided by
[19] as:
$$\bar{P}_{DNN}(s)=\frac{K_{\frac{1-n}{3-n}}\!\left(\frac{2}{3-n}\sqrt{s\,f(s)}\right)}{s\sqrt{s\,f(s)}\;K_{\frac{2}{3-n}}\!\left(\frac{2}{3-n}\sqrt{s\,f(s)}\right)} \qquad (45)$$
The Laplacian parameter, f(s), is a function of the model type and the fracture-system geometry and is given by:
$$f(s)=\frac{\omega(1-\omega)s+\lambda}{(1-\omega)s+\lambda} \qquad (46)$$
[2] implemented the TDS methodology for characterization of double-porosity systems with pseudoplastic fluids. As in the Newtonian case, the infinite-acting radial flow regime is represented by a straight line on the pressure-derivative curve (horizontal only when n = 1). The first segment corresponds to pressure depletion in the fracture network, while the second portion is due to the pressure response of an equivalent homogeneous reservoir. On the other hand, the trough displayed on the pressure-derivative curve during the transition period depends only on the dimensionless storage coefficient, ω. The Warren and Root parameters are defined in reference [26].
Figure 12 shows a log-log plot of the dimensionless pressure and pressure derivative for a double-porosity system with constant interporosity flow parameter, constant n value and variable dimensionless storage coefficient: the higher ω, the less pronounced the trough. As seen there, as the value of n decreases, the slope of the derivative during radial flow increases. Figure 13 shows the effect of varying the interporosity flow parameter, λ, for constant values of the dimensionless storage coefficient and flow behavior index. Notice in that plot that as the value of λ decreases, the transition period shows up later. Finally, Figure 14 shows the effect of changing the value of the flow behavior index for constant values of λ and ω. The slope of the pressure-derivative curves increases as the value of n decreases. Neither wellbore storage nor skin effects are considered.
[Figure: log-log plot of $P_D$ and $(t{*}P')_D$ versus $t_D$ for ω = 0.3, 0.1, 0.05, 0.03, 0.01 and 0.005]
Figure 12. Dimensionless pressure and pressure derivative log-log plot for variable dimensionless storage coefficient, λ = 1×10⁻⁶ and n = 0.2 for a heterogeneous reservoir
[Figure: log-log plot of $P_D$ and $(t{*}P')_D$ versus $t_D$ for λ = 1×10⁻⁴, 5×10⁻⁵, 1×10⁻⁵, 5×10⁻⁶, 1×10⁻⁶, 5×10⁻⁷ and 1×10⁻⁷]
Figure 13. Dimensionless pressure and pressure derivative log-log plot for variable interporosity flow parameter, ω = 0.05 and n = 0.8 for a heterogeneous reservoir
The infinite-acting radial flow regime is identified by a straight line whose slope increases as the value of the flow behavior index decreases; see Figure 14. The first segment of such a line corresponds to the fracture-network-dominated period and the second one, once the transition effects are no longer present, corresponds to an equivalent homogeneous reservoir. An expression for the slope is given by [11] as:
$$m=\frac{1-n}{3-n} \qquad (47)$$
Also, the slope of the pressure derivative during the radial flow regime is related to the flow behavior index by:

$$n=1.8783425-7.8618321\,m-0.19406557\,e^{2.8783425\,m} \qquad (48)$$
As observed in Figure 12, as the dimensionless storage coefficient decreases, the transition period becomes more pronounced, no matter the value of the interporosity flow parameter. Therefore, a correlation for 0 ≤ ω ≤ 1, with an error lower than 3 %, is developed in this study as a function of the minimum-time value of the pressure derivative during the trough, the flow behavior index and the beginning of the second segment of the infinite-acting radial flow regime:
$$\omega=1.2053-5888.75\,x+337547\,x^{2}-1.4787073\,\ln x+3180.6369\,x^{0.5}-551.0582\,x^{1.5}-910.05377\,n^{0.5}+988.80592\,n-459.61296\,n^{1.5}+73.93695\,n^{2} \qquad (49)$$

where $x = t_{min}/t_{b2}$.
[Figure: log-log plot of $P_D$ and $(t{*}P')_D$ versus $t_D$ for n = 0.2, 0.4, 0.6, 0.8 and 1]
Figure 14. Dimensionless pressure and pressure derivative log-log plot for variable flow behavior index, ω = 0.03 and λ = 1×10⁻⁵ for a heterogeneous reservoir
Another way to estimate ω uses a correlation that is a function of the ratio between the intersection time of the unit-slope pseudosteady-state straight line developed during the transition period and the time of the trough. This correlation is also valid for 0 ≤ ω ≤ 1, with an error lower than 0.7 %.
$$\omega=0.019884508-1.153351\left(\tfrac{t_{min}}{t_{US,i}}\right)+43.428536\left(\tfrac{t_{min}}{t_{US,i}}\right)^{2}-555.85387\left(\tfrac{t_{min}}{t_{US,i}}\right)^{3}+3232.6805\left(\tfrac{t_{min}}{t_{US,i}}\right)^{4}-6716.9801\left(\tfrac{t_{min}}{t_{US,i}}\right)^{5}-0.0093613189\,n+0.0042870178\,n^{2}-0.00027356586\,n^{3}+0.0005221335\,n^{4}-0.000072466135\,n^{5} \qquad (50)$$
A final correlation to estimate ω, valid for 0 ≤ ω ≤ 1 with an error lower than 0.4 %, is given as follows:
$$\omega=\frac{0.098427346+0.00046337048\,y+0.000025063353\,y^{2}+0.00000050316996\,y^{3}-0.0036057682\,n+0.0073959605\,n^{2}}{1+0.36468068\,y^{0.5}+0.064934748\,n+0.047596083\,n^{2}} \qquad (51)$$

where $y = t_{US,i}/t_{min}$.
The interporosity flow parameter, λ, also plays an important role in the characterization of double-porosity systems. From Figure 13, it is observed that the smaller the value of λ, the later the transition period shows up. A correlation for it was obtained using the time at the trough and the dimensionless storage coefficient, as presented by the next expression:
$$\lambda=\frac{6.9690127\times10^{-7}+3.4893658\times10^{-8}\,n+3.2315082\times10^{-8}\,n^{2}+5.9013807\,w+21571690\,w^{2}+3.6102987\times10^{12}\,w^{3}}{1+0.0099353372\,n+3740035.1\,w+6.7143604\times10^{12}\,w^{2}} \qquad (52)$$

where $w = \omega/t_{Dmin}$.
Equation 52 is valid for 9×10⁻⁷ ≤ λ ≤ 1×10⁻⁴ with an error lower than 4 %. A correlation involving the coordinates of the trough is given as:
$$\lambda=0.00082917155+0.0014247498\,z+0.00028717451\,z^{2}+0.00077173053\,n+3.2538271\times10^{-5}\,nz+0.0003203949\,n^{2}+0.0001423889\,z^{3}+1.212213\times10^{-5}\,nz^{2}+1.7831692\times10^{-5}\,n^{2}z+8.6457217\times10^{-5}\,n^{2}z^{2} \qquad (53)$$

where $z = \ln\!\left[(t_D{*}P_D')_{min}/t_{Dmin}\right]$.
Equation 53 is valid for 9×10⁻⁷ ≤ λ ≤ 1×10⁻⁴ with an error lower than 3.7 %. Another expression for λ within the same range, involving the minimum time of the trough, is given with an error lower than 1.3 %:
$$\lambda=0.010651118-0.043958503\,n^{0.5}+2.1223034\,n-0.09473309\,n^{1.5}+0.077489686\,n^{2}+1.5653137\times10^{-5}\,\ln w+0.00024143014\,(\ln w)^{2}+8.7148736\times10^{-9}\,w^{0.5}+4.0331364\times10^{-13}\,w \qquad (54)$$
Example 4. Figure 15 contains the pressure and pressure derivative log-log plot of a
pressure test simulated by [2] with the information given below. It is requested to
estimate from these data the dimensionless storage coefficient and the interporosity flow
parameter.
[Figure: log-log plot of ΔP and (t*ΔP′) (psi) versus t (hr), annotated with (t*ΔP′)min = 10 psi, tmin = 272.6 hr, tb2 = 14480 hr and tUS,i = 2129.4 hr]
Figure 15. Pressure and pressure derivative for example 4
Solution. From Figure 15 the following characteristic points are read:
tmin = 272.6 hr   tb2 = 14480 hr   tUS,i = 2129.4 hr   (t*ΔP′)min = 10 psi
Using Equations 5 and 6, the above data are transformed into dimensionless quantities as
follows:
tDmin = 32000   tDb2 = 1700000   tDUS,i = 250000   (tD*PD′)min = 0.31
During the infinite-acting radial flow regime the following points were arbitrarily read:
(t)r1 = 35724.9 hr   (t*ΔP′)r1 = 67.292 psi   (t)r2 = 56169.5 hr   (t*ΔP′)r2 = 61.2283 psi
With these points a slope of m = 0.108 is estimated. Equation 47 then gives a flow behavior index of 0.76. The Warren and Root naturally fractured reservoir parameters are estimated as follows:
Equation   ω        Equation   λ
49         0.052    52         5.01×10⁻⁶
50         0.05     53         5.043×10⁻⁶
51         0.05     54         3.66×10⁻⁶
Table 1. Summary of results for example 4
As a final remark, I would like to comment that some crude oils and other fluids used in the oil industry may display a non-Newtonian Bingham-type behavior. It is common to deal with non-Newtonian fluids during fracturing and drilling operations, as well as in oil recovery processes. When a reservoir contains a non-Newtonian fluid, such as those injected during EOR polymer flooding or encountered in heavy-oil production, a pressure test for these systems cannot be interpreted using the conventional models for Newtonian fluid flow, since doing so will lead to erroneous results because the behavior is completely different.
The problem considered now, presented in reference [27], involves the production of a
Bingham fluid from a fully penetrating vertical well in a horizontal reservoir of constant
thickness; the formation is saturated only with the Bingham fluid. The basic assumptions
are: (a) isothermal, isotropic and homogeneous formation; (b) single-phase horizontal flow without gravity effects; (c) Darcy's law applies; and (d) constant fluid properties and formation permeability.
The governing flow equation can be derived by combining the modified Darcys law with
the continuity equation and is expressed in a radial coordinate system as:
$$\frac{1}{r}\frac{\partial}{\partial r}\left[\frac{k}{\mu_B}\,\rho(P)\,r\left(\frac{\partial P}{\partial r}-G\right)\right]=\frac{\partial}{\partial t}\left[\phi(P)\,\rho(P)\right] \qquad (55)$$
The density of the Bingham fluid, ρ(P), and the porosity of the formation, φ = φ(P), are functions of pressure only, so Equation 55 may be rewritten as:
$$\frac{1}{r}\frac{\partial}{\partial r}\left[r\left(\frac{\partial P}{\partial r}-G\right)\right]=\frac{\phi\,\mu_B\,c_t}{k}\,\frac{\partial P}{\partial t} \qquad (56)$$
The initial condition is:
$$P(r,\,t=0)=P_i,\qquad r\ge r_w \qquad (57)$$
At the wellbore inner boundary, r = rw, the fluid is produced at a given production rate, q;
then, the inner boundary condition is:
$$q=\frac{2\pi r_w h k}{\mu_B}\left(\frac{\partial P}{\partial r}-G\right)_{r=r_w} \qquad (58)$$
Parameter G is the minimum pressure gradient, which expressed in dimensionless form yields:
$$G_D=\frac{G\,r_w\,k\,h}{141.2\,q\,\mu_B\,B} \qquad (59)$$
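A worked evaluation of the dimensionless minimum pressure gradient, assuming the field-unit form GD = G·rw·k·h/(141.2·q·μB·B); the rock and rate data are borrowed from Example 3, while the G value and μB are illustrative assumptions, not figures from the text:

```python
# Dimensionless minimum pressure gradient GD, field units.
G = 0.05                       # minimum pressure gradient, psi/ft (assumed)
rw, k, h = 0.33, 350.0, 16.4   # well radius (ft), permeability (md), thickness (ft)
q, muB, B = 300.0, 20.0, 1.0   # rate (BPD), Bingham coefficient (cp, assumed), FVF (rb/STB)

GD = G * rw * k * h / (141.2 * q * muB * B)
print(f"{GD:.3e}")
```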
[15] solved Equation 55 numerically and provided an interpretation technique for this type of fluid using the pressure and pressure-derivative log-log plot. For a Bingham-type non-Newtonian fluid, the behavior changes in that the dimensionless pressure derivative reaches a high point, which increases with an increase of GD and of the reservoir radius; see Figure 16.
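The threshold-gradient physics behind Equations 55-58 can be sketched with a small explicit finite-difference scheme in dimensionless form. Everything here is an illustrative assumption: unit diffusivity, a constant-pressure well (instead of the constant-rate condition of Equation 58), and arbitrary grid, time step and G; the flux law is the modified Darcy law, with no flow until the local gradient exceeds G:

```python
# Radial explicit finite-difference sketch of Bingham flow with threshold gradient G.
N = 21
r0, r1 = 1.0, 3.0
dr = (r1 - r0) / (N - 1)
r = [r0 + i * dr for i in range(N)]
P = [1.0] * N      # initial condition (Equation 57), Pi = 1
P[0] = 0.0         # drawdown imposed at the wellbore face (assumed boundary)
G = 1.0            # minimum (threshold) pressure gradient, assumed
dt = 0.002         # stable for an explicit scheme: dt < dr**2 / 2

for _ in range(500):
    # Face fluxes F = r_face * max(dP/dr - G, 0): no flow below the threshold.
    F = [0.5 * (r[i] + r[i + 1]) * max((P[i + 1] - P[i]) / dr - G, 0.0)
         for i in range(N - 1)]
    for i in range(1, N - 1):
        P[i] += dt * (F[i] - F[i - 1]) / (r[i] * dr)

# The disturbance stalls where the gradient falls below G, so the outer
# cells are never reached -- the signature behavior of a Bingham fluid.
print(P[1] < 1.0, P[-2] == 1.0)
```

The stalled pressure front is what distinguishes Bingham behavior from ordinary diffusion, where the disturbance would eventually reach every cell.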
[Figure: log-log plot of $P_D$ and $t_D{*}P_D'$ versus $t_D$ for GD = 0, 5.332×10⁻⁵, 1.173×10⁻⁴, 1.600×10⁻⁴, 2.133×10⁻⁴, 2.666×10⁻⁴, 5.332×10⁻⁴, 8.000×10⁻⁴, 1.173×10⁻³ and 1.333×10⁻³]
Figure 16. Dimensionless pressure and pressure derivative for reD = 9375
8. Conclusion
This chapter comprises the most up-to-date state of the art for well test interpretation in reservoirs containing a non-Newtonian fluid. Extensions of the TDS technique, along with practical examples, are given for demonstration purposes. This should be of extreme importance since most heavy oils behave non-Newtonianly; their characterization using conventional analysis is therefore inappropriate, and the methodologies presented here are strongly recommended.
Nomenclature
B Volumetric factor, RB/STB
ct System total compressibility, 1/psi
C Wellbore storage, bbl/psi
CfD Dimensionless fracture conductivity
h Formation thickness, ft
H Consistency (power-law parameter), cp·s^{n−1}
G Group defined by Equation 3
G Minimum pressure gradient, Psi/ft
GD Dimensionless pressure gradient
k Permeability, md
k Flow consistency parameter
kfwf Fracture conductivity, md-ft
m Slope
n Flow behavior index (power-law parameter)
P Pressure, psi
q Flow/injection rate, STB/D
t Time, hr
r Radius, ft
t*ΔP Pressure derivative, psi
tD*PD Dimensionless pressure derivative
ra Distance from well to non-Newtonian/Newtonian front/interface
w ω/tDmin
s Laplace parameter
x tmin/tb2
xf Half-fracture length, ft
y tUSi/tmin
z ln [(tD*PD)min/tDmin]
Table 2. Nomenclature of main variables
Δ Change, drop
φ Porosity, fraction
γ Shear rate, s⁻¹
τo Yield shear stress, N/m²
λ Dimensionless interporosity parameter
μ Viscosity, cp
μeff Effective viscosity for power-law fluids, cp·(s/ft)^{n−1}
μB Bingham plastic coefficient, cp
μ* Characteristic viscosity, cp/ft^{1−n}
τ Shear stress, N/m²
ω Dimensionless storativity coefficient
Table 3. Greeks
1 Non-Newtonian region
2 Newtonian region
app Apparent
D Dimensionless
DA Dimensionless based on area
Dxf Dimensionless based on half-fracture length
e External
eff Effective
i Initial
LPi Intersect of linear and pseudosteady-state lines
M Maximum
N Newtonian
NN Non-Newtonian
r Radial (any point on radial flow)
RLi Intersect of radial and linear lines
rpiNN Intersect of radial and pseudosteady-state lines
rsiNN Intersect of radial and steady-state lines
w Wellbore
Table 4. Suffices
Author details
Freddy Humberto Escobar
Universidad Surcolombiana, Neiva (Huila), Colombia, South America
9. References
[1] Escobar, F.H., Martinez, J.A., and Montealegre-M., Matilde, 2010. Pressure and
Pressure Derivative Analysis for a Well in a Radial Composite Reservoir with a
Non-Newtonian/Newtonian Interface. CT&F. Vol. 4, No. 1. p. 33-42. Dec.
2010.
[2] Escobar, F.H., Zambrano, A.P, Giraldo, D.V. and Cantillo, J.H., 2011. Pressure and
Pressure Derivative Analysis for Non-Newtonian Pseudoplastic Fluids in Double-
Porosity Formations. CT&F, Vol. 5, No. 3. p. 47-59. June.
[3] Escobar, F.H., Bonilla, D.F. and Ciceri, Y,Y. 2012. Pressure and Pressure Derivative
Analysis for Pseudoplastic Fluids in Vertical Fractured wells. Paper sent to CT&F to
request publication.
[4] Escobar, F.H., Vega, L.J. and Bonilla, L.F., 2012b. Determination of Well Drainage Area
for Power-Law Fluids by Transient Pressure Analysis. Paper sent to CT&F to request
publication.
[5] Fan, Y. 1998. A New Interpretation Model for Fracture-Calibration Treatments. SPE
Journal. P. 108-114. June.
[6] Hirasaki, G.J. and Pope, G.A., 1974. Analysis of Factors Influencing Mobility and
Adsorption in the Flow of Polymer Solutions through Porous Media. Soc. Pet. Eng. J.
Aug. 1974. p. 337-346.
[7] Igbokoyi, A. and Tiab, D., 2007. New type curves for the analysis of pressure transient
data dominated by skin and wellbore storage: Non-Newtonian fluid. Paper SPE 106997
presented at the SPE Production and Operations Symposium, 31 March - 3 April 2007,
Oklahoma City, Oklahoma.
[8] Ikoku, C.U. 1978. Transient Flow of Non-Newtonian Power-Law Fluids in Porous
Media. Ph.D. dissertation. Stanford U., Stanford, CA.
[9] Ikoku, C.U., 1979. Practical Application of Non-Newtonian Transient Flow Analysis.
Paper SPE 8351 presented at the SPE 54th Annual Technical Conference and Exhibition,
Las Vegas, NV, Sept. 23-26.
[10] Ikoku, C.U. and Ramey, H.J. Jr., 1979. Transient Flow of Non-Newtonian
Power-Law Fluids in Porous Media. Soc. Pet. Eng. Journal. p. 164-174.
June.
[11] Ikoku, C.U. and Ramey, H.J. Jr., 1979. Wellbore Storage and Skin Effects During the
Transient Flow of Non-Newtonian Power-Law Fluids in Porous Media. Soc.
Pet. Eng. Journal. p. 164-174. June.
[12] Katime-Meindl, I. and Tiab, D., 2001. Analysis of Pressure Transient Test of Non-
Newtonian Fluids in Infinite Reservoir and in the Presence of a Single Linear
Boundary by the Direct Synthesis Technique. Paper SPE 71587 prepared for
presentation at the 2001 SPE Annual Technical Conference and Exhibition held in
New Orleans, Louisiana, 30 Sept. - 3 Oct.
[13] Lund, O. and Ikoku, C.U., 1981. Pressure Transient Behavior of Non-
Newtonian/Newtonian Fluid Composite Reservoirs. Society of Petroleum Engineers of
AIME. p. 271-280. April.
[14] Mahani, H., Sorop, T.G., van den Hoek, P.J., Brooks, A.D., and Zwaan, M. 2011.
Injection Fall-Off Analysis of Polymer Flooding EOR. Paper SPE 145125 presented at the
SPE Reservoir Characterization and Simulation Conference and Exhibition held in Abu
Dhabi, UAE, October 9-11.
[15] Martinez, J.A., Escobar, F.H., and Montealegre-M, M. 2011. Vertical Well Pressure and
Pressure Derivative Analysis for Bingham Fluids in Homogeneous Reservoirs. Dyna, Year 78, No. 166, p. 21-28. 2011.
[16] Martinez, J.A., Escobar, F.H. and Cantillo, J.H., 2011b. Application of the TDS
Technique to Dilatant Non-Newtonian/Newtonian Fluid Composite Reservoirs. Ingeniería e Investigación. Vol. 31, No. 3, p. 130-134. Aug.
[17] Martinez, J.A., Escobar, F.H. and Bonilla, L.F. 2012. Numerical Solution for a Radial
Composite Reservoir Model with a Non-Newtonian/Newtonian Interface. Paper sent to CT&F to request publication.
[18] Odeh, A.S. and Yang, H.T., 1979. Flow of non-Newtonian Power-Law Fluids Through Porous Media. Soc. Pet. Eng. Journal. p. 155-163. June.
[19] Olarewaju, J.S., 1992. A Reservoir Model of Non-Newtonian Fluid Flow. SPE paper
25301.
[20] Savins, J.G., 1969. Non-Newtonian Flow Through Porous Media. Ind. Eng. Chem. 61,
No 10, Oct. 1969. p. 18-47.
[21] Tiab, D., 1993. Analysis of Pressure and Pressure Derivative without Type-Curve
Matching: 1- Skin and Wellbore Storage. Journal of Petroleum Science and Engineering,
Vol 12, pp. 171-181.Also Paper SPE 25423, Production Operations Symposium held in
Oklahoma City, OK. pp 203-216.
[22] Tiab, D., Azzougen, A., Escobar, F. H., and Berumen, S. 1999. Analysis
of Pressure Derivative Data of a Finite-Conductivity Fractures by the Direct
Synthesis Technique. Paper SPE 52201 presented at the 1999 SPE Mid-
Continent Operations Symposium held in Oklahoma City, OK, March 28-31,
1999 and presented at the 1999 SPE Latin American and Caribbean
Petroleum Engineering Conference held in Caracas, Venezuela, 21-23
April.
[23] van Poollen, H.K., and Jargon, J.R. 1969. Steady-State and Unsteady-State Flow of Non-
Newtonian Fluids Through Porous Media. Soc. Pet. Eng. J. March 1969. p. 80-88; Trans.
AIME, 246.
[24] Vongvuthipornchai, S. and Raghavan, R. 1987. Pressure Falloff Behavior in Vertically
Fractured Wells: Non-Newtonian Power-Law Fluids. SPE Formation Evaluation,
December, p. 573-589.
[25] Vongvuthipornchai, S. and Raghavan, R. 1987b. Well Test Analysis of Data Dominated
by Storage and Skin: Non-Newtonian Power-Law Fluids. SPE Formation Evaluation,
December, p. 618-628.
[26] Warren, J.E. and Root, P.J. 1963. The Behavior of Naturally Fractured Reservoirs. Soc.
Pet. Eng. J. (Sept. 1963): 245-255.
[27] Wu, Y.S., 1990. Theoretical Studies of Non-Newtonian and Newtonian Fluid Flow
Through Porous Media. Ph.D. dissertation, U. of California, Berkeley.
Chapter 8
The Role of Geoengineering
in Field Development
Patrick W. M. Corbett
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/50535
1. Introduction
The oil and gas industry is truly multi-disciplinary when it comes to analysing, modelling and predicting the likely movement of fluids in the subsurface reservoir environment. Much has been written on the subject of integration in recent years, and in this chapter we consider one particular approach to tackling the problem. The Petroleum Geoengineering [1] solution is offered for understanding the origin of a reservoir, for its static geological modelling, and for the simulation of the flow and the dynamic response to a production test. In field development these models remain a key monitoring and planning tool, but here we consider the initial modelling steps only. As an example of building a heterogeneous reservoir model, we have chosen to illustrate this approach for certain types of fluvial reservoirs [which have presented challenges for reservoir description for many years, 2,3], which can benefit further from this detailed integrated approach. Furthermore, such reservoirs are characterised by relatively low oil recovery, and further intensive work by the industry will be needed on them to maintain hydrocarbon supplies in the future.
Integration challenge. It is often quoted that the use of the word Integration in SPE paper titles has followed a hockey-stick rise in recent years. Books have been written on the subject of integration, and in the foreword to one such study Luca Cosentino [4] pointed out that studies were merely becoming less disintegrated as the industry evolved. The industry has developed ever more powerful, cross-disciplinary software platforms and workflows to help integration. In parallel is the need for professionals to stay abreast of the key work processes in each discipline, and this chapter helps illustrate one such integrated approach from a scientific/technological standpoint rather than one embedded in or wedded to particular software.
Geoengineering concept. This concept was introduced [1] into the petroleum industry to capture the spirit of the workflow being a seamless progression from geological conceptual understanding, through petrophysical description, to a numerical model and prediction of a dynamic response. The Petroleum Geoengineering approach outlined here is a small component of an all-encompassing intentional manipulation of the subsurface environment as practiced by the petroleum industry, with global impact. The recovery of oil and gas and the management of CO2 are the ultimate outcome and target of this approach.
Static and dynamic reservoir characterisation. Reservoir characterisation is defined as the numerical quantification of reservoirs for numerical simulation. The petroleum industry often refers to static and dynamic characterisation of the subsurface, and many workers will have their own interpretation of the terms. In the context of this chapter, we describe the rocks statically when we keep to a numerical characterisation of the rock at initial boundary conditions, and dynamically as the response to some perturbation of the system (with production as an example). There are other definitions of static and dynamic properties (properties that can be changed versus those that cannot), but the above are followed here. Permeability, which is only manifested during an experiment in response to a perturbation, is considered static when it refers to the initial permeability of the system prior to the experiment.
Field Development: Field development plans are based on computer simulation models of the field. These models, consisting of multi-million cells, are built by geologists for simulation by engineers. The resolution of geological models is often higher than can be accommodated by the flow simulation (particularly when complex fluids are involved). There is usually a reduction of geological detail as the cells are upscaled in order to reduce the number of cells for computational expediency. The fundamental challenges being considered here are how detailed the original model should be and how the key properties are preserved in the model through this upscaling. Models are built prior to reservoir development, are updated during the development, continue in use through the planning of any improved-oil-recovery strategies, and remain the key field development tool up until field abandonment.
2. Origin of reservoir heterogeneity
Clastic reservoirs are those made up of particles of rock that are the accumulated products of erosion, transport and deposition (Fig. 1). Broadly speaking these are usually sands and clays, and these types of reservoirs contain a significant proportion of the world's reservoirs. Within the clastic reservoir family are those reservoirs resulting from deposition by rivers: fluvial reservoirs. Fluvial reservoir types are very varied, with braided and meandering being important end members. Depending on the slope, sediment supply, nature of the floodplain, rainfall and proximity to mountainous sediment sources, the resulting reservoir architecture will vary from low net:gross, meandering up to high net:gross, braided (Fig. 2). This describes the macroscopic variation, but within the channel sand bodies there are additional textural variations at various meso- and microscopic scales. Each of these scales will have a potential impact on the hydrocarbon recovery.
Role of texture in controlling reservoir properties. In sandstones, primary texture exerts a large influence on reservoir properties [5]. Primary texture is measured by grain size, grain shape, grain sorting, clay content, etc. Well-sorted and rounded sands in sandstone reservoirs tend to have good porosity and high permeability. Fine-grained sands of the same uniform shape and sorting will also have good porosity, but lower permeability. Poorly sorted sandstone, with a variation in grain shape and size, will tend to have low porosity and permeability.
Figure 1. Modern analogue for a fluvial system showing characteristic channel complexity, Longcraigs Beach, Scotland
Figure 2. Various fluvial reservoirs at outcrop. Left: low net:gross channelized, meandering system from the Tertiary, near Huesca, Spain. Right: high net:gross braided system from the Devonian, Scotland.
Where petrophysical heterogeneity in sandstones is present, it is often due to the spatial distribution of these lithologies and their related properties, which is why outcrop-analogue studies remain a useful tool to define geobody geometries for reservoir modelling.
Use of Outcrop Analogues: The industry uses analogue reservoirs, outcropping at the surface, where relevant geological objects (geobodies) can be measured and their aspect ratios and stacking patterns determined (Fig. 3). Some very good outcrop analogues of fluvial reservoirs have been studied by the industry over the last 20 years [examples can be found over this period in 6,7], with outcrops in the UK (Yorkshire, Devon), Spain (S. Pyrenees), Portugal and the US (Utah) being used for reservoir studies in the North Sea, North Africa and Alaska. Geological age is not the critical consideration when it comes to choosing an analogue; net:gross (sand proportion in the system), channel size, bed load, flood plain, stacking patterns, climate, etc. are more important criteria in selecting a good outcrop analogue.
Figure 3. Outcrop of a fluvial system in Spain (near Huesca) where the average thickness of the channels was measured as 5.3 m, with an average aspect ratio of 27:1 (with acknowledgement to the group of students who collected the data). Whilst only medium net:gross (35-45 %), the channels are laterally stacked, and within these layers the connectivity will be greater than expected from a simple model with a random distribution of sandbodies.
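From the Figure 3 outcrop statistics, the implied average channel width follows directly from the thickness and aspect ratio; a trivial but useful modelling number:

```python
# Average channel width implied by the Figure 3 outcrop measurements.
thickness_m = 5.3
aspect_ratio = 27.0          # width : thickness
width_m = thickness_m * aspect_ratio
print(round(width_m, 1))     # → 143.1 (metres)
```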
3. Measures of reservoir heterogeneity
Heterogeneity usually refers to variation of a property above a certain threshold (so as to distinguish it from homogeneity). In reservoirs, the property we usually consider when referring to heterogeneity is that which controls flow, namely permeability. Porosity, which controls the hydrocarbon in place, tends by contrast to be relatively homogeneous in the fluvial reservoirs we are considering in this chapter. Heterogeneity can be described by statistical criteria from a sample data set. The petroleum geoengineer's starting point is often an analysis of heterogeneity. This determines what level of detail might be required to characterise the flow process. Heterogeneity is sometimes responsible for anisotropy, but not always, so we also have to consider this aspect of a reservoir's characteristics.
Porosity and permeability distributions. Heterogeneity is often first seen in a review of the histograms of the porosity and permeability data, and these should always be part of an initial reservoir analysis.
Porosity data tends to form a symmetrical or normal distribution. Permeability on the other
hand is often positively skewed, bimodal and usually highly variable (Fig. 4) [8]. It is a
mistake to think of permeability as being always log-normally distributed (as is often
implied in the literature) and the type of distribution should always be checked. Sometimes
the distributions are clearly bi- (tri- or even multi-) modal and this aspect will require
further analysis. Ideally each important element of the reservoir should be described by
characteristic porosity and permeability distributions and these can be used in the
geological (i.e., geostatistical) modelling. Geostatistical (i.e., pixel) modelling is often
performed in a Gaussian domain (Sequential Gaussian Simulation) and the skewed
distributions are first transformed to Gaussian to make this technique most effective.
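The Gaussian transform mentioned above can be sketched as a rank-based normal-score transform: each value's plotting-position percentile is mapped through the inverse standard-normal CDF. The permeability values are illustrative, not data from the text:

```python
from statistics import NormalDist

perm_md = [0.02, 0.5, 1.2, 8.0, 25.0, 90.0, 441.0]  # skewed sample, mD
nd = NormalDist()
n = len(perm_md)
order = sorted(range(n), key=lambda i: perm_md[i])
scores = [0.0] * n
for rank, i in enumerate(order):
    scores[i] = nd.inv_cdf((rank + 0.5) / n)  # normal score for each value

# The transformed data are symmetric about zero and ready for Sequential
# Gaussian Simulation; the transform is inverted after simulation.
print([round(s, 2) for s in scores])
```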
Figure 4. Porosity (left; decimal), permeability (centre; mD) and log-permeability (right; mD) distributions for a fluvial data set. Porosity tends to a normal distribution, with permeability being bimodal in the log domain (often this relates to properties of flood plain and channels)
Variation between Averages. Another useful indication of heterogeneity is apparent in differences between the arithmetic, geometric and harmonic averages (Table 1). For porosity these are often quite similar, but for permeability they can differ in fluvial reservoirs by orders of magnitude! Different averages have different applications in reservoir engineering and are often used as a way of upscaling the directional flow properties (in the static model) in different directions.
Average       UK                North Africa
              Poro     Perm     Poro     Perm
Arithmetic    0.167    441      0.108    25.7
Geometric     0.154    23.7     0.094    2.78
Harmonic      0.138    0.263    0.072    0.009
Table 1. Porosity (decimal) and permeability (mD) averages in fluvial sandstones. Left: Triassic
Sherwood Sandstone, UK (Fig. 4); Right: Triassic Nubian Sandstone, North Africa. Note the relatively
small differences between average porosities contrasting with the order-of-magnitude variation between
average permeabilities. This is further evidence of extreme permeability heterogeneity in these sandstones.
The arithmetic average is used as an estimator of horizontal permeability, and the harmonic
for the vertical permeability, in horizontally layered systems. Where layered systems have
different orientations (i.e., significant dip) then the averages need to be rotated accordingly.
In the case of a random system, the geometric average is used in both horizontal and
vertical directions. A truly random system, without any dominant directional structure, can
also be assumed to be isotropic. Use of these averages for upscaling comes with some
caveats: the assumption that each data point carries the same weight (i.e., comes from a layer
of the same thickness) and that only single-phase flow is being considered. In many fluvial
reservoirs, the system is neither nicely layered nor truly random, which requires careful
treatment of the averages.
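The three averages in Table 1 can be reproduced with a few lines of Python (a minimal sketch assuming equally weighted samples, i.e. layers of equal thickness, and single-phase flow, as the caveats above require):

```python
import math

def averages(values):
    """Arithmetic, geometric and harmonic averages of equally weighted
    samples (e.g. permeabilities from layers of equal thickness)."""
    n = len(values)
    arith = sum(values) / n
    geo = math.exp(sum(math.log(v) for v in values) / n)
    harm = n / sum(1.0 / v for v in values)
    return arith, geo, harm

# Layered-system rule of thumb: arithmetic for horizontal flow,
# harmonic for vertical flow, geometric for a random system.
k = [0.263, 23.7, 441.0]  # mD, spanning orders of magnitude
a, g, h = averages(k)
# for positive data, always arithmetic >= geometric >= harmonic
```

The widening gap between the three averages as the input spreads over orders of magnitude is exactly the heterogeneity signal seen in Table 1.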
Coefficient of Variation (Cv). There are a number of statistical measures which are used in
reservoir engineering to quantify the heterogeneity. The variance and the standard
deviation are the well-known ones used by statisticians. However, in reservoir
characterisation we tend to use the normalised standard deviation (standard deviation
divided by the arithmetic average) as one such measure of heterogeneity, and this is known
as the Coefficient of Variation (Table 2) [8]. Another measure of heterogeneity, whose use is
probably limited to petroleum engineering, is the Dykstra-Parsons coefficient (VDP); this
assumes a log-normal distribution (of permeability), so it tends to be used in modelling
studies when a log-normal distribution is required as input to the simulation process. The
log-normal distribution, as discussed above, is not always found for
permeability in reservoir rocks, and therefore care has to be taken when using VDP.
         UK                North Africa
         Poro     Perm     Poro     Perm
S.D.     0.061    972      0.046    58.9
Cv       0.37     2.20     0.425    2.29
Table 2. Heterogeneity in porosity (decimal) and permeability (mD) in fluvial sandstones. Left: Triassic
Sherwood Sandstone, UK (Fig. 4); Right: Triassic Nubian Sandstone, North Africa. Note that porosity
heterogeneity is low (though relatively high for sandstones) whereas permeability is very
heterogeneous, supporting the trend seen in the averages. Note that these two Triassic reservoirs on different
continents have remarkably consistent poro-perm variability.
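The Cv values in Table 2, and a Dykstra-Parsons estimate, can be computed as follows (a minimal sketch; the closed-form VDP = 1 - exp(-sigma) holds only under the log-normal assumption the text cautions about):

```python
import math
import statistics

def coefficient_of_variation(values):
    """Cv = sample standard deviation / arithmetic mean."""
    return statistics.stdev(values) / statistics.mean(values)

def dykstra_parsons_lognormal(perms):
    """V_DP estimated under a log-normal assumption for permeability:
    V_DP = 1 - exp(-sigma), where sigma is the std dev of ln(k).
    Only valid if the log-normal assumption holds (check first!)."""
    sigma = statistics.stdev(math.log(k) for k in perms)
    return 1.0 - math.exp(-sigma)
```

For a homogeneous sample both measures are zero; a fluvial permeability set spanning orders of magnitude readily gives Cv above 1 and VDP approaching 1.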
Lorenz Plot (LP): The Lorenz Plot (which is more widely known in economics as the GINI
plot) is a specialised reservoir characterisation plot that shows the relative distributions of
porosity and permeability in an ordered sequence (of high-to-low rock quality, essentially
determined by the permeability, Fig. 5 right) and can be quantified through the Lorenz
Coefficient. Studying how porosity and permeability jointly vary is important. In Fig. 5 (left)
80% of the flow capacity (transmissivity) comes from just 30% of the storage capacity
(storativity).
Figure 5. Example Lorenz and Modified Lorenz Plots for a fluvial data set (Fig. 4). The LP (right) shows
high heterogeneity as the departure of the curve from the 45° line. The MLP (left) shows the presence of
speed zones at various points (arrowed) in the reservoir. If the MLP is close to the 45° line then that is
perhaps an indication of randomness, which can also be checked by variography.
The industry often uses cross plots of porosity and permeability (discussed further below),
which can focus the viewer on average porosity-permeability relationships,
but the LP should appear in every reservoir characterisation study as it emphasises the
extremes that so often identify potential flow problems.
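The LP construction and its Lorenz coefficient can be sketched as follows (illustrative Python; equal sample thickness is assumed unless thicknesses are supplied):

```python
def lorenz_curve(porosity, permeability, thickness=None):
    """Lorenz Plot coordinates: samples ordered from best to worst
    permeability; x = cumulative storage capacity (phi*h),
    y = cumulative flow capacity (k*h)."""
    n = len(porosity)
    h = thickness or [1.0] * n
    rows = sorted(zip(permeability, porosity, h), reverse=True)
    total_flow = sum(k * t for k, p, t in rows)
    total_store = sum(p * t for k, p, t in rows)
    xs, ys, cf, cs = [0.0], [0.0], 0.0, 0.0
    for k, p, t in rows:
        cf += k * t
        cs += p * t
        xs.append(cs / total_store)
        ys.append(cf / total_flow)
    return xs, ys

def lorenz_coefficient(xs, ys):
    """Twice the area between the curve and the 45-degree line
    (trapezoidal rule); 0 = homogeneous, approaching 1 = extreme."""
    area = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))
    return 2.0 * (area - 0.5)
```

A homogeneous interval plots on the 45° line (coefficient near zero); a permeability set spanning orders of magnitude bows the curve sharply upward, reproducing the "80% of flow from 30% of storage" behaviour noted above.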
Modified Lorenz Plot (MLP). In a useful modification of the original Lorenz Plot, the
cumulative plot is re-ordered by original location, which reveals the positions of the extremes
(baffles and thief or speed zones). This plot (Fig. 5, left) has a similar profile to the
production log and hence is an excellent tool for predicting inflow performance. The LP and
MLP used in tandem can provide useful insights into longer-term reservoir sweep
efficiency and oil recovery.
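The only change from the ordinary Lorenz Plot is that the accumulation runs in original depth order rather than sorted order, so the sketch is (illustrative Python, equal thickness assumed unless given):

```python
def modified_lorenz_curve(porosity, permeability, thickness=None):
    """Modified Lorenz Plot: cumulative flow vs storage capacity
    accumulated in the original depth order, so steep segments
    (speed/thief zones) and flats (baffles) appear at their true
    stratigraphic position."""
    n = len(porosity)
    h = thickness or [1.0] * n
    total_flow = sum(k * t for k, t in zip(permeability, h))
    total_store = sum(p * t for p, t in zip(porosity, h))
    xs, ys, cf, cs = [0.0], [0.0], 0.0, 0.0
    for p, k, t in zip(porosity, permeability, h):
        cf += k * t
        cs += p * t
        xs.append(cs / total_store)
        ys.append(cf / total_flow)
    return xs, ys
```

A layer with permeability far above its neighbours produces the steepest segment of the curve, i.e. the speed-zone signature arrowed in Fig. 5.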
Anisotropy vs Heterogeneity. With heterogeneity, sometimes comes anisotropy,
particularly if the heterogeneity shows significant correlation structure. Correlation is
measured by variography and where correlation lengths are different in different directions
this can identify anisotropy. Correlation in sedimentary rocks is often much longer in the
horizontal direction, and this gives rise to typical kv/kh anisotropy. In fluvial reservoirs, with common
cross-bedding, the anisotropy often relates to small scale structure caused by the lamination
but it is the larger scale connectivity that dominates (see the Exercise 1 in reference [1] for
further consideration of this issue).
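Correlation lengths are measured from experimental variograms; a minimal 1-D sketch for regularly spaced data (e.g. permeability samples along a well) is:

```python
def semivariogram(values, max_lag):
    """Experimental semivariogram for regularly spaced 1-D data:
    gamma(h) = sum of squared differences at lag h / (2 * pair count).
    gamma(h) rises until the correlation length (range) is reached;
    comparing ranges in different directions identifies anisotropy."""
    gammas = []
    for lag in range(1, max_lag + 1):
        pairs = [(values[i] - values[i + lag]) ** 2
                 for i in range(len(values) - lag)]
        gammas.append(sum(pairs) / (2 * len(pairs)))
    return gammas
```

For uncorrelated data the curve is flat from the first lag (pure nugget); for correlated or trending data it climbs with lag, and the lag at which it levels off is the correlation length.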
Rarely does significant anisotropy result from grain anisotropy alone, as has been suggested
by some authors. Anisotropy is a scale-dependent property: smaller volumes tend to be
isotropic (a tendency seen in core plugs), whereas at the formation scale bedding
fabric tends to give more difference between kv and kh and therefore greater anisotropy. In
fluvial systems, the arrangement of channel and inter-channel elements can have a
significant effect on anisotropy. In high net:gross fluvial systems, well-intercalated channel
systems will have higher tendency to be isotropic (geometric average) whilst preservation of
more discrete channels will exhibit more anisotropic behaviour (arithmetic and harmonic
average permeability). In this Chapter we are not considering natural or induced fractures
which can increase anisotropy.
4. Reservoir rock typing
Petrophysicists use the term Rock Typing in a very specific sense: to describe rock
elements (core plugs) with consistent porosity-permeability (i.e., constant pore size to pore
throat) relationships. These relationships are demonstrated by clear lines on a poro-perm
cross-plot and by similar capillary pressure height functions. There are various ways these
relationships can be captured (the literature includes references to RQI, FZI, Amaefule,
Pore radius, Winland, Lucia, RRT, GHE, Shenawi), and each method directs the
petrophysicist towards a consistent petrophysical sub-division of the reservoir interval. In
Fig. 6 the coloured bands follow a consistent GHE approach based on the Amaefule FZI/RQI
equation [9]. It matters not so much which rock typing method is used, but that a rock
typing method is used as the basis for reservoir
description. Geologists and petrophysicists need to make these links work for an effective
reservoir evaluation project. Special core analysis data are most useful when collected in a
rock typing framework.
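The Amaefule RQI/FZI relationships mentioned above take the following usual form (k in mD, porosity as a fraction; helper names are ours):

```python
import math

def rqi(k_md, phi):
    """Reservoir Quality Index (microns): RQI = 0.0314 * sqrt(k/phi)."""
    return 0.0314 * math.sqrt(k_md / phi)

def fzi(k_md, phi):
    """Flow Zone Indicator (Amaefule et al. [9]): FZI = RQI / phi_z,
    where phi_z = phi / (1 - phi) is the pore-to-grain volume ratio.
    Samples with similar FZI share a pore-size/pore-throat
    relationship, i.e. they belong to one rock type."""
    phi_z = phi / (1.0 - phi)
    return rqi(k_md, phi) / phi_z
```

Plotting RQI against phi_z on log-log axes places each rock type on a unit-slope line whose intercept is its FZI; the GHE approach bands the cross-plot using fixed FZI boundaries.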
Property variation in poro-perm space. In fluvial reservoirs, it is very common to have a
wide dispersion of porosity and permeability (Fig. 6) due to the poorly sorted, immature
nature of these sands. Well sorted sands will have higher porosity and permeability than
their poorly sorted neighbours. Coarse sands tend to have less primary clay content. The
presence of mica and feldspar can also affect the textural properties, especially if the
feldspar breaks down into clay components. Clays are more common in fine and poorly
sorted sandstones. The variation of properties within fluvial systems is often a result of
primary depositional texture. Diagenetic effects, especially where associated with
calcrete (carbonate cement that is formed by surface evaporation and plant root influence
in arid fluvial environments) or with calcrete reworked into channel base (lag) deposits, can
modify the original depositionally-derived properties (such as well cemented lag
intervals) but perhaps do not change the overall permeability patterns. For this reason
channel elements are often detected in fluvial reservoirs and are measured at outcrop for
use in fluvial reservoir modelling studies.
Figure 6. Porosity and permeability heterogeneity in fluvial sandstones. Left: Triassic Sherwood
Sandstone, UK; Right: Triassic Nubian Sandstone, North Africa.
Link between geology and engineering. Rock types are a key link between geology and
engineering as they are the geoengineering link between the depositional texture, the oil in
place and the ease with which water can imbibe and displace oil. In fluvial reservoirs, the
presence of many rock types is critical to understanding oil-in-place and the relatively low
recovery factors. Rock types are the fundamental unit of petrophysical measurement in a
reservoir and failure to recognise the range of properties in a systematic framework can
potentially result in the use of inappropriate average properties.
Link between MLP and rock typing. The MLP, if coded by rock type, can also emphasise the
role of some rock types as conduits to flow and of others as potential barriers/baffles to flow [10]. The
link between rock types and heterogeneity is also important in understanding the
plumbing of the reservoir: where are the drains, the speed zones, the thief zones, the
baffles and the storage tanks?
Production logging. Ultimately the proof of what flows and what doesn't flow in a reservoir
comes with the production (i.e., spinner) log. The spinner tool identifies flowing and non-
flowing intervals (by the varying speed of rotation of an impellor in the well stream) and,
when correlated with the MLP, can provide validation that the static and dynamic models are
consistent [11]. If the best, and only the best, rock types are seen to be flowing then there is
evidence of a double matrix porosity reservoir. If there is no correlation, then perhaps this
points to evidence of a fractured (double porosity) system. The well test interpretation
cannot distinguish between the two double porosity cases but the production log perhaps
can. Of course, when it comes to interpreting downhole data there are also downhole
environment considerations (such as perforation location, perforation efficiency,
water or gas influx, etc.) to be taken into account. The geoengineering approach of
calibrating a static model with a dynamic model for key wells (where there is perhaps core,
log, production log and test data) and iterating until there's a match will have benefits when
it comes to subsequent history matching of field performance.
Core to Vertical Interference Test comparison (kv/kh). Where vertical
interference data are also available, which is generally quite rare, these can be used to
calibrate models of anisotropy [12]. The kv/kh ratio is often one of the critical reservoir
performance parameters, but rarely is there a comprehensive set of measurements. Core
plug scale kv/kh measurements are not always helpful as they are often contaminated
by local heterogeneity at that scale. Vertical plugs are often sampled at different,
usually wider, spacings compared with horizontal plugs, which means the critical
elements (which tend to be thin) controlling the effective vertical permeability are often
missed. In fluvial reservoirs, these are often the overbank or abandonment shale intervals.
Vertical plug measurements in shales are often avoided for pragmatic reasons (measuring
low permeability takes time and the material often doesn't lend itself to easy
plugging). The effective kv/kh parameter needed for reservoir performance
prediction often has to be an upscaled measurement. Choosing the interval over which
to conduct a representative vertical interference test is an important consideration if that
route is chosen.
5. Dynamic well testing
Well testing is achieved by perforating, producing and shutting-in the well for a relatively
short period of time, whilst recording the flow rates and the bottom-hole pressures (as the
estimate of reservoir pressure). The practical aspects are covered elsewhere in this book; here we
consider the role of well test data in understanding the performance of fluvial reservoirs.
The way that fluid flows towards the well bore following a perforating job, and the paths
that the pressure drop takes in the reservoir, are important considerations. Fluvial
reservoirs are not homogeneous, isotropic sands of constant thickness. They are systems
with highly variable internal properties (showing many orders of magnitude permeability
variation for the same porosity). The paths (comprising both horizontal and vertical
components) of the pressure disturbance away from the well will depend very much on the
3D arrangement of the sand bodies and the floodplain characteristics: the reservoir
plumbing (Fig. 7) [13]. In this respect, fluvial reservoirs are some of the more complex
(clastic) reservoirs encountered.
The diffusion of the pressure response into the reservoir is controlled by the diffusivity
constant. In heterogeneous formations such as fluvial reservoirs this assumed constant isn't
actually constant and varies with rock quality through the tested volume. In an ideal case
the arithmetic average would be expected in the initial period of the test and the geometric
average at later stages, for a completely random system (Fig. 8). In reality, there are a number of less
than ideal situations in the geology. Channels are not always big enough for the first
stabilisation to be seen clearly, and the system is not absolutely random, so the geometric
average is not always reached in the length of the test. These problems give rise to many
well test interpretation challenges in fluvial reservoirs.
Cross-flow and comingled flow. When a reservoir is said to have cross-flow this means that
the fluid passes in response to pressure changes between layers of different properties in the
reservoir. This effect occurs in all directions, vertically and laterally, rather than in simple
uniform radial directions from the well.
Figure 7. A simulation showing the location of the most sensitive parts of the formation at a particular
time to the pressure response measured at the well. This effectively illustrates complex pressure
diffusion (rather than simple radial flow) in a fluvial reservoir [13].
Figure 8. An ideal pressure derivative showing two stabilisations: the first would be expected to give
the arithmetic average and the second the geometric average. Remember that the difference between
the arithmetic and geometric averages in fluvial reservoirs is an order of magnitude or more (Table 1).
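The stabilisations in Fig. 8 are read from the log-time pressure derivative; a minimal sketch of the Bourdet-style weighted central-difference scheme commonly used to compute it (function name is ours) is:

```python
import math

def bourdet_derivative(t, dp):
    """Pressure derivative dP' = dP/d(ln t), estimated with a weighted
    central difference in log time (Bourdet-style). Flat levels
    (stabilisations) of dP' on a log-log plot against t indicate
    radial flow at the corresponding average permeability."""
    deriv = []
    for i in range(1, len(t) - 1):
        x0, x1, x2 = math.log(t[i - 1]), math.log(t[i]), math.log(t[i + 1])
        left = (dp[i] - dp[i - 1]) / (x1 - x0)     # backward slope
        right = (dp[i + 1] - dp[i]) / (x2 - x1)    # forward slope
        deriv.append((left * (x2 - x1) + right * (x1 - x0)) / (x2 - x0))
    return deriv
```

For ideal infinite-acting radial flow the pressure change is linear in ln(t), so the derivative is a constant, which is exactly the flat stabilisation level in the figure. In practice a smoothing window over more than one neighbouring point is used to suppress gauge noise.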
In a commingled reservoir the layers communicate only through the well bore; in the
reservoir there is no flow between the layers. This situation is much more common in
more layered reservoirs with laterally extensive shales between sheet-like (e.g. turbidite)
sand bodies. Such situations can occur in fluvial systems (ephemeral channel sands
sandwiching sheetflood deposits and interbedded shales) but perhaps as the exception
rather than the rule.
High net:gross fluvial reservoirs often show cross-flow in their internal drainage, and
cross-flow reservoirs are recognised as the most challenging for enhanced oil recovery.
Gravity means that water slumps, or gas overrides, more easily in cross-flow reservoirs.
Shutting off water influx or gas in a producing well is ineffective as there are no laterally
extensive reservoir barriers present to base this strategy upon.
In homogeneous formations. Where the heterogeneity is low (Cv less than 0.5), the
effects of cross flow are mitigated. Low heterogeneity fluvial sands can occur where the
sands are relatively mature and far from source, which tends to be in more distal
locations. In these locations wind-blown sands can also occur, and these are usually more
uniform. In these situations the well test will see the arithmetic (equal to the geometric) average
permeability.
In heterogeneous formations. Where the heterogeneity is moderate (Cv between 0.5
and 1.0), these reservoirs might be dominated by cross bedding (not identified in lower-heterogeneity,
more homogeneous reservoirs), which will induce strong capillary trapping. The well
test might show a reduced geometric average permeability in this case, or the square root of the
product of kx and ky where there is significant lateral (point bar) or downstream accretion-derived
anisotropy.
In highly heterogeneous formations. Where the heterogeneity is very high (Cv greater
than 1.0), as is often the case with braided fluvial reservoirs, the most extreme
cross flow can be seen. These reservoirs are often revealed by speed zones (drains) in the production
log profile. Cross-flow introduces a ramp flow regime which can be confused with parallel (i.e.,
channel) boundaries. The ramp is seen best when the vertical permeability is effectively
zero and the second stabilisation converges at the harmonic average within the
commingled layers (Fig. 9, lower). The geometric average is seen when there is good
connectivity, and any channels near the well give rise to a geoskin response (Fig. 9, top).
In the middle case the restriction caused by the limits of the channels near the well is
overcome at later time by increased connectivity: this is the geochoke response
(Fig. 9, middle). These responses can be confused with the effects of faults (which may
also be present and add to the confusion!).
Reservoir boundaries. The detection of reservoir boundaries is an important aspect of the
well test interpretation. In relatively uniform sand properties then boundaries might be
readily detected. In highly heterogeneous reservoirs cross flow effects might be
misinterpreted as faults. It is often commented that well tests in fluvial reservoirs tend to
show faults a short distance (ca. 40 ft) from the well. These may be channel margins or, perhaps more
likely, subtle cross flow effects. The degree of heterogeneity is an important consideration
in deducing boundaries (either sedimentological or structural) from internal cross flow
effects. The impact of the two interpretations on the approach taken to reservoir modelling
will be significant.
Figure 9. Various connectivity arrangements in fluvial reservoirs (between channels and
floodplain) and equivalent schematic pressure derivative responses to the scenarios [13]. With
subtle changes in lateral and vertical connectivity the response changes from a geoskin response (top) to
a geochoke response (middle) or a ramp response (lower).
Reservoir limit tests. Fluvial systems tend to produce sand bodies that are limited in extent
(point bars). These are characterised by unit-slope depletion on the well test response [14].
Point bars are often of a particular geometry (ca. 3 times as long as wide) in which linear
flow will not develop. From the depletion, connected reservoir volumes can be determined, and these
will be small if detected during a short (i.e., 24 hr) production test. Relationships
between thickness, width and volume for point bar sandstones have been published. Of course, in
some fluvial reservoirs a mixture of channel body boundaries and fault-induced boundaries
may be present.
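The unit-slope depletion corresponds to pseudo-steady-state material balance, from which the connected volume follows; a minimal sketch (illustrative function name; single-phase flow and constant total compressibility assumed, rate expressed at reservoir conditions):

```python
def connected_pore_volume(q_rb_per_day, ct_per_psi, dpdt_psi_per_day):
    """Reservoir-limit (depletion) analysis. During pseudo-steady state
    the pressure falls linearly with time and material balance gives
        Vp = q*B / (ct * |dp/dt|)   [reservoir barrels],
    where q_rb_per_day is the downhole (reservoir) rate q*B, ct the
    total compressibility (1/psi) and dp/dt the depletion rate."""
    return q_rb_per_day / (ct_per_psi * abs(dpdt_psi_per_day))

# e.g. 1000 rb/d with ct = 1e-5 /psi depleting at 2 psi/day
vp = connected_pore_volume(1000.0, 1e-5, 2.0)  # small connected volume
```

A small computed volume from a short test is consistent with a limited sand body such as a point bar rather than a large connected channel network.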
6. Considering other very heterogeneous reservoirs Carbonate and
fractured
A few words are warranted on other, even more heterogeneous, reservoirs where aspects of
the above will be important to note and where the effects may be even more dramatic.
Very high heterogeneity. Carbonates often have even larger ranges of permeability for
given porosity, and this translates into even higher measures of variability. Sometimes
the presence of vugs is not captured in the core plug data because of their size. This effect
is mitigated by the use of whole core samples but these are also of limited use where very
large vugs are present.
Multiple rock types. Carbonates have many more pore-space creation
mechanisms, often diagenetic in origin, which adds to the complexity. Dissolution, vugs,
stylolites, microporosity and dolomitisation are just a few of the additional geological
phenomena/processes impacting reservoir properties to look out for in carbonates.
Fractures. Carbonate (and occasionally fluvial) reservoirs are often fractured. Detecting
fractures relies on core, image logs and production logs; being careful not to confuse
fractures with high-permeability matrix elements, as discussed above, requires special
attention. Fractures are rarely sampled in core plugs, but where they are, they often stand out as
high permeability, low porosity anomalies.
Well testing considerations. Identification of fractures and boundaries (natural or artificial)
in highly heterogeneous reservoirs might be misleading. Complex double matrix
porosity behaviour, with lateral and vertical cross flow effects, might be confused with
double porosity interpretations. Negative skin is not necessarily a diagnostic signature of a
fractured reservoir: a geoskin can result from the presence of high-permeability pseudo-
channels, which are present in many fluvial reservoirs.
7. Effect of heterogeneity on oil recovery
Poor areal and vertical sweep leads to poor oil recovery from a reservoir [15]. Fluvial
reservoirs with disconnected channels, or with partially- and variably-connected, vertically and
laterally aggrading sand bodies (the net:gross and the lateral and vertical architectural
stacking patterns are critical in this respect), will have very variable flow paths through the
system. Rarely will the sweep be uniform; more likely it will be fingering, bypassing and
dispersive, leading to high remaining mobile oil [16, 1]. Cross-flow is a problem for gas and
water flooding as there is little to counteract the effects of gravity, and this is often the reason
why WAG works well in high net:gross fluvial reservoirs.
Low net to gross fluvial reservoirs require something very different as connectivity is the
major challenge and infill drilling may be the answer. Where there is good sand continuity
the presence of cross-bedding might impact the capillary trapping of remaining oil.
There is no doubt that fluvial reservoirs are complex and that finding the right engineering
solution will be a painstaking and demanding task. Gravity and capillary forces in the
reservoir, and the viscous-dominated issues in the connectivity of the reservoir to the
producing wells, all have to be overcome.
Figure 10. Potential IOR targets in fluvial systems where high amounts of unrecovered oil remain
[from 16, adapted in 1]. A better understanding of the connectivity issues and the potential habitat of
the unrecovered oil in fluvial reservoirs is a multidisciplinary, geoengineering challenge.
8. Conclusions
All petroleum reservoirs are certainly not of fluvial origin. However, fluvial reservoirs are
good reservoirs in which to study the impact of reservoir heterogeneity, and that's why I chose them to
illustrate this chapter. The reader will have to extrapolate these learnings to other reservoir
systems. Fluvial reservoirs are also good reservoirs in which to demonstrate, and to
understand the relevance of, close integration between the disciplines. Their understanding
and development benefit from such a close integration of geoscience and engineering
technology. Through gaining this understanding the reader and the industry will doubtless
develop improved performance capabilities and thereby engineer higher recovery in these
(and other such complex) systems.
Author details
Patrick W. M. Corbett
Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh, UK
Institute of Geosciences, Universidade Federal do Rio de Janeiro, Brazil
Acknowledgement
The author acknowledges the contributions by many students over the years to his
understanding and many of those are co-authors in the references. Patrick also
acknowledges 17 years of funding from Elf and Total over which period the work on fluvial
reservoirs was a major effort.
Symbols
FZI Flow Zone Indicator
GHE Global Hydraulic Elements
IOR Improved Oil Recovery
kv Vertical permeability
kh Horizontal Permeability
kx,ky Permeability in orthogonal horizontal directions
LP Lorenz Plot
mD Millidarcy
MLP Modified Lorenz Plot
RQI Reservoir Quality Index
RRT Reservoir Rock Type
S.D. Standard Deviation
SG Solution Gas Drive Mechanism
VDP Dykstra-Parsons Coefficient
WAG Water Alternating Gas
WD Water Drive Mechanism
9. References
[1] Corbett, P.W.M., 2009, Petroleum Geoengineering: Integration of Static and Dynamic
Models, SEG/EAGE Distinguished Instructor Series, 12, SEG, 100p. ISBN 978-1-56080-
153-5
[2] Davies, D. K., Williams, B. P. J. & Vessell, R. K.: Models for meandering and braided
fluvial reservoirs with examples from the Travis Peak Formation, East Texas, SPE
24692, 1992.
[3] Corbett, P.W.M., Zheng, S.Y., Pinisetti, M., Mesmari, A., and Stewart, G., 1998; The
integration of geology and well testing for improved fluvial reservoir characterisation,
SPE 48880, presented at SPE International Conference and Exhibition, Beijing, China, 2-6
Nov
[4] Cosentino, L., 2001, Integrated Reservoir Studies, Editions Technip, Paris, 310p.
[5] Brayshaw, A.C., Davies, R., and Corbett, P.W.M., 1996, Depositional controls on
primary permeability and porosity at the bedform scale in fluvial reservoir sandstones,
Advances in fluvial dynamics and stratigraphy, P.A.Carling and M. Dawson (Eds.), John
Wiley and Sons, Chichester, 373-394.
[6] Flint, S.S., and Bryant, I.D., 1993, Geological Modelling of Hydrocarbon Reservoirs and
Outcrop Analogues, International Association of Sedimentologists, Wiley, ISBN
9780632033928, Online ISBN 9781444303957
[7] CIPR, University of Bergen, Norway, accessed 19May 2012,
http://www.cipr.uni.no/projects.aspx?projecttype=12&project=86
[8] Corbett, P.W.M., and Jensen, J.L., 1992. Estimating the mean permeability: How many
measurements do you need? First Break, 10, p89-94
[9] Corbett, P.W.M., and Mousa, N., 2010, Petrotype-based sampling to improved
understanding of the variation of Saturation Exponent, Nubian Sandstone Formation,
Sirt Basin, Libya, Petrophysics, 51 (4), 264-270
[10] Ellabad, Y., Corbett, P.W.M., and Straub,R., 2001, Hydraulic Units approach
conditioned by well testing for better permeability modelling in a North Africa oil field,
SCA2001-50, Murrayfield, 17-19 September, 2001.
[11] Corbett, P.W.M., Ellabad, Y., Egert, K., and Zheng, S.Y., 2005, The geochoke test
response in a catalogue of systematic geotype well test responses, SPE 93992, presented
at Europec, Madrid, June
[12] Morton, K., Thomas, S., Corbett, P.W.M., and Davies, D., 2002, Detailed analysis of
probe permeameter and vertical interference test permeability measurements in a
heterogeneous reservoir, Petroleum Geoscience, 8, 209-216.
[13] Corbett, P.W.M.,Hamdi, H.,and Gurev, H., 2012, Layered Reservoirs with Internal
Crossflow: A Well-Connected Family of Well-Test Pressure Transient Responses,
Petroleum Geoscience, v18, 219-229.
[14] De Rooij, M., Corbett, P.W.M., and Barens, L., 2002, Point Bar geometry, connectivity
and well test signatures, First Break, 20, 755-763
[15] Arnold, R., Burnett, D.B., Elphick, J., Freeley III, T.J., Galbrun, M., Hightower,
M., Jiang, Z., Khan, M., Lavery, M., Luffey, F., Verbeek, P., 2004, Managing
Water From waste to resource, The Technical Review, Schlumberger, v16, no2,
26-41
[16] Tyler, N., and Finley, R.J., 1991, Architectural controls on the recovery of
hydrocarbons from sandstone reservoirs, in Miall, A.D., and Tyler, N., (eds.) The three
dimensional facies architecture of terrigenous clastic sediments and its implications for
hydrocarbon discovery and recovery, SEPM Concepts in Sedimentology and
palaeontology, Tulsa, Ok, 3, 1-5
Section 4
Core Analyses
Chapter 9
Digital Rock Physics for Fast and Accurate
Special Core Analysis in Carbonates
Mohammed Zubair Kalam
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/52949
1. Introduction
Initiatives for increasing hydrocarbon recovery from existing fields include the capability to
quickly and accurately conduct reservoir simulations to evaluate different improved oil
recovery scenarios. These numerical simulations require input parameters such as relative
permeabilities, capillary pressures, and other rock and fluid porosity versus permeability
trends. These parameters are typically derived from Special Core Analysis (SCAL) tests.
Core analysis laboratories have traditionally provided SCAL through experiments
conducted on core plugs. Depending on a number of variables, SCAL experiments can take
a year or longer to complete and often are not carried out at reservoir conditions with live
reservoir fluids. Digital Rock Physics (DRP) investigates and calculates the physical and
fluid flow properties of porous rocks. In this approach, high-resolution images of the rock's
pores and mineral grains are obtained and processed, and the rock properties are evaluated
by numerical simulation at the pore scale.
Comparisons between the rock properties obtained by DRP studies and those obtained by
other means (laboratory SCAL tests, wireline logs, well tests, etc.) are important to validate
this new technology and use the results it provides with confidence. This article shares a
comparative study of DRP and laboratory SCAL evaluations of carbonate reservoir cores.
This technology is a breakthrough for oil and gas companies that need large volumes of
accurate results faster than current SCAL laboratories can normally deliver. The oil and gas
companies can use this information as input to numerical reservoir simulators, fracture
design programs, analytic pressure transient analysis (PTA), etc., which will improve reserve
forecasts, rate forecasts, well placement and completion designs. It can also help with evaluating
options for improved oil recovery, with sensitivity analysis of various options considering the actual
pore-scale rock fabric of each reservoir zone. Significant investment savings can also be realized
using good DRP-derived data compared with conventional laboratory SCAL tests.
2. Background and summary
The main objective is to provide petrophysical and multiphase flow properties, calculated
from 3D digital X-ray micro-tomographic images of the selected reservoir core samples. The
simulations have been conducted on sub-samples (micro plugs) and then upscaled to core
plug scale for direct comparison with experimental data. Typically whole cores are
imaged at dimensions of 11 x 16.5 cm with a resolution of 500 microns, while core plugs
are imaged at 2 to 4 cm with resolutions of 12 to 19 microns. The micro plugs have
dimensions of 1 to 5 mm with resolutions of 0.3 to 5 microns, and with nano-CT one can
look at rock volumes of 50 to 300 microns with resolutions from 50 to 300 nm. In the DRP process, the
results of these increasingly small investigations are then integrated by
upscaling, using either steady-state or geometric methods. Hence, rock properties are
computed from the nano and micro scale to plug scale to whole core scale.
Absolute permeability can be computed using Lattice-Boltzmann simulations, calculation of
Formation Resistivity Factor is based on a solution of the Laplace equation with charge
conservation (the equations were solved using a random walk algorithm) and elastic
properties were calculated by the finite element method. Primary drainage and waterflood
capillary pressure and relative permeabilities are determined from multi-phase flow
simulations on the pore network representation of the 3D rock model. Flow simulation
input parameters were set according to expected wettability conditions.
The first section outlines the basic DRP-based results on reservoir properties determined on
complex carbonates from giant Middle East reservoirs, compared with similar
measurements performed in SCAL laboratories. Section 3 outlines the details of the
calculation process for each of the many reservoir parameters that can be calculated using
DRP, while Section 4 illustrates snapshots of multi-phase flow results on complex carbonates
from the same giant Middle East reservoirs. Capillary pressure, cementation exponent (m),
saturation exponent (n) for both primary drainage and imbibition, water-oil and gas-oil
relative permeability, and elastic properties of carbonates were calculated from DRP. Very
good agreement is obtained between DRP-derived properties and available experimental
data for the studied data set. The results obtained for porosity, absolute permeability,
formation resistivity factor, cementation and saturation exponent are shown in Figures 1
through 5. Calculations of elastic properties have been performed on all reconstructed
samples. The elastic parameters Vs and Vp are reported in Figure 6 and compared to
available literature data.
3. Typical DRP Workflow
3.1. Introduction
In order to meet project objectives, the workflow illustrated in Figure 7 was implemented.
High resolution (19 µm/voxel) micro-CT images of core plugs were first recorded in order
to identify rock types and their distribution within the core plugs, and to select locations for
thin sections and micro-plugs.
Digital Rock Physics for Fast and Accurate Special Core Analysis in Carbonates 203
Comparison of experimental and simulated permeability for all samples (straight line is 1:1 and dotted line is factor 2)
Figure 1. Simulated vs. experimental permeability
Comparison of experimental and simulated porosity for 100+ samples (straight line is 1:1 and dotted line is +/- 3%)
Figure 2. Simulated vs. experimental porosity
Calculated Formation Resistivity Factor (FRF) as a function of total porosity for the e-Core models compared to
available experimental data.
Figure 3. Porosity-FRF correlation
Calculated cementation exponent (m) as a function of total porosity for the e-Core models compared to available
experimental data.
Figure 4. Porosity-Cementation exponent correlation
Calculated saturation exponent (n) for primary drainage displacement as a function of total porosity for the e-Core
models compared to available experimental data.
Figure 5. Porosity-Saturation exponent correlation
Calculated Vs (open symbols) and Vp (colored symbol) as a function of total porosity compared to available literature
data.
Figure 6. Porosity-Vs and Vp correlation
(Figure 6 data series: DRP-Vp; Lab-Vp from Derzhi et al. (2011) and Anselmetti (1994); DRP-Vs; Lab-Vs from Røgen et al. (2005) and Fabricius et al. (2007). Axes: sonic velocity Vp & Vs [km/s] vs. total porosity.)
Project workflow and calculated properties.
Figure 7. Digital rock analyses on carbonate samples
Thin sections were then prepared and analysed for each core plug. Micro-plugs were drilled
and high resolution (8.0 to 0.3 µm/voxel) micro-CT images were recorded, processed and
analysed for each micro-plug, generating 3D digital rock models.
Micro-plug properties have been calculated directly on the digital rock (grid calculations) or
on network representations of the pore space (network based calculations):
Grid based reservoir core properties
- Total porosity φ
- Absolute permeability kabs
- Formation Resistivity Factor FRF and corresponding cementation exponent m
- Elastic moduli assuming isotropy (bulk modulus K, shear modulus µ, Young's modulus
E, Poisson's ratio ν and Lamé's parameter λ) and corresponding acoustic velocities (P-
wave velocity Vp and shear-wave velocity Vs)
Network based flow relations
- Capillary pressure Pc as a function of water saturation Sw
- Relative permeability kr as a function of water saturation Sw
- Resistivity Index RI as a function of water saturation Sw and corresponding saturation
exponent n for both primary drainage and imbibition cycle
Capillary pressure and relative permeability curves were established for the following flow
processes:
2-phase flow
- Oil/water primary drainage to initial water saturation Swi
- Water/oil imbibition to residual oil saturation Sorw
3-phase flow
- Gas/oil drainage at initial water saturation Swi
Resistivity index curves and corresponding saturation exponents were established for
water/oil primary drainage and imbibition.
The last step of the workflow is to upscale micro-plug properties (volumes in the mm³
range) to core plug properties (volumes ranging from ~40 to 100 cm³). Rock types were
identified and micro-plugs representing these rock types were selected. Each core plug
voxel was then assigned to a given rock type or pore space. Corresponding micro-plug and
pore fluid properties were used as input to the calculation of upscaled properties.
The following properties were upscaled:
Petrophysical properties
- Total porosity φ
- Absolute permeability kabs
- Formation Resistivity Factor FRF
- Cementation Exponent m
Flow properties
- Capillary pressure Pc as a function of water saturation Sw
- Relative permeability kr as a function of water saturation Sw
- Resistivity Index RI as a function of water saturation Sw
Capillary pressure, relative permeability and resistivity index curves were generated for the
micro-plug flow processes listed on previous page.
3.2. Imaging
The principle of microfocus X-ray Computed Tomography (µCT) is based on Beer's law, i.e.
the intensity of X-rays is attenuated when passing through physical objects. The attenuated
X-rays are captured by a detector to compose a projection image. A series of projection
images from different angles (0° to 360°) are collected by rotating the object around its axis.
These projection images are processed to generate 2D µCT slices, which subsequently are
input data to construct 3D images of the objects. Each volume element in these 3D images
corresponds to one voxel. The voxel value of the recorded image is proportional to the
attenuation coefficient, which is mainly a function of the density ρ and the effective atomic
number Z of the constituents of the object (Alvarez and Macovski, 1976; Pullan et al., 1981).
3.3. Thin section preparation, sub-sampling
The petrographic study of thin sections cut from the core plugs is an important part of the
workflow. It allows the heterogeneity of the core plugs to be described and examined on the
micro- to macro-scale. Important features are the identification of micro-facies, the
microscopic fabric in each facies type, and the mineralogy of the rock, including diagenetic
features (cementation and secondary porosity). The first step in studying the facies
associations is the analysis of the initial core plug µCT images (at 19 micron/voxel
resolution) to identify the spatial distribution of the various facies, which is the basis for
selecting where to cut the slab for thin section preparation, and also to select sites for drilling
sub-plugs.
All samples are classified according to the Dunham (1962) and Embry & Klovan (1971)
limestone classifications. Several rock types are described with special attention to the grain
types in terms of size, shape and sorting, because these parameters influence the flow
properties. One of the main purposes of the microscopic analysis is to describe the porosity
of the sample. The different pore types are identified using the Choquette and Pray (1970)
porosity classification scheme. Furthermore, the high resolution images of the thin sections
are used to distinguish between carbonate cements and micrite, in addition to identifying
the main minerals (calcite or dolomite). Drilling sites for micro-plugs are also decided based
on the characteristic microfacies types of the sample. The µCT scanning resolution of the
micro-plugs is chosen in accordance with the microscopic analysis, with particular emphasis
on micrite, pore sizes and cementation features.
3.4. Segmentation
3.4.1. Step 1: Cropping and filtering
Scanned µCT images of plugs are cropped into the largest possible undisturbed rectangular
volume, avoiding cracks, uneven edges and gradients towards the outer surface of the
sample. Cropped images are scaled to a voxel size representing approximately the size of
the corresponding micro-plug images. A noise reduction filter is applied.
Scanned images of micro-plugs are cropped to a volume of 1200³ grid cells to avoid
gradients at the side of the image and to allow further processing (size limitations of the
applied software).
3.4.2. Step 2: Histogram Analysis
Each image element carries an 8-bit signal (256 grey values) corresponding to the X-ray
attenuation experienced within its volume. A voxel completely filled by empty pore space
(air) approaches black (0; ρ_air = 0.0012 g/cm³), while a voxel completely filled by a
high density mineral such as pyrite approaches white (255; ρ_pyrite = 4.8 to 5.0 g/cm³).
Voxels completely filled by minerals with lower densities, or voxels filled by various
proportions of minerals and/or pore space, cover the entire grey scale range from black to
white. The grey value consequently does not determine the contents of a voxel uniquely; it
may represent a single mineral, two or more minerals, or porosity and minerals.
An example of a grey value histogram is shown in Figure 8. The histogram itself is shown as
blue diamonds and is displayed on logarithmic scale (to the left). The other two data sets (pink
and yellow) show the first and second order differentials of the grey value counts.
Figure 8. Example grey value histogram.
The following porosities can then be calculated:

φ_vis = Σ_{n=0}^{GV_1} x_n    (1)

φ_µ = Σ_{n=GV_1+1}^{GV_2} x_n (1 − m/(GV_2 − GV_1)),  m = n − GV_1    (2)

φ_res = φ_vis + φ_µ    (3)

where x_n denotes the fraction of the entire image having grey value n, and n and m are
counting variables over the grey value range (n is grey value specific; m is a positive integer
that distributes the micro-porosity evenly).
This analysis underestimates porosity and permeability. Therefore, a practical threshold
needs to be found to decide up to which micro-porosity value voxels are considered pore.
For this purpose, the grey value GV_p is found where:

Σ_{n=0}^{GV_p} x_n = φ_res    (4)
This is illustrated in Figure 9. Thus, the total porosity of the image is calculated according to:

φ_tot = φ_res + Σ_{n=GV_p+1}^{GV_2} x_n (1 − m/(GV_2 − GV_1)),  m = n − GV_1    (5)

where the second term gives the amount of micro-porosity extracted from the image. Note
that the micro-porosity values for individual grey values are the same in eq. 5 as in eq. 2.
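As an illustration, Eqs. (1)-(5) can be implemented directly on a grey-value histogram. This is a minimal sketch, not the authors' code; the names `counts`, `gv1` and `gv2` (for GV_1 and GV_2) are placeholders:

```python
import numpy as np

def porosities_from_histogram(counts, gv1, gv2):
    """Visible, micro and resolved porosity from a grey-value histogram
    (Eqs. 1-3). counts[n] is the number of voxels with grey value n;
    gv1/gv2 bound the micro-porosity grey-value interval."""
    x = np.asarray(counts, dtype=float) / np.sum(counts)  # voxel fractions x_n
    phi_vis = x[:gv1 + 1].sum()                           # Eq. (1): resolved pores
    n = np.arange(gv1 + 1, gv2 + 1)
    m = n - gv1                                           # counting variable m
    phi_mu = (x[n] * (1.0 - m / (gv2 - gv1))).sum()       # Eq. (2): micro-porosity
    return phi_vis, phi_mu, phi_vis + phi_mu              # Eq. (3)

def practical_threshold(counts, gv1, gv2):
    """Practical pore threshold GV_p (Eq. 4) and total porosity (Eq. 5)."""
    x = np.asarray(counts, dtype=float) / np.sum(counts)
    _, _, phi_res = porosities_from_histogram(counts, gv1, gv2)
    gv_p = int(np.searchsorted(np.cumsum(x), phi_res))    # Eq. (4)
    n = np.arange(gv_p + 1, gv2 + 1)
    m = n - gv1                                           # same weights as in Eq. (2)
    phi_tot = phi_res + (x[n] * (1.0 - m / (gv2 - gv1))).sum()  # Eq. (5)
    return gv_p, phi_tot
```

On a synthetic flat histogram (all grey values equally likely) with GV_1 = 50 and GV_2 = 150, this yields φ_res ≈ 0.39 and GV_p = 100.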
Figure 9. Practical pore threshold.
The grey values GV_p and GV_2 give the segmentation thresholds for the micro-plug images.
Three images are prepared according to this segmentation method:
1. representing φ_res and matrix (including remaining micro-porosity)
2. representing φ_res, micro-porosity and matrix
3. representing a coarser image where φ_res and matrix are represented as one phase and
micro-porosity is segmented into 6 different porosity ranges
Image 1 is segmented into a pore network representation (see Bakke and Øren, 1997; Øren
and Bakke, 2002, 2003) which is used to calculate capillary pressure, relative permeability,
resistivity index, and saturation exponent. Image 2 is used to calculate porosity,
permeability, formation resistivity factor, and cementation exponent. Image 3 is used in a
steady-state upscaling routine to include properties calculated for generic models for
micritic material representing the micro-porosity.
The scaled image of the whole core plug is segmented in the same way to identify φ_res as
vugs. However, the remaining matrix of the core plug is segmented into pure solid and 1-3
porosity classes according to the segmentation of the respective micro-plugs extracted from
the core plug.
3.5. Calculations/Simulations
3.5.1. Single-phase properties
3.5.1.1. Porosity
Three different porosities are reported: total porosity (φ_tot), micro-porosity (φ_µ) and
percolating porosity (φ_perc). φ_perc is the fraction of φ_res (see above) that is available to flow,
i.e. accessible from any side of the micro-plug image, expressed as a fraction of the whole
micro-plug volume. The fraction of φ_res that is not available for flow is isolated porosity
(φ_iso). φ_µ is all porosity present in voxels between GV_p and GV_2, expressed as a fraction of
the whole micro-plug volume. φ_perc is only reported for the micro-plug images. φ_tot and φ_µ
are reported for both the micro-plugs and the whole core plug. Some micro-plug images with
the highest resolution can be considered entirely as micro-porosity. The following relations
hold for the porosities of micro-plug and core plug images:
φ_tot,coreplug = φ_res,vugs + f_plug,A · φ_tot,A + f_plug,B · φ_tot,B + f_plug,C · φ_tot,C
φ_tot,plug = φ_res + φ_µ
φ_res,plug = φ_perc + φ_iso    (6)

where f is the volume fraction of each rock type.
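The first relation in Eq. (6) is a volume-weighted mixing rule; a one-function sketch (function and argument names are illustrative):

```python
def upscaled_total_porosity(phi_res_vugs, fractions, phi_tot_types):
    """Core-plug total porosity via Eq. (6): resolved vug porosity plus
    the volume-fraction-weighted total porosities of the rock types
    (A, B, C, ...) present in the plug."""
    return phi_res_vugs + sum(f * p for f, p in zip(fractions, phi_tot_types))
```

For example, 2% vug porosity plus rock types with porosities 0.2, 0.1 and 0.3 occupying 50%, 30% and 20% of the volume gives a total porosity of 0.21.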
3.5.1.2. Absolute permeability
A Lattice-Boltzmann method is applied to solve the Stokes equation in the uniform grid model.
Flow is driven either by a constant body force or a constant pressure gradient through the
model. Permeability is calculated in three orthogonal directions separately. Sides
perpendicular to the flow direction are closed during each directional calculation (no-flow
boundary conditions). In this study, absolute permeability is calculated using a constant
body force because this setting delivers more accurate results when model resolution is
sufficient and porosity is relatively high. Averages of the three directional calculations are
reported for each model realization. For further details see Øren and Bakke (2002).
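However the steady flow field is obtained (here by Lattice-Boltzmann), the permeability itself follows from Darcy's law, k = q·µ·L/(A·ΔP). A minimal helper in SI units (an illustrative sketch, not the simulator's code):

```python
def darcy_permeability(q, mu, length, area, dp):
    """Absolute permeability (m^2) from Darcy's law.
    q: volumetric flow rate (m^3/s), mu: fluid viscosity (Pa.s),
    length: sample length (m), area: cross-section (m^2),
    dp: pressure drop across the sample (Pa)."""
    return q * mu * length / (area * dp)
```

A result of 1e-12 m² corresponds to roughly 1 darcy.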
3.5.1.3. Formation resistivity factor
The steady state electrical conduction in a brine saturated rock, from which the formation
resistivity factor (F) is obtained, is governed by the Laplace equation

∇·J = 0,  J = −σ_w ∇u    (7)

subject to the boundary condition n·∇u = 0 on the solid walls (i.e. insulating walls). Here J
is the electrical current density, σ_w is the electrical conductivity of the fluid that fills the pore
space, u is the potential or voltage and n is the unit vector normal to the solid wall. Numerical
solutions of the Laplace equation are obtained by a random walk algorithm or by a finite
difference method (Øren and Bakke, 2002).
The effective directional conductivities σ_i, i = x, y, z, are computed by applying a potential
gradient across the sample in the i-direction. The directional formation resistivity factor F_i is
the ratio of the fluid conductivity to the effective conductivity, F_i = σ_w/σ_i. We define the
average formation resistivity factor F as the harmonic mean of the direction dependent
formation factors. The cementation exponent m is calculated from F and the sample porosity
using Archie's law, F = φ^(−m).
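The averaging and the Archie inversion described above can be sketched as follows (illustrative names; conductivities in any consistent unit):

```python
import math

def formation_factor_and_m(sigma_dir, sigma_w, phi):
    """Average formation resistivity factor F and cementation exponent m.
    sigma_dir: effective conductivities in the x, y, z directions,
    sigma_w: brine conductivity, phi: porosity (fraction).
    F_i = sigma_w / sigma_i; F is the harmonic mean of the F_i;
    Archie's law F = phi**(-m) then gives m."""
    F_dir = [sigma_w / s for s in sigma_dir]
    F = len(F_dir) / sum(1.0 / f for f in F_dir)   # harmonic mean of F_i
    m = -math.log(F) / math.log(phi)               # invert F = phi**(-m)
    return F, m
```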
The formation factor and cementation exponent initially reported were approximately 10-15%
greater than the experimental data. The F and m values reported in the previous version were
calculated as described above, i.e. by assuming that only the resolved porosity contributes to
the conductivity. Micro-porosity present in the micrite phase was thus treated as insulating
solid. In the second version, we accounted for the conductivity of the micrite phase by
assigning a finite conductivity σ_mic to micrite voxels using Archie's law, σ_mic = σ_w (φ_mic)^(m_mic),
where φ_mic and m_mic are the porosity and cementation exponent of the micrite phase.
3.5.1.4. Elastic moduli
Pore-scale modeling of elastic properties
Numerical code
The finite element method described by Garboczi and Day (1995) has been implemented for
calculation of elastic properties. The method uses a variational formulation of the linear
elastic equations and finds the solution by minimizing the elastic energy using a fast
conjugate-gradient method. The results are valid for quasi-static conditions or at frequencies
which are sufficiently low such that the included pore pressures are in equilibrium
throughout the pore space (Arns et al., 2002).
The effective bulk and shear moduli are computed assuming isotropic linear elastic
behavior. Vp and Vs are subsequently calculated using the simulated effective elastic
moduli and the effective density ρ according to:

Vp = √((K + 4µ/3)/ρ)    (8)

Vs = √(µ/ρ)    (9)
Inputs to the calculations are:
- A three-dimensional representation of the rock microstructure (a digital rock sample)
- Density and elastic properties (K and µ) of each mineral composing the rock matrix
- Density and elastic properties (K and µ) of the fluid present in the pore space
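Eqs. (8)-(9) translate directly into code; a minimal sketch in SI units:

```python
import math

def velocities(K, mu, rho):
    """P- and S-wave velocities (m/s) from isotropic effective moduli.
    K: bulk modulus (Pa), mu: shear modulus (Pa), rho: density (kg/m^3)."""
    vp = math.sqrt((K + 4.0 * mu / 3.0) / rho)  # Eq. (8)
    vs = math.sqrt(mu / rho)                    # Eq. (9)
    return vp, vs
```

For calcite-like moduli (K ≈ 70 GPa, µ ≈ 30 GPa, ρ ≈ 2710 kg/m³) this gives Vp ≈ 6.4 km/s and Vs ≈ 3.3 km/s, in the range plotted in Figure 6.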
Pore-scale versus experimental data
Scale and sample selection
Acoustic measurements are generally performed on samples with volumes ranging from 40
to 100 cm³. Digital rock samples are significantly smaller, generally in the mm³ range. Care
must therefore be taken when laboratory measurements and pore-scale derived properties
are compared, due to the scale difference. Laboratory samples and plugs for µCT imaging
are not only of different volumes, they are also generally sampled at different locations. Due
to this spatial variability, it is recommended to compare trends when laboratory
measurements are compared with pore-scale derived properties. This is illustrated in Figure
10 for the P-wave velocity. A digital sample with 12.9% porosity has been made. However,
the samples tested in the laboratory do not have porosities close to this value. By sub-sampling
the original digital sample, a relatively broad porosity range is obtained (from 9.9 to 16.3%).
The pore-scale derived velocity-porosity trend now overlaps with the measurements
performed in the laboratory, and a comparison can be made.
Pore-scale derived P- and S-wave velocities (dark and light blue symbols) are compared with laboratory
measurements (green filled symbols). Dark blue points represent a large digital mother sample, while the light blue
points represent sub-samples of the original mother sample.
Figure 10. Fontainebleau sandstone.
Stress
Cracks may reduce the acoustic velocities significantly. Sub-resolution cracks are not
incorporated in the processed µCT images, and the corresponding pore-scale derived
velocities are therefore overestimated for materials containing such cracks.
Cracks are closed during loading. Acoustic measurements performed at elevated stress
levels are consequently expected to approach pore-scale derived velocities. Derzhi and
Kalam (2011) compared acoustic measurements at different stress levels with pore-scale
derived velocities. Their results are shown in Figure 11. Note that this assumes that the pore
space and rock framework deform without large micro-structural changes such as pore
collapse, grain rotation and grain crushing.
P-wave velocity as a function of porosity at different stress levels: a) at ambient conditions, b) at 7 MPa, c) at 30 MPa
and d) at 40 MPa effective hydrostatic stress. Pore-scale derived values are denoted DRP and shown as open orange
triangles. Laboratory measurements are shown as filled symbols; from Derzhi et al. (2011).
Figure 11. P-wave velocity and stress for carbonate samples.
Frequency
Pore-scale derived elastic properties represent properties in the low frequency limit (f → 0).
Measurements in the laboratory are generally performed in the ultrasonic range (1 Hz to 4
kHz). Acoustic velocities are independent of frequency for dry materials, while an increase
with increasing frequency has been observed for fluid saturated rocks. Pore-scale derived
properties are therefore expected to be comparable to laboratory measurements for dry
rocks and lower for fluid saturated rocks.
3.5.1.5. NMR
NMR is simulated as a diffusion process using a random walk algorithm to solve the
diffusion equations (Øren, Antonsen, Rueslåtten and Bakke, 2002). The T2 responses at Sw =
1.0 were simulated on one sample from field A. The results are shown as T2 decay and
T2 distribution curves in Figures 14 and 15. The surface relaxation strength was kept at
1.65×10⁻⁵ m/s for all minerals. The inter-echo time was 200 µs at a background magnetic
field gradient of 0.2 G/m, the bulk water T2 was 0.3 sec, and the diffusion constant was
2×10⁻⁹ m²/s. The decay curves are transformed into T2 distributions using 50 exponential
functions (assuming the same has been done for the lab data).
3.5.2. Multi-phase flow simulations
The reconstructed rock models were simplified into pore network models. Crucial
geometrical and topological properties were retained, while the data volume was reduced to
allow timely computation (Øren and Bakke, 2003). In pore network modelling, local
capillary equilibrium and the Young-Laplace equation are used to determine multiphase
fluid configurations for any pressure difference between the phases, for pores of different
shape and with different fluid/solid contact angles. The pressure in one of the phases is
allowed to increase and a succession of equilibrium fluid configurations is computed in the
network. Then, empirical expressions for the hydraulic conductance of each phase in each
pore and throat are used to define the flow of each phase in terms of pressure differences
between pores. Conservation of mass is invoked to find the pressure throughout the network,
assuming that all the fluid interfaces are fixed in place. From this the relationship between
flow rate and pressure gradient can be found, and hence macroscopic properties, such as
absolute and relative permeabilities, can be determined (Øren and Bakke, 2002).
The following oil-water displacements were simulated:
Primary drainage
Imbibition at a given wettability preference
For each displacement process, capillary pressure and relative permeability curves were
calculated. Resistivity index with the corresponding n-exponent was calculated after
primary drainage.
Each saturation and relative permeability value corresponds to a capillary equilibrium state.
In all the simulations, it is assumed that capillary forces dominate. This is a good
approximation for capillary numbers N_ca < 10⁻⁶. This, however, does not necessarily mean
that it is the most efficient displacement possible. In certain cases, viscous and/or gravity
forces can dominate and result in higher (or lower) displacement or sweep efficiency.
3.5.2.1. Relative permeability
Simulated relative permeability data are fitted to the empirical LET expression (Lomeland,
Ebeltoft and Thomas, 2005). For water/oil displacements the LET equations become:

k_ro = k_ro(S_wi) · S_on^L_o / (S_on^L_o + E_o (1 − S_on)^T_o)    (10)

k_rw = k_rw(S_or) · S_wn^L_w / (S_wn^L_w + E_w (1 − S_wn)^T_w)    (11)

S_wn = (S_w − S_wi)/(1 − S_wi − S_or),  S_on = 1 − S_wn    (12)

where k_ro and k_rw are the relative permeabilities of oil and water, respectively. The L_i, E_i
and T_i are the LET fitting parameters, where i is either oil (o) or water (w). k_ro(S_wi), k_rw(S_or),
S_wi and S_or are determined from the computed results, and the optimised values of the fitting
parameters were determined using a simulated annealing algorithm.
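A sketch of evaluating the LET correlation at a given water saturation follows; parameter names mirror the equations, and the endpoint values k_ro(S_wi) and k_rw(S_or) are passed in as `kro_max` and `krw_max` (illustrative names, not the authors' code):

```python
def let_krw_kro(sw, swi, sor, kro_max, krw_max,
                Lo, Eo, To, Lw, Ew, Tw):
    """Water/oil relative permeabilities from the LET correlation.
    Valid for swi <= sw <= 1 - sor."""
    swn = (sw - swi) / (1.0 - swi - sor)   # normalised water saturation, Eq. (12)
    son = 1.0 - swn                        # normalised oil saturation
    kro = kro_max * son**Lo / (son**Lo + Eo * (1.0 - son)**To)  # Eq. (10)
    krw = krw_max * swn**Lw / (swn**Lw + Ew * (1.0 - swn)**Tw)  # Eq. (11)
    return krw, kro
```

By construction the curves honour the endpoints: krw = 0 and kro = kro_max at S_w = S_wi, and krw = krw_max, kro = 0 at S_w = 1 − S_or.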
3.5.2.2. Capillary pressure
A detailed account of the methods used to calculate capillary pressure as a function of Sw
during primary drainage and waterflooding invasion sequences is given in Øren et al.
(1998). Fluid injection is simulated from one side of the model (usually the x-direction). Thus,
the entry pressure is a function of the pore sizes present in the inlet. In a mercury injection
capillary pressure simulation, fluid is allowed to enter the model from all sides. In that case,
entry pressures are much lower.

The calculated capillary pressures can be expressed in terms of the dimensionless Leverett
J-function:

J(S_w) = (P_c,ow / σ_ow) √(k/φ)    (13)

where k and φ are the permeability and total porosity, respectively, P_c,ow is the oil-water
capillary pressure and σ_ow the oil-water interfacial tension. The Leverett J-function results
have been fitted to the empirical Skjæveland correlation (Skjæveland et al., 2000):
J(S_w) = c_1 / [(S_w − S_wi)/(1 − S_wi)]^a_1 + c_2 / [(1 − S_w − S_or)/(1 − S_or)]^a_2    (14)

where c_1, c_2, a_1 and a_2 are curve fitting parameters, S_w is the water saturation, S_wi the
initial water saturation and S_or the residual oil saturation. S_w, S_wi and S_or are determined
by the simulations. Here, results are reported both as J-function and capillary pressure.
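The Skjæveland correlation and the conversion between J-function and capillary pressure can be sketched as follows (illustrative function names; SI units assumed for the Pc conversion):

```python
import math

def skjaeveland_J(sw, swi, sor, c1, a1, c2, a2):
    """Leverett J-function from the Skjaeveland et al. (2000) correlation.
    The water branch diverges as sw -> swi and the oil branch as
    sw -> 1 - sor; for mixed-wet rocks c2 is typically negative."""
    water_term = ((sw - swi) / (1.0 - swi)) ** a1
    oil_term = ((1.0 - sw - sor) / (1.0 - sor)) ** a2
    return c1 / water_term + c2 / oil_term

def j_to_pc(J, k, phi, sigma_ow):
    """Invert the J-function definition: Pc = J * sigma_ow / sqrt(k / phi).
    k in m^2, sigma_ow in N/m; returns Pc in Pa."""
    return J * sigma_ow / math.sqrt(k / phi)
```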
3.5.2.3. Water saturation
All reported S_w is total S_w, i.e. including water in micro-porosity and isolated pores. It should
be noted that S_wi is strongly dependent on capillary pressure. Thus, any comparison with
laboratory data should be done at the same capillary pressure. Any isolated pore volume
and any µ-porosity cannot be invaded and thus contribute to S_wi. Therefore, the
irreducible water saturation is given by:
S_wi = (φ_isolated + φ_µ)/φ_tot + S_w,corner    (15)

S_w,corner = Σ_i Σ_n (σ_ow/P_c,max)² [ cos θ_ow cos(θ_ow + β)/sin β + θ_ow + β − π/2 ]    (16)

where φ is porosity (with suffixes denoting isolated, total and micro-porosity), i runs over the
connected pores in the network, n over the corners of each pore (3 or 4 per pore), σ_ow is the
interfacial tension, P_c the capillary pressure, θ_ow the contact angle and β the corner half angle.
Note that the initial water saturation for waterflooding depends on P_c and can be given as an
input to the simulation if needed.
3.5.2.4. Resistivity Index and saturation exponent, n
The resistivity index is calculated from capillary dominated two-phase flow simulations on
the pore network representation of the 3D rock image. The basis for simulating capillary
dominated displacements is the correct distribution of the fluids in the pore space. For two-
phase flow, the equilibrium fluid distribution is governed by wettability and capillary
pressure and can be found by applying the Young-Laplace equation for any imposed
pressure difference between the phases. A clear and comprehensive discussion of all the
mathematical details, including the effects of wettability, involved in the simulations can be
found in Øren et al. (1998) and Øren and Bakke (2003).
The current I_ij between two connecting nodes i and j in the network is given by Ohm's law

I_ij = (g_ij / L_ij)(Φ_i − Φ_j)    (17)

where L_ij is the spacing between the node centres. The effective conductance g_ij is the
harmonic mean of the conductances of the throat and the two connecting nodes

L_ij/g_ij = l_i/g_i + l_t/g_t + l_j/g_j    (18)
where the subscript t denotes the pore throat and the conductance g_t is evaluated at the
throat constriction. The effective lengths l_i, l_j and l_t govern the potential drop associated
with the nodes and the throat, respectively. By letting l_t = αL_ij and l_i = l_j = 0.5(1 − α)L_ij,
the effective conductance can be calculated from α as

g_ij = [ (1 − α)/2 · (1/g_i + 1/g_j) + α/g_t ]⁻¹    (19)
The conductance of a pore element k (pore body or throat) is given by g_k = σ_w A_w, where A_w is
the area of the pore element filled with water. Expressions for A_w for different fluid
configurations, contact angles, and pore shapes are given in Øren et al. (1998). We impose
current conservation at each pore body, which means that

Σ_j I_ij = 0    (20)

where j runs over all the pore throats connected to node i. This gives rise to a set of linear
equations for the pore body potentials. The formation resistivity factor of the network is
computed by imposing a constant potential gradient across the network and letting the system
relax using a conjugate gradient method to determine the node potentials. From the
potential distribution one may calculate the total current and thus the formation resistivity
factor F = σ_w/σ_0, where σ_0 is the conductivity computed at S_w = 1.
The resistivity index is computed similarly. At various stages of the displacement (i.e. at
different S_w values), we compute the current and the resistivity index defined as

RI(S_w) = σ_0/σ(S_w) = S_w^(−n)    (21)

The n-exponent is determined from a linear regression of the RI(S_w) vs. S_w curve.
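Taking logarithms of Eq. (21) gives log RI = −n log S_w, so n is a slope. A minimal sketch of the regression (through the origin, since RI(S_w = 1) = 1 by definition; illustrative names):

```python
import numpy as np

def saturation_exponent(sw, ri):
    """Archie saturation exponent n from RI(Sw) data.
    Least-squares fit of log RI = -n * log Sw through the origin."""
    x = np.log(np.asarray(sw, dtype=float))
    y = np.log(np.asarray(ri, dtype=float))
    return -float(x @ y) / float(x @ x)   # slope of y = -n x
```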
3.5.3. Upscaling of single phase and effective flow properties
Effective properties of the core samples are determined using steady state scale-up methods.
The µCT scan of the core plug is gridded according to the observed geometrical distribution
of the different rock types or porosity contributors. Each grid cell is then populated with
properties calculated on the pore scale images of the individual rock types. The following
properties are assigned to each grid cell: porosity, absolute permeability tensor (k_xx, k_yy, k_zz),
directionally dependent m-exponents, capillary pressure curve, relative permeability curve,
and n-exponent.
Single phase up-scaling is done by assuming steady state linear flow across the model. The
single phase pressure equation is set up assuming material balance and Darcy's law

∇·(k∇P) = 0 in Ω, with boundary conditions:
p = P_0 at x = 0,  p = P_1 at x = L,  v·n = 0 at the other faces    (22)
The pressure equation is solved using a finite difference formulation. From the solution one
can calculate the average velocity and the effective permeability using Darcy's law. By
performing the calculations in the three orthogonal directions, we can compute the effective
or up-scaled permeability tensor for the core sample. The effective formation resistivity
factor is computed in a similar manner by replacing pressure with voltage, flow with
current, and permeability with electrical conductivity. The up-scaled m-exponent is
determined from the effective formation resistivity factor and the sample porosity.
Effective two-phase properties (i.e. capillary pressure, relative permeability, and n-
exponent) are calculated using two-phase steady state up-scaling methods. We assume that
the fluids inside the sample have come to capillary equilibrium. This is a reasonable
assumption for small samples (< 30 cm) when the flow rate is slow (< 1 m/day). The main
steps in the two-phase up-scaling algorithm are:
1. Select a capillary pressure (Pc) level
2. Using the Pc (Sw) relationship, calculate Sw in each grid cell
3. Calculate the average water saturation using pore volume weighting
4. From the kr(Sw) curves, calculate krw and kro in each grid cell, and then the phase
mobilities kw and ko by multiplying the relative permeabilities with the absolute
permeability for the grid cell
5. Perform two separate single phase steady state simulations, one for the oil and one for
the water, to calculate the effective phase permeability
6. Divide the phase permeability with the effective absolute permeability to obtain the
effective relative permeability
7. Repeat these steps with different Pc levels to construct the effective relative permeability
curves
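The seven steps above can be sketched as follows. This simplified 1D version replaces the two single-phase steady-state simulations of step 5 with their series-flow equivalent (a harmonic mean), and the Brooks-Corey-type Pc(Sw) and Corey kr(Sw) functions are hypothetical stand-ins for the pore-scale-derived curves assigned to each grid cell:

```python
import numpy as np

def harmonic(x):
    """Effective permeability of cells in series (1D steady-state flow)."""
    return len(x) / np.sum(1.0 / x)

def upscale_kr_1d(k, phi, pe, pc_levels, nw=2.0, no=2.0):
    """Capillary-equilibrium two-phase up-scaling, 1D sketch.

    Each cell gets a Brooks-Corey-like Pc(Sw) = pe / Sw**2 and Corey
    relative permeabilities krw = Sw**nw, kro = (1 - Sw)**no.
    """
    k_eff = harmonic(k)
    curves = []
    for pc in pc_levels:                      # Step 1: select a Pc level
        # Step 2: invert Pc(Sw) for the saturation in each grid cell
        sw = np.clip(np.sqrt(pe / pc), 0.0, 1.0)
        # Step 3: pore-volume weighted average water saturation
        sw_avg = np.sum(sw * phi) / np.sum(phi)
        # Step 4: phase permeabilities per cell (kr times absolute k)
        kw = sw ** nw * k
        ko = (1.0 - sw) ** no * k
        # Steps 5-6: single-phase steady-state solve per phase; in 1D series
        # flow this reduces to the harmonic mean, divided by effective k
        krw_eff = harmonic(kw) / k_eff if np.all(kw > 0) else 0.0
        kro_eff = harmonic(ko) / k_eff if np.all(ko > 0) else 0.0
        curves.append((sw_avg, krw_eff, kro_eff))
    return curves                             # Step 7: the effective curves

# Hypothetical two-cell model swept over three Pc levels
k = np.array([100.0, 50.0])     # absolute permeability per cell
phi = np.array([0.25, 0.20])    # porosity per cell
pe = np.array([1.0, 2.0])       # entry pressure per cell
curves = upscale_kr_1d(k, phi, pe, pc_levels=[2.0, 4.0, 8.0])
for sw_avg, krw, kro in curves:
    print(sw_avg, krw, kro)
```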
The resistivity index curve is generated in a similar manner by replacing phase permeability
in the water phase calculations with electrical conductivity. The effective n-exponent is
determined from a linear regression of the effective RI(Sw) vs. Sw curve.
3.6. Uncertainty
The overall uncertainty in up-scaled properties varies from sample to sample. Uncertainties
may be introduced in the following steps:
- Generation of digitized core models
How representative are the identified rock types and their corresponding distribution? In
other words: how representative is the digitized model of the core sample?
- Generation of digitized micro-plugs
How representative are the CT models of the rock micro-structure? The main uncertainty
is related to size (the representative elementary volume, REV) and to image segmentation,
where the spatial distribution of pore space and rock minerals is set.
- Calculation of absolute permeability and formation resistivity factor
An uncertainty of about 2% is related to the accuracy of the pressure solvers.
- Calculation of elastic properties
Uncertainty related to whether all relevant physics are included in the calculations or not.
- Simulation of two- and three-phase flow
Wettability is an input parameter to the simulations. The uncertainty in wettability is
the main source of uncertainty for both two- and three-phase flow simulations.
4. DRP applications in multi-phase flow
In this section, we present novel validations of multi-phase flow SCAL results using
Digital Rock Physics. The reservoir cores comprise complex carbonates from giant
producing reservoirs in the Middle East. Figure 12 shows the comparison of water-oil
capillary pressure (Pc) measured in a SCAL laboratory at reservoir temperature and net
overburden pressure using a Porous Plate with MICP trims from the same cores corrected
to reservoir conditions. The DRP data were acquired from the cores after the tests were
completed on the Porous Plate and the cores were thoroughly cleaned for final SCAL
reference measurements. Both
limestone and dolomite samples show excellent agreement between DRP-derived data and the
laboratory evaluations. Figure 13 confirms the validity of such measurements on different
sets of core samples comprising the same reservoir rock type (RT), provided the rock typing
is valid and captures the key formation properties of rock and fluids.
Figure 12. Water-oil Pc (Porous Plate): DRP vs lab on same core sample
Figures 14 and 15 show for the first time in industry that laboratory NMR T2 and MICP
measurements done on carbonate rock types can also be captured using DRP based
simulations on the same cores with distinctly different pore geometries. The robustness of
DRP in capturing NMR T2-based pore bodies and MICP-based pore throat distributions
has far-reaching consequences. This shows that DRP can, in essence, be used confidently to
quantify pore body and pore throat distributions, and therefore the 3D pore geometry is
representative of the specific core sample and pore network topology. In using DRP
effectively, it is recommended that one compares and validates measured NMR T2 and
MICP prior to detailed simulations to quantify various two-phase and three-phase flow
properties through such reservoir rocks.
Figures 16 and 17 demonstrate example DRP based validations with respect to water-oil
relative permeabilities conducted at full reservoir conditions (reservoir temperature,
reservoir pressure and live fluids) on other complex carbonates, including highly permeable
vuggy samples. The imbibition displacements were conducted under steady-state conditions
at SCAL laboratories and QC'ed thoroughly with respect to production, pressure profiles
and in-situ saturation data, and the corresponding numerically simulated measurements. The
DRP data were acquired on cores comprising each of the composites tested. It is interesting
to note that when plug DRP data are compared with composite laboratory measurements
there is some scatter and divergence for each reservoir rock type. However, the divergences
are significantly minimized when the DRP plugs used are digitally butted to represent the
composite used in the laboratory tests. DRP captures the full reservoir-condition multi-phase
flow data very well, and in some cases even reveals the experimental artefacts of the
SCAL measurements. The validity of these tests was confirmed on 14 different reservoir
rock types comprising different formations.
Figure 13. Water-oil Pc (Porous Plate): DRP vs lab in different core samples, but same RRT
Figure 14. NMR T2 distribution and MICP pore throat distribution, DRP vs Lab vuggy core
Figure 15. NMR T2 distribution and MICP pore throat distribution, DRP vs Lab tight core
Figure 16. Validating water-oil kr of low permeability composite samples: RRT 6 (10-25 mD)
Figure 17. Validating water-oil kr of high permeability composite samples: RRT 8 (350-560 mD)
Author details
Mohammed Zubair Kalam
Abu Dhabi Company for Onshore Oil Operations (ADCO), Abu Dhabi, UAE
Acknowledgement
ADCO and ADNOC Management are acknowledged for their permission to publish these
novel Digital Rock Physics based SCAL results.
Numerical Rocks (Norway) is acknowledged for providing the detailed drafts relating to the
procedures adopted in example DRP computations and the robust measurements presented
in this chapter.
Ingrain Inc (Houston and Abu Dhabi) is acknowledged for introducing the author to the
various challenges ahead, and the uncertainties in current DRP-based developments.
Digital Core (Australia) is gratefully acknowledged for first introducing the concept of DRP
to the Middle East, and for involving ADCO in one of the first Joint Industry Projects offered
to industry.
iRocks (Beijing and London) is acknowledged for many stimulating discussions relating to
the state-of-the-art.
Finally, one must remember the ADCO DRP team for the interest generated and the
numerous insights into imaging, segmentation and data evolution, and their impact on
different Petrophysical, SCAL and elastic properties. I thank my son, AbdAllah, for helping me in
getting this draft ready despite the very busy schedules of August 2012.
5. References
Bakke, S., Øren, P.-E., 3-D Pore-Scale Modelling of Sandstones and Flow Simulations in the
Pore Networks, SPE Journal (1997) 2, 136-149.
Blunt M., Jackson M.D., Piri M., Valvatne P.H., Detailed physics, predictive capabilities and
macroscopic consequences for pore network models of multiphase flow, Advances in
Water Resources (2002) 25, 1069-1089.
Ghous, A., Knackstedt, M.A., Arns, C.H., Sheppard, A.P., Kumar, R.M., Senden, T.J.,
Latham, S., Jones, A.C., Averdunk, H. and Pinczewski, W.V., 3-D imaging of reservoir
core at multiple scales: Correlations to petrophysical properties and pore-scale fluid
distributions, presented at International Petroleum Technology Conference, Kuala
Lumpur, Malaysia, 2008, 10 p.
Gomari, K. A. R., Berg, C. F., Mock, A., Øren, P.-E., Petersen, E. B. Jr., Rustad, A. B., Lopez,
O., 2011, Electrical and petrophysical properties of siliciclastic reservoir rocks from
pore-scale modeling, paper SCA2011-20 presented at the 2011 SCA International
Symposium, Austin, Texas.
Grader, A., Kalam, M. Z., Toelke, J., Mu, Y., Derzhi, N., Baldwin, C., Armbruster, M., Al
Dayyani, T., Clark, A., Al Yafei, G. B. And Stenger, B., 2010, A comparative study of
DRP and laboratory SCAL evaluations of carbonate cores, paper SCA2010-24 presented
at the 2010 SCA International Symposium, Halifax, Canada.
Kalam, M.Z., Al Dayyani, T., Grader, A., and Sisk, C., 2011, Digital rock physics analysis in
complex carbonates, World Oil, May 2011.
Kalam M.Z., Serag S., Bhatti Z., Mock A., Oren P.E., Ravlo V. and Lopez O., SCA2012-03,
Relative Permeability Assessment in a Giant Carbonate Reservoir Using Digital Rock
Physics, SCA 2012 International Symposium, Aberdeen, United Kingdom.
Kalam, M.Z., Al-Hammadi, K., Wilson, O.B., Dernaika, M., and Samosir, H., Importance of
Porous Plate Measurements on Carbonates at Pseudo Reservoir Conditions, SCA2006-
28, presented at the 2006 SCA International Symposium, Trondheim, Norway.
Kalam M.Z., El Mahdi A., Negahban S., Bahamaish J.N.B., Wilson O.B., and Spearing M.C.,
A Case Study to Demonstrate the Use of SCAL Data in Field Development Planning of
a Middle East Carbonate Reservoir, SCA2006-18, presented at the 2006 SCA
International Symposium, Trondheim, Norway.
Kalam, M. Z., Al Dayyani, T., Clark, A., Roth, S., Nardi, C., Lopez, O. and Øren, P. E., Case
study in validating capillary pressure, relative permeability and resistivity index of
carbonates from X-Ray micro-tomography images, SCA2010-02 presented at the 2010
SCA International Symposium, Halifax, Canada.
Knackstedt, M.A., Arns, C.H., Limaye, A., Sakellariou, A., Senden, T.J., Sheppard, A.P.,
Sok, R.M., Pinczewski, W.V. and Bunn G.F., Digital core laboratory: Reservoir-core
properties derived from 3D images, Journal of Petroleum Technology, (2004) 56, 66-
68.
Lopez, O., Mock, A., Øren, P. E., Long, H., Kalam, M. Z., Vahrenkemp, V., Gibrata, M.,
Serag, S., Chacko, S., Al Hosni, H., Al Hammadi M. I., and Vizamora, A., Validation of
fundamental carbonate reservoir core properties using Digital Rock Physics, SCA2012-
19, SCA 2012 International Symposium Aberdeen, United Kingdom.
Lopez, O., Mock, A., Skretting, J., Petersen, E.B. Jr., Øren, P.E. and Rustad, A.B., 2010,
Investigation into the reliability of predictive pore-scale modeling for siliciclastic
reservoir rocks, SCA2010-41 presented at the 2010 SCA International Symposium,
Halifax, Canada.
Mu, Y., Fang, O., Toelke, J., Grader, A., Dernaika, M., Kalam, M.Z., Drainage and imbibition
capillary pressure curves of carbonate reservoir rocks by digital rock physics, SCA 2012
Paper A069, Aberdeen, United Kingdom.
Øren, P.E. and Bakke, S., Process Based Reconstruction of Sandstones and Prediction of
Transport Properties, Transport in Porous Media, (2002) 46, 311-343.
Øren, P.E., Antonsen, F., Rueslåtten, H.G., and Bakke, S., 2002, Numerical simulations of
NMR responses for improved interpretation of NMR measurements in rocks, SPE paper
77398, presented at the SPE Annual Technical Conference and Exhibition, San Antonio,
Texas.
Øren, P. E., Bakke, S. and Arntzen, O. J., 1998, Extending predictive capabilities to network
models, SPE Journal, 3, 324-336.
Ramstad, T., Øren, P. E., and Bakke, S., 2010, Simulations of two-phase flow in reservoir
rocks using a Lattice Boltzmann method, SPE J., SPE 124617.
Youssef, S., Bauer, D., Bekri, S., Rosenberg, E. and Vizika, O., Towards a better
understanding of multiphase flow in porous media: 3D in-situ fluid distribution
imaging at the pore scale, SCA2009-17, presented at the 2009 SCA International
Symposium, Noordwijk, The Netherlands.
Wu, K., Jiang Z., Couples, G. D., Van Dijke, M.I.J., Sorbie, K.S., 2007. Reconstruction of
multi-scale heterogeneous porous media and their flow prediction, SCA2007-16
presented at the 2007 SCA International Symposium, Calgary, Canada.
Wu, K., Ryazanov, A., van Dijke, M.I.J., Jiang, Z., Ma, J., Couples, G.D., and Sorbie, K.S.,
Validation of methods for multi-scale pore space reconstruction and their use in
prediction of flow properties of carbonate, SCA-2008-34 presented at the 2008 SCA
International Symposium in Abu Dhabi, UAE.