A Regression Technique With Dynamic Parameter Selection For Phase-Behavior Matching
Summary. The major problem in phase-behavior matching with a cubic equation of state (EOS) is the selection of regression parameters. Many combinations of parameters can plausibly serve as the best set; therefore, a dynamic parameter-selection scheme is desired to avoid tedious and time-consuming trial-and-error regression runs. This paper proposes a regression technique in which the most significant parameters are selected from a large set of parameters during the regression process. This technique reduces the regression effort considerably and alleviates the problem associated with a priori selection of regression parameters. The technique's success is demonstrated by matching experimental data for a light oil and a gas condensate.
Introduction
Cubic EOS's generally do not predict laboratory data of oil/gas mixtures accurately without tuning of the EOS parameters.¹ The practice often has been to adjust the properties of the components (usually the heavy fractions), e.g., Pc, Tc, and ω, to fit the experimental data.

The objective function in the regression involves the solution of complex nonlinear equations, such as flash and saturation-pressure calculations. A robust minimization method is therefore required for rapid convergence to the minimum. In this paper, a modification of Dennis et al.'s² adaptive least-squares algorithm is used. The modification involves the use of other nonlinear optimization concepts in the selection of the search direction and the step size.³

Dynamic selection of the most meaningful regression parameters from a larger set of variables is described. This feature is extremely useful in EOS fitting because it alleviates the problem of deciding a priori on the best regression variables, which is extremely difficult.

It should be stressed that the regression procedure will not correct the deficiencies of the EOS used and that the EOS predictive capability depends entirely on the type and accuracy of the data used in the regression. For predictive purposes, attempts should be made to ensure that the tuned parameters are within reasonable physical limits.

All calculations in this paper are performed with the Peng-Robinson EOS,⁴ although the scheme is general and can be applied to any EOS.

Regression Method
The implementation of the dynamic-parameter-selection strategy for tuning the EOS is a nonlinear optimization problem. The goal is to minimize a weighted sum of squares

f(x) = Σ_{i=1}^{n_m} r_i², ................................... (1)

where x = (x_1, x_2, ..., x_{n_r})^T = vector of n_r regression parameters and n_m = number of measurements to be fitted. Usually, n_m > n_r. The elements of r(x), denoted by r_i(x), are nonlinear functions of x; i.e.,

r_i = w_i[e_i(x) − e_i^m]/e_i^m,  i = 1...n_m, ................. (2)

where e_i(x) = values calculated with the EOS, e_i^m = corresponding experimental measurements, and w_i = weight associated with the ith experimental data point. Note that the differences between the experimental and calculated values are normalized by dividing by e_i^m in Eq. 2. This brings the magnitudes of the r_i to comparable values.

The minimization of f(x) may be solved by various methods for nonlinear parameter estimation⁵ and for nonlinear optimization.⁶,⁷ The general-purpose optimization methods, however, do not take advantage of the structure of the nonlinear least-squares problem. Several strategies are available that exploit that structure. Coats and Smart¹ used a modified linear-programming least-squares algorithm, while Watson and Lee⁸ used a modification of the Levenberg-Marquardt algorithm⁹ to solve the nonlinear least-squares problem. In this paper, a modification of the adaptive least-squares algorithm of Dennis et al.² is used. The present method departs from Dennis et al.'s approach in the use of different nonlinear optimization concepts for selecting the search direction and the step size. Details of the regression method are described in the Appendix.

Calculation of Jacobian
It was found early in the investigation that the key to an efficient algorithm would be a fast and accurate estimation of the Jacobian matrix, J = ∂r/∂x. In the Appendix, it is shown that matrix J is also used to determine the Hessian matrix, ∇²f.

The Jacobian J is obtained through numerical differentiation. Although the EOS can be differentiated analytically with respect to the regression parameters x_j, the residuals r_i are extremely complex functions of these parameters (e.g., the GOR of the last step in a differential liberation) and are more suitable to numerical differentiation.

The calculations of r_i also involve the iterative solution of nonlinear systems of equations (e.g., flash and saturation-pressure calculations), where the results are available only to some accuracy ε_i. Thus, in the calculation of J through numerical differentiation, the perturbations of the independent variables x_j must not be masked by the convergence tolerances ε_i or the round-off errors associated with the calculations of r_i. It has been found that a perturbation of 1% of the independent variables x_j is adequate to compute J through numerical differentiation.

Selection of Regression Parameters
Given a global set of n_p regression parameters, the method selects an active subset of n_r parameters with which regression will be performed. The global set of regression parameters is supplied by the user and includes all or some of the following: P_ci, T_ci, V_ci, the volume translations v_i^t, d_ij, and θ.

The interaction coefficients between hydrocarbons are estimated from the following equation¹⁰:

d_ij = 1 − [2(V_ci V_cj)^{1/6}/(V_ci^{1/3} + V_cj^{1/3})]^θ. ..................... (3)

The volume-translation technique of Peneloux et al.¹¹ is used to correct the molar volume as follows:

V′ = V − Σ_{i=1}^{n_c} y_i v_i^t, ................................ (4)

where V = molar volume from the EOS, V′ = corrected molar volume, and y_i = mole fraction of Component i. Eq. 4 has been proved to improve density predictions of the EOS.

Copyright 1990 Society of Petroleum Engineers
Examples
The Peng-Robinson EOS⁴ is used in all calculations.
[Fig. 2 - Light-oil/CO2 mixtures: saturation pressure (psi) vs. mol% CO2 for the initial model, the final model, and the experimental data.]

[Table residue: an 11-component composition (mol%) summing to 100.00 - 1.21, 1.94, 65.99, 8.69, 5.91, 2.39, 2.78, 1.57, 1.12, 1.81, 6.59 - with the component names lost in extraction.]

Properties of the C7+ fraction: API gravity at 60°F = 51.4°API; specific gravity = 0.7737; molecular weight = 140.

TABLE 5 - MODEL FLUID FOR GAS CONDENSATE

Component                            Composition
 Number     Pseudocomponent            (mol%)
    1       C1, N2                      67.93
    2       C2, CO2                      9.90
    3       C3                           5.91
    4       C4, C5                       7.86
    5       C6                           1.61
    6       C7 through C12               5.18
    7       C13+                         1.41
For this example, several cases were run to illustrate some pitfalls in EOS tuning, to stress that caution must be exerted when the potential regression parameters are selected, and to show the importance of choosing the upper and lower bounds of the regression parameters. The cases described below were run.

Case 1. The critical pressures and temperatures of all hydrocarbon components except C6 were treated as potential regression parameters, with the lower and upper bounds being 0.7 and 1.3 times the initial values, respectively.

Case 2. The critical pressures and temperatures of all components except those of C6 were treated as regression parameters. Also, the upper and lower bounds were much narrower than in Case 1 to ensure that after regression Pc and Tc would follow the trend of decreasing Pc with increasing carbon number and increasing Tc with increasing carbon number.

Case 3. The critical properties of only the two heaviest fractions were chosen as regression parameters, again with restrictions on the upper and lower bounds to maintain the monotonic increase and decrease in Tc and Pc with carbon number.

For all these cases, the interaction coefficients between the first component and the second through seventh components were also part of the potential regression parameter set.

Table 6 shows the properties of the original fluid model, along with those determined in Cases 1 through 3. The experimental data and the regression results are plotted in Fig. 3 for the saturation-pressure/composition data and in Fig. 4 for the swelling-factor/composition data.

Case 1 gives the best fit, followed by Case 2 and then Case 3. Note, however, that for Case 1, the Tc have lost the property of monotonic increase with increasing carbon number. A possible cause of such behavior is that a match of the experimental data can be achieved only by drastically changing the current set of regression parameters, so that the trend of Tc cannot be preserved. To preserve the trend of component properties, bounds need to be placed on these variables. This is not a limitation of the regression technique, but the consequence of some inadequacies of the fluid representation.

Case 2, on the other hand, maintains the desirable trend of decreasing Pc and increasing Tc. However, the fit is not as good
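The interaction coefficients that enter the potential regression parameter set are generated from Eq. 3. A minimal sketch, assuming critical volumes in consistent units; the function name is hypothetical, and the exponent theta is the single adjustable parameter θ of Eq. 3:

```python
def interaction_coefficient(vc_i, vc_j, theta):
    # Eq. 3: d_ij = 1 - [2(Vci*Vcj)^(1/6) / (Vci^(1/3) + Vcj^(1/3))]^theta
    num = 2.0 * (vc_i * vc_j) ** (1.0 / 6.0)
    den = vc_i ** (1.0 / 3.0) + vc_j ** (1.0 / 3.0)
    return 1.0 - (num / den) ** theta
```

Note that d_ii = 0 and d_ij grows as the two critical volumes diverge, so regressing on θ alone shifts every hydrocarbon/hydrocarbon coefficient in a consistent way.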
[Fig. 3 - Saturation pressure vs. composition: experimental data, initial model, and Cases 1 through 3.]

[Fig. 4 - Swelling factor vs. composition: experimental data, initial model, and Cases 1 through 3.]
as Case 1. Also note that, in Case 1, the EOS is tuned by changing a few parameters by large amounts, whereas in Case 2, the tuning is done by adjusting more parameters by smaller amounts. Because the bounds on the variables were narrowed in this particular case, the critical pressures and temperatures of Components 1 and 2 hit the upper bounds, as did the critical temperature of Component 6. These variables were subsequently discarded from the active regression set and replaced by other variables.

As expected, the match in Case 3 is not as good as in Cases 1 and 2 because the available regression parameters are further restricted. This case is included to show the possible effects of including only a small number of component properties in the regression parameter set. Note that the results from Case 3 are still adequate for most engineering calculations.

Case 4. The fluid model obtained from Case 3 was used to fit the liquid dropout for the constant-volume-depletion study by adjusting the volume translations of the lightest and heaviest components.

The results are shown in Fig. 5. The values of the volume translations before and after the regression are also given in Table 6. This case has been included to show that more than one type of data from different experiments can be matched reasonably by changing very few regression parameters.
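Case 4's adjustment acts through Eq. 4. A minimal sketch of the density correction, with hypothetical names (`v_shift` holds the per-component translations v_i^t, only some of which would be regressed):

```python
def corrected_molar_volume(v_eos, y, v_shift):
    # Eq. 4: V' = V - sum_i y_i * v_i^t  (Peneloux et al. translation).
    # In Case 4, only the shifts of the lightest and heaviest components
    # were adjusted to match the liquid dropout.
    return v_eos - sum(yi * vi for yi, vi in zip(y, v_shift))
```

Because the correction is linear in the v_i^t, tuning a couple of shifts moves the predicted liquid volumes without disturbing the phase-equilibrium match obtained in Case 3.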
[Fig. 5 - Gas condensate: constant-volume-depletion study. Liquid dropout vs. pressure (500 to 3,000 psig): experimental points, initial model, and final model.]

Discussion
In the examples, the upper and lower bounds of the variables were selected arbitrarily. In practice, one would have narrow bounds for those variables that are known with a fairly large degree of certainty, such as the critical properties of the light components. The narrow bounds render the objective function relatively insensitive to the corresponding variables. Because the variables are scaled between zero and unity in Eq. 5, the derivatives ∂f/∂x_j are very small for variables with narrow bounds, and these will likely be dropped during the regression. Conversely, larger (but realistic) bounds should be placed on variables that are known with a small degree of certainty. Thus, variable scaling makes variables more sensitive as their bounds become broader, which is a very desirable feature.

Note that there are two ways by which a variable can be discarded from the active regression set: when it loses its "sensitivity" and when it hits its bounds for two successive iterations. In the first case, the sensitivity is determined by comparing the variable derivatives with the deviations between calculated and experimental results. A variable that has lost its sensitivity at a particular iteration may regain it later. Because the method presented does not allow for the reintroduction of a discarded variable, the method will converge to a solution that may not be a "global" minimum. However, because the purpose of the algorithm is not to find the "global" or "true" minimum of the regression problem, but rather to find a minimum that gives an adequate fit of the experimental data, the method works extremely well. The method has been found to be quite robust and does not require initial guesses close to the final solution.
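The bound scaling and the two discard rules described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the exact scaling (Eq. 5) and sensitivity test are given in the Appendix, so the norms and the tolerance `sens_tol` here are assumptions.

```python
import numpy as np

def scale(x, lo, hi):
    # Maps each variable into [0, 1] with respect to its bounds; broader
    # bounds make the objective more sensitive to the scaled variable.
    return (x - lo) / (hi - lo)

def active_subset(J, r, on_bound_count, sens_tol=1e-4):
    # Keep variable j only if its Jacobian column is non-negligible
    # relative to the current residuals (it is "sensitive") and it has not
    # sat on a bound for two successive iterations. Once dropped, a
    # variable is never reintroduced in this method.
    r_norm = np.linalg.norm(r)
    keep = []
    for j in range(J.shape[1]):
        sensitive = np.linalg.norm(J[:, j]) > sens_tol * r_norm
        stuck = on_bound_count[j] >= 2
        if sensitive and not stuck:
            keep.append(j)
    return keep
```

Narrow bounds shrink the scaled derivatives ∂f/∂x_j, so such variables fail the sensitivity test early and drop out, exactly the behavior the Discussion describes.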