META-MODELLING IN
CHEMICAL PROCESS SYSTEM ENGINEERING
Olumayowa T. Kajero1, Tao Chen1, Yuan Yao2, Yao Cheng Chuang2, and
Taiwan, R.O.C.
ABSTRACT
Despite advances in computing, it is still difficult to perform simulations with many physical factors taken into account. Hence, translation of such models into computationally inexpensive surrogate models is necessary for the successful application of high fidelity models to process design optimization, scale-up and model predictive control. In this work, we aim to familiarize researchers with the work that has been carried out and the problems that remain to be investigated.
Traditionally this is done using two distinct approaches: the first-principle approach and the empirical approach. In the first-principle approach, steady state and dynamic process simulations model the operating performance of a plant, computational fluid dynamics (CFD) simulations model the momentum, material and heat transfer in a piece of equipment, and molecular simulations model the relation between molecular-level interactions and physical properties. The empirical approach uses design of experiments (DOE) to direct experiments. DOE methods can be divided into two categories: exploration of the design space, e.g. screening designs, and finding the optimum, e.g. response surface methods. Traditional DOE theory was based on the assumption that experimental observations are subject to random errors.
With the development of powerful computers, we can include more and more details in first-principle simulation models so as to improve the fidelity of the model. For example, we can model a continuous stirred tank reactor (CSTR) by assuming that it is a well-mixed reactor. The mixing and heat transfer can be modelled using a CFD simulator, and their effects can be integrated into the well-mixed reactor model through the residence time distribution and heat transfer rate. Alternatively, one can take into account reactions and changes in physical properties with composition and temperature directly in a CFD simulation. Even with a given simulation model, the fidelity can be increased by refining the mesh used by the solver. As the fidelity of the physical model increases, the number of parameters that need to be estimated, i.e. the cost of calibrating the first-principle model, increases. Moreover, the computer time required for each simulation also increases, and at such high cost the first-principle model becomes difficult to use.
A meta-model is a computationally cheap surrogate of the high fidelity model, so that such models can be used efficiently in design, optimization and control. There have already been many useful reviews and books on the development and application of meta-models. However, to the best of our knowledge, there is no such review written specifically from the perspective of chemical process system engineering.
2. META-MODEL REPRESENTATION
Consider an actual process with input $\boldsymbol{x} = [x_1 \cdots x_{K_x}]$ and output $\boldsymbol{y} = [y_1 \cdots y_{K_y}]$:

$$\boldsymbol{y} = \boldsymbol{\Phi}(\boldsymbol{x}) \qquad (1)$$

A meta-model approximates this input-output relation with a computationally cheap function. Commonly used forms of the input-output relation include polynomial, kriging, radial basis function and artificial neural network models, etc.
We should bear in mind that the definition of the input 𝒙 and output 𝒚 may differ between applications. For example, let us consider the CFD model of a heat exchanger with fixed geometry. We can try to construct a meta-model that only applies to hot and cold streams with specific physical properties. The input parameters 𝒙 are then the flowrates and temperatures of the inlet hot and cold streams. However, if we want to construct a more general meta-model that can be applied to different fluids, the physical properties of the fluids must also be included in 𝒙. The output 𝒚 may be overall performance measures, or detailed fields such as the temperatures and velocities at different points inside the heat exchanger.
2.2.1 POLYNOMIAL
Polynomial meta-models are the simplest form and have been used by Simpson et al.9, Palmer and Realff10, Dutournie et al.11 and Chen et al.12. Simplicity implies ease of construction and application, but also an inability to describe complex input-output relationships.
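As a concrete illustration, the sketch below fits a quadratic response surface to a handful of simulator runs by ordinary least squares; the toy "simulator", the grid of runs and all names are assumptions made for illustration, not taken from the original text.

```python
# Quadratic response-surface meta-model fitted by least squares (a minimal sketch).
import numpy as np

def simulator(x1, x2):
    """Stand-in for an expensive high fidelity simulation."""
    return np.exp(-x1) * np.sin(3.0 * x2) + 0.5 * x1 * x2

# A small set of training runs on a 5 x 5 grid of the two inputs
g1, g2 = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
x1, x2 = g1.ravel(), g2.ravel()
y = simulator(x1, x2)

# Design matrix of a full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def poly_metamodel(x1, x2):
    """Cheap surrogate prediction from the fitted coefficients."""
    return beta @ np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

print(poly_metamodel(0.3, 0.7), simulator(0.3, 0.7))
```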
The kriging approach originated from the work of Krige13 and is widely used in geostatistics14 and spatial statistics15. It assumes that the outputs at nearby points are correlated in the input space, with the correlation being used to predict response values between the sampled sites.
Let $\hat{\boldsymbol{X}} = [\hat{\boldsymbol{x}}_1, \cdots, \hat{\boldsymbol{x}}_N]^T$ be a set of training data points (sites) and $\hat{\boldsymbol{Y}} = [\hat{\boldsymbol{y}}_1, \cdots, \hat{\boldsymbol{y}}_N]^T$ the corresponding outputs. The kriging predictor is

$$\boldsymbol{y}(\boldsymbol{x}) = \boldsymbol{f}^T(\boldsymbol{x})\,\boldsymbol{\beta} + \boldsymbol{r}^T(\boldsymbol{x})\,\boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1}\bigl(\hat{\boldsymbol{Y}} - \boldsymbol{F}(\hat{\boldsymbol{X}})\,\boldsymbol{\beta}\bigr),$$

where $\boldsymbol{f}(\boldsymbol{x})$ contains a set of regression functions of the input variables, $\boldsymbol{\beta}$ is the vector of regression coefficients, and $\boldsymbol{F}(\hat{\boldsymbol{X}})$ is a matrix containing the regression functions calculated for all the training data points. The correlation matrix is

$$\boldsymbol{\Sigma}(\hat{\boldsymbol{X}}) = \begin{bmatrix} \rho(\hat{\boldsymbol{x}}_1, \hat{\boldsymbol{x}}_1) & \cdots & \rho(\hat{\boldsymbol{x}}_1, \hat{\boldsymbol{x}}_N) \\ \vdots & \ddots & \vdots \\ \rho(\hat{\boldsymbol{x}}_N, \hat{\boldsymbol{x}}_1) & \cdots & \rho(\hat{\boldsymbol{x}}_N, \hat{\boldsymbol{x}}_N) \end{bmatrix},$$

while

$$\boldsymbol{r}(\boldsymbol{x}) = [\rho(\boldsymbol{x}, \hat{\boldsymbol{x}}_1), \cdots, \rho(\boldsymbol{x}, \hat{\boldsymbol{x}}_N)]$$

is the vector of correlations between a general point in the input space and the training sites. The parameters of the kriging model are the parameters of the correlation function $\boldsymbol{\theta} = [\theta_1 \cdots \theta_{K_x}]$ and the regression coefficients $\boldsymbol{\beta}$. They can be estimated by the following iterative procedure. First assume a value for $\boldsymbol{\theta}$ and estimate the regression coefficients by

$$\tilde{\boldsymbol{\beta}} = \bigl(\boldsymbol{F}(\hat{\boldsymbol{X}})^T \boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1} \boldsymbol{F}(\hat{\boldsymbol{X}})\bigr)^{-1} \boldsymbol{F}(\hat{\boldsymbol{X}})^T \boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1} \hat{\boldsymbol{Y}},$$

$$\sigma_p^2 = \frac{1}{N}\bigl(\hat{\boldsymbol{Y}} - \boldsymbol{F}(\hat{\boldsymbol{X}})\tilde{\boldsymbol{\beta}}\bigr)^T \boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1} \bigl(\hat{\boldsymbol{Y}} - \boldsymbol{F}(\hat{\boldsymbol{X}})\tilde{\boldsymbol{\beta}}\bigr),$$

and then update the correlation parameters by

$$\tilde{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \bigl(|\boldsymbol{\Sigma}(\hat{\boldsymbol{X}})|^{1/N}\, \sigma_p^2\bigr).$$

The above procedure is repeated until the values of $\tilde{\boldsymbol{\theta}}$ and $\tilde{\boldsymbol{\beta}}$ converge.
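The following sketch implements this procedure for the simplest case of a constant regression term and a Gaussian correlation function $\rho(\boldsymbol{x},\boldsymbol{x}') = \exp(-\sum_k \theta_k (x_k - x'_k)^2)$; the function names and the toy one-dimensional "simulator" are illustrative assumptions rather than part of the original text.

```python
# Ordinary kriging with a constant regression term (a minimal sketch).
import numpy as np
from scipy.optimize import minimize

def corr_matrix(X1, X2, theta):
    """Gaussian correlation rho(x, x') = exp(-sum_k theta_k (x_k - x'_k)^2)."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2            # pairwise squared differences
    return np.exp(-np.tensordot(d2, theta, axes=([2], [0])))

def fit_kriging(X, Y, theta0):
    N = X.shape[0]
    F = np.ones((N, 1))                                     # constant regression function f(x) = 1

    def concentrated_obj(log_theta):                        # |Sigma|^(1/N) * sigma_p^2
        theta = np.exp(log_theta)
        S = corr_matrix(X, X, theta) + 1e-10 * np.eye(N)    # Sigma(X_hat) with a small jitter
        Si = np.linalg.inv(S)
        beta = np.linalg.solve(F.T @ Si @ F, F.T @ Si @ Y)
        resid = Y - F @ beta
        sigma2 = (resid.T @ Si @ resid)[0, 0] / N
        return np.linalg.det(S) ** (1.0 / N) * sigma2

    res = minimize(concentrated_obj, np.log(theta0), method="Nelder-Mead")
    theta = np.exp(res.x)
    S = corr_matrix(X, X, theta) + 1e-10 * np.eye(N)
    Si = np.linalg.inv(S)
    beta = np.linalg.solve(F.T @ Si @ F, F.T @ Si @ Y)
    alpha = Si @ (Y - F @ beta)                             # Sigma^{-1}(Y - F beta), reused at prediction
    return theta, beta, alpha

def kriging_predict(x, X, theta, beta, alpha):
    r = corr_matrix(np.atleast_2d(x), X, theta)             # r(x): correlations to the training sites
    return (beta + r @ alpha)[0, 0]                         # f(x)^T beta + r(x)^T Sigma^{-1}(Y - F beta)

# Toy usage with a hypothetical one-dimensional "simulator"
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
Y = np.sin(6.0 * X)
theta, beta, alpha = fit_kriging(X, Y, theta0=np.array([10.0]))
print(kriging_predict(np.array([0.33]), X, theta, beta, alpha), np.sin(6.0 * 0.33))
```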
Kriging is also termed Gaussian process regression in the literature, with a slightly different formulation18,19,20. Kriging meta-models have been extensively investigated by many authors; a chronological, but far from complete, list of references is given here18,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47.
The support vector machine (SVM)48 was originally developed as a machine learning classifier, so that data in a (high dimensional) input space can be classified into groups according to their locations. The SVM can also be formulated into an input-output model known as support vector regression (SVR)49. For a specific dimension in the output space, given a training data set $\hat{\boldsymbol{X}} = [\hat{\boldsymbol{x}}_1, \cdots, \hat{\boldsymbol{x}}_N]^T$ and $[\hat{y}_1, \cdots, \hat{y}_N]^T$, a regression model of the following form is constructed:

$$y(\boldsymbol{x}) = \alpha_0 + \sum_{i=1}^{N} (\alpha_i - \alpha_i^*)\, K(\boldsymbol{x}, \hat{\boldsymbol{x}}_i)$$

$$\text{subject to:}\quad \begin{cases} \sum_{n=1}^{N} (\alpha_n - \alpha_n^*) = 0 \\ 0 \le \alpha_n, \alpha_n^* \le C \end{cases}$$

where $K$ is a kernel function and $C$ is a regularization constant. The coefficients are usually obtained by solving a quadratic programming problem. Alternatively, they can be determined using a least squares approach that is uniquely determined by the input-output training data; the resulting model is known as the least squares support vector machine (LS-SVM)51. Applications of SVR in meta-modelling can be found here52,53,54,55,56.
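A minimal sketch of using an off-the-shelf SVR implementation as a surrogate is given below; the scikit-learn estimator settings and the toy training data are assumptions for illustration.

```python
# Support vector regression surrogate via scikit-learn (a sketch).
import numpy as np
from sklearn.svm import SVR

# Toy training data from a hypothetical expensive simulation
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 2))                  # 40 runs, 2 inputs
y = np.sin(4.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2           # stand-in simulator output

# Radial basis function kernel; C and epsilon control regularization and tube width
surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

x_new = np.array([[0.3, 0.7]])
print(surrogate.predict(x_new))                          # cheap prediction at a new point
```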
The multivariate adaptive regression splines (MARS) approach of Friedman57 attempts to fit a set of training data $\hat{\boldsymbol{X}} = [\hat{\boldsymbol{x}}_1, \cdots, \hat{\boldsymbol{x}}_N]^T$ and the corresponding outputs $\hat{\boldsymbol{Y}}$ with an expansion of basis functions:

$$\boldsymbol{y}(\boldsymbol{x}) = \sum_{m=1}^{M} \beta_m \boldsymbol{B}_m(\boldsymbol{x}),$$

$$\boldsymbol{B}_m(\boldsymbol{x}) = \prod_{l=1}^{L_m} \bigl[s_{l,m}\,(x_{v(l,m)} - t_{l,m})\bigr]_+^q,$$

where $[z]_+ = z$ if $z > 0$ and $0$ otherwise. $s_{l,m}$ can take values of $\pm 1$, and $L_m$ is the interaction order of the $m$th basis function. $x_{v(l,m)}$ is the value of one of the input variables, and $t_{l,m}$ is a hinge point, so that the basis function $\boldsymbol{B}_m(\boldsymbol{x})$ is cut off either above or below the hinge point. The basis functions therefore provide a spline fit of an input-output relation that can be piecewise continuous. Use of MARS in meta-modelling has been reported by a number of authors.
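The sketch below builds two such hinge basis functions by hand (with $q = 1$) and evaluates the resulting piecewise-linear model; the hinge locations, signs and coefficients are arbitrary illustrative values, not taken from the original text.

```python
# Hand-built MARS-style hinge basis functions (a sketch with q = 1).
import numpy as np

def hinge(x, t, s):
    """[s * (x - t)]_+ : zero on one side of the hinge point t, linear on the other."""
    return np.maximum(s * (x - t), 0.0)

def mars_like_model(x1, x2):
    # y(x) = beta_1 * B_1(x) + beta_2 * B_2(x), with
    # B_1(x) = [+(x1 - 0.4)]_+                       (first-order term, L_m = 1)
    # B_2(x) = [-(x1 - 0.4)]_+ * [+(x2 - 0.6)]_+     (interaction term, L_m = 2)
    B1 = hinge(x1, 0.4, +1.0)
    B2 = hinge(x1, 0.4, -1.0) * hinge(x2, 0.6, +1.0)
    return 2.0 * B1 + 1.5 * B2

x1 = np.linspace(0.0, 1.0, 5)
x2 = np.linspace(0.0, 1.0, 5)
print(mars_like_model(x1, x2))   # piecewise response: flat below the hinges, linear above
```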
A similar form to the above method is the radial basis function network (RBFN)65, which expresses the output as a weighted sum of basis functions of the distance to a set of centres, $y(\boldsymbol{x}) = \sum_{i=1}^{n} \omega_i\, f(\lVert \boldsymbol{x} - \boldsymbol{c}_i \rVert)$. The function $f$ can take many forms such as linear, cubic, thin plate spline or Gaussian. The parameters of the RBFN include the weighting coefficients 𝝎 and the basis function centres $[\boldsymbol{c}_1 \cdots \boldsymbol{c}_n]$. They can be obtained by a three-step algorithm as follows. First, a set of basis function centres is chosen by some clustering of the training data in the input space (unsupervised learning). Second, the weighting coefficients are determined by linear regression. Then both the weighting coefficients and the basis function centres are updated by gradient search. Applications of radial basis functions in meta-modelling have also been reported.
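A small numerical sketch of this idea, simplified here to two steps (k-means centres followed by linear least squares for the weights), is shown below; the Gaussian width, the toy data and the function names are illustrative assumptions.

```python
# Gaussian radial basis function network fitted in two steps (a sketch).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 2))                     # training inputs
y = np.cos(3.0 * X[:, 0]) * X[:, 1]                         # stand-in simulator output

# Step 1: choose basis function centres by (unsupervised) clustering
centres, _ = kmeans2(X, k=8, seed=1, minit="++")

def design(X, centres, width=0.3):
    """Gaussian basis functions f(||x - c||) evaluated for all points and centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Step 2: determine the weighting coefficients by linear least squares
Phi = design(X, centres)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

x_new = np.array([[0.2, 0.8]])
print(design(x_new, centres) @ w)                           # surrogate prediction
```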
Artificial neural networks (ANNs) were developed with the aim of modelling how the human brain functions. They have been used in many fields of machine learning and artificial intelligence. It should be noted that RBFN and SVM are also forms of ANN interpreted in a general sense. However, in this manuscript, we use ANN to denote the common multi-layer feedforward form, in which

$$y_i^k = f\Bigl(\sum_{j=1}^{K^{k-1}} w_{ij}^k\, y_j^{k-1} + b_i^k\Bigr),$$

where $y_i^k$ is the output of the $i$th neuron in the $k$th layer, $w_{ij}^k$ is a synaptic weight connecting the output of the $j$th neuron in the $(k-1)$th layer to the input of the $i$th neuron in the $k$th layer, $b_i^k$ is the bias of the $i$th neuron in the $k$th layer, and $K^k$ is the number of neurons in the $k$th layer. $f$ is known as the activation function, which can take many forms; sigmoids such as the hyperbolic tangent and logistic functions are commonly used. It has been proved that a single hidden layer of such a network is able to approximate any continuous function74. ANN meta-models have been applied in a wide variety of areas75,58,76,77,78,79,80,81,82,83,84,85,86.
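The layer equation above translates directly into code; the sketch below performs a forward pass through a small two-hidden-layer network with tanh activations, with randomly initialised weights standing in for trained values (all names and sizes are illustrative assumptions).

```python
# Forward pass of a small feedforward ANN with tanh activations (a sketch).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 8, 8, 1]                       # K^0 inputs, two hidden layers, one output

# Random stand-ins for trained weights w^k (K^k x K^{k-1}) and biases b^k (K^k)
weights = [rng.normal(size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.normal(size=n_out) for n_out in layer_sizes[1:]]

def forward(x):
    """y_i^k = f(sum_j w_ij^k y_j^{k-1} + b_i^k), applied layer by layer."""
    y = x
    for k, (W, b) in enumerate(zip(weights, biases)):
        z = W @ y + b
        y = np.tanh(z) if k < len(weights) - 1 else z    # linear output layer
    return y

print(forward(np.array([0.1, -0.4, 0.7])))
```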
In many high fidelity simulations, especially CFD, the outputs are not limited to a few variables but are maps of spatial and temporal variations. While such maps can be discretized to generate a large number of variables, it is not clear how they should be incorporated into the meta-model. Consider a dynamic system that has an input vector 𝒙 and produces an output over a period of time which can be discretized into $\boldsymbol{y} = [y_1, \cdots, y_T]$. Conti and O'Hagan87 distinguished several ways of emulating such a system. A multiple output (MO) emulator expresses the output at different time steps as a function of the input,

$$\boldsymbol{y} = \boldsymbol{\Omega}(\boldsymbol{x}).$$

A multiple single output (MS) emulator expresses the output at each time instant as a separate function of the input,

$$y_i = \Omega_i(\boldsymbol{x}),$$

while a time input (TI) emulator treats time as an additional input variable,

$$y = \Omega(\boldsymbol{x}, t).$$
Let us use the kriging model as an example, and let $n$ be the number of regression functions and $m$ the number of training data points. The training of the MO emulator requires the inversion of the $m \times m$ matrix $\boldsymbol{\Sigma}(\hat{\boldsymbol{X}})$ and the $Tn \times Tn$ matrix $\boldsymbol{F}(\hat{\boldsymbol{X}})^T \boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1} \boldsymbol{F}(\hat{\boldsymbol{X}})$; hence the size of the matrix being inverted depends on $T$. The training of the MS emulator requires, for each time step, the inversion of the $m \times m$ matrix $\boldsymbol{\Sigma}(\hat{\boldsymbol{X}})$ and the $n \times n$ matrix $\boldsymbol{F}(\hat{\boldsymbol{X}})^T \boldsymbol{\Sigma}(\hat{\boldsymbol{X}})^{-1} \boldsymbol{F}(\hat{\boldsymbol{X}})$; the size of the matrix inversion is always limited, but the training time increases with $T$. In the TI emulator, the number of regression functions $n$ has to be increased to account for variations due to time dependence. Similar considerations apply to spatially distributed outputs.
One way to reduce the dimensionality of such outputs is principal component analysis (PCA)88. For example, Chen et al20 used PCA to reduce the dimensionality of the simulated outputs of an aerosol process and showed that the output variations over 182250 spatial-temporal grid points can be captured by a small number of principal components, each modelled by GPR. Similar works were reported by Jia et al89 in real-time storm/hurricane risk assessment.
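A sketch of this PCA-plus-GPR strategy is given below using scikit-learn; the synthetic high-dimensional output field, the number of retained components and all variable names are assumptions made for illustration.

```python
# PCA compression of a high-dimensional output followed by per-component GPR (a sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
n_runs, n_grid = 30, 500                                   # 30 simulations, 500 output grid points
X = rng.uniform(0.0, 1.0, size=(n_runs, 2))                # two input parameters per run
grid = np.linspace(0.0, 1.0, n_grid)
# Stand-in "simulator": a spatial profile whose shape depends on the two inputs
Y = np.exp(-((grid[None, :] - X[:, [0]]) ** 2) / (0.05 + 0.1 * X[:, [1]]))

# Compress the output maps to a few principal component scores
pca = PCA(n_components=3).fit(Y)
scores = pca.transform(Y)                                  # (n_runs, 3)

# One GPR surrogate per retained component
kernel = ConstantKernel() * RBF(length_scale=[0.2, 0.2])
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, scores[:, j])
       for j in range(scores.shape[1])]

# Predict the full output field at a new input by reconstructing from predicted scores
x_new = np.array([[0.4, 0.6]])
score_pred = np.column_stack([gp.predict(x_new) for gp in gps])
field_pred = pca.inverse_transform(score_pred)             # back to the 500 grid points
print(field_pred.shape)
```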
2.4 SUMMARY
In this section, we have examined several input-output relations commonly used for meta-modelling. Other forms of input-output relations may of course be used depending on the nature of the problem. Meta-models are often referred to as parametric when they assume some form of basis functions. The model coefficients are obtained from the training data using regression analysis, and in principle we can forget the training data set once the model is determined. The model complexity increases as more basis functions are added to the system; since the number of parameters is not fixed in advance, there is always the danger of overfitting, and the generalizability of the model must be checked. In non-parametric models, by contrast, the training data constitute an essential part of the model parameters and must be memorized to make predictions. It should be noted that the need for data storage is not really a disadvantage, because data storage becomes less and less expensive with cloud technology, and recall of old training data to train a new model is often necessary even for parametric models. The processes described by high fidelity simulations are inherently complex; therefore the meta-model must be flexible enough to capture this complexity, and hence there must be an element in the meta-model that ensures it can do so.
3. META-MODEL CONSTRUCTION
The intuitive approach to meta-model construction is to view each simulation run as an experiment and use statistical design methods to determine the locations of the simulation runs to be executed. This area is known as the design and analysis of computer experiments. Traditional DOE methods can usually be classified according to one of the following two objectives: (i) screening experiments that try to identify factors (input variables or combinations of inputs) that have statistically significant effects on the response (output); and (ii) response surface experiments that try to build an input-output relation which can be used to locate the optimum. Examples of screening designs include fractional factorial and Plackett-Burman designs. Another general class of screening designs are "optimal designs" that optimize some metric of the information matrix $\hat{\boldsymbol{X}}^T\hat{\boldsymbol{X}}$, where $\hat{\boldsymbol{X}}$ is the matrix of the training data in a generalized parametric input space. For example, the D-optimal design minimizes the determinant $|(\hat{\boldsymbol{X}}^T\hat{\boldsymbol{X}})^{-1}|$.
In design of experiments, two kinds of errors are considered: (i) random errors associated with experimental measurements; and (ii) mismatch between the model assumed and the actual response, i.e. bias. In design of physical experiments, classical designs are often based upon a preselected model and the assumption that bias is small compared with the random error. Physical experiments take time and effort to execute, and therefore the number of runs is a major concern. A relatively large number of runs is still required for some of these designs, especially when the dimensionality of the design space increases. Small response surface designs can reduce the number of runs, e.g. the small composite design95 and saturated designs96 (see the reviews by Draper and Lin97 and Myers and Montgomery93). However, as pointed out by Allen and Yu98, the desirable properties of a design should include sufficiently accounting for the effects of bias without sacrificing variance; designs based on the expected integral mean square error should be considered (Allen et al.99).
Space-filling designs aim to spread the design points evenly over the sampled region. Examples of space-filling designs include random or Monte Carlo sampling, Latin hypercube sampling and uniform designs. To compare designs, a measure of the space-filling nature should be defined, e.g. the modified L2 discrepancy,

$$ML_2 = \left(\frac{4}{3}\right)^{K_x} - \frac{2^{1-K_x}}{N}\sum_{n=1}^{N}\prod_{k=1}^{K_x}\bigl(3 - \hat{x}_{nk}^2\bigr) + \frac{1}{N^2}\sum_{n=1}^{N}\sum_{n'=1}^{N}\prod_{k=1}^{K_x}\bigl(2 - \max(\hat{x}_{nk}, \hat{x}_{n'k})\bigr).$$

The smaller the value, the better the space-filling properties. Another measure is the maximin distance,

$$M_m = \min_{n, n'} \lVert \hat{\boldsymbol{x}}_n - \hat{\boldsymbol{x}}_{n'} \rVert.$$

The larger this value, the better. To balance space-filling properties and orthogonality, Cioppa and Lucas107 proposed designs that achieve good space-filling properties while remaining nearly orthogonal.
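As an illustration of these space-filling measures, the sketch below generates a batch of random Latin hypercube designs and keeps the one with the smallest modified L2 discrepancy, also reporting its maximin distance; the design size and the selection-by-ML2 strategy are illustrative assumptions.

```python
# Random Latin hypercube designs scored by the two space-filling measures (a sketch).
import numpy as np

def latin_hypercube(n, k, rng):
    """One random LHS with n points in [0, 1]^k (one stratum per point per dimension)."""
    cells = np.stack([rng.permutation(n) for _ in range(k)], axis=1)
    return (cells + rng.random((n, k))) / n

def modified_L2(X):
    n, k = X.shape
    term1 = (4.0 / 3.0) ** k
    term2 = (2.0 ** (1 - k) / n) * np.sum(np.prod(3.0 - X ** 2, axis=1))
    pair_max = np.maximum(X[:, None, :], X[None, :, :])     # max(x_nk, x_n'k)
    term3 = np.sum(np.prod(2.0 - pair_max, axis=2)) / n ** 2
    return term1 - term2 + term3

def maximin(X):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return d[np.triu_indices_from(d, k=1)].min()            # smallest inter-point distance

rng = np.random.default_rng(0)
designs = [latin_hypercube(20, 3, rng) for _ in range(200)]
best = min(designs, key=modified_L2)                        # smaller ML2 = more uniform
print(modified_L2(best), maximin(best))
```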
With a limited number of initial runs, it is unrealistic to expect that a reliable meta-model can be built or that the optimum of the actual model can be located; sequential, adaptive sampling is therefore needed. Early work in this area, reviewed by Wang and Shan3, includes reducing the dimensionality of the design space and surveying a small design space whose limits are gradually moved towards the optimal region111,112,113,114. Another strategy115 is to sample points in the large design space and then cluster promising points; meta-models for local regions around the cluster centres are then generated to refine the optimum. The above are optimization iterations designed for finding the optimum. Khu et al60 also used an iterative procedure to add sample points; however, the objective function to be optimized is the sum of squared errors between the actual model and the predicted values of a validation data set, so the
iterative approach is designed to increase the fidelity of the model. Chen et al116 argued that, even in optimization, the initial meta-model built with limited data cannot be completely trusted. Thus, in the early stages of the search, we need to focus on exploring under-sampled regions; at later stages the focus should shift to finding the optimum, since enough data have been accumulated to build a sufficiently accurate model. An index was defined in which the predicted fitness of the objective function at a sample point is treated as an information energy and combined with the information entropy into an information "free energy". New samples were generated by Monte Carlo importance sampling of the free energy, with a "temperature", related to the number of previously sampled points, acting as the balancing parameter. Tang et al117 and Chi et al118 used GPR meta-models in which the prediction variance of the GPR was used in addition to the prediction mean to determine future sampling points. An index that takes the uncertainty of the prediction into account, known as the expected improvement (EI)119, is possible because the meta-model gives both a prediction mean and a variance. The predicted improvement is integrated over its probability density function to obtain its expected value, i.e. the EI, which is then used as the objective function to be optimized.
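A sketch of computing the EI from a GPR surrogate (for a minimization problem) is shown below, using the standard closed-form expression; the toy data, kernel settings and function names are illustrative assumptions.

```python
# Expected improvement from a GPR surrogate, for minimization (a sketch).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(8, 1))                     # initial design
y = (X[:, 0] - 0.65) ** 2 + 0.05 * rng.normal(size=8)      # stand-in expensive objective

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(0.2), normalize_y=True).fit(X, y)

def expected_improvement(x_cand, gp, y_best):
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)                        # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Evaluate EI on a dense grid of candidates and pick the next sample point
x_cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
ei = expected_improvement(x_cand, gp, y.min())
print(x_cand[np.argmax(ei), 0])                            # suggested next run
```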
Another important problem is the construction of meta-models for problems with some degree of similarity. For example, suppose CFD simulations have been carried out and an "old" meta-model has been constructed for a given piece of equipment with a particular geometry and working fluid. It would be most desirable if we could construct a "new" meta-model for another piece of equipment that is similar in structure but with slightly different geometries, and/or for another fluid with different rheological properties, using the given "old" meta-model plus only a few simulation runs. Another typical problem in chemical engineering is reactor scale-up. It is well known that the optimal operating condition found for a reaction carried out in a lab-scale reactor may not work when the reactor is scaled to larger dimensions. Although kinetics and thermodynamics do not change as the equipment is scaled up, mixing and heat transfer change with dimension. CFD simulations are often used to solve scale-up problems, but CFD coupled with reaction kinetics is computationally expensive, so that efficient meta-model-based optimization is necessary.
Model migration is a methodology that aims to modify an existing process model to fit a different yet similar process. The existing model is denoted the "base model", while the model to be developed for the new process is called the "new model". At least one of the following two objectives should be achieved:
(1) To attain similar prediction accuracy, fewer data are required for migrating from the base model to the new model than for developing an entirely new model without migration.
(2) The migrated new model is more accurate than a model developed without the aid of migration if nearly equal numbers of experimental data are used in model training.
Motivated by the standardization step in calibration model transfer124, the migration can be conducted by a parametric scale and bias correction (SBC) of the base model120,123:
$$\boldsymbol{y}_{\mathrm{new}}(\boldsymbol{x}) = \boldsymbol{\lambda}_y \odot \boldsymbol{y}_{\mathrm{base}}(\boldsymbol{\lambda}_x \odot \boldsymbol{x} + \boldsymbol{b}_x) + \boldsymbol{b}_y,$$

where $\boldsymbol{\lambda}_y$ and $\boldsymbol{\lambda}_x$ are scale parameters for the different dimensions of the output and input space, and $\boldsymbol{b}_y$ and $\boldsymbol{b}_x$ are bias vectors in the output and input space respectively. Data of the new process are used to determine the scale and bias transformation parameters. The SBC, being a linear transformation in the output and input spaces, represents a somewhat arbitrary similarity condition that may not have sufficient flexibility to model the new process of interest. To overcome this limitation, Yan et al.125 proposed a Bayesian migration method to update the scale-bias functions given experimental data from the new process. Their method, named functional SBC, is based on a GPR model framework. The input-output relation of the new process is related to that of the base model through scale and bias functions, and the bias correction 𝜹(𝒙) is chosen to be a zero-mean Gaussian process. The underlying assumption is that if the new process is similar to the old process, 𝜹(𝒙) will be quite close to zero and its determination will require many fewer data points. The applicability of this method was recently demonstrated using sequential sampling and Bayesian techniques126,127,128.
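The sketch below fits the four SBC parameters of a scalar base model to a handful of new-process measurements by nonlinear least squares; the base model, the synthetic data and all names are stand-ins for illustration, not taken from the original text.

```python
# Scale and bias correction (SBC) of a base model, fitted to a few new-process points (a sketch).
import numpy as np
from scipy.optimize import least_squares

def y_base(x):
    """Stand-in for an existing (base) meta-model of the old process."""
    return np.sin(3.0 * x) + 0.5 * x

def sbc_residuals(p, x_new, y_new):
    lam_y, lam_x, b_x, b_y = p
    return lam_y * y_base(lam_x * x_new + b_x) + b_y - y_new

# A few measurements from the (hypothetical) new, similar process
x_new = np.array([0.1, 0.4, 0.7, 1.0, 1.3])
y_new = 1.2 * np.sin(3.0 * (0.9 * x_new + 0.05)) + 0.6 * x_new + 0.3

fit = least_squares(sbc_residuals, x0=[1.0, 1.0, 0.0, 0.0], args=(x_new, y_new))
lam_y, lam_x, b_x, b_y = fit.x
migrated = lambda x: lam_y * y_base(lam_x * x + b_x) + b_y   # migrated "new" model
print(fit.x, migrated(np.array([0.5])))
```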
Dimensional analysis can also be exploited in migration and in the design of experiments129. It is well known that the equations of transport phenomena can be reduced to universal forms involving dimensionless groups such as the Reynolds, Prandtl and Nusselt numbers, so that results obtained for one geometry or fluid can be transferred to another. Such advantages should be even more evident if applied to the design of computer experiments, in particular CFD experiments.
The functional SBC transformation has its roots in the work of Qian et al38, where a similar method was used to migrate between a low-accuracy (LE) experiment and a high-accuracy (HE) experiment. The concept is especially important for the design of computer experiments, because complex computer codes can generate results with different levels of mesh density. It is always desirable that a meta-model can be generated using many low fidelity runs and only a small number of high fidelity runs. Kennedy and O'Hagan130 first proposed using an autoregressive relation with a Gaussian process bias term to connect meta-models of different levels of fidelity. The problem of migrating between different levels of fidelity due to mesh density has since been studied by many authors131,132,133,134,135,136,137,138.
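A commonly quoted form of this autoregressive relation, given here as a sketch with symbols that are not taken from the original text, is

$$y_h(\boldsymbol{x}) = \rho\, y_l(\boldsymbol{x}) + \delta(\boldsymbol{x}),$$

where $y_l$ and $y_h$ denote the low- and high-fidelity outputs, $\rho$ is a regression coefficient, and $\delta(\boldsymbol{x})$ is a Gaussian process bias term assumed independent of $y_l(\boldsymbol{x})$.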
It should be pointed out that the complexity of a computer code can be increased not only by refining the mesh but also by increasing the number of physical processes being considered. Chuang et al139 demonstrated, using a simple well-mixed CSTR model that contains only the kinetics of the reaction as the base model, that new meta-models can be readily developed for full CFD simulations that take mixing and heat transfer into account. In theory, using a Gaussian process model for the bias term allows us to include any differences between the "new" and "old" processes. It should be noted that the migration is feasible even if new variables are added to the "new" process: the bias term then captures the difference between the input-output relations of the "new" and "old" processes in the old input space. In other words, as the computational model becomes more complex, fewer and fewer significant variations should be left for the bias term to describe.
3.4 SUMMARY
In this section we have discussed the experimental designs that are most appropriate for building meta-models. When a meta-model is used for optimization, sequential sampling that balances exploration and exploitation becomes an important research area. Migration between different meshing densities has been addressed using autoregressive relations and the functional scale and bias transformation. The bias term in the functional scale and bias correction incorporates enough flexibility to allow for migration between models of different complexities due to the physical mechanisms included.
4. APPLICATIONS
Meta-models built from designed simulations or experiments have found applications in many areas of chemical engineering, including the semiconductor processing industry156,157,158, etc. It should be pointed out that in some of these works, actual experiments rather than computer simulations were used; nevertheless, the problems of designing additional experiments and finding the optimum process condition can be treated in the same framework. A particular difficulty arises when the number and levels of categorical variables are large. Qian and Wu159 have discussed the use of GPR with quantitative and qualitative inputs as a surrogate model for computer experiments. The covariance structure between categorical variables is estimated from the sampled data, so that one does not have to perform experiments at every combination of the categorical levels; experimental design methods for such problems have also been proposed160. The potential of such models, together with suitable sampling and optimization strategies, in flowsheet and equipment design for chemical processes remains to be explored.
It should be pointed out that there are numerous studies in which data-driven models such as ANN161, RBFN162, SVM163 and GPR164 are used to represent nonlinear time series, for example as soft sensors and in model predictive control165,166,167,168,169,170,171,172,173. Strictly speaking, they should not be considered as the meta-models discussed in this work, because they are not surrogates of a more complex model. Moreover, they are usually obtained from online data; thus experimental design for constructing these models has received much less interest compared with adaptive or just-in-time strategies. Tsen et al174 proposed a hybrid approach in which first-principle simulation data were trained together with experimental data to obtain an ANN model for use in control. Such hybrid models175,176 were developed because of the difficulty of using a complex, high fidelity dynamic simulation directly in online control. However, many researchers prefer reduced order models, in which a simpler model is developed by reducing the original differential algebraic equation (DAE) system using model reduction techniques. Such methods have been developed for distillation columns177, bioreactors178, air separation179,180, etc. Since the reduced model is still an equation-based first-principle model, it is not a meta-model as defined here.
Perhaps the closest form of meta-model in process control is found under the banner of approximate dynamic programming181,182, in which the controller minimizes a cost-to-go function, usually a time-discounted sum of the costs at individual time points. The major challenge in this framework is that the computation needed to evaluate the cost-to-go is often very high, and thus on-line control is almost infeasible. Here the idea is to use simulation or historical data to build an approximate relation between the process state and the cost-to-go function. This approximate function is then used for on-line control, which greatly reduces the on-line computational burden.
That the design space can be efficiently explored for process improvement using a meta-model is intuitive; a less obvious application is the calibration of the high fidelity model itself. Typically, a high fidelity model requires a set of physically meaningful parameters 𝛝 to make predictions (see equation 1). For example, in CFD some of the model parameters are not known accurately in advance; in practice, they have to be calibrated by fitting simulation results to experimental data. To do so, high fidelity simulations have to be carried out at many different parameter settings, and meta-models can substantially reduce this computational burden. The calibration of computer models dates back as far as 1978 (O'Hagan, 1978183), where a Bayesian calibration approach was used. This involves fitting the posterior distribution of the model parameters given the experimental data. Approaches different from the Bayesian one have also been reported in the literature; some of them can be formulated as an optimization problem.
Examples include water resource systems, which utilize meta-models combined with a Genetic Algorithm191,192,193, and mechanical and aerospace systems, which utilize radial basis functions (Ghenaiet, 2015).
A computer model can help us not only to locate the optimal operation of a process, but also to evaluate the sensitivity of the response to a certain input. Sensitivity can be characterized locally by carrying out one-at-a-time changes to each input and examining the change in the response; this was done, for example, when a signalling network194 was analyzed and simplified. Alternatively, global variance-based indices such as the Sobol indices195, the Fourier amplitude sensitivity test (FAST)196,197 and high dimensional model representation198 can be used. Computing such indices is of course time-consuming using the high fidelity model; however, these indices can be computed efficiently with the help of meta-models, for example in the analysis of CFD-simulated vapour cloud dispersion90, etc.
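As an illustration, the sketch below estimates first-order Sobol indices by Monte Carlo on a cheap stand-in surrogate using a standard pick-and-freeze estimator; the test function, sample sizes and input ranges are assumptions made only for illustration.

```python
# First-order Sobol indices estimated on a cheap surrogate by Monte Carlo (a sketch).
import numpy as np

def surrogate(X):
    """Stand-in meta-model: an Ishigami-like test function of three inputs."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

def first_order_sobol(model, k, n, rng):
    A = rng.uniform(-np.pi, np.pi, size=(n, k))        # two independent sample blocks
    B = rng.uniform(-np.pi, np.pi, size=(n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                            # replace only column i of A with B's
        S[i] = np.mean(fB * (model(ABi) - fA)) / var   # pick-and-freeze estimator of S_i
    return S

rng = np.random.default_rng(1)
print(first_order_sobol(surrogate, k=3, n=100_000, rng=rng))
```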
4.5 SUMMARY
The use of meta-models with categorical variables is only in its early stage; thus applications of meta-models in design problems that involve categorical decisions and mathematical programming have been limited. While there are many applications of data-driven soft sensors and reduced order models in nonlinear model predictive control, these models are not meta-models as defined in this study. The closest form of meta-model in process control is the approximation of the cost-to-go function in approximate dynamic programming. It has also been shown that meta-models can be used for model calibration and sensitivity analysis. Sensitivity analysis of kinetic reaction networks has been well researched, partly because of the need to identify the important parameters and reactions in such models.
5. CONCLUSIONS
The objective of this review is to provide chemical process system engineers with
the statistical background that has been developed for design of computer experiment
and use of meta-models for optimization. We have introduced various forms of meta-
choice of initial experimental design and some form of balance between exploring
theory and sampling techniques for this kind of problems have not been fully
understood. With such knowledge, our ability to use complex, high fidelity, time
ACKNOWLEDGEMENT
O. Kajero’s PhD is funded by the Tertiary Education Trust Fund (TETFUND), Nigeria
in collaboration with University of Lagos, Nigeria. This work was also partially
supported
REFERENCES
1. Simpson, T. W.; Peplinski, J. D.; Koch, P. N.; Allen, J. K., Metamodels for
computer-based engineering design: survey and recommendations. Engineering with
Computers 2001, 17 (2), 129-150.
2. Chen, V. C. P.; Tsui, K. L.; Barton, R. R.; Meckesheimer, M., A review on design,
modeling and applications of computer experiments. Iie Transactions 2006, 38 (4), 273-
291.
3. Wang, G. G.; Shan, S., Review of metamodeling techniques in support of
engineering design optimization. Journal of Mechanical Design 2007, 129 (4), 370-380.
4. Kleijnen, J. P. C., Kriging metamodeling in simulation: A review. Eur. J. Oper. Res.
2009, 192 (3), 707-716.
5. Levy, S.; Steinberg, D. M., Computer experiments: a review. AStA-Adv. Stat. Anal.
2010, 94 (4), 311-324.
6. Shan, S.; Wang, G. G., Survey of modeling and optimization strategies to solve
high-dimensional design problems with computationally-expensive black-box
functions. Structural and Multidisciplinary Optimization 2010, 41 (2), 219-241.
7. Blondet, G.; Le Duigou, J.; Boudaoud, N.; Eynard, B., Simulation data
management for adaptive design of experiments: A litterature review. Mechanics &
Industry 2015, 16 (6).
8. Santner, T. J.; Williams, B. J.; Notz, W. I., The design and analysis of computer
experiments. Springer Science & Business Media: 2013.
9. Simpson, T. W.; Lin, D. K.; Chen, W., Sampling strategies for computer
experiments: design and analysis. International Journal of Reliability and Applications
2001, 2 (3), 209-240.
10. Palmer, K.; Realff, M., Metamodeling approach to optimization of steady-state
flowsheet simulations - Model generation. Chemical Engineering Research & Design
2002, 80 (A7), 760-772.
11. Dutournie, P.; Salagnac, P.; Glouannec, P., Optimization of radiant-convective
drying of a porous medium by design of experiment methodology. Dry. Technol. 2006,
24 (8), 953-963.
12. Chen, V. C.; Tsui, K.-L.; Barton, R. R.; Meckesheimer, M., A review on design,
modeling and applications of computer experiments. IIE transactions 2006, 38 (4), 273-
291.
13. Krige, D., A statistical approach to some mine valuations and allied problems at
the Witwatersrand, University of Witwatersrand. Masters: 1951.
14. Matheron, G., Principles of geostatistics. Economic geology 1963, 58 (8), 1246-
1266.
15. Cressie, N., Statistics for spatial data: Wiley series in probability and statistics.
Wiley-Interscience New York 1993, 15, 16.
16. Chang, K.-t., Introduction to geographic information systems. McGraw-Hill
Higher Education Boston: 2006.
17. Lophaven, S. N.; Nielsen, H. B.; Søndergaard, J. DACE-A Matlab Kriging toolbox,
version 2.0; 2002.
18. Sacks, J.; Welch, W. J.; Mitchell, T. J.; Wynn, H. P., Design and analysis of
computer experiments. Statistical science 1989, 409-423.
19. Jones, D. R., A taxonomy of global optimization methods based on response
surfaces. Journal of global optimization 2001, 21 (4), 345-383.
20. Chen, T.; Hadinoto, K.; Yan, W.; Ma, Y., Efficient meta-modelling of complex
process simulations with time–space-dependent outputs. Computers & chemical
engineering 2011, 35 (3), 502-509.
21. Sacks, J.; Schiller, S. B.; Welch, W. J., Designs for computer experiments.
Technometrics 1989, 31 (1), 41-47.
22. Barton, R. R. In Simulation metamodels, Proceedings of the 30th conference on
Winter simulation, IEEE Computer Society Press: 1998; pp 167-176.
23. Simpson, T. W.; Mistree, F., Kriging models for global approximation in
simulation-based multidisciplinary design optimization. Aiaa Journal 2001, 39 (12),
2233-2241.
24. Meckesheimer, M.; Booker, A. J.; Barton, R. R.; Simpson, T. W., Computationally
inexpensive metamodel assessment strategies. AIAA journal 2002, 40 (10), 2053-2060.
25. Shin, M.; Sargent, R. G.; Goel, A. L. In Optimization and response surfaces:
Gaussian radial basis functions for simulation metamodeling, Proceedings of the 34th
conference on Winter simulation: exploring new frontiers, Winter Simulation
Conference: 2002; pp 483-488.
26. Allen, T. T.; Bernshteyn, M. A.; Kabiri-Bamoradian, K., Constructing meta-
models for computer experiments. J. Qual. Technol. 2003, 35 (3), 264-274.
27. Ong, Y. S.; Nair, P. B.; Keane, A. J., Evolutionary optimization of computationally
expensive problems via surrogate modeling. Aiaa Journal 2003, 41 (4), 687-696.
28. Kleijnen, J. P. C., An overview of the design and analysis of simulation
experiments for sensitivity analysis. Eur. J. Oper. Res. 2005, 164 (2), 287-300.
29. Li, R. Z.; Sudjianto, A., Analysis of computer experiments using penalized
likelihood in Gaussian kriging models. Technometrics 2005, 47 (2), 111-120.
30. Martin, J. D.; Simpson, T. W., Use of kriging models to approximate deterministic
computer models. AIAA journal 2005, 43 (4), 853-863.
31. Wang, L.; Beeson, D.; Akkaram, S.; Wiggs, G. In Gaussian process meta-models
for efficient probabilistic design in complex engineering design spaces, ASME 2005
International Design Engineering Technical Conferences and Computers and
Information in Engineering Conference, American Society of Mechanical Engineers:
2005; pp 785-798.
32. Barton, R. R.; Meckesheimer, M., Metamodel-based simulation optimization.
Handbooks in operations research and management science 2006, 13, 535-574.
33. Joseph, V. R., Limit kriging. Technometrics 2006, 48 (4), 458-466.
34. Zhou, Z.; Ong, Y. S.; Nair, P. B.; Keane, A. J.; Lum, K. Y., Combining global and
local surrogate models to accelerate evolutionary optimization. Systems, Man, and
Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 2007, 37 (1),
66-76.
35. Joseph, V. R.; Hung, Y.; Sudjianto, A., Blind kriging: a new method for developing
metamodels. Journal of Mechanical Design 2008, 130 (3).
36. Knowles, J.; Nakayama, H., Meta-modeling in multiobjective optimization. In
Multiobjective optimization, Springer: 2008; pp 245-284.
37. Marrel, A.; Iooss, B.; Van Dorpe, F.; Volkova, E., An efficient methodology for
modeling complex computer codes with Gaussian processes. Computational Statistics
& Data Analysis 2008, 52 (10), 4731-4744.
38. Qian, P. Z. G.; Wu, H. Q.; Wu, C. F. J., Gaussian Process Models for Computer
Experiments With Qualitative and Quantitative Factors. Technometrics 2008, 50 (3),
383-396.
39. Drignei, D., A Kriging Approach to the Analysis of Climate Model Experiments.
Journal of Agricultural Biological and Environmental Statistics 2009, 14 (1), 99-114.
40. Marrel, A.; Iooss, B.; Laurent, B.; Roustant, O., Calculations of sobol indices for
the gaussian process metamodel. Reliability Engineering & System Safety 2009, 94 (3),
742-751.
41. Antognini, A. B.; Zagoraiou, M., Exact optimal designs for computer experiments
via Kriging metamodelling. Journal of Statistical Planning and Inference 2010, 140 (9),
2607-2617.
42. Yin, J.; Ng, S. H.; Ng, K. M., Kriging metamodel with modified nugget-effect:
The heteroscedastic variance case. Computers & Industrial Engineering 2011, 61 (3),
760-777.
43. Zhou, Q.; Qian, P. Z. G.; Zhou, S. Y., A Simple Approach to Emulation for
Computer Models With Qualitative and Quantitative Factors. Technometrics 2011, 53
(3), 266-273.
44. Gramacy, R. B.; Lee, H. K., Bayesian treed Gaussian process models with an
application to computer modeling. Journal of the American Statistical Association 2012.
45. Roustant, O.; Ginsbourger, D.; Deville, Y., DiceKriging, DiceOptim: Two R
Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling
and Optimization. Journal of Statistical Software 2012, 51 (1), 1-55.
46. Kleijnen, J. P. C.; Mehdad, E., Multivariate versus univariate Kriging metamodels
for multi-response simulation models. Eur. J. Oper. Res. 2014, 236 (2), 573-582.
47. Peng, C. Y.; Wu, C. F. J., On the Choice of Nugget in Kriging Modeling for
Deterministic Computer Experiments. Journal of Computational and Graphical
Statistics 2014, 23 (1), 151-168.
48. Cortes, C.; Vapnik, V., Support-vector networks. Machine learning 1995, 20 (3),
273-297.
49. Smola, A. J.; Schölkopf, B., A tutorial on support vector regression. Statistics and
computing 2004, 14 (3), 199-222.
50. Keerthi, S. S.; Shevade, S. K.; Bhattacharyya, C.; Murthy, K. R. K., Improvements
to Platt's SMO algorithm for SVM classifier design. Neural Computation 2001, 13 (3),
637-649.
51. Suykens, J. A.; Vandewalle, J., Least squares support vector machine classifiers.
Neural processing letters 1999, 9 (3), 293-300.
52. Clarke, S. M.; Griebsch, J. H.; Simpson, T. W., Analysis of support vector
regression for approximation of complex engineering analyses. Journal of mechanical
design 2005, 127 (6), 1077-1087.
53. Pan, F.; Zhu, P.; Zhang, Y., Metamodel-based lightweight design of B-pillar with
TWB structure via support vector regression. Computers & structures 2010, 88 (1), 36-
44.
54. Eisenhower, B.; O’Neill, Z.; Narayanan, S.; Fonoberov, V. A.; Mezić, I., A
methodology for meta-model based optimization in building energy models. Energy
and Buildings 2012, 47, 292-301.
55. Liu, Y.; Pender, G., A flood inundation modelling using v-support vector machine
regression model. Engineering Applications of Artificial Intelligence 2015, 46, 223-231.
56. Mirfenderesgi, G.; Mousavi, S. J., Adaptive meta-modeling-based simulation
optimization in basin-scale optimum water allocation: a comparative analysis of meta-
models. Journal of Hydroinformatics 2015, jh2015157.
57. Friedman, J. H., Multivariate adaptive regression splines. The annals of statistics
1991, 1-67.
58. Jin, R.; Chen, W.; Simpson, T. W., Comparative studies of metamodelling
techniques under multiple modelling criteria. Structural and Multidisciplinary
Optimization 2001, 23 (1), 1-13.
59. Simpson, T. W.; Poplinski, J.; Koch, P. N.; Allen, J. K., Metamodels for computer-
based engineering design: survey and recommendations. Engineering with computers
2001, 17 (2), 129-150.
60. Khu, S.; Savic, D.; Liu, Y.; Madsen, H. In A fast evolutionary-based
metamodelling approach for the calibration of a rainfall-runoff model, Trans. 2nd
Biennial Meeting of the International Environmental Modelling and Software Society,
iEMSs: Manno, Switzerland, Citeseer: 2004.
61. Schueremans, L.; Van Gemert, D., Benefit of splines and neural networks in
simulation based structural reliability analysis. Structural safety 2005, 27 (3), 246-261.
62. De Wilde, P.; Tian, W., Predicting the performance of an office under climate
change: a study of metrics, sensitivity and zonal resolution. Energy and Buildings 2010,
42 (10), 1674-1684.
63. Shahsavani, D.; Tarantola, S.; Ratto, M., Evaluation of MARS modeling technique
for sensitivity analysis of model output. Procedia-Social and Behavioral Sciences 2010,
2 (6), 7737-7738.
64. Costas, M.; Díaz, J.; Romera, L.; Hernández, S., A multi-objective surrogate-based
optimization of the crashworthiness of a hybrid impact absorber. International Journal
of Mechanical Sciences 2014, 88, 46-54.
65. Mason, J.; Cox, M. In Algorithms for approximation: based on the proceedings of
the IMA Conference on Algorithms for the Approximation of Functions and Data, held
at the Royal Military College of Science, Shrivenham, July 1985, volume 10 of The
Institute of Mathematics and Its Applications conference, new series, The Institute of
Mathematics and Its Applications conference series, new series, 1987.
66. Kadirkamanathan, V.; Niranjan, M., A function estimation approach to sequential
learning with neural networks. Neural Computation 1993, 5 (6), 954-975.
67. Yingwei, L.; Sundararajan, N.; Saratchandran, P., A sequential learning scheme
for function approximation using minimal radial basis function neural networks. Neural
computation 1997, 9 (2), 461-478.
68. Garcet, J. P.; Ordonez, A.; Roosen, J.; Vanclooster, M., Metamodelling: Theory,
concepts and application to nitrate leaching modelling. Ecological modelling 2006, 193
(3), 629-644.
69. Mullur, A. A.; Messac, A., Metamodeling using extended radial basis functions: a
comparative approach. Engineering with Computers 2006, 21 (3), 203-217.
70. Regis, R. G.; Shoemaker, C. A., A stochastic radial basis function method for the
global optimization of expensive functions. INFORMS Journal on Computing 2007, 19
(4), 497-509.
71. Kitayama, S.; Arakawa, M.; Yamazaki, K., Sequential approximate optimization
using radial basis function network for engineering optimization. Optimization and
Engineering 2011, 12 (4), 535-557.
72. Gu, J.; Li, G.; Dong, Z., Hybrid and adaptive meta-model-based global
optimization. Engineering Optimization 2012, 44 (1), 87-104.
73. Farshidi, A.; Rakai, L.; Samimi, B.; Behjat, L.; Westwick, D., A new a priori net
length estimation technique for integrated circuits using radial basis functions.
Computers & Electrical Engineering 2013, 39 (4), 1204-1218.
74. Cybenko, G., Approximation by superpositions of a sigmoidal function.
Mathematics of control, signals and systems 1989, 2 (4), 303-314.
75. Badiru, A. B.; Sieger, D. B., Neural network as a simulation metamodel in
economic analysis of risky projects. Eur. J. Oper. Res. 1998, 105 (1), 130-142.
76. Fonseca, D.; Navaresse, D.; Moynihan, G., Simulation metamodeling through
artificial neural networks. Engineering Applications of Artificial Intelligence 2003, 16
(3), 177-183.
77. El Tabach, E.; Lancelot, L.; Shahrour, I.; Najjar, Y., Use of artificial neural network
simulation metamodelling to assess groundwater contamination in a road project.
Mathematical and computer modelling 2007, 45 (7), 766-776.
78. Gorissen, D.; Hendrickx, W.; Dhaene, T. In Adaptive Global Metamodeling with
Neural Networks, ESANN, Citeseer: 2007; pp 187-192.
79. Kuo, Y.; Yang, T.; Peters, B. A.; Chang, I., Simulation metamodel development
using uniform design and neural networks for automated material handling systems in
semiconductor wafer fabrication. Simulation Modelling Practice and Theory 2007, 15
(8), 1002-1015.
80. Chan, W.; Fu, M.; Lu, J., An integrated FEM and ANN methodology for metal-
formed product design. Engineering Applications of Artificial Intelligence 2008, 21 (8),
1170-1181.
81. Khosravi, A.; Nahavandi, S.; Creighton, D. In Constructing prediction intervals
for neural network metamodels of complex systems, Neural Networks, 2009. IJCNN
2009. International Joint Conference on, IEEE: 2009; pp 1576-1582.
82. Li, Y.; Ng, S. H.; Xie, M.; Goh, T., A systematic comparison of metamodeling
techniques for simulation optimization in decision support systems. Applied Soft
Computing 2010, 10 (4), 1257-1273.
83. Lönn, D.; Fyllingen, Ø.; Nilssona, L., An approach to robust optimization of
impact problems using random samples and meta-modelling. International Journal of
Impact Engineering 2010, 37 (6), 723-734.
84. Hanspal, N. S.; Allison, B. A.; Deka, L.; Das, D. B., Artificial neural network
(ANN) modeling of dynamic effects on two-phase flow in homogenous porous media.
Journal of Hydroinformatics 2013, 15 (2), 540-554.
85. Tøndel, K.; Vik, J. O.; Martens, H.; Indahl, U. G.; Smith, N.; Omholt, S. W.,
Hierarchical multivariate regression-based sensitivity analysis reveals complex
parameter interaction patterns in dynamic models. Chemometrics and Intelligent
Laboratory Systems 2013, 120, 25-41.
86. Behandish, M.; Wu, Z., Concurrent pump scheduling and storage level
optimization using meta-models and evolutionary algorithms. Procedia Engineering
2014, 70, 103-112.
87. Conti, S.; O’Hagan, A., Bayesian emulation of complex multi-output and dynamic
computer models. Journal of statistical planning and inference 2010, 140 (3), 640-651.
88. Jolliffe, I., Principal component analysis. Wiley Online Library: 2002.
89. Jia, G.; Taflanidis, A. A., Kriging metamodeling for approximation of high-
dimensional wave and surge responses in real-time storm/hurricane risk assessment.
Computer Methods in Applied Mechanics and Engineering 2013, 261, 24-38.
90. Wang, K.; Chen, T.; Kwa, S. T.; Ma, Y.; Lau, R., Meta-modelling for fast analysis
of CFD-simulated vapour cloud dispersion processes. Computers & Chemical
Engineering 2014, 69, 89-97.
91. Arlot, S.; Celisse, A., A survey of cross-validation procedures for model selection.
Statistics surveys 2010, 4, 40-79.
92. Goos, P.; Jones, B., Optimal design of experiments: a case study approach. John
Wiley & Sons: 2011.
93. Myers, R. H.; Montgomery, D. C.; Anderson-Cook, C. M., Response surface
methodology: process and product optimization using designed experiments. John
Wiley & Sons: 2016.
94. Plackett, R. L.; Burman, J. P., The design of optimum multifactorial experiments.
Biometrika 1946, 33 (4), 305-325.
95. Hartley, H. O., Smallest composite designs for quadratic response surfaces.
Biometrics 1959, 15 (4), 611-624.
96. Lin, D. K., A new class of supersaturated designs. Technometrics 1993, 35 (1), 28-
31.
97. Draper, N. R.; Lin, D. K., Small response-surface designs. Technometrics 1990,
32 (2), 187-194.
98. Allen, T. T.; Yu, L., Low-cost response surface methods from simulation
optimization. Quality and Reliability Engineering International 2002, 18 (1), 5-17.
99. Allen, T. T.; Yu, L.; Schmitz, J., An experimental design criterion for minimizing
meta‐model prediction errors applied to die casting process design. Journal of the Royal
Statistical Society: Series C (Applied Statistics) 2003, 52 (1), 103-117.
100. Echard, B.; Gayton, N.; Lemaire, M., AK-MCS: an active learning reliability
method combining Kriging and Monte Carlo simulation. Structural Safety 2011, 33 (2),
145-154.
101.Taguchi, G., Taguchi methods: design of experiments. American Supplier Institute.
Inc., MI 1993.
102. Fang, K.-T.; Lin, D. K.; Winker, P.; Zhang, Y., Uniform design: theory and
application. Technometrics 2000, 42 (3), 237-248.
103. McKay, M. D.; Beckman, R. J.; Conover, W. J., A comparison of three methods
for selecting values of input variables in the analysis of output from a computer code.
Technometrics 2000, 42 (1), 55-61.
104. Ye, K. Q., Orthogonal column Latin hypercubes and their application in computer
experiments. Journal of the American Statistical Association 1998, 93 (444), 1430-
1439.
105. Diwekar, U. M.; Kalagnanam, J. R., Efficient sampling technique for optimization
under uncertainty. AIChE Journal 1997, 43 (2), 440-447.
106. Morris, M. D.; Mitchell, T. J., Exploratory designs for computational experiments.
Journal of statistical planning and inference 1995, 43 (3), 381-402.
107. Cioppa, T. M.; Lucas, T. W., Efficient nearly orthogonal and space-filling Latin
hypercubes. Technometrics 2012.
108. Dennis, J.; Torczon, T. In Approximation model management for optimization,
Proceedings of the Sixth AIAA/NASA/USAF Multidisciplinary Analysis &
Optimization Symposium, 1996; pp 4-6.
109. Welch, W. J.; Buck, R. J.; Sacks, J.; Wynn, H. P.; Mitchell, T. J.; Morris, M. D.,
Screening, predicting, and computer experiments. Technometrics 1992, 34 (1), 15-25.
110. Balabanov, V. O.; Giunta, A. A.; Golovidov, O.; Grossman, B.; Mason, W. H.;
Watson, L. T.; Haftka, R. T., Reasonable design space approach to response surface
approximation. Journal of Aircraft 1999, 36 (1), 308-315.
111. Alexandrov, N. M.; Dennis Jr, J. E.; Lewis, R. M.; Torczon, V., A trust-region
framework for managing the use of approximation models in optimization. Structural
optimization 1998, 15 (1), 16-23.
112. Wujek, B.; Renaud, J., New adaptive move-limit management strategy for
approximate optimization, part 1. AIAA journal 1998, 36 (10), 1911-1921.
113. Wujek, B.; Renaud, J., New adaptive move-limit management strategy for
approximate optimization, part 2. AIAA journal 1998, 36 (10), 1922-1934.
114. Wang, G. G., Adaptive response surface method using inherited latin hypercube
design points. Transactions-American Society of Mechanical Engineers Journal of
Mechanical Design 2003, 125 (2), 210-220.
115. Wang, G. G.; Simpson, T., Fuzzy clustering based hierarchical metamodeling for
design space reduction and optimization. Engineering Optimization 2004, 36 (3), 313-
335.
116. Chen, J.; Wong, D. S. H.; Jang, S. S.; Yang, S. L., Product and process
development using artificial neural‐network model and information analysis. AIChE
journal 1998, 44 (4), 876-887.
117. Tang, Q.; Lau, Y. B.; Hu, S.; Yan, W.; Yang, Y.; Chen, T., Response surface
methodology using Gaussian processes: towards optimizing the trans-stilbene
epoxidation over Co 2+–NaX catalysts. Chemical Engineering Journal 2010, 156 (2),
423-431.
118. Chi, G.; Hu, S.; Yang, Y.; Chen, T., Response surface methodology with prediction
uncertainty: A multi-objective optimisation approach. Chemical engineering research
and design 2012, 90 (9), 1235-1244.
119. Jones, D. R.; Schonlau, M.; Welch, W. J., Efficient global optimization of
expensive black-box functions. Journal of Global optimization 1998, 13 (4), 455-492.
120. Lu, J.; Gao, F., Process modeling based on process similarity. Industrial &
Engineering Chemistry Research 2008, 47 (6), 1967-1974.
121. Lu, J.; Gao, F., Model migration with inclusive similarity for development of a
new process model. Industrial & Engineering Chemistry Research 2008, 47 (23), 9508-
9516.
122. Lu, J.; Yao, Y.; Gao, F., Model migration for development of a new process model.
Industrial & Engineering Chemistry Research 2008, 48 (21), 9603-9610.
123. Lu, J.; Yao, K.; Gao, F., Process similarity and developing new process models
through migration. AIChE journal 2009, 55 (9), 2318-2328.
124. Feudale, R. N.; Woody, N. A.; Tan, H.; Myles, A. J.; Brown, S. D.; Ferré, J.,
Transfer of multivariate calibration models: a review. Chemometrics and Intelligent
Laboratory Systems 2002, 64 (2), 181-192.
125. Yan, W.; Hu, S.; Yang, Y.; Gao, F.; Chen, T., Bayesian migration of Gaussian
process regression for rapid process modeling and optimization. Chemical engineering
journal 2011, 166 (3), 1095-1103.
126. Luo, L.; Yao, Y.; Gao, F., Bayesian improved model migration methodology for
fast process modeling by incorporating prior information. Chemical Engineering
Science 2015, 134, 23-35.
127. Luo, L.; Yao, Y.; Gao, F., Iterative improvement of parameter estimation for model
migration by means of sequential experiments. Computers & Chemical Engineering
2015, 73, 128-140.
128. Luo, L.; Yao, Y.; Gao, F., Cost-effective process modeling and optimization
methodology assisted by robust migration techniques. Industrial & Engineering
Chemistry Research 2015, 54 (21), 5736-5748.
129. Shen, W. J.; Davis, T.; Lin, D. K. J.; Nachtsheim, C. J., Dimensional Analysis and
Its Applications in Statistics. J. Qual. Technol. 2014, 46 (3), 185-198.
130. Kennedy, M. C.; O'Hagan, A., Predicting the output from a complex computer
code when fast approximations are available. Biometrika 2000, 87 (1), 1-13.
131. Cumming, J. A.; Goldstein, M., Bayes linear uncertainty analysis for oil reservoirs
based on multiscale computer experiments. O’Hagan, West, AM (eds.) The Oxford
Handbook of Applied Bayesian Analysis 2009, 241-270.
132. Han, Z.-H.; Zimmermann, R.; Görtz, S., A new cokriging method for variable-
fidelity surrogate modeling of aerodynamic data. AIAA Paper 2010, 1225, 2010.
133. Joshi, Y., Reduced order thermal models of multiscale microsystems. Journal of
Heat Transfer 2012, 134 (3), 031008.
134. Goh, J.; Bingham, D.; Holloway, J. P.; Grosskopf, M. J.; Kuranz, C. C.; Rutter, E.,
Prediction and computer model calibration using outputs from multifidelity simulators.
Technometrics 2013, 55 (4), 501-512.
135. Xiong, S.; Qian, P. Z.; Wu, C. J., Sequential design and analysis of high-accuracy
and low-accuracy computer codes. Technometrics 2013, 55 (1), 37-46.
136. Tuo, R.; Wu, C. J.; Yu, D., Surrogate modeling of computer experiments with
different mesh densities. Technometrics 2014, 56 (3), 372-380.
137. Yang, J.; Liu, M.-Q.; Lin, D. K., Construction of nested orthogonal Latin
hypercube designs. Statistica Sinica 2014, 24 (1).
138. He, X.; Tuo, R.; Wu, C. J., Optimization of multi-fidelity computer experiments
via the EQIE criterion. Technometrics 2016, (just-accepted), 1-34.
139. Chuang, Y.-C.; Chan, C.-H.; Chen, T.; Yao, Y.; Wong, D. S. H., Surrogate model
calibration for computational fluid dynamics (CFD) using Bayesian migration. In The
7th International Symposium on Design, Operation & Control of Chemical Processes
(PSE Asia 2016), Tokyo, Japan, 2016; p No. 78.
140. Palmer, K.; Realff, M., Optimization and validation of steady-state flowsheet
simulation metamodels. Chemical Engineering Research & Design 2002, 80 (A7), 773-
782.
141. Henao, C. A.; Maravelias, C. T., Surrogate‐based superstructure optimization
framework. AIChE Journal 2011, 57 (5), 1216-1232.
142. Wright, M. M.; Román‐Leshkov, Y.; Green, W. H., Investigating the techno‐
economic trade‐offs of hydrogen source using a response surface model of drop‐in
biofuel production via bio‐oil upgrading. Biofuels, Bioproducts and Biorefining 2012,
6 (5), 503-520.
143. Fahmi, I.; Cremaschi, S., Process synthesis of biodiesel production plant using
artificial neural networks as the surrogate models. Computers & Chemical Engineering
2012, 46, 105-123.
144. Ochoa-Estopier, L. M.; Jobson, M.; Smith, R., The use of reduced models for
design and optimisation of heat-integrated crude oil distillation systems. Energy 2014,
75, 5-13.
145. Chu, J.-Z.; Shieh, S.-S.; Jang, S.-S.; Chien, C.-I.; Wan, H.-P.; Ko, H.-H.,
Constrained optimization of combustion in a simulated coal-fired boiler using artificial
neural network model and information analysis. Fuel 2003, 82 (6), 693-703.
146. Kusiak, A.; Song, Z., Combustion efficiency optimization and virtual testing: A
data-mining approach. Industrial Informatics, IEEE Transactions on 2006, 2 (3), 176-
184.
147. Zheng, L.; Zhou, H.; Wang, C.; Cen, K., Combining support vector regression and
ant colony optimization to reduce NOx emissions in coal-fired utility boilers. Energy
& Fuels 2008, 22 (2), 1034-1040.
148. Li, S.; Feng, L.; Benner, P.; Seidel-Morgenstern, A., Using surrogate models for
efficient optimization of simulated moving bed chromatography. Computers &
Chemical Engineering 2014, 67, 121-132.
149. Beck, J.; Friedrich, D.; Brandani, S.; Fraga, E. S., Multi-objective optimisation
using surrogate models for the design of VPSA systems. Computers & Chemical
Engineering 2015, 82, 318-329.
150. Gutiérrez-Antonio, C.; Briones-Ramírez, A., Multiobjective Stochastic
Optimization of Dividing-wall Distillation Columns Using a Surrogate Model Based
on Neural Networks. Chemical and Biochemical Engineering Quarterly 2016, 29 (4),
491-504.
151. Nuchitprasittichai, A.; Cremaschi, S., An algorithm to determine sample sizes for
optimization with artificial neural networks. AIChE Journal 2013, 59 (3), 805-812.
152. Wiltowski, T.; Piotrowski, K.; Lorethova, H.; Stonawski, L.; Mondal, K.; Lalvani,
S., Neural network approximation of iron oxide reduction process. Chemical
Engineering and Processing: Process Intensification 2005, 44 (7), 775-783.
153. Chang, J.-S.; Lee, Y.-P.; Wang, R.-C., Optimization of nanosized silver particle
synthesis via experimental design. Industrial & engineering chemistry research 2007,
46 (17), 5591-5599.
154. Chang, J.-S.; Lee, J.-T.; Chang, A.-C., Neural-network rate-function modeling of
submerged cultivation of Monascus anka. Biochemical Engineering Journal 2006, 32
(2), 119-126.
155. Chen, J.; Jang, S.-S.; Wong, D. S. H.; Ma, C.-C. M.; Lin, J.-M., Optimal design of
filament winding using neural network experimental design scheme. Journal of
Composite Materials 1999, 33 (24), 2281-2300.
156. Chen, J.; Chu, P. P.-T.; Wong, D. S. H.; Jang, S.-S., Optimal design using neural
network and information analysis in plasma etching. Journal of Vacuum Science &
Technology B 1999, 17 (1), 145-153.
157. Liau, L.-K.; Huang, C.-J.; Chen, C.-C.; Huang, C.-S.; Chen, C.-T.; Lin, S.-C.; Kuo,
L.-C., Process modeling and optimization of PECVD silicon nitride coated on silicon
solar cell using neural networks. Solar energy materials and solar cells 2002, 71 (2),
169-179.
158. Chuang, Y.-C.; Chen, C.-T., Mathematical modeling and optimal design of an
MOCVD reactor for GaAs film growth. Journal of the Taiwan Institute of Chemical
Engineers 2014, 45 (1), 254-267.
159. Qian, P. Z.; Wu, H.; Wu, C. J., Gaussian process models for computer experiments
with qualitative and quantitative factors. Technometrics 2012.
160. Qian, P. Z.; Wu, C. J., Sliced space-filling designs. Biometrika 2009, 96 (4), 945-
956.
161. Zhang, G. P., Time series forecasting using a hybrid ARIMA and neural network
model. Neurocomputing 2003, 50, 159-175.
162. Chen, S., Nonlinear time series modelling and prediction using Gaussian RBF
networks with enhanced cllustering and RLS learning. Electronics letters 1995, 31 (2),
117-118.
163. Sapankevych, N. I.; Sankar, R., Time series prediction using support vector
machines: a survey. Computational Intelligence Magazine, IEEE 2009, 4 (2), 24-38.
164. Wang, J.; Hertzmann, A.; Blei, D. M. In Gaussian process dynamical models,
Advances in neural information processing systems, 2005; pp 1441-1448.
165. Joseph, B.; Hanratty, F. W., Predictive control of quality in a batch manufacturing
process using artificial neural network models. Industrial & engineering chemistry
research 1993, 32 (9), 1951-1961.
166. Desai, K.; Badhe, Y.; Tambe, S. S.; Kulkarni, B. D., Soft-sensor development for
fed-batch bioreactors using support vector regression. Biochemical Engineering
Journal 2006, 27 (3), 225-239.
167. Gonzaga, J.; Meleiro, L.; Kiang, C.; Maciel Filho, R., ANN-based soft-sensor for
real-time process monitoring and control of an industrial polymerization process.
Computers & Chemical Engineering 2009, 33 (1), 43-49.
168. Liu, G.; Zhou, D.; Xu, H.; Mei, C., Model optimization of SVM for a fermentation
soft sensor. Expert Systems with Applications 2010, 37 (4), 2708-2713.
169. Ge, Z.; Chen, T.; Song, Z., Quality prediction for polypropylene production
process based on CLGPR model. Control Engineering Practice 2011, 19 (5), 423-432.
170. Yu, J., Online quality prediction of nonlinear and non-Gaussian chemical
processes with shifting dynamics using finite mixture model based Gaussian process
regression approach. Chemical engineering science 2012, 82, 22-30.
171. Nahas, E.; Henson, M.; Seborg, D., Nonlinear internal model control strategy for
neural network models. Computers & Chemical Engineering 1992, 16 (12), 1039-1057.
172. Willis, M. J.; Montague, G. A.; Di Massimo, C.; Tham, M. T.; Morris, A.,
Artificial neural networks in process estimation and control. Automatica 1992, 28 (6),
1181-1187.
173. Kocijan, J.; Murray-Smith, R.; Rasmussen, C. E.; Girard, A. In Gaussian process
model based predictive control, American Control Conference, 2004. Proceedings of
the 2004, IEEE: 2004; pp 2214-2219.
174. Tsen, A. Y. D.; Jang, S. S.; Wong, D. S. H.; Joseph, B., Predictive control of quality
in batch polymerization using hybrid ANN models. AIChE Journal 1996, 42 (2), 455-
465.
175. Ng, C.; Hussain, M., Hybrid neural network—prior knowledge model in
temperature control of a semi-batch polymerization process. Chemical Engineering and
Processing: Process Intensification 2004, 43 (4), 559-570.
176. Fullana, M.; Trabelsi, F.; Recasens, F., Use of neural net computing for statistical
and kinetic modelling and simulation of supercritical fluid extractors. Chemical
engineering science 1999, 54 (24), 5845-5862.
177. Cho, Y.; Joseph, B., Reduced-order steady-state and dynamic models for
separation processes. Part I. Development of the model reduction procedure. AIChE
journal 1983, 29 (2), 261-269.
178. Dochain, D.; Babary, J.-P.; Tali-Maamar, N., Modelling and adaptive control of
nonlinear distributed parameter bioreactors via orthogonal collocation. Automatica
1992, 28 (5), 873-883.
179. Huang, R.; Zavala, V. M.; Biegler, L. T., Advanced step nonlinear model predictive
control for air separation units. Journal of Process Control 2009, 19 (4), 678-685.
180. Chen, Z.; Henson, M. A.; Belanger, P.; Megan, L., Nonlinear model predictive
control of high purity distillation columns for cryogenic air separation. Control Systems
Technology, IEEE Transactions on 2010, 18 (4), 811-821.
181. Lee, J. M.; Lee, J. H., Approximate dynamic programming-based approaches for
input–output data-driven control of nonlinear processes. Automatica 2005, 41 (7),
1281-1288.
182. Lee, J. M.; Kaisare, N. S.; Lee, J. H., Choice of approximator and design of penalty
function for an approximate dynamic programming based control approach. Journal of
process control 2006, 16 (2), 135-156.
183. O'Hagan, A.; Kingman, J., Curve fitting and optimal design for prediction. Journal
of the Royal Statistical Society. Series B (Methodological) 1978, 1-42.
184. Kennedy, M. C.; O'Hagan, A., Bayesian calibration of computer models. Journal
of the Royal Statistical Society: Series B (Statistical Methodology) 2001, 63 (3), 425-
464.
185. Wilkinson, R. D., Bayesian calibration of expensive multivariate computer
experiments. Large-Scale Inverse Problems and Quantification of Uncertainty 2010,
195-215.
186. Rougier, J., Efficient emulators for multivariate deterministic functions. Journal
of Computational and Graphical Statistics 2008, 17 (4), 827-843.
187. McFarland, J.; Mahadevan, S.; Romero, V.; Swiler, L., Calibration and uncertainty
analysis for computer simulations with multivariate output. AIAA journal 2008, 46 (5),
1253-1265.
188. Bhat, K. S.; Haran, M.; Goes, M., Computer model calibration with multivariate
spatial output: A case study. Frontiers of Statistical Decision Making and Bayesian
Analysis 2010, 168-184.
189. Paulo, R.; García-Donato, G.; Palomo, J., Calibration of computer models with
multivariate output. Computational Statistics & Data Analysis 2012, 56 (12), 3959-
3974.
190. Manfren, M.; Aste, N.; Moshksar, R., Calibration and uncertainty analysis for
computer models–a meta-model based approach for integrated building energy
simulation. Applied energy 2013, 103, 627-641.
191. Broad, D.; Dandy, G. C.; Maier, H. R., Water distribution system optimization
using metamodels. Journal of Water Resources Planning and Management 2005, 131
(3), 172-180.
192. Yazdi, J.; Neyshabouri, S. S., Adaptive surrogate modeling for optimization of
flood control detention dams. Environmental Modelling & Software 2014, 61, 106-120.
193. Broad, D. R.; Dandy, G. C.; Maier, H. R., A systematic approach to determining
metamodel scope for risk-based optimization and its application to water distribution
system design. Environmental Modelling & Software 2015, 69, 382-395.
194. Yu, C. J.; Chen, Y. Y.; Peng, S. C.; Wong, D. S. H.; Chuang, Y. J., Core network
identification using parametric sensitivity and multi-way principal component analysis
in NFkB signaling network. Journal of the Taiwan Institute of Chemical Engineers
2013, 44 (5), 724-733.
195. Sobol, I. M., Global sensitivity indices for nonlinear mathematical models and
their Monte Carlo estimates. Mathematics and computers in simulation 2001, 55 (1),
271-280.
196. Cukier, R.; Fortuin, C.; Shuler, K. E.; Petschek, A.; Schaibly, J., Study of the
sensitivity of coupled reaction systems to uncertainties in rate coefficients. I Theory.
The Journal of chemical physics 1973, 59 (8), 3873-3878.
197. McRae, G. J.; Tilden, J. W.; Seinfeld, J. H., Global sensitivity analysis—a
computational implementation of the Fourier amplitude sensitivity test (FAST).
Computers & Chemical Engineering 1982, 6 (1), 15-25.
198. Sobol, I. M., Theorems and examples on high dimensional model representation.
Reliability Engineering & System Safety 2003, 79 (2), 187-193.
199. Xiu, D.; Karniadakis, G. E., Modeling uncertainty in flow simulations via
generalized polynomial chaos. Journal of computational physics 2003, 187 (1), 137-
167.
200. Crestaux, T.; Le Maître, O.; Martinez, J.-M., Polynomial chaos expansion for
sensitivity analysis. Reliability Engineering & System Safety 2009, 94 (7), 1161-1172.
201. Saltelli, A.; Annoni, P.; Azzini, I.; Campolongo, F.; Ratto, M.; Tarantola, S.,
Variance based sensitivity analysis of model output. Design and estimator for the total
sensitivity index. Computer Physics Communications 2010, 181 (2), 259-270.
202. Janon, A.; Klein, T.; Lagnoux, A.; Nodet, M.; Prieur, C., Asymptotic normality
and efficiency of two Sobol index estimators. ESAIM: Probability and Statistics 2014,
18, 342-364.
203. Koda, M.; Mcrae, G. J.; Seinfeld, J. H., Automatic sensitivity analysis of kinetic
mechanisms. International Journal of Chemical Kinetics 1979, 11 (4), 427-444.
204. Saltelli, A.; Ratto, M.; Tarantola, S.; Campolongo, F., Sensitivity analysis for
chemical models. Chemical reviews 2005, 105 (7), 2811-2828.
205. Kiparissides, A.; Kucherenko, S.; Mantalaris, A.; Pistikopoulos, E., Global
sensitivity analysis challenges in biological systems modeling. Industrial &
Engineering Chemistry Research 2009, 48 (15), 7168-7180.
206. Lucay, F.; Cisternas, L.; Gálvez, E., Global sensitivity analysis for identifying
critical process design decisions. Chemical Engineering Research and Design 2015,
103, 74-83.
207. Carrero, E.; Queipo, N. V.; Pintos, S.; Zerpa, L. E., Global sensitivity analysis of
Alkali–Surfactant–Polymer enhanced oil recovery processes. Journal of Petroleum
Science and Engineering 2007, 58 (1), 30-42.