Advanced Process Operations
1. Introduction to Course
1.1. Dynamic Optimization Formulations
1.1.1. Nomenclature
1.1.1.1. t: time; tf: final time
1.1.1.2. x: differential state variables; u: control variables
1.1.1.3. y: algebraic state variables; v: time-invariant parameters
1.2. Batch Process Optimization
1.2.1. Determine the batch duration tf and the feed rate profile F(t) to maximize the
final amount of C, while ensuring the final purity of C (w.r.t. B, C and D) over
70%, and maintaining the cooling jacket temperature Tj(t) above 15°C at all
times
1.2.2. Model-Based Optimization Approach
1.2.2.1. Construct a mathematical model relating the control actions (flow rate)
with the states/outputs variables (concentrations, temperature)
1.2.2.2. Determine the optimal control actions based on this model
1.2.3. Trajectory
1.3.1. Determine the feed rate profile F(t) to minimize the transition time between
reference concentrations C1_ref and C2_ref, while keeping the cooling jacket
temperature Tj(t) above 15°C at all times
1.3.2. Trajectory
1.3.2.1. Minimize the gap between the response and the desired set-point over
the prediction horizon, while penalizing control moves, and repeat on a
rolling horizon [tk ,tk + T]
1.4. Dynamic Model Building
1.4.1. Estimate (a subset of) the unknown parameters for the candidate model to
best fit the experimental data
1.4.2. Determine the best experimental conditions to maximize parameter precision
during parameter estimation
1.4.3. Both tasks are optimization problems, differing in their decision variables and objectives
2. Dynamic Optimization Formulations
2.1. Dynamic Optimization Problem Formulations
2.1.1. Expected to determine the time-varying and/or time-invariant inputs of a
dynamic system in order to minimize (or maximize) some performance
criterion while satisfying given physical and operational constraints
2.1.2. Main Steps of Problem Formulations
2.1.2.1. Specification/modelling of the controlled systems
2.1.2.1.1. Manipulated variables: controls u(t) ∈ R^nu and decisions v ∈ R^nv
2.1.2.1.2. Response and dynamical model: states x(t) ∈ R^nx and y(t) ∈ R^ny
2.2.3. Variables
2.2.3.1. Differential: x ∈ R^nx
2.2.7.2.2. At x(0)=0 the problem is ill-posed, as multiple solutions are
possible; perturbing the initial condition to a small positive
number, e.g. x(0)=0.0001, restores uniqueness
2.2.7.3. The height of liquid in a gravity drained tank is described by:
2.2.7.3.1. ḣ(t) = −C √h(t)
2.2.7.4. Given that the tank is empty at T, i.e. h(T) = 0, determine the time t < T
at which the level was h(t)= H > 0
2.2.7.4.1. Hint: Consider reversing time as s = T – t
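The hint can be checked numerically (a sketch with illustrative values C = 0.5, T = 10, H = 1, none of which come from the notes): the analytic solution h(t) = (C/2)²(T − t)² gives the crossing time t* = T − 2√H/C directly, and integrating the original ODE forward from (t*, H) confirms the tank is empty at T.

```python
import numpy as np
from scipy.integrate import solve_ivp

C, T, H = 0.5, 10.0, 1.0           # illustrative values (assumptions)

# Reversing time, s = T - t, turns draining into filling; the analytic
# solution h(t) = (C/2)^2 (T - t)^2 gives the level-crossing time directly:
t_star = T - 2.0 * np.sqrt(H) / C  # time at which h(t*) = H

# Verify by integrating the original ODE h' = -C*sqrt(h) forward from (t_star, H)
sol = solve_ivp(lambda t, h: [-C * np.sqrt(max(h[0], 0.0))],
                (t_star, T), [H], rtol=1e-10, atol=1e-12)
h_final = sol.y[0, -1]             # should be ~0: tank empty at T
```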
2.2.8. A Simple ODE with Finite Escape Time
2.2.8.1. Workshop: Consider the following ODE: ẋ(t) = x(t)², with x(0) = 1
2.2.8.2. Does a solution exist locally?
2.2.8.3. Find an expression of the ODE solutions using the method of separation
of variables, and determine the escape time
∫_{x(0)}^{x(t)} dx / x² = ∫₀^t dt
[ −1/x ]_{x(0)}^{x(t)} = t
1/x(0) − 1/x(t) = t
x(t) = x(0) / (1 − x(0) t)
With x(0) = 1 the solution is x(t) = 1/(1 − t), which escapes to infinity at t = 1 (the escape time)
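A numerical check of the escape time (a sketch): integrating up to just before t = 1 tracks the analytic solution x(t) = 1/(1 − t), which blows up at t = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = x^2, x(0) = 1 has solution x(t) = 1/(1 - t): finite escape time t = 1
t_end = 0.999
sol = solve_ivp(lambda t, x: x**2, (0.0, t_end), [1.0], rtol=1e-10, atol=1e-10)
x_num = sol.y[0, -1]
x_exact = 1.0 / (1.0 - t_end)      # grows without bound as t_end -> 1
```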
2.3. Systems of Differential-Algebraic Equations (DAEs)
2.3.1. f(x(t), ẋ(t), y(t)) = 0
2.3.2. g(x(t), y(t)) = 0
2.3.3. Initial conditions for DAEs:
2.3.3.1. Conditions: nx + ny
2.3.3.1.1. f(x(0), ẋ(0), y(0)) = 0
2.3.3.1.2. g(x(0), y(0)) = 0
2.3.3.2. Variables: 2nx + ny
2.3.3.2.1. x(0), ẋ(0), y(0)
2.3.3.3. Consistent initialization: specify at most nx initial conditions
2.3.3.4. Test: Can we specify arbitrary values for x(0) and determine the
corresponding values of ẋ(0) and y(0)?
2.3.3.4.1. If yes, specify exactly nx initial conditions
2.3.3.4.2. If no, specify fewer than nx initial conditions
2.4. Test Yourself
2.4.1. x1'' = −λ x1
2.4.2. x2'' = −λ x2 − g
2.4.3. x1² + x2² = 1
2.4.4. Differential states
2.4.4.1. dx1 = x1'
2.4.4.2. dx2 = x2'
2.4.4.3. dx1' = −λ x1
2.4.4.4. dx2' = −λ x2 − g
2.4.5. Algebraic State
2.4.5.1. x1² + x2² = 1
2.4.5.1.1. Differentiate with respect to time
2.4.5.1.2. 2 x1 x1' + 2 x2 x2' = 0
2.4.5.1.3. x1 dx1 + x2 dx2 = 0
2.4.5.1.3.1. Differentiate with respect to time again
2.4.5.1.3.2. dx1² + x1 dx1' + dx2² + x2 dx2' = 0
2.4.5.1.3.3. dx1² + dx2² − λ x1² − λ x2² − x2 g = 0
2.4.5.1.3.4. Substitute the constraint x1² + x2² = 1, so that −λ x1² − λ x2² = −λ
2.4.5.1.3.5. dx1² + dx2² − x2 g − λ = 0
2.4.6. Count number of equations and variables
2.4.6.1. 7 equations
2.4.6.2. 9 variables
2.4.6.3. Can supply 2 initial conditions
2.4.6.3.1. Specify, e.g., x1 and dx1
2.4.6.3.1.1. Can use all relationships, including the hidden relationships
2.4.7. The DAE system has index 3: the algebraic constraint must be differentiated
three times in total to obtain an ODE for all variables (including λ), i.e. one
more time differentiation than performed above
2.4.8. Dummy derivative method
2.4.8.1. Need to introduce the same number of dummy variables as hidden
relations
2.4.8.2. z1 = dx1'
2.4.8.3. z2 = dx2'
2.4.8.4. Equations
2.4.8.4.1. dx1 = x1'
2.4.8.4.2. dx2 = x2'
2.4.8.4.3. z1 = −λ x1
2.4.8.4.4. z2 = −λ x2 − g
2.4.8.4.5. x1² + x2² = 1
2.4.8.4.6. x1 dx1 + x2 dx2 = 0
2.4.8.4.7. dx1² + dx2² − x2 g − λ = 0
2.4.8.5. Now there is an index 1 DAE system
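The index-reduced pendulum can be simulated as a plain ODE by eliminating λ through the hidden relation λ = dx1² + dx2² − x2 g (a sketch; g = 9.81 and the 45°-at-rest initial condition are illustrative assumptions, not from the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def pendulum(t, s):
    """Index-reduced pendulum, states (x1, x2, dx1, dx2); the hidden
    relation gives lambda algebraically at each time."""
    x1, x2, dx1, dx2 = s
    lam = dx1**2 + dx2**2 - x2 * g        # from dx1^2 + dx2^2 - x2*g - lam = 0
    return [dx1, dx2, -lam * x1, -lam * x2 - g]

# consistent initialization: released at rest, 45 degrees from vertical
s0 = [np.sin(np.pi / 4), -np.cos(np.pi / 4), 0.0, 0.0]
sol = solve_ivp(pendulum, (0.0, 1.0), s0, rtol=1e-9, atol=1e-9)

# drift in the original algebraic constraint x1^2 + x2^2 = 1
drift = abs(sol.y[0, -1]**2 + sol.y[1, -1]**2 - 1.0)
```

Index reduction preserves the constraint only up to integration error, which is why the drift is worth monitoring (or stabilizing) in longer simulations.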
2.5. Case Study of a Simple Car Control Problem
2.5.1. The motion of a car is described by the following ODEs
2.5.1.1. ṗ(t) = v(t)
2.5.1.2. v̇(t) = u(t)
2.5.2. Where states p(t) and v(t) denote position and speed and the control u(t) is the
force due to acceleration (≥0) or deceleration (≤0)
2.5.3. The optimization problem consists of driving the car from the position p0,
where it is initially parked, to its final destination pf, where it is also parked,
using the least amount of fuel. The maximum speed limit is vU, and the car's
maximal deceleration and acceleration are, respectively, uL and uU. For
simplicity, assume that the car's instantaneous fuel consumption is
proportional to its squared velocity.
2.5.4. Formulate problem
2.5.4.1. Minimize
2.5.4.1.1. ∫₀^tf α v(t)² dt
2.5.4.1.2. over u(t), t ∈ [0, tf]
2.5.4.2. Subject to
2.5.4.2.1. ṗ(t) = v(t)
2.5.4.2.2. v̇(t) = u(t)
2.5.4.2.3. p(0) = p0
2.5.4.2.4. v(0) = 0
2.5.4.2.5. p(tf) = pf
2.5.4.2.6. v(tf) = 0
2.5.4.2.7. v(t) ≤ vU, ∀t ∈ [0, tf]
2.5.4.2.8. uL ≤ u(t) ≤ uU, ∀t ∈ [0, tf]
2.5.5. What if you are minimizing travel time instead?
2.5.5.1. Minimize
2.5.5.1.1. tf = ∫₀^tf 1 dt
2.5.5.1.2. This turns the objective from an integral (Lagrange) term into a point (Mayer) term
2.6. Performance Criteria in Dynamic Optimization
2.6.1. Functional used for quantitative evaluation of the system’s performance
2.6.2. Main types of Criteria
2.6.2.1. Lagrange form
2.6.2.1.1. J(u, v) = ∫₀^tf l(x(t), y(t), u(t), v) dt
2.6.2.1.2. Minimize average power consumption
2.6.2.2. Mayer form
2.6.2.2.1. J(u, v) = Σ_{k=1…N} φ(x(tk), y(tk), v)
2.6.2.2.2. e.g. maximize final batch product purity
2.6.2.2.3. e.g. minimize prediction error compared with time-series
measurements
2.6.2.3. Point Constraints
2.6.2.3.1. End point: ψ(x(tf), v) ≤ ψU
2.6.2.3.2. Interior: ψ(x(tk), v) ≤ ψU, tk ∈ [0, tf)
2.6.2.3.3. e.g. batch process must reach at least 30% conversion after 2 hours:
x(7200) ≥ 0.3
2.6.2.4. Path Constraints
2.6.2.4.1. κ(x(t), y(t), u(t), v) ≤ κU
2.6.2.4.2. e.g. product purity must never fall below 97%: x(t) ≥ 0.97, ∀t ∈ [0, tf]
2.6.2.4.3. e.g. temperature must always stay between 323 K and 500 K:
323 ≤ T(t) ≤ 500, ∀t ∈ [0, tf]
2.6.3. General Dynamic Optimization Formulation
2.7.1.2.2. ṗ(t) = v(t), v̇(t) = u(t), ∀t ∈ [0, tf]
2.7.2.3. Append a quadrature state x_{nx+1} with ẋ_{nx+1}(t) = l(x(t), u(t)) and
x_{nx+1}(0) = 0, so that ∫₀^tf ẋ_{nx+1}(t) dt = ∫₀^tf l(x(t), u(t)) dt
2.7.2.4. The Lagrange cost then becomes the Mayer term x_{nx+1}(tf) − x_{nx+1}(0)
3.3. s.t. ẋ(t) = u(t) − x(t)
x(0) = 1, x(1) = 0
3.4. The parameterized control trajectories are given by
3.4.1. U_k⁰(t, ωk) := ωk⁰, ∀t ∈ [(k−1)/ns, k/ns), k = 1, …, ns
3.4.2. i.e. piecewise-constant controls over ns equal stages on [0, 1]
3.5. Direct Sequential Approach: Single Shooting
3.5.1. Visual Depiction of the Approach
3.5.2. Compute the cost and constraint values using numerical integration
3.5.3. Compute the cost and constraint gradients using numerical sensitivity analysis
3.5.4. Gradient free methods
3.5.4.1. Good if you only have an objective function
3.5.4.2. Don’t like constraints or lots of decision variables
3.5.5. Gradient based methods
3.5.5.1. Need to supply gradients to the optimization model
3.5.6. Pros and Cons
3.5.6.1. Relatively small-scale NLP problems in the variables p = (ω1, …, ω_ns, v),
that is nu(N+1)ns + nv variables (N being the polynomial order of each
control stage)
3.5.6.1.1. Stage times can be added to the decision vector p too
3.5.6.2. Accuracy of the state variables enforced via the error-control
mechanism of the ODE/DAE solver
3.5.6.3. Computing the gradients using numerical sensitivity analysis is often
the dominant cost
3.5.6.4. Feasible path method
3.5.6.4.1. The ODEs/DAEs are satisfied at each NLP iteration
3.5.6.4.2. This is computationally demanding
3.5.6.4.3. Only mildly unstable systems can be handled this way
3.5.6.5. Strategy is one implemented in gPROMS
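The single-shooting loop can be sketched for the example in 3.3 (assumptions not in the notes: the objective is taken as ∫₀¹ u(t)² dt, ns = 5 piecewise-constant stages, and SciPy's SLSQP enforces the terminal condition x(1) = 0):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize, NonlinearConstraint

ns = 5                                  # number of control stages on [0, 1]
t_grid = np.linspace(0.0, 1.0, ns + 1)

def simulate(omega):
    """Integrate x' = u - x, x(0) = 1, with piecewise-constant u = omega_k.
    Returns the terminal state x(1) and the cost integral of u^2."""
    x, cost = 1.0, 0.0
    for k in range(ns):
        u = omega[k]
        sol = solve_ivp(lambda t, s: [u - s[0], u**2],
                        (t_grid[k], t_grid[k + 1]), [x, cost], rtol=1e-8)
        x, cost = sol.y[0, -1], sol.y[1, -1]
    return x, cost

res = minimize(lambda w: simulate(w)[1],            # cost via integration
               x0=np.zeros(ns),
               constraints=NonlinearConstraint(     # x(1) = 0 via integration
                   lambda w: simulate(w)[0], 0.0, 0.0))
x_end, cost = simulate(res.x)
```

Every NLP iteration re-integrates the ODE (the feasible-path property noted above); gradients here come from finite differences, which a sensitivity-based implementation would replace.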
3.6. Direct Sequential Approach: Multiple Shooting
3.6.1. Discretization as a finite dimensional NLP through parameterization of the
control variables, with the ODEs/DAEs embedded in the NLP problem but with
state discontinuities allowed at stage times
3.6.2. Lifting of the optimization problem
3.9.3. Limitation: more parameters require more (sensitivity) integrations, so the
computational cost grows quickly
3.10. Procedure for Adjoint Sensitivity Analysis
3.10.1. State Numerical Integration 0→T
4.7.3. The variance and residual terms are pulling in different directions
4.8. Goodness of fit: Main Idea
4.10.1.1. Cov(x,y)
4.12. Obtaining Confidence Intervals from Covariance Matrices
4.12.1. Linearized 100(1-α)% Confidence Intervals
4.12.1.1. θ̂i ± t_{N−nθ}(1−α/2) √([Cθ]ii)
4.12.1.2. t_{N−nθ}(1−α/2) is the Student-t quantile; it is larger than the
Gaussian quantile z(1−α/2), giving a wider (more conservative)
interval
4.12.2. 100(1-α) t-value test
4.12.2.1.
4.12.2.1.1. If 0 lies within this confidence interval, the parameter estimate is
not statistically significant
4.12.3. Linearized 100(1-α)% Confidence Region
4.12.3.1. Large sample size / known measurement variance
4.12.3.1.1. {θ : (θ − θ̂)ᵀ Cθ⁻¹ (θ − θ̂) ≤ χ²_{nθ}(1−α)}
4.12.3.1.2. The ellipsoid grows as the confidence level 1−α increases
4.12.3.2. Small sample size / uncertain measurement variance
4.12.3.2.1. {θ : (θ − θ̂)ᵀ Cθ⁻¹ (θ − θ̂) ≤ nθ F_{nθ,N−nθ}(1−α)}
4.12.3.2.2. The F-based region is preferable here: it is more conservative,
accounting for the uncertainty in the variance estimate
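The linearized intervals can be computed in a few lines (a sketch on a toy linear model; the helper `linearized_ci` and the data are illustrative, not from the notes):

```python
import numpy as np
from scipy.stats import t as student_t

def linearized_ci(theta, J, residuals, alpha=0.05):
    """Linearized 100(1-alpha)% confidence intervals for least-squares
    estimates theta, given the prediction Jacobian J (N x n_theta)."""
    N, ntheta = J.shape
    sigma2 = residuals @ residuals / (N - ntheta)   # residual variance estimate
    C = sigma2 * np.linalg.inv(J.T @ J)             # covariance matrix C_theta
    half = student_t.ppf(1 - alpha / 2, N - ntheta) * np.sqrt(np.diag(C))
    return theta - half, theta + half

# toy model y = a*x + b (true a = 2, b = 1) with small measurement noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)
J = np.column_stack([x, np.ones_like(x)])
theta = np.linalg.lstsq(J, y, rcond=None)[0]
ci_lo, ci_hi = linearized_ci(theta, J, y - J @ theta)
```

A t-test as in 4.12.2 amounts to checking whether 0 falls inside [ci_lo[i], ci_hi[i]].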
4.13. Precision of Parameter Estimates: Practical Aspects
4.13.1. Failure to pass a t-test for one/several parameters suggests a flat optimum in
certain directions
4.13.1.1. Cause: (i) low sensitivity; (ii) large correlations
4.13.1.2. Remedy: (i) focus on a parameter subset, applying
estimability/sensitivity analysis; (ii) design experiments, applying
optimal experiment design
4.13.2. A model presenting a systematic bias (e.g. χ2-test not satisfied) or a highly
nonlinear model may invalidate the statistical analysis
4.13.2.1. Always be cautious with your conclusions
4.16. Model-based design of experiments (MBDoE)
4.16.1. Degrees of freedom: design conditions, initial conditions, run time
4.17.3. D-optimality
4.17.3.1. Minimize the volume of the confidence ellipsoid
4.17.3.2. min_{u,v} det[Cθ]^(1/nθ)
4.17.4. A-optimality
4.17.4.1. Minimize the average variance of the individual parameter estimates
4.17.4.2. min_{u,v} (1/nθ) trace[Cθ]
4.17.5. E-optimality
4.17.5.1. Minimize the longest axis of the confidence ellipsoid
4.17.5.2. min_{u,v} λmax[Cθ]
4.17.6. Modified-E optimality
4.17.6.1. Minimizes the ratio of longest to shortest axis of confidence ellipsoids
4.17.7. Differences in the resulting designs can be significant
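The criteria differ only in how they scalarize the covariance matrix Cθ, which a few lines make concrete (a sketch; `doe_criteria` is an illustrative helper, not course code):

```python
import numpy as np

def doe_criteria(C):
    """D-, A-, E- and modified-E scores for a parameter covariance matrix C."""
    ntheta = C.shape[0]
    eig = np.linalg.eigvalsh(C)                      # squared ellipsoid axes
    return {
        "D": np.linalg.det(C) ** (1.0 / ntheta),     # ellipsoid volume measure
        "A": np.trace(C) / ntheta,                   # average variance
        "E": eig.max(),                              # longest axis
        "modified-E": eig.max() / eig.min(),         # axis-length ratio
    }

# e.g. an elongated ellipsoid: variance 4 along one axis, 1 along the other
crit = doe_criteria(np.diag([4.0, 1.0]))
```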
4.18. Experiment Design Criteria: Covariance Matrix
4.20.2. Average sensitivity coefficients over factor domain:
4.21.1.2. Impose extra conditions, e.g. based on orthogonality, for f0, fi, f_{i1,i2}, …
4.21.2. The terms in this decomposition can be calculated recursively
4.21.6.1. Si = Var_{θi}(E[Y | θi]) / Var(Y)
4.21.6.2. Effect of varying θi alone, averaged over variations in other parameters
4.21.7. Total-Order Sensitivity Indices:
4.21.7.1. ST,i = E[Var(Y | θ∼i)] / Var(Y)
4.21.7.2. Effect of varying θi, including all variance caused by its interactions, of
any order, with any other parameters:
ST,i = Si + (S1,i + ··· + Si−1,i + Si,i+1 + ··· + Si,d) + ··· + S1,2,...,d
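First-order indices can be estimated directly by Monte Carlo without building the decomposition (a sketch of a pick-freeze estimator; the additive test function f = θ1 + 2θ2 has exact values S1 = 0.2, S2 = 0.8 and no interaction terms):

```python
import numpy as np

def first_order_sobol(f, d, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices
    S_i of f over the unit hypercube [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # "freeze" factor i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# additive test function: Var = 1/12 + 4/12, so S1 = 1/5 and S2 = 4/5
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```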
4.22. Sampling Strategies in Multiple Dimensions
4.22.1. Monte-Carlo integration
4.22.2.1. Low-discrepancy sequences sit between random sampling and a
regular grid: each new point uses knowledge of the previous points to
give more uniform coverage than random sampling, without the rigid
regularity of a grid
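A small comparison of the two sampling strategies (a sketch; assumes SciPy's `scipy.stats.qmc` module): both estimate the integral of ∏xᵢ over [0, 1]⁴, whose exact value is 1/16.

```python
import numpy as np
from scipy.stats import qmc

d, n = 4, 1024
f = lambda pts: np.prod(pts, axis=1)       # exact integral over [0,1]^4: 1/16

mc_est = f(np.random.default_rng(1).random((n, d))).mean()   # plain Monte Carlo
qmc_est = f(qmc.Sobol(d, seed=1).random(n)).mean()           # low-discrepancy
```

For smooth integrands the low-discrepancy estimate typically converges much faster than the O(n^(-1/2)) Monte Carlo rate.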
4.23. How to use the first and total order sensitivity indices
5.4.1.2. This global condition does not require a distance (or a norm) on U: the
control u* is compared to every other feasible control
5.4.2. Locally Optimal Control (Minimize Case)
5.4.2.1. A feasible control u* is a local optimum if it minimizes the functional
over all feasible controls within a neighborhood of u*
5.5. Weak Neighborhoods in Optimal Control
5.8. Remarks about Euler-Lagrange Conditions
5.8.1. Complete set of necessary conditions
5.8.1.1. 2nx ODEs in the state variables x(t) and adjoint variables λ(t), with
initial/terminal conditions
5.8.1.2. nu algebraic equations in the control variables u(t)
5.8.2. But this requires solving a two-point boundary value problem
5.8.3. Same set of conditions for minimization and maximization problems (in the
absence of terminal conditions)
5.8.4. Second-order (Legendre-Clebsch) condition: Huu(t) must be positive
semi-definite along a minimizing trajectory
5.8.5. For autonomous problems (i.e. no explicit time dependence), the Hamiltonian
is invariant along an optimal trajectory: H(x*(t), u*(t), λ*(t)) = const
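The invariance can be verified in one line (a sketch, writing H = l + λᵀf for an autonomous problem and using the Euler-Lagrange conditions H_u = 0 and λ̇ = −H_xᵀ):

```latex
\frac{\mathrm{d}H}{\mathrm{d}t}
  = H_x\,\dot{x} + H_u\,\dot{u} + \dot{\lambda}^{\mathsf T} f
  = H_x f + 0 - H_x f = 0 .
```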
5.11.1. Handles control bounds in a very natural way: boils down to solving an NLP
problem at each time along [t0, tf]
5.11.2. Control bounds give rise to a solution structure comprised of interior and
boundary arcs
5.12. Applying the Pontryagin Maximum Principle
5.13. Singularity
5.15.2. Apply successive time differentiations
5.15.3. From previous example
5.16.2.2. Pure-state constraints