Optimization in Practice
Maxima and Minima
• We say that f(x, y) has a local maximum/minimum at (a, b) if the value of f at (a, b) is at least as high/low as the value of f at all nearby points in the domain.
• We say that f(x, y) has a global maximum/minimum at (a, b) if the value of f at (a, b) is at least as high/low as the value of f at all other points in the domain.
Domain
Example: suppose the domain is the open set (2,3) ∪ (4,5). Hence, we look for (critical) points where f′ = 0 and compute the function value at those points. Of course, not every critical point is an extremum, but that's okay; it's better to have extras than to miss one. Also, if the points fall outside (2,3) ∪ (4,5), we simply disregard them.
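This procedure can be sketched with a computer algebra system. The function below is a hypothetical example (not one from the slides), chosen so that its critical points land both inside and outside (2,3) ∪ (4,5):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Hypothetical example function: its critical points are 5/2, 7/2, 9/2,
# which fall both inside and outside the domain (2,3) U (4,5).
f = (x - sp.Rational(5, 2))**2 * (x - sp.Rational(9, 2))**2

# Critical points: solve f'(x) = 0.
critical = sp.solve(sp.diff(f, x), x)

# Disregard any point that falls outside (2,3) U (4,5): here 7/2 = 3.5.
in_domain = [p for p in critical if 2 < float(p) < 3 or 4 < float(p) < 5]

print(critical, in_domain)
```

Note that 7/2 is a genuine critical point of f, but since it lies outside the domain it is simply dropped from the candidate list.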
Extrema on the Domain
The domain of f(x, y) is often a two-dimensional region, and if the region has an edge (boundary), that edge is usually defined by an equation that represents a curve.
Constrained Critical Points and Lagrangian
• Geometric Intuition: Let's assume the constraint equation is g(x, y) = 0. The curve representing this equation is a level set of g(x, y). At extremum points, the level set of our target function f(x, y) is tangent to the level set g(x, y) = 0. Therefore, ∇f = λ∇g. Consequently, seeking points where ∇f = λ∇g will guide us to the local extrema on the constraint curve; these points are called constrained critical points.
• Algebraic Derivation (Lagrange Function): Note that g(x, y) = 0. Therefore, optimizing the Lagrange function L(x, y, λ) = f(x, y) + λg(x, y) is equivalent to optimizing f(x, y). Seeking critical points of L(x, y, λ) leads to ∇f = λ∇g (absorbing the sign into λ) and g(x, y) = 0, which are the same equations used to find the constrained critical points for f(x, y) subject to g(x, y) = 0.
• Note that the Lagrangian turns a constrained optimization problem in two variables (x, y) into an unconstrained problem in three variables (x, y, λ).
• Check whether the constraint curve has ends. You may still need to examine the end or limit behavior of 𝑓(𝑥, 𝑦).
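The Lagrangian recipe above can be sketched symbolically. The objective and constraint here (f = x + y on the unit circle) are illustrative choices, not taken from the worksheet:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Illustrative problem: optimize f = x + y subject to g = x^2 + y^2 - 1 = 0.
f = x + y
g = x**2 + y**2 - 1

# Lagrange function L(x, y, lambda) = f + lambda*g; its critical points
# satisfy grad f = -lambda * grad g together with g = 0 (the sign is a
# convention that can be absorbed into lambda).
L = f + lam * g
eqs = [sp.diff(L, v) for v in (x, y, lam)]

solutions = sp.solve(eqs, [x, y, lam], dict=True)
for s in solutions:
    print(s[x], s[y], sp.simplify(f.subs(s)))
```

The two constrained critical points are (±√2/2, ±√2/2), giving the maximum value √2 and the minimum value −√2 on the circle.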
Worksheet #1-(a)
• GeoGebra:
https://www.geogebra.org/m/f2bzkjqz
Worksheet #1-part (b) and part (c)
• Part (b)
• Part (c)
Worksheet #2
• GeoGebra:
https://www.geogebra.org/m/myug2afc
Methodology
● Gathered data on confirmed COVID-19 deaths by day from different states and provinces around the world (e.g., Hubei)
● So, they fit this model to the U.S. data (finding values of p, α, β)
[Figure: actual U.S. curve]
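The slides do not give the model's exact formula, so as an illustration the sketch below assumes a bell-shaped daily-death curve with hypothetical parameters p (peak height), α (peak day), and β (spread), and fits them to synthetic stand-in data with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bell-shaped model; p, alpha, beta are the parameters to fit.
def model(t, p, alpha, beta):
    return p * np.exp(-((t - alpha) / beta) ** 2)

# Synthetic "observed" data standing in for the real U.S. time series:
# generated from known parameters plus noise.
rng = np.random.default_rng(0)
t = np.arange(120.0)
observed = model(t, 2000.0, 60.0, 15.0) + rng.normal(0.0, 20.0, t.size)

# Fit p, alpha, beta to the data, as the modelers did for the U.S. curve.
(p_hat, alpha_hat, beta_hat), _ = curve_fit(model, t, observed,
                                            p0=[1000.0, 50.0, 10.0])
print(p_hat, alpha_hat, beta_hat)  # close to 2000, 60, 15
```

Under the hood, `curve_fit` solves exactly the kind of nonlinear least-squares optimization problem this lecture is about.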
Newton’s Method (Newton–Raphson method)
Finding critical points involves solving a set of
equations. However, if the equations are
complicated, solving them precisely can be very
difficult. In such cases, people often resort to
using Newton's method to approximate the
solution. In this illustration, we demonstrate
Newton's method using a single-variable
function, but it's important to note that this
method can be extended to a set of
multivariable equations. Also, please be aware
that the function denoted as f(x) in this context
is an arbitrary function and not our target
function.
x_{n+1} = x_n − f(x_n) / f′(x_n)
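The iteration above can be sketched in a few lines of code; the example function f(x) = x² − 2 is an arbitrary choice, in keeping with the note that f here is not our target function:

```python
# Minimal single-variable Newton's method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # the Newton update f(x_n)/f'(x_n)
        x -= step
        if abs(step) < tol:       # stop once the update is negligible
            break
    return x

# Example: approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ≈ 1.4142135623730951
```

For systems of multivariable equations, the same idea applies with the derivative replaced by the Jacobian matrix and the division by solving a linear system.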