Constrained Optimization 2
University of Hohenheim
Chapter 6
Constrained Optimization II (Several Constraints)
Learning goals
Students
Can use the bordered Hessian matrix to determine whether the solution to a constrained optimization problem with two variables and one constraint is a maximum or a minimum
Can discuss conditions under which the Lagrange method works to solve a constrained optimization problem with two variables and one constraint
Can set up and solve the Lagrange function associated with a constrained optimization problem with n variables and m < n constraints, and use the bordered Hessian matrix to determine whether the solution is a maximum or a minimum
Lagrange Theorem
Further steps
Second order conditions
Constraint qualification
Generalization to n choice variables and m < n constraints
Given
\[
H_b = \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & F_{11} - \lambda H_{11} & F_{12} - \lambda H_{12} \\ -H_2 & F_{21} - \lambda H_{21} & F_{22} - \lambda H_{22} \end{pmatrix}
= \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & L_{11} & L_{12} \\ -H_2 & L_{12} & L_{22} \end{pmatrix},
\]
one finds
\[
\det(H_b) = H_1 H_2 L_{12} + H_2 H_1 L_{12} - H_2^2 L_{11} - L_{22} H_1^2
= -\left( L_{11} H_2^2 - 2 L_{12} H_1 H_2 + L_{22} H_1^2 \right).
\]
This term must be positive for a maximum.
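This expansion is easy to verify symbolically; a minimal sketch using sympy (the symbol names are our own, not from the slides):

```python
import sympy as sp

H1, H2, L11, L12, L22 = sp.symbols('H1 H2 L11 L12 L22')

# Bordered Hessian for the two-variable, one-constraint case
Hb = sp.Matrix([[0,   -H1, -H2],
                [-H1, L11, L12],
                [-H2, L12, L22]])

det = sp.expand(Hb.det())
# The formula above: -(L11*H2^2 - 2*L12*H1*H2 + L22*H1^2)
assert sp.simplify(det + L11*H2**2 - 2*L12*H1*H2 + L22*H1**2) == 0
print(det)  # 2*H1*H2*L12 - H1**2*L22 - H2**2*L11
```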
Put differently
Let C_H = {(x_1, x_2) : H(x_1, x_2) = c} be the constraint set.
Moreover, we assume that at the maximizer (x̄_1, x̄_2) the conditions H_i(x) ≠ 0 apply (the constraint qualification holds, see below).
Then, if F and H are C² functions, the constraint set can be considered as the graph of a C¹ function x_2 = ϕ(x_1) around (x̄_1, x̄_2) (e.g., the budget line), so that H(x_1, ϕ(x_1)) = c for all x_1 near x̄_1.
Differentiating yields (e.g., the slope of the budget line)
\[
\frac{\partial H(\cdot)}{\partial x_1} + \frac{\partial H(\cdot)}{\partial x_2}\,\varphi'(x_1) = 0
\quad\Longleftrightarrow\quad
\varphi'(x_1) = \frac{\partial x_2}{\partial x_1} = -\,\frac{\partial H(\cdot)/\partial x_1}{\partial H(\cdot)/\partial x_2}.
\]
Substituting ϕ into the objective and requiring F(x_1, ϕ(x_1)) to have a negative second derivative at x̄_1 yields exactly this sign condition: the determinant of the bordered Hessian matrix, det(H_b), must be positive for a maximum.
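As a concrete illustration of the slope formula (our own example, not from the slides), take a linear budget line H(x_1, x_2) = p_1 x_1 + p_2 x_2 = c; the formula reproduces the familiar slope −p_1/p_2:

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2', positive=True)
H = p1*x1 + p2*x2   # hypothetical linear budget line H(x1, x2) = c

# phi'(x1) = -H1/H2
slope = -sp.diff(H, x1) / sp.diff(H, x2)
print(slope)  # -p1/p2: the familiar slope of the budget line
```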
See below for the case with n > 2 variables and m > 1 constraints.
Example: maximize F(x, y) = ax + by subject to H(x, y) = x² + y² = 100. The Lagrangian is
\[
L(x, y, \lambda) = ax + by + \lambda \left( 100 - x^2 - y^2 \right),
\]
and the first-order conditions are
\[
\frac{\partial L}{\partial x} = a - 2\lambda x = 0, \tag{1}
\]
\[
\frac{\partial L}{\partial y} = b - 2\lambda y = 0, \tag{2}
\]
\[
\frac{\partial L}{\partial \lambda} = 100 - x^2 - y^2 = 0. \tag{3}
\]
Step 3: Solve for the unknowns. For example: isolating x = a/(2λ) and y = b/(2λ) from (1) and (2) and plugging the result into (3) yields 100 = (a² + b²)/(4λ²), and hence
\[
\bar{\lambda} = \frac{\sqrt{a^2 + b^2}}{20}.
\]
We have excluded the solution with a negative λ because it is only consistent with negative outputs x and y; see equations (1) and (2).
Step 4: Plugging this into x and y yields
\[
\bar{x} = \frac{10a}{\sqrt{a^2 + b^2}}, \qquad \bar{y} = \frac{10b}{\sqrt{a^2 + b^2}}.
\]
Insights
If a rises, x increases and y decreases.
If a and b increase by the same factor, x and y do not change.
Relative changes between a and b, however, do matter.
We will analyze this further in the comparative statics section.
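Both the solution and these insights can be checked symbolically; a minimal sketch with sympy (symbol names are our own, and a, b > 0 is assumed):

```python
import sympy as sp

a, b, x, y, lam = sp.symbols('a b x y lambda', positive=True)

# First-order conditions (1)-(3); positivity assumptions select the relevant root
focs = [a - 2*lam*x, b - 2*lam*y, 100 - x**2 - y**2]
sol = sp.solve(focs, [x, y, lam], dict=True)[0]
print(sol)  # x: 10a/sqrt(a^2+b^2), y: 10b/sqrt(a^2+b^2), lambda: sqrt(a^2+b^2)/20

# If a rises, x increases and y decreases
assert sp.simplify(sp.diff(sol[x], a)).is_positive
assert sp.simplify(sp.diff(sol[y], a)).is_negative

# Scaling a and b by a common factor t leaves x unchanged
t = sp.symbols('t', positive=True)
assert sp.simplify(sol[x].subs({a: t*a, b: t*b}, simultaneous=True) - sol[x]) == 0
```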
Constraint qualification
Suppose that H_i(x̄_1, x̄_2) = 0 for some i; then x_i can be varied marginally without violating the constraint.
If this leads to a rise in the objective function (i.e., F_i(x̄_1, x̄_2) > 0), this is desirable.
In this case, one should consume up to the point of satiation, at which F_i(x̄_1, x̄_2) is also zero.
However, the Lagrange method fails if H_i(x̄_1, x̄_2) = 0 for both i.
(It is no longer possible to draw the budget line.)
Remarks
In many economic applications, the failure of the constraint qualification is not the
relevant case.
Even if it is, it can usually be circumvented by rewriting the constraint.
Throughout the course, we will typically assume that the constraint qualification
holds, i.e., that H_i(x̄_1, x̄_2) ≠ 0 for at least one i.
However, you should always keep in mind that the methods developed in this
course could fail if the constraint qualification does not hold.
The Lagrangian is
\[
L(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = F(x_1, \ldots, x_n) + \sum_{i=1}^{m} \lambda_i \left[ c_i - H^i(x_1, \ldots, x_n) \right],
\]
where λ_i is the Lagrange parameter associated with constraint i.
A more compact notation is
\[
L(x, \lambda) = F(x) + \lambda \left[ c - H(x) \right],
\]
where x is an n-dimensional vector, λ is an m-dimensional row vector, c is an m-dimensional vector, F is a scalar-valued function, and H is an m-dimensional vector-valued function.
The first-order conditions with respect to x require
\[
\underbrace{\left( \frac{\partial F}{\partial x_1}, \ldots, \frac{\partial F}{\partial x_n} \right)}_{\equiv F_x(x)}
- \underbrace{(\lambda_1, \ldots, \lambda_m)}_{\equiv \lambda}
\underbrace{\begin{pmatrix}
\frac{\partial H^1}{\partial x_1} & \cdots & \frac{\partial H^1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial H^m}{\partial x_1} & \cdots & \frac{\partial H^m}{\partial x_n}
\end{pmatrix}}_{\equiv H_x(x)} = 0.
\]
Recall that H_x(x) is the m × n matrix where the row vectors of partial derivatives are stacked.
The constraint qualification is that rank H_x(x̄) = m (the maximum possible; i.e., H_x has "full rank").
A matrix has "full rank" if its rank is the largest its dimensions allow; for an m × n matrix with m ≤ n, this means that its m rows are linearly independent.
A set of vectors is said to be linearly dependent if one of the vectors in
the set can be defined as a linear combination of the others.
If no vector in the set can be written in this way, then the vectors are
said to be linearly independent.
We say that the vector of constraints (H^1, \ldots, H^m) satisfies the nondegenerate constraint qualification (NDCQ) at x if the rank of H_x(x) is m.
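A minimal numerical sketch of how one might check the NDCQ; the Jacobian entries below are hypothetical placeholders, not taken from the slides:

```python
import numpy as np

m, n = 2, 3                          # two constraints, three variables
Hx = np.array([[1.0, 2.0, 0.0],      # gradient of H^1 at the candidate x (assumed values)
               [0.0, 1.0, 3.0]])     # gradient of H^2 at the candidate x (assumed values)

ndcq_holds = (np.linalg.matrix_rank(Hx) == m)
print(ndcq_holds)  # True: the two constraint gradients are linearly independent
```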
The first-order conditions are
\[
L_x(\bar{x}, \lambda) = 0, \qquad L_\lambda(\bar{x}, \lambda) = 0.
\]
This is a straightforward extension of the simple case with two variables and one
constraint.
In the general case with n variables and m constraints, the bordered Hessian matrix emerges as
\[
H_b \equiv \begin{pmatrix} 0 & -H_x \\ -H_x^T & F_{xx} - \lambda H_{xx} \end{pmatrix},
\]
where H_x^T is the transpose of H_x.
The top left partition is (m × m), the bottom right (n × n), the top right is (m × n), and the bottom left is (n × m).
We can start at the upper left corner and evaluate the signs of the leading
principal minors.
Below, we see the less compact formulation that provides the intuition for the structure of the matrix.
\[
H_b = \begin{pmatrix}
0 & \cdots & 0 & -\frac{\partial H^1}{\partial x_1} & \cdots & -\frac{\partial H^1}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & -\frac{\partial H^m}{\partial x_1} & \cdots & -\frac{\partial H^m}{\partial x_n} \\
-\frac{\partial H^1}{\partial x_1} & \cdots & -\frac{\partial H^m}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
-\frac{\partial H^1}{\partial x_n} & \cdots & -\frac{\partial H^m}{\partial x_n} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix}
\]
In the case n = 2 and m = 1, we only need to evaluate the determinant of the whole bordered Hessian (because 2m + 1 = 3 = m + n).
Note that (−1)^n = 1 > 0 and (−1)^m = −1 < 0, consistent with the earlier result that det(H_b) must be positive for a maximum.
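To tie the sign rule back to the earlier example (maximize ax + by subject to x² + y² = 100), a minimal numerical sketch with assumed parameter values a = 3 and b = 4:

```python
import numpy as np

a, b = 3.0, 4.0                          # assumed parameter values
lam = np.sqrt(a**2 + b**2) / 20          # lambda_bar = 0.25
x = 10*a / np.sqrt(a**2 + b**2)          # x_bar = 6
y = 10*b / np.sqrt(a**2 + b**2)          # y_bar = 8

H1, H2 = 2*x, 2*y                        # constraint gradient at the solution
L11, L12, L22 = -2*lam, 0.0, -2*lam      # second derivatives of the Lagrangian

Hb = np.array([[0.0, -H1, -H2],
               [-H1, L11, L12],
               [-H2, L12, L22]])

# With n = 2 and m = 1, det(Hb) must have the sign of (-1)^n = +1 for a maximum
print(np.linalg.det(Hb))  # approximately 200 > 0, so (x_bar, y_bar) is a maximum
```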
Concluding remarks
We have explored
Second order conditions to figure out whether we have identified a
maximum or a minimum
Constraint qualifications to figure out whether or not the Lagrange
method is applicable
Moreover, we have extended the Lagrange method to the case of n variables and
m < n equality constraints
As a next step, we will introduce inequality constraints.