7.1 The Kuhn-Tucker conditions
Consider, for example, a consumer's choice problem. There is no reason to insist that a consumer spend all her wealth, so her optimization problem should be formulated with inequality constraints:

    max_x u(x) subject to p·x ≤ w.

Depending on the character of the function u and the values of p and w, we may have p·x < w or p·x = w at a solution of this problem.
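To see both possibilities concretely, here is a small closed-form sketch. The "bliss point" utility u(x) = −(x1 − 1)² − (x2 − 1)² is my own illustration, not from the text: when the bliss point is affordable the budget constraint is slack, and otherwise it binds.

```python
# A hypothetical consumer with utility u(x) = -(x1-1)^2 - (x2-1)^2,
# prices p = (p1, p2), and wealth w (my own illustration, not from the text).

def solve_consumer(p, w):
    """Maximize -(x1-1)^2 - (x2-1)^2 subject to p·x <= w, in closed form."""
    bliss = (1.0, 1.0)
    cost = p[0] * bliss[0] + p[1] * bliss[1]
    if cost <= w:
        # The unconstrained maximizer is affordable: the budget constraint
        # is slack at the solution (p·x < w unless cost == w exactly).
        return bliss, "slack"
    # Otherwise the constraint binds; maximize on the line p·x = w.
    # First-order conditions with multiplier mu give xi = 1 - mu*pi/2, and
    # substituting into p·x = w gives mu = 2*(cost - w)/(p1^2 + p2^2).
    mu = 2.0 * (cost - w) / (p[0] ** 2 + p[1] ** 2)
    x = (1.0 - mu * p[0] / 2.0, 1.0 - mu * p[1] / 2.0)
    return x, "binding"
```

With p = (1, 1) and w = 3 the bliss point costs only 2, so the constraint is slack; with w = 1 the solution spends all wealth.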
One approach to solving this problem starts by determining which of these two conditions holds
at a solution. In more complex problems, with more than one constraint, this approach does not
work well. Consider, for example, a consumer who faces two constraints (perhaps money and
time). Three examples are shown in the following figure, which should convince you that we
cannot deduce from simple properties of u alone which of the constraints, if any, are satisfied
with equality at a solution.
The general problem we study takes the form

    max_x f(x) subject to gj(x) ≤ cj for j = 1, ..., m,

where f and gj for j = 1, ..., m are functions of n variables, x = (x1, ..., xn), and cj for j = 1, ..., m are constants.
All the problems we have studied so far may be put into this form.
Equality constraints
We introduce two inequality constraints for every equality constraint. For example, the problem

    max_x f(x) subject to g(x) = c

may be written as

    max_x f(x) subject to g(x) ≤ c and −g(x) ≤ −c.
Nonnegativity constraints
For a problem with a constraint xk ≥ 0 we let gj(x) = −xk and cj = 0 for some j.
Minimization problems
For a minimization problem we multiply the objective function by −1: the problem

    min_x h(x) subject to gj(x) ≤ cj for j = 1, ..., m

is the same as

    max_x −h(x) subject to gj(x) ≤ cj for j = 1, ..., m

in the sense that the two problems have the same solutions.
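These reductions are mechanical. The sketch below (the names and the pair representation are my own, not from the text) stores each inequality constraint as a pair (g, c), read as g(x) ≤ c:

```python
# Each inequality constraint is represented as a pair (g, c), read g(x) <= c.
# Representation and function names are my own illustration, not from the text.

def equality_to_inequalities(g, c):
    """Replace the equality constraint g(x) = c by the two inequalities
    g(x) <= c and -g(x) <= -c."""
    return [(g, c), (lambda x: -g(x), -c)]

def nonnegativity_constraint(k):
    """x_k >= 0 as an inequality constraint: g(x) = -x_k with c = 0."""
    return (lambda x: -x[k], 0.0)

def minimization_to_maximization(h):
    """Minimizing h has the same solutions as maximizing -h."""
    return lambda x: -h(x)
```

For example, the equality constraint x1 + x2 = 1 becomes the pair x1 + x2 ≤ 1 and −x1 − x2 ≤ −1, both of which hold exactly at points on the line.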
To start thinking about how to solve the general problem, first consider a case with a single constraint (m = 1). We can write such a problem as

    max_x f(x) subject to g(x) ≤ c.
There are two possibilities for the solution of this problem. In the following figures, the black
closed curves are contours of f ; values of the function increase in the direction shown by the
blue arrows. The downward-sloping red line is the set of points x satisfying g(x) = c; the set of points x satisfying g(x) ≤ c lies below and to the left of the line, and the set satisfying g(x) ≥ c lies above and to the right of the line.
In each panel the solution of the problem is the point x*. In the left panel the constraint binds at
the solution: a change in c changes the solution. In the right panel, the constraint is slack at the
solution: small changes in c have no effect on the solution.
Define the Lagrangean L by L(x) = f(x) − λ(g(x) − c). Then from our previous analysis of problems with equality constraints and of problems with no constraints,

- if g(x*) = c (as in the left-hand panel) and the constraint satisfies a regularity condition, then Li′(x*) = 0 for all i
- if g(x*) < c (as in the right-hand panel), then fi′(x*) = 0 for all i.
Now, I claim that in the first case (that is, if g(x*) = c) we have λ ≥ 0. Suppose, to the contrary, that λ < 0. Since λ measures the rate at which the maximal value of f increases as c increases, a small decrease in c would then raise the maximal value of f. But every point satisfying the tightened constraint also satisfies the original one, so the maximal value cannot rise when c falls; this contradicts the fact that x* solves the original problem.
In the second case, the value of λ does not enter the conditions, so we can choose any value for it. Given the interpretation of λ as the marginal value of relaxing the constraint, setting λ = 0 makes sense. Under this assumption we have fi′(x) = Li′(x) for all x, so that Li′(x*) = 0 for all i.
Thus in both cases we have Li′(x*) = 0 for all i, λ ≥ 0, and g(x*) ≤ c. In the first case we have g(x*) = c and in the second case λ = 0.
Now, the product of two numbers is zero if and only if at least one of them is zero, so we can alternatively write these conditions as

    Li′(x*) = 0 for i = 1, ..., n
    λ ≥ 0, g(x*) ≤ c, and λ(g(x*) − c) = 0.
The argument I have given suggests that if x* solves the problem and the constraint satisfies a
regularity condition, then x* must satisfy these conditions.
Note that the conditions do not rule out the possibility that both λ = 0 and g(x*) = c.
The inequalities λ ≥ 0 and g(x*) ≤ c are called complementary slackness conditions; at most
one of these conditions is slack (i.e. not an equality).
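To see the case analysis at work, here is a minimal sketch; the specific problem, max −(x − 2)² subject to x ≤ 1, is my own illustration, not from the text. Its Lagrangean is L(x) = −(x − 2)² − λ(x − 1), so L′(x) = −2(x − 2) − λ.

```python
# Solve max -(x - 2)^2 subject to x <= 1 by checking the two
# complementary-slackness cases (my own illustration, not from the text).
# L(x) = -(x - 2)^2 - lam*(x - 1), so L'(x) = -2*(x - 2) - lam.

def solve_single_constraint():
    # Case lam = 0: then L'(x) = f'(x) = -2*(x - 2) = 0 gives x = 2,
    # which violates g(x) <= c (since 2 > 1), so this case yields no candidate.
    x = 2.0
    if x <= 1.0:
        return x, 0.0
    # Case g(x) = c: x = 1, and L'(1) = 0 gives lam = -2*(1 - 2) = 2,
    # which is >= 0, so every condition holds.
    x = 1.0
    lam = -2.0 * (x - 2.0)
    return x, lam
```

The solution is x* = 1 with λ = 2: the constraint binds, and λ ≥ 0 as the argument above requires.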
For a problem with many constraints, we introduce one multiplier for each constraint, as before, and obtain the Kuhn-Tucker conditions, defined as follows.
Definition
The Kuhn-Tucker conditions for the problem

    max_x f(x) subject to gj(x) ≤ cj for j = 1, ..., m

are

    Li′(x) = 0 for i = 1, ..., n
    λj ≥ 0, gj(x) ≤ cj, and λj(gj(x) − cj) = 0 for j = 1, ..., m,

where

    L(x) = f(x) − λ1(g1(x) − c1) − ... − λm(gm(x) − cm).
These conditions are named in honor of Harold W. Kuhn, an emeritus member of the Princeton
Math Department, and Albert W. Tucker, who first formulated and studied the conditions.
In the following sections I discuss results that specify the precise relationship between the
solutions of the Kuhn-Tucker conditions and the solutions of the problem. The following
example illustrates the form of the conditions in a specific case.
Example
Consider the problem

    max_{x1,x2} −(x1 − 4)² − (x2 − 4)² subject to x1 + x2 ≤ 4 and x1 + 3x2 ≤ 9.

We have

    L(x1, x2) = −(x1 − 4)² − (x2 − 4)² − λ1(x1 + x2 − 4) − λ2(x1 + 3x2 − 9).

The Kuhn-Tucker conditions are

    −2(x1 − 4) − λ1 − λ2 = 0
    −2(x2 − 4) − λ1 − 3λ2 = 0
x1 + x2 ≤ 4, λ1 ≥ 0, and λ1(x1 + x2 − 4) = 0
x1 + 3x2 ≤ 9, λ2 ≥ 0, and λ2(x1 + 3x2 − 9) = 0.
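One way to solve these conditions is to enumerate which constraints bind. The sketch below is my own, not from the text: each candidate is obtained by hand from the stationarity conditions plus the assumed pattern of binding constraints, and the check keeps only candidates satisfying every condition. Only (x1, x2) = (2, 2) with λ1 = 4, λ2 = 0 survives.

```python
# Enumerate the four binding patterns for the example's Kuhn-Tucker
# conditions (my own sketch, not from the text).

def check(x1, x2, lam1, lam2, tol=1e-9):
    """True if (x1, x2, lam1, lam2) satisfies all the conditions above."""
    return (abs(-2*(x1 - 4) - lam1 - lam2) < tol          # stationarity in x1
            and abs(-2*(x2 - 4) - lam1 - 3*lam2) < tol    # stationarity in x2
            and x1 + x2 <= 4 + tol and lam1 >= -tol
            and abs(lam1 * (x1 + x2 - 4)) < tol           # complementary slackness
            and x1 + 3*x2 <= 9 + tol and lam2 >= -tol
            and abs(lam2 * (x1 + 3*x2 - 9)) < tol)        # complementary slackness

# One candidate per binding pattern, solved by hand from the conditions:
candidates = [
    (4.0, 4.0, 0.0, 0.0),   # neither binds: f's unconstrained maximizer
    (2.0, 2.0, 4.0, 0.0),   # only x1 + x2 = 4 binds
    (3.3, 1.9, 0.0, 1.4),   # only x1 + 3x2 = 9 binds
    (1.5, 2.5, 6.0, -1.0),  # both bind
]
solutions = [c for c in candidates if check(*c)]
```

The first candidate violates x1 + x2 ≤ 4, the third violates it too (3.3 + 1.9 > 4), and the fourth has λ2 < 0, so only the second remains.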
Source: http://www.chass.utoronto.ca/~osborne/MathTutorial/KTC.HTM (20.02.2006)