15.093: Optimization Methods



Lecture 14: Lagrangean Methods


1 Outline
Slide 1

- The Lagrangean dual
- The strength of the Lagrangean dual
- Solution of the Lagrangean dual

2 The Lagrangean dual


Slide 2
- Consider

    Z_IP = min c'x
           s.t. Ax ≥ b
                Dx ≥ d
                x integer

- X = {x integer | Dx ≥ d}
- Optimizing over X can be done efficiently
2.1 Formulation Slide 3
- Consider

    Z(λ) = min c'x + λ'(b - Ax)        (D)
           s.t. x ∈ X

- For fixed λ, problem (D) can be solved efficiently
- If X = {x^1, ..., x^m}, then

    Z(λ) = min_{i=1,...,m} ( c'x^i + λ'(b - Ax^i) )

- Z(λ) is concave and piecewise linear
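Since X is a finite set, Z(λ) can be evaluated directly from the definition by enumeration. A minimal sketch, with made-up illustrative data (the instance below is not from the lecture):

```python
import numpy as np

def lagrangean(lam, c, A, b, X):
    """Z(lam) = min over x in X of c'x + lam'(b - Ax)."""
    return min(c @ x + lam @ (b - A @ x) for x in X)

# Illustrative instance: relax the single constraint x1 >= 1.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0]])
b = np.array([1.0])
X = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]

print(lagrangean(np.array([0.0]), c, A, b, X))   # 0.0
print(lagrangean(np.array([1.0]), c, A, b, X))   # 1.0
```

With only two points in X, Z(λ) is the minimum of two affine functions of λ, which illustrates why Z is concave and piecewise linear.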
2.2 Weak Duality Slide 4
If the original problem has an optimal solution and if λ ≥ 0, then Z(λ) ≤ Z_IP.

- Proof: let x* be an optimal solution of the original problem, so that c'x* = Z_IP.
- Then b - Ax* ≤ 0 and λ ≥ 0, and therefore

    c'x* + λ'(b - Ax*) ≤ c'x* = Z_IP

- Since x* ∈ X, Z(λ) ≤ c'x* + λ'(b - Ax*); and thus Z(λ) ≤ Z_IP

(Figure: Z(p), a piecewise linear concave function of p, with maximum Z_D attained at p*.)

2.3 Key problem Slide 5


- Consider the Lagrangean dual:

    Z_D = max Z(λ)
          s.t. λ ≥ 0

- Z_D ≤ Z_IP
- We need to maximize a piecewise linear concave function

3 Strength of LD
3.1 Main Theorem Slide 7
Let X = {x integer | Dx ≥ d}, and note that CH(X) is a polyhedron. Then

    Z_D = min c'x
          s.t. Ax ≥ b
               x ∈ CH(X)

3.2 Example Slide 8


min   3x1 - x2
s.t.  x1 - x2 ≥ -1
      -x1 + 2x2 ≤ 5
      3x1 + 2x2 ≥ 3
      6x1 + x2 ≤ 15
      x1, x2 ≥ 0 integer

(Figure: the lattice points of X, the convex hull CH(X), the cost vector c, and the points x_IP, x_D, and x_LP.)

Relax x1 - x2 ≥ -1; X involves the remaining constraints:

    X = { (1,0), (2,0), (1,1), (2,1), (0,2), (1,2), (2,2), (1,3), (2,3) }.

Slide 9
For p ≥ 0, we have

Slide 10

    Z(p) = min_{(x1,x2) ∈ X} ( 3x1 - x2 + p(-1 - x1 + x2) )

         = { -2 + p,   0 ≤ p ≤ 5/3,
             3 - 2p,   5/3 ≤ p ≤ 3,
             6 - 3p,   p ≥ 3.

Hence p* = 5/3, and Z_D = Z(5/3) = -1/3. Slide 11
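The piecewise form of Z(p) can be checked by enumerating the nine points of X; a small sketch:

```python
# Points of X (the relaxed constraint is x1 - x2 >= -1, so b - Ax = -1 - x1 + x2).
X = [(1,0), (2,0), (1,1), (2,1), (0,2), (1,2), (2,2), (1,3), (2,3)]

def Z(p):
    """Z(p) = min over X of 3*x1 - x2 + p*(-1 - x1 + x2)."""
    return min(3*x1 - x2 + p*(-1 - x1 + x2) for (x1, x2) in X)

print(Z(0))      # -2, on the piece -2 + p
print(Z(5/3))    # about -0.333 = Z_D, attained at p* = 5/3
print(Z(4))      # -6, on the piece 6 - 3p
```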

Slide 12
- x_D = (1/3, 4/3), Z_D = -1/3
- x_LP = (1/5, 6/5), Z_LP = -3/5
- x_IP = (1, 2), Z_IP = 1
- Z_LP < Z_D < Z_IP

Slide 13
- In general, Z_LP ≤ Z_D ≤ Z_IP
- For c'x = 3x1 - x2, we have Z_LP < Z_D < Z_IP.
- For c'x = -x1 + x2, we have Z_LP < Z_D = Z_IP.
- For c'x = -x1 - x2, we have Z_LP = Z_D = Z_IP.
- Z_LP = Z_D < Z_IP is also possible, but not on this example.
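The relations between Z_D and Z_IP above can be verified by enumeration (Z_LP is omitted here, since it would need an LP solver). This sketch maximizes the concave piecewise linear Z(p) exactly, by checking p = 0 and every pairwise intersection of the affine pieces:

```python
from itertools import combinations

X = [(1,0), (2,0), (1,1), (2,1), (0,2), (1,2), (2,2), (1,3), (2,3)]

def dual_vs_ip(c1, c2):
    """Return (Z_D, Z_IP) for cost c1*x1 + c2*x2, relaxing x1 - x2 >= -1."""
    # Each x in X contributes the affine piece h + f*p, h = c'x, f = -1 - x1 + x2.
    pieces = [(c1*x1 + c2*x2, -1 - x1 + x2) for (x1, x2) in X]
    Z = lambda p: min(h + f*p for (h, f) in pieces)
    # The max of the concave piecewise linear Z over p >= 0 is attained
    # at p = 0 or at an intersection of two pieces.
    cands = [0.0]
    for (h1, f1), (h2, f2) in combinations(pieces, 2):
        if f1 != f2:
            p = (h2 - h1) / (f1 - f2)
            if p > 0:
                cands.append(p)
    ZD = max(Z(p) for p in cands)
    ZIP = min(c1*x1 + c2*x2 for (x1, x2) in X if x1 - x2 >= -1)
    return ZD, ZIP

print(dual_vs_ip(3, -1))    # (about -0.333, 1): Z_D < Z_IP
print(dual_vs_ip(-1, 1))    # (-2.0, -2): Z_D = Z_IP
print(dual_vs_ip(-1, -1))   # (-5.0, -5): Z_D = Z_IP
```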

(Figure: Z(p) as a function of p.)

3.3 LP and LD Slide 14

- Z_IP = Z_D for all cost vectors c, if and only if

    CH( X ∩ {x | Ax ≥ b} ) = CH(X) ∩ {x | Ax ≥ b}

- We have Z_LP = Z_D for all cost vectors c, if

    CH(X) = {x | Dx ≥ d}

- If {x | Dx ≥ d} has integer extreme points, then CH(X) = {x | Dx ≥ d}, and therefore Z_D = Z_LP

4 Solution of LD Slide 15

- Z(λ) = min_{i=1,...,m} ( c'x^i + λ'(b - Ax^i) ), i.e.,

    Z(λ) = min_{i=1,...,m} ( h_i + f_i'λ ),

  where h_i = c'x^i and f_i = b - Ax^i.
- Motivation: the classical steepest ascent method for maximizing Z(λ):

    λ^{t+1} = λ^t + θ_t ∇Z(λ^t),   t = 1, 2, ...

- Problem: Z(λ) is not differentiable

(Figures: a concave function f lying below the supporting line f(x*) + s'(x - x*); and the piecewise linear Z(p) built from pieces with slopes f_1, f_2, with a subgradient s at the maximizer p*.)

4.1 Subgradients Slide 16

- A function f : R^n → R is concave if and only if for any x* ∈ R^n there exists a vector s ∈ R^n such that

    f(x) ≤ f(x*) + s'(x - x*)   for all x ∈ R^n.

- Let f be a concave function. A vector s such that

    f(x) ≤ f(x*) + s'(x - x*)   for all x ∈ R^n

  is called a subgradient of f at x*.
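For a concave piecewise linear function f(p) = min_i (h_i + f_i p), a subgradient at p is the slope of any piece that attains the minimum. A sketch, using the three pieces of the example's Z(p):

```python
def subgradient(pieces, p):
    """Slope of an active piece of f(p) = min_i (h + f*p): a subgradient at p."""
    val = min(h + f*p for (h, f) in pieces)
    return next(f for (h, f) in pieces if h + f*p == val)

# The three pieces of Z(p) from the example: -2 + p, 3 - 2p, 6 - 3p.
pieces = [(-2, 1), (3, -2), (6, -3)]

s = subgradient(pieces, 1.0)      # piece -2 + p is active at p = 1, so s = 1
f = lambda q: min(h + fi*q for (h, fi) in pieces)
# Check the subgradient inequality f(q) <= f(1) + s*(q - 1) at a few points:
print(all(f(q) <= f(1.0) + s*(q - 1.0) for q in [0.0, 1.0, 2.0, 5.0]))  # True
```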


4.2 Subgradient Algorithm Slide 19

1. Choose a starting point λ^1; let t = 1.
2. Given λ^t, choose a subgradient s^t of the function Z(λ) at λ^t. If s^t = 0, then λ^t is optimal and the algorithm terminates. Else, continue.
3. Let λ^{t+1} = λ^t + θ_t s^t, where θ_t is a positive stepsize parameter. Increment t and go to Step 2.
3a. If the constraint λ ≥ 0 is present, project: λ_j^{t+1} = max{ λ_j^t + θ_t s_j^t, 0 } for all j.

4.2.1 Step sizes Slide 20

- Z(λ^t) converges to the unconstrained maximum of Z(λ), for any stepsize sequence θ_t such that

    Σ_{t=1}^∞ θ_t = ∞   and   lim_{t→∞} θ_t = 0.

- Examples:
  - θ_t = 1/t;
  - θ_t = θ_0 α^t, t = 1, 2, ..., where 0 < α < 1;
  - θ_t = α (Ẑ_D - Z(λ^t)) / ||s^t||^2, where α satisfies 0 < α < 1, and Ẑ_D is an estimate of the optimal value Z_D.

4.3 Example Slide 21

Recall p* = 5/3 ≈ 1.66 and Z_D = -1/3 ≈ -0.33. Applying subgradient optimization:

    t     p^t     s^t    Z(p^t)
    1     5.00    -3     -9.00
    2     2.60    -3     -1.80
    3     0.68     1     -1.32
    4     1.19     1     -0.81
    5     1.60     1     -0.40
    6     1.92    -2     -0.84
    7     1.40     1     -0.60
    8     1.61     1     -0.39
    9     1.78    -2     -0.56
    10    1.51     1     -0.49
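The iteration can be reproduced in code. This sketch uses the θ_t = 1/t rule from Section 4.2.1 (the stepsizes behind the table are not stated, so the iterates differ from the table), and the iterates approach p* = 5/3 in the same oscillating fashion:

```python
X = [(1,0), (2,0), (1,1), (2,1), (0,2), (1,2), (2,2), (1,3), (2,3)]

def z_and_subgrad(p):
    """Z(p) and a subgradient: the slope -1 - x1 + x2 of a minimizing point."""
    vals = [(3*x1 - x2 + p*(-1 - x1 + x2), -1 - x1 + x2) for (x1, x2) in X]
    z = min(v for v, _ in vals)
    s = next(f for v, f in vals if v == z)
    return z, s

p = 5.0                      # starting point p^1
for t in range(1, 2001):
    z, s = z_and_subgrad(p)
    if s == 0:               # zero subgradient: p is optimal
        break
    p = max(p + (1.0 / t) * s, 0.0)   # step, then project onto p >= 0

print(abs(p - 5/3) < 0.05)   # True: the iterate is close to p* = 5/3
```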

MIT OpenCourseWare
http://ocw.mit.edu

15.093J / 6.255J Optimization Methods


Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
