PENNON: A Generalized Augmented Lagrangian Method for Semidefinite Programming
1. Introduction

We consider convex semidefinite programming problems of the form

    \min_{x \in \mathbb{R}^n} \{\, b^T x \;:\; \mathcal{A}(x) \preceq 0 \,\}    (SDP)

where b ∈ R^n and 𝒜 : R^n → S^m is a convex operator; here S^m denotes the space of symmetric matrices of order m, and 𝒜(x) ⪯ 0 means that 𝒜(x) is negative semidefinite. The basic idea of our approach is to generalize the PBM method, developed originally by Ben-Tal and Zibulevsky for convex NLPs, to problem (SDP). The method is based on a special choice of a one-dimensional penalty/barrier function φ that penalizes inequality constraints. Below we show how to use this function to construct another function Φ_p that penalizes the matrix inequality constraint in (SDP).
Let φ : R → R have the following properties:

(φ₀) φ is strictly convex, strictly monotone increasing and C²,
(φ₁) dom φ = (−∞, b) with 0 < b ≤ ∞,
(φ₂) φ(0) = 0,
(φ₃) φ′(0) = 1,
(φ₄) lim_{t→b} φ′(t) = ∞,
(φ₅) lim_{t→−∞} φ′(t) = 0.
Let further A = SᵀΛS, where Λ = diag(λ₁, λ₂, …, λ_d), be an eigenvalue decomposition of a matrix A. Using φ, we define a penalty function Φ_p : S^m → S^m as follows:

    \Phi_p : A \mapsto S^T
    \begin{pmatrix}
      p\,\varphi(\lambda_1/p) & \cdots & 0 \\
      \vdots & \ddots & \vdots \\
      0 & \cdots & p\,\varphi(\lambda_d/p)
    \end{pmatrix} S,    (1)

where p > 0 is a given number.
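For concreteness, the following minimal numpy sketch (our own illustration, not part of the PENNON code) evaluates Φ_p through an eigenvalue decomposition; it uses the reciprocal barrier φ_rec from (6) in Section 2 as the one-dimensional function φ. The names `phi_rec` and `Phi` are ours.

```python
import numpy as np

def phi_rec(t):
    # Reciprocal barrier (6): phi(t) = 1/(1-t) - 1, dom phi = (-inf, 1).
    return 1.0 / (1.0 - t) - 1.0

def Phi(A, p, phi=phi_rec):
    # Penalty function (1): Phi_p(A) = S^T diag(p*phi(lam_i/p)) S,
    # computed from the eigenvalue decomposition A = V diag(lam) V^T.
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(p * phi(lam / p)) @ V.T

# A(x) <= 0  <=>  Phi_p(A(x)) <= 0 for any p > 0 (sanity check):
A = np.array([[-2.0, 0.3], [0.3, -1.0]])          # negative definite
for p in (0.5, 1.0, 10.0):
    assert np.linalg.eigvalsh(Phi(A, p)).max() < 0.0
```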
From the definition of φ it follows that for any p > 0 we have

    \mathcal{A}(x) \preceq 0 \iff \Phi_p(\mathcal{A}(x)) \preceq 0,

which means that, for any p > 0, problem (SDP) has the same solution as the following augmented problem:

    \min_{x \in \mathbb{R}^n} \{\, b^T x \;:\; \Phi_p(\mathcal{A}(x)) \preceq 0 \,\}.    (SDP)_Φ
The Lagrangian of (SDP)_Φ can be viewed as a (generalized) augmented Lagrangian of (SDP):

    F(x, U, p) = b^T x + \langle U, \Phi_p(\mathcal{A}(x)) \rangle_{S^m},    (2)

where U ∈ S^m is a Lagrangian multiplier. This leads to the following algorithm.

Algorithm 1.1. Let x¹ and U¹ be given. Let p¹ > 0. For k = 1, 2, … repeat until a stopping criterion is reached:

(i) x^{k+1} = arg min_{x∈R^n} F(x, U^k, p^k)
(ii) U^{k+1} = D_𝒜 Φ_p(𝒜(x^{k+1}); U^k)
(iii) p^{k+1} < p^k.

The precise meaning of the multiplier update D_𝒜 Φ_p(𝒜(x^{k+1}); U^k) is discussed in Section 4.4.
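Schematically, and under the hypothetical assumption that the inner solver and the multiplier update are supplied as callables, the outer loop can be sketched as follows; the fixed iteration count and the geometric penalty decrease are mere placeholders for the stopping and update rules of Section 4.

```python
def algorithm_1_1(x, U, p, minimize_F, update_U, sigma=0.5, max_iter=50):
    # Skeleton of Algorithm 1.1 (a sketch; PENNON's actual rules for
    # stopping and for updating p are described in Sections 4.4-4.5).
    for _ in range(max_iter):
        x = minimize_F(x, U, p)   # (i)  approximate minimizer of F(., U, p)
        U = update_U(x, U, p)     # (ii) U <- D_A Phi_p(A(x); U)
        p = sigma * p             # (iii) any p_{k+1} < p_k will do here
    return x, U, p
```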
Details of the algorithm (the choice of the initial values of x, U and p, the approximate minimization in step (i), and the update formulas) will be discussed in subsequent sections. The next section concerns the choice of the penalty function Φ_p.
2. The choice of the penalty function Φ_p
As mentioned in the Introduction, Algorithm 1.1 is a generalization of the PBM method by Ben-Tal and Zibulevsky [3] (introduced for convex NLPs) to convex SDP problems. In [3], several choices of the function φ satisfying (φ₁)–(φ₅) are presented. The most efficient one (for convex NLP) is the quadratic-logarithmic function defined as

    \varphi_{ql}(t) =
    \begin{cases}
      c_1 \tfrac{1}{2} t^2 + c_2 t + c_3, & t \ge r, \\
      c_4 \log(t - c_5) + c_6, & t < r,
    \end{cases}    (3)

where r ∈ (−1, 1) and c_i, i = 1, …, 6, are chosen so that (φ₁)–(φ₅) hold.
It turns out that functions which work well in the NLP case may not be the best choice for SDP problems. The reason is twofold.

First, it may happen that, even if the function φ and the operator 𝒜 are convex, the penalty function Φ_p may be nonconvex. For instance, the function Φ_p defined through the right, quadratic branch of the quadratic-logarithmic function φ_ql is nonmonotone, and its composition with a convex nonlinear operator 𝒜 may result in a nonconvex function Φ_p(𝒜(x)). Even for a linear operator 𝒜, the function Φ_p(𝒜(x)) corresponding to φ_ql may be nonconvex. This nonconvexity may obviously bring difficulties to Algorithm 1.1 and requires special treatment.
Second, the general definition (1) of the penalty function Φ_p may lead to a very inefficient algorithm. The (approximate) minimization in step (i) of Algorithm 1.1 is performed by the Newton method. Hence we need to compute the gradient and Hessian of the augmented Lagrangian (2) at each step of the Newton method. This computation may be extremely time consuming. Moreover, even if the data of the problem and the Hessian of the (original) Lagrangian are sparse matrices, the computation of the Hessian of the augmented Lagrangian involves many operations with full matrices when using the general formula (1). A detailed analysis of the algorithmic complexity will be given in Section 3. It is based on the formulas for the first and second derivatives of Φ_p presented below.
Denote by Δⁱφ the divided difference of i-th order, i = 1, 2, defined by

    \Delta^1\varphi(t_i, t_j) :=
    \begin{cases}
      \dfrac{\varphi(t_i) - \varphi(t_j)}{t_i - t_j} & \text{for } t_i \ne t_j, \\[1ex]
      \varphi'(t_i) & \text{for } t_i = t_j,
    \end{cases}

and

    \Delta^2\varphi(t_i, t_j, t_k) :=
    \begin{cases}
      \dfrac{\Delta^1\varphi(t_i, t_j) - \Delta^1\varphi(t_i, t_k)}{t_j - t_k} & \text{for } t_j \ne t_k, \\[1ex]
      \dfrac{\Delta^1\varphi(t_i, t_k) - \Delta^1\varphi(t_j, t_k)}{t_i - t_j} & \text{for } t_i \ne t_j,\ t_j = t_k, \\[1ex]
      \tfrac{1}{2}\varphi''(t_i) & \text{for } t_i = t_j = t_k.
    \end{cases}
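The case analysis above translates directly into code. In the sketch below (tolerance and function names are our choices), Δ¹ and Δ² fall back to φ′ and ½φ″ at coincident arguments:

```python
def dd1(phi, dphi, ti, tj, tol=1e-12):
    # First divided difference; phi'(ti) at coincident arguments.
    if abs(ti - tj) < tol:
        return dphi(ti)
    return (phi(ti) - phi(tj)) / (ti - tj)

def dd2(phi, dphi, ddphi, ti, tj, tk, tol=1e-12):
    # Second divided difference, following the three cases above.
    if abs(ti - tj) < tol and abs(tj - tk) < tol:
        return 0.5 * ddphi(ti)                       # ti = tj = tk
    if abs(tj - tk) < tol:                           # ti != tj, tj = tk
        return (dd1(phi, dphi, ti, tk) - dd1(phi, dphi, tj, tk)) / (ti - tj)
    return (dd1(phi, dphi, ti, tj) - dd1(phi, dphi, ti, tk)) / (tj - tk)

# Example with phi_rec(t) = 1/(1-t) - 1:
phi   = lambda t: 1.0 / (1.0 - t) - 1.0
dphi  = lambda t: 1.0 / (1.0 - t) ** 2
ddphi = lambda t: 2.0 / (1.0 - t) ** 3
assert abs(dd1(phi, dphi, 0.0, 0.0) - 1.0) < 1e-12   # property (phi_3)
```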
Theorem 2.1. Let 𝒜 : R^n → S^m be a convex operator and let Φ_p be the function defined by (1). Then for any x ∈ R^n the first and second partial derivatives of Φ_p(𝒜(x)) are given by

    \frac{\partial}{\partial x_i} \Phi_p(\mathcal{A}(x))
    = S(x) \left\{ \left[ \Delta^1\varphi(\lambda_r(x), \lambda_s(x)) \right]_{r,s=1}^{m}
      \circ \left[ S(x)^T \frac{\partial \mathcal{A}(x)}{\partial x_i} S(x) \right] \right\} S(x)^T    (4)

    \frac{\partial^2}{\partial x_i \partial x_j} \Phi_p(\mathcal{A}(x))
    = 2\, S(x) \sum_{k=1}^{m} \left\{ \left[ \Delta^2\varphi(\lambda_r(x), \lambda_s(x), \lambda_k(x)) \right]_{r,s=1}^{m}
      \circ \left[ S(x)^T \frac{\partial \mathcal{A}(x)}{\partial x_i} S(x)\, E_{kk}\,
      S(x)^T \frac{\partial \mathcal{A}(x)}{\partial x_j} S(x) \right] \right\} S(x)^T,    (5)

where ∘ denotes the Hadamard (componentwise) product and E_kk the matrix whose only nonzero entry is a one at position (k, k).
We can avoid the above-mentioned drawbacks by a suitable choice of the function φ. In particular, we search for a function that allows for a direct computation of Φ_p and its first and second derivatives. The function of our choice is the reciprocal barrier function

    \varphi_{rec}(t) = \frac{1}{1 - t} - 1.    (6)
Theorem 2.2. Let 𝒜 : R^n → S^m be a convex operator and let Φ_p^rec be the function defined by (1) using φ_rec. Then for any x ∈ R^n there exists p > 0 such that

    \Phi_p^{rec}(\mathcal{A}(x)) = -p^2 Z(x) - pI    (7)

    \frac{\partial}{\partial x_i} \Phi_p^{rec}(\mathcal{A}(x))
    = p^2\, Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_i} Z(x)    (8)

    \frac{\partial^2}{\partial x_i \partial x_j} \Phi_p^{rec}(\mathcal{A}(x))
    = -p^2\, Z(x) \left( \frac{\partial \mathcal{A}(x)}{\partial x_i} Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_j}
      - \frac{\partial^2 \mathcal{A}(x)}{\partial x_i \partial x_j}
      + \frac{\partial \mathcal{A}(x)}{\partial x_j} Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_i} \right) Z(x)    (9)

where

    Z(x) = (\mathcal{A}(x) - pI)^{-1}.

Furthermore, Φ_p^rec(𝒜(x)) is monotone and convex in x.
Proof. Let I_m denote the identity matrix of order m. Since Z(x) is differentiable and nonsingular at x we have

    0 = \frac{\partial}{\partial x_i} I_m
      = \frac{\partial}{\partial x_i}\left( Z(x) Z^{-1}(x) \right)
      = \left( \frac{\partial}{\partial x_i} Z(x) \right) Z^{-1}(x)
        + Z(x)\, \frac{\partial}{\partial x_i} Z^{-1}(x),    (10)

so the formula

    \frac{\partial}{\partial x_i} Z(x)
    = -Z(x) \left( \frac{\partial}{\partial x_i} Z^{-1}(x) \right) Z(x)
    = -Z(x)\, \frac{\partial \mathcal{A}(x)}{\partial x_i}\, Z(x)    (11)

follows directly after multiplication of (10) by Z(x), and (8) holds.
For the proof of (9) we differentiate the right-hand side of (11):

    \frac{\partial^2}{\partial x_i \partial x_j} Z(x)
    = -\left( \frac{\partial}{\partial x_i} Z(x) \right) \frac{\partial \mathcal{A}(x)}{\partial x_j} Z(x)
      - Z(x) \frac{\partial^2 \mathcal{A}(x)}{\partial x_i \partial x_j} Z(x)
      - Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_j} \frac{\partial}{\partial x_i} Z(x)

    = Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_i} Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_j} Z(x)
      - Z(x) \frac{\partial^2 \mathcal{A}(x)}{\partial x_i \partial x_j} Z(x)
      + Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_j} Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_i} Z(x),

and (9) follows. For the proof of convexity and monotonicity of Φ_p^rec we refer to [13].
Using Theorem 2.2 we can compute the value of Φ_p^rec and its derivatives directly, without the need of an eigenvalue decomposition of 𝒜(x). The direct formulas (8)–(9) are particularly simple for an affine operator

    \mathcal{A}(x) = A_0 + \sum_{i=1}^{n} x_i A_i
    \quad \text{with } A_i \in S^m,\ i = 0, 1, \ldots, n,

when

    \frac{\partial \mathcal{A}(x)}{\partial x_i} = A_i
    \quad \text{and} \quad
    \frac{\partial^2 \mathcal{A}(x)}{\partial x_i \partial x_j} = 0.
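For the affine case, the direct formulas (7)–(8) are a few lines of numpy. The sketch below is our own illustration (all names are ours); it also cross-checks (7) against the eigenvalue-based definition (1).

```python
import numpy as np

def A_of_x(x, A0, Amats):
    # Affine operator A(x) = A0 + sum_i x_i A_i.
    return A0 + sum(xi * Ai for xi, Ai in zip(x, Amats))

def phi_rec_direct(x, A0, Amats, p):
    # Formulas (7)-(8): Z = (A(x) - pI)^{-1},
    # Phi_p^rec(A(x)) = -p^2 Z - p I,  d/dx_i Phi = p^2 Z A_i Z.
    m = A0.shape[0]
    Z = np.linalg.inv(A_of_x(x, A0, Amats) - p * np.eye(m))
    Phi = -p**2 * Z - p * np.eye(m)
    grad = [p**2 * (Z @ Ai @ Z) for Ai in Amats]
    return Phi, grad

# Cross-check (7) against definition (1) with phi_rec(t) = 1/(1-t) - 1:
A0 = -5.0 * np.eye(3)
Amats = [np.diag([1.0, 0.0, 0.0]), 0.1 * np.ones((3, 3))]
x, p = np.array([0.2, -0.1]), 2.0
Phi, _ = phi_rec_direct(x, A0, Amats, p)
lam, V = np.linalg.eigh(A_of_x(x, A0, Amats))
assert np.allclose(Phi, V @ np.diag(p * (1.0 / (1.0 - lam / p) - 1.0)) @ V.T)
```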
3. Complexity
The computational complexity of Algorithm 1.1 is dominated by the construction of the Hessian of the augmented Lagrangian (2). Our complexity analysis is therefore limited to this issue.
3.1. The general approach
As we can easily see from Theorem 2.1, the part of the Hessian corresponding to the inner product in formula (2) is given by

    \left[ \sum_{k=1}^{m} s_k^T \frac{\partial \mathcal{A}(x)}{\partial x_i} S(x)
      \left\{ Q_k \circ \left[ S(x)^T U S(x) \right] \right\}
      S(x)^T \frac{\partial \mathcal{A}(x)}{\partial x_j} s_k \right]_{i,j=1}^{n},    (12)

where Q_k denotes the matrix [Δ²φ(λ_r(x), λ_s(x), λ_k(x))]_{r,s=1}^m and s_k is the k-th row of the matrix S(x). Essentially, the construction is done in three steps, shown below together with their complexity:

- For all k, compute the matrices S(x){Q_k ∘ [S(x)ᵀ U S(x)]}S(x)ᵀ: O(m⁴).
- For all k and i, compute the vectors s_kᵀ ∂𝒜(x)/∂x_i: O(nm³).
- Multiply and sum up the expressions above: O(m³n + m²n²).

Consequently, the Hessian assembly takes O(m⁴ + m³n + m²n²) time. Unfortunately, even if the constraint matrices ∂𝒜(x)/∂x_i are sparse, the complexity formula remains the same. This is due to the fact that the matrices Q_k and S(x) are generally dense, even if the matrix 𝒜(x) is very sparse.
3.2. Function Φ_p^rec
If we replace the general penalty function by the reciprocal function Φ_p^rec then, according to Theorem 2.2, the part of the Hessian corresponding to the inner product in formula (2) can be written as

    -p^2 \left(
      \left[ \left\langle Z(x) U Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_i} Z(x),\,
        \frac{\partial \mathcal{A}(x)}{\partial x_j} \right\rangle \right]_{i,j=1}^{n}
      - \left[ \left\langle Z(x) U Z(x),\,
        \frac{\partial^2 \mathcal{A}(x)}{\partial x_i \partial x_j} \right\rangle \right]_{i,j=1}^{n}
      + \left[ \left\langle Z(x) U Z(x) \frac{\partial \mathcal{A}(x)}{\partial x_j} Z(x),\,
        \frac{\partial \mathcal{A}(x)}{\partial x_i} \right\rangle \right]_{i,j=1}^{n}
    \right).    (13)
It is straightforward to see that the complexity of assembling (13) is O(m³n + m²n²). In contrast to the general approach, for sparse constraint matrices with O(1) entries, the complexity formula reduces to O(m²n + n²).
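A direct transcription of (13) for the affine case (where the middle term vanishes) shows where the O(m³n) and O(m²n²) terms come from. The sketch below is our illustration, with the −p² scaling written out as in (9); for sparse A_i the products and inner products cheapen, giving the reduced bound quoted above.

```python
import numpy as np

def hessian_part_rec(Z, U, Amats, p):
    # Inner-product part of the Hessian, cf. (13), affine case.
    n = len(Amats)
    W = Z @ U @ Z                            # once:         O(m^3)
    WAZ = [W @ Ai @ Z for Ai in Amats]       # n products:   O(m^3 n)
    H = np.empty((n, n))
    for i in range(n):                       # n^2 Frobenius inner
        for j in range(n):                   # products:     O(m^2 n^2)
            H[i, j] = -p**2 * (np.tensordot(WAZ[i], Amats[j])
                               + np.tensordot(WAZ[j], Amats[i]))
    return H
```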
4. The code PENNON
Algorithm 1.1 was implemented (mainly) in the C programming language, and this implementation gave rise to a computer program called PENNON (available at http://www2.am.uni-erlangen.de/kocvara/pennon/). In this section we describe implementation details of this code.
4.1. Block diagonal structure
Many semidefinite constraints can be written in block diagonal form

    \mathcal{A}(x) =
    \begin{pmatrix}
      \mathcal{A}_1(x) & & & \\
      & \ddots & & \\
      & & \mathcal{A}_{k_s}(x) & \\
      & & & \mathcal{A}_l(x)
    \end{pmatrix} \preceq 0,

where 𝒜_l(x) is a diagonal matrix of order k_l, each entry of which has the form a_iᵀx − c_i. Using this, we can reformulate the original problem (SDP) as

    \min_{x \in \mathbb{R}^n} b^T x
    \quad \text{s.t.} \quad
    \mathcal{A}_j(x) \preceq 0,\ j = 1, \ldots, k_s, \qquad
    g_i(x) \le 0,\ i = 1, \ldots, k_l,

where g_i, i = 1, …, k_l, are real-valued affine linear functions. This is the formulation solved by our algorithm. The corresponding augmented Lagrangian can be written as follows:
    F(x, U, u, p) = b^T x
      + \sum_{j=1}^{k_s} \left\langle U_j, \Phi_p(\mathcal{A}_j(x)) \right\rangle_{S^{m_j}}
      + \sum_{i=1}^{k_l} \left\langle u_i, \varphi_p(g_i(x)) \right\rangle_{\mathbb{R}},

where U = (U₁, …, U_{k_s}) ∈ S^{m_1} × ⋯ × S^{m_{k_s}} and u = (u₁, …, u_{k_l}) ∈ R^{k_l} are the Lagrangian multipliers, and p ∈ R^{k_s + k_l} is the vector of penalty parameters associated with the inequality constraints.
4.2. Initialization
As we have seen in Theorem 2.2, our algorithm can start with an arbitrary primal variable x ∈ R^n. Therefore we simply choose x⁰ = 0. The initial values of the multipliers are set to

    U_j^0 = \tau_j^s I_{m_j}, \quad j = 1, \ldots, k_s, \qquad
    u_i^0 = \tau_i^l, \quad i = 1, \ldots, k_l,

where I_{m_j} are identity matrices of order m_j and

    \tau_j^s = m_j \max_{1 \le \ell \le n}
      \frac{1 + |b_\ell|}{1 + \left\| \frac{\partial \mathcal{A}_j(x)}{\partial x_\ell} \right\|},    (14)

    \tau_i^l = \max_{1 \le \ell \le n}
      \frac{1 + |b_\ell|}{1 + \left\| \frac{\partial g_i(x)}{\partial x_\ell} \right\|}.    (15)
Furthermore, we calculate π > 0 so that

    \lambda_{\max}(\mathcal{A}_j(x)) < \pi, \quad j = 1, \ldots, k_s,

and set p⁰ = π e, where e ∈ R^{k_s + k_l} is the vector with ones in all components.
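Under the reading of (14) above (the maximum taken over the n variables, with ‖·‖ a matrix norm of the constraint derivative; both are our assumptions where the original is garbled), the initialization is straightforward:

```python
import numpy as np

def initial_U(b, Agrad, m_j):
    # tau_j = m_j * max_l (1 + |b_l|) / (1 + ||dA_j/dx_l||), cf. (14);
    # Agrad[l] holds the (constant, in the affine case) derivative dA_j/dx_l.
    tau = m_j * max((1.0 + abs(bl)) / (1.0 + np.linalg.norm(Al))
                    for bl, Al in zip(b, Agrad))
    return tau * np.eye(m_j)     # U_j^0 = tau_j * I_{m_j}
```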
10
4.3. Unconstrained minimization
The tool used in step (i) of Algorithm 1.1 (approximate unconstrained minimization) is the modified Newton method combined with a cubic linesearch. In each step we calculate the search direction d by solving the Newton equation and find α_max so that the conditions

    \lambda_{\max}(\mathcal{A}_j(x^k + \alpha d)) < p_j^k, \quad j = 1, \ldots, k_s,

hold for all 0 < α < α_max.
4.4. Update of multipliers
First we would like to motivate the multiplier update formula in Algorithm 1.1.

Proposition 4.1. Let x^{k+1} be the minimizer of the augmented Lagrangian F with respect to x in the k-th iteration. If we choose U^{k+1} as in Algorithm 1.1, we have

    \nabla_x L(x^{k+1}, U^{k+1}, p^k) = 0,

where L denotes the standard Lagrangian of our initial problem (SDP).
An outline of the proof is given next. The gradient of F with respect to x reads as

    \nabla_x F(x, U, p) = b +
    \begin{pmatrix}
      \left\langle U,\, D_{\mathcal{A}} \Phi_p\!\left(\mathcal{A}(x); \frac{\partial \mathcal{A}(x)}{\partial x_1}\right) \right\rangle \\
      \vdots \\
      \left\langle U,\, D_{\mathcal{A}} \Phi_p\!\left(\mathcal{A}(x); \frac{\partial \mathcal{A}(x)}{\partial x_n}\right) \right\rangle
    \end{pmatrix}.    (16)

It can be shown that (16) can be written as

    b + \mathcal{A}^* \left( D_{\mathcal{A}} \Phi_p(\mathcal{A}(x); U) \right),

where 𝒜* denotes the operator adjoint to the derivative of 𝒜. Hence, with the choice U^{k+1} = D_𝒜 Φ_p(𝒜(x^{k+1}); U^k) from Algorithm 1.1,

    \nabla_x F(x^{k+1}, U^k, p^k) = \nabla_x L(x^{k+1}, U^{k+1}, p^k),

and so we get ∇_x L(x^{k+1}, U^{k+1}, p^k) = 0.
For our special choice of the penalty function Φ_p^rec, the multiplier update can be written as

    U^{k+1} = (p^k)^2\, Z(x)\, U^k Z(x),    (17)

where Z was defined in Theorem 2.2.
Numerical tests indicated that big changes in the multipliers should be avoided, for two reasons. First, they may lead to a large number of Newton steps in the subsequent iteration. Second, it may happen that already after a few steps the multipliers become ill-conditioned and the algorithm suffers from numerical troubles. To overcome these difficulties, we do the following:

1. Calculate U^{k+1} using the update formula in Algorithm 1.1.
2. Choose some positive λ ≤ 1, typically λ = 0.7.
3. If the eigenvalues λ_min(U^k), λ_max(U^k), λ_min(U^{k+1}) and λ_max(U^{k+1}) can be calculated in a reasonable amount of time, check the inequalities

    \frac{\lambda_{\max}(U^{k+1})}{\lambda_{\max}(U^k)} < \frac{1}{\lambda}, \qquad
    \frac{\lambda_{\min}(U^{k+1})}{\lambda_{\min}(U^k)} > \lambda,

i.e., check that the extremal eigenvalues did not change by more than the factors 1/λ and λ.
4. If both inequalities hold, use the initial update formula. If at least one of the inequalities is violated, or if the calculation of the eigenvalues is too complex, update the current multiplier by

    U_{\mathrm{new}} = U^k + \lambda (U^{k+1} - U^k).    (18)
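With the reading of step 3 given above (the precise inequalities are garbled in our copy, so the sign conventions below follow that reconstruction), the safeguard can be sketched as:

```python
import numpy as np

def safeguarded_update(U, U_new, lam=0.7, eigs_cheap=True):
    # Accept the full update only if the extremal eigenvalues changed
    # by at most the factors 1/lam and lam; otherwise damp as in (18).
    if eigs_cheap:
        wU, wN = np.linalg.eigvalsh(U), np.linalg.eigvalsh(U_new)
        if wN.max() / wU.max() < 1.0 / lam and wN.min() / wU.min() > lam:
            return U_new
    return U + lam * (U_new - U)             # damped update (18)
```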
4.5. Stopping criteria and penalty update
When testing our algorithm we observed that the Newton method needs many steps during the first global iterations. To improve this, we adopted the following strategy: during the first three iterations we do not update the penalty vector p at all. Furthermore, we stop the unconstrained minimization when ‖∇_x F(x, U, p)‖ is smaller than some α₀ > 0 which is not too small, typically 1.0. After this kind of warm start, we change the stopping criterion for the unconstrained minimization to ‖∇_x F(x, U, p)‖ ≤ α, where in most cases α = 0.01 is a good choice. Algorithm 1.1 is stopped if one of the following inequalities holds:
    \frac{|b^T x^k - F(x^k, U^k, p)|}{|b^T x^k|} < \varepsilon, \qquad
    \frac{|b^T x^k - b^T x^{k-1}|}{|b^T x^k|} < \varepsilon,

where ε is typically 10⁻⁷.
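The two relative criteria are cheap to evaluate once bᵀx^k and F(x^k, U^k, p) are at hand. A small sketch (we normalize both criteria by |bᵀx^k|, which is how we read the garbled original, and guard against a zero objective):

```python
def stopped(btx_k, F_k, btx_prev, eps=1e-7):
    # Stop if the relative gap between objective and augmented Lagrangian,
    # or the relative objective change, falls below eps.
    scale = max(abs(btx_k), 1e-16)           # avoid division by zero
    return (abs(btx_k - F_k) / scale < eps or
            abs(btx_k - btx_prev) / scale < eps)
```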
4.6. Sparse linear algebra
Many semidefinite programs have a very sparse data structure and therefore have to be treated by sparse linear algebra routines. In our implementation, we use sparse linear algebra routines to perform the following two tasks.

Construction of the Hessian. In each Newton step, the Hessian of the augmented Lagrangian has to be calculated. As we have seen in Section 3, the complexity of this task can be drastically reduced if we make use of the sparsity structure of the constraint matrices 𝒜_j(x) and of the corresponding partial derivatives ∂𝒜_j(x)/∂x_i. Since there is a great variety of different sparsity types, we refer to the paper by Fujisawa, Kojima and Nakata on exploiting sparsity in semidefinite programming [6], where one can find the ideas we follow in our implementation.
Cholesky factorization. The second task is the factorization of the Hessian. In the initial iteration, we check the sparsity structure of the Hessian and do the following:

- If the fill-in of the Hessian is below 20%, we make use of the fact that the sparsity structure will be the same in each Newton step in all iterations. We therefore create a symbolic pattern of the Hessian, store it, and factorize the Hessian by the sparse Cholesky solver of Ng and Peyton [11], which is very efficient for sparse problems with constant sparsity structure.

- Otherwise, if the Hessian is dense, we use the Cholesky solver from LAPACK which, in its newest version, is very robust even for small pivots.
5. Remarks
5.1. SOCP problems
Let us recall that the PBM method was originally developed for large-scale NLP problems. Our generalized method can therefore naturally handle problems with both NLP and SDP constraints, where the NLP constraints are penalized by the quadratic-logarithmic function φ_ql from (3) and the augmented Lagrangian contains terms from both kinds of constraints. The main change in Algorithm 1.1 is in step (ii), the multiplier update, which is now done separately for the different kinds of constraints.

The method can thus be used, for instance, for the solution of Second-Order Conic Programming (SOCP) problems combined with SDP constraints, i.e., problems of the type

    \min_{x \in \mathbb{R}^n} b^T x
    \quad \text{s.t.} \quad
    \mathcal{A}(x) \preceq 0, \qquad
    A_q x - c_q \preceq_q 0, \qquad
    A_l x - c_l \le 0,
where b ∈ R^n, 𝒜 : R^n → S^m is, as before, a convex operator, A_q are k_q × n matrices and A_l is a k_l × n matrix. The inequality symbol ⪯_q denotes the partial order induced by the second-order (Lorentz) cone. Since the second-order cone constraint is nonsmooth, instead of a constraint of the type

    a_1 x_1 \ge \sqrt{a_2 x_2^2 + \ldots + a_m x_m^2},

we work with a (smooth and convex) constraint

    a_1 x_1 \ge \sqrt{a_2 x_2^2 + \ldots + a_m x_m^2 + \varepsilon}.
The value of ε can be decreased during the iterations of Algorithm 1.1. In PENNON we set ε = p · 10⁻⁶, where p is the penalty parameter in Algorithm 1.1. In this way, we obtain solutions of SOCP problems of high accuracy. This is demonstrated in Section 6.
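As an illustration of the ε-smoothing (the function name and the vectorized form are our own):

```python
import numpy as np

def soc_residual_smooth(x, a, eps):
    # Smoothed second-order-cone constraint, cf. Section 5.1:
    #   sqrt(a_2 x_2^2 + ... + a_m x_m^2 + eps) - a_1 x_1 <= 0.
    # For eps > 0 the square root is differentiable everywhere.
    return np.sqrt(np.dot(a[1:], x[1:] ** 2) + eps) - a[0] * x[0]

p = 1.0                                      # current penalty parameter
eps = p * 1e-6                               # PENNON's coupling of eps to p
g = soc_residual_smooth(np.array([2.0, 0.5, 0.5]), np.ones(3), eps)
```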
5.2. Convex and nonconvex problems
We would like to emphasize that, although used only for linear SDPs so far, Algorithm 1.1 is proposed for general convex problems. This should be kept in mind when comparing PENNON (on test sets of linear problems) with other codes that are based on genuinely linear algorithms.

We can go even a step further and try to generalize Algorithm 1.1 to nonlinear nonconvex problems, where the nonconvexity can be both in the NLP and in the SDP constraint. Examples of nonconvex SDP problems can be found in [1, 8, 9]. How to proceed in this case? The idea is quite simple: we apply Algorithm 1.1 and, whenever we hit a nonconvex point in step (i), we switch from the Newton method to the Levenberg-Marquardt method. More precisely, one step of the minimization method in step (i) is defined as follows:

- Given a current iterate (x, U, p), compute the gradient g and Hessian H of F at x.
- Compute the minimal eigenvalue λ_min of H. If λ_min < 10⁻³, set

      H(\alpha) = H + (-\lambda_{\min} + \alpha) I.

- Compute the search direction

      d(\alpha) = -H(\alpha)^{-1} g.

- Perform a line-search in direction d(α). Denote the step-length by s.

- Set

      x_{\mathrm{new}} = x + s\, d(\alpha).
Obviously, for a convex F, this is just a Newton step with line-search. For nonconvex functions, we can use a shift of the spectrum of H with a fixed parameter α = 10⁻³. This approach proved to work well on several nonconvex NLP problems, and we have reasons to believe that it will work for nonconvex SDPs, too. Obviously, the fixed shift is just the simplest approach, and one can use more sophisticated ones, like a plane-search (with respect to α and s), as proposed in [8], or an approximate version of the trust-region algorithm.
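One such regularized step, with the fixed shift α = 10⁻³ described above, might look as follows (a sketch; the dense eigenvalue computation stands in for whatever PENNON uses internally):

```python
import numpy as np

def lm_newton_step(g, H, alpha=1e-3):
    # Shift the spectrum of H so that its smallest eigenvalue becomes
    # alpha whenever lambda_min < 1e-3, then solve for the direction.
    lam_min = np.linalg.eigvalsh(H)[0]
    if lam_min < 1e-3:
        H = H + (alpha - lam_min) * np.eye(H.shape[0])
    return -np.linalg.solve(H, g)            # d(alpha); line-search follows
```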
5.3. Program MOPED
The program PENNON, both the NLP and SDP versions, was actually developed as a part of a software package MOPED for material optimization. The goal of this package is to design optimal structures considered as two- or three-dimensional continuum elastic bodies, where the design variables are the material properties, which may vary from point to point. Our aim is to optimize not only the distribution of material but also the material properties themselves. We are thus looking for the ultimately best structure among all possible elastic continua, in a framework of what is now usually referred to as free material design (see [16] for details). After analytic reformulation and discretization by the finite element method, the problem reduces to a large-scale NLP

    \min_{\alpha \in \mathbb{R},\, x \in \mathbb{R}^N}
    \left\{ \alpha - c^T x \;\middle|\; \alpha \ge x^T A_i x \text{ for } i = 1, \ldots, M \right\},

where M is the number of finite elements and N the number of degrees of freedom of the displacement vector. For real-world problems one should work with discretizations of size N, M ≈ 20 000.
From the practical application point of view ([7]), the multiple-load formulation of the free material optimization problem is much more important than the one above. Here we look for a structure that is stable with respect to a whole scenario of independent loads and which is the stiffest one in the worst-case sense. In this case, the original min-max-max formulation can be rewritten as a linear SDP of the following type (for details, see [2]):
    \min_{\alpha \in \mathbb{R},\, x \in (\mathbb{R}^N)^L}
    \left\{ \alpha - \sum_{\ell=1}^{L} (c^\ell)^T x^\ell
      \;\middle|\; \mathcal{A}_i(\alpha, x) \preceq 0 \text{ for } i = 1, \ldots, M \right\};

here L is the number of independent load cases (usually 2–4) and 𝒜_i : R^{NL+1} → S^d are linear matrix operators (where d is small). Written in a standard form (SDP), we get a problem with one linear matrix inequality
    \min_{x \in \mathbb{R}^n}
    \left\{ a^T x \;\middle|\; \sum_{i=1}^{n} x_i B_i \preceq 0 \right\},

with n = NL + 1, where the B_i are block diagonal matrices with many (about 5 000) small (11×11 to 20×20) blocks. Moreover, only a few (6–12) of these blocks are nonzero in any B_i, as schematically shown in the figure below.
[Figure: schematic of the sum x₁B₁ + x₂B₂ + ⋯, where each B_i contains only a few nonzero diagonal blocks.]
As a result, the Hessian of the augmented Lagrangian associated with this problem is a large and sparse matrix. PENNON proved to be particularly efficient for this kind of problem, as shown in the next section.
6. Computational results
Here we describe the results of our testing of PENNON and two other SDP codes, namely CSDP by Borchers [4] and SDPT3 by Toh, Todd and Tütüncü [15]. We have chosen these two codes as they were, on average, the fastest ones in the independent tests performed by Mittelmann [10]. We have used three sets of test problems: the SDPLIB collection of linear SDPs by Borchers [5]; the set of mater examples from multiple-load free material optimization (see Section 5.3); and selected problems from the DIMACS library [12] that combine SOCP and SDP constraints. We used the default setting of parameters for CSDP and SDPT3. PENNON, too, was tested with one setting of parameters for all the problems.
6.1. SDPLIB
Due to space (and memory) limitations, we do not present here the full SDPLIB results and select just several representative problems. Table 1 lists the selected SDPLIB problems, along with their dimensions.

We will present two tables with results obtained on two different computers. The reason for this is that the CSDP implementation under LINUX seems to be relatively much faster than under Sun Solaris. On the other hand, we did not have a LINUX computer running Matlab, hence the comparison with SDPT3 was done on a Sun workstation. Table 1 shows the results of CSDP and PENNON on a 650 MHz Pentium III with 512 MB memory running SuSE LINUX 7.3. PENNON was linked with the ATLAS library, while the CSDP binary was taken from Borchers' homepage [4].
Table 1. Selected SDPLIB problems and computational results using CSDP and PENNON, performed on a Pentium III PC (650 MHz) with 512 MB memory running SuSE LINUX 7.3.
CSDP PENNON
problem n m CPU digits CPU digits
arch8 174 335 25 7 79 6
control7 666 105 401 7 327 7
control10 1326 150 1981 6 3400 6
control11 1596 165 3514 6 6230 6
gpp250-4 250 250 33 7 25 7
gpp500-4 501 500 245 7 156 7
hinf15 91 37 1 5 5 3
mcp250-1 250 250 19 7 21 7
mcp500-1 500 500 117 7 175 7
qap9 748 82 21 7 35 5
qap10 1021 101 45 7 107 5
ss30 132 426 167 7 111 7
theta3 1106 150 47 7 97 7
theta4 1949 200 216 7 431 7
theta5 3028 250 686 7 1295 7
theta6 4375 300 1940 7 4346 7
truss7 86 301 1 7 1 7
truss8 496 628 19 7 130 7
equalG11 801 801 749 7 768 6
equalG51 1001 1001 1498 7 3173 7
maxG11 800 800 404 7 611 6
maxG32 2000 2000 5540 7 10924 7
maxG51 1001 1001 875 7 1461 7
qpG11 800 1600 2773 7 3886 7
qpG51 1000 2000 5780 7 7867 7
Table 2 gives results of SDPT3 and PENNON, obtained on Sun Ultra
10 with 384 MB of memory running Solaris 8. SDPT3 was used within
Matlab 6 and PENNON was linked with the ATLAS library.
Table 2. Selected SDPLIB problems and computational results using SDPT3 and
PENNON, performed on a Sun Ultra 10 with 384 MB of memory running Solaris 8.
SDPT3 PENNON
problem CPU digits CPU digits
arch8 52 7 203 6
control7 263 6 652 7
control10 1194 6 7082 6
control11 1814 6 13130 6
gpp250-4 46 7 42 6
gpp500-4 266 7 252 7
hinf15 16 5 6 3
mcp250-1 24 7 38 7
mcp500-1 109 7 290 7
qap9 31 4 64 5
qap10 55 4 176 5
ss30 141 7 246 7
theta3 64 7 176 7
theta4 212 7 755 7
theta5 657 7 2070 7
truss7 10 6 2 7
truss8 62 7 186 7
equalG11 1136 7 1252 7
equalG51 2450 7 3645 7
maxG11 500 7 1004 7
maxG51 1269 7 2015 7
qpG11 3341 7 7520 7
qpG51 7525 7 13479 7
In most of the SDPLIB problems, SDPT3 and CSDP are faster than PENNON. This is, basically, due to the number of Newton steps used by the particular algorithms. Since the complexity of Hessian assembly is about the same for all three codes, and data sparsity is handled in a similar way, the main time difference is given by the number of Newton steps. While CSDP and SDPT3 need, on average, 15–30 steps, PENNON needs about 2–3 times more steps. Recall that this is due to the fact that PENNON is based on an algorithm for general nonlinear convex problems, which allows it to solve a larger class of problems. This is the price we pay for the generality. We believe that, in this light, the code is competitive.
6.2. mater problems
Next we present results for the mater examples. These results are taken from Mittelmann [10] and were obtained on a Sun Ultra 60 (450 MHz) with 2 GB memory, running Solaris 8 (except for mater-5 solved by CSDP and mater-6 solved by CSDP and SDPT3, which were obtained on a Sun E6500, 400 MHz, with 24 GB memory). Table 3 shows the dimensions of the problems, together with the optimal objective values. Table 4 presents the test results for CSDP, SDPT3 and PENNON. It turned out that for this kind of problem the code SeDuMi by Sturm [14] was rather competitive, so we included this code in the table as well.
Table 3. mater problems
problem n m Optimal value
mater-3 1439 3588 -1.339163e+02
mater-4 4807 12498 -1.342627e+02
mater-5 10143 26820 -1.338016e+02
mater-6 20463 56311 -1.335387e+02
Table 4. Computational results for mater problems using SDPT3, CSDP, SeDuMi,
and PENNON, performed on a Sun Ultra 60 (450 MHz) with 2 GB of memory running
Solaris 8.
SDPT3 CSDP SeDuMi PENNON
problem CPU digits CPU digits CPU digits CPU digits
mater-3 718 7 129 8 59 11 50 10
mater-4 9544 5 2555 8 323 11 222 9
mater-5 51229 5 258391 8 738 10 630 8
mater-6 memory memory 2532 8 1602 8
6.3. DIMACS
Finally, in Table 5 we present results for selected problems from the DIMACS collection. These are mainly SOCP problems, apart from filter48-socp, which combines SOCP and SDP constraints. The results demonstrate that we can reach high accuracy even when working with the smooth reformulation of the SOCP constraints (see Section 5.1). The results also show the influence of linear constraints on the efficiency of the algorithm; cf. problems nb and nb-L1. This is due to the fact that, in our algorithm, the part of the Hessian corresponding to every (penalized) linear constraint is a dyadic, i.e., possibly full, matrix. We are working on an approach that treats linear constraints separately.
Table 5. Computational results on DIMACS problems using PENNON, performed on a Pentium III PC (650 MHz) with 512 MB memory running SuSE LINUX 7.3. Notation like [793x3] indicates that there were 793 (semidefinite, second-order, linear) blocks, each a symmetric matrix of order 3.
PENNON
problem n SDP blocks SO blocks lin. blocks CPU digits
nb 123 [793x3] 4 60 7
nb-L1 915 [793x3] 797 141 7
nb-L2 123 [1677,838x3] 4 100 8
nb-L2-bessel 123 [123,838x3] 4 90 8
qssp30 3691 [1891x4] 2 10 6
qssp60 14581 [7381x4] 2 55 5
nql30 3680 [900x3] 3602 17 4
filter48-socp 969 48 49 931 283 6
Acknowledgment
The authors would like to thank Hans Mittelmann for his help with testing the code and for implementing PENNON on the NEOS server. This research was supported by BMBF project 03ZOM3ER. The first author was partly supported by grant No. 201/00/0080 of the Grant Agency of the Czech Republic.
References
[1] A. Ben-Tal, F. Jarre, M. Kočvara, A. Nemirovski, and J. Zowe. Optimal design of trusses under a nonconvex global buckling constraint. Optimization and Engineering, 1:189–213, 2000.

[2] A. Ben-Tal, M. Kočvara, A. Nemirovski, and J. Zowe. Free material design via semidefinite programming. The multi-load case with contact conditions. SIAM J. Optimization, 9:813–832, 1997.

[3] A. Ben-Tal and M. Zibulevsky. Penalty/barrier multiplier methods for convex programming problems. SIAM J. Optimization, 7:347–366, 1997.

[4] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11:613–623, 1999. Available at http://www.nmt.edu/~borchers/.

[5] B. Borchers. SDPLIB 1.2, a library of semidefinite programming test problems. Optimization Methods and Software, 11 & 12:683–690, 1999. Available at http://www.nmt.edu/~borchers/.

[6] K. Fujisawa, M. Kojima, and K. Nakata. Exploiting sparsity in primal-dual interior-point method for semidefinite programming. Mathematical Programming, 79:235–253, 1997.

[7] H.R.E.M. Hörnlein, M. Kočvara, and R. Werner. Material optimization: Bridging the gap between conceptual and preliminary design. Aerospace Science and Technology, 2001. In print.

[8] F. Jarre. An interior method for nonconvex semidefinite programs. Optimization and Engineering, 1:347–372, 2000.

[9] M. Kočvara. On the modelling and solving of the truss design problem with global stability constraints. Struct. Multidisc. Optimization, 2001. In print.

[10] H. Mittelmann. Benchmarks for optimization software. Available at http://plato.la.asu.edu/bench.html.

[11] E. Ng and B. W. Peyton. Block sparse Cholesky algorithms on advanced uniprocessor computers. SIAM J. Scientific Computing, 14:1034–1056, 1993.

[12] G. Pataki and S. Schmieta. The DIMACS library of mixed semidefinite-quadratic-linear problems. Available at http://dimacs.rutgers.edu/challenges/seventh/instances.

[13] M. Stingl. Konvexe semidefinite Programmierung. Diploma Thesis, Institute of Applied Mathematics, University of Erlangen, 1999.

[14] J. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11 & 12:625–653, 1999. Available at http://fewcal.kub.nl/sturm/.

[15] R.H. Tütüncü, K.C. Toh, and M.J. Todd. SDPT3: a MATLAB software package for semidefinite-quadratic-linear programming, Version 3.0. School of Operations Research and Industrial Engineering, Cornell University, 2001. Available at http://www.orie.cornell.edu/~miketodd/todd.html.

[16] J. Zowe, M. Kočvara, and M. Bendsøe. Free material optimization via mathematical programming. Mathematical Programming, Series B, 79:445–466, 1997.