Structured Sparsity via Alternating Direction Methods∗
Zhiwei (Tony) Qin†
Donald Goldfarb‡
Department of Industrial Engineering and Operations Research
Columbia University
New York, NY 10027
December 14, 2011

∗ Research supported in part by DMS 10-16571, ONR Grant N00014-08-1-1118 and DOE Grant DE-FG02-08ER25856.
† Email: [email protected]. ‡ Email: [email protected].
Abstract
We consider a class of sparse learning problems in high dimensional feature space regularized
by a structured sparsity-inducing norm that incorporates prior knowledge of the group structure
of the features. Such problems often pose a considerable challenge to optimization algorithms
due to the non-smoothness and non-separability of the regularization term. In this paper, we
focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group
Lasso penalty (l1/l2-norm) and the l1/l∞-norm. We propose a unified framework based on the
augmented Lagrangian method, under which problems with both types of regularization and
their variants can be efficiently solved. As one of the core building-blocks of this framework,
we develop new algorithms using a partial-linearization/splitting technique and prove that the
accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction
augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two
real-world problems to compare the relative merits of the two norms.
Keywords: structured sparsity, overlapping Group Lasso, alternating direction methods, variable
splitting, augmented Lagrangian
1 Introduction
For feature learning problems in a high-dimensional space, sparsity in the feature vector is usually
a desirable property. Many statistical models have been proposed in the literature to enforce
sparsity, dating back to the classical Lasso model (l1 -regularization) [45, 10]. The Lasso model is
particularly appealing because it can be solved by very efficient proximal gradient methods; e.g.,
see [12]. However, the Lasso does not take into account the structure of the features [50]. In
many real applications, the features in a learning problem are often highly correlated, exhibiting
a group structure. Structured sparsity has been shown to be effective in those cases. The Group
Lasso model [49, 2, 41] assumes disjoint groups and enforces sparsity on the pre-defined groups
of features. This model has been extended to allow for groups that are hierarchical as well as
overlapping [25, 27, 3] with a wide array of applications from gene selection [27] to computer
vision [23, 26]. For image denoising problems, extensions with non-integer block sizes and adaptive
partitions have been proposed in [36, 37]. In this paper, we consider the following basic model of
minimizing the squared-error loss with a regularization term to induce group sparsity:
\[
\min_{x \in \mathbb{R}^m} \; L(x) + \Omega(x), \tag{1}
\]
where
\[
L(x) = \tfrac{1}{2}\|Ax - b\|^2, \quad A \in \mathbb{R}^{n \times m},
\]
\[
\Omega(x) = \begin{cases} \Omega_{l_1/l_2}(x) \equiv \lambda \sum_{s \in \mathcal{S}} w_s \|x_s\|, & \text{or} \\[2pt] \Omega_{l_1/l_\infty}(x) \equiv \lambda \sum_{s \in \mathcal{S}} w_s \|x_s\|_\infty, \end{cases} \tag{2}
\]
S = {s_1, · · · , s_{|S|}} is the set of group indices with |S| = J, and the elements (features) in the groups possibly overlap [11, 30, 25, 3]. In this model, λ, w_s, and S are all pre-defined; ‖·‖ without a subscript denotes the l2-norm. We note that the penalty term Ω_{l1/l2}(x) in (2) is different from the
one proposed in [24], although both are called overlapping Group Lasso penalties. In particular,
(1)-(2) cannot be cast into a non-overlapping group lasso problem as done in [24].
1.1 Related Work
Two proximal gradient methods have been proposed to solve a close variant of (1) with an l1 /l2
penalty,
\[
\min_{x \in \mathbb{R}^m} \; L(x) + \Omega_{l_1/l_2}(x) + \lambda \|x\|_1, \tag{3}
\]
which has an additional l1 -regularization term on x. Chen et al. [11] replace Ωl1 /l2 (x) with a
smooth approximation Ωη (x) by using Nesterov’s smoothing technique [33] and solve the resulting
problem by the Fast Iterative Shrinkage Thresholding algorithm (FISTA) [4]. The parameter η is a
smoothing parameter, upon which the practical and theoretical convergence speed of the algorithm
critically depends. Liu and Ye [29] also apply FISTA to solve (3), but in each iteration, they
transform the computation of the proximal operator associated with the combined penalty term
into an equivalent constrained smooth problem and solve it by Nesterov’s accelerated gradient
descent method [33]. Mairal et al. [30] apply the accelerated proximal gradient method to (1) with
l1 /l∞ penalty and propose a network flow algorithm to solve the proximal problem associated with
Ωl1 /l∞ (x). Mosci et al.’s method [32] for solving the Group Lasso problem in [24] is in the same
spirit as [29], but their approach uses a projected Newton method.
1.2 Our Contributions
We take a unified approach to tackle problem (1) with both l1 /l2 - and l1 /l∞ -regularizations. Our
strategy is to develop efficient algorithms based on the Alternating Linearization Method with
Skipping (ALM-S) [19] and FISTA for solving an equivalent constrained version of problem (1) (to
be introduced in Section 2) in an augmented Lagrangian method framework. Specifically, we make
the following contributions in this paper:
• We build a general framework based on the augmented Lagrangian method, under which
learning problems with both l1 /l2 and l1 /l∞ regularizations (and their variants) can be solved.
This framework allows for experimentation with its key building blocks.
• We propose new algorithms: ALM-S with partial splitting (APLM-S) and FISTA with partial
linearization (FISTA-p), to serve as the key building block for this framework. We prove that
APLM-S and FISTA-p have convergence rates of O(1/k) and O(1/k²) respectively, where k is
the number of iterations. Our algorithms are easy to implement and tune, and they do not
require line-search, eliminating the need to evaluate the objective function at every iteration.
• We evaluate the quality and speed of the proposed algorithms and framework against state-of-the-art approaches on a rich set of synthetic test data and compare the l1/l2 and l1/l∞ models
on breast cancer gene expression data [47] and a video sequence background subtraction task
[30].
2 A Variable-Splitting Augmented Lagrangian Framework
In this section, we present a unified framework, based on variable splitting and the augmented
Lagrangian method for solving (1) with both l1 /l2 - and l1 /l∞ -regularizations. This framework
reformulates problem (1) as an equivalent linearly-constrained problem, by using the following
variable-splitting procedure. Let y ∈ R^M, with M = Σ_{s∈S} |s|, be the vector obtained from the vector x ∈ R^m by repeating components of x so that no component of y belongs to more than one group. The relationship between x and y is specified by the linear constraint Cx = y, where the (i, j)-th element of the matrix C ∈ R^{M×m} is
\[
C_{i,j} = \begin{cases} 1, & \text{if } y_i \text{ is a replicate of } x_j, \\ 0, & \text{otherwise.} \end{cases} \tag{4}
\]
For examples of C, refer to [11]. Consequently, (1) is equivalent to
\[
\begin{aligned}
\min_{x, y} \quad & F_{\mathrm{obj}}(x, y) \equiv \tfrac{1}{2}\|Ax - b\|^2 + \tilde{\Omega}(y) \\
\text{s.t.} \quad & Cx = y,
\end{aligned} \tag{5}
\]
where Ω̃(y) is the non-overlapping group-structured penalty term corresponding to Ω(y) defined in
(2). Note that C is a highly sparse matrix, and D = CᵀC is a diagonal matrix with the diagonal
entries equal to the number of times that each entry of x is included in some group. Problem
(5) now includes two sets of variables x and y, where x appears only in the loss term L(x) and y
appears only in the penalty term Ω̃(y).
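To make the variable-splitting construction concrete, the following sketch in Python with NumPy/SciPy (the paper's own implementations are in C++ and Matlab; the group definitions below are illustrative, not taken from the paper) builds the sparse matrix C from a list of possibly overlapping group index sets and checks that D = CᵀC is diagonal, with D_jj counting how many groups contain feature j.

```python
import numpy as np
import scipy.sparse as sp

def build_duplication_matrix(groups, m):
    """Build C in R^{M x m} with C[i, j] = 1 iff y_i is a replicate of x_j,
    stacking the (possibly overlapping) groups one after another."""
    rows, cols = [], []
    for s in groups:                 # each s is a list of feature indices
        for j in s:
            rows.append(len(rows))   # next row (component) of y
            cols.append(j)
    M = len(rows)
    return sp.csr_matrix((np.ones(M), (rows, cols)), shape=(M, m))

# Example with m = 5 features and two overlapping groups {0,1,2} and {2,3,4}.
groups = [[0, 1, 2], [2, 3, 4]]
C = build_duplication_matrix(groups, m=5)
D = (C.T @ C).toarray()
print(np.diag(D))   # [1. 1. 2. 1. 1.]: feature 2 appears in two groups
```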
All the non-overlapping versions of Ω(·), including the Lasso and Group Lasso penalties, correspond to the special case C = I, i.e. x = y. Hence, (5) in this case is equivalent to applying variable-splitting
on x. Problems with a composite penalty term, such as the Elastic Net, λ1‖x‖1 + λ2‖x‖², can also be reformulated in a similar way by merging the smooth part of the penalty term (λ2‖x‖² in the case of the Elastic Net) with the loss function L(x).
To solve (5), we apply the augmented Lagrangian method [22, 38, 34, 6] to it. This method,
Algorithm 2.1, minimizes the augmented Lagrangian
\[
\mathcal{L}(x, y, v) = \tfrac{1}{2}\|Ax - b\|^2 - v^T(Cx - y) + \tfrac{1}{2\mu}\|Cx - y\|^2 + \tilde{\Omega}(y) \tag{6}
\]
exactly for a given Lagrange multiplier v in every iteration followed by an update to v. The
parameter µ in (6) controls the amount of weight that is placed on violations of the constraint Cx =
y. Algorithm 2.1 can also be viewed as a dual ascent algorithm applied to P(v) = min_{x,y} L(x, y, v) [5], where v is the dual variable, 1/µ is the step-length, and Cx − y is the gradient ∇_v P(v). This
Algorithm 2.1 AugLag
1: Choose x^0, y^0, v^0.
2: for l = 0, 1, · · · do
3:    (x^{l+1}, y^{l+1}) ← arg min_{x,y} L(x, y, v^l)
4:    v^{l+1} ← v^l − (1/µ)(Cx^{l+1} − y^{l+1})
5:    Update µ according to the chosen updating scheme.
6: end for
algorithm does not require µ to be very small to guarantee convergence to the solution of problem
(5) [34]. However, solving the problem in Line 3 of Algorithm 2.1 exactly can be very challenging
in the case of structured sparsity. We instead seek an approximate minimizer of the augmented
Lagrangian via the abstract subroutine ApproxAugLagMin(x, y, v). The following theorem [40]
guarantees the convergence of this inexact version of Algorithm 2.1.
Theorem 2.1. Let α_l := L(x^l, y^l, v^l) − inf_{x∈R^m, y∈R^M} L(x, y, v^l) and F* be the optimal value of problem (5). Suppose problem (5) satisfies the modified Slater's condition, and
\[
\sum_{l=1}^{\infty} \sqrt{\alpha_l} < +\infty. \tag{7}
\]
Then, the sequence {v^l} converges to v*, which satisfies
\[
\inf_{x \in \mathbb{R}^m, y \in \mathbb{R}^M} \; F_{\mathrm{obj}}(x, y) - (v^*)^T(Cx - y) = F^*,
\]
while the sequence {x^l, y^l} satisfies lim_{l→∞} Cx^l − y^l = 0 and lim_{l→∞} F_obj(x^l, y^l) = F*.
The condition (7) requires that the augmented Lagrangian subproblem be solved with increasing
accuracy. We formally state this framework in Algorithm 2.2. We index the iterations of Algorithm
Algorithm 2.2 OGLasso-AugLag
1: Choose x^0, y^0, v^0.
2: for l = 0, 1, · · · do
3:    (x^{l+1}, y^{l+1}) ← ApproxAugLagMin(x^l, y^l, v^l), to compute an approximate minimizer of L(x, y, v^l)
4:    v^{l+1} ← v^l − (1/µ)(Cx^{l+1} − y^{l+1})
5:    Update µ according to the chosen updating scheme.
6: end for
2.2 by l and call them 'outer iterations'. In Section 3, we develop algorithms that implement ApproxAugLagMin(x, y, v). The iterations of these subroutines are indexed by k and are called 'inner iterations'.
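As a concrete illustration of the outer loop, here is a minimal Python sketch of Algorithm 2.2 (the paper's implementations are in C++ and Matlab; the call signature of approx_auglag_min is illustrative, standing for any of the subroutines developed in Section 3, and the stopping test shown is a simplified version of the residual-based criteria described in Section 5.1).

```python
import numpy as np

def oglasso_auglag(approx_auglag_min, C, x0, y0, mu=0.01,
                   max_outer=500, tol_out=1e-4):
    """Sketch of the outer loop of Algorithm 2.2 (OGLasso-AugLag)."""
    x, y = x0.copy(), y0.copy()
    v = np.zeros_like(y0)            # Lagrange multiplier for Cx = y
    tol_in = 1e-2                    # inner tolerance, tightened below
    for _ in range(max_outer):
        x, y = approx_auglag_min(x, y, v, mu, tol_in)
        r = C @ x - y                # primal residual of Cx = y
        v = v - r / mu               # multiplier update (Line 4)
        # Simplified stopping test: relative primal residual only;
        # the paper uses the max of primal and dual residuals (Section 5.1).
        if np.linalg.norm(r) <= tol_out * max(np.linalg.norm(C @ x),
                                              np.linalg.norm(y), 1.0):
            break
        tol_in = max(0.5 * tol_in, 0.2 * tol_out)   # cf. Section 5.1
    return x, y
```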
3 Methods for Approximately Minimizing the Augmented Lagrangian
In this section, we use the overlapping Group Lasso penalty Ω(x) = λ Σ_{s∈S} w_s ‖x_s‖ to illustrate
the optimization algorithms under discussion. The case of l1 /l∞ -regularization will be discussed in
Section 4. From now on, we assume without loss of generality that ws = 1 for every group s.
3.1 Alternating Direction Augmented Lagrangian (ADAL) Method
The well-known Alternating Direction Augmented Lagrangian (ADAL) method [15, 17, 18, 8]1
approximately minimizes the augmented Lagrangian by minimizing (6) with respect to x and y alternately, and then updates the Lagrange multiplier v on each iteration (e.g., see [7], Section 3.4).
Specifically, the single-iteration procedure that serves as the procedure ApproxAugLagMin(x, y, v)
is given below as Algorithm 3.1.
Algorithm 3.1 ADAL
1: Given x^l, y^l, and v^l.
2: x^{l+1} ← arg min_x L(x, y^l, v^l)
3: y^{l+1} ← arg min_y L(x^{l+1}, y, v^l)
4: return x^{l+1}, y^{l+1}.
The ADAL method, also known as the alternating direction method of multipliers (ADMM) and
the split Bregman method, has recently been applied to problems in signal and image processing
[12, 1, 20] and low-rank matrix recovery [28]. Its convergence has been established in [15]. This
method can accommodate a sum of more than two functions.
For example, by applying variableP
splitting (e.g., see [7, 8]) to the problem minx f (x) + K
g
(C
i x), it can be transformed into
i=1 i
min
x,y1 ,··· ,yK
s.t.
f (x) +
K
X
gi (yi )
i=1
yi = Ci x,
i = 1, · · · , K.
The subproblems corresponding to yi ’s can thus be solved simultaneously by the ADAL method.
This so-called simultaneous direction method of multipliers (SDMM) [42] is related to Spingarn’s
method of partial inverses [43] and has been shown to be a special instance of a more general
parallel proximal algorithm with inertia parameters [35].
Note that the problem solved in Line 3 of Algorithm 3.1,
\[
y^{l+1} = \arg\min_y \mathcal{L}(x^{l+1}, y, v^l) \equiv \arg\min_y \left\{ \tfrac{1}{2\mu}\|d^l - y\|^2 + \tilde{\Omega}(y) \right\}, \tag{8}
\]
where d^l = Cx^{l+1} − µv^l, is group-separable and hence can be solved in parallel. As in [39], each subproblem can be solved by applying the block soft-thresholding operator, T(d^l_s, µλ) ≡ (d^l_s / ‖d^l_s‖) max(0, ‖d^l_s‖ − λµ), s = 1, · · · , J. Solving for x^{l+1} in Line 2 of Algorithm 3.1, i.e.
\[
x^{l+1} = \arg\min_x \mathcal{L}(x, y^l, v^l) \equiv \arg\min_x \left\{ \tfrac{1}{2}\|Ax - b\|^2 - (v^l)^T Cx + \tfrac{1}{2\mu}\|Cx - y^l\|^2 \right\}, \tag{9}
\]
involves solving the linear system
\[
\left(A^T A + \tfrac{1}{\mu} D\right) x = A^T b + C^T v^l + \tfrac{1}{\mu} C^T y^l, \tag{10}
\]
where the matrix on the left hand side of (10) has dimension m×m. Many real-world data sets, such
as gene expression data, are highly under-determined. Hence, the number of features (m) is much
¹ Recently, Mairal et al. [31] also applied ADAL with two variants based on variable splitting to the overlapping Group Lasso problem.
larger than the number of samples (n). In such cases, one can use the Sherman-Morrison-Woodbury formula,
\[
\left(A^T A + \tfrac{1}{\mu} D\right)^{-1} = \mu D^{-1} - \mu^2 D^{-1} A^T \left(I + \mu A D^{-1} A^T\right)^{-1} A D^{-1}, \tag{11}
\]
and solve instead an n × n linear system involving the matrix I + µAD⁻¹Aᵀ. In addition, as long as µ stays the same, one has to factorize AᵀA + (1/µ)D or I + µAD⁻¹Aᵀ only once and store the factors for subsequent iterations.
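The two ADAL subproblems can be sketched compactly in Python (assumptions of this sketch, not the paper's C++/Matlab code: dense NumPy arrays, `groups` lists the index blocks of y, and `Dinv_diag` holds the reciprocals of the diagonal of D = CᵀC). The first function is the block soft-thresholding operator T used to solve (8); the second solves the x-subproblem (9)-(10) through the Sherman-Morrison-Woodbury identity (11).

```python
import numpy as np

def block_soft_threshold(d, groups, tau):
    """y-update (8): for each group s, T(d_s, tau) = (d_s/||d_s||) * max(0, ||d_s|| - tau)."""
    y = np.zeros_like(d)
    for s in groups:                     # s indexes one block of y
        nrm = np.linalg.norm(d[s])
        if nrm > 0.0:
            y[s] = d[s] * (max(0.0, nrm - tau) / nrm)
    return y

def adal_x_update_smw(A, b, C, v, y, mu, Dinv_diag):
    """x-update (9)-(10): solve (A^T A + D/mu) x = A^T b + C^T v + (1/mu) C^T y
    via the Sherman-Morrison-Woodbury identity (11), so only an n x n system
    in (I + mu A D^{-1} A^T) is solved."""
    rhs = A.T @ b + C.T @ v + (C.T @ y) / mu
    ADinv = A * Dinv_diag                # A D^{-1}: scales column j by 1/D_jj
    K = np.eye(A.shape[0]) + mu * (ADinv @ A.T)
    t = np.linalg.solve(K, ADinv @ rhs)
    return mu * (Dinv_diag * rhs) - mu**2 * (Dinv_diag * (A.T @ t))
```

In (8) the threshold is λµ, so the y-update reads y = block_soft_threshold(C @ x - mu * v, groups, lam * mu).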
When both n and m are very large, it might be infeasible to compute or store AᵀA, not to mention its eigen-decomposition, or the Cholesky decomposition of AᵀA + (1/µ)D. In this case, one can solve the linear systems using the preconditioned Conjugate Gradient (PCG) method [21]. Similar comments apply to the other algorithms proposed in Sections 3.2 - 3.4 below. Alternatively, we can apply FISTA to Line 3 in Algorithm 2.2 (see Section 3.5).
3.2 ALM-S: partial split (APLM-S)
We now consider applying the Alternating Linearization Method with Skipping (ALM-S) from [19]
to approximately minimize (6). In particular, we apply variable splitting (Section 2) to the variable
y, to which the group-sparse regularizer Ω̃ is applied (the original ALM-S splits both variables x and y), and reformulate (6) as follows.
\[
\begin{aligned}
\min_{x, y, \bar{y}} \quad & \tfrac{1}{2}\|Ax - b\|^2 - v^T(Cx - y) + \tfrac{1}{2\mu}\|Cx - y\|^2 + \tilde{\Omega}(\bar{y}) \\
\text{s.t.} \quad & y = \bar{y}.
\end{aligned} \tag{12}
\]
Note that the Lagrange multiplier v is fixed here. Defining
\[
f(x, y) := \tfrac{1}{2}\|Ax - b\|^2 - v^T(Cx - y) + \tfrac{1}{2\mu}\|Cx - y\|^2, \tag{13}
\]
\[
g(y) := \tilde{\Omega}(y) = \lambda \sum_{s} \|y_s\|, \tag{14}
\]
problem (12) is of the form
\[
\begin{aligned}
\min \quad & f(x, y) + g(\bar{y}) \\
\text{s.t.} \quad & y = \bar{y},
\end{aligned} \tag{15}
\]
to which we now apply partial-linearization.
3.2.1 Partial linearization and convergence rate analysis
Let us define
\[
F(x, y) := f(x, y) + g(y) = \mathcal{L}(x, y; v), \tag{16}
\]
\[
L_\rho(x, y, \bar{y}, \gamma) := f(x, y) + g(\bar{y}) + \gamma^T(\bar{y} - y) + \tfrac{1}{2\rho}\|\bar{y} - y\|^2, \tag{17}
\]
where γ is the Lagrange multiplier in the augmented Lagrangian (17) corresponding to problem
(15), and γg (ȳ) is a sub-gradient of g at ȳ. We now present our partial-split alternating linearization
algorithm to implement ApproxAugLagMin(x, y, v) in Algorithm 2.2.
We note that in Line 6 in Algorithm 3.2,
\[
x^{k+1} = \arg\min_x L_\rho(x; y^{k+1}, \bar{y}^k, \gamma^k) \equiv \arg\min_x f(x; y^{k+1}) \equiv \arg\min_x f(x; \bar{y}^k). \tag{18}
\]
Now, we have a variant of Lemma 2.2 in [19].
Algorithm 3.2 APLM-S
1: Given x^0, ȳ^0, v. Choose ρ, γ^0 such that −γ^0 ∈ ∂g(ȳ^0). Define f(x, y) as in (13).
2: for k = 0, 1, · · · until stopping criterion is satisfied do
3:    (x^{k+1}, y^{k+1}) ← arg min_{x,y} L_ρ(x, y, ȳ^k, γ^k).
4:    if F(x^{k+1}, y^{k+1}) > L_ρ(x^{k+1}, y^{k+1}, ȳ^k, γ^k) then
5:       y^{k+1} ← ȳ^k
6:       x^{k+1} ← arg min_x f(x, y^{k+1}) ≡ arg min_x L_ρ(x; y^{k+1}, ȳ^k, γ^k)
7:    end if
8:    ȳ^{k+1} ← p_f(x^{k+1}, y^{k+1}) ≡ arg min_ȳ L_ρ(x^{k+1}, y^{k+1}, ȳ, ∇_y f(x^{k+1}, y^{k+1}))
9:    γ^{k+1} ← ∇_y f(x^{k+1}, y^{k+1}) − (y^{k+1} − ȳ^{k+1})/ρ
10: end for
11: return (x^{K+1}, ȳ^{K+1})
Lemma 3.1. For any (x, y), if q̄ := arg min_ȳ L_ρ(x, y, ȳ, ∇_y f(x, y)) and
\[
F(x, \bar{q}) \le L_\rho(x, y, \bar{q}, \nabla_y f(x, y)), \tag{19}
\]
then for any (x̄, ȳ),
\[
2\rho\left(F(\bar{x}, \bar{y}) - F(x, \bar{q})\right) \ge \|\bar{q} - \bar{y}\|^2 - \|y - \bar{y}\|^2 + 2\rho\,(\bar{x} - x)^T \nabla_x f(x, y). \tag{20}
\]
Similarly, for any ȳ, if (p, q) := arg min_{x,y} L_ρ(x, y, ȳ, −γ_g(ȳ)) and
\[
F(p, q) \le L_\rho(p, q, \bar{y}, -\gamma_g(\bar{y})), \tag{21}
\]
then for any (x, y),
\[
2\rho\left(F(x, y) - F(p, q)\right) \ge \|q - y\|^2 - \|\bar{y} - y\|^2. \tag{22}
\]
Proof. See Appendix A.
Algorithm 3.2 checks condition (21) at Line 4 because the function g is non-smooth and condition
(21) may not hold no matter what the value of ρ is. When this condition is violated, a skipping
step occurs in which the value of y is set to the value of ȳ in the previous iteration (Line 5) and Lρ
re-minimized with respect to x (Line 6) to ensure convergence. Let us define a regular iteration of
Algorithm 3.2 to be an iteration where no skipping step occurs, i.e. Lines 5 and 6 are not executed.
Likewise, we define a skipping iteration to be an iteration where a skipping step occurs. Now, we
are ready to state the iteration complexity result for APLM-S.
Theorem 3.1. Assume that ∇_y f(x, y) is Lipschitz continuous with Lipschitz constant L_y(f), i.e., for any x, ‖∇_y f(x, y) − ∇_y f(x, z)‖ ≤ L_y(f)‖y − z‖ for all y and z. For ρ ≤ 1/L_y(f), the iterates (x^k, ȳ^k) in Algorithm 3.2 satisfy
\[
F(x^k, \bar{y}^k) - F(x^*, y^*) \le \frac{\|\bar{y}^0 - y^*\|^2}{2\rho(k + k_n)}, \quad \forall k, \tag{23}
\]
where (x∗ , y ∗ ) is an optimal solution to (12), and kn is the number of regular iterations among the
first k iterations.
Proof. See Appendix B.
Remark 3.1. For Theorem 3.1 to hold, we need ρ ≤ 1/L_y(f). From the definition of f(x, y) in (13), it is easy to see that L_y(f) = 1/µ regardless of the loss function L(x). Hence, we set ρ = µ, so that condition (19) in Lemma 3.1 is satisfied.
In Section 3.3, we will discuss the case where the iterations entirely consist of skipping steps.
We will show that this is equivalent to ISTA [4] with partial linearization as well as a variant of
ADAL. In this case, the inner Lagrange multiplier γ is redundant.
3.2.2 Solving the subproblems
We now show how to solve the subproblems in Algorithm 3.2. First, observe that since ρ = µ
\[
\arg\min_{\bar{y}} L_\rho(x, y, \bar{y}, \nabla_y f(x, y)) \equiv \arg\min_{\bar{y}} \left\{ \nabla_y f(x, y)^T \bar{y} + \tfrac{1}{2\mu}\|\bar{y} - y\|^2 + g(\bar{y}) \right\} \tag{24}
\]
\[
\equiv \arg\min_{\bar{y}} \left\{ \tfrac{1}{2\mu}\|d - \bar{y}\|^2 + \lambda \sum_s \|\bar{y}_s\| \right\}, \tag{25}
\]
where d = Cx − µv. Hence, ȳ can be obtained by applying the block soft-thresholding operator T(d_s, µλ) as in Section 3.1. Next consider the subproblem
\[
\min_{(x,y)} L_\rho(x, y, \bar{y}, \gamma) \equiv \min_{(x,y)} \left\{ f(x, y) + \gamma^T(\bar{y} - y) + \tfrac{1}{2\mu}\|\bar{y} - y\|^2 \right\}. \tag{26}
\]
It is easy to verify that solving the linear system given by the optimality conditions for (26) by block Gaussian elimination yields the system
\[
\left(A^T A + \tfrac{1}{2\mu} D\right) x = r_x + \tfrac{1}{2} C^T r_y \tag{27}
\]
for computing x, where r_x = Aᵀb + Cᵀv and r_y = −v + γ + ȳ/ρ. Then y can be computed as (µ/2)(r_y + (1/µ)Cx).
As in Section 3.1, only one Cholesky factorization of AᵀA + (1/(2µ))D is required for each invocation of Algorithm 3.2. Hence, the amount of work involved in each iteration of Algorithm 3.2 is comparable to that of an ADAL iteration.
It is straightforward to derive an accelerated version of Algorithm 3.2, which we shall refer to as FAPLM-S, that corresponds to a partial-split version of the FALM algorithm proposed in [19] and also requires O(√(L(f)/ε)) iterations to obtain an ε-optimal solution. In Section 3.4, we present an algorithm FISTA-p, which is a special version of FAPLM-S in which every iteration is a skipping iteration and which has a much simpler form than FAPLM-S, while having essentially the same iteration complexity.
It is also possible to apply ALM-S directly, which splits both x and y, to solve the augmented
Lagrangian subproblem. Similar to (12), we reformulate (6) as
\[
\begin{aligned}
\min_{(x,y),(\bar{x},\bar{y})} \quad & \tfrac{1}{2}\|Ax - b\|^2 - v^T(Cx - y) + \tfrac{1}{2\mu}\|Cx - y\|^2 + \lambda \sum_s \|\bar{y}_s\| \\
\text{s.t.} \quad & x = \bar{x}, \quad y = \bar{y}.
\end{aligned} \tag{28}
\]
The functions f and g are defined as in (13) and (14), except that now we write g as g(x̄, ȳ) even
though the variable x̄ does not appear in the expression for g. It can be shown that ȳ admits exactly
the same expression as in APLM-S, whereas x̄ is obtained by a gradient step, x − ρ∇x f (x, y). To
obtain x, we solve the linear system
\[
\left(A^T A + \tfrac{1}{\mu+\rho} D + \tfrac{1}{\rho} I\right) x = r_x + \tfrac{\rho}{\mu+\rho} C^T r_y, \tag{29}
\]
after which y is computed by y = (µρ/(µ+ρ))(r_y + (1/µ)Cx).
Remark 3.2. For ALM-S, the Lipschitz constant for ∇f(x, y) is L_f = λ_max(AᵀA) + d_max/µ, where d_max = max_i D_ii ≥ 1. For the complexity results in [19] to hold, we need ρ ≤ 1/L_f. Since λ_max(AᵀA) is usually not known, it is necessary to perform a backtracking line-search on ρ to ensure that F(x^{k+1}, y^{k+1}) ≤ L_ρ(x^{k+1}, y^{k+1}, x̄^k, ȳ^k, γ^k). In practice, we adopted the following continuation scheme instead. We initially set ρ = ρ_0 = µ/d_max and decreased ρ by a factor of β after a given number of iterations until ρ reached a user-supplied minimum value ρ_min. This scheme prevents ρ from being too small and hence negatively impacting computational performance. However, in both cases the left-hand side of the system (29) has to be re-factorized every time ρ is updated.
As we have seen above, the Lipschitz constant resulting from splitting both x and y is potentially much larger than 1/µ. Hence, partial linearization reduces the Lipschitz constant, which improves the bound on the right-hand side of (23) and allows Algorithm 3.2 to take larger step sizes (equal to µ). Compared to ALM-S, solving for x in the skipping step (Line 6) becomes harder. Intuitively, APLM-S does a better job of 'load-balancing' by managing a better trade-off between the hardness of the subproblems and the practical convergence rate.
3.3 ISTA: partial linearization (ISTA-p)
We can also minimize the augmented Lagrangian (6), which we write as L(x, y, v) = f (x, y) + g(y)
with f (x, y) and g(y) defined as in (13) and (14), using a variant of ISTA that only linearizes
f (x, y) with respect to the y variables. As in Section 3.2, we can set ρ = µ and guarantee the
convergence properties of ISTA-p (see Corollary 3.1 below). Formally, let (x, y) be the current
iterate and (x+ , y + ) be the next iterate. We compute y + by
\[
y^+ = \arg\min_{y'} L_\rho(x, y, y', \nabla_y f(x, y)) \tag{30}
\]
\[
= \arg\min_{y'} \sum_j \left( \tfrac{1}{2\mu}\|y'_j - d^y_j\|^2 + \lambda \|y'_j\| \right), \tag{31}
\]
where d^y = Cx − µv. Hence the solution y^+ to problem (31) is given blockwise by T([d^y]_j, µλ), j = 1, · · · , J.
Now given y + , we solve for x+ by
\[
x^+ = \arg\min_{x'} f(x', y^+) = \arg\min_{x'} \left\{ \tfrac{1}{2}\|Ax' - b\|^2 - v^T(Cx' - y^+) + \tfrac{1}{2\mu}\|Cx' - y^+\|^2 \right\}. \tag{32}
\]
The algorithm that implements subroutine ApproxAugLagMin(x, y, v) in Algorithm 2.2 by ISTA
with partial linearization is stated below as Algorithm 3.3.
As we remarked in Section 3.2, Algorithm 3.3 is equivalent to Algorithm 3.2 (APLM-S) in which every iteration is a skipping iteration. Hence, we have the following corollary of Theorem 3.1.
Algorithm 3.3 ISTA-p (partial linearization)
1: Given x^0, ȳ^0, v. Choose ρ. Define f(x, y) as in (13).
2: for k = 0, 1, · · · until stopping criterion is satisfied do
3:    x^{k+1} ← arg min_x f(x; ȳ^k)
4:    ȳ^{k+1} ← arg min_y L_ρ(x^{k+1}, ȳ^k, y, ∇_y f(x^{k+1}, ȳ^k))
5: end for
6: return (x^{K+1}, ȳ^{K+1})
Corollary 3.1. Assume ∇_y f(·, ·) is Lipschitz continuous with Lipschitz constant L_y(f). For ρ ≤ 1/L_y(f), the iterates (x^k, ȳ^k) in Algorithm 3.3 satisfy
\[
F(x^k, \bar{y}^k) - F(x^*, y^*) \le \frac{\|\bar{y}^0 - y^*\|^2}{2\rho k}, \quad \forall k, \tag{33}
\]
where (x^*, y^*) is an optimal solution to (12).
It is easy to see that (31) is equivalent to (8), and that (32) is the same as (9) in ADAL.
Remark 3.3. We have shown that with a fixed v, the ISTA-p iterations are exactly the same as
the ADAL iterations. The difference between the two algorithms is that ADAL updates the (outer)
Lagrange multiplier v in each iteration, while in ISTA-p, v stays the same throughout the inner
iterations. We can thus view ISTA-p as a variant of ADAL with delayed updating of the Lagrange
multiplier.
The ‘load-balancing’ behavior discussed in Section 3.2 is more obvious for ISTA-p. As we will
see in Section 3.5, if we apply ISTA (with full linearization) to minimize (6), solving for x is simply
a gradient step. Here, we need to minimize f (x, y) with respect to x exactly, while being able to
take larger step sizes in the other subproblem, due to the smaller associated Lipschitz constant.
3.4 FISTA-p
We now present FISTA-p, an accelerated version of ISTA-p. FISTA-p is a special case of FAPLM-S with a skipping step occurring in every iteration. We state the algorithm formally as Algorithm 3.4. The iteration complexity of FISTA-p (and FAPLM-S) is given by the following theorem.
Algorithm 3.4 FISTA-p (partial linearization)
1: Given x^0, ȳ^0, v. Choose ρ, and set z^0 = ȳ^0. Define f(x, y) as in (13).
2: for k = 0, 1, · · · , K do
3:    x^{k+1} ← arg min_x f(x; z^k)
4:    ȳ^{k+1} ← arg min_y L_ρ(x^{k+1}, z^k, y, ∇_y f(x^{k+1}, z^k))
5:    t_{k+1} ← (1 + √(1 + 4t_k²)) / 2
6:    z^{k+1} ← ȳ^{k+1} + ((t_k − 1)/t_{k+1})(ȳ^{k+1} − ȳ^k)
7: end for
8: return (x^{K+1}, ȳ^{K+1})
Theorem 3.2. Assuming that ∇_y f(·) is Lipschitz continuous with Lipschitz constant L_y(f) and ρ ≤ 1/L_y(f), the sequence {x^k, ȳ^k} generated by Algorithm 3.4 satisfies
\[
F(x^k, \bar{y}^k) - F(x^*, y^*) \le \frac{2\|\bar{y}^0 - y^*\|^2}{\rho(k + 1)^2}. \tag{34}
\]
Although we need to solve a linear system in every iteration of Algorithms 3.2, 3.3, and 3.4, the
left-hand-side of the system stays constant throughout the invocation of the algorithms because,
following Remark 3.1, we can always set ρ = µ. Hence, no line-search is necessary, and this step
essentially requires only one backward- and one forward-substitution, the complexity of which is
the same as a gradient step.
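For concreteness, the inner FISTA-p loop (Algorithm 3.4 with ρ = µ) can be sketched as follows in Python, reusing block_soft_threshold from the sketch in Section 3.1. The Cholesky factor of AᵀA + D/µ is computed once and reused; the initialization t_0 = 1 and the function name are assumptions of this sketch rather than values specified in the paper.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def fista_p(A, b, C, v, groups, lam, mu, y0, n_iter=100):
    """Sketch of the inner FISTA-p loop (Algorithm 3.4) with rho = mu."""
    D_diag = np.asarray((C.T @ C).sum(axis=1)).ravel()   # diagonal of C^T C
    factor = cho_factor(A.T @ A + np.diag(D_diag) / mu)  # factorize once
    base_rhs = A.T @ b + C.T @ v
    ybar, z, t = y0.copy(), y0.copy(), 1.0               # t0 = 1 assumed here
    for _ in range(n_iter):
        # Line 3: x^{k+1} = argmin_x f(x; z^k), i.e. system (10) with y = z^k
        x = cho_solve(factor, base_rhs + (C.T @ z) / mu)
        # Line 4: block soft-thresholding of d = Cx - mu*v with threshold mu*lam
        ybar_new = block_soft_threshold(C @ x - mu * v, groups, mu * lam)
        # Lines 5-6: momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = ybar_new + ((t - 1.0) / t_new) * (ybar_new - ybar)
        ybar, t = ybar_new, t_new
    return x, ybar
```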
3.5 ISTA/FISTA: full linearization
ISTA solves the following problem in each iteration to produce the next iterate (x^+, y^+):
\[
\min_{x', y'} \; \frac{1}{2\rho} \left\| \begin{pmatrix} x' \\ y' \end{pmatrix} - d \right\|^2 + \lambda \sum_s \|y'_s\| \;\equiv\; \frac{1}{2\rho}\|x' - d_x\|^2 + \sum_j \left( \frac{1}{2\rho}\|y'_j - d_{y_j}\|^2 + \lambda \|y'_j\| \right), \tag{35}
\]
where d = (d_x; d_y) = (x; y) − ρ∇f(x, y), and f(x, y) is defined in (13). It is easy to see that we can solve for x^+ and y^+ separately in (35). Specifically,
\[
x^+ = d_x, \tag{36}
\]
\[
y_j^+ = \frac{d_{y_j}}{\|d_{y_j}\|} \max(0, \|d_{y_j}\| - \lambda\rho), \quad j = 1, \ldots, J. \tag{37}
\]
Using ISTA to solve the outer augmented Lagrangian (6) subproblem is equivalent to taking only
skipping steps in ALM-S. In our experiments, we used the accelerated version of ISTA, i.e. FISTA
(Algorithm 3.5) to solve (6).
FISTA (resp. ISTA) is, in fact, an inexact version of FISTA-p (resp. ISTA-p), where we minimize with respect to x a linearized approximation
\[
\tilde{f}(x, z^k) := f(x^k, z^k) + \nabla_x f(x^k, z^k)^T (x - x^k) + \frac{1}{2\rho}\|x - x^k\|^2
\]
of the quadratic objective function f(x, z^k) in (32). The update to x in Line 3 of Algorithm 3.4 is
replaced by (36) as a result. Similar to FISTA-p, FISTA is also a special skipping version of the
full-split FALM-S. Considering that FISTA has an iteration complexity of O(1/k²), it is not surprising
that FISTA-p has the same iteration complexity.
Remark 3.4. Since FISTA requires only the gradient of f(x, y), it can easily handle any smooth convex loss function, such as the logistic loss for binary classification, L(x) = Σ_{i=1}^{N} log(1 + exp(−b_i a_i^T x)), where a_i^T is the i-th row of A, and b is the vector of labels. Moreover, when the scale of the data (min{n, m}) is so large that it is impractical to compute the Cholesky factorization of AᵀA, FISTA is a good choice to serve as the subroutine ApproxAugLagMin(x, y, v) in OGLasso-AugLag.
Algorithm 3.5 FISTA
1: Given x̄^0, ȳ^0, v. Choose ρ_0. Set t_0 = 0, z_x^0 = x̄^0, z_y^0 = ȳ^0. Define f(x, y) as in (13).
2: for k = 0, 1, · · · until stopping criterion is satisfied do
3:    Perform a backtracking line-search on ρ, starting from ρ_0.
4:    (d_x; d_y) ← (z_x^k; z_y^k) − ρ∇f(z_x^k, z_y^k)
5:    x̄^{k+1} ← d_x
6:    ȳ_j^{k+1} ← (d_{y_j}/‖d_{y_j}‖) max(0, ‖d_{y_j}‖ − λρ), j = 1, . . . , J.
7:    t_{k+1} ← (1 + √(1 + 4t_k²)) / 2
8:    z_x^{k+1} ← x̄^{k+1} + ((t_k − 1)/t_{k+1})(x̄^{k+1} − x̄^k)
9:    z_y^{k+1} ← ȳ^{k+1} + ((t_k − 1)/t_{k+1})(ȳ^{k+1} − ȳ^k)
10: end for
11: return (x̄^{K+1}, ȳ^{K+1})
4 Overlapping Group l1/l∞-Regularization
The subproblems with respect to y (or ȳ) involved in all the algorithms presented in the previous sections take the following form
\[
\min_y \; \frac{1}{2\rho}\|c - y\|^2 + \tilde{\Omega}(y), \tag{38}
\]
where Ω̃(y) = λ Σ_{s∈S̃} w_s ‖y_s‖_∞ in the case of l1/l∞-regularization. In (8), for example, c = Cx − µv. The solution to (38) is the proximal operator of Ω̃ [13, 12]. Similar to the classical Group Lasso, this problem is block-separable and hence all blocks can be solved simultaneously.
Again, for notational simplicity, we assume w_s = 1 ∀s ∈ S̃ and omit it from now on. For each s ∈ S̃, the subproblem in (38) is of the form
\[
\min_{y_s} \; \frac{1}{2}\|c_s - y_s\|^2 + \rho\lambda \|y_s\|_\infty. \tag{39}
\]
As shown in [48], the optimal solution to the above problem is cs − P (cs ), where P denotes the
orthogonal projector onto the ball of radius ρλ in the dual norm of the l∞ -norm, i.e. the l1 -norm.
The Euclidean projection onto the simplex can be computed in (expected) linear time [14, 9]. Duchi
et al. [14] show that the problem of computing the Euclidean projection onto the l1 -ball can be
reduced to that of finding the Euclidean projection onto the simplex in the following way. First,
we replace cs in problem (39) by |cs |, where the absolute value is taken component-wise. After
we obtain the projection zs onto the simplex, we can construct the projection onto the l1 -ball by
setting ys∗ = sign(cs )zs , where sign(·) is also taken component-wise.
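A minimal Python sketch of this proximal computation is given below; it uses the standard sort-based simplex projection rather than the expected-linear-time method of [14, 9], and the function names are illustrative.

```python
import numpy as np

def project_simplex(u, radius):
    """Euclidean projection of a nonnegative vector u onto
    {w >= 0, sum(w) = radius}, via the sort-based method."""
    srt = np.sort(u)[::-1]
    css = np.cumsum(srt)
    ks = np.arange(1, len(u) + 1)
    cond = srt - (css - radius) / ks > 0
    rho = ks[cond][-1]
    theta = (css[cond][-1] - radius) / rho
    return np.maximum(u - theta, 0.0)

def prox_linf(c, tau):
    """Prox of tau * ||.||_inf, i.e. the solution of (39) with tau = rho*lambda:
    c - P(c), where P projects onto the l1-ball of radius tau (Section 4)."""
    if np.abs(c).sum() <= tau:
        return np.zeros_like(c)          # c lies inside the l1-ball
    z = project_simplex(np.abs(c), tau)  # projection of |c| onto the simplex
    p = np.sign(c) * z                   # projection of c onto the l1-ball
    return c - p
```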
5 Experiments
We tested the OGLasso-AugLag framework (Algorithm 2.2) with four subroutines: ADAL, APLM-S, FISTA-p, and FISTA. We implemented the framework with the first three subroutines in C++ to compare them with the ProxFlow algorithm proposed in [30]. We used the C interface and BLAS and LAPACK subroutines provided by the AMD Core Math Library (ACML)².

² http://developer.amd.com/libraries/acml/pages/default.aspx. Ideally, we should have used the Intel Math Kernel Library (Intel MKL), which is optimized for Intel processors, but Intel MKL is not freely available.
Algorithm | Outer rel. dual residual s^{l+1}                         | Inner rel. primal residual                                | Inner rel. objective gradient residual
ADAL      | ‖Cᵀ(y^{l+1} − y^l)‖ / ‖Cᵀy^l‖                            | -                                                         | -
FISTA-p   | ‖Cᵀ(ȳ^{K+1} − z^K)‖ / ‖Cᵀz^K‖                            | ‖ȳ^{k+1} − z^k‖ / ‖z^k‖                                   | ‖Cᵀ(ȳ^{k+1} − z^k)‖ / ‖Cᵀz^k‖
APLM-S    | ‖Cᵀ(ȳ^{K+1} − y^{K+1})‖ / ‖Cᵀy^{K+1}‖                    | ‖ȳ^{k+1} − y^{k+1}‖ / ‖y^{k+1}‖                           | ‖Cᵀ(ȳ^{k+1} − y^{k+1})‖ / ‖Cᵀy^{k+1}‖
FISTA     | ‖(x̄^{K+1}, ȳ^{K+1}) − (z_x^K, z_y^K)‖ / ‖(z_x^K, z_y^K)‖ | ‖(x̄^{k+1}, ȳ^{k+1}) − (z_x^k, z_y^k)‖ / ‖(z_x^k, z_y^k)‖ | ‖(x̄^{k+1}, ȳ^{k+1}) − (z_x^k, z_y^k)‖ / ‖(z_x^k, z_y^k)‖

Table 1: Specification of the quantities used in the outer and inner stopping criteria.
To compare with ProxGrad [11], we implemented the framework and all four algorithms in Matlab. We did
not include ALM-S in our experiments because it is time-consuming to find the right ρ for the
inner loops as discussed in Remark 3.2, and our preliminary computational experience showed that
ALM-S was slower than the other algorithms, even when the heuristic ρ-setting scheme discussed
in Remark 3.2 was used, because a large number of steps were skipping steps, which meant that the
computation involved in solving the linear systems in those steps was wasted. All of our experiments
were performed on a laptop PC with an Intel Core 2 Duo 2.0 GHz processor and 4 GB of memory.
5.1 Algorithm parameters and termination criteria
Each algorithm (framework + subroutine)³ required several parameters to be set and termination criteria to be specified. We used stopping criteria based on the primal and dual residuals as in [8]. We specify the criteria for each of the algorithms below, but defer their derivation to Appendix C.

³ For conciseness, we use the subroutine names (e.g. FISTA-p) to represent the full algorithms that consist of the OGLasso-AugLag framework and the subroutines.
The maximum number of outer iterations was set to 500, and the tolerance for the outer loop was set at ε_out = 10⁻⁴. The number of inner iterations was capped at 2000, and the tolerance at the l-th outer iteration for the inner loops was ε_in^l. Our termination criterion for the outer iterations was
\[
\max\{r^l, s^l\} \le \epsilon_{\mathrm{out}}, \tag{40}
\]
where r^l = ‖Cx^l − y^l‖ / max{‖Cx^l‖, ‖y^l‖} is the outer relative primal residual and s^l is the relative dual residual,
which is given for each algorithm in Table 1. Recall that K+1 is the index of the last inner iteration of the l-th outer iteration; for example, for APLM-S, (x^{l+1}, y^{l+1}) takes the value of the last inner iterate (x^{K+1}, ȳ^{K+1}). We stopped the inner iterations when the maximum of the relative primal residual and the relative objective gradient residual for the inner problem was less than ε_in^l. (See Table 1 for the expressions of these two quantities.) We see there that s^{l+1} can be obtained directly from the relative gradient residual computed in the last inner iteration of the l-th outer iteration.
We set µ0 = 0.01 in all algorithms except that we set µ0 = 0.1 in ADAL for the data sets other
than the first synthetic set and the breast cancer data set. We set ρ = µ in FISTA-p and APLM-S
and ρ0 = µ in FISTA.
For Theorem 2.1 to hold, the solution returned by the function ApproxAugLagMin(x, y, v) has
to become increasingly more accurate over the outer iterations. However, it is not possible to
evaluate the sub-optimality quantity αl in (7) exactly because the optimal value of the augmented
Lagrangian L(x, y, v l ) is not known in advance. In our experiments, we used the maximum of the
relative primal and dual residuals (max{rl , sl }) as a surrogate to αl for two reasons: First, it has
been shown in [8] that rl and sl are closely related to αl . Second, the quantities rl and sl are
readily available as by-products of the inner and outer iterations. To ensure that the sequence {ε_in^l} satisfies (7), we basically set
\[
\epsilon_{\mathrm{in}}^{l+1} = \beta_{\mathrm{in}} \epsilon_{\mathrm{in}}^{l}, \tag{41}
\]
with ε_in^0 = 0.01 and β_in = 0.5. However, since we terminate the outer iterations at ε_out > 0, it is not necessary to solve the subproblems to an accuracy much higher than the one for the outer loop. On the other hand, it is also important for ε_in^l to decrease to below ε_out, since s^l is closely related to the quantities involved in the inner stopping criteria. Hence, we slightly modified (41) and used
\[
\epsilon_{\mathrm{in}}^{l+1} = \max\{\beta_{\mathrm{in}} \epsilon_{\mathrm{in}}^{l}, \, 0.2\,\epsilon_{\mathrm{out}}\}.
\]
Recently, we became aware of an alternative ‘relative error’ stopping criterion [16] for the inner
loops, which guarantees convergence of Algorithm 2.2. In our context, this criterion essentially
requires that the absolute dual residual is less than a fraction of the absolute primal residual. For
FISTA-p, for instance, this condition requires that the (l + 1)-th iterate satisfies
\[
2\,\bar{s}^{\,l+1}\left\|\begin{pmatrix} w_x^0 - x^{l+1} \\ w_y^l - y^{l+1} \end{pmatrix}\right\| + \frac{(\bar{s}^{\,l+1})^2}{\mu^2} \le \sigma\,(\bar{r}^{\,l+1})^2,
\]
where r̄ and s̄ are the numerators in the expressions for r and s respectively, σ = 0.99, w_x^0 is a constant, and w_y is an auxiliary variable updated in each outer iteration by w_y^{l+1} = w_y^l − (1/µ²)Cᵀ(ȳ^{K+1} − z^K). We experimented with this criterion but did not find any computational advantage over the heuristic based on the relative primal and dual residuals.
5.2 Strategies for updating µ
The penalty parameter µ in the outer augmented Lagrangian (6) not only controls the infeasibility in
the constraint Cx = y, but also serves as the step-length in the y-subproblem (and the x-subproblem
in the case of FISTA). We adopted two kinds of strategies for updating µ. The first one simply
kept µ fixed. In this case, choosing an appropriate µ0 was important for good performance. This
was especially true for ADAL in our computational experiments. Usually, a µ_0 in the range of 10⁻¹ to 10⁻³ worked well.
The second strategy is a dynamic scheme based on the values r^l and s^l [8]. Since 1/µ penalizes the primal infeasibility, a small µ tends to result in a small primal residual. On the other hand, a large µ tends to yield a small dual residual. Hence, to keep r^l and s^l approximately balanced in each outer iteration, our scheme updated µ as follows:
\[
\mu^{l+1} \leftarrow \begin{cases} \max\{\beta\mu^l, \mu_{\min}\}, & \text{if } r^l > \tau s^l, \\ \min\{\mu^l/\beta, \mu_{\max}\}, & \text{if } s^l > \tau r^l, \\ \mu^l, & \text{otherwise,} \end{cases} \tag{42}
\]
where we set µ_max = 10, µ_min = 10⁻⁶, τ = 10 and β = 0.5, except for the first synthetic data set, where we set β = 0.1 for ADAL, FISTA-p, and APLM-S.
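The update rule (42) is simple enough to state directly in code; a minimal Python sketch with the default values reported above:

```python
def update_mu(mu, r, s, beta=0.5, tau=10.0, mu_min=1e-6, mu_max=10.0):
    """Dynamic penalty update (42): shrink mu when the primal residual r
    dominates, grow it when the dual residual s dominates."""
    if r > tau * s:
        return max(beta * mu, mu_min)
    if s > tau * r:
        return min(mu / beta, mu_max)
    return mu
```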
5.3 Synthetic examples
To compare our algorithms with the ProxGrad algorithm proposed in [11], we first tested a synthetic
data set (ogl) using the procedure reported in [11] and [24]. The decision variables x were arranged in groups of ten, with adjacent groups having an overlap of three variables. The
support of x was set to the first half of the variables. Each entry in the design matrix A and the
non-zero entries of x were sampled from i.i.d. standard Gaussian distributions, and the output b
was set to b = Ax + ε, where the noise ε ∼ N(0, I). Two sets of data were generated as follows:
(a) Fix n = 5000 and vary the number of groups J from 100 to 1000 with increments of 100. (b)
Fix J = 200 and vary n from 1000 to 10000 with increments of 1000. The stopping criterion for
ProxGrad was the same as the one used for FISTA, and we set its smoothing parameter to 10⁻³. Figure 1 plots the CPU times taken by the Matlab version of our algorithms and ProxGrad (also in Matlab) on these scalability tests for l1/l2-regularization. A subset of the numerical results on
which these plots are based is presented in Tables 4 and 5.
The plots clearly show that the alternating direction methods were much faster than ProxGrad
on these two data sets. Compared to ADAL, FISTA-p performed slightly better, while it showed
obvious computational advantage over its general version APLM-S. In the plot on the left of Figure
1, FISTA exhibited the advantage of a gradient-based algorithm when both n and m are large.
In that case (towards the right end of the plot), the Cholesky factorizations required by ADAL,
APLM-S, and FISTA-p became relatively expensive. When min{n, m} is small or the linear systems
can be solved cheaply, as the plot on the right shows, FISTA-p and ADAL have an edge over FISTA
due to the smaller numbers of inner iterations required.
We generated a second data set (dct) using the approach from [30] for scalability tests on
both the l1 /l2 and l1 /l∞ group penalties. The design matrix A was formed from over-complete
dictionaries of discrete cosine transforms (DCT). The set of groups were all the contiguous sequences
of length five in one-dimensional space. x had about 10% non-zero entries, selected randomly. We
generated the output as b = Ax + ε, where ε ∼ N(0, 0.01‖Ax‖²). We fixed n = 1000 and varied
the number of features m from 5000 to 30000 with increments of 5000. This set of data leads to
considerably harder problems than the previous set because the groups are heavily overlapping, and
the DCT dictionary-based design matrix exhibits local correlations. Due to the excessive running
time required on Matlab, we ran the C++ version of our algorithms for this data set, leaving out
APLM-S and ProxGrad, whose performance compared to the other algorithms is already fairly
clear from Figure 1. For ProxFlow, we set the tolerance on the relative duality gap to 10⁻⁴, the same as ε_out, and kept all the other parameters at their default values.
Figure 2 presents the CPU times required by the algorithms versus the number of features. In
the case of l1 /l2 -regularization, it is clear that FISTA-p outperformed the other two algorithms.
For l1 /l∞ -regularization, ADAL and FISTA-p performed equally well and compared favorably to
ProxFlow. In both cases, the growth of the CPU times for FISTA follows the same trend as that
for FISTA-p, and they required a similar number of outer iterations, as shown in Tables 6 and 7.
However, FISTA lagged behind in speed due to larger numbers of inner iterations. Unlike in the
case of the ogl data set, Cholesky factorization was not a bottleneck for FISTA-p and ADAL here
because we needed to compute it only once.
To simulate the situation where computing or caching AT A and its Cholesky factorization is
not feasible, we switched ADAL and FISTA-p to PCG mode by always using PCG to solve the
linear systems in the subproblems. We compared the performance of ADAL, FISTA-p, and FISTA
on the previous data set for both l1 /l2 and l1 /l∞ models. The results for ProxFlow are copied
from Figure 2 and Table 9 to serve as a reference. We experimented with the fixed-value
and the dynamic updating schemes for µ on all three algorithms. From Figure 3, it is clear that
the performance of FISTA-p was significantly improved by using the dynamic scheme. For ADAL,
however, the dynamic scheme worked well only in the l1 /l2 case, whereas the performance turned
worse in general in the l1 /l∞ case. We did not include the results for FISTA with the dynamic
scheme because the solutions obtained were considerably more suboptimal than the ones obtained
Figure 1: Scalability test results of the algorithms on the synthetic overlapping Group Lasso data
sets from [11]. The scale of the y-axis is logarithmic. The dynamic scheme for µ was used for all
algorithms except ProxGrad.
with the fixed-µ scheme. Tables 8 and 9 report the best results of the algorithms in each case. The
plots and numerical results show that FISTA-p compares favorably to ADAL and stays competitive
to ProxFlow. In terms of the quality of the solutions, FISTA-p and ADAL also did a better job
than FISTA, as evidenced in Table 9. On the other hand, the gap in CPU time between FISTA
and the other three algorithms is less obvious.
5.4 Real-world Examples
To demonstrate the practical usefulness of our algorithms, we tested them on two real-world applications.
5.4.1 Breast Cancer Gene Expressions
We used the breast cancer data set [47] with canonical pathways from MSigDB [44]. The data
was collected from 295 breast cancer tumor samples and contains gene expression measurements
for 8,141 genes. The goal was to select a small set of the most relevant genes that yield the best
prediction performance. A detailed description of the data set can be found in [11, 24]. In our
experiment, we performed a regression task to predict the length of survival of the patients. The
canonical pathways naturally provide grouping information of the genes. Hence, we used them as
the groups for the group-structured regularization term Ω(·).
Table 2 summarizes the data attributes. The numerical results for the l1 /l2 -norm are collected
in Table 10, which show that FISTA-p and ADAL were the fastest on this data set. Again, we had
to tune ADAL with different initial values (µ0 ) and updating schemes of µ for speed and quality
of the solution, and we eventually kept µ constant at 0.01. The dynamic updating scheme for µ
also did not work for FISTA, which returned a very suboptimal solution in this case. We instead
Figure 2: Scalability test results on the DCT set with l1 /l2 -regularization (left column) and l1 /l∞ regularization (right column). The scale of the y-axis is logarithmic. All of FISTA-p, FITSA, and
ADAL were run with a fixed µ = µ0 .
Data set         | N (no. samples) | J (no. groups) | Group size | Average frequency
BreastCancerData | 295             | 637            | 23.7 (avg) | 4

Table 2: The Breast Cancer Dataset.
adopted a simple scheme of decreasing µ by half every 10 outer iterations. Figure 6 graphically
depicts the performance of the different algorithms. In terms of the outer iterations, APLM-S
behaved identically to FISTA-p, and FISTA also behaved similarly to ADAL. However, APLM-S
and FISTA were considerably slower due to larger numbers of inner iterations.
We plot the root-mean-squared-error (RMSE) over different values of λ (which lead to different
numbers of active genes) in the left half of Figure 4. The training set consists of 200 randomly
selected samples, and the RMSE was computed on the remaining 95 samples. l1 /l2 -regularization
achieved lower RMSE in this case. However, l1 /l∞ -regularization yielded better group sparsity
as shown in Figure 5. The sets of active genes selected by the two models were very similar as
illustrated in the right half of Figure 4. In general, the magnitudes of the coefficients returned
by l1 /l∞ -regularization tended to be similar within a group, whereas those returned by l1 /l2 regularization did not follow that pattern. This is because l1 /l∞ -regularization penalizes only the
maximum element, rather than all the coefficients in a group, resulting in many coefficients having
the same magnitudes.
5.4.2 Video Sequence Background Subtraction
We next considered the video sequence background subtraction task from [30, 23]. The main
objective here is to segment out foreground objects in an image (frame), given a sequence of m
17
Figure 3: Scalability test results on the DCT set with l1 /l2 -regularization (left column) and l1 /l∞ regularization (right column). The scale of the y-axis is logarithmic. FISTA-p and ADAL are in
PCG mode. The dotted lines denote the results obtained with the dynamic updating scheme for µ.
frames from a fixed camera. The data used in this experiment is available online 4 [46]. The basic
setup of the problem is as follows. We represent each frame of n pixels as a column vector A_j ∈ R^n and form the matrix A ∈ R^{n×m} as A ≡ [A_1 A_2 · · · A_m]. The test frame is represented by b ∈ R^n. We model the relationship between b and A by b ≈ Ax + e, where x is assumed to be sparse, and e is the 'noise' term which is also assumed to be sparse. Ax is thus a sparse linear combination of the video frame sequence and accounts for the background present in both A and b; e contains the sparse foreground objects in b. The basic model with l1-regularization (Lasso) is
\[
\min_{x, e} \; \tfrac{1}{2}\|Ax + e - b\|^2 + \lambda(\|x\|_1 + \|e\|_1). \tag{43}
\]
It has been shown in [30] that we can significantly improve the quality of segmentation by applying
a group-structured regularization Ω(·) on e, where the groups are all the overlapping k × k-square
patches in the image. Here, we set k = 3. The model thus becomes
\[
\min_{x, e} \; \tfrac{1}{2}\|Ax + e - b\|^2 + \lambda(\|x\|_1 + \|e\|_1 + \Omega(e)). \tag{44}
\]
Note that (44) still fits into the group-sparse framework if we treat the l1-regularization terms as the sum of the group norms, where each group consists of only one element.
We also considered an alternative model, where a Ridge regularization is applied to x and an Elastic-Net penalty [50] to e. This model
\[
\min_{x, e} \; \tfrac{1}{2}\|Ax + e - b\|^2 + \lambda_1\|e\|_1 + \lambda_2(\|x\|^2 + \|e\|^2) \tag{45}
\]

⁴ http://research.microsoft.com/en-us/um/people/jckrumm/wallflower/testimages.htm
Figure 4: On the left: Plot of root-mean-squared-error against the number of active genes for the
Breast Cancer data. The plot is based on the regularization path for ten different values for λ. The
total CPU time (in Matlab) using FISTA-p was 51 seconds for l1 /l2 -regularization and 115 seconds
for l1 /l∞ -regularization. On the right: The recovered sparse gene coefficients for predicting the
length of the survival period. The value of λ used here was the one minimizing the RMSE in the
plot on the left.
Figure 5: Pathway-level sparsity vs. gene-level sparsity.
does not yield a sparse x, but sparsity in x is not a crucial factor here. It is, however, well suited for
our partial linearization methods (APLM-S and FISTA-p), since there is no need for the augmented
Lagrangian framework. Of course, we can also apply FISTA to solve (45).
We recovered the foreground objects by solving the above optimization problems and applying
the sparsity pattern of e as a mask for the original test frame. A hand-segmented evaluation image
from [46] served as the ground truth. The regularization parameters λ, λ1 , and λ2 were selected
in such a way that the recovered foreground objects matched the ground truth to the maximum
extent.
FISTA-p was used to solve all three models. The l1 model (43) was treated as a special case of
the group regularization model (44), with each group containing only one component of the feature
19
Figure 6: Objective values v.s. Outer iters and Objective values v.s. CPU time plots for the Breast
Cancer data. The results for ProxGrad are not plotted due to the different objective function that
it minimizes. The red (APLM-S) and blue (FISTA-p) lines overlap in the left column.
vector. 5 For the Ridge/Elastic-Net penalty model, we applied FISTA-p directly without the outer
augmented Lagrangian layer.
The solutions for the l1 /l2 , l1 /l∞ , and Lasso models were not strictly sparse in the sense that
those supposedly zero feature coefficients had non-zero (albeit extremely small) magnitudes, since
we enforced the linear constraints Cx = y through an augmented Lagrangian approach. To obtain
sparse solutions, we truncated the non-sparse solutions using thresholds ranging from 10⁻⁹ to 10⁻³
and selected the threshold that yielded the best accuracy.
Note that because of the additional feature vector e, the data matrix is effectively Ã = [A  I_n] ∈ R^{n×(m+n)}. For solving (44), FISTA-p has to solve the linear system
\[
\begin{pmatrix} A^T A + \tfrac{1}{\mu} D_x & A^T \\ A & I_n + \tfrac{1}{\mu} D_e \end{pmatrix} \begin{pmatrix} x \\ e \end{pmatrix} = \begin{pmatrix} r_x \\ r_e \end{pmatrix}, \tag{46}
\]
where D is a diagonal matrix, and D_x, D_e, r_x, r_e are the components of D and r corresponding to x and e respectively. In this example, n is much larger than m, e.g. n = 57600, m = 200. To avoid solving a system of size n × n, we took the Schur complement of I_n + (1/µ)D_e and solved instead the positive definite m × m system
\[
\left(A^T A + \tfrac{1}{\mu} D_x - A^T \left(I + \tfrac{1}{\mu} D_e\right)^{-1} A \right) x = r_x - A^T \left(I + \tfrac{1}{\mu} D_e\right)^{-1} r_e, \tag{47}
\]
\[
e = \left(I + \tfrac{1}{\mu} D_e\right)^{-1} (r_e - Ax). \tag{48}
\]
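A minimal Python sketch of this Schur-complement computation (dense NumPy arrays; Dx_diag and De_diag are the diagonals of D_x and D_e, and the function name is illustrative):

```python
import numpy as np

def solve_x_e(A, Dx_diag, De_diag, rx, re, mu):
    """Solve the block system (46) via the Schur complement (47)-(48):
    eliminate e (the large n-dimensional block) and solve only an
    m x m positive definite system for x."""
    Ninv = 1.0 / (1.0 + De_diag / mu)          # (I + D_e/mu)^{-1}, diagonal
    AtNinv = A.T * Ninv                        # A^T (I + D_e/mu)^{-1}
    S = A.T @ A + np.diag(Dx_diag) / mu - AtNinv @ A   # Schur complement, m x m
    x = np.linalg.solve(S, rx - AtNinv @ re)
    e = Ninv * (re - A @ x)                    # back-substitute for e, cf. (48)
    return x, e
```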
The l1 /l∞ model yielded the best background separation accuracy (marginally better than the
l1 /l2 model), but it also was the most computationally expensive. (See Table 3 and Figure 7.)
Although the Ridge/Elastic-Net model yielded as poor separation results as the Lasso (l1 ) model,
it was orders of magnitude faster to solve using FISTA-p. We again observed that the dynamic
scheme for µ worked better for FISTA-p than for ADAL. For a constant µ over the entire run,
⁵ We did not use the original version of FISTA to solve the model as an l1-regularization problem because it took too long to converge in our experiments due to extremely small step sizes.
Figure 7: Separation results for the video sequence background subtraction example. Each training
image had 120 × 160 RGB pixels. The training set contained 200 images in sequence. The accuracy
indicated for each of the different models is the percentage of pixels that matched the ground truth.
Model               | Accuracy (percent) | Total CPU time (s) | No. parameter values on reg. path
l1/l2               | 97.17              | 2.48e+003          | 8
l1/l∞               | 98.18              | 4.07e+003          | 6
l1                  | 87.63              | 1.61e+003          | 11
ridge + elastic net | 87.89              | 1.82e+002          | 64
Table 3: Computational results for the video sequence background subtraction example. The
algorithm used is FISTA-p. We used the Matlab version for the ease of generating the images.
The C++ version runs at least four times faster from our experience in the previous experiments.
We report the best accuracy found on the regularization path of each model. The total CPU time
is recorded for computing the entire regularization path, with the specified number of different
regularization parameter values.
ADAL took at least twice as long as FISTA-p to produce a solution of the same quality. A typical
run of FISTA-p on this problem with the best selected λ took less than 10 outer iterations. On the
other hand, ADAL took more than 500 iterations to meet the stopping criteria.
5.5 Comments on Results
The computational results exhibit two general patterns. First, the simpler algorithms (FISTA-p and
ADAL) were significantly faster than the more general algorithms, such as APLM-S. Interestingly,
the majority of the APLM-S inner iterations consisted of a skipping step for the tests on synthetic
data and the breast cancer data, which means that APLM-S essentially behaved like ISTA-p in
these cases. Indeed, FISTA-p generally required the same number of outer iterations as APLM-S but far fewer inner iterations, as predicted by theory. In addition, no computational steps
were wasted and no function evaluations were required for FISTA-p and ADAL. Second, FISTA-p
converged faster (required fewer iterations) than its full-linearization counterpart FISTA. We have
suggested possible reasons for this in Section 3. On the other hand, FISTA was very effective for data sets both of whose dimensions were large, because it required only gradient computations and soft-thresholding operations and did not require linear systems to be solved.
Our experiments showed that the performance of ADAL (as well as the quality of the solution
that it returned) varied a lot as a function of the parameter settings, and it was tricky to tune them
optimally. In contrast, FISTA-p exhibited fairly stable performance for a simple set of parameters
that we rarely had to alter and in general performed better than ADAL.
It may seem straightforward to apply FISTA directly to the Lasso problem (43) without the augmented Lagrangian framework.⁶ However, as we have seen in our experiments, FISTA took
much longer than AugLag-FISTA-p to solve this problem. We believe that this is further evidence
of the ‘load-balancing’ property of the latter algorithm that we discussed in Section 3.2. It also
demonstrates the versatility of our approach to regularized learning problems.

⁶ To avoid confusion with our algorithms that consist of inner-outer iterations, we prefix our algorithms with 'AugLag' here.
6 Conclusion
We have built a unified framework for solving sparse learning problems involving group-structured
regularization, in particular, the l1 /l2 - or l1 /l∞ -regularization of arbitrarily overlapping groups of
variables. For the key building-block of this framework, we developed new efficient algorithms based
on alternating partial-linearization/splitting, with proven convergence rates. In addition, we have
also incorporated ADAL and FISTA into our framework. Computational tests on several sets of
synthetic test data demonstrated the relative strength of the algorithms, and through two real-world
applications we compared the relative merits of these structured sparsity-inducing norms. Among
the algorithms studied, FISTA-p and ADAL performed the best on most of the data sets, and
FISTA appeared to be a good alternative choice for large-scale data. From our experience, FISTA-p is easier to configure and is more robust to variations in the algorithm parameters. Together,
they form a flexible and versatile suite of methods for group-sparse problems of different sizes.
7 Acknowledgement
We would like to thank Katya Scheinberg and Shiqian Ma for many helpful discussions, and Xi
Chen for providing the Matlab code for ProxGrad.
A Proof of Lemma 3.1
\[
F(\bar{x}, \bar{y}) - F(x, \bar{q}) \ge F(\bar{x}, \bar{y}) - L_\rho(x, y, \bar{q}, \nabla_y f(x, y))
= F(\bar{x}, \bar{y}) - \left( f(x, y) + \nabla_y f(x, y)^T(\bar{q} - y) + \tfrac{1}{2\rho}\|\bar{q} - y\|^2 + g(\bar{q}) \right). \tag{49}
\]
From the optimality of q̄, we also have
\[
\gamma_g(\bar{q}) + \nabla_y f(x, y) + \tfrac{1}{\rho}(\bar{q} - y) = 0. \tag{50}
\]
Since F(x, y) = f(x, y) + g(y), and f and g are convex functions, for any (x̄, ȳ),
\[
F(\bar{x}, \bar{y}) \ge g(\bar{q}) + (\bar{y} - \bar{q})^T \gamma_g(\bar{q}) + f(x, y) + (\bar{y} - y)^T \nabla_y f(x, y) + (\bar{x} - x)^T \nabla_x f(x, y). \tag{51}
\]
Therefore, from (49), (50), and (51), it follows that
\[
\begin{aligned}
F(\bar{x}, \bar{y}) - F(x, \bar{q}) &\ge g(\bar{q}) + (\bar{y} - \bar{q})^T \gamma_g(\bar{q}) + f(x, y) + (\bar{y} - y)^T \nabla_y f(x, y) + (\bar{x} - x)^T \nabla_x f(x, y) \\
&\quad - \left( f(x, y) + \nabla_y f(x, y)^T(\bar{q} - y) + \tfrac{1}{2\rho}\|\bar{q} - y\|^2 + g(\bar{q}) \right) \\
&= (\bar{y} - \bar{q})^T \left( \gamma_g(\bar{q}) + \nabla_y f(x, y) \right) - \tfrac{1}{2\rho}\|\bar{q} - y\|^2 + (\bar{x} - x)^T \nabla_x f(x, y) \\
&= (\bar{y} - \bar{q})^T \left( -\tfrac{1}{\rho}(\bar{q} - y) \right) - \tfrac{1}{2\rho}\|\bar{q} - y\|^2 + (\bar{x} - x)^T \nabla_x f(x, y) \\
&= \tfrac{1}{2\rho} \left( \|\bar{q} - \bar{y}\|^2 - \|y - \bar{y}\|^2 \right) + (\bar{x} - x)^T \nabla_x f(x, y). \tag{52}
\end{aligned}
\]
The proof for the second part of the lemma is very similar, but we give it for completeness.
\[
F(x, y) - F(p, q) \ge F(x, y) - \left( f(p, q) + g(\bar{y}) + \gamma_g(\bar{y})^T(q - \bar{y}) + \tfrac{1}{2\rho}\|q - \bar{y}\|^2 \right). \tag{53}
\]
By the optimality of (p, q), we have
\[
\nabla_x f(p, q) = 0, \tag{54}
\]
\[
\nabla_y f(p, q) + \gamma_g(\bar{y}) + \tfrac{1}{\rho}(q - \bar{y}) = 0. \tag{55}
\]
Since F(x, y) = f(x, y) + g(y), it follows from the convexity of both f and g and (54) that
\[
F(x, y) \ge g(\bar{y}) + (y - \bar{y})^T \gamma_g(\bar{y}) + f(p, q) + (y - q)^T \nabla_y f(p, q). \tag{56}
\]
Now combining (53), (55), and (56), it follows that
\[
\begin{aligned}
F(x, y) - F(p, q) &\ge (y - q)^T \left( \gamma_g(\bar{y}) + \nabla_y f(p, q) \right) - \tfrac{1}{2\rho}\|q - \bar{y}\|^2 \\
&= (y - q)^T \left( \tfrac{1}{\rho}(\bar{y} - q) \right) - \tfrac{1}{2\rho}\|q - \bar{y}\|^2 \\
&= \tfrac{1}{2\rho} \left( \|q - y\|^2 - \|y - \bar{y}\|^2 \right). \tag{57}
\end{aligned}
\]
B Proof of Theorem 3.1
Let I be the set of all regular iteration indices among the first k − 1 iterations, and let Ic be its
complement. For all n ∈ Ic , y n+1 = ȳ n .
For n ∈ I, we can apply Lemma 3.1 since (21) automatically holds, and (19) holds when
1
∗ ∗
n
n+1 , y n+1 ), and
ρ ≤ L(f
) . In (22), by letting (x, y) = (x , y ), and ȳ = ȳ , we get (p, q) = (x
2ρ(F (x∗ , y ∗ ) − F (xn+1 , y n+1 )) ≥ ky n+1 − y ∗ k2 − kȳ n − y ∗ k2 .
23
(58)
In (20), by letting (x̄, ȳ) = (x∗ , y ∗ ), (x, y) = (xn+1 , y n+1 ), we get q̄ = ȳ n+1 and
2ρ(F (x∗ , y ∗ ) − F (xn+1 , ȳ n+1 )) ≥ kȳ n+1 − y ∗ k2 − ky n+1 − y ∗ k2 + (x∗ − xn+1 )T ∇x f (xn+1 , y n+1 )
= kȳ n+1 − y ∗ k2 − ky n+1 − y ∗ k2 .,
(59)
since ∇x f (xn+1 , y n+1 ) = 0, for n ∈ I by (54) and for n ∈ Ic by (18). Adding (59) to (58), we get
2ρ(2F (x∗ , y ∗ ) − F (xn+1 , y n+1 ) − F (xn+1 , ȳ n+1 )) ≥ kȳ n+1 − y ∗ k2 − kȳ n − y ∗ k2 .
(60)
For n ∈ I_c, since ∇_x f(x^{n+1}, y^{n+1}) = 0, we have that (59) holds. Since y^{n+1} = ȳ^n, it follows that

    2ρ(F(x^*, y^*) − F(x^{n+1}, ȳ^{n+1})) ≥ ‖ȳ^{n+1} − y^*‖² − ‖ȳ^n − y^*‖².        (61)
Summing (60) and (61) over n = 0, 1, . . . , k − 1 and observing that 2|I| + |I_c| = k + k_n, we obtain

    2ρ( (k + k_n) F(x^*, y^*) − Σ_{n=0}^{k−1} F(x^{n+1}, ȳ^{n+1}) − Σ_{n∈I} F(x^{n+1}, y^{n+1}) )        (62)
        ≥ Σ_{n=0}^{k−1} ( ‖ȳ^{n+1} − y^*‖² − ‖ȳ^n − y^*‖² )
        = ‖ȳ^k − y^*‖² − ‖ȳ^0 − y^*‖²
        ≥ −‖ȳ^0 − y^*‖².
In Lemma 3.1, by letting (x̄, ȳ) = (x^{n+1}, y^{n+1}) in (20) instead of (x^*, y^*), we have from (59) that

    2ρ(F(x^{n+1}, y^{n+1}) − F(x^{n+1}, ȳ^{n+1})) ≥ ‖ȳ^{n+1} − y^{n+1}‖² ≥ 0.        (63)

Similarly, for n ∈ I, if we let (x, y) = (x^n, ȳ^n) instead of (x^*, y^*) in (58), we have

    2ρ(F(x^n, ȳ^n) − F(x^{n+1}, y^{n+1})) ≥ ‖y^{n+1} − ȳ^n‖² ≥ 0.        (64)

For n ∈ I_c, y^{n+1} = ȳ^n; from (18), since x^{n+1} = arg min_x F(x, y) with y = ȳ^n = y^{n+1},

    2ρ(F(x^n, ȳ^n) − F(x^{n+1}, y^{n+1})) ≥ 0.        (65)
Hence, from (63)–(65), F(x^n, y^n) ≥ F(x^n, ȳ^n) ≥ F(x^{n+1}, y^{n+1}) ≥ F(x^{n+1}, ȳ^{n+1}). Then, we have

    Σ_{n=0}^{k−1} F(x^{n+1}, ȳ^{n+1}) ≥ k F(x^k, ȳ^k),   and   Σ_{n∈I} F(x^{n+1}, y^{n+1}) ≥ k_n F(x^k, y^k).        (66)

Combining (62) and (66) yields 2ρ(k + k_n)(F(x^*, y^*) − F(x^k, ȳ^k)) ≥ −‖ȳ^0 − y^*‖².
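For readability, the bound just obtained can be rearranged into an explicit rate; this is a purely algebraic restatement of the last inequality, using only that 2ρ(k + k_n) > 0:

    F(x^k, ȳ^k) − F(x^*, y^*) ≤ ‖ȳ^0 − y^*‖² / (2ρ(k + k_n)) ≤ ‖ȳ^0 − y^*‖² / (2ρ k),

where the second step uses k_n ≥ 0; i.e., the objective gap decays at the rate O(1/k).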
C Derivation of the Stopping Criteria
In this section, we show that the quantities that we use in our stopping criteria correspond to
the primal and dual residuals [8] for the outer iterations and the gradient residuals for the inner
iterations. We first consider the inner iterations.
FISTA-p  The necessary and sufficient optimality conditions for problem (12) or (15) are primal feasibility

    ȳ^* − y^* = 0,        (67)

and vanishing of the gradient of the objective function at (x^*, ȳ^*), i.e.

    0 = ∇_x f(x^*, ȳ^*),        (68)
    0 ∈ ∇_y f(x^*, ȳ^*) + ∂g(ȳ^*).        (69)
Since y^{k+1} = z^k, the primal residual is thus ȳ^{k+1} − y^{k+1} = ȳ^{k+1} − z^k. It follows from the optimality of x^{k+1} in Line 3 of Algorithm 3.4 that

    A^T(Ax^{k+1} − b) − C^T v^l + (1/µ) C^T(Cx^{k+1} − ȳ^{k+1}) + (1/µ) C^T(ȳ^{k+1} − z^k) = 0,

so that

    ∇_x f(x^{k+1}, ȳ^{k+1}) = (1/µ) C^T(z^k − ȳ^{k+1}).

Similarly, from the optimality of ȳ^{k+1} in Line 4, we have that

    0 ∈ ∂g(ȳ^{k+1}) + ∇_y f(x^{k+1}, z^k) + (1/ρ)(ȳ^{k+1} − z^k)
      = ∂g(ȳ^{k+1}) + ∇_y f(x^{k+1}, ȳ^{k+1}) − (1/µ)(ȳ^{k+1} − z^k) + (1/ρ)(ȳ^{k+1} − z^k)
      = ∂g(ȳ^{k+1}) + ∇_y f(x^{k+1}, ȳ^{k+1}),

where the last step follows from µ = ρ. Hence, we see that (1/µ) C^T(z^k − ȳ^{k+1}) is the gradient residual corresponding to (68), while (69) is satisfied in every inner iteration. (A small numerical sketch of this residual computation is given after the FISTA case below.)
APLM-S  The primal residual is ȳ^{k+1} − y^{k+1} from (67). Following the derivation for FISTA-p, it is not hard to verify that (69) is always satisfied, and the gradient residual corresponding to (68) is (1/µ) C^T(y^{k+1} − ȳ^{k+1}).
FISTA  Similar to FISTA-p, the necessary and sufficient optimality conditions for problem (28) are primal feasibility

    (x^*, y^*) = (x̄^*, ȳ^*),

and vanishing of the objective gradient at (x̄^*, ȳ^*),

    0 = ∇_x f(x̄^*, ȳ^*),
    0 ∈ ∇_y f(x̄^*, ȳ^*) + ∂g(ȳ^*).

Clearly, the primal residual is (x̄^{k+1} − z_x^k, ȳ^{k+1} − z_y^k) since (x^{k+1}, y^{k+1}) ≡ (z_x^k, z_y^k). From the optimality of (x̄^{k+1}, ȳ^{k+1}), it follows that

    0 = ∇_x f(z_x^k, z_y^k) + (1/ρ)(x̄^{k+1} − z_x^k),
    0 ∈ ∂g(ȳ^{k+1}) + ∇_y f(z_x^k, z_y^k) + (1/ρ)(ȳ^{k+1} − z_y^k).

Here, we simply use (1/ρ)(x̄^{k+1} − z_x^k) and (1/ρ)(ȳ^{k+1} − z_y^k) to approximate the gradient residuals.
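To make the inner-iteration test above concrete, the following sketch performs one FISTA-p-style step of the kind discussed for Lines 3–4 of Algorithm 3.4 and reports the gradient residual (1/µ)C^T(z^k − ȳ^{k+1}). It is a simplified stand-in rather than the paper's implementation: the random data A, b, C, the multiplier v^l, and the choice g = λ‖·‖₁ (in place of the group prox of Ω̃) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, lam, mu = 40, 25, 30, 0.3, 1.0
A, b = rng.standard_normal((n, m)), rng.standard_normal(n)
C = rng.standard_normal((p, m))      # stand-in for the linear operator C in problem (5)
v = rng.standard_normal(p)           # current outer multiplier v^l
z = rng.standard_normal(p)           # proximal/extrapolation point z^k

# Smooth part of the augmented Lagrangian with v^l fixed:
#   f(x, y) = 0.5*||Ax - b||^2 - v'(Cx - y) + (1/(2*mu))*||Cx - y||^2
grad_x = lambda x, y: A.T @ (A @ x - b) - C.T @ v + (C.T @ (C @ x - y)) / mu
grad_y = lambda x, y: v + (y - C @ x) / mu

# Line 3: x^{k+1} = argmin_x f(x, z^k)  (a positive-definite linear system)
x_new = np.linalg.solve(A.T @ A + (C.T @ C) / mu,
                        A.T @ b + C.T @ v + (C.T @ z) / mu)

# Line 4 with rho = mu and g = lam*||.||_1 as an illustrative prox step
w = z - mu * grad_y(x_new, z)
y_bar = np.sign(w) * np.maximum(np.abs(w) - mu * lam, 0.0)

# Gradient residual for (68); it coincides with grad_x f(x^{k+1}, y_bar^{k+1}).
res = (C.T @ (z - y_bar)) / mu
assert np.allclose(res, grad_x(x_new, y_bar))
print("inner gradient residual norm:", np.linalg.norm(res))
```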
Data Sets             Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-5000-100-10-3     ADAL       1.70e+000   61      1.00e+000       1.9482e+005
                      APLM-S     1.71e+000   8       4.88e+000       1.9482e+005
                      FISTA-p    9.08e-001   8       4.38e+000       1.9482e+005
                      FISTA      2.74e+000   10      7.30e+000       1.9482e+005
                      ProxGrad   7.92e+001   3858    -               -
ogl-5000-600-10-3     ADAL       6.75e+001   105     1.00e+000       1.4603e+006
                      APLM-S     1.79e+002   9       1.74e+001       1.4603e+006
                      FISTA-p    4.77e+001   9       8.56e+000       1.4603e+006
                      FISTA      3.28e+001   12      1.36e+001       1.4603e+006
                      ProxGrad   7.96e+002   5608    -               -
ogl-5000-1000-10-3    ADAL       2.83e+002   151     1.00e+000       2.6746e+006
                      APLM-S     8.06e+002   10      2.76e+001       2.6746e+006
                      FISTA-p    2.49e+002   10      1.28e+001       2.6746e+006
                      FISTA      5.21e+001   13      1.55e+001       2.6746e+006
                      ProxGrad   1.64e+003   6471    -               -

Table 4: Numerical results for ogl set 1. For ProxGrad, the Avg Sub-iters and F(x) fields are not applicable since the algorithm is not based on an outer-inner iteration scheme, and the objective function that it minimizes is different from ours. We tested ten problems with J = 100, · · · , 1000, but only show the results for three of them to save space.
Next, we consider the outer iterations. The necessary and sufficient optimality conditions for problem (5) are primal feasibility

    Cx^* − y^* = 0,        (70)

and dual feasibility

    0 = ∇L(x^*) − C^T v^*,        (71)
    0 ∈ ∂Ω̃(y^*) + v^*.        (72)

Clearly, the primal residual is r^l = Cx^l − y^l. The dual residual is

    ( ∇L(x^{l+1}) − C^T( v^l − (1/µ)(Cx^{l+1} − ȳ^{l+1}) ),   ∂Ω̃(y^{l+1}) + v^l − (1/µ)(Cx^{l+1} − ȳ^{l+1}) ),

recalling that v^{l+1} = v^l − (1/µ)(Cx^{l+1} − ȳ^{l+1}). The above is simply the gradient of the augmented Lagrangian (6) evaluated at (x^l, y^l, v^l). Now, since the objective function of an inner iteration is the augmented Lagrangian with v = v^l, the dual residual for an outer iteration is readily available from the gradient residual computed for the last inner iteration of the outer iteration.
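For the outer loop, the quantities above translate directly into a stopping test. The sketch below computes the primal residual r^l = Cx^l − y^l and the x-block of the dual residual after the multiplier update v^{l+1} = v^l − (1/µ)(Cx^{l+1} − ȳ^{l+1}); the Ω̃-block is read off from the last inner gradient residual, as explained above. The random data and the helper name outer_residuals are illustrative assumptions, not part of the paper's code.

```python
import numpy as np

def outer_residuals(A, b, C, x, y, v, mu):
    """Primal residual and x-block dual residual for one outer iteration, cf. (70)-(72)."""
    primal = C @ x - y                         # r^l = C x^l - y^l
    v_new = v - (C @ x - y) / mu               # multiplier update v^{l+1}
    dual_x = A.T @ (A @ x - b) - C.T @ v_new   # gradient of L(x) minus C^T v^{l+1}
    return np.linalg.norm(primal), np.linalg.norm(dual_x), v_new

# Illustrative call on random data (sizes and values are placeholders).
rng = np.random.default_rng(2)
n, m, p, mu = 40, 25, 30, 1.0
A, b = rng.standard_normal((n, m)), rng.standard_normal(n)
C = rng.standard_normal((p, m))
x, y, v = rng.standard_normal(m), rng.standard_normal(p), rng.standard_normal(p)

pr, du, v = outer_residuals(A, b, C, x, y, v, mu)
print(f"primal residual: {pr:.3e}, dual residual (x-block): {du:.3e}")
```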
D Numerical Results
References
[1] M. Afonso, J. Bioucas-Dias, and M. Figueiredo. An augmented Lagrangian approach to the
constrained optimization formulation of imaging inverse problems. Image Processing, IEEE
Transactions on, (99):1–1, 2009.
Data Sets              Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-1000-200-10-3      ADAL       4.18e+000   77      1.00e+000       9.6155e+004
                       APLM-S     1.64e+001   9       2.32e+001       9.6156e+004
                       FISTA-p    3.85e+000   9       1.02e+001       9.6156e+004
                       FISTA      2.92e+000   11      1.44e+001       9.6158e+004
                       ProxGrad   1.16e+002   4137    -               -
ogl-5000-200-10-3      ADAL       5.04e+000   63      1.00e+000       4.1573e+005
                       APLM-S     8.42e+000   8       8.38e+000       4.1576e+005
                       FISTA-p    3.96e+000   9       6.56e+000       4.1572e+005
                       FISTA      6.54e+000   10      9.70e+000       4.1573e+005
                       ProxGrad   1.68e+002   4345    -               -
ogl-10000-200-10-3     ADAL       6.41e+000   44      1.00e+000       1.0026e+006
                       APLM-S     1.46e+001   10      7.60e+000       1.0026e+006
                       FISTA-p    5.60e+000   10      5.50e+000       1.0026e+006
                       FISTA      1.09e+001   10      8.50e+000       1.0027e+006
                       ProxGrad   3.31e+002   6186    -               -

Table 5: Numerical results for ogl set 2. We ran the test for ten problems with n = 1000, · · · , 10000, but only show the results for three of them to save space.
[2] F. Bach. Consistency of the group Lasso and multiple kernel learning. The Journal of Machine
Learning Research, 9:1179–1225, 2008.
[3] F. Bach. Structured sparsity-inducing norms through submodular functions. In J. Lafferty,
C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural
Information Processing Systems 23, pages 118–126. 2010.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse
problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] D. Bertsekas. Multiplier methods: a survey. Automatica, 12(2):133–145, 1976.
[6] D. Bertsekas. Nonlinear programming. Athena Scientific Belmont, MA, 1999.
[7] D. Bertsekas and J. Tsitsiklis. Parallel and distributed computation: numerical methods.
Prentice-Hall, Inc., 1989.
[8] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Machine Learning, 3(1):1–123,
2010.
[9] P. Brucker. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters,
3(3):163–166, 1984.
[10] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM journal
on scientific computing, 20(1):33–61, 1999.
[11] X. Chen, Q. Lin, S. Kim, J. Peña, J. Carbonell, and E. Xing. An Efficient Proximal-Gradient Method for Single and Multi-task Regression with Structured Sparsity. Arxiv preprint
arXiv:1005.4717, 2010.
Data Sets               Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-dct-1000-5000-1     ADAL       1.14e+001   194     1.00e+000       8.4892e+002
                        FISTA-p    1.21e+001   20      1.11e+001       8.4892e+002
                        FISTA      2.49e+001   24      2.51e+001       8.4893e+002
ogl-dct-1000-10000-1    ADAL       3.31e+001   398     1.00e+000       1.4887e+003
                        FISTA-p    2.54e+001   41      5.61e+000       1.4887e+003
                        FISTA      6.33e+001   44      1.74e+001       1.4887e+003
ogl-dct-1000-15000-1    ADAL       6.09e+001   515     1.00e+000       2.7506e+003
                        FISTA-p    3.95e+001   52      4.44e+000       2.7506e+003
                        FISTA      9.73e+001   54      1.32e+001       2.7506e+003
ogl-dct-1000-20000-1    ADAL       9.52e+001   626     1.00e+000       3.3415e+003
                        FISTA-p    6.66e+001   63      6.10e+000       3.3415e+003
                        FISTA      1.81e+002   64      1.61e+001       3.3415e+003
ogl-dct-1000-25000-1    ADAL       1.54e+002   882     1.00e+000       4.1987e+003
                        FISTA-p    7.50e+001   88      3.20e+000       4.1987e+003
                        FISTA      1.76e+002   89      8.64e+000       4.1987e+003
ogl-dct-1000-30000-1    ADAL       1.87e+002   957     1.00e+000       4.6111e+003
                        FISTA-p    8.79e+001   96      2.86e+000       4.6111e+003
                        FISTA      2.24e+002   94      8.54e+000       4.6111e+003

Table 6: Numerical results for dct set 2 (scalability test) with l1/l2-regularization. All three algorithms were run in factorization mode with a fixed µ = µ0.
[12] P. Combettes and J. Pesquet. Proximal splitting methods in signal processing. Fixed-Point
Algorithms for Inverse Problems in Science and Engineering, pages 185–212, 2011.
[13] P. Combettes and V. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale
Modeling and Simulation, 4(4):1168–1200, 2006.
[14] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball
for learning in high dimensions. In Proceedings of the 25th international conference on Machine
learning, pages 272–279. ACM, 2008.
[15] J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal
point algorithm for maximal monotone operators. Mathematical Programming, 55(1):293–318,
1992.
[16] J. Eckstein and P. Silva. A practical relative error criterion for augmented Lagrangians. Technical report, Rutgers University, 2011.
[17] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems
via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40,
1976.
[18] R. Glowinski and A. Marroco. Sur l’approximation, par elements finis d’ordre un, et la resolution, par penalisation-dualite d’une classe de problemes de dirichlet non lineares. Rev.
Francaise d’Automat. Inf. Recherche Operationelle, (9):41–76, 1975.
Data Sets               Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-dct-1000-5000-1     ADAL       1.53e+001   266     1.00e+000       7.3218e+002
                        FISTA-p    1.61e+001   10      3.05e+001       7.3219e+002
                        FISTA      3.02e+001   16      4.09e+001       7.3233e+002
                        ProxFlow   1.97e+001   -       -               7.3236e+002
ogl-dct-1000-10000-1    ADAL       3.30e+001   330     1.00e+000       1.2707e+003
                        FISTA-p    3.16e+001   10      3.10e+001       1.2708e+003
                        FISTA      7.27e+001   24      3.25e+001       1.2708e+003
                        ProxFlow   3.67e+001   -       -               1.2709e+003
ogl-dct-1000-15000-1    ADAL       4.83e+001   328     1.00e+000       2.2444e+003
                        FISTA-p    5.40e+001   15      2.52e+001       2.2444e+003
                        FISTA      8.64e+001   23      2.66e+001       2.2449e+003
                        ProxFlow   9.91e+001   -       -               2.2467e+003
ogl-dct-1000-20000-1    ADAL       8.09e+001   463     1.00e+000       2.6340e+003
                        FISTA-p    8.09e+001   16      2.88e+001       2.6340e+003
                        FISTA      1.48e+002   26      2.93e+001       2.6342e+003
                        ProxFlow   2.55e+002   -       -               2.6357e+003
ogl-dct-1000-25000-1    ADAL       7.48e+001   309     1.00e+000       3.5566e+003
                        FISTA-p    1.15e+002   30      1.83e+001       3.5566e+003
                        FISTA      2.09e+002   38      2.30e+001       3.5568e+003
                        ProxFlow   1.38e+002   -       -               3.5571e+003
ogl-dct-1000-30000-1    ADAL       9.99e+001   359     1.00e+000       3.7057e+003
                        FISTA-p    1.55e+002   29      2.17e+001       3.7057e+003
                        FISTA      2.60e+002   39      2.25e+001       3.7060e+003
                        ProxFlow   1.07e+002   -       -               3.7063e+003

Table 7: Numerical results for dct set 2 (scalability test) with l1/l∞-regularization. The algorithm configurations are exactly the same as in Table 6.
[19] D. Goldfarb, S. Ma, and K. Scheinberg. Fast alternating linearization methods for minimizing
the sum of two convex functions. Arxiv preprint arXiv:0912.4571v2, 2009.
[20] T. Goldstein and S. Osher. The split Bregman method for l1-regularized problems. SIAM
Journal on Imaging Sciences, 2:323, 2009.
[21] G. Golub and C. Van Loan. Matrix computations. Johns Hopkins Univ Pr, 1996.
[22] M. Hestenes. Multiplier and gradient methods. Journal of optimization theory and applications,
4(5):303–320, 1969.
[23] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proceedings of
the 26th Annual International Conference on Machine Learning, pages 417–424. ACM, 2009.
[24] L. Jacob, G. Obozinski, and J. Vert. Group Lasso with overlap and graph Lasso. In Proceedings
of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
[25] R. Jenatton, J. Audibert, and F. Bach. Structured variable selection with sparsity-inducing
norms. Stat, 1050, 2009.
Data Sets               Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-dct-1000-5000-1     FISTA-p    1.83e+001   12      2.34e+001       8.4892e+002
                        FISTA      2.49e+001   24      2.51e+001       8.4893e+002
                        ADAL       1.35e+001   181     1.00e+000       8.4892e+002
ogl-dct-1000-10000-1    FISTA-p    3.16e+001   14      1.73e+001       1.4887e+003
                        FISTA      6.33e+001   44      1.74e+001       1.4887e+003
                        ADAL       4.43e+001   270     1.00e+000       1.4887e+003
ogl-dct-1000-15000-1    FISTA-p    4.29e+001   14      1.51e+001       2.7506e+003
                        FISTA      9.73e+001   54      1.32e+001       2.7506e+003
                        ADAL       5.37e+001   216     1.00e+000       2.7506e+003
ogl-dct-1000-20000-1    FISTA-p    7.53e+001   13      2.06e+001       3.3416e+003
                        FISTA      1.81e+002   64      1.61e+001       3.3415e+003
                        ADAL       1.57e+002   390     1.00e+000       3.3415e+003
ogl-dct-1000-25000-1    FISTA-p    7.41e+001   15      1.47e+001       4.1987e+003
                        FISTA      1.76e+002   89      8.64e+000       4.1987e+003
                        ADAL       8.79e+001   231     1.00e+000       4.1987e+003
ogl-dct-1000-30000-1    FISTA-p    8.95e+001   14      1.58e+001       4.6111e+003
                        FISTA      2.24e+002   94      8.54e+000       4.6111e+003
                        ADAL       1.12e+002   249     1.00e+000       4.6111e+003

Table 8: Numerical results for the DCT set with l1/l2-regularization. FISTA-p and ADAL were run in PCG mode with the dynamic scheme for updating µ. µ was fixed at µ0 for FISTA.
[26] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis.
Arxiv preprint arXiv:0909.1440, 2009.
[27] S. Kim and E. Xing. Tree-guided group lasso for multi-task regression with structured sparsity.
In Proceedings of the 27th Annual International Conference on Machine Learning, 2010.
[28] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact
recovery of corrupted low-rank matrices. Arxiv preprint arXiv:1009.5055, 2010.
[29] J. Liu and J. Ye. Fast Overlapping Group Lasso. Arxiv preprint arXiv:1009.0306, 2010.
[30] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Network flow algorithms for structured
sparsity. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors,
Advances in Neural Information Processing Systems 23, pages 1558–1566. 2010.
[31] J. Mairal, R. Jenatton, G. Obozinski, and F. Bach. Convex and Network flow Optimization
for Structured Sparsity. Arxiv preprint arXiv:1104.1872v1, 2011.
[32] S. Mosci, S. Villa, A. Verri, and L. Rosasco. A primal-dual algorithm for group sparse regularization with overlapping groups. In NIPS, 2010.
[33] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming,
103(1):127–152, 2005.
[34] J. Nocedal and S. Wright. Numerical optimization. Springer verlag, 1999.
[35] J. Pesquet and N. Pustelnik. A parallel inertial proximal optimization method. Preprint, 2010.
Data Sets               Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
ogl-dct-1000-5000-1     FISTA-p    2.30e+001   11      2.93e+001       7.3219e+002
                        ADAL       1.89e+001   265     1.00e+000       7.3218e+002
                        FISTA      3.02e+001   16      4.09e+001       7.3233e+002
                        ProxFlow   1.97e+001   -       -               7.3236e+002
ogl-dct-1000-10000-1    FISTA-p    5.09e+001   11      3.16e+001       1.2708e+003
                        ADAL       4.77e+001   323     1.00e+000       1.2708e+003
                        FISTA      7.27e+001   24      3.25e+001       1.2708e+003
                        ProxFlow   3.67e+001   -       -               1.2709e+003
ogl-dct-1000-15000-1    FISTA-p    6.33e+001   12      2.48e+001       2.2445e+003
                        ADAL       9.41e+001   333     1.00e+000       2.2444e+003
                        FISTA      8.64e+001   23      2.66e+001       2.2449e+003
                        ProxFlow   9.91e+001   -       -               2.2467e+003
ogl-dct-1000-20000-1    FISTA-p    8.21e+001   12      2.42e+001       2.6341e+003
                        ADAL       1.59e+002   415     1.00e+000       2.6340e+003
                        FISTA      1.48e+002   26      2.93e+001       2.6342e+003
                        ProxFlow   2.55e+002   -       -               2.6357e+003
ogl-dct-1000-25000-1    FISTA-p    1.43e+002   13      2.98e+001       3.5567e+003
                        ADAL       1.20e+002   310     1.00e+000       3.5566e+003
                        FISTA      2.09e+002   38      2.30e+001       3.5568e+003
                        ProxFlow   1.38e+002   -       -               3.5571e+003
ogl-dct-1000-30000-1    FISTA-p    1.75e+002   13      3.18e+001       3.7057e+003
                        ADAL       2.01e+002   361     1.00e+000       3.7057e+003
                        FISTA      2.60e+002   39      2.25e+001       3.7060e+003
                        ProxFlow   1.07e+002   -       -               3.7063e+003

Table 9: Numerical results for the DCT set with l1/l∞-regularization. FISTA-p and ADAL were run in PCG mode. The dynamic updating scheme for µ was applied to FISTA-p, while µ was fixed at µ0 for ADAL and FISTA.
[36] G. Peyre and J. Fadili. Group sparsity with overlapping partition functions. In EUSIPCO
2011, 2011.
[37] G. Peyre, J. Fadili, and C. Chesneau. Adaptive structured block sparsity via dyadic partitioning. In EUSIPCO 2011, 2011.
[38] M. Powell. Optimization, chapter A Method for Nonlinear Constraints in Minimization Problems. Academic Press, New York, New York, 1972.
[39] Z. Qin, K. Scheinberg, and D. Goldfarb. Efficient Block-coordinate Descent Algorithms for
the Group Lasso. 2010.
[40] R. Rockafellar. The multiplier method of Hestenes and Powell applied to convex programming.
Journal of Optimization Theory and Applications, 12(6):555–562, 1973.
[41] V. Roth and B. Fischer. The group-lasso for generalized linear models: uniqueness of solutions and efficient algorithms. In Proceedings of the 25th international conference on Machine
learning, pages 848–855. ACM, 2008.
Data Sets          Algs       CPU (s)     Iters   Avg Sub-iters   F(x)
BreastCancerData   ADAL       6.24e+000   136     1.00e+000       2.9331e+003
                   APLM-S     4.02e+001   12      4.55e+001       2.9331e+003
                   FISTA-p    6.86e+000   12      1.48e+001       2.9331e+003
                   FISTA      5.11e+001   75      1.29e+001       2.9340e+003
                   ProxGrad   7.76e+002   6605    1.00e+000       -

Table 10: Numerical results for Breast Cancer Data using l1/l2-regularization. In this experiment, we kept µ constant at 0.01 for ADAL. The CPU time is for a single run on the entire data set with the value of λ selected to minimize the RMSE in Figure 4.
[42] S. Setzer, G. Steidl, and T. Teuber. Deblurring Poissonian images by split Bregman techniques.
Journal of Visual Communication and Image Representation, 21(3):193–199, 2010.
[43] J. Spingarn. Partial inverse of a monotone operator. Applied mathematics & optimization,
10(1):247–265, 1983.
[44] A. Subramanian, P. Tamayo, V. Mootha, S. Mukherjee, B. Ebert, M. Gillette, A. Paulovich,
S. Pomeroy, T. Golub, E. Lander, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy
of Sciences of the United States of America, 102(43):15545, 2005.
[45] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society. Series B (Methodological), 58(1):267–288, 1996.
[46] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of
background maintenance. In ICCV, page 255. Published by the IEEE Computer Society, 1999.
[47] M. Van De Vijver, Y. He, L. van’t Veer, H. Dai, A. Hart, D. Voskuil, G. Schreiber, J. Peterse,
C. Roberts, M. Marton, et al. A gene-expression signature as a predictor of survival in breast
cancer. New England Journal of Medicine, 347(25):1999, 2002.
[48] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation.
IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[49] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
[50] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.