26 May, 2010 (IMSc)
Limits of Approximation Algorithms
Lec. 13: Approximation Algorithms for Unique Games
Lecturer: Prahladh Harsha
Scribe: Prajakta Nimbhorkar
In the last lecture, we proved an inapproximability result for the MAX-CUT problem.
We also introduced the unique label cover problem and Khot’s unique games conjecture.
Today, we will see how well unique games can be approximated. In particular, we will
see Trevisan’s algorithm for approximating unique games on low-diameter graphs [Tre08].
We will then briefly discuss the algorithm of Arora et al. for approximating unique games
on expanders [AKK+08], and a recent sub-exponential time algorithm for unique games on
general graphs due to Arora et al. [ABS10]. The references for this lecture include the above
cited papers and Lecture 8 of the DIMACS tutorial on Limits of approximation [HC09].
13.1 Recap: Unique Games and the Unique Games Conjecture
An instance of unique games consists of a bipartite graph G = (U, V, E), a label set [m], and a set of permutations π = {π_e | e ∈ E} on the label set; thus for every e = (u, v) ∈ E, π_{(u,v)} : [m] → [m] is a permutation on [m]. The desired output is a labeling A : U ∪ V → [m] that maximizes the number of satisfied edges, where an edge e = (u, v) is satisfied if label(v) = π_{(u,v)}(label(u)). If all the edges can be satisfied, it is easy to find such a labeling: pick a vertex u and fix a label for it; propagating along the edges then forces a unique label on every vertex reachable from u in G. Cycling through all m labels for u (in each connected component) and checking whether any choice satisfies all the edges thus decides complete satisfiability in polynomial time.
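To make the "cycle through all m labels" remark concrete, here is a minimal sketch (not from the notes) of this propagation check. The input format, a list of edges together with a dictionary perms giving π_{(u,v)} for both orientations of every edge, is an assumption made only for illustration.

    from collections import deque

    def fully_satisfiable_labeling(n, edges, perms, m):
        """Decide whether some labeling satisfies *all* edges, by label propagation.
        perms[(u, v)] is pi_{(u,v)} as a list of length m, assumed to be given for
        both orientations of every edge (hypothetical interface, for illustration).
        Returns a satisfying labeling as a dict, or None if none exists."""
        adj = {v: [] for v in range(n)}
        for (u, v) in edges:
            adj[u].append(v)
            adj[v].append(u)

        def propagate(root, root_label):
            # BFS from `root`; each traversed edge forces the label of the next vertex.
            comp = {root: root_label}
            queue = deque([root])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    forced = perms[(u, v)][comp[u]]
                    if v not in comp:
                        comp[v] = forced
                        queue.append(v)
                    elif comp[v] != forced:
                        return None        # two paths force different labels on v
            return comp

        labeling = {}
        for root in range(n):
            if root in labeling:
                continue
            # The root's label is not forced, so cycle through all m choices.
            comp = None
            for guess in range(m):
                comp = propagate(root, guess)
                if comp is not None:
                    break
            if comp is None:
                return None
            labeling.update(comp)
        return labeling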
Therefore, given m and completeness and soundness parameters 1 − ε and δ respectively, the gap version of unique label cover, GAP_{1−ε,δ}-ULC(m), is defined as follows:

    YES = {(G, π) | ∃ a labeling A such that the number of satisfied edges ≥ (1 − ε)|E|}
    NO  = {(G, π) | ∀ labelings A, the number of satisfied edges ≤ δ|E|}

The goal is to distinguish between the YES and NO instances.
Unique Games Conjecture ([Kho02]). For all ε, δ > 0, there exists m such that GAP_{1−ε,δ}-ULC(m) does not have a polynomial-time algorithm.
13.2 Approximation Algorithms for Unique Games
The purpose behind designing approximation algorithms is to refute the UGC. Thus, for some ε, δ, and for every m, we want an algorithm A such that, if the input ULC instance is (1 − ε)-satisfiable, then A outputs a labeling that satisfies at least δ|E| edges. Such an A works on highly satisfiable instances and separates the YES and NO instances. (Recall that deciding complete satisfiability is trivial.)
We first consider the special case of MAX-CUT, where the label set has size m = 2. The instance consists of a graph G = (V, E) together with the promise that either it has a cut containing at least a (1 − ε)-fraction of the edges, or every cut in G contains less than a δ-fraction of the edges. We have seen the randomized algorithm for MAX-CUT that puts each vertex into one of the two parts with equal probability; in expectation it cuts half the edges and hence gives a 1/2-approximation. The Goemans-Williamson algorithm gives an α_GW ≈ 0.87856 approximation. The question is whether we can do better when we have the above promise. We will see that the Goemans-Williamson algorithm performs better on graphs with large cuts.
Lemma 13.2.1. There exist constants ε₀ ∈ (0, 1) and c such that for all ε < ε₀, if MAX-CUT(G) ≥ (1 − ε)|E|, then the Goemans-Williamson algorithm outputs a cut that contains at least (1 − c√ε)|E| edges.
Proof. Recall that the algorithm involves solving the following SDP relaxation:

    maximize    Z = E_{(i,j)∈E}[ (1 − ⟨v_i, v_j⟩)/2 ]
    subject to  ‖v_i‖² = 1   ∀i,

where v_i ∈ Rⁿ and the expectation is over a uniformly random edge. If MAX-CUT(G) ≥ (1 − ε)|E|, then Z ≥ 1 − ε. We show that the Goemans-Williamson rounding gives a solution that satisfies the condition of the lemma. Recall that the rounding picks a random hyperplane through the origin and partitions the vectors of the SDP solution according to which side of the hyperplane they lie on.
The probability that an edge (i, j) is cut is cos⁻¹(⟨v_i, v_j⟩)/π, so the expected fraction of edges cut is

    E[cut_GW(G)] = E_{(i,j)∈E}[ cos⁻¹(⟨v_i, v_j⟩)/π ].

For an edge e = (i, j), let x_e = (1 − ⟨v_i, v_j⟩)/2 and let y = h(x) = cos⁻¹(1 − 2x)/π, i.e., h written in terms of x = (1 − ρ)/2 with ρ = ⟨v_i, v_j⟩. Consider the function h as defined above. It is easy to check that there exist constants ε′₀ ∈ (0, 1) and c such that for all ε < ε′₀, if x ≥ 1 − ε then y = h(x) ≥ 1 − c√ε. Thus, if each of the x_e's in the expectation satisfied x_e ≥ 1 − ε, we would be done.
However, we only have the promise that E[x_e] ≥ 1 − ε. If the function h were convex, Jensen's inequality would give E[h(x_e)] ≥ h(E[x_e]), which is the desired bound. Unfortunately this is not the case. So let h̃ be the largest convex function lying below h. It is easy to check that there exists an ε″₀ such that for all x ≥ 1 − ε″₀ we have h(x) = h̃(x) (see Figure 13.2). Let ε₀ = min{ε′₀, ε″₀}. We then have

    E[cut] = E[h(x_e)] ≥ E[h̃(x_e)] ≥ h̃(E[x_e]) = h(E[x_e]),

where the second inequality is Jensen's inequality for the convex function h̃, and the last equality holds because h and h̃ agree in the large-cut region, that is, the region close to 1 (see Figure 13.2). Since E[x_e] ≥ 1 − ε and ε < ε₀ ≤ ε′₀, the right-hand side is at least 1 − c√ε, completing the proof.
Thus the Goemans-Williamson algorithm finds a cut containing at least (1 − c√ε)|E| edges whenever there is a cut containing at least (1 − ε)|E| edges. We saw that, for all ρ and ε′, GAP_{(1−ρ)/2 − ε′, cos⁻¹(ρ)/π + ε′}-MAX-CUT is UG-hard, whereas here we have a polynomial-time algorithm for GAP_{1−ε, 1−c√ε}-MAX-CUT. Thus a slight improvement over the Goemans-Williamson algorithm would imply a refutation of the unique games conjecture.
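As a quick numerical sanity check of the key fact used in the proof of Lemma 13.2.1 (not part of the original notes), the following sketch evaluates h(1 − ε) and compares it with 1 − c√ε for the concrete choice c = 1; asymptotically h(1 − ε) ≈ 1 − (2/π)√ε, so c = 1 is a comfortable over-estimate on this range.

    import math

    def h(x):
        # Probability that hyperplane rounding cuts an edge whose SDP
        # contribution is x = (1 - <v_i, v_j>)/2.
        return math.acos(1.0 - 2.0 * x) / math.pi

    # Check h(1 - eps) >= 1 - c*sqrt(eps) for c = 1 on a range of small eps.
    for eps in [10.0 ** (-k) for k in range(1, 7)]:
        lower = 1.0 - math.sqrt(eps)
        assert h(1.0 - eps) >= lower
        print(f"eps={eps:.0e}  h(1-eps)={h(1.0 - eps):.6f}  1-sqrt(eps)={lower:.6f}")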
Figure 1: Plot of h(x) = cos⁻¹(1 − 2x)/π, where x = (1 − ρ)/2, and of h̃(x), the largest convex function less than or equal to h pointwise.

13.3 Trevisan's Algorithm for Approximating Unique Games
Now we will see Trevisan's algorithm for approximating unique games [Tre08]. This algorithm satisfies a constant fraction of the edges for sub-constant values of ε. It works by solving and rounding an SDP relaxation of unique games.
SDP relaxation for unique games: The SDP has a variable v_i for each vertex v of the unique games instance and each label i ∈ [m]. In the intended integral solution, v_i = 1 if the vertex v is labelled i. The integer program is:

    maximize    Σ_{e=(u,v)∈E} Σ_{i∈[m]} u_i v_{π_e(i)}
    subject to  v_i ∈ {0, 1}            ∀v, ∀i ∈ [m]
                Σ_{i∈[m]} v_i = 1       ∀v
                v_i v_j = 0             ∀v, ∀i ≠ j ∈ [m]
The corresponding SDP relaxation is

    maximize    Σ_{e=(u,v)∈E} Σ_{i∈[m]} ⟨u_i, v_{π_e(i)}⟩
    subject to  ⟨v_i, v_j⟩ = 0            ∀v, ∀i ≠ j ∈ [m]
                Σ_{i∈[m]} ‖v_i‖² = 1      ∀v
                ⟨u_i, v_j⟩ ≥ 0            ∀u, v, i, j

Thus, for each vertex, we have m mutually orthogonal vectors whose squared ℓ₂ norms sum up to 1.
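For concreteness, here is a minimal sketch (not from the notes) of how this basic relaxation could be written down with cvxpy, encoding the vectors through their Gram matrix. The interface (vertices numbered 0..n−1, perms[(u, v)] giving π_e as a list) is an assumption made for illustration, and the triangle inequalities added next are omitted.

    import cvxpy as cp

    def unique_games_sdp(n, m, edges, perms):
        """Basic SDP relaxation of unique games (without triangle inequalities).
        edges is a list of pairs (u, v) with u, v in range(n); perms[(u, v)] is
        pi_e as a list of length m.  Hypothetical interface, for illustration."""
        idx = lambda v, i: v * m + i                   # flatten (vertex, label) pairs
        X = cp.Variable((n * m, n * m), PSD=True)      # Gram matrix of the vectors
        cons = [X >= 0]                                # <u_i, v_j> >= 0 for all pairs
        for v in range(n):
            # squared norms of v's vectors sum to 1
            cons.append(sum(X[idx(v, i), idx(v, i)] for i in range(m)) == 1)
            # the m vectors of v are mutually orthogonal
            cons += [X[idx(v, i), idx(v, j)] == 0
                     for i in range(m) for j in range(i + 1, m)]
        objective = sum(X[idx(u, i), idx(v, perms[(u, v)][i])]
                        for (u, v) in edges for i in range(m))
        prob = cp.Problem(cp.Maximize(objective), cons)
        prob.solve()
        return X.value     # Gram matrix; actual vectors via Cholesky/eigendecomposition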
Note that we can tighten the above SDP by adding any constraints that are satisfied by the feasible solutions of the integer program. Trevisan, in particular, adds the following triangle inequalities, which are easily verified for feasible integer solutions:

    ‖w_h − u_i‖² ≤ ‖w_h − v_j‖² + ‖v_j − u_i‖²
    ‖v_j − u_i‖² ≥ ‖v_j‖² − ‖u_i‖²

for every u, v, w and i, j, h.
Using the identity ‖a − b‖² = ‖a‖² + ‖b‖² − 2⟨a, b⟩, the objective function becomes

    Σ_{e=(u,v)∈E} ( 1 − (1/2) Σ_{i∈[m]} ‖u_i − v_{π_e(i)}‖² ).

13.3.1 Rounding the SDP
We now need to round the SDP solution to get an integral solution. Observe that Σ_{i∈[m]} ‖u_i‖² = 1, i.e., the squared ℓ₂ norms of the u_i's define a probability distribution on the set of labels for u. This suggests a natural rounding algorithm: for each vertex u, pick label i with probability ‖u_i‖².
However, the above rounding procedure ignores possible correlations across edges. For instance, suppose an edge e = (u, v) contributes 1 to the SDP solution, or equivalently u_i = v_{π_e(i)} for all i. This means that the set of vectors is the same for u and v, only in a permuted order. Thus, if we have chosen label i for u (which happens with probability ‖u_i‖²), it makes sense to choose for v the label j such that u_i = v_j, as this ensures that j = π_e(i) and edge e is satisfied.

Let us see how this idea generalizes when the edge e = (u, v) ∈ E contributes only 1 − ε to the SDP solution. In this case Σ_{i∈[m]} ‖u_i − v_{π_e(i)}‖² ≤ 2ε: even though the bundles of vectors for u and v are not identical, the two bundles are "close". Thus, if we pick a vector u_i for u, we would like to pick for v the vector v_j such that ‖u_i − v_j‖² is minimized. More formally, we do the following (a short code sketch follows the two steps).
Edge-level rounding for edge e = (u, v):
1. For u, pick i with probability ‖u_i‖² and label u with i.
2. For v, pick j such that ‖v_j − u_i‖² is minimized, and label v with j (breaking ties arbitrarily).
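A minimal sketch of this edge-level rounding in code, assuming the SDP vectors of u and v are given as the rows of m × D numpy arrays (a layout chosen only for illustration):

    import numpy as np

    def round_edge(U, V, rng=None):
        """Edge-level rounding for a single edge e = (u, v).
        U and V are m x D arrays whose rows are the SDP vectors u_1, ..., u_m
        and v_1, ..., v_m (hypothetical layout, for illustration only).
        Returns the chosen pair (label(u), label(v))."""
        rng = rng or np.random.default_rng()
        probs = (U * U).sum(axis=1)                        # ||u_i||^2; these sum to 1
        i = int(rng.choice(len(probs), p=probs / probs.sum()))   # step 1
        dists = ((V - U[i]) ** 2).sum(axis=1)              # ||v_j - u_i||^2 for each j
        j = int(np.argmin(dists))                          # step 2: closest v_j to u_i
        return i, j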
We need to show that this rounding is likely to satisfy π_e. This is the content of the following claim.

Claim 13.3.1. If Σ_{i∈[m]} ‖u_i − v_{π_e(i)}‖² ≤ 2ε, then Pr[label(v) ≠ π_e(label(u))] ≤ 4ε.
Proof. Note that the randomness is only in Step 1. Define BAD ⊆ [m] by letting i ∈ BAD if there exists j ≠ π_e(i) such that ‖u_i − v_j‖² ≤ ‖u_i − v_{π_e(i)}‖². We will show that for every i ∈ BAD we have ‖u_i‖² ≤ 2‖u_i − v_{π_e(i)}‖². Assuming this, the claim follows:

    Pr[label(v) ≠ π_e(label(u))] ≤ Σ_{i∈BAD} ‖u_i‖² ≤ 2 Σ_{i∈BAD} ‖u_i − v_{π_e(i)}‖² ≤ 4ε.

Now to prove the assumption. Write a = u_i, b₁ = v_{π_e(i)}, b₂ = v_j. We have ‖a − b₂‖² ≤ ‖a − b₁‖² and ⟨b₁, b₂⟩ = 0, and we want to conclude that ‖a‖² ≤ 2‖a − b₁‖².
Case 1: ‖b₁‖² ≤ ½‖a‖². The inequality follows since ‖a − b₁‖² ≥ ‖a‖² − ‖b₁‖² (one of the SDP triangle inequalities).
Case 2: ‖b₂‖² ≤ ½‖a‖². The inequality follows since ‖a − b₁‖² ≥ ‖a − b₂‖² ≥ ‖a‖² − ‖b₂‖².
Case 3: ‖b₁‖², ‖b₂‖² ≥ ½‖a‖². As b₁ and b₂ are orthogonal, ‖b₁ − b₂‖² = ‖b₁‖² + ‖b₂‖² ≥ ‖a‖². But by the (other) triangle inequality, ‖b₁ − b₂‖² ≤ ‖a − b₁‖² + ‖a − b₂‖² ≤ 2‖a − b₁‖².
This completes the proof of the claim.
However, the above only tells us how to round so that a particular edge is likely to be satisfied; we need a global rounding procedure. We first show how the above idea easily generalizes to low-diameter graphs, and then show how any graph can be decomposed into low-diameter components by discarding a small fraction of the edges.
13.3.2 Approximating unique games on low-diameter graphs
We now give an approximation algorithm for unique games when the underlying graph has low diameter (in fact, low radius) [Tre08]. Assume there is a vertex r ∈ V such that for each vertex u, d_G(u, r) ≤ d, where d_G denotes the distance in the graph G and d is the radius of G. The algorithm solves the SDP and then uses an appropriate rounding scheme to get an approximate solution for the unique games instance. The idea is to choose a randomized rounding for r and propagate it to all the other vertices.
Rounding:
1. Choose label i for r with probability ‖r_i‖².
2. For each v ∈ V, label v with the j that minimizes ‖v_j − r_i‖² (breaking ties arbitrarily).
We now show that if the contribution of every edge to the SDP solution is large, then for each edge e = (u, v), the probability that the above rounding satisfies the constraint π_e is also large.

Lemma 13.3.2. If each edge contributes at least 1 − ε/(8(d + 1)) to the SDP solution, then for every edge e = (u, v) ∈ E, Pr[e is satisfied] ≥ 1 − ε.
Proof. Let w₀ = r, w₁, . . . , w_t = u be a path from r to u of length t ≤ d. Define the permutation π_u to be the composition of the permutations along this path, i.e., π_u = π_{(w_{t−1},w_t)} ∘ π_{(w_{t−2},w_{t−1})} ∘ · · · ∘ π_{(w₀,w₁)}, and let π_v = π_{(u,v)} ∘ π_u. Clearly, edge e is satisfied if label(u) = π_u(label(r)) and label(v) = π_v(label(r)).

By assumption, every edge e′ = (u′, v′) satisfies Σ_{i∈[m]} ‖u′_i − v′_{π_{e′}(i)}‖² ≤ ε/(4(d + 1)). Applying the triangle-inequality constraints of the SDP along the path from r to u (at most d edges) and along the path from r to v through u (at most d + 1 edges), and reindexing using the intermediate permutations, we get

    Σ_{i∈[m]} ‖r_i − u_{π_u(i)}‖² ≤ ε/4,        (13.3.1)
    Σ_{i∈[m]} ‖r_i − v_{π_v(i)}‖² ≤ ε/4.        (13.3.2)

By Claim 13.3.1 (whose proof uses only the SDP constraints and hence applies with r, u, π_u in place of u, v, π_e), (13.3.1) gives Pr[π_u(label(r)) ≠ label(u)] ≤ ε/2, and similarly (13.3.2) gives Pr[π_v(label(r)) ≠ label(v)] ≤ ε/2. A union bound now shows Pr[e is satisfied] ≥ 1 − ε, and the lemma follows.
13.3.3 Approximating unique games on general graphs
If G is a general graph (not necessarily of low diameter), we decompose it into low-radius components without discarding too many edges; in particular, we discard at most an O(ε)-fraction of the edges.

Graph decomposition: Given a graph G, each low-radius component is grown as follows.
1. Start with a vertex u.
2. Let G₀ = {u} ∪ N(u). The idea is to expand by one BFS layer at a time and to examine the number of newly added edges.
3. Fix some α > 1. While |E(G_i ∪ N(G_i))| > α|E(G_i)|, set G_{i+1} = G_i ∪ N(G_i) and i ← i + 1.
4. Output G_i. (A code sketch of this ball-growing procedure is given below.)
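A minimal sketch of the ball-growing step and of peeling off components one by one, under the assumption that the graph is given as an adjacency-list dict with both directions of every edge listed; this follows the spirit of the procedure above rather than any exact pseudocode from [Tre08].

    def grow_component(adj, start, alpha):
        """Grow one low-radius component by BFS layers, starting from `start`,
        as long as adding the next layer multiplies the internal edge count by
        more than `alpha`.  Returns the component's vertex set."""
        comp = {start} | set(adj[start])              # G_0 = {u} ∪ N(u)

        def internal_edges(S):
            # each undirected edge inside S is counted twice, then halved
            return sum(1 for v in S for w in adj[v] if w in S) // 2

        while True:
            frontier = {w for v in comp for w in adj[v]} - comp
            grown = comp | frontier
            if internal_edges(grown) > alpha * internal_edges(comp):
                comp = grown                          # keep expanding by one layer
            else:
                return comp

    def decompose(adj, alpha):
        """Repeatedly peel off low-radius components; edges between different
        components are the discarded ones."""
        remaining = dict(adj)
        components = []
        while remaining:
            start = next(iter(remaining))
            comp = grow_component(remaining, start, alpha)
            components.append(comp)
            # remove the component's vertices from the remaining graph
            remaining = {v: [w for w in nbrs if w not in comp]
                         for v, nbrs in remaining.items() if v not in comp}
        return components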
Thus a component keeps expanding as long as each expansion adds more than a constant factor of new edges. If the radius of the component is d, then the number of edges in the component is at least α^d. Since this can be at most |E|, we must have d ≤ log|E| / log α. Thus the radius of each component is at most log|E| / log α = O(log n) for constant α.

Let E_i be the number of edges cut while forming the i-th component and F_i the number of edges inside that component. When the expansion stops we have E_i + F_i ≤ αF_i, so the component contains at least F_i ≥ E_i/(α − 1) edges. The total number of edges satisfies |E| ≥ Σ_i F_i ≥ (1/(α − 1)) Σ_i E_i, so the total number of edges cut is Σ_i E_i ≤ (α − 1)|E|. We choose α = 1 + ε′. This gives a decomposition of the graph into components of radius at most log|E| / log(1 + ε′) = O(log|E| / ε′), with at most ε′|E| inter-component edges.
The full algorithm is given by the following steps (assume the SDP optimum is at least (1 − η)|E|):

1. Remove all edges that contribute less than 1 − 3η/ε to the SDP solution. (There are at most (ε/3)|E| such edges.)
2. Decompose the remaining graph into components of radius O(log n / ε) by discarding at most (ε/3)|E| edges.
3. Run Trevisan's low-radius rounding (Section 13.3.2) on each component.
The rounding of Section 13.3.2 needs every remaining edge to contribute at least 1 − ε/(8(d + 1)); since d = O(log n / ε), it suffices that 3η/ε ≤ ε²/(C log n) for a suitable constant C. Thus, if the unique label cover instance is (1 − η)-satisfiable with η = cε³/log n for some constant c, we can find a labeling that satisfies a (1 − ε)-fraction of the edges. Note that for expanders, whose radius is already O(log n) without any decomposition, the same argument gives η = cε²/log n.
13.4 Approximating unique games on expanders
There are better approximations known for unique games when the underlying graph is an expander [AKK+08, KT07, Kol10]. We give an overview of these results.
Lemma 13.4.1 ([AKK+08]). If a ULC instance (G, π) is (1 − ε)-satisfiable and G has spectral expansion λ, then there is a polynomial-time algorithm that finds a labeling satisfying a (1 − Õ(ε/λ))-fraction of the edges.
Let λ₂ be the second smallest eigenvalue of the Laplacian of G, i.e., λ₂ = min_{x ⊥ 1} xᵀLx / xᵀx, where L is the Laplacian matrix of G. Let v₁, . . . , vₙ be the vertices of G and associate vectors z₁, . . . , zₙ ∈ Rⁿ with the vertices. It is known that (up to normalization by the average degree)

    λ₂ = min_{z₁,...,zₙ ∈ Rⁿ}  E_{(i,j)∈E}[‖z_i − z_j‖²] / E_{(i,j)∈V²}[‖z_i − z_j‖²].

We will use this characterization of λ₂.
The algorithm consists of solving an SDP and then rounding the solution to get labels for the vertices. The SDP is the same as in the previous section, except that among the additional constraints only ⟨u_i, v_j⟩ ≥ 0 is used (Trevisan's triangle inequalities are not needed).
We describe the rounding here. Given the sets of vectors for a pair u, v, we want to determine a permutation σ_{uv} : [m] → [m] that minimizes ρ_{uv} = Σ_{i∈[m]} ‖u_i − v_{σ_{uv}(i)}‖². This is the best permutation aligning the u_i's with the v_j's; finding it is a minimum-weight bipartite matching problem and hence can be solved exactly (a short code sketch follows below). The rounding procedure has the following steps:

1. Pick u ∈ V at random and label u by i with probability ‖u_i‖².
2. For each v ∈ V, label v by σ_{uv}(i).
The analysis involves showing that the SDP value is large if and only if E_{e=(u,v)∈E}[ Σ_{i∈[m]} ‖u_i − v_{π_e(i)}‖² ] ≤ ε.
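Computing σ_{uv} is a small assignment problem. Here is a minimal sketch (not from the notes) using scipy's linear-sum-assignment routine, again assuming the SDP vectors of u and v are given as rows of m × D arrays:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def best_alignment(U, V):
        """Find the permutation sigma_uv minimizing sum_i ||u_i - v_sigma(i)||^2
        by minimum-weight bipartite matching.  U and V are m x D arrays whose
        rows are the SDP vectors of u and v (hypothetical layout, for
        illustration).  Returns (sigma, rho_uv)."""
        # cost[i, j] = ||u_i - v_j||^2
        cost = ((U[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)     # Hungarian-style matching
        sigma = np.empty(len(rows), dtype=int)
        sigma[rows] = cols                           # sigma[i] = label of v matched to i
        rho = cost[rows, cols].sum()
        return sigma, rho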
Improvements and generalizations:
1. [KT07, Kol10]: This is a generalization of the algorithm of [AKK+08] to the case where there are k eigenvalues of (the normalized adjacency matrix of) G lying between some constant λ and 1. Note that k = 1 in the above algorithm. Define rank_λ(G) = the number of eigenvalues of G between λ and 1. If k = n^ε, [Kol10] gives a sub-exponential-time algorithm.
2. [ABS10]: A few weeks ago, Arora, Barak and Steurer gave a sub-exponential-time algorithm for unique games building on the above ideas. This paper describes a way to decompose a graph into components, each having rank_λ = n^ε, by discarding at most an ε-fraction of the edges. The algorithm is iterative: an SDP for the (i + 1)-st largest eigenvalue is written from the solution of the SDP for the i-th largest eigenvalue. The result is given in the following lemma:

Lemma 13.4.2 ([ABS10]). If G is (1 − ε)-satisfiable, then the algorithm finds a labeling that satisfies at least a 1/2-fraction of the edges. Moreover, the algorithm runs in time 2^{n^ε}.
References

[ABS10] Sanjeev Arora, Boaz Barak, and David Steurer. Subexponential algorithms for unique games and related problems, 2010. (manuscript).

[AKK+08] Sanjeev Arora, Subhash Khot, Alexandra Kolla, David Steurer, Madhur Tulsiani, and Nisheeth K. Vishnoi. Unique games on expanding constraint graphs are easy: extended abstract. In Proc. 40th ACM Symp. on Theory of Computing (STOC), pages 21-28, 2008. doi:10.1145/1374376.1374380.

[HC09] Prahladh Harsha and Moses Charikar. Limits of approximation algorithms: PCPs and unique games, 2009. (DIMACS Tutorial, July 20-21, 2009). arXiv:1002.3864.

[Kho02] Subhash Khot. On the power of unique 2-prover 1-round games. In Proc. 34th ACM Symp. on Theory of Computing (STOC), pages 767-775, 2002. doi:10.1145/509907.510017.

[Kol10] Alexandra Kolla. Spectral algorithms for unique games. In Proc. 25th IEEE Conference on Computational Complexity, pages 122-130, 2010. eccc:TR10-029, doi:10.1109/CCC.2010.20.

[KT07] Alexandra Kolla and Madhur Tulsiani. Playing random and expanding unique games, 2007. (manuscript).

[Tre08] Luca Trevisan. Approximation algorithms for unique games. Theory of Computing, 4(1):111-128, 2008. (Preliminary version in 46th FOCS, 2005). eccc:TR05-034, doi:10.4086/toc.2008.v004a005.