arXiv:1012.0952v1 [cs.NE] 4 Dec 2010
Faster Black-Box Algorithms Through Higher Arity Operators
Benjamin Doerr
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken, Germany
Timo Kötzing
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken, Germany∗
Daniel Johannsen
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken, Germany
Per Kristian Lehre
DTU Informatics
Technical University of Denmark
2800 Lyngby, Denmark†
Markus Wagner
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken, Germany
Carola Winzen
Max-Planck-Institut für Informatik
Campus E1 4
66123 Saarbrücken, Germany‡
Abstract
We extend the work of Lehre and Witt (GECCO 2010) on the unbiased black-box model by considering higher arity variation operators. In particular, we show that already for binary operators the black-box complexity of LeadingOnes drops from Θ(n²) for unary operators to O(n log n). For OneMax, the Ω(n log n) unary black-box complexity drops to O(n) in the binary case. For k-ary operators, k ≤ n, the OneMax-complexity further decreases to O(n/log k).

∗ Timo Kötzing was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant NE 1182/5-1.
† Supported by the Deutsche Forschungsgemeinschaft (DFG) under grant no. WI 3552/1-1.
‡ Carola Winzen is a recipient of the Google Europe Fellowship in Randomized Algorithms, and this research is supported in part by this Google Fellowship.
1 Introduction
When we analyze the optimization time of randomized search heuristics, we typically assume that the heuristic does not know anything about the objective function apart from its membership in a large class of functions, e.g., linear or monotone functions. Thus, the function is typically considered to be given as a black box, i.e., in order to optimize the function, the algorithm needs to query the function values of various search points. The algorithm may then use the information on the function values to create new search points. We call the minimum number of function evaluations needed for a randomized search heuristic to optimize any function f of a given function class F the black-box complexity of F. We may restrict the algorithms with respect to how they create new search points from the information collected in previous steps. Intuitively, the stronger the restrictions imposed on which search points the algorithm can query next, the larger the black-box complexity of the function class.
Black-box complexity for search heuristics was introduced in 2006 by
Droste, Jansen, and Wegener [DJW06]. We call their model the unrestricted
black-box model as it imposes few restrictions on how the algorithm may create new search points from the information at hand. This model was the first
attempt towards creating a complexity theory for randomized search heuristics. However, the authors prove bounds that deviate from those known for
well-studied search heuristics, such as random local search and evolutionary
algorithms. For example, the well-studied function class OneMax has an
unrestricted black-box complexity of Θ(n/log n), whereas standard search heuristics only achieve an Ω(n log n) runtime. Similarly, the class LeadingOnes has a linear unrestricted black-box complexity, but we typically observe an Ω(n²) behavior for standard heuristics.

Model          Arity        OneMax                    LeadingOnes
unbiased       1            Θ(n log n)   [LW10]       Θ(n²)        [LW10]
unbiased       1 < k ≤ n    O(n/log k)   (here)       O(n log n)   (here)
unrestricted   n/a          Ω(n/log n)   [DJW06]      Ω(n)         [DJW06]
                            O(n/log n)   [AW09]

Table 1: Black-box complexity of OneMax and LeadingOnes. Note that upper bounds for the unbiased unary black-box complexity immediately carry over to higher arities. Similarly, lower bounds for the unrestricted black-box model also hold for the unbiased model.
These gaps, among other reasons, motivated Lehre and Witt [LW10]
to propose an alternative model. In their unbiased black-box model the
algorithm may only invoke a so-called unbiased variation operator to create
new search points. A variation operator returns a new search point given one
or more previous search points. Now, intuitively, the unbiasedness condition
implies that the variation operator is symmetric with respect to the bit
values and bit positions. Or, to be more precise, it must be invariant under
Hamming-automorphisms. We give a formal definition of the two black-box
models in Section 2.
Among other problem instances, Lehre and Witt analyze the unbiased black-box complexity of the two function classes OneMax and LeadingOnes. They show that the complexities of OneMax and LeadingOnes match the above-mentioned Θ(n log n) and Θ(n²) bounds, respectively, if we only allow unary operators. That is, if the variation operator may only use the information from at most one previously queried search point, the unbiased black-box complexity matches the runtime of the well-known (1 + 1) Evolutionary Algorithm.
In their first work, Lehre and Witt give no results on the black-box
complexity of higher arity models. A variation operator is said to be of arity
k if it creates new search points by recombining up to k previously queried
search points. We are interested in higher arity black-box models because
they include commonly used search heuristics which are not covered by the
unary operators. Among such heuristics are evolutionary algorithms that
employ uniform crossover, particle swarm optimization [KE01], ant colony
optimization [DS04] and estimation of distribution algorithms [LL02].
Although search heuristics that employ higher arity operators are poorly
understood from a theoretical point of view, there are some results proving
that situations exist where higher arity is helpful. For example, Doerr, Happ, and Klein [DHK08] show that a concatenation operator reduces the runtime
on the all-pairs shortest path problem. Refer to the same paper for further
references.
Extending the work of Lehre and Witt, we analyze higher arity black-box complexities of OneMax and LeadingOnes. In particular, we show that, surprisingly, the unbiased black-box complexity drops from Θ(n²) in the unary case to O(n log n) for LeadingOnes and from Θ(n log n) to an at most linear complexity for OneMax. As the bounds for unbiased unary black-box complexities immediately carry over to all higher arity unbiased black-box complexities, we see that increasing the arity of the variation operators provably helps to decrease the complexity. We are optimistic that the ideas developed to prove the bounds can be further exploited to achieve reduced black-box complexities also for other function classes.
In this work, we also prove that increasing the arity even further helps again. In particular, we show that for every k ≤ n, the unbiased k-ary black-box complexity of OneMax can be bounded by O(n/log k). This bound is asymptotically optimal for k = n, because the unbiased black-box complexity can always be bounded below by the unrestricted black-box complexity, which is known to be Ω(n/log n) for OneMax [DJW06].
Note that a comparison between the unrestricted black-box complexity
and the unbiased black-box complexity of LeadingOnes cannot be derived
that easily. The linear unrestricted black-box complexity mentioned above is only known to hold for a subclass of the class LeadingOnes
considered in this work.
Table 1 summarizes the results obtained in this paper, and provides a
comparison with known results on black-box complexity of OneMax and
LeadingOnes.
2 Unrestricted and Unbiased Black-Box Complexities
In this section, we formally define the two black-box models by Droste,
Jansen, and Wegener [DJW06], and Lehre and Witt [LW10]. We call the
first model the unrestricted black-box model, and the second model the unbiased black-box model. Each model specifies a class of algorithms. The
black-box complexity of a function class is then defined with respect to the
algorithms specified by the corresponding model. We start by describing the
two models, then provide the corresponding definitions of black-box complexity.
In both models, one is faced with a class of pseudo-Boolean functions F
that is known to the algorithm. An adversary chooses a function f from
this class. The function f itself remains unknown to the algorithm. The
algorithm can only gain knowledge about the function f by querying an
oracle for the function value of search points. The goal of the algorithm
is to find a globally optimal search point for the function. Without loss of
generality, we consider maximization as objective. The two models differ in
the information available to the algorithm, and the search points that can
be queried.
Let us begin with some notation. Throughout this paper, we consider the
maximization of pseudo-Boolean functions f : {0, 1}n → R. In particular, n
will always denote the length of the bitstring to be optimized. For a bitstring
x ∈ {0, 1}n , we write x = x1 · · · xn . For convenience, we denote the positive
integers by N. For k ∈ N, we use the notation [k] as a shorthand for the set {1, . . . , k}. Analogously, we define [0..k] := [k] ∪ {0}. Let S_k denote the set of all permutations of [k]. With slight abuse of notation, we write σ(x) := x_{σ(1)} · · · x_{σ(n)} for σ ∈ S_n. The bitwise exclusive-or is denoted by ⊕. For any bitstring x, we denote its complement by x̄. Finally, we use standard notation for asymptotic growth of functions (see, e.g., [CLRS01]). In particular, we denote by o_n(g(n)) the set of all functions f that satisfy lim_{n→∞} f(n)/g(n) = 0.
The unrestricted black-box model contains all algorithms which can be
formalized as in Algorithm 1. A basic feature is that this scheme does
not force any relationship between the search points of subsequent queries.
Thus, this model contains a broad class of algorithms.
Algorithm 1: Unrestricted Black-Box Algorithm
1  Choose a probability distribution p_0 on {0, 1}^n.
2  Sample x^0 according to p_0 and query f(x^0).
3  for t = 1, 2, 3, . . . until termination condition met do
4    Depending on ((x^0, f(x^0)), . . . , (x^{t−1}, f(x^{t−1}))), choose
5    a probability distribution p_t on {0, 1}^n.
6    Sample x^t according to p_t, and query f(x^t).
To exclude some algorithms whose behavior does not resemble that of typical search heuristics, one can impose further restrictions. The unbiased black-box model introduced in [LW10] restricts Algorithm 1 in two ways. First, the decisions made by the algorithm may only depend on the observed fitness values, and not on the actual search points. Second, the algorithm can only query search points obtained by variation operators that are unbiased in the following sense. By imposing these two restrictions, the black-box complexity matches the runtime of popular search heuristics on example functions.
Definition 1 (Unbiased k-ary variation operator [LW10]). Let k ∈ N. An unbiased k-ary distribution D(· | x^1, . . . , x^k) is a conditional probability distribution over {0, 1}^n such that for all bitstrings y, z ∈ {0, 1}^n and each permutation σ ∈ S_n, the following two conditions hold.
(i) D(y | x^1, . . . , x^k) = D(y ⊕ z | x^1 ⊕ z, . . . , x^k ⊕ z),
(ii) D(y | x^1, . . . , x^k) = D(σ(y) | σ(x^1), . . . , σ(x^k)).
An unbiased k-ary variation operator p is a k-ary operator which samples its output according to an unbiased k-ary distribution.
The first condition in Definition 1 is referred to as ⊕-invariance, and
the second condition is referred to as permutation invariance. Note that
the combination of these two conditions can be characterized as invariance
under Hamming-automorphisms: D(· | x^1, . . . , x^k) is unbiased if and only if, for all α : {0, 1}^n → {0, 1}^n preserving the Hamming distance and all bitstrings y, D(y | x^1, . . . , x^k) = D(α(y) | α(x^1), . . . , α(x^k)). We refer to 1-ary and 2-ary variation operators as unary and binary variation operators,
respectively. The unbiased k-ary black-box model contains all algorithms which follow the scheme of Algorithm 2. While being a restriction of the unrestricted model, the unbiased model still captures the most widely studied search heuristics, including most evolutionary algorithms, simulated annealing, and random local search.
Note that in line 5 of Algorithm 2, the points y^1, . . . , y^k need not be the k most recently queried search points. That is, the algorithm is allowed to choose any k previously sampled search points.
We now define black-box complexity formally. We will use query complexity as the cost model, where the algorithm is only charged for queries to the oracle, and all other computation is free. The runtime T_{A,f} of a randomized algorithm A on a function f ∈ F is hence the expected number of oracle queries until the optimal search point is queried for the first time. The expectation is taken with respect to the random choices made by the algorithm.
Definition 2 (Black-box complexity). The complexity of a class of pseudo-Boolean functions F with respect to a class of algorithms A is defined as T_A(F) := min_{A∈A} max_{f∈F} T_{A,f}.
The unrestricted black-box complexity is the complexity with respect to
the algorithms covered by Algorithm 1. For any given k ∈ N, the unbiased
k-ary black-box complexity is the complexity with respect to the algorithms
covered by Algorithm 2. Furthermore, the unbiased ∗-ary black-box complexity is the complexity with respect to the algorithms covered by Algorithm 2,
without limitation on the arity of the operators used.
It is easy to see that every unbiased k-ary operator p can be simulated by an unbiased (k+1)-ary operator p′ defined as p′(z | x^1, . . . , x^k, x^{k+1}) := p(z | x^1, . . . , x^k). Hence, the unbiased k-ary black-box complexity is an upper bound for the unbiased (k+1)-ary black-box complexity. Similarly,
the set of unbiased black-box algorithms for any arity is contained in the set
of unrestricted black-box algorithms. Therefore, the unrestricted black-box
complexity is a lower bound for the unbiased k-ary black-box complexity
(for all k ∈ N).
3 The Unbiased ∗-Ary Black-Box Complexity of OneMax
In this section, we show that the unbiased black-box complexity of OneMax
is Θ(n/ log n) with a leading constant between one and two. We begin with
the formal definition of the function class OneMaxn . We will omit the
subscript “n” if the size of the input is clear from the context.
Algorithm 2: Unbiased k-ary Black-Box Algorithm
1  Sample x^0 uniformly at random from {0, 1}^n and query f(x^0).
2  for t = 1, 2, 3, . . . until termination condition met do
3    Depending on (f(x^0), . . . , f(x^{t−1})), choose
4    an unbiased k-ary variation operator p_t, and
5    k previously queried search points y^1, . . . , y^k.
6    Sample x^t according to p_t(y^1, . . . , y^k), and query f(x^t).
Definition 3 (OneMax). For all n ∈ N and each z ∈ {0, 1}^n we define Om_z : {0, 1}^n → N, x ↦ |{j ∈ [n] | x_j = z_j}|.¹ The class OneMax_n is defined as OneMax_n := {Om_z | z ∈ {0, 1}^n}.

¹ Intuitively, Om_z maps each x to n minus the Hamming distance between x and z.
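For concreteness, Om_z can be written in one line of Python (our sketch); a black-box algorithm only ever observes the returned value, never z itself.

    def om(z, x):
        # Om_z(x): number of positions where x agrees with the hidden target z.
        return sum(xi == zi for xi, zi in zip(x, z))

    assert om([1, 0, 1], [1, 1, 1]) == 2  # agreement in the first and third position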
To motivate the definitions, let us briefly mention that we do not further consider the optimization of specific functions such as Om_{(1,...,1)}, since they would have an unrestricted black-box complexity of 1: the algorithm asking for the bitstring (1, . . . , 1) in the first step optimizes the function in just one query. Thus, we need to consider some generalizations of these functions. For the unrestricted black-box model, we already have a lower bound by Droste, Jansen, and Wegener [DJW06]. For the same model, an algorithm which matches this bound in order of magnitude is given by Anil and Wiegand in [AW09].
Theorem 4. The unrestricted black-box complexity of OneMaxn is
Θ(n/ log n). Moreover, the leading constant is at least 1.
As already mentioned, the lower bound on the complexity of OneMaxn
in the unrestricted black-box model from Theorem 4 directly carries over to
the stricter unbiased black-box model.
Corollary 5. The unbiased ∗-ary black-box complexity of OneMaxn is at
least n/ log n.
Moreover, an upper bound on the complexity of OneMax in the unbiased black-box model can be derived using the same algorithmic approach as given for the unrestricted black-box model (compare [AW09] and Theorem 4).
Theorem 6. The unbiased ∗-ary black-box complexity of OneMax_n is at most (1 + o_n(1)) · 2n/log n.
In turn, this theorem also applies to the unrestricted black-box model and refines Theorem 4 by bounding the leading constant of the unrestricted black-box complexity of OneMax to within a factor of two of the lower bound. The result in Theorem 6 is based on Algorithm 3. This algorithm makes use of the operator uniformSample, which samples a bitstring uniformly at random and clearly is an unbiased (0-ary) variation operator. Further, it makes use of another family of operators: chooseConsistent_{u_1,...,u_t}(x^1, . . . , x^t) chooses a z ∈ {0, 1}^n uniformly at random such that Om_z(x^i) = u_i for all i ∈ [t] (if such a z exists; otherwise it returns a bitstring chosen uniformly at random). It is easy to see that this is a family of unbiased variation operators.
Algorithm 3: Optimizing OneMax with unbiased variation operators.
1  input Integer n ∈ N and function f ∈ OneMax_n;
2  initialization t ← (1 + (4 log log n)/(log n)) · 2n/(log n);
3  repeat
4    foreach i ∈ [t] do
5      x^i ← uniformSample();
6    w ← chooseConsistent_{f(x^1),...,f(x^t)}(x^1, . . . , x^t);
7  until f(w) = n;
8  output w;
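The following Python sketch (ours) mirrors Algorithm 3 under two simplifying assumptions: chooseConsistent is realized by brute-force enumeration of {0, 1}^n, which is exponential in n and therefore only feasible for tiny instances, and logarithms are natural, which changes t only by constant factors. Recall that in the black-box cost model only the queries to f are charged; the consistency computation itself is free.

    import itertools, math, random

    def one_max_star_ary(f, n):
        # Sample size t as in Algorithm 3 (natural logs; constants differ).
        t = max(1, int((1 + 4 * math.log(math.log(n)) / math.log(n))
                       * 2 * n / math.log(n)))
        while True:
            xs = [[random.randrange(2) for _ in range(n)] for _ in range(t)]
            us = [f(x) for x in xs]  # t queries
            # chooseConsistent: all z with Om_z(x^i) = u_i for every sample.
            consistent = [z for z in itertools.product((0, 1), repeat=n)
                          if all(sum(a == b for a, b in zip(z, x)) == u
                                 for x, u in zip(xs, us))]
            pool = consistent or list(itertools.product((0, 1), repeat=n))
            w = list(random.choice(pool))
            if f(w) == n:  # one more query
                return w

    hidden = [1, 0, 1, 1, 0, 1]
    f = lambda x: sum(a == b for a, b in zip(x, hidden))
    assert one_max_star_ary(f, len(hidden)) == hidden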
An upper bound of (1 + o_n(1)) · 2n/log n for the expected runtime of Algorithm 3 follows directly from the following theorem, which implies that the number of repetitions of steps 4 to 6 follows a geometric distribution with success probability 1 − o_n(1). This proves Theorem 6.
Theorem 7. Let n be sufficiently large (i.e., let n ≥ N_0 for some fixed constant N_0 ∈ N). Let z ∈ {0, 1}^n and let X be a set of t ≥ (1 + (4 log log n)/(log n)) · 2n/(log n) samples chosen from {0, 1}^n uniformly at random and mutually independent. Then the probability that there exists an element y ∈ {0, 1}^n such that y ≠ z and Om_y(x) = Om_z(x) for all x ∈ X is bounded from above by 2^{−t/2}.
The previous theorem is a refinement of Theorem 1 in [AW09], and its proof follows the proof of that theorem, clarifying some inconsistencies² in it. To show Theorem 7, we first give a bound on a combinatorial quantity used later in its proof (compare Lemma 1 in [AW09]).

² For example, in the proof of Lemma 1 in [AW09] the following claim is made. Let d(n) be a monotone increasing sequence that tends to infinity. Then for sufficiently large n the sequence h_d(n) = (π d(n))^{1/(2 ln n)} is bounded away from 1 by a constant b > 1. Clearly, this is not the case. For example, for d(n) = log n, the sequence h_{log}(n) converges to 1.
Proposition 8. For sufficiently large n, t ≥ (1 + (4 log log n)/(log n)) · 2n/(log n), and even d ∈ {2, . . . , n}, it holds that

    \binom{n}{d} \left( \binom{d}{d/2} 2^{-d} \right)^{t} \le 2^{-3t/4}.    (1)

Proof. By Stirling's formula, we have \binom{d}{d/2} \le (\pi d / 2)^{-1/2} \, 2^{d}. Therefore,

    \binom{n}{d} \left( \binom{d}{d/2} 2^{-d} \right)^{t} \le \binom{n}{d} \left( \frac{\pi d}{2} \right)^{-t/2}.    (2)
We distinguish two cases. First, we consider the case 2 ≤ d < n/(log n)³. By Stirling's formula, it holds that \binom{n}{d} \le (en/d)^{d}. Thus, we get from (2) that

    \binom{n}{d} \left( \binom{d}{d/2} 2^{-d} \right)^{t} \le \left( \frac{en}{d} \right)^{d} \left( \frac{\pi d}{2} \right)^{-t/2} = 2^{\frac{t}{2} \left( \frac{2d}{t} \log\left(\frac{en}{d}\right) - \log\left(\frac{\pi d}{2}\right) \right)}.    (3)

We bound d by its minimal value 2 and maximal value n/(log n)³, and t by 2n/log n, to obtain

    \frac{2d}{t} \log\left(\frac{en}{d}\right) - \log\left(\frac{\pi d}{2}\right) \le \frac{1}{(\log n)^2} \log\left(\frac{en}{2}\right) - \log \pi.

Since the first term on the right-hand side converges to 0 and since log π > 3/2, the exponent in (3) can be bounded from above by −3t/4 if n is sufficiently large. Thus, we obtain inequality (1) for 2 ≤ d < n/(log n)³.
Next, we consider the case n/(log n)³ ≤ d ≤ n. By the binomial formula, it holds that \binom{n}{d} \le 2^{n}. Thus,

    \binom{n}{d} \left( \frac{\pi d}{2} \right)^{-t/2} \le 2^{n} \left( \frac{\pi d}{2} \right)^{-t/2} = 2^{\left( \frac{2n}{t} - \log \frac{\pi d}{2} \right) \frac{t}{2}}.    (4)

We bound πd/2 from below by n/(log n)³ and t by (1 + (4 log log n)/(log n)) · 2n/(log n) to obtain

    \frac{2n}{t} - \log \frac{\pi d}{2}
        \le \frac{\log n}{1 + \frac{4 \log\log n}{\log n}} - \log\left(\frac{n}{(\log n)^3}\right)
        = \frac{\log n}{1 + \frac{4 \log\log n}{\log n}} - \log n + 3 \log\log n
        = - \frac{\log n - 12 \log\log n}{\log n + 4 \log\log n} \, \log\log n.

Again, for sufficiently large n, the right-hand side becomes smaller than −3/2. We combine the previous inequality with inequalities (2) and (4) to show inequality (1) for n/(log n)³ ≤ d ≤ n.
With the previous proposition at hand, we finally prove Theorem 7.
Proof of Theorem 7. Let n be sufficiently large, z ∈ {0, 1}^n, and X a set of t ≥ (1 + (4 log log n)/(log n)) · 2n/(log n) samples chosen from {0, 1}^n uniformly at random and mutually independent.
For d ∈ [n], let A_d := {y ∈ {0, 1}^n | n − Om_z(y) = d} be the set of all points with Hamming distance d from z. Let d ∈ [n] and y ∈ A_d. We say the point y is consistent with x if Om_y(x) = Om_z(x) holds. Intuitively, this means that Om_y is a possible target function, given the fitness of x. It is easy to see that y is consistent with x if and only if x and y (and therefore x and z) differ in exactly half of the d bits that differ between y and z. Therefore, y is never consistent with x if d is odd, and the probability that y is consistent with x is \binom{d}{d/2} 2^{-d} if d is even.
Let p be the probability that there exists a point y ∈ {0, 1}^n \ {z} such that y is consistent with all x ∈ X. Then,

    p = \Pr\left[ \bigcup_{y \in \{0,1\}^n \setminus \{z\}} \bigcap_{x \in X} \text{“y is consistent with x”} \right].

Thus, by the union bound, we have

    p \le \sum_{y \in \{0,1\}^n \setminus \{z\}} \Pr\left[ \bigcap_{x \in X} \text{“y is consistent with x”} \right].

Since, for a fixed y, the events “y is consistent with x” are mutually independent for all x ∈ X, it holds that

    p \le \sum_{d=1}^{n} \sum_{y \in A_d} \prod_{x \in X} \Pr(\text{“y is consistent with x”}).

We substitute the probability that a fixed y ∈ {0, 1}^n is consistent with a randomly chosen x ∈ {0, 1}^n as given above. Using |A_d| = \binom{n}{d}, we obtain

    p \le \sum_{d \in \{1,\dots,n\} : d \text{ even}} \binom{n}{d} \left( \binom{d}{d/2} 2^{-d} \right)^{t}.

Finally, we apply Proposition 8 and have p ≤ n · 2^{−3t/4}, which concludes the proof since n ≤ 2^{t/4} for sufficiently large n (as t ∈ Ω(n/log n)).
4 The Unbiased k-Ary Black-Box Complexity of OneMax
In this section, we show that higher arity indeed enables the construction of
faster black-box algorithms. In particular, we show the following result.
Theorem 9. For every k ∈ [n] with k ≥ 2, the unbiased k-ary black-box complexity of OneMax_n is at most linear in n. Moreover, it is at most (1 + o_k(1)) · 2n/log k.
This result is surprising, since in [LW10], Lehre and Witt prove that the
unbiased unary black-box complexity of the class of all functions f with a
unique global optimum is Ω(n log n). Thus, we gain a factor of log n when
switching from unary to binary variation operators.
To prove Theorem 9, we introduce two different algorithms that are interesting in their own right. Both algorithms share the idea of tracking which bits have already been optimized. That way we can avoid flipping them again in future iterations of the algorithm.
The first algorithm shows that the unbiased k-ary black-box complexity of OneMax_n is at most linear in n whenever the arity k is at least two. For the general case, with k ≥ 3, we give a different algorithm that provides asymptotically better bounds as k grows with n. We use the idea that the whole bitstring can be divided into smaller substrings, which can subsequently be optimized independently. We show that this is possible, and together with Theorem 6, this yields the above bound for OneMax_n in the k-ary case for k ≥ 3.
4.1 The Binary Case
We begin with the binary case. We use the three unbiased variation operators uniformSample (as described in Section 3), complement, and flipOneWhereDifferent, the latter two defined as follows. The unary operator complement(x) returns the bitwise complement of x. The binary operator flipOneWhereDifferent(x, y) returns a copy of x in which one of the bits that differ between x and y, chosen uniformly at random, is flipped. It is easy to see that complement and flipOneWhereDifferent are unbiased variation operators.
Lemma 10. With exponentially small probability of failure, the optimization
time of Algorithm 4 on the class OneMaxn is at most (1+ε)2n, for all ε > 0.
The algorithm only involves binary operators.
Proof. We first prove that the algorithm is correct. Assume that the instance has optimum z for some z ∈ {0, 1}^n. We show that the following invariant is satisfied at the beginning of every iteration of the main loop (steps 4-12): for all i ∈ [n], if x_i = y_i, then x_i = z_i. In other words, the positions where x and y have the same bit value are optimized. The invariant clearly holds in the first iteration, as x and y differ in all bit positions. A bit flip is only accepted if the fitness value is strictly higher, i.e., only if the flipped bit now matches the corresponding bit of z. Hence, if the invariant holds in the current iteration, then it also holds in the following iteration. By induction, the invariant property holds in every iteration of the main loop.
Algorithm 4: Optimizing OneMax with unbiased binary variation operators.
1   input Integer n ∈ N and function f ∈ OneMax_n;
2   initialization x ← uniformSample();
3   y ← complement(x);
4   repeat
5     Choose b ∈ {0, 1} uniformly at random;
6     if b = 1 then
7       x′ ← flipOneWhereDifferent(x, y);
8       if f(x′) > f(x) then x ← x′;
9     else
10      y′ ← flipOneWhereDifferent(y, x);
11      if f(y′) > f(y) then y ← y′;
12  until f(x) = n;
13  output x;
We then analyze the runtime of the algorithm. Let T be the number of iterations needed until all n bit positions have been optimized. Due to the invariant property, this is the same as the time needed to reduce the Hamming distance between x and y from n to 0. An iteration is successful, i.e., the Hamming distance is reduced by 1, with probability 1/2 independently of previous trials. The random variable T is therefore negative binomially distributed with parameters n and 1/2. It can be related to a binomially distributed random variable X with parameters 2n(1 + ε) and 1/2 by Pr(T ≥ 2n(1 + ε)) = Pr(X ≤ n). Finally, by applying a Chernoff bound with respect to X, we obtain Pr(T ≥ 2n(1 + ε)) ≤ exp(−ε²n/(2(1 + ε))).
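For illustration, here is a runnable Python sketch of Algorithm 4 (ours; the helper names are our own choices):

    import random

    def one_max_binary(f, n):
        # x and y start as complements; flipping a bit where they differ and
        # accepting only strict improvements shrinks their Hamming distance
        # while keeping all agreeing positions optimal (the invariant above).
        x = [random.randrange(2) for _ in range(n)]   # uniformSample
        y = [1 - b for b in x]                        # complement

        def flip_one_where_different(a, b):
            i = random.choice([j for j in range(n) if a[j] != b[j]])
            c = a[:]
            c[i] = 1 - c[i]
            return c

        while f(x) < n:
            if random.randrange(2):
                xp = flip_one_where_different(x, y)
                if f(xp) > f(x): x = xp
            else:
                yp = flip_one_where_different(y, x)
                if f(yp) > f(y): y = yp
        return x

    hidden = [0, 1, 1, 0, 1, 0, 0, 1]
    f = lambda v: sum(a == b for a, b in zip(v, hidden))
    assert one_max_binary(f, len(hidden)) == hidden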
It is easy to see that Algorithm 4 yields the same bounds on the class of
monotone functions, which is defined as follows.
Definition 11 (Monotone functions). Let n ∈ N and let z ∈ {0, 1}^n. A function f : {0, 1}^n → R is said to be monotone with respect to z if for all y, y′ ∈ {0, 1}^n with {i ∈ [n] | y_i = z_i} ⊊ {i ∈ [n] | y′_i = z_i} it holds that f(y) < f(y′). The class Monotone_n contains all such functions that are monotone with respect to some z ∈ {0, 1}^n.
Now, let f be a monotone function with respect to z and let y and y′ be two bitstrings which differ only in the i-th position. Assume that y_i ≠ z_i and y′_i = z_i. It follows from the monotonicity of f that f(y) < f(y′). Consequently, Algorithm 4 optimizes f as fast as any function in OneMax_n.
Corollary 12. The unbiased binary black-box complexity of Monotonen
is O(n).
Note that Monotonen strictly includes the class of linear functions with
non-zero weights.
4.2 Proof of Theorem 9 for Arity k ≥ 3
For the case of arity k ≥ 3, we analyze the following Algorithm 5 and show that its optimization time on OneMax_n is at most (1 + o_k(1)) · 2n/log k. Informally, the algorithm splits the bitstring into blocks of length k. The n/k blocks are then optimized separately using a variant of Algorithm 3, each in expected time (1 − o_k(1)) · 2k/log k.
In detail, Algorithm 5 maintains its state using three bitstrings x, y, and z. Bitstring x represents the preliminary solution. The positions in which bitstrings x and y differ represent the remaining blocks to be optimized, and the positions in which bitstrings y and z differ represent the current block to be optimized. Due to permutation invariance, it can be assumed without loss of generality that the bitstrings can be expressed as x = αβγ, y = ᾱβ̄γ, and z = αβ̄γ; see step 6 of Algorithm 5. The algorithm uses an operator called flipKWhereDifferent_ℓ to select a new block of size ℓ to optimize. The selected block is optimized by calling the subroutine optimizeSelected_{n,ℓ}, and the optimized block is inserted into the preliminary solution using the operator update.
The operators in Algorithm 5 are defined as follows. The operator flipKWhereDifferent_k(x, y) generates the bitstring z. This is done by making a copy of y, choosing ℓ := min{k, H(x, y)} bit positions in which x and y differ uniformly at random, and flipping them. The operator update(a, b, c) returns a bitstring a′ which in each position i ∈ [n] takes the value a′_i = b_i if a_i = c_i, and a′_i = a_i otherwise. Clearly, both these operators are unbiased. The operators uniformSample and complement have been defined in previous sections.
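A minimal Python sketch of these two operators (ours; bitstrings are 0/1 lists):

    import random

    def flip_k_where_different(x, y, k):
        # flipKWhereDifferent_k: copy y, then flip min(k, H(x, y)) positions,
        # chosen uniformly at random among those where x and y differ.
        diff = [i for i in range(len(x)) if x[i] != y[i]]
        z = y[:]
        for i in random.sample(diff, min(k, len(diff))):
            z[i] = 1 - z[i]
        return z

    def update(a, b, c):
        # update: take b's bit wherever a and c agree, otherwise keep a's bit.
        return [bi if ai == ci else ai for ai, bi, ci in zip(a, b, c)]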
It remains to define the subroutine optimizeSelected_{n,k}. This subroutine is a variant of Algorithm 3 that only optimizes a selected block of bit positions and leaves the other blocks unchanged. The block is represented by the bit positions in which bitstrings y and z differ. Due to permutation invariance, we assume that they are of the form y = ασ and z = ᾱσ for some bitstrings α ∈ {0, 1}^k and σ ∈ {0, 1}^{n−k}. The operator uniformSample in Algorithm 3 is replaced by a 2-ary operator defined as follows: randomWhereDifferent(x, y) chooses z where, for each i ∈ [n], the value of bit z_i is x_i or y_i with equal probability. Note that this operator is the same as the standard uniform crossover operator. The operator family chooseConsistent in Algorithm 3 is replaced by a family of (r + 2)-ary operators defined as follows: chooseConsistentSelected_{u_1,...,u_r}(x^1, . . . , x^r, ασ, ᾱσ) chooses zσ, where the prefix z is sampled uniformly at random from the set Z_{u,x} = {z ∈ {0, 1}^k | ∀i ∈ [r] : Om_z(x^i_1 x^i_2 · · · x^i_k) = u_i}. If the set Z_{u,x} is empty, then z is sampled uniformly at random among all bitstrings of length k.
Algorithm 5: Optimizing OneMax with unbiased k-ary variation operators, for k ≥ 3.
1   input Integers n, k ∈ N, and function f ∈ OneMax_n;
2   initialization x^1 ← uniformSample(), y^1 ← complement(x^1), and τ ← ⌈n/k⌉;
3   foreach t ∈ [τ] do
4     ℓ(t) ← min{k, n − k(t − 1)};
5     z ← flipKWhereDifferent_{ℓ(t)}(x^t, y^t);
6     Assume that x^t = αβγ, y^t = ᾱβ̄γ, and z = αβ̄γ;
7     w^t β̄γ ← optimizeSelected_{n,ℓ(t)}(αβ̄γ, ᾱβ̄γ);
8     w^t βγ ← update(αβγ, w^t β̄γ, αβ̄γ);
9     x^{t+1} ← w^t βγ and y^{t+1} ← w^t β̄γ;
10  output x^{τ+1};
Informally, the set Z_{u,x} corresponds to the subset of functions in OneMax_n that are consistent with the function values u_1, u_2, . . . , u_r on the inputs x^1, x^2, . . . , x^r. It is easy to see that this operator is unbiased.
Algorithm 6: optimizeSelected used in Algorithm 5.
1   input Integers n, k ∈ N, and bitstrings ασ and ᾱσ, where α ∈ {0, 1}^k and σ ∈ {0, 1}^{n−k};
2   initialization r ← min{k − 2, (1 + (4 log log k)/(log k)) · 2k/(log k)}; f_σ ← (f(ασ) + f(ᾱσ) − k)/2;
3   repeat
4     foreach i ∈ [r] do
5       x^i σ ← randomWhereDifferent(ασ, ᾱσ);
6     wσ ← chooseConsistentSelected_{f(x^1 σ)−f_σ, ..., f(x^r σ)−f_σ}(x^1 σ, . . . , x^r σ, ασ, ᾱσ);
7   until f(wσ) = k + f_σ;
8   output wσ;
Proof of Theorem 9 for arity k ≥ 3. To prove the correctness of the algorithm, assume without loss of generality the input f = Om_{(1,...,1)}, for which the correct output is 1^n.
We first claim that a call to optimizeSelected_{n,k}(ασ, ᾱσ) will terminate after a finite number of iterations with output 1^k σ almost surely. The variable f_σ is assigned in line 2 of Algorithm 6, and it is easy to see that it takes the value f_σ = f(0^k σ). It follows from linearity of f and from f(α 0^{n−k}) + f(ᾱ 0^{n−k}) = k that f(w 0^{n−k}) = f(wσ) − f(0^k σ) = f(wσ) − f_σ. The termination condition f(wσ) = k + f_σ is therefore equivalent to the condition wσ = 1^k σ. For all x ∈ {0, 1}^k, it holds that Om_{(1,...,1)}(x) = f(xσ) − f_σ, so 1^k is a member of the set Z_{u,x}. Hence, every invocation of chooseConsistentSelected returns 1^k σ with non-zero probability. Therefore, the algorithm terminates after every iteration with non-zero probability, and the claim holds.
We then prove by induction the invariant property that for all t ∈ [τ + 1] and i ∈ [n], if x^t_i = y^t_i, then x^t_i = y^t_i = 1. The invariant clearly holds for t = 1, so assume that the invariant also holds for t = j ≤ τ. Without loss of generality, x^j = αβγ, y^j = ᾱβ̄γ, x^{j+1} = w^j βγ, and y^{j+1} = w^j β̄γ. By the claim above and the induction hypothesis, both the common prefix w^j and the common suffix γ consist of only 1-bits. So the invariant holds for t = j + 1, and by induction also for all t ∈ [τ + 1].
It is easy to see that for all t ≤ τ, the Hamming distance between x^{t+1} = w^t βγ and y^{t+1} = w^t β̄γ is H(x^{t+1}, y^{t+1}) = H(αβγ, ᾱβ̄γ) − ℓ(t) = H(x^t, y^t) − ℓ(t). By induction, it therefore holds that

    H(x^{τ+1}, y^{τ+1}) = H(x^1, y^1) − \sum_{t=1}^{τ} ℓ(t) = n − \sum_{t=1}^{τ} \min\{k, n − k(t − 1)\} = 0.
Hence, by the invariant above, the algorithm returns the correct output x^{τ+1} = y^{τ+1} = 1^n.
The runtime of the algorithm in each iteration is dominated by the subroutine optimizeSelected. Note that by definition, the probability that chooseConsistentSelected_{f(x^1 σ)−f_σ,...,f(x^r σ)−f_σ}(x^1 σ, . . . , x^r σ, ασ, ᾱσ) chooses zσ in {0, 1}^n is the same as the probability that chooseConsistent_{f(x^1),...,f(x^r)}(x^1, . . . , x^r) chooses z in {0, 1}^k. To finish the proof, we distinguish between two cases.
Case 1: k ≤ 53. In this case, it suffices³ to prove that the runtime is O(n). For the case k = 2, this follows from Lemma 10. For 2 < k ≤ 53, it holds that r = k − 2. Each iteration in optimizeSelected uses r + 1 = k − 1 = O(1) function evaluations, and the probability that chooseConsistentSelected optimizes a block of k bits (i.e., that w = 1^k) is at least 1 − (1 − 2^{−k})^r = Ω(1). Thus, the expected optimization time for a block is O(1), and for the entire bitstring it is at most (n/k) · O(1) = O(n).
Case 2: k ≥ 54. In this case, r = (1 + (4 log log k)/(log k)) · 2k/(log k) holds. Hence, with an analysis analogous to that in the proof of Theorem 6, we can show that the expected runtime of optimizeSelected is at most (1 + o_k(1)) · 2(k − 2)/log(k − 2). Thus, the expected runtime is at most (n/k) · (1 + o_k(1)) · 2(k − 2)/log(k − 2) = (1 + o_k(1)) · 2n/log k.

³ Assume that the expected runtime is less than cn for some constant c > 0 when k ≤ 53. It is necessary to show that cn ≤ 2n/log k + h(k) · 2n/log k for some function h with lim_{k→∞} h(k) = 0. This can easily be shown by choosing any such function h with h(k) ≥ c log k/2 for k ≤ 53.
5 The Complexity of LeadingOnes
In this section, we show that allowing k-ary variation operators for k > 1 greatly reduces the black-box complexity of the LeadingOnes function class, namely from Θ(n²) down to O(n log n). We define the class LeadingOnes as follows.
Definition 13 (LeadingOnes). Let n ∈ N. Let σ ∈ S_n be a permutation of the set [n] and let z ∈ {0, 1}^n. The function Lo_{z,σ} is defined via Lo_{z,σ}(x) := max{i ∈ [0..n] | ∀j ∈ [i] : z_{σ(j)} = x_{σ(j)}}. We set LeadingOnes_n := {Lo_{z,σ} | z ∈ {0, 1}^n, σ ∈ S_n}.
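As a quick illustration (our sketch, with a 0-indexed permutation), Lo_{z,σ} counts how many leading positions, in the order prescribed by σ, agree with the hidden target z:

    def lo(z, sigma, x):
        # Lo_{z,sigma}(x): length of the longest prefix, in the order given
        # by sigma, on which x agrees with the hidden target z.
        i = 0
        while i < len(x) and x[sigma[i]] == z[sigma[i]]:
            i += 1
        return i

    assert lo([1, 0, 1], [0, 1, 2], [1, 0, 0]) == 2  # fails first at sigma(3)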
The class LeadingOnes is well studied. Already in 2002, Droste, Jansen, and Wegener [DJW02] proved that the classical (1 + 1) EA has an expected optimization time of Θ(n²) on LeadingOnes. This bound seems to be optimal among the commonly studied versions of evolutionary algorithms. In [LW10], the authors prove that the unbiased unary black-box complexity of LeadingOnes is Θ(n²).
Droste, Jansen, and Wegener [DJW06] consider a subclass of LeadingOnes_n, namely LeadingOnes^0_n := {Lo_{z,id} | z ∈ {0, 1}^n}, where id denotes the identity mapping on [n]. Hence, their function class is not permutation invariant. In this restricted setting, they prove a black-box complexity of Θ(n). Of course, their lower bound of Ω(n) is a lower bound for the unrestricted black-box complexity of the general LeadingOnes_n class, and consequently, a lower bound also for the unbiased black-box complexities of this class.
The following theorem is the main result in this section.
Theorem 14. The unbiased binary black-box complexity of LeadingOnesn
is O(n log n).
The key ingredient of the two black-box algorithms that yield our upper bound is an emulation of a binary search which determines the (unique) bit that increases the fitness and then flips this bit. Surprisingly, this can already be done with a binary operator. It works even though we also follow the general approach of the previous section of keeping two individuals x and y such that, for all bit positions in which x and y agree, the corresponding bit value equals that of the optimal solution.
We will use the two unbiased binary variation operators randomWhereDifferent (as described in Section 4.2) and switchIfDistanceOne. The operator switchIfDistanceOne(y, y′) returns y′ if y and y′ differ in exactly one bit, and returns y otherwise. It is easy to see that switchIfDistanceOne is an unbiased variation operator.
We call a pair (x, y) of search points critical if the following two conditions are satisfied: (i) f(x) ≥ f(y); (ii) there are exactly f(y) bit-positions i ∈ [n] such that x_i = y_i. The following is a simple observation.
Lemma 15. Let f ∈ LeadingOnesn . If (x, y) is a critical pair, then either
f (x) = n = f (y) or f (x) > f (y).
If f (x) > f (y), then the unique bit-position k such that flipping the k-th
bit in x reduces its fitness to f (y) – or equivalently, the unique bit-position
such that flipping this bit in y increases y’s fitness – shall be called the
critical bit-position. We also call f (y) the value of the pair (x, y).
Note that the above definition only uses some function values of f, but not the particular definition of f. If f = Lo_{z,σ}, then the above implies that x and y are equal on the bit-positions σ(1), . . . , σ(f(y)) and are different on all other bit-positions. Also, the critical bit-position is σ(f(y) + 1), and the only way to improve the fitness of y is to flip this particular bit-position (and keep the positions σ(1), . . . , σ(f(y)) unchanged). The central part of Algorithm 7, which is contained in lines 3 to 9, manages to transform a critical pair of value v < n into one of value v + 1 in O(log n) time. This is analyzed in the following lemma.
Lemma 16. Assume that the execution of Algorithm 7 is before line 4, and
that the current value of (x, y) is a critical pair of value v < n. Then after
an expected number of O(log n) iterations, the loop in lines 5-9 is left and
(x, y) or (y, x) is a critical pair of value v + 1.
Proof. Let k be the critical bit-position of the pair (x, y). Let y′ = x be a copy of x. Let J := {i ∈ [n] | y_i ≠ y′_i}. Our aim is to flip all bits of y′ with index in J \ {k}.
We define y′′ by flipping each bit of y′ with index in J with probability 1/2. Equivalently, we can say that y′′_i equals y′_i for all i such that y′_i = y_i, and is random for all other i (thus, we obtain such a y′′ by applying randomWhereDifferent(y, y′)).
With probability exactly 1/2, the critical bit was not flipped (“success”), and consequently, f(y′′) > f(y). In this case (due to independence), each other bit with index in J has a chance of 1/2 of being flipped. So with constant probability at least 1/2, the set {i ∈ [n] | y_i ≠ y′′_i} \ {k} is at most half the size of J \ {k}. In this success case, we take y′′ as the new value of y′.
In consequence, the cardinality of J \ {k} never increases, and with probability at least 1/4, it decreases by at least 50%. Consequently, after an expected number of O(log n) iterations, we have |J| = 1, namely J = {k}. We check this via an application of switchIfDistanceOne.
We are now ready to prove the main result of this section.
Proof of Theorem 14. We maintain the following invariant: (x, y) or (y, x) is a critical pair. This is clearly satisfied after execution of line 1. From Lemma 16, we see that a single execution of the outer loop does not violate our invariant. Hence, by Lemma 15, our algorithm is correct (provided it terminates). The algorithm does indeed terminate, namely in O(n log n) time, because, again by Lemma 16, each iteration of the outer loop increases the value of the critical pair by one.
Algorithm 7: Optimizing LeadingOnes with unbiased binary variation operators.
1   initialization x ← uniformSample(); y ← complement(x);
2   repeat
3     if f(y) > f(x) then (x, y) ← (y, x);
4     y′ ← x;
5     repeat
6       y′′ ← randomWhereDifferent(y, y′);
7       if f(y′′) > f(y) then y′ ← y′′;
8       y ← switchIfDistanceOne(y, y′);
9     until f(y) = f(y′);
10  until f(x) = f(y);
11  output x;
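The following Python sketch (ours; all helper names are our own) renders Algorithm 7 executable and makes the emulated binary search visible: each accepted y′′ roughly halves the set of candidate positions for the critical bit, and switchIfDistanceOne commits the fitness gain once exactly one differing bit remains.

    import random

    def leading_ones_binary(f, n):
        def random_where_different(a, b):
            return [ai if ai == bi else random.randrange(2)
                    for ai, bi in zip(a, b)]

        def switch_if_distance_one(a, b):
            return b if sum(ai != bi for ai, bi in zip(a, b)) == 1 else a

        x = [random.randrange(2) for _ in range(n)]   # uniformSample
        y = [1 - b for b in x]                        # complement
        while f(x) != f(y):                           # outer loop (lines 2-10)
            if f(y) > f(x):
                x, y = y, x
            yp = x[:]
            while True:                               # binary search (lines 5-9)
                ypp = random_where_different(y, yp)
                if f(ypp) > f(y):
                    yp = ypp
                y = switch_if_distance_one(y, yp)
                if f(y) == f(yp):
                    break
        return x

    hidden, order = [1, 0, 1, 1, 0], [3, 1, 4, 0, 2]
    def f(x):
        i = 0
        while i < len(x) and x[order[i]] == hidden[order[i]]:
            i += 1
        return i
    assert leading_ones_binary(f, 5) == hidden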
6 Conclusion and Future Work
We continue the study of the unbiased black-box model introduced in [LW10]. For the first time, we analyze variation operators with arity higher than one. Our results show that already binary operators can allow significantly faster algorithms.
The problem OneMax cannot be solved in fewer than Ω(n log n) queries with unary variation operators [LW10]. However, the runtime can be reduced to O(n) with binary operators. The runtime can be decreased even further with arities higher than two. For k-ary variation operators, 2 ≤ k ≤ n, the runtime can be reduced to O(n/log k), which for k = n^{Θ(1)} matches the lower bound in the classical black-box model. A similar positive effect of higher arity variation operators can be observed for the function class LeadingOnes. While this function class cannot be optimized faster than Ω(n²) with unary variation operators [LW10], we show that the runtime can be reduced to O(n log n) with binary or higher arity variation operators.
Despite the restrictions imposed by the unbiasedness conditions, our analysis demonstrates that black-box algorithms can employ new and more efficient search heuristics with higher arity variation operators. In particular, binary variation operators allow a memory mechanism that can be used to implement a binary search on the positions in the bitstring. The algorithm can thereby focus on parts of the bitstring that have not previously been investigated.
An important open problem arising from this work is to provide lower
bounds in the unbiased black-box model for higher arities than one. Due
to the greatly enlarged computational power of black-box algorithms using
higher arity operators (as seen in this paper), proving lower bounds in this
model seems significantly harder than in the unary model.
References
[AW09]
G. Anil and R. P. Wiegand, Black-box search by elimination of
fitness functions, Proc. of Foundations of Genetic Algorithms
(FOGA’09), ACM, 2009, pp. 67–78.
[CLRS01] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed., McGraw Hill, 2001.
[DHK08] B. Doerr, E. Happ, and C. Klein, Crossover can provably be useful
in evolutionary computation, Proc. of Genetic and Evolutionary
Computation Conference (GECCO ’08), ACM, 2008, pp. 539–
546.
[DJW02] S. Droste, T. Jansen, and I. Wegener, On the analysis of the
(1+1) evolutionary algorithm, Theoretical Computer Science 276
(2002), 51–81.
[DJW06] S. Droste, T. Jansen, and I. Wegener, Upper and lower bounds for randomized search heuristics in black-box optimization, Theory of Computing Systems 39 (2006), 525–544.
[DS04]
M. Dorigo and T. Stützle, Ant colony optimization, MIT Press,
2004.
[KE01]
J. Kennedy and R. C. Eberhart, Swarm intelligence, Morgan
Kaufmann Publishers Inc., 2001.
[LL02]
P. Larrañaga and J. A. Lozano, Estimation of distribution algorithms: a new tool for evolutionary computation, Kluwer Academic Publishers, 2002.
[LW10]
P. K. Lehre and C. Witt, Black-box search by unbiased variation, Proc. of Genetic and Evolutionary Computation Conference
(GECCO ’10), ACM, 2010, pp. 1441–1448.