Understanding Machine Learning

17 Multiclass, Ranking, and Complex Prediction Problems

Multiclass categorization is the problem of classifying instances into one of several possible target classes. That is, we are aiming at learning a predictor h : X → Y, where Y is a finite set of categories. Applications include, for example, categorizing documents according to topic (X is the set of documents and Y is the set of possible topics) or determining which object appears in a given image (X is the set of images and Y is the set of possible objects).
The centrality of the multiclass learning problem has spurred the development
of various approaches for tackling the task. Perhaps the most straightforward
approach is a reduction from multiclass classification to binary classification. In
Section 17.1 we discuss the most common two reductions as well as the main
drawback of the reduction approach.
We then turn to describe a family of linear predictors for multiclass problems.
Relying on the RLM and SGD frameworks from previous chapters, we describe
several practical algorithms for multiclass prediction.
In Section 17.3 we show how to use the multiclass machinery for complex pre-
diction problems in which Y can be extremely large but has some structure on
it. This task is often called structured output learning. In particular, we demon-
strate this approach for the task of recognizing handwritten words, in which Y
is the set of all possible strings of some bounded length (hence, the size of Y is
exponential in the maximal length of a word).
Finally, in Section 17.4 and Section 17.5 we discuss ranking problems in which
the learner should order a set of instances according to their “relevance.” A typ-
ical application is ordering results of a search engine according to their relevance
to the query. We describe several performance measures that are adequate for
assessing the performance of ranking predictors and describe how to learn linear
predictors for ranking problems efficiently.

17.1 One-versus-All and All-Pairs

The simplest approach to tackle multiclass prediction problems is by reduction to binary classification. Recall that in multiclass prediction we would like to learn a function h : X → Y. Without loss of generality let us denote Y = {1, . . . , k}.
In the One-versus-All method (a.k.a. One-versus-Rest) we train k binary


classifiers, each of which discriminates between one class and the rest of the classes. That is, given a training set S = (x1, y1), . . . , (xm, ym), where every yi is in Y, we construct k binary training sets, S1, . . . , Sk, where Si = (x1, (−1)^{1[y1≠i]}), . . . , (xm, (−1)^{1[ym≠i]}). In words, Si is the set of instances labeled +1 if their label in S was i, and −1 otherwise. For every i ∈ [k] we train a binary predictor hi : X → {±1} based on Si, hoping that hi(x) should equal +1 if and only if x belongs to class i. Then,
given h1 , . . . , hk , we construct a multiclass predictor using the rule

h(x) ∈ argmax_{i∈[k]} hi(x).    (17.1)

When more than one binary hypothesis predicts “1” we should somehow decide
which class to predict (e.g., we can arbitrarily decide to break ties by taking the
minimal index in argmaxi hi (x)). A better approach can be applied whenever
each hi hides additional information, which can be interpreted as the confidence
in the prediction y = i. For example, this is the case in halfspaces, where the
actual prediction is sign(⟨w, x⟩), but we can interpret ⟨w, x⟩ as the confidence
in the prediction. In such cases, we can apply the multiclass rule given in Equa-
tion (17.1) on the real valued predictions. A pseudocode of the One-versus-All
approach is given in the following.

One-versus-All

input:
  training set S = (x1, y1), . . . , (xm, ym)
  algorithm for binary classification A
foreach i ∈ Y
  let Si = (x1, (−1)^{1[y1≠i]}), . . . , (xm, (−1)^{1[ym≠i]})
  let hi = A(Si)
output:
  the multiclass hypothesis defined by h(x) ∈ argmax_{i∈Y} hi(x)
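To make the reduction concrete, the following is a minimal Python sketch of One-versus-All (our own illustration, not from the book). It assumes a generic binary learner factory whose objects expose fit and a real-valued score method; these names are placeholders for whatever binary learning algorithm A is plugged in, and ties are broken through the real-valued scores as discussed above.

import numpy as np

class OneVersusAll:
    """One-versus-All reduction: trains one binary scorer per class."""
    def __init__(self, make_binary_learner, n_classes):
        self.make_binary_learner = make_binary_learner  # factory: () -> binary learner
        self.n_classes = n_classes
        self.learners = []

    def fit(self, X, y):
        # Train one binary problem per class i: label +1 iff y == i, -1 otherwise.
        self.learners = []
        for i in range(self.n_classes):
            binary_labels = np.where(y == i, 1, -1)
            learner = self.make_binary_learner()
            learner.fit(X, binary_labels)
            self.learners.append(learner)
        return self

    def predict(self, X):
        # Stack real-valued scores h_i(x) and take the argmax over classes.
        scores = np.column_stack([h.score(X) for h in self.learners])
        return np.argmax(scores, axis=1)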

Another popular reduction is the All-Pairs approach, in which all pairs of classes are compared to each other. Formally, given a training set S = (x1, y1), . . . , (xm, ym), where every yi is in [k], for every 1 ≤ i < j ≤ k we construct a binary training sequence, Si,j, containing all examples from S whose label is either i or j. For each such example, we set the binary label in Si,j to be +1 if the multiclass label in S is i and −1 if the multiclass label in S is j. Next, we train a binary classification algorithm based on every Si,j to get hi,j. Finally, we construct a multiclass classifier by predicting the class that had the highest number of “wins.” A pseudocode of the All-Pairs approach is given in the following.

All-Pairs

input:
  training set S = (x1, y1), . . . , (xm, ym)
  algorithm for binary classification A
foreach i, j ∈ Y s.t. i < j
  initialize Si,j to be the empty sequence
  for t = 1, . . . , m
    If yt = i add (xt, +1) to Si,j
    If yt = j add (xt, −1) to Si,j
  let hi,j = A(Si,j)
output:
  the multiclass hypothesis defined by
  h(x) ∈ argmax_{i∈Y} ( Σ_{j∈Y} sign(j − i) · hi,j(x) )
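A matching sketch of the All-Pairs reduction, under the same assumed binary-learner interface (our own code, not the book's):

import numpy as np
from itertools import combinations

def all_pairs_fit(X, y, make_binary_learner, n_classes):
    """Train one binary learner per unordered pair of classes (i < j)."""
    learners = {}
    for i, j in combinations(range(n_classes), 2):
        mask = (y == i) | (y == j)
        binary_labels = np.where(y[mask] == i, 1, -1)  # +1 for class i, -1 for class j
        learner = make_binary_learner()
        learner.fit(X[mask], binary_labels)
        learners[(i, j)] = learner
    return learners

def all_pairs_predict(X, learners, n_classes):
    """Each pairwise classifier casts a vote; predict the class with the most wins."""
    votes = np.zeros((X.shape[0], n_classes))
    for (i, j), learner in learners.items():
        pred = learner.score(X)      # real-valued score, thresholded at zero
        votes[:, i] += (pred > 0)
        votes[:, j] += (pred <= 0)
    return np.argmax(votes, axis=1)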

Although reduction methods such as One-versus-All and All-Pairs are simple and easy to construct from existing algorithms, their simplicity has a price. The binary learner is not aware of the fact that we are going to use its output hypotheses for constructing a multiclass predictor, and this might lead to suboptimal results, as illustrated in the following example.
Example 17.1  Consider a multiclass categorization problem in which the instance space is X = R² and the label set is Y = {1, 2, 3}. Suppose that instances of the different classes are located in nonintersecting balls as depicted in the following.

[Figure: three nonintersecting balls, labeled 1, 2, and 3 from left to right, containing the instances of the three classes.]

Suppose that the probability masses of classes 1, 2, 3 are 40%, 20%, and 40%,
respectively. Consider the application of One-versus-All to this problem, and as-
sume that the binary classification algorithm used by One-versus-All is ERM
with respect to the hypothesis class of halfspaces. Observe that for the prob-
lem of discriminating between class 2 and the rest of the classes, the optimal
halfspace would be the all negative classifier. Therefore, the multiclass predic-
tor constructed by One-versus-All might err on all the examples from class 2
(this will be the case if the tie in the definition of h(x) is broken by the numerical value of the class label). In contrast, if we choose hi(x) = ⟨wi, x⟩, where w1 = (−1/√2, 1/√2), w2 = (0, 1), and w3 = (1/√2, 1/√2), then the classifier defined by h(x) = argmaxi hi(x) perfectly predicts all the examples. We see

that even though the approximation error of the class of predictors of the form
h(x) = argmaxi ⟨wi, x⟩ is zero, the One-versus-All approach might fail to find a
good predictor from this class.

17.2 Linear Multiclass Predictors

In light of the inadequacy of reduction methods, in this section we study a more direct approach for learning multiclass predictors. We describe the family of linear multiclass predictors. To motivate the construction of this family, recall that a linear predictor for binary classification (i.e., a halfspace) takes the form

h(x) = sign(⟨w, x⟩).

An equivalent way to express the prediction is as follows:

h(x) = argmax_{y∈{±1}} ⟨w, yx⟩,

where yx is the vector obtained by multiplying each element of x by y.


This representation leads to a natural generalization of halfspaces to multiclass problems as follows. Let Ψ : X × Y → R^d be a class-sensitive feature mapping. That is, Ψ takes as input a pair (x, y) and maps it into a d-dimensional feature vector. Intuitively, we can think of the elements of Ψ(x, y) as score functions that assess how well the label y fits the instance x. We will elaborate on Ψ later on. Given Ψ and a vector w ∈ R^d, we can define a multiclass predictor, h : X → Y, as follows:

h(x) = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩.

That is, the prediction of h for the input x is the label that achieves the highest
weighted score, where weighting is according to the vector w.
Let W be some set of vectors in R^d, for example, W = {w ∈ R^d : ‖w‖ ≤ B}, for some scalar B > 0. Each pair (Ψ, W) defines a hypothesis class of multiclass predictors:

H_{Ψ,W} = {x ↦ argmax_{y∈Y} ⟨w, Ψ(x, y)⟩ : w ∈ W}.

Of course, the immediate question, which we discuss in the sequel, is how to construct a good Ψ. Note that if Y = {±1} and we set Ψ(x, y) = yx and W = R^d, then H_{Ψ,W} becomes the hypothesis class of homogeneous halfspace predictors for binary classification.

17.2.1 How to Construct Ψ


As mentioned before, we can think of the elements of Ψ(x, y) as score functions that assess how well the label y fits the instance x. Naturally, designing a good Ψ is similar to the problem of designing a good feature mapping (as we discussed in

Chapter 16 and as we will discuss in more detail in Chapter 25). Two examples
of useful constructions are given in the following.

The Multivector Construction:

Let Y = {1, . . . , k} and let X = R^n. We define Ψ : X × Y → R^d, where d = nk, as follows:

Ψ(x, y) = [ 0, . . . , 0, x1, . . . , xn, 0, . . . , 0 ],    (17.2)

where the first (y − 1)n entries are zeros, the next n entries are x1, . . . , xn, and the last (k − y)n entries are zeros.

That is, Ψ(x, y) is composed of k vectors, each of which is of dimension n, where we set all the vectors to be the all-zeros vector except the y'th vector, which is set to be x. It follows that we can think of w ∈ R^{nk} as being composed of k weight vectors in R^n, that is, w = [w1; . . . ; wk], hence the name multivector construction. By the construction we have that ⟨w, Ψ(x, y)⟩ = ⟨wy, x⟩, and therefore the multiclass prediction becomes

h(x) = argmax_{y∈Y} ⟨wy, x⟩.

A geometric illustration of the multiclass prediction over X = R² is given in the following.

[Figure: the plane partitioned into regions, one per weight vector w1, w2, w3, w4; each point is assigned to the class whose weight vector attains the largest inner product with it.]
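As a concrete illustration, here is a minimal sketch of the multivector construction and the induced predictor (our own code; classes are indexed 0, . . . , k−1 here, and the weight vector w is assumed to be given, e.g., by one of the learning rules described later in this section):

import numpy as np

def multivector_psi(x, y, n_classes):
    """Multivector construction: place x in the y'th block of a zero vector of size n*k."""
    n = x.shape[0]
    psi = np.zeros(n * n_classes)
    psi[y * n:(y + 1) * n] = x
    return psi

def linear_multiclass_predict(w, x, n_classes):
    """h(x) = argmax_y <w, Psi(x, y)>, which reduces to argmax_y <w_y, x>."""
    scores = [np.dot(w, multivector_psi(x, y, n_classes)) for y in range(n_classes)]
    return int(np.argmax(scores))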

TF-IDF:
The previous definition of Ψ(x, y) does not incorporate any prior knowledge about the problem. We next describe an example of a feature function Ψ that does incorporate prior knowledge. Let X be a set of text documents and Y be a set of possible topics. Let d be the size of a dictionary of words. For each word in the dictionary, whose corresponding index is j, let TF(j, x) be the number of times the word corresponding to j appears in the document x. This quantity is called Term-Frequency. Additionally, let DF(j, y) be the number of times the word corresponding to j appears in documents in our training set that are not about topic y. This quantity is called Document-Frequency and measures whether word j is frequent in other topics. Now, define Ψ : X × Y → R^d to be such that

Ψ_j(x, y) = TF(j, x) · log( m / DF(j, y) ),

where m is the total number of documents in our training set. The preced-
ing quantity is called term-frequency-inverse-document-frequency or TF-IDF for

short. Intuitively, Ψ_j(x, y) should be large if the word corresponding to j appears a lot in the document x but does not appear at all in documents that are not on topic y. If this is the case, we tend to believe that the document x is on topic y. Note that unlike the multivector construction described previously, in the current construction the dimension of Ψ does not depend on the number of topics (i.e., the size of Y).
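A minimal sketch of this feature map, assuming the training documents are represented as lists of dictionary word indices together with their topics (all helper names are ours):

import numpy as np

def tf_idf_psi(doc, topic, docs, topics, dict_size):
    """Psi_j(x, y) = TF(j, x) * log(m / DF(j, y)) for every dictionary word j."""
    m = len(docs)
    psi = np.zeros(dict_size)
    for j in range(dict_size):
        tf = doc.count(j)  # times word j appears in this document
        if tf == 0:
            continue
        # times word j appears in training documents whose topic is not `topic`
        df = sum(d.count(j) for d, t in zip(docs, topics) if t != topic)
        psi[j] = tf * np.log(m / max(df, 1))  # max(df, 1) guards division by zero (our choice)
    return psi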

17.2.2 Cost-Sensitive Classification


So far we used the zero-one loss as our performance measure of the quality of h(x). That is, the loss of a hypothesis h on an example (x, y) is 1 if h(x) ≠ y and 0 otherwise. In some situations it makes more sense to penalize different levels of loss for different mistakes. For example, in object recognition tasks, it is less severe to predict that an image of a tiger contains a cat than predicting that the image contains a whale. This can be modeled by specifying a loss function, Δ : Y × Y → R+, where for every pair of labels, y′, y, the loss of predicting the label y′ when the correct label is y is defined to be Δ(y′, y). We assume that Δ(y, y) = 0. Note that the zero-one loss can be easily modeled by setting Δ(y′, y) = 1[y′≠y].

17.2.3 ERM
We have defined the hypothesis class H_{Ψ,W} and specified a loss function Δ. To learn the class with respect to the loss function, we can apply the ERM rule with respect to this class. That is, we search for a multiclass hypothesis h ∈ H_{Ψ,W}, parameterized by a vector w, that minimizes the empirical risk with respect to Δ,

LS(h) = (1/m) Σ_{i=1}^m Δ(h(xi), yi).

We now show that when W = R^d and we are in the realizable case, then it is possible to solve the ERM problem efficiently using linear programming. Indeed, in the realizable case, we need to find a vector w ∈ R^d that satisfies

∀i ∈ [m],  yi = argmax_{y∈Y} ⟨w, Ψ(xi, y)⟩.

Equivalently, we need that w will satisfy the following set of linear inequalities:

∀i ∈ [m], ∀y ∈ Y \ {yi},  ⟨w, Ψ(xi, yi)⟩ > ⟨w, Ψ(xi, y)⟩.

Finding w that satisfies the preceding set of linear inequalities amounts to solving a linear program.
As in the case of binary classification, it is also possible to use a generalization
of the Perceptron algorithm for solving the ERM problem. See Exercise 2.
In the nonrealizable case, solving the ERM problem is in general computa-
tionally hard. We tackle this difficulty using the method of convex surrogate

loss functions (see Section 12.3). In particular, we generalize the hinge loss to
multiclass problems.

17.2.4 Generalized Hinge Loss


Recall that in binary classification, the hinge loss is defined to be max{0, 1 − y⟨w, x⟩}. We now generalize the hinge loss to multiclass predictors of the form

hw(x) = argmax_{y′∈Y} ⟨w, Ψ(x, y′)⟩.

Recall that a surrogate convex loss should upper bound the original nonconvex loss, which in our case is Δ(hw(x), y). To derive an upper bound on Δ(hw(x), y) we first note that the definition of hw(x) implies that

⟨w, Ψ(x, y)⟩ ≤ ⟨w, Ψ(x, hw(x))⟩.

Therefore,

Δ(hw(x), y) ≤ Δ(hw(x), y) + ⟨w, Ψ(x, hw(x)) − Ψ(x, y)⟩.

Since hw(x) ∈ Y we can upper bound the right-hand side of the preceding by

max_{y′∈Y} ( Δ(y′, y) + ⟨w, Ψ(x, y′) − Ψ(x, y)⟩ ) =: ℓ(w, (x, y)).    (17.3)

We use the term “generalized hinge loss” to denote the preceding expression. As we have shown, ℓ(w, (x, y)) ≥ Δ(hw(x), y). Furthermore, equality holds whenever the score of the correct label is larger than the score of any other label, y′, by at least Δ(y′, y), namely,

∀y′ ∈ Y \ {y},  ⟨w, Ψ(x, y)⟩ ≥ ⟨w, Ψ(x, y′)⟩ + Δ(y′, y).

It is also immediate to see that ℓ(w, (x, y)) is a convex function with respect to w since it is a maximum over linear functions of w (see Claim 12.5 in Chapter 12), and that ℓ(w, (x, y)) is ρ-Lipschitz with ρ = max_{y′∈Y} ‖Ψ(x, y′) − Ψ(x, y)‖.
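For concreteness, the following small sketch (our own helper names) evaluates the generalized hinge loss of Equation (17.3) for a given feature map psi, cost function delta, and weight vector w:

import numpy as np

def generalized_hinge_loss(w, x, y, labels, psi, delta):
    """max over y' in labels of  delta(y', y) + <w, psi(x, y') - psi(x, y)>   (Equation 17.3)."""
    psi_correct = psi(x, y)
    return max(delta(yp, y) + np.dot(w, psi(x, yp) - psi_correct) for yp in labels)

# Example usage with the zero-one cost Delta(y', y) = 1[y' != y]:
zero_one = lambda yp, y: float(yp != y)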
Remark 17.2  We use the name “generalized hinge loss” since in the binary case, when Y = {±1}, if we set Ψ(x, y) = yx/2, then the generalized hinge loss becomes the vanilla hinge loss for binary classification,

ℓ(w, (x, y)) = max{0, 1 − y⟨w, x⟩}.

Geometric Intuition:
The feature function Ψ : X × Y → R^d maps each x into |Y| vectors in R^d. The value of ℓ(w, (x, y)) will be zero if there exists a direction w such that when projecting the |Y| vectors onto this direction we obtain that each vector is represented by the scalar ⟨w, Ψ(x, y)⟩, and we can rank the different points on the basis of these scalars so that

• The point corresponding to the correct y is top-ranked



• For each y′ ≠ y, the difference between ⟨w, Ψ(x, y)⟩ and ⟨w, Ψ(x, y′)⟩ is larger than the loss of predicting y′ instead of y. The difference ⟨w, Ψ(x, y)⟩ − ⟨w, Ψ(x, y′)⟩ is also referred to as the “margin” (see Section 15.1).
This is illustrated in the following figure:

[Figure: the points Ψ(x, y), Ψ(x, y′), Ψ(x, y″) projected onto the direction w; the projection of Ψ(x, y) exceeds that of Ψ(x, y′) by at least Δ(y′, y) and that of Ψ(x, y″) by at least Δ(y″, y).]

17.2.5 Multiclass SVM and SGD


Once we have defined the generalized hinge loss, we obtain a convex-Lipschitz
learning problem and we can apply our general techniques for solving such prob-
lems. In particular, the RLM technique we have studied in Chapter 13 yields the
multiclass SVM rule:
Multiclass SVM

input: (x1, y1), . . . , (xm, ym)
parameters:
  regularization parameter λ > 0
  loss function Δ : Y × Y → R+
  class-sensitive feature mapping Ψ : X × Y → R^d
solve:
  min_{w∈R^d}  λ‖w‖² + (1/m) Σ_{i=1}^m max_{y′∈Y} ( Δ(y′, yi) + ⟨w, Ψ(xi, y′) − Ψ(xi, yi)⟩ )
output the predictor hw(x) = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩

We can solve the optimization problem associated with multiclass SVM us-
ing generic convex optimization algorithms (or using the method described in
Section 15.5). Let us analyze the risk of the resulting hypothesis. The analysis
seamlessly follows from our general analysis for convex-Lipschitz problems given
in Chapter 13. In particular, applying Corollary 13.8 and using the fact that the
generalized hinge loss upper bounds the loss, we immediately obtain an analog
of Corollary 15.7:
corollary 17.1  Let D be a distribution over X × Y, let Ψ : X × Y → R^d, and assume that for all x ∈ X and y ∈ Y we have ‖Ψ(x, y)‖ ≤ ρ/2. Let B > 0.

Consider running Multiclass SVM with λ = sqrt( 2ρ² / (B²m) ) on a training set S ∼ D^m and let hw be the output of Multiclass SVM. Then,

E_{S∼D^m}[LD(hw)] ≤ E_{S∼D^m}[L^{g-hinge}_D(w)] ≤ min_{u:‖u‖≤B} L^{g-hinge}_D(u) + sqrt( 8ρ²B² / m ),

where LD(h) = E_{(x,y)∼D}[Δ(h(x), y)] and L^{g-hinge}_D(w) = E_{(x,y)∼D}[ℓ(w, (x, y))] with ℓ being the generalized hinge-loss as defined in Equation (17.3).
We can also apply the SGD learning framework for minimizing L^{g-hinge}_D(w) as described in Chapter 14. Recall Claim 14.6, which dealt with subgradients of max functions. In light of this claim, in order to find a subgradient of the generalized hinge loss all we need to do is to find y ∈ Y that achieves the maximum in the definition of the generalized hinge loss. This yields the following algorithm:

SGD for Multiclass Learning

parameters:
  scalar η > 0, integer T > 0
  loss function Δ : Y × Y → R+
  class-sensitive feature mapping Ψ : X × Y → R^d
initialize: w(1) = 0 ∈ R^d
for t = 1, 2, . . . , T
  sample (x, y) ∼ D
  find ŷ ∈ argmax_{y′∈Y} ( Δ(y′, y) + ⟨w(t), Ψ(x, y′) − Ψ(x, y)⟩ )
  set vt = Ψ(x, ŷ) − Ψ(x, y)
  update w(t+1) = w(t) − η vt
output w̄ = (1/T) Σ_{t=1}^T w(t)
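A minimal Python sketch of this procedure, reusing the hypothetical psi and delta helpers from the earlier sketches and sampling from the training set in place of D:

import numpy as np

def sgd_multiclass(examples, labels_set, psi, delta, dim, eta, T, seed=0):
    """SGD on the generalized hinge loss; returns the averaged weight vector w_bar."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    w_sum = np.zeros(dim)
    for _ in range(T):
        w_sum += w                                    # accumulate w^(t) before the update
        x, y = examples[rng.integers(len(examples))]  # stand-in for sampling (x, y) ~ D
        # y_hat attains the max in the definition of the generalized hinge loss
        y_hat = max(labels_set,
                    key=lambda yp: delta(yp, y) + np.dot(w, psi(x, yp) - psi(x, y)))
        v = psi(x, y_hat) - psi(x, y)                 # a subgradient of the loss at w
        w = w - eta * v
    return w_sum / T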

Our general analysis of SGD given in Corollary 14.12 immediately implies:

corollary 17.2  Let D be a distribution over X × Y, let Ψ : X × Y → R^d, and assume that for all x ∈ X and y ∈ Y we have ‖Ψ(x, y)‖ ≤ ρ/2. Let B > 0. Then, for every ε > 0, if we run SGD for multiclass learning with a number of iterations (i.e., number of examples)

T ≥ B²ρ² / ε²

and with η = sqrt( B² / (ρ²T) ), then the output of SGD satisfies

E_{S∼D^m}[LD(hw̄)] ≤ E_{S∼D^m}[L^{g-hinge}_D(w̄)] ≤ min_{u:‖u‖≤B} L^{g-hinge}_D(u) + ε.

Remark 17.3 It is interesting to note that the risk bounds given in Corol-
lary 17.1 and Corollary 17.2 do not depend explicitly on the size of the label
set Y, a fact we will rely on in the next section. However, the bounds may de-
pend implicitly on the size of Y via the norm of Ψ(x, y) and the fact that the bounds are meaningful only when there exists some vector u, ‖u‖ ≤ B, for which L^{g-hinge}_D(u) is not excessively large.

17.3 Structured Output Prediction

Structured output prediction problems are multiclass problems in which Y is very large but is endowed with a predefined structure. The structure plays a
key role in constructing efficient algorithms. To motivate structured learning
problems, consider the problem of optical character recognition (OCR). Suppose
we receive an image of some handwritten word and would like to predict which
word is written in the image. To simplify the setting, suppose we know how to
segment the image into a sequence of images, each of which contains a patch of
the image corresponding to a single letter. Therefore, X is the set of sequences
of images and Y is the set of sequences of letters. Note that the size of Y grows
exponentially with the maximal length of a word. An example of an image x
corresponding to the label y = “workable” is given in the following.

To tackle structured prediction we can rely on the family of linear predictors described in the previous section. In particular, we need to define a reasonable loss function for the problem, Δ, as well as a good class-sensitive feature mapping, Ψ. By “good” we mean a feature mapping that will lead to a low approximation error for the class of linear predictors with respect to Ψ and Δ. Once we do this,
we can rely, for example, on the SGD learning algorithm defined in the previous
section.
However, the huge size of Y poses several challenges:

1. To apply the multiclass prediction we need to solve a maximization problem over Y. How can we predict efficiently when Y is so large?
2. How do we train w efficiently? In particular, to apply the SGD rule we again
need to solve a maximization problem over Y.
3. How can we avoid overfitting?

In the previous section we have already shown that the sample complexity of learning a linear multiclass predictor does not depend explicitly on the number of classes. We just need to make sure that the norm of the range of Ψ is not too large. This will take care of the overfitting problem. To tackle the computational challenges we rely on the structure of the problem, and define the functions Δ and Ψ so that calculating the maximization problems in the definition of hw and in the SGD algorithm can be performed efficiently. In the following we demonstrate one way to achieve these goals for the OCR task mentioned previously.
To simplify the presentation, let us assume that all the words in Y are of length r and that the number of different letters in our alphabet is q. Let y and y′ be two

words (i.e., sequences of letters) in Y. We define the function Δ(y′, y) to be the average number of letters that are different in y′ and y, namely, (1/r) Σ_{i=1}^r 1[y_i ≠ y′_i].
Next, let us define a class-sensitive feature mapping Ψ(x, y). It will be convenient to think about x as a matrix of size n × r, where n is the number of pixels in each image, and r is the number of images in the sequence. The j'th column of x corresponds to the j'th image in the sequence (encoded as a vector of gray level values of pixels). The dimension of the range of Ψ is set to be d = nq + q².
The first nq feature functions are “type 1” features and take the form:

Ψ_{i,j,1}(x, y) = (1/r) Σ_{t=1}^r x_{i,t} 1[y_t = j].

That is, we sum the value of the i’th pixel only over the images for which y
assigns the letter j. The triple index (i, j, 1) indicates that we are dealing with
feature (i, j) of type 1. Intuitively, such features can capture pixels in the image
whose gray level values are indicative of a certain letter. The second type of
features take the form

Ψ_{i,j,2}(x, y) = (1/r) Σ_{t=2}^r 1[y_t = i] 1[y_{t−1} = j].

That is, we sum the number of times the letter i follows the letter j. Intuitively,
these features can capture rules like “It is likely to see the pair ‘qu’ in a word”
or “It is unlikely to see the pair ‘rz’ in a word.” Of course, some of these features
will not be very useful, so the goal of the learning process is to assign weights to
features by learning the vector w, so that the weighted score will give us a good
prediction via

hw(x) = argmax_{y∈Y} ⟨w, Ψ(x, y)⟩.

It is left to show how to solve the optimization problem in the definition of hw(x) efficiently, as well as how to solve the optimization problem in the definition of ŷ in the SGD algorithm. We can do this by applying a dynamic programming procedure. We describe the procedure for solving the maximization in the definition of hw and leave as an exercise the maximization problem in the definition of ŷ in the SGD algorithm.
To derive the dynamic programming procedure, let us first observe that we can write

Ψ(x, y) = Σ_{t=1}^r φ(x, y_t, y_{t−1}),

for an appropriate φ : X × [q] × ([q] ∪ {0}) → R^d, and for simplicity we assume that y_0 is always equal to 0. Indeed, each feature function Ψ_{i,j,1} can be written in terms of

φ_{i,j,1}(x, y_t, y_{t−1}) = x_{i,t} 1[y_t = j],

while the feature function Ψ_{i,j,2} can be written in terms of

φ_{i,j,2}(x, y_t, y_{t−1}) = 1[y_t = i] 1[y_{t−1} = j].
Therefore, the prediction can be written as

hw(x) = argmax_{y∈Y} Σ_{t=1}^r ⟨w, φ(x, y_t, y_{t−1})⟩.    (17.4)

In the following we derive a dynamic programming procedure that solves every problem of the form given in Equation (17.4). The procedure will maintain a matrix M ∈ R^{q,r} such that

M_{s,τ} = max_{(y_1,...,y_τ): y_τ = s} Σ_{t=1}^τ ⟨w, φ(x, y_t, y_{t−1})⟩.

Clearly, the maximum of ⟨w, Ψ(x, y)⟩ equals max_s M_{s,r}. Furthermore, we can calculate M in a recursive manner:

M_{s,τ} = max_{s′} ( M_{s′,τ−1} + ⟨w, φ(x, s, s′)⟩ ).    (17.5)

This yields the following procedure:

Dynamic Programming for Calculating hw(x) as Given in Equation (17.4)

input: a matrix x ∈ R^{n,r} and a vector w
initialize:
  foreach s ∈ [q]
    M_{s,1} = ⟨w, φ(x, s, 0)⟩
for τ = 2, . . . , r
  foreach s ∈ [q]
    set M_{s,τ} as in Equation (17.5)
    set I_{s,τ} to be the s′ that maximizes Equation (17.5)
set y_r = argmax_s M_{s,r}
for τ = r, r − 1, . . . , 2
  set y_{τ−1} = I_{y_τ,τ}
output: y = (y_1, . . . , y_r)
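Below is a minimal sketch of this dynamic program in Python. It assumes the per-position scores ⟨w, φ(x, s, s′)⟩ have already been collected into an array score[t, s, s_prev] (an assumption of ours; any w and φ can be used to fill it), and it returns the maximizing label sequence:

import numpy as np

def viterbi_decode(score):
    """score[t, s, s_prev] = <w, phi(x, s, s_prev)>; position 0 uses s_prev = 0 by convention.

    Returns the label sequence y maximizing sum_t score[t, y_t, y_{t-1}]."""
    r, q, _ = score.shape
    M = np.full((q, r), -np.inf)     # M[s, t] = best score of a prefix ending in letter s
    I = np.zeros((q, r), dtype=int)  # back-pointers
    M[:, 0] = score[0, :, 0]
    for t in range(1, r):
        for s in range(q):
            vals = M[:, t - 1] + score[t, s, :]
            I[s, t] = int(np.argmax(vals))
            M[s, t] = vals[I[s, t]]
    # Backtrack from the best final letter.
    y = np.zeros(r, dtype=int)
    y[r - 1] = int(np.argmax(M[:, r - 1]))
    for t in range(r - 1, 0, -1):
        y[t - 1] = I[y[t], t]
    return y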

17.4 Ranking

Ranking is the problem of ordering a set of instances according to their “relevance.” A typical application is ordering results of a search engine according to their relevance to the query. Another example is a system that monitors electronic transactions and should alert for possible fraudulent transactions. Such a system should order transactions according to how suspicious they are.
Formally, let X* = ∪_{n=1}^∞ X^n be the set of all sequences of instances from

X of arbitrary length. A ranking hypothesis, h, is a function that receives a sequence of instances x̄ = (x1, . . . , xr) ∈ X*, and returns a permutation of [r]. It is more convenient to let the output of h be a vector y ∈ R^r, where by sorting the elements of y we obtain the permutation over [r]. We denote by π(y) the permutation over [r] induced by y. For example, for r = 5, the vector y = (2, 1, 6, −1, 0.5) induces the permutation π(y) = (4, 3, 5, 1, 2). That is, if we sort y in ascending order, then we obtain the vector (−1, 0.5, 1, 2, 6). Now, π(y)_i is the position of y_i in the sorted vector (−1, 0.5, 1, 2, 6). This notation reflects that the top-ranked instances are those that achieve the highest values in π(y).
In the notation of our PAC learning model, the examples domain is Z = ∪_{r=1}^∞ (X^r × R^r), and the hypothesis class, H, is some set of ranking hypotheses.

We next turn to describe loss functions for ranking. There are many possible ways to define such loss functions, and here we list a few examples. In all the examples we define ℓ(h, (x̄, y)) = Δ(h(x̄), y), for some function Δ : ∪_{r=1}^∞ (R^r × R^r) → R+.

• 0–1 Ranking loss: Δ(y′, y) is zero if y and y′ induce exactly the same ranking and Δ(y′, y) = 1 otherwise. That is, Δ(y′, y) = 1[π(y′) ≠ π(y)]. Such a loss function is almost never used in practice as it does not distinguish between the case in which π(y′) is almost equal to π(y) and the case in which π(y′) is completely different from π(y).
• Kendall-Tau Loss: We count the number of pairs (i, j) that are in a different order in the two permutations. This can be written as

  Δ(y′, y) = (2 / (r(r − 1))) Σ_{i=1}^{r−1} Σ_{j=i+1}^{r} 1[sign(y′_i − y′_j) ≠ sign(y_i − y_j)].

This loss function is more useful than the 0–1 loss as it reflects the level of
similarity between the two rankings.
• Normalized Discounted Cumulative Gain (NDCG): This measure emphasizes the correctness at the top of the list by using a monotonically nondecreasing discount function D : N → R+. We first define a discounted cumulative gain measure:

  G(y′, y) = Σ_{i=1}^r D(π(y′)_i) y_i.

  In words, if we interpret y_i as a score of the “true relevance” of item i, then we take a weighted sum of the relevance of the elements, while the weight of y_i is determined on the basis of the position of i in π(y′). Assuming that all elements of y are nonnegative, it is easy to verify that 0 ≤ G(y′, y) ≤ G(y, y). We can therefore define a normalized discounted cumulative gain by the ratio G(y′, y)/G(y, y), and the corresponding loss function would be

  Δ(y′, y) = 1 − G(y′, y)/G(y, y) = (1/G(y, y)) Σ_{i=1}^r ( D(π(y)_i) − D(π(y′)_i) ) y_i.

  We can easily see that Δ(y′, y) ∈ [0, 1] and that Δ(y′, y) = 0 whenever π(y′) = π(y).
  A typical way to define the discount function is by

  D(i) = 1 / log₂(r − i + 2)  if i ∈ {r − k + 1, . . . , r},  and D(i) = 0 otherwise,

  where k < r is a parameter. This means that we care more about elements that are ranked higher, and we completely ignore elements that are not among the top-k ranked elements. The NDCG measure is often used to evaluate the performance of search engines since in such applications it makes sense to completely ignore elements that are not at the top of the ranking.
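As a concrete reference, here is a minimal sketch (our own code, not from the book) of the Kendall tau loss and the NDCG loss for score vectors y_pred and y, using the discount function above with parameter k:

import numpy as np

def pi(y):
    """pi(y)_i = position (1-based) of y_i when y is sorted in ascending order."""
    order = np.argsort(y)
    positions = np.empty(len(y), dtype=int)
    positions[order] = np.arange(1, len(y) + 1)
    return positions

def kendall_tau_loss(y_pred, y):
    r = len(y)
    disagreements = sum(
        np.sign(y_pred[i] - y_pred[j]) != np.sign(y[i] - y[j])
        for i in range(r - 1) for j in range(i + 1, r))
    return 2.0 * disagreements / (r * (r - 1))

def ndcg_loss(y_pred, y, k):
    r = len(y)
    D = lambda i: 1.0 / np.log2(r - i + 2) if i > r - k else 0.0
    G = lambda perm: sum(D(perm[i]) * y[i] for i in range(r))
    return 1.0 - G(pi(y_pred)) / G(pi(y))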

Once we have a hypothesis class and a ranking loss function, we can learn a
ranking function using the ERM rule. However, from the computational point of
view, the resulting optimization problem might be hard to solve. We next discuss
how to learn linear predictors for ranking.

17.4.1 Linear Predictors for Ranking


A natural way to define a ranking function is by projecting the instances onto some vector w and then outputting the resulting scalars as our representation of the ranking function. That is, assuming that X ⊂ R^d, for every w ∈ R^d we define a ranking function

hw((x1, . . . , xr)) = (⟨w, x1⟩, . . . , ⟨w, xr⟩).    (17.6)

As we discussed in Chapter 16, we can also apply a feature mapping that maps instances into some feature space and then takes the inner products with w in the feature space. For simplicity, we focus on the simpler form as in Equation (17.6). Given some W ⊂ R^d, we can now define the hypothesis class HW = {hw : w ∈ W}. Once we have defined this hypothesis class, and have chosen a ranking loss function, we can apply the ERM rule as follows: Given a training set, S = (x̄1, y1), . . . , (x̄m, ym), where each (x̄i, yi) is in (X × R)^{r_i}, for some r_i ∈ N, we should search for w ∈ W that minimizes the empirical loss, Σ_{i=1}^m Δ(hw(x̄i), yi).
As in the case of binary classification, for many loss functions this problem is
computationally hard, and we therefore turn to describe convex surrogate loss
functions. We describe the surrogates for the Kendall tau loss and for the NDCG
loss.

A Hinge Loss for the Kendall Tau Loss Function:

We can think of the Kendall tau loss as an average of 0–1 losses for each pair. In particular, for every (i, j) we can rewrite

1[sign(y′_i − y′_j) ≠ sign(y_i − y_j)] = 1[sign(y_i − y_j)(y′_i − y′_j) ≤ 0].

In our case, y′_i − y′_j = ⟨w, x_i − x_j⟩. It follows that we can use the hinge loss upper bound as follows:

1[sign(y_i − y_j)(y′_i − y′_j) ≤ 0] ≤ max{0, 1 − sign(y_i − y_j)⟨w, x_i − x_j⟩}.

Taking the average over the pairs we obtain the following surrogate convex loss for the Kendall tau loss function:

Δ(hw(x̄), y) ≤ (2 / (r(r − 1))) Σ_{i=1}^{r−1} Σ_{j=i+1}^{r} max{0, 1 − sign(y_i − y_j)⟨w, x_i − x_j⟩}.

The right-hand side is convex with respect to w and upper bounds the Kendall tau loss. It is also a ρ-Lipschitz function with parameter ρ ≤ max_{i,j} ‖x_i − x_j‖.

A Hinge Loss for the NDCG Loss Function:

The NDCG loss function depends on the predicted ranking vector y′ ∈ R^r via the permutation it induces. To derive a surrogate loss function we first make the following observation. Let V be the set of all permutations of [r] encoded as vectors; namely, each v ∈ V is a vector in [r]^r such that for all i ≠ j we have v_i ≠ v_j. Then (see Exercise 4),

π(y′) = argmax_{v∈V} Σ_{i=1}^r v_i y′_i.    (17.7)

Let us denote Ψ(x̄, v) = Σ_{i=1}^r v_i x_i; it follows that

π(hw(x̄)) = argmax_{v∈V} Σ_{i=1}^r v_i ⟨w, x_i⟩ = argmax_{v∈V} ⟨w, Σ_{i=1}^r v_i x_i⟩ = argmax_{v∈V} ⟨w, Ψ(x̄, v)⟩.

On the basis of this observation, we can use the generalized hinge loss for cost-sensitive multiclass classification as a surrogate loss function for the NDCG loss as follows:

Δ(hw(x̄), y) ≤ Δ(hw(x̄), y) + ⟨w, Ψ(x̄, π(hw(x̄)))⟩ − ⟨w, Ψ(x̄, π(y))⟩
            ≤ max_{v∈V} [ Δ(v, y) + ⟨w, Ψ(x̄, v)⟩ − ⟨w, Ψ(x̄, π(y))⟩ ]
            = max_{v∈V} [ Δ(v, y) + Σ_{i=1}^r (v_i − π(y)_i) ⟨w, x_i⟩ ].    (17.8)

The right-hand side is a convex function with respect to w.


We can now solve the learning problem using SGD as described in Section 17.2.5.
The main computational bottleneck is calculating a subgradient of the loss func-
tion, which is equivalent to finding v that achieves the maximum in Equa-
tion (17.8) (see Claim 14.6). Using the definition of the NDCG loss, this is

equivalent to solving the problem

argmin_{v∈V} Σ_{i=1}^r ( α_i v_i + β_i D(v_i) ),

where α_i = −⟨w, x_i⟩ and β_i = y_i / G(y, y). We can think of this problem a little bit differently by defining a matrix A ∈ R^{r,r} where

A_{i,j} = j α_i + D(j) β_i.

Now, let us think about each j as a “worker,” each i as a “task,” and A_{i,j} as the cost of assigning task i to worker j. With this view, the problem of finding v becomes the problem of finding an assignment of the tasks to workers of minimal cost. This problem is called “the assignment problem” and can be solved efficiently. One particular algorithm is the “Hungarian method” (Kuhn 1955).
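In practice the subgradient step can lean on an off-the-shelf assignment solver. The sketch below (our own code) builds the cost matrix A and solves the assignment problem with SciPy's linear_sum_assignment, which implements a Hungarian-style algorithm; the discount function D and the normalization G(y, y) are passed in by the caller:

import numpy as np
from scipy.optimize import linear_sum_assignment

def ndcg_hinge_argmax_v(w, xs, y, D, G_yy):
    """Find v in V minimizing sum_i (alpha_i * v_i + beta_i * D(v_i)).

    xs: list of instance vectors, y: relevance scores, D: discount function, G_yy: G(y, y)."""
    r = len(xs)
    alpha = np.array([-np.dot(w, x) for x in xs])  # alpha_i = -<w, x_i>
    beta = np.array(y, dtype=float) / G_yy         # beta_i = y_i / G(y, y)
    # Cost of assigning position j (1-based) to item i: A[i, j-1] = j*alpha_i + D(j)*beta_i
    A = np.array([[(j + 1) * alpha[i] + D(j + 1) * beta[i] for j in range(r)]
                  for i in range(r)])
    rows, cols = linear_sum_assignment(A)          # minimal-cost assignment
    v = np.zeros(r, dtype=int)
    v[rows] = cols + 1                             # v_i = assigned position (1-based)
    return v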
Another way to solve the assignment problem is using linear programming. To do so, let us first write the assignment problem as

argmin_{B∈R^{r,r}_+} Σ_{i,j=1}^r A_{i,j} B_{i,j}    (17.9)
s.t.  ∀i ∈ [r], Σ_{j=1}^r B_{i,j} = 1
      ∀j ∈ [r], Σ_{i=1}^r B_{i,j} = 1
      ∀i, j,  B_{i,j} ∈ {0, 1}

A matrix B that satisfies the constraints in the preceding optimization problem is called a permutation matrix. This is because the constraints guarantee that there is at most a single entry of each row that equals 1 and a single entry of each column that equals 1. Therefore, the matrix B corresponds to the permutation v ∈ V defined by v_i = j for the single index j that satisfies B_{i,j} = 1.
The preceding optimization is still not a linear program because of the combinatorial constraint B_{i,j} ∈ {0, 1}. However, as it turns out, this constraint is redundant – if we solve the optimization problem while simply omitting the combinatorial constraint, then we are still guaranteed that there is an optimal solution that will satisfy this constraint. This is formalized later.
Denote ⟨A, B⟩ = Σ_{i,j} A_{i,j} B_{i,j}. Then, Equation (17.9) is the problem of minimizing ⟨A, B⟩ such that B is a permutation matrix.
A matrix B ∈ R^{r,r} is called doubly stochastic if all elements of B are nonnegative, the sum of each row of B is 1, and the sum of each column of B is 1. Therefore, solving Equation (17.9) without the constraints B_{i,j} ∈ {0, 1} is the problem

argmin_{B∈R^{r,r}} ⟨A, B⟩  s.t.  B is a doubly stochastic matrix.    (17.10)

The following claim states that every doubly stochastic matrix is a convex combination of permutation matrices.

claim 17.3 ((Birkhoff 1946, Von Neumann 1953))  The set of doubly stochastic matrices in R^{r,r} is the convex hull of the set of permutation matrices in R^{r,r}.

On the basis of the claim, we easily obtain the following:

lemma 17.4  There exists an optimal solution of Equation (17.10) that is also an optimal solution of Equation (17.9).

Proof  Let B be a solution of Equation (17.10). Then, by Claim 17.3, we can write B = Σ_i γ_i C_i, where each C_i is a permutation matrix, each γ_i > 0, and Σ_i γ_i = 1. Since all the C_i are also doubly stochastic, we clearly have that ⟨A, B⟩ ≤ ⟨A, C_i⟩ for every i. We claim that there is some i for which ⟨A, B⟩ = ⟨A, C_i⟩. This must be true since otherwise, if for every i ⟨A, B⟩ < ⟨A, C_i⟩, we would have that

⟨A, B⟩ = ⟨A, Σ_i γ_i C_i⟩ = Σ_i γ_i ⟨A, C_i⟩ > Σ_i γ_i ⟨A, B⟩ = ⟨A, B⟩,

which cannot hold. We have thus shown that some permutation matrix, C_i, satisfies ⟨A, B⟩ = ⟨A, C_i⟩. But, since for every other permutation matrix C we have ⟨A, B⟩ ≤ ⟨A, C⟩, we conclude that C_i is an optimal solution of both Equation (17.9) and Equation (17.10).

17.5 Bipartite Ranking and Multivariate Performance Measures

In the previous section we described the problem of ranking. We used a vector y ∈ R^r for representing an order over the elements x1, . . . , xr. If all elements in y are different from each other, then y specifies a full order over [r]. However, if two elements of y attain the same value, y_i = y_j for i ≠ j, then y can only specify a partial order over [r]. In such a case, we say that x_i and x_j are of equal relevance according to y. In the extreme case, y ∈ {±1}^r, which means that each x_i is either relevant or nonrelevant. This setting is often called “bipartite ranking.” For example, in the fraud detection application mentioned in the previous section, each transaction is labeled as either fraudulent (y_i = 1) or benign (y_i = −1).
Seemingly, we can solve the bipartite ranking problem by learning a binary classifier, applying it on each instance, and putting the positive ones at the top of the ranked list. However, this may lead to poor results as the goal of a binary learner is usually to minimize the zero-one loss (or some surrogate of it), while the goal of a ranker might be significantly different. To illustrate this, consider again the problem of fraud detection. Usually, most of the transactions are benign (say 99.9%). Therefore, a binary classifier that predicts “benign” on all transactions will have a zero-one error of 0.1%. While this is a very small number, the resulting predictor is meaningless for the fraud detection application. The crux of the

problem stems from the inadequacy of the zero-one loss for what we are really
interested in. A more adequate performance measure should take into account
the predictions over the entire set of instances. For example, in the previous
section we have defined the NDCG loss, which emphasizes the correctness of the
top-ranked items. In this section we describe additional loss functions that are
specifically adequate for bipartite ranking problems.
As in the previous section, we are given a sequence of instances, x̄ = (x1, . . . , xr), and we predict a ranking vector y′ ∈ R^r. The feedback vector is y ∈ {±1}^r. We define a loss that depends on y′ and y and depends on a threshold θ ∈ R. This threshold transforms the vector y′ ∈ R^r into the vector (sign(y′_1 − θ), . . . , sign(y′_r − θ)) ∈ {±1}^r. Usually, the value of θ is set to be 0. However, as we will see, we sometimes set θ while taking into account additional constraints on the problem.
The loss functions we define in the following depend on the following four numbers:

True positives:   a = |{i : y_i = +1 ∧ sign(y′_i − θ) = +1}|
False positives:  b = |{i : y_i = −1 ∧ sign(y′_i − θ) = +1}|
False negatives:  c = |{i : y_i = +1 ∧ sign(y′_i − θ) = −1}|
True negatives:   d = |{i : y_i = −1 ∧ sign(y′_i − θ) = −1}|    (17.11)

The recall (a.k.a. sensitivity) of a prediction vector is the fraction of true positives y′ “catches,” namely, a/(a+c). The precision is the fraction of correct predictions among the positive labels we predict, namely, a/(a+b). The specificity is the fraction of true negatives that our predictor “catches,” namely, d/(d+b).
Note that as we decrease θ the recall increases (attaining the value 1 when θ = −∞). On the other hand, the precision and the specificity usually decrease as we decrease θ. Therefore, there is a tradeoff between precision and recall, and we can control it by changing θ. The loss functions defined in the following use various techniques for combining both the precision and recall.

• Averaging sensitivity and specificity: This measure is the average of the sensitivity and specificity, namely, (1/2)( a/(a+c) + d/(d+b) ). This is also the accuracy on positive examples averaged with the accuracy on negative examples. Here, we set θ = 0 and the corresponding loss function is Δ(y′, y) = 1 − (1/2)( a/(a+c) + d/(d+b) ).
• F1-score: The F1 score is the harmonic mean of the precision and recall: 2 / ( 1/Precision + 1/Recall ). Its maximal value (of 1) is obtained when both precision and recall are 1, and its minimal value (of 0) is obtained whenever one of them is 0 (even if the other one is 1). The F1 score can be written using the numbers a, b, c as follows: F1 = 2a / (2a + b + c). Again, we set θ = 0, and the loss function becomes Δ(y′, y) = 1 − F1.
• Fβ-score: It is like the F1 score, but we attach β² times more importance to recall than to precision, that is, (1 + β²) / ( 1/Precision + β²/Recall ). It can also be written as Fβ = (1 + β²)a / ( (1 + β²)a + b + β²c ). Again, we set θ = 0, and the loss function becomes Δ(y′, y) = 1 − Fβ.
• Recall at k: We measure the recall while the prediction must contain at most k positive labels. That is, we should set θ so that a + b ≤ k. This is convenient, for example, in the application of a fraud detection system, where a bank employee can only handle a small number of suspicious transactions.
• Precision at k: We measure the precision while the prediction must contain at least k positive labels. That is, we should set θ so that a + b ≥ k.
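The following minimal sketch (our own helper, not from the book) computes the four numbers of Equation (17.11) for a given threshold θ and a few of the derived measures:

import numpy as np

def confusion_counts(y_scores, y_true, theta=0.0):
    """Return (a, b, c, d) of Equation (17.11); scores at the threshold count as negative."""
    pred = np.where(y_scores - theta > 0, 1, -1)
    a = np.sum((y_true == 1) & (pred == 1))    # true positives
    b = np.sum((y_true == -1) & (pred == 1))   # false positives
    c = np.sum((y_true == 1) & (pred == -1))   # false negatives
    d = np.sum((y_true == -1) & (pred == -1))  # true negatives
    return a, b, c, d

def f_beta(a, b, c, beta=1.0):
    """F_beta = (1 + beta^2) a / ((1 + beta^2) a + b + beta^2 c)."""
    return (1 + beta**2) * a / ((1 + beta**2) * a + b + beta**2 * c)

def avg_sensitivity_specificity(a, b, c, d):
    return 0.5 * (a / (a + c) + d / (d + b))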

The measures defined previously are often referred to as multivariate performance measures. Note that these measures are highly different from the average zero-one loss, which in the preceding notation equals (b + c)/(a + b + c + d). In the aforementioned example of fraud detection, when 99.9% of the examples are negatively labeled, the zero-one loss of predicting that all the examples are negative is 0.1%. In contrast, the recall of such a prediction is 0 and hence the F1 score is also 0, which means that the corresponding loss will be 1.

17.5.1 Linear Predictors for Bipartite Ranking


We next describe how to train linear predictors for bipartite ranking. As in the previous section, a linear predictor for ranking is defined to be

hw(x̄) = (⟨w, x1⟩, . . . , ⟨w, xr⟩).

The corresponding loss function is one of the multivariate performance measures described before. The loss function depends on y′ = hw(x̄) via the binary vector it induces, which we denote by

b(y′) = (sign(y′_1 − θ), . . . , sign(y′_r − θ)) ∈ {±1}^r.    (17.12)

As in the previous section, to facilitate an efficient algorithm we derive a convex surrogate loss function on Δ. The derivation is similar to the derivation of the generalized hinge loss for the NDCG ranking loss, as described in the previous section.
Our first observation is that for all the values of θ defined before, there is some V ⊆ {±1}^r such that b(y′) can be rewritten as

b(y′) = argmax_{v∈V} Σ_{i=1}^r v_i y′_i.    (17.13)

This is clearly true for the case θ = 0 if we choose V = {±1}^r. The two measures for which θ is not taken to be 0 are precision at k and recall at k. For precision at k we can take V to be the set V_{≥k}, containing all vectors in {±1}^r whose number of ones is at least k. For recall at k, we can take V to be V_{≤k}, which is defined analogously. See Exercise 5.

Once we have defined b as in Equation (17.13), we can easily derive a convex surrogate loss as follows. Assuming that y ∈ V, we have that

Δ(hw(x̄), y) = Δ(b(hw(x̄)), y)
            ≤ Δ(b(hw(x̄)), y) + Σ_{i=1}^r (b_i(hw(x̄)) − y_i) ⟨w, x_i⟩
            ≤ max_{v∈V} [ Δ(v, y) + Σ_{i=1}^r (v_i − y_i) ⟨w, x_i⟩ ].    (17.14)

The right-hand side is a convex function with respect to w.


We can now solve the learning problem using SGD as described in Section 17.2.5.
The main computational bottleneck is calculating a subgradient of the loss func-
tion, which is equivalent to finding v that achieves the maximum in Equa-
tion (17.14) (see Claim 14.6).
In the following we describe how to find this maximizer efficiently for any
performance measure that can be written as a function of the numbers a, b, c, d
given in Equation (17.11), and for which the set V contains all elements in {±1}^r for which the values of a, b satisfy some constraints. For example, for “recall at k” the set V is all vectors for which a + b ≤ k.
The idea is as follows. For any a, b ∈ [r], let

Ȳ_{a,b} = {v : |{i : v_i = 1 ∧ y_i = 1}| = a  ∧  |{i : v_i = 1 ∧ y_i = −1}| = b}.

Any vector v ∈ V falls into Ȳ_{a,b} for some a, b ∈ [r]. Furthermore, if Ȳ_{a,b} ∩ V is not empty for some a, b ∈ [r] then Ȳ_{a,b} ∩ V = Ȳ_{a,b}. Therefore, we can search within each Ȳ_{a,b} that has a nonempty intersection with V separately, and then take the optimal value. The key observation is that once we are searching only within Ȳ_{a,b}, the value of Δ is fixed, so we only need to maximize the expression

max_{v∈Ȳ_{a,b}} Σ_{i=1}^r v_i ⟨w, x_i⟩.

Suppose the examples are sorted so that ⟨w, x_1⟩ ≥ ··· ≥ ⟨w, x_r⟩. Then, it is easy to verify that we would like to set v_i to be positive for the smallest indices i. Doing this, with the constraint on a, b, amounts to setting v_i = 1 for the a top-ranked positive examples and for the b top-ranked negative examples. This yields the following procedure.

Solving Equation (17.14)

input:
  (x1, . . . , xr), (y1, . . . , yr), w, V, Δ
assumptions:
  Δ is a function of a, b, c, d
  V contains all vectors for which f(a, b) = 1 for some function f
initialize:
  P = |{i : yi = 1}|,  N = |{i : yi = −1}|
  μ = (⟨w, x1⟩, . . . , ⟨w, xr⟩),  α* = −∞
  sort examples so that μ1 ≥ μ2 ≥ · · · ≥ μr
  let i1, . . . , iP be the (sorted) indices of the positive examples
  let j1, . . . , jN be the (sorted) indices of the negative examples
for a = 0, 1, . . . , P
  c = P − a
  for b = 0, 1, . . . , N such that f(a, b) = 1
    d = N − b
    calculate Δ using a, b, c, d
    set v1, . . . , vr s.t. v_{i1} = · · · = v_{ia} = v_{j1} = · · · = v_{jb} = 1
      and the rest of the elements of v equal −1
    set α = Δ + Σ_{i=1}^r vi μi
    if α ≥ α*
      α* = α, v* = v
output v*
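Here is a minimal Python sketch of this search (our own code; the cost function delta(a, b, c, d) and the feasibility predicate f(a, b) are assumed to be supplied by the caller):

import numpy as np

def solve_eq_17_14(mu, y, delta, f):
    """Maximize delta(a,b,c,d) + sum_i v_i * mu_i over v in V, where V is encoded by f(a, b) = 1.

    mu: array of scores <w, x_i>, y: labels in {+1, -1}."""
    pos = sorted([i for i in range(len(y)) if y[i] == 1], key=lambda i: -mu[i])
    neg = sorted([i for i in range(len(y)) if y[i] == -1], key=lambda i: -mu[i])
    P, N = len(pos), len(neg)
    best_alpha, best_v = -np.inf, None
    for a in range(P + 1):
        c = P - a
        for b in range(N + 1):
            if not f(a, b):
                continue
            d = N - b
            v = -np.ones(len(y))
            v[pos[:a]] = 1   # a top-ranked positive examples
            v[neg[:b]] = 1   # b top-ranked negative examples
            alpha = delta(a, b, c, d) + np.dot(v, mu)
            if alpha >= best_alpha:
                best_alpha, best_v = alpha, v
    return best_v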

17.6 Summary

Many real world supervised learning problems can be cast as learning a multiclass
predictor. We started the chapter by introducing reductions of multiclass learning
to binary learning. We then described and analyzed the family of linear predictors
for multiclass learning. We have shown how this family can be used even if the
number of classes is extremely large, as long as we have an adequate structure
on the problem. Finally, we have described ranking problems. In Chapter 29 we
study the sample complexity of multiclass learning in more detail.

17.7 Bibliographic Remarks

The One-versus-All and All-Pairs approach reductions have been unified un-
der the framework of Error Correction Output Codes (ECOC) (Dietterich &
Bakiri 1995, Allwein, Schapire & Singer 2000). There are also other types of re-
ductions such as tree-based classifiers (see, for example, Beygelzimer, Langford
& Ravikumar (2007)). The limitations of reduction techniques have been studied

in (Daniely et al. 2011, Daniely, Sabato & Shwartz 2012). See also Chapter 29,
in which we analyze the sample complexity of multiclass learning.
Direct approaches to multiclass learning with linear predictors have been stud-
ied in (Vapnik 1998, Weston & Watkins 1999, Crammer & Singer 2001). In par-
ticular, the multivector construction is due to Crammer & Singer (2001).
Collins (2000) has shown how to apply the Perceptron algorithm for structured
output problems. See also Collins (2002). A related approach is discriminative
learning of conditional random fields; see Lafferty, McCallum & Pereira (2001).
Structured output SVM has been studied in (Weston, Chapelle, Vapnik, Elisseeff
& Schölkopf 2002, Taskar, Guestrin & Koller 2003, Tsochantaridis, Hofmann,
Joachims & Altun 2004).
The dynamic procedure we have presented for calculating the prediction hw (x)
in the structured output section is similar to the forward-backward variables
calculated by the Viterbi procedure in HMMs (see, for instance, (Rabiner &
Juang 1986)). More generally, solving the maximization problem in structured
output is closely related to the problem of inference in graphical models (see, for
example, Koller & Friedman (2009)).
Chapelle, Le & Smola (2007) proposed to learn a ranking function with respect
to the NDCG loss using ideas from structured output learning. They also ob-
served that the maximization problem in the definition of the generalized hinge
loss is equivalent to the assignment problem.
Agarwal & Roth (2005) analyzed the sample complexity of bipartite ranking.
Joachims (2005) studied the applicability of structured output SVM to bipartite
ranking with multivariate performance measures.

17.8 Exercises

1. Consider a set S of examples in R^n × [k] for which there exist vectors µ1, . . . , µk such that every example (x, y) ∈ S falls within a ball centered at µy whose radius is r ≥ 1. Assume also that for every i ≠ j, ‖µi − µj‖ ≥ 4r. Consider concatenating each instance with the constant 1 and then applying the multivector construction, namely,

Ψ(x, y) = [ 0, . . . , 0, x1, . . . , xn, 1, 0, . . . , 0 ],

where the first (y − 1)(n + 1) entries are zeros, the next n + 1 entries are x1, . . . , xn, 1, and the last (k − y)(n + 1) entries are zeros. Show that there exists a vector w ∈ R^{k(n+1)} such that ℓ(w, (x, y)) = 0 for every (x, y) ∈ S.
Hint: Observe that for every example (x, y) ∈ S we can write x = µy + v for some ‖v‖ ≤ r. Now, take w = [w1, . . . , wk], where wi = [µi, −‖µi‖²/2].
2. Multiclass Perceptron: Consider the following algorithm:

Multiclass Batch Perceptron

Input:
  A training set (x1, y1), . . . , (xm, ym)
  A class-sensitive feature mapping Ψ : X × Y → R^d
Initialize: w(1) = (0, . . . , 0) ∈ R^d
For t = 1, 2, . . .
  If (∃ i and y ≠ yi s.t. ⟨w(t), Ψ(xi, yi)⟩ ≤ ⟨w(t), Ψ(xi, y)⟩) then
    w(t+1) = w(t) + Ψ(xi, yi) − Ψ(xi, y)
  else
    output w(t)

Prove the following:

theorem 17.5  Assume that there exists w* such that for all i and for all y ≠ yi it holds that ⟨w*, Ψ(xi, yi)⟩ ≥ ⟨w*, Ψ(xi, y)⟩ + 1. Let R = max_{i,y} ‖Ψ(xi, yi) − Ψ(xi, y)‖. Then, the multiclass Perceptron algorithm stops after at most (R‖w*‖)² iterations, and when it stops it holds that ∀i ∈ [m], yi = argmax_y ⟨w(t), Ψ(xi, y)⟩.
3. Generalize the dynamic programming procedure given in Section 17.3 for solving the maximization problem given in the definition of ŷ in the SGD procedure for multiclass prediction. You can assume that Δ(y′, y) = Σ_{t=1}^r δ(y′_t, y_t) for some arbitrary function δ.
4. Prove that Equation (17.7) holds.
5. Show that the two definitions of b as given in Equation (17.12) and Equation (17.13) are indeed equivalent for all the multivariate performance measures.
18 Decision Trees

A decision tree is a predictor, h : X → Y, that predicts the label associated with an instance x by traveling from a root node of a tree to a leaf. For simplicity
we focus on the binary classification setting, namely, Y = {0, 1}, but decision
trees can be applied for other prediction problems as well. At each node on the
root-to-leaf path, the successor child is chosen on the basis of a splitting of the
input space. Usually, the splitting is based on one of the features of x or on a
predefined set of splitting rules. A leaf contains a specific label. An example of
a decision tree for the papayas example (described in Chapter 2) is given in the
following:

Color?
  - other: not-tasty
  - pale green to pale yellow: Softness?
      - other: not-tasty
      - gives slightly to palm pressure: tasty

To check if a given papaya is tasty or not, the decision tree first examines
the color of the papaya. If this color is not in the range pale green to pale
yellow, then the tree immediately predicts that the papaya is not tasty without
additional tests. Otherwise, the tree turns to examine the softness of the papaya.
If the softness level of the papaya is such that it gives slightly to palm pressure,
the decision tree predicts that the papaya is tasty. Otherwise, the prediction is
“not-tasty.” The preceding example underscores one of the main advantages of
decision trees – the resulting classifier is very simple to understand and interpret.


18.1 Sample Complexity

A popular splitting rule at internal nodes of the tree is based on thresholding the
value of a single feature. That is, we move to the right or left child of the node on
the basis of 1[x_i < θ], where i ∈ [d] is the index of the relevant feature and θ ∈ R
is the threshold. In such cases, we can think of a decision tree as a splitting of
the instance space, X = R^d, into cells, where each leaf of the tree corresponds
to one cell. It follows that a tree with k leaves can shatter a set of k instances.
Hence, if we allow decision trees of arbitrary size, we obtain a hypothesis class
of infinite VC dimension. Such an approach can easily lead to overfitting.
To avoid overfitting, we can rely on the minimum description length (MDL)
principle described in Chapter 7, and aim at learning a decision tree that on one
hand fits the data well while on the other hand is not too large.
For simplicity, we will assume that X = {0, 1}d . In other words, each instance
is a vector of d bits. In that case, thresholding the value of a single feature
corresponds to a splitting rule of the form 1[x_i = 1] for some i ∈ [d]. For instance, we can model the “papaya decision tree” earlier by assuming that a papaya is parameterized by a two-dimensional bit vector x ∈ {0, 1}², where the bit x1
represents whether the softness is gives slightly to palm pressure or not. With
this representation, the node Color? can be replaced with 1[x1 =1] , and the node
Softness? can be replaced with 1[x2 =1] . While this is a big simplification, the
algorithms and analysis we provide in the following can be extended to more
general cases.
With the aforementioned simplifying assumption, the hypothesis class becomes
finite, but is still very large. In particular, any classifier from {0, 1}^d to {0, 1} can be represented by a decision tree with 2^d leaves and depth of d + 1 (see Exercise 1). Therefore, the VC dimension of the class is 2^d, which means that the number of examples we need to PAC learn the hypothesis class grows with 2^d. Unless d is very small, this is a huge number of examples.
To overcome this obstacle, we rely on the MDL scheme described in Chapter 7.
The underlying prior knowledge is that we should prefer smaller trees over larger
trees. To formalize this intuition, we first need to define a description language
for decision trees, which is prefix free and requires fewer bits for smaller decision
trees. Here is one possible way: A tree with n nodes will be described in n + 1
blocks, each of size log2 (d + 3) bits. The first n blocks encode the nodes of the
tree, in a depth-first order (preorder), and the last block marks the end of the
code. Each block indicates whether the current node is:

• An internal node of the form 1[x_i = 1] for some i ∈ [d]


• A leaf whose value is 1
• A leaf whose value is 0
• End of the code

Overall, there are d + 3 options, hence we need log2 (d + 3) bits to describe each
block.
Assuming each internal node has two children,1 it is not hard to show that
this is a prefix-free encoding of the tree, and that the description length of a tree
with n nodes is (n + 1) log2 (d + 3).
By Theorem 7.7 we have that with probability of at least 1 − δ over a sample of size m, for every n and every decision tree h ∈ H with n nodes it holds that

LD(h) ≤ LS(h) + sqrt( ((n + 1) log₂(d + 3) + log(2/δ)) / (2m) ).    (18.1)

This bound performs a tradeoff: on the one hand, we expect larger, more complex
decision trees to have a smaller training risk, LS (h), but the respective value of
n will be larger. On the other hand, smaller decision trees will have a smaller
value of n, but LS (h) might be larger. Our hope (or prior knowledge) is that we
can find a decision tree with both low empirical risk, LS (h), and a number of
nodes n not too high. Our bound indicates that such a tree will have low true
risk, LD (h).

18.2 Decision Tree Algorithms

The bound on LD (h) given in Equation (18.1) suggests a learning rule for decision
trees – search for a tree that minimizes the right-hand side of Equation (18.1).
Unfortunately, it turns out that solving this problem is computationally hard.2
Consequently, practical decision tree learning algorithms are based on heuristics
such as a greedy approach, where the tree is constructed gradually, and locally
optimal decisions are made at the construction of each node. Such algorithms
cannot guarantee to return the globally optimal decision tree but tend to work
reasonably well in practice.
A general framework for growing a decision tree is as follows. We start with
a tree with a single leaf (the root) and assign this leaf a label according to a
majority vote among all labels over the training set. We now perform a series of
iterations. On each iteration, we examine the effect of splitting a single leaf. We
define some “gain” measure that quantifies the improvement due to this split.
Then, among all possible splits, we either choose the one that maximizes the
gain and perform it, or choose not to split the leaf at all.
In the following we provide a possible implementation. It is based on a popular
decision tree algorithm known as “ID3” (short for “Iterative Dichotomizer 3”).
We describe the algorithm for the case of binary features, namely, X = {0, 1}d ,
1 We may assume this without loss of generality, because if a decision node has only one
child, we can replace the node by its child without affecting the predictions of the decision
tree.
2 More precisely, if NP6=P then no algorithm can solve Equation (18.1) in time polynomial
in n, d, and m.

and therefore all splitting rules are of the form 1[x_i = 1] for some feature i ∈ [d].
We discuss the case of real valued features in Section 18.2.3.
The algorithm works by recursive calls, with the initial call being ID3(S, [d]),
and returns a decision tree. In the pseudocode that follows, we use a call to a
procedure Gain(S, i), which receives a training set S and an index i and evaluates
the gain of a split of the tree according to the ith feature. We describe several
gain measures in Section 18.2.1.

ID3(S, A)
Input: training set S, feature subset A ⊆ [d]
if all examples in S are labeled by 1, return a leaf 1
if all examples in S are labeled by 0, return a leaf 0
if A = ∅, return a leaf whose value = majority of labels in S
else:
  Let j = argmax_{i ∈ A} Gain(S, i)
  if all examples in S have the same label
    Return a leaf whose value = majority of labels in S
  else
    Let T_1 be the tree returned by ID3({(x, y) ∈ S : x_j = 1}, A \ {j}).
    Let T_2 be the tree returned by ID3({(x, y) ∈ S : x_j = 0}, A \ {j}).
    Return the tree whose root is the decision node (x_j = 1?), whose "no"
    subtree is T_2, and whose "yes" subtree is T_1.
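The following is a minimal Python sketch of this pseudocode for binary features, X = {0, 1}^d. The dict-based tree representation, the helper names, and the guard against a degenerate split (a feature that is constant on S) are illustrative additions; gain can be any of the measures of Section 18.2.1.

def majority(S):
    # majority label in S (ties broken toward 1)
    return int(2 * sum(y for _, y in S) >= len(S))

def id3(S, A, gain):
    labels = {y for _, y in S}
    if len(labels) == 1:                       # all examples share the same label
        return {"leaf": labels.pop()}
    if not A:                                  # A is empty: return a majority-vote leaf
        return {"leaf": majority(S)}
    j = max(A, key=lambda i: gain(S, i))       # the feature maximizing Gain(S, i)
    S1 = [(x, y) for x, y in S if x[j] == 1]
    S0 = [(x, y) for x, y in S if x[j] == 0]
    if not S1 or not S0:                       # feature j is constant on S: stop with a leaf
        return {"leaf": majority(S)}
    return {"feature": j,                      # internal node asking "x_j = 1?"
            "1": id3(S1, A - {j}, gain),
            "0": id3(S0, A - {j}, gain)}

def predict(tree, x):
    while "leaf" not in tree:
        tree = tree["1"] if x[tree["feature"]] == 1 else tree["0"]
    return tree["leaf"]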

18.2.1 Implementations of the Gain Measure


Different algorithms use different implementations of Gain(S, i). Here we present
three. We use the notation P_S[F] to denote the probability that an event F holds
with respect to the uniform distribution over S.
Train Error: The simplest definition of gain is the decrease in training error.
Formally, let C(a) = min{a, 1 − a}. Note that the training error before splitting on
feature i is C(P_S[y = 1]), since we took a majority vote among labels. Similarly,
the error after splitting on feature i is

P_S[x_i = 1] C(P_S[y = 1 | x_i = 1]) + P_S[x_i = 0] C(P_S[y = 1 | x_i = 0]).

Therefore, we can define Gain to be the difference between the two, namely,

Gain(S, i) := C(P_S[y = 1]) − ( P_S[x_i = 1] C(P_S[y = 1 | x_i = 1]) + P_S[x_i = 0] C(P_S[y = 1 | x_i = 0]) ).

Information Gain: Another popular gain measure that is used in the ID3
and C4.5 algorithms of Quinlan (1993) is the information gain. The information
gain is the difference between the entropy of the label before and after the split,
and is achieved by replacing the function C in the previous expression by the
entropy function,

C(a) = −a log(a) − (1 − a) log(1 − a).

Gini Index: Yet another definition of a gain, which is used by the CART
algorithm of Breiman, Friedman, Olshen & Stone (1984), is the Gini index,

C(a) = 2a(1 − a).
Both the information gain and the Gini index are smooth and concave upper
bounds of the train error. These properties can be advantageous in some situa-
tions (see, for example, Kearns & Mansour (1996)).
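As a rough illustration, all three measures fit the template Gain(S, i) = C(P_S[y = 1]) − (P_S[x_i = 1] C(P_S[y = 1 | x_i = 1]) + P_S[x_i = 0] C(P_S[y = 1 | x_i = 0])) and differ only in the choice of C. The sketch below (all names illustrative) plugs each choice into this template and can be passed to the id3 sketch above.

import math

def C_train_error(a):
    return min(a, 1 - a)

def C_entropy(a):
    return 0.0 if a in (0.0, 1.0) else -a * math.log(a) - (1 - a) * math.log(1 - a)

def C_gini(a):
    return 2 * a * (1 - a)

def make_gain(C):
    def gain(S, i):
        def pos_rate(subset):              # P[y = 1] under the uniform distribution over subset
            return sum(y for _, y in subset) / len(subset) if subset else 0.0
        S1 = [(x, y) for x, y in S if x[i] == 1]
        S0 = [(x, y) for x, y in S if x[i] == 0]
        p1 = len(S1) / len(S)              # P_S[x_i = 1]
        return C(pos_rate(S)) - (p1 * C(pos_rate(S1)) + (1 - p1) * C(pos_rate(S0)))
    return gain

# e.g., grow a tree with the information gain:
# tree = id3(S, set(range(d)), make_gain(C_entropy))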

18.2.2 Pruning
The ID3 algorithm described previously still suffers from a big problem: The
returned tree will usually be very large. Such trees may have low empirical risk,
but their true risk will tend to be high – both according to our theoretical
analysis, and in practice. One solution is to limit the number of iterations of ID3,
leading to a tree with a bounded number of nodes. Another common solution is
to prune the tree after it is built, hoping to reduce it to a much smaller tree,
but still with a similar empirical error. Theoretically, according to the bound in
Equation (18.1), if we can make n much smaller without increasing LS (h) by
much, we are likely to get a decision tree with a smaller true risk.
Usually, the pruning is performed by a bottom-up walk on the tree. Each node
might be replaced with one of its subtrees or with a leaf, based on some bound
or estimate of LD (h) (for example, the bound in Equation (18.1)). A pseudocode
of a common template is given in the following.
Generic Tree Pruning Procedure
input:
function f (T, m) (bound/estimate for the generalization error
of a decision tree T , based on a sample of size m),
tree T .
foreach node j in a bottom-up walk on T (from leaves to root):
find T′ which minimizes f (T′, m), where T′ is any of the following:
the current tree after replacing node j with a leaf 1.
the current tree after replacing node j with a leaf 0.
the current tree after replacing node j with its left subtree.
the current tree after replacing node j with its right subtree.
the current tree.
let T := T′.
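A rough Python sketch of this template is given below, operating on the dict-based trees of the earlier id3 sketch; as a stand-in for the bound/estimate f, it uses the error on a held-out validation set S_val. All names are illustrative.

import copy

def tree_error(tree, S_val):
    return sum(predict(tree, x) != y for x, y in S_val) / len(S_val)

def prune(node, root, S_val):
    if "leaf" in node:
        return
    prune(node["1"], root, S_val)                  # bottom-up: prune the subtrees first
    prune(node["0"], root, S_val)
    candidates = [{"leaf": 1}, {"leaf": 0},
                  copy.deepcopy(node["1"]), copy.deepcopy(node["0"]),
                  copy.deepcopy(node)]             # the current node, kept unchanged
    best, best_err = None, float("inf")
    for cand in candidates:
        node.clear()
        node.update(cand)                          # temporarily replace node j by the candidate
        err = tree_error(root, S_val)              # evaluate f on the whole tree
        if err < best_err:
            best, best_err = cand, err
    node.clear()
    node.update(best)                              # keep the best-performing candidate

# typical use: prune(tree, tree, S_val)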

18.2.3 Threshold-Based Splitting Rules for Real-Valued Features


In the previous section we have described an algorithm for growing a decision
tree assuming that the features are binary and the splitting rules are of the
form 1[x_i = 1]. We now extend this result to the case of real-valued features and
threshold-based splitting rules, namely, 1[x_i < θ]. Such splitting rules yield decision
stumps, and we have studied them in Chapter 10.
The basic idea is to reduce the problem to the case of binary features as
follows. Let x_1, . . . , x_m be the instances of the training set. For each real-valued
feature i, sort the instances so that x_{1,i} ≤ · · · ≤ x_{m,i}. Define a set of thresholds
θ_{0,i}, . . . , θ_{m,i} such that θ_{j,i} ∈ (x_{j,i}, x_{j+1,i}) (where we use the convention x_{0,i} =
−∞ and x_{m+1,i} = ∞). Finally, for each i and j we define the binary feature
1[x_i < θ_{j,i}]. Once we have constructed these binary features, we can run the ID3
procedure described in the previous section. It is easy to verify that for any
decision tree with threshold-based splitting rules over the original real-valued
features there exists a decision tree over the constructed binary features with
the same training error and the same number of nodes.
If the original number of real-valued features is d and the number of examples
is m, then the number of constructed binary features becomes dm. Calculating
the Gain of each feature might therefore take O(dm2 ) operations. However, using
a more clever implementation, the runtime can be reduced to O(dm log(m)). The
idea is similar to the implementation of ERM for decision stumps as described
in Section 10.1.1.
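A short sketch of this reduction (all names illustrative): for each real-valued feature i, place one candidate threshold between every pair of consecutive sorted values, and turn each pair (i, θ) into the binary feature 1[x_i < θ]. The sketch below skips the two trivial thresholds below the minimum and above the maximum value.

import numpy as np

def candidate_thresholds(X):
    m, d = X.shape
    thresholds = []
    for i in range(d):
        v = np.sort(X[:, i])
        mids = (v[:-1] + v[1:]) / 2.0          # theta_{j,i} strictly between consecutive values
        thresholds.extend((i, t) for t in mids)
    return thresholds                           # at most d * (m - 1) nontrivial binary features

def binarize(X, thresholds):
    # column (i, theta) of the result is the binary feature 1[x_i < theta]
    return np.column_stack([(X[:, i] < t).astype(int) for i, t in thresholds])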

18.3 Random Forests

As mentioned before, the class of decision trees of arbitrary size has infinite VC
dimension. We therefore restricted the size of the decision tree. Another way
to reduce the danger of overfitting is by constructing an ensemble of trees. In
particular, in the following we describe the method of random forests, introduced
by Breiman (2001).
A random forest is a classifier consisting of a collection of decision trees, where
each tree is constructed by applying an algorithm A on the training set S and
an additional random vector, θ, where θ is sampled i.i.d. from some distribution.
The prediction of the random forest is obtained by a majority vote over the
predictions of the individual trees.
To specify a particular random forest, we need to define the algorithm A and
the distribution over θ. There are many ways to do this and here we describe one
particular option. We generate θ as follows. First, we take a random subsample
from S with replacement; namely, we sample a new training set S′ of size m′
using the uniform distribution over S. Second, we construct a sequence I_1, I_2, . . .,
where each I_t is a subset of [d] of size k, which is generated by sampling uniformly
at random elements from [d]. All these random variables form the vector θ. Then,

the algorithm A grows a decision tree (e.g., using the ID3 algorithm) based on
the sample S′, where at each splitting stage of the algorithm, the algorithm is
restricted to choosing a feature that maximizes Gain from the set It . Intuitively,
if k is small, this restriction may prevent overfitting.
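The sketch below illustrates this randomization, reusing the illustrative id3 and predict sketches from Section 18.2. For brevity, a single random feature subset of size k is drawn per tree rather than a fresh subset I_t per split as described above; this is a common simplification and all names are hypothetical.

import numpy as np

def train_random_forest(X, y, n_trees, k, gain, rng=np.random.default_rng(0)):
    m, d = X.shape
    forest = []
    for _ in range(n_trees):
        idx = rng.integers(0, m, size=m)                             # bootstrap sample S' of size m, drawn with replacement
        S = list(zip(map(tuple, X[idx]), y[idx]))
        feats = set(rng.choice(d, size=k, replace=False).tolist())   # random feature subset of size k
        forest.append(id3(S, feats, gain))
    return forest

def forest_predict(forest, x):
    votes = [predict(tree, x) for tree in forest]                    # majority vote over the trees
    return int(2 * sum(votes) >= len(votes))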

18.4 Summary

Decision trees are very intuitive predictors. Typically, if a human programmer


creates a predictor it will look like a decision tree. We have shown that the VC
dimension of decision trees with k leaves is k and proposed the MDL paradigm
for learning decision trees. The main problem with decision trees is that they
are computationally hard to learn; therefore we described several heuristic pro-
cedures for training them.

18.5 Bibliographic Remarks

Many algorithms for learning decision trees (such as ID3 and C4.5) have been
derived by Quinlan (1986). The CART algorithm is due to Breiman et al. (1984).
Random forests were introduced by Breiman (2001). For additional reading we
refer the reader to (Hastie, Tibshirani & Friedman 2001, Rokach 2007).
The proof of the hardness of training decision trees is given in Hyafil & Rivest
(1976).

18.6 Exercises

1. 1. Show that any binary classifier h : {0, 1}^d → {0, 1} can be implemented
as a decision tree of height at most d + 1, with internal nodes of the form
(x_i = 0?) for some i ∈ {1, . . . , d}.
2. Conclude that the VC dimension of the class of decision trees over the
domain {0, 1}^d is 2^d.
2. (Suboptimality of ID3)
Consider the following training set, where X = {0, 1}^3 and Y = {0, 1}:
((1, 1, 1), 1)
((1, 0, 0), 1)
((1, 1, 0), 0)
((0, 0, 1), 0)
Suppose we wish to use this training set in order to build a decision tree of
depth 2 (i.e., for each input we are allowed to ask two questions of the form
(xi = 0?) before deciding on the label).

1. Suppose we run the ID3 algorithm up to depth 2 (namely, we pick the root
node and its children according to the algorithm, but instead of keeping
on with the recursion, we stop and pick leaves according to the majority
label in each subtree). Assume that the subroutine used to measure the
quality of each feature is based on the entropy function (so we measure the
information gain), and that if two features get the same score, one of them
is picked arbitrarily. Show that the training error of the resulting decision
tree is at least 1/4.
2. Find a decision tree of depth 2 that attains zero training error.
19 Nearest Neighbor

Nearest Neighbor algorithms are among the simplest of all machine learning
algorithms. The idea is to memorize the training set and then to predict the
label of any new instance on the basis of the labels of its closest neighbors in
the training set. The rationale behind such a method is based on the assumption
that the features that are used to describe the domain points are relevant to
their labelings in a way that makes close-by points likely to have the same label.
Furthermore, in some situations, even when the training set is immense, finding
a nearest neighbor can be done extremely fast (for example, when the training
set is the entire Web and distances are based on links).
Note that, in contrast with the algorithmic paradigms that we have discussed
so far, like ERM, SRM, MDL, or RLM, that are determined by some hypothesis
class, H, the Nearest Neighbor method figures out a label on any test point
without searching for a predictor within some predefined class of functions.
In this chapter we describe Nearest Neighbor methods for classification and
regression problems. We analyze their performance for the simple case of binary
classification and discuss the efficiency of implementing these methods.

19.1 k Nearest Neighbors

Throughout the entire chapter we assume that our instance domain, X, is endowed
with a metric function ρ. That is, ρ : X × X → R is a function that returns
the distance between any two elements of X. For example, if X = R^d then ρ can
be the Euclidean distance, ρ(x, x′) = ‖x − x′‖ = √( Σ_{i=1}^d (x_i − x′_i)^2 ).
Let S = (x_1, y_1), . . . , (x_m, y_m) be a sequence of training examples. For each
x ∈ X, let π_1(x), . . . , π_m(x) be a reordering of {1, . . . , m} according to their
distance to x, ρ(x, x_i). That is, for all i < m,

ρ(x, x_{π_i(x)}) ≤ ρ(x, x_{π_{i+1}(x)}).

For a number k, the k-NN rule for binary classification is defined as follows:


Figure 19.1 An illustration of the decision boundaries of the 1-NN rule. The points
depicted are the sample points, and the predicted label of any new point will be the
label of the sample point in the center of the cell it belongs to. These cells are called a
Voronoi Tessellation of the space.

k-NN
input: a training sample S = (x_1, y_1), . . . , (x_m, y_m)
output: for every point x ∈ X,
  return the majority label among {y_{π_i(x)} : i ≤ k}

When k = 1, we have the 1-NN rule:

h_S(x) = y_{π_1(x)}.

A geometric illustration of the 1-NN rule is given in Figure 19.1.


For regression problems, namely, Y = R, one can define the prediction to be
the average target of the k nearest neighbors. That is, h_S(x) = (1/k) Σ_{i=1}^k y_{π_i(x)}.
More generally, for some function φ : (X × Y)^k → Y, the k-NN rule with respect
to φ is:

h_S(x) = φ( (x_{π_1(x)}, y_{π_1(x)}), . . . , (x_{π_k(x)}, y_{π_k(x)}) ).   (19.1)

It is easy to verify that we can cast the prediction by majority of labels (for
classification) or by the averaged target (for regression) as in Equation (19.1) by
an appropriate choice of φ. The generality can lead to other rules; for example, if
Y = R, we can take a weighted average of the targets according to the distance
from x:

h_S(x) = Σ_{i=1}^k ( ρ(x, x_{π_i(x)}) / Σ_{j=1}^k ρ(x, x_{π_j(x)}) ) y_{π_i(x)}.
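A minimal NumPy sketch of the k-NN rule for binary classification with the Euclidean metric ρ (the function and variable names are illustrative):

import numpy as np

def knn_predict(X_train, y_train, x, k):
    dists = np.linalg.norm(X_train - x, axis=1)       # rho(x, x_i) for every training instance
    nearest = np.argsort(dists)[:k]                   # indices pi_1(x), ..., pi_k(x)
    return int(2 * y_train[nearest].sum() >= k)       # majority label among the k nearest neighbors

X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])
print(knn_predict(X_train, y_train, np.array([0.15, 0.15]), k=3))   # prints 0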

19.2 Analysis

Since the NN rules are such natural learning methods, their generalization prop-
erties have been extensively studied. Most previous results are asymptotic con-
sistency results, analyzing the performance of NN rules when the sample size, m,

goes to infinity, and the rate of convergence depends on the underlying distribu-
tion. As we have argued in Section 7.4, this type of analysis is not satisfactory.
One would like to learn from finite training samples and to understand the gen-
eralization performance as a function of the size of such finite training sets and
clear prior assumptions on the data distribution. We therefore provide a finite-
sample analysis of the 1-NN rule, showing how the error decreases as a function
of m and how it depends on properties of the distribution. We will also explain
how the analysis can be generalized to k-NN rules for arbitrary values of k. In
particular, the analysis specifies the number of examples required to achieve a
true error of 2 L_D(h⋆) + ε, where h⋆ is the Bayes optimal hypothesis, assuming
that the labeling rule is “well behaved” (in a sense we will define later).

19.2.1 A Generalization Bound for the 1-NN Rule


We now analyze the true error of the 1-NN rule for binary classification with
the 0-1 loss, namely, Y = {0, 1} and ℓ(h, (x, y)) = 1[h(x) ≠ y]. We also assume
throughout the analysis that X = [0, 1]^d and ρ is the Euclidean distance.
We start by introducing some notation. Let D be a distribution over X × Y.
Let D_X denote the induced marginal distribution over X and let η : R^d → R be
the conditional probability¹ over the labels, that is,

η(x) = P[y = 1 | x].

Recall that the Bayes optimal rule (that is, the hypothesis that minimizes L_D(h)
over all functions) is

h⋆(x) = 1[η(x) > 1/2].

We assume that the conditional probability function η is c-Lipschitz for some
c > 0: Namely, for all x, x′ ∈ X, |η(x) − η(x′)| ≤ c ‖x − x′‖. In other words, this
assumption means that if two vectors are close to each other then their labels
are likely to be the same.
The following lemma applies the Lipschitzness of the conditional probability
function to upper bound the true error of the 1-NN rule as a function of the
expected distance between each test instance and its nearest neighbor in the
training set.

lemma 19.1 Let X = [0, 1]^d, Y = {0, 1}, and D be a distribution over X × Y
for which the conditional probability function, η, is a c-Lipschitz function. Let
S = (x_1, y_1), . . . , (x_m, y_m) be an i.i.d. sample and let h_S be its corresponding
1-NN hypothesis. Let h⋆ be the Bayes optimal rule for η. Then,

E_{S∼D^m}[L_D(h_S)] ≤ 2 L_D(h⋆) + c E_{S∼D^m, x∼D}[ ‖x − x_{π_1(x)}‖ ].

1 Formally, P[y = 1 | x] = lim_{δ→0} D({(x′, 1) : x′ ∈ B(x, δ)}) / D({(x′, y) : x′ ∈ B(x, δ), y ∈ Y}),
where B(x, δ) is a ball of radius δ centered around x.

Proof Since L_D(h_S) = E_{(x,y)∼D}[1[h_S(x) ≠ y]], we obtain that E_S[L_D(h_S)] is the
probability to sample a training set S and an additional example (x, y), such
that the label of π_1(x) is different from y. In other words, we can first sample
m unlabeled examples, S_x = (x_1, . . . , x_m), according to D_X, and an additional
unlabeled example, x ∼ D_X, then find π_1(x) to be the nearest neighbor of x in
S_x, and finally sample y ∼ η(x) and y′ ∼ η(x_{π_1(x)}). It follows that

E_S[L_D(h_S)] = E_{S_x∼D_X^m, x∼D_X, y∼η(x), y′∼η(x_{π_1(x)})}[ 1[y ≠ y′] ]
            = E_{S_x∼D_X^m, x∼D_X}[ P_{y∼η(x), y′∼η(x_{π_1(x)})}[y ≠ y′] ].   (19.2)

We next upper bound P_{y∼η(x), y′∼η(x′)}[y ≠ y′] for any two domain points x, x′:

P_{y∼η(x), y′∼η(x′)}[y ≠ y′] = η(x′)(1 − η(x)) + (1 − η(x′))η(x)
  = (η(x) − η(x) + η(x′))(1 − η(x)) + (1 − η(x) + η(x) − η(x′))η(x)
  = 2η(x)(1 − η(x)) + (η(x) − η(x′))(2η(x) − 1).

Using |2η(x) − 1| ≤ 1 and the assumption that η is c-Lipschitz, we obtain that
the probability is at most:

P_{y∼η(x), y′∼η(x′)}[y ≠ y′] ≤ 2η(x)(1 − η(x)) + c ‖x − x′‖.

Plugging this into Equation (19.2) we conclude that

E_S[L_D(h_S)] ≤ E_x[2η(x)(1 − η(x))] + c E_{S,x}[ ‖x − x_{π_1(x)}‖ ].

Finally, the error of the Bayes optimal classifier is

L_D(h⋆) = E_x[min{η(x), 1 − η(x)}] ≥ E_x[η(x)(1 − η(x))].

Combining the preceding two inequalities concludes our proof.

The next step is to bound the expected distance between a random x and its
closest element in S. We first need the following general probability lemma. The
lemma bounds the probability weight of subsets that are not hit by a random
sample, as a function of the size of that sample.

lemma 19.2 Let C_1, . . . , C_r be a collection of subsets of some domain set, X.
Let S be a sequence of m points sampled i.i.d. according to some probability
distribution, D over X. Then,

E_{S∼D^m}[ Σ_{i : C_i ∩ S = ∅} P[C_i] ] ≤ r/(me).

Proof From the linearity of expectation, we can rewrite:

E_S[ Σ_{i : C_i ∩ S = ∅} P[C_i] ] = Σ_{i=1}^r P[C_i] E_S[ 1[C_i ∩ S = ∅] ].

Next, for each i we have

E_S[ 1[C_i ∩ S = ∅] ] = P_S[C_i ∩ S = ∅] = (1 − P[C_i])^m ≤ e^{−P[C_i] m}.

Combining the preceding two equations we get

E_S[ Σ_{i : C_i ∩ S = ∅} P[C_i] ] ≤ Σ_{i=1}^r P[C_i] e^{−P[C_i] m} ≤ r max_i P[C_i] e^{−P[C_i] m}.

Finally, by standard calculus, max_a a e^{−ma} ≤ 1/(me), and this concludes the proof.

Equipped with the preceding lemmas we are now ready to state and prove the
main result of this section – an upper bound on the expected error of the 1-NN
learning rule.
theorem 19.3 Let X = [0, 1]^d, Y = {0, 1}, and D be a distribution over X × Y
for which the conditional probability function, η, is a c-Lipschitz function. Let
h_S denote the result of applying the 1-NN rule to a sample S ∼ D^m. Then,

E_{S∼D^m}[L_D(h_S)] ≤ 2 L_D(h⋆) + 4 c √d m^{−1/(d+1)}.

Proof Fix some ε = 1/T, for some integer T, let r = T^d and let C_1, . . . , C_r be the
cover of the set X using boxes of length ε: Namely, for every (α_1, . . . , α_d) ∈ [T]^d,
there exists a set C_i of the form {x : ∀j, x_j ∈ [(α_j − 1)/T, α_j/T]}. An illustration
for d = 2, T = 5 and the set corresponding to α = (2, 4) is given in the following.

For each x, x′ in the same box we have ‖x − x′‖ ≤ √d ε. Otherwise, ‖x − x′‖ ≤ √d.
Therefore,

E_{x,S}[ ‖x − x_{π_1(x)}‖ ] ≤ E_S[ P[ ∪_{i : C_i ∩ S = ∅} C_i ] √d + P[ ∪_{i : C_i ∩ S ≠ ∅} C_i ] ε √d ],

and by combining Lemma 19.2 with the trivial bound P[ ∪_{i : C_i ∩ S ≠ ∅} C_i ] ≤ 1 we
get that

E_{x,S}[ ‖x − x_{π_1(x)}‖ ] ≤ √d ( r/(me) + ε ).

Since the number of boxes is r = (1/ε)^d we get that

E_{S,x}[ ‖x − x_{π_1(x)}‖ ] ≤ √d ( 2^d ε^{−d}/(me) + ε ).

Combining the preceding with Lemma 19.1 we obtain that

E_S[L_D(h_S)] ≤ 2 L_D(h⋆) + c √d ( 2^d ε^{−d}/(me) + ε ).

Finally, setting ε = 2 m^{−1/(d+1)} and noting that

2^d ε^{−d}/(me) + ε = 2^d m^{d/(d+1)}/(2^d m e) + 2 m^{−1/(d+1)} = m^{−1/(d+1)}(1/e + 2) ≤ 4 m^{−1/(d+1)}
we conclude our proof.
The theorem implies that if we first fix the data-generating distribution and
then let m go to infinity, then the error of the 1-NN rule converges to twice the
Bayes error. The analysis can be generalized to larger pvalues of k, showing that
the expected error of the k-NN rule converges to (1 + 8/k) times the error of
the Bayes classifier. This is formalized in Theorem 19.5, whose proof is left as a
guided exercise.

19.2.2 The “Curse of Dimensionality”


The upper bound given in Theorem 19.3 grows with c (the Lipschitz coefficient
of η) and with d, the Euclidean dimension of the domain set X. In fact, it is easy
to see that a necessary condition for the last term in Theorem 19.3 to be smaller
than ε is that m ≥ (4 c √d/ε)^{d+1}. That is, the size of the training set should
increase exponentially with the dimension. The following theorem tells us that
this is not just an artifact of our upper bound, but, for some distributions, this
amount of examples is indeed necessary for learning with the NN rule.
theorem 19.4 For any c > 1, and every learning rule, L, there exists a
distribution over [0, 1]^d × {0, 1}, such that η(x) is c-Lipschitz, the Bayes error of
the distribution is 0, but for sample sizes m ≤ (c + 1)^d/2, the true error of the
rule L is greater than 1/4.
Proof Fix any values of c and d. Let G_c^d be the grid on [0, 1]^d with distance of
1/c between points on the grid. That is, each point on the grid is of the form
(a_1/c, . . . , a_d/c) where a_i is in {0, . . . , c − 1, c}. Note that, since any two distinct
points on this grid are at least 1/c apart, any function η : G_c^d → [0, 1] is a
c-Lipschitz function. It follows that the set of all c-Lipschitz functions over G_c^d
contains the set of all binary valued functions over that domain. We can therefore
invoke the No-Free-Lunch result (Theorem 5.1) to obtain a lower bound on the
needed sample sizes for learning that class. The number of points on the grid is
(c + 1)^d; hence, if m < (c + 1)^d/2, Theorem 5.1 implies the lower bound we are
after.

The exponential dependence on the dimension is known as the curse of di-


mensionality. As we saw, the 1-NN rule might fail if the number of examples is
smaller than Ω((c+1)^d). Therefore, while the 1-NN rule does not restrict itself to
a predefined set of hypotheses, it still relies on some prior knowledge – its success
depends on the assumption that the dimension and the Lipschitz constant of the
underlying distribution, ⌘, are not too high.

19.3 Efficient Implementation*

Nearest Neighbor is a learning-by-memorization type of rule. It requires the


entire training data set to be stored, and at test time, we need to scan the entire
data set in order to find the neighbors. The time of applying the NN rule is
therefore Θ(d m). This leads to expensive computation at test time.
When d is small, several results from the field of computational geometry have
proposed data structures that enable applying the NN rule in time d^{O(1)} log(m).
However, the space required by these data structures is roughly m^{O(d)}, which
makes these methods impractical for larger values of d.
To overcome this problem, it was suggested to improve the search method by
allowing an approximate search. Formally, an r-approximate search procedure is
guaranteed to retrieve a point within distance of at most r times the distance
to the nearest neighbor. Three popular approximate algorithms for NN are the
kd-tree, balltrees, and locality-sensitive hashing (LSH). We refer the reader, for
example, to (Shakhnarovich, Darrell & Indyk 2006).
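For instance, scikit-learn ships kd-tree and ball-tree data structures; a brief, illustrative usage sketch for exact nearest-neighbor queries in low dimension:

import numpy as np
from sklearn.neighbors import KDTree

X_train = np.random.default_rng(0).random((10000, 3))      # m = 10000 points in d = 3 dimensions
tree = KDTree(X_train)                                      # built once from the training instances
dist, ind = tree.query(np.array([[0.5, 0.5, 0.5]]), k=5)    # the 5 nearest neighbors of the query point
print(ind[0], dist[0])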

19.4 Summary

The k-NN rule is a very simple learning algorithm that relies on the assumption
that “things that look alike must be alike.” We formalized this intuition using
the Lipschitzness of the conditional probability. We have shown that with a suf-
ficiently large training set, the risk of the 1-NN is upper bounded by twice the
risk of the Bayes optimal rule. We have also derived a lower bound that shows
the “curse of dimensionality” – the required sample size might increase expo-
nentially with the dimension. As a result, NN is usually performed in practice
after a dimensionality reduction preprocessing step. We discuss dimensionality
reduction techniques later on in Chapter 23.

19.5 Bibliographic Remarks

Cover & Hart (1967) gave the first analysis of 1-NN, showing that its risk con-
verges to twice the Bayes optimal error under mild conditions. Following a lemma
due to Stone (1977), Devroye & Györfi (1985) have shown that the k-NN rule

is consistent (with respect to the hypothesis class of all functions from Rd to


{0, 1}). A good presentation of the analysis is given in the book of Devroye et al.
(1996). Here, we give a finite sample guarantee that explicitly underscores the
prior assumption on the distribution. See Section 7.4 for a discussion on con-
sistency results. Finally, Gottlieb, Kontorovich & Krauthgamer (2010) derived
another finite sample bound for NN that is more similar to VC bounds.

19.6 Exercises

In this exercise we will prove the following theorem for the k-NN rule.
theorem 19.5 Let X = [0, 1]^d, Y = {0, 1}, and D be a distribution over X × Y
for which the conditional probability function, η, is a c-Lipschitz function. Let h_S
denote the result of applying the k-NN rule to a sample S ∼ D^m, where k ≥ 10.
Let h⋆ be the Bayes optimal hypothesis. Then,

E_S[L_D(h_S)] ≤ (1 + √(8/k)) L_D(h⋆) + (6 c √d + k) m^{−1/(d+1)}.
1. Prove the following lemma.
lemma 19.6 Let C_1, . . . , C_r be a collection of subsets of some domain set,
X. Let S be a sequence of m points sampled i.i.d. according to some probability
distribution, D over X. Then, for every k ≥ 2,

E_{S∼D^m}[ Σ_{i : |C_i ∩ S| < k} P[C_i] ] ≤ 2rk/m.

Hints:
• Show that

E_S[ Σ_{i : |C_i ∩ S| < k} P[C_i] ] = Σ_{i=1}^r P[C_i] P_S[ |C_i ∩ S| < k ].

• Fix some i and suppose that k < P[C_i] m/2. Use Chernoff’s bound to show
that

P_S[ |C_i ∩ S| < k ] ≤ P_S[ |C_i ∩ S| < P[C_i] m/2 ] ≤ e^{−P[C_i] m/8}.

• Use the inequality max_a a e^{−ma} ≤ 1/(me) to show that for such i we have

P[C_i] P_S[ |C_i ∩ S| < k ] ≤ P[C_i] e^{−P[C_i] m/8} ≤ 8/(me).

• Conclude the proof by using the fact that for the case k ≥ P[C_i] m/2 we
clearly have:

P[C_i] P_S[ |C_i ∩ S| < k ] ≤ P[C_i] ≤ 2k/m.

2. We use the notation y ∼ p as a shorthand for “y is a Bernoulli random variable
with expected value p.” Prove the following lemma:
lemma 19.7 Let k ≥ 10 and let Z_1, . . . , Z_k be independent Bernoulli random
variables with P[Z_i = 1] = p_i. Denote p = (1/k) Σ_i p_i and p′ = (1/k) Σ_{i=1}^k Z_i. Show
that

E_{Z_1,...,Z_k} P_{y∼p}[ y ≠ 1[p′ > 1/2] ] ≤ (1 + √(8/k)) P_{y∼p}[ y ≠ 1[p > 1/2] ].

Hints:
W.l.o.g. assume that p ≤ 1/2. Then, P_{y∼p}[y ≠ 1[p > 1/2]] = p. Let y′ = 1[p′ > 1/2].
• Show that

E_{Z_1,...,Z_k} P_{y∼p}[y ≠ y′] − p = P_{Z_1,...,Z_k}[p′ > 1/2] (1 − 2p).

• Use Chernoff’s bound (Lemma B.3) to show that

P[p′ > 1/2] ≤ e^{−k p h(1/(2p) − 1)},

where

h(a) = (1 + a) log(1 + a) − a.

• To conclude the proof of the lemma, you can rely on the following inequality
(without proving it): For every p ∈ [0, 1/2] and k ≥ 10:

(1 − 2p) e^{−kp + (k/2)(log(2p) + 1)} ≤ √(8/k) p.
3. Fix some p, p′ ∈ [0, 1] and y′ ∈ {0, 1}. Show that

P_{y∼p}[y ≠ y′] ≤ P_{y∼p′}[y ≠ y′] + |p − p′|.

4. Conclude the proof of the theorem according to the following steps:
• As in the proof of Theorem 19.3, fix some ε > 0 and let C_1, . . . , C_r be the
cover of the set X using boxes of length ε. For each x, x′ in the same
box we have ‖x − x′‖ ≤ √d ε. Otherwise, ‖x − x′‖ ≤ 2√d. Show that

E_S[L_D(h_S)] ≤ E_S[ Σ_{i : |C_i ∩ S| < k} P[C_i] ]
  + max_i P_{S,(x,y)}[ h_S(x) ≠ y | ∀j ∈ [k], ‖x − x_{π_j(x)}‖ ≤ ε √d ].   (19.3)

• Bound the first summand using Lemma 19.6.
• To bound the second summand, let us fix S|x and x such that all the k
neighbors of x in S|x are at distance of at most ε √d from x. W.l.o.g.
assume that the k NN are x_1, . . . , x_k. Denote p_i = η(x_i) and let p =
(1/k) Σ_i p_i. Use Exercise 3 to show that

E_{y_1,...,y_k} P_{y∼η(x)}[h_S(x) ≠ y] ≤ E_{y_1,...,y_k} P_{y∼p}[h_S(x) ≠ y] + |p − η(x)|.

W.l.o.g. assume that p ≤ 1/2. Now use Lemma 19.7 to show that

E_{y_1,...,y_k} P_{y∼p}[h_S(x) ≠ y] ≤ (1 + √(8/k)) P_{y∼p}[1[p > 1/2] ≠ y].

• Show that

P_{y∼p}[1[p > 1/2] ≠ y] = p = min{p, 1 − p} ≤ min{η(x), 1 − η(x)} + |p − η(x)|.

• Combine all the preceding to obtain that the second summand in Equation (19.3) is bounded by

(1 + √(8/k)) L_D(h⋆) + 3 c ε √d.

• Use r = (2/ε)^d to obtain that:

E_S[L_D(h_S)] ≤ (1 + √(8/k)) L_D(h⋆) + 3 c ε √d + 2(2/ε)^d k / m.

Set ε = 2 m^{−1/(d+1)} and use

6 c √d m^{−1/(d+1)} + (2k/e) m^{−1/(d+1)} ≤ (6 c √d + k) m^{−1/(d+1)}

to conclude the proof.
20 Neural Networks

An artificial neural network is a model of computation inspired by the structure


of neural networks in the brain. In simplified models of the brain, it consists of
a large number of basic computing devices (neurons) that are connected to each
other in a complex communication network, through which the brain is able to
carry out highly complex computations. Artificial neural networks are formal
computation constructs that are modeled after this computation paradigm.
Learning with neural networks was proposed in the mid-20th century. It yields
an effective learning paradigm and has recently been shown to achieve cutting-
edge performance on several learning tasks.
A neural network can be described as a directed graph whose nodes correspond
to neurons and edges correspond to links between them. Each neuron receives
as input a weighted sum of the outputs of the neurons connected to its incoming
edges. We focus on feedforward networks in which the underlying graph does not
contain cycles.
In the context of learning, we can define a hypothesis class consisting of neural
network predictors, where all the hypotheses share the underlying graph struc-
ture of the network and differ in the weights over edges. As we will show in
Section 20.3, every predictor over n variables that can be implemented in time
T (n) can also be expressed as a neural network predictor of size O(T (n)^2), where
the size of the network is the number of nodes in it. It follows that the family
of hypothesis classes of neural networks of polynomial size can suffice for all
practical learning tasks, in which our goal is to learn predictors which can be
implemented efficiently. Furthermore, in Section 20.4 we will show that the sam-
ple complexity of learning such hypothesis classes is also bounded in terms of the
size of the network. Hence, it seems that this is the ultimate learning paradigm
we would want to adapt, in the sense that it both has a polynomial sample com-
plexity and has the minimal approximation error among all hypothesis classes
consisting of efficiently implementable predictors.
The caveat is that the problem of training such hypothesis classes of neural net-
work predictors is computationally hard. This will be formalized in Section 20.5.
A widely used heuristic for training neural networks relies on the SGD frame-
work we studied in Chapter 14. There, we have shown that SGD is a successful
learner if the loss function is convex. In neural networks, the loss function is
highly nonconvex. Nevertheless, we can still implement the SGD algorithm and


hope it will find a reasonable solution (as happens to be the case in several
practical tasks). In Section 20.6 we describe how to implement SGD for neural
networks. In particular, the most complicated operation is the calculation of the
gradient of the loss function with respect to the parameters of the network. We
present the backpropagation algorithm that efficiently calculates the gradient.

20.1 Feedforward Neural Networks

The idea behind neural networks is that many neurons can be joined together
by communication links to carry out complex computations. It is common to
describe the structure of a neural network as a graph whose nodes are the neurons
and each (directed) edge in the graph links the output of some neuron to the
input of another neuron. We will restrict our attention to feedforward network
structures in which the underlying graph does not contain cycles.
A feedforward neural network is described by a directed acyclic graph, G =
(V, E), and a weight function over the edges, w : E → R. Nodes of the graph
correspond to neurons. Each single neuron is modeled as a simple scalar function,
σ : R → R. We will focus on three possible functions for σ: the sign
function, σ(a) = sign(a), the threshold function, σ(a) = 1[a > 0], and the sigmoid
function, σ(a) = 1/(1 + exp(−a)), which is a smooth approximation to the
threshold function. We call σ the “activation” function of the neuron. Each edge
in the graph links the output of some neuron to the input of another neuron.
The input of a neuron is obtained by taking a weighted sum of the outputs of
all the neurons connected to it, where the weighting is according to w.
To simplify the description of the calculation performed by the network, we
further assume that the network is organized in layers. That is, the set of nodes
can be decomposed into a union of (nonempty) disjoint subsets, V = ∪_{t=0}^T V_t,
such that every edge in E connects some node in V_{t−1} to some node in V_t, for
some t ∈ [T]. The bottom layer, V_0, is called the input layer. It contains n + 1
neurons, where n is the dimensionality of the input space. For every i ∈ [n], the
output of neuron i in V_0 is simply x_i. The last neuron in V_0 is the “constant”
neuron, which always outputs 1. We denote by v_{t,i} the ith neuron of the tth layer
and by o_{t,i}(x) the output of v_{t,i} when the network is fed with the input vector x.
Therefore, for i ∈ [n] we have o_{0,i}(x) = x_i and for i = n + 1 we have o_{0,i}(x) = 1.
We now proceed with the calculation in a layer by layer manner. Suppose we
have calculated the outputs of the neurons at layer t. Then, we can calculate
the outputs of the neurons at layer t + 1 as follows. Fix some v_{t+1,j} ∈ V_{t+1}.
Let a_{t+1,j}(x) denote the input to v_{t+1,j} when the network is fed with the input
vector x. Then,

a_{t+1,j}(x) = Σ_{r : (v_{t,r}, v_{t+1,j}) ∈ E} w((v_{t,r}, v_{t+1,j})) o_{t,r}(x),

and
o_{t+1,j}(x) = σ(a_{t+1,j}(x)).

That is, the input to vt+1,j is a weighted sum of the outputs of the neurons in Vt
that are connected to vt+1,j , where weighting is according to w, and the output
of vt+1,j is simply the application of the activation function on its input.
Layers V_1, . . . , V_{T−1} are often called hidden layers. The top layer, V_T, is called
the output layer. In simple prediction problems the output layer contains a single
neuron whose output is the output of the network.
We refer to T as the number of layers in the network (excluding V0 ), or the
“depth” of the network. The size of the network is |V |. The “width” of the
network is maxt |Vt |. An illustration of a layered feedforward neural network of
depth 2, size 10, and width 5, is given in the following. Note that there is a
neuron in the hidden layer that has no incoming edges. This neuron will output
the constant σ(0).

[Figure: a layered feedforward network of depth 2. Input layer V_0: neurons v_{0,1}, v_{0,2}, v_{0,3} (receiving x_1, x_2, x_3) and the constant neuron v_{0,4}. Hidden layer V_1: neurons v_{1,1}, . . . , v_{1,5}. Output layer V_2: the single output neuron v_{2,1}.]
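As a small illustration, the layer-by-layer computation for a fully connected architecture of this shape (n = 3 inputs plus the constant neuron, 5 hidden neurons, 1 output neuron) can be sketched as follows; the random weight matrices are hypothetical placeholders for w.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, W2, activation=sigmoid):
    o0 = np.append(x, 1.0)          # outputs of V0: x_1, ..., x_n and the constant neuron
    o1 = activation(W1 @ o0)        # outputs of the hidden layer V1
    o2 = activation(W2 @ o1)        # output of the network (layer V2)
    return o2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 4))        # weights on edges from V0 (4 neurons) to V1 (5 neurons)
W2 = rng.normal(size=(1, 5))        # weights on edges from V1 to V2
print(forward(np.array([0.2, -1.0, 0.5]), W1, W2))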

20.2 Learning Neural Networks

Once we have specified a neural network by (V, E, σ, w), we obtain a function
h_{V,E,σ,w} : R^{|V_0|−1} → R^{|V_T|}. Any set of such functions can serve as a hypothesis
class for learning. Usually, we define a hypothesis class of neural network predictors
by fixing the graph (V, E) as well as the activation function σ and letting
the hypothesis class be all functions of the form h_{V,E,σ,w} for some w : E → R.
The triplet (V, E, σ) is often called the architecture of the network. We denote
the hypothesis class by

H_{V,E,σ} = {h_{V,E,σ,w} : w is a mapping from E to R}.   (20.1)



That is, the parameters specifying a hypothesis in the hypothesis class are the
weights over the edges of the network.
We can now study the approximation error, estimation error, and optimization
error of such hypothesis classes. In Section 20.3 we study the approximation
error of H_{V,E,σ} by studying what type of functions hypotheses in H_{V,E,σ} can
implement, in terms of the size of the underlying graph. In Section 20.4 we
study the estimation error of H_{V,E,σ}, for the case of binary classification (i.e.,
|V_T| = 1 and σ is the sign function), by analyzing its VC dimension. Finally, in
Section 20.5 we show that it is computationally hard to learn the class H_{V,E,σ},
even if the underlying graph is small, and in Section 20.6 we present the most
commonly used heuristic for training H_{V,E,σ}.

20.3 The Expressive Power of Neural Networks

In this section we study the expressive power of neural networks, namely, what
type of functions can be implemented using a neural network. More concretely,
we will fix some architecture, V, E, σ, and will study what functions hypotheses
in H_{V,E,σ} can implement, as a function of the size of V.
We start the discussion with studying which type of Boolean functions (i.e.,
functions from {±1}^n to {±1}) can be implemented by H_{V,E,sign}. Observe that
for every computer in which real numbers are stored using b bits, whenever we
calculate a function f : R^n → R on such a computer we in fact calculate a
function g : {±1}^{nb} → {±1}^b. Therefore, studying which Boolean functions can
be implemented by HV,E,sign can tell us which functions can be implemented on
a computer that stores real numbers using b bits.
We begin with a simple claim, showing that without restricting the size of the
network, every Boolean function can be implemented using a neural network of
depth 2.

claim 20.1 For every n, there exists a graph (V, E) of depth 2, such that
H_{V,E,sign} contains all functions from {±1}^n to {±1}.

Proof We construct a graph with |V_0| = n + 1, |V_1| = 2^n + 1, and |V_2| = 1. Let
E be all possible edges between adjacent layers. Now, let f : {±1}^n → {±1}
be some Boolean function. We need to show that we can adjust the weights so
that the network will implement f. Let u_1, . . . , u_k be all vectors in {±1}^n on
which f outputs 1. Observe that for every i and every x ∈ {±1}^n, if x ≠ u_i
then ⟨x, u_i⟩ ≤ n − 2 and if x = u_i then ⟨x, u_i⟩ = n. It follows that the function
g_i(x) = sign(⟨x, u_i⟩ − n + 1) equals 1 if and only if x = u_i. It follows that we can
adapt the weights between V_0 and V_1 so that for every i ∈ [k], the neuron v_{1,i}
implements the function g_i(x). Next, we observe that f(x) is the disjunction of

the functions g_i(x), and therefore can be written as

f(x) = sign( Σ_{i=1}^k g_i(x) + k − 1 ),
which concludes our proof.
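The construction can be checked numerically; the sketch below builds the two layers for an arbitrary, illustrative Boolean function on n = 3 bits and verifies that the resulting network agrees with it on every input.

import itertools
import numpy as np

n = 3
def f(x):                                        # an arbitrary example target in {+-1}^3 -> {+-1}
    return 1 if sum(x) >= 1 else -1

U = np.array([u for u in itertools.product([-1, 1], repeat=n) if f(u) == 1])
k = len(U)

def network(x):
    g = np.sign(U @ np.asarray(x) - n + 1)       # hidden layer: g_i(x) = sign(<x, u_i> - n + 1)
    return int(np.sign(g.sum() + k - 1))         # output layer: sign(sum_i g_i(x) + k - 1)

assert all(network(x) == f(x) for x in itertools.product([-1, 1], repeat=n))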

The preceding claim shows that neural networks can implement any Boolean
function. However, this is a very weak property, as the size of the resulting
network might be exponentially large. In the construction given at the proof of
Claim 20.1, the number of nodes in the hidden layer is exponentially large. This
is not an artifact of our proof, as stated in the following theorem.

theorem 20.2 For every n, let s(n) be the minimal integer such that there
exists a graph (V, E) with |V| = s(n) such that the hypothesis class H_{V,E,sign}
contains all the functions from {0, 1}^n to {0, 1}. Then, s(n) is exponential in n.
Similar results hold for H_{V,E,σ} where σ is the sigmoid function.

Proof Suppose that for some (V, E) we have that H_{V,E,sign} contains all functions
from {0, 1}^n to {0, 1}. It follows that it can shatter the set of m = 2^n vectors in
{0, 1}^n and hence the VC dimension of H_{V,E,sign} is 2^n. On the other hand, the
VC dimension of H_{V,E,sign} is bounded by O(|E| log(|E|)) ≤ O(|V|^3), as we will
show in the next section. This implies that |V| ≥ Ω(2^{n/3}), which concludes our
proof for the case of networks with the sign activation function. The proof for
the sigmoid case is analogous.

Remark 20.1 It is possible to derive a similar theorem for H_{V,E,σ} for any σ, as
long as we restrict the weights so that it is possible to express every weight using
a number of bits which is bounded by a universal constant. We can even consider
hypothesis classes where different neurons can employ different activation
functions, as long as the number of allowed activation functions is also finite.
Which functions can we express using a network of polynomial size? The pre-
ceding claim tells us that it is impossible to express all Boolean functions using
a network of polynomial size. On the positive side, in the following we show
that all Boolean functions that can be calculated in time O(T (n)) can also be
expressed by a network of size O(T (n)^2).

theorem 20.3 Let T : N → N and for every n, let F_n be the set of functions
that can be implemented using a Turing machine using runtime of at most T(n).
Then, there exist constants b, c ∈ R_+ such that for every n, there is a graph
(V_n, E_n) of size at most c T(n)^2 + b such that H_{V_n,E_n,sign} contains F_n.

The proof of this theorem relies on the relation between the time complexity
of programs and their circuit complexity (see, for example, Sipser (2006)). In a
nutshell, a Boolean circuit is a type of network in which the individual neurons

implement conjunctions, disjunctions, and negation of their inputs. Circuit com-


plexity measures the size of Boolean circuits required to calculate functions. The
relation between time complexity and circuit complexity can be seen intuitively
as follows. We can model each step of the execution of a computer program as a
simple operation on its memory state. Therefore, the neurons at each layer of the
network will reflect the memory state of the computer at the corresponding time,
and the translation to the next layer of the network involves a simple calculation
that can be carried out by the network. To relate Boolean circuits to networks
with the sign activation function, we need to show that we can implement the
operations of conjunction, disjunction, and negation, using the sign activation
function. Clearly, we can implement the negation operator using the sign activa-
tion function. The following lemma shows that the sign activation function can
also implement conjunctions and disjunctions of its inputs.

lemma 20.4 Suppose that a neuron v, that implements the sign activation
function, has k incoming edges, connecting it to neurons whose outputs are in
{±1}. Then, by adding one more edge, linking a “constant” neuron to v, and
by adjusting the weights on the edges to v, the output of v can implement the
conjunction or the disjunction of its inputs.

Proof Simply observe that if f : {±1}^k → {±1} is the conjunction function,
f(x) = ∧_i x_i, then it can be written as f(x) = sign( 1 − k + Σ_{i=1}^k x_i ).
Similarly, the disjunction function, f(x) = ∨_i x_i, can be written as f(x) =
sign( k − 1 + Σ_{i=1}^k x_i ).

So far we have discussed Boolean functions. In Exercise 1 we show that neural


networks are universal approximators. That is, for every fixed precision parameter,
ε > 0, and every Lipschitz function f : [−1, 1]^n → [−1, 1], it is possible to
construct a network such that for every input x ∈ [−1, 1]^n, the network outputs
a number between f(x) − ε and f(x) + ε. However, as in the case of Boolean
functions, the size of the network here again cannot be polynomial in n. This is
formalized in the following theorem, whose proof is a direct corollary of Theo-
rem 20.2 and is left as an exercise.

theorem 20.5 Fix some ε ∈ (0, 1). For every n, let s(n) be the minimal integer
such that there exists a graph (V, E) with |V| = s(n) such that the hypothesis class
H_{V,E,σ}, with σ being the sigmoid function, can approximate, to within precision
of ε, every 1-Lipschitz function f : [−1, 1]^n → [−1, 1]. Then s(n) is exponential
in n.

20.3.1 Geometric Intuition


We next provide several geometric illustrations of functions f : R^2 → {±1}
and show how to express them using a neural network with the sign activation
function.

Let us start with a depth 2 network, namely, a network with a single hidden
layer. Each neuron in the hidden layer implements a halfspace predictor. Then,
the single neuron at the output layer applies a halfspace on top of the binary
outputs of the neurons in the hidden layer. As we have shown before, a halfspace
can implement the conjunction function. Therefore, such networks contain all
hypotheses which are an intersection of k 1 halfspaces, where k is the number
of neurons in the hidden layer; namely, they can express all convex polytopes
with k 1 faces. An example of an intersection of 5 halfspaces is given in the
following.

We have shown that a neuron in layer V2 can implement a function that


indicates whether x is in some convex polytope. By adding one more layer, and
letting the neuron in the output layer implement the disjunction of its inputs,
we get a network that computes the union of polytopes. An illustration of such
a function is given in the following.

20.4 The Sample Complexity of Neural Networks

Next we discuss the sample complexity of learning the class H_{V,E,σ}. Recall that
the fundamental theorem of learning tells us that the sample complexity of learning
a hypothesis class of binary classifiers depends on its VC dimension. Therefore,
we focus on calculating the VC dimension of hypothesis classes of the form
H_{V,E,σ}, where the output layer of the graph contains a single neuron.
We start with the sign activation function, namely, with HV,E,sign . What is
the VC dimension of this class? Intuitively, since we learn |E| parameters, the
VC dimension should be order of |E|. This is indeed the case, as formalized by
the following theorem.

theorem 20.6 The VC dimension of HV,E,sign is O(|E| log(|E|)).



Proof To simplify the notation throughout the proof, let us denote the hypothesis
class by H. Recall the definition of the growth function, τ_H(m), from
Section 6.5.1. This function measures max_{C⊂X : |C|=m} |H_C|, where H_C is the restriction
of H to functions from C to {0, 1}. We can naturally extend the definition
for a set of functions from X to some finite set Y, by letting H_C be the
restriction of H to functions from C to Y, and keeping the definition of τ_H(m)
intact.
Our neural network is defined by a layered graph. Let V_0, . . . , V_T be the layers
of the graph. Fix some t ∈ [T]. By assigning different weights on the edges
between V_{t−1} and V_t, we obtain different functions from R^{|V_{t−1}|} to {±1}^{|V_t|}. Let
H^{(t)} be the class of all possible such mappings from R^{|V_{t−1}|} to {±1}^{|V_t|}. Then,
H can be written as a composition, H = H^{(T)} ∘ · · · ∘ H^{(1)}. In Exercise 4 we show
that the growth function of a composition of hypothesis classes is bounded by
the products of the growth functions of the individual classes. Therefore,

τ_H(m) ≤ ∏_{t=1}^T τ_{H^{(t)}}(m).

In addition, each H^{(t)} can be written as a product of function classes, H^{(t)} =
H^{(t,1)} × · · · × H^{(t,|V_t|)}, where each H^{(t,j)} is all functions from layer t − 1 to {±1}
that the jth neuron of layer t can implement. In Exercise 3 we bound product
classes, and this yields

τ_{H^{(t)}}(m) ≤ ∏_{i=1}^{|V_t|} τ_{H^{(t,i)}}(m).

Let d_{t,i} be the number of edges that are headed to the ith neuron of layer t.
Since the neuron is a homogenous halfspace hypothesis and the VC dimension
of homogenous halfspaces is the dimension of their input, we have by Sauer’s
lemma that

τ_{H^{(t,i)}}(m) ≤ (em/d_{t,i})^{d_{t,i}} ≤ (em)^{d_{t,i}}.

Overall, we obtained that

τ_H(m) ≤ (em)^{Σ_{t,i} d_{t,i}} = (em)^{|E|}.

Now, assume that there are m shattered points. Then, we must have τ_H(m) =
2^m, from which we obtain

2^m ≤ (em)^{|E|}  ⟹  m ≤ |E| log(em)/log(2).
The claim follows by Lemma A.2.
Next, we consider H_{V,E,σ}, where σ is the sigmoid function. Surprisingly, it
turns out that the VC dimension of H_{V,E,σ} is lower bounded by Ω(|E|^2) (see
Exercise 5). That is, the VC dimension is the number of tunable parameters
squared. It is also possible to upper bound the VC dimension by O(|V|^2 |E|^2),
but the proof is beyond the scope of this book. In any case, since in practice

we only consider networks in which the weights have a short representation as


floating point numbers with O(1) bits, by using the discretization trick we easily
obtain that such networks have a VC dimension of O(|E|), even if we use the
sigmoid activation function.

20.5 The Runtime of Learning Neural Networks

In the previous sections we have shown that the class of neural networks with an
underlying graph of polynomial size can express all functions that can be imple-
mented efficiently, and that the sample complexity has a favorable dependence
on the size of the network. In this section we turn to the analysis of the time
complexity of training neural networks.
We first show that it is NP hard to implement the ERM rule with respect to
HV,E,sign even for networks with a single hidden layer that contain just 4 neurons
in the hidden layer.
theorem 20.7 Let k ≥ 3. For every n, let (V, E) be a layered graph with n
input nodes, k + 1 nodes at the (single) hidden layer, where one of them is the
constant neuron, and a single output node. Then, it is NP hard to implement the
ERM rule with respect to HV,E,sign .
The proof relies on a reduction from the k-coloring problem and is left as
Exercise 6.
One way around the preceding hardness result could be that for the purpose
of learning, it may suffice to find a predictor h 2 H with low empirical error,
not necessarily an exact ERM. However, it turns out that even the task of find-
ing weights that result in close-to-minimal empirical error is computationally
infeasible (see (Bartlett & Ben-David 2002)).
One may also wonder whether it may be possible to change the architecture
of the network so as to circumvent the hardness result. That is, maybe ERM
with respect to the original network structure is computationally hard but ERM
with respect to some other, larger, network may be implemented efficiently (see
Chapter 8 for examples of such cases). Another possibility is to use other acti-
vation functions (such as sigmoids, or any other type of efficiently computable
activation functions). There is a strong indication that all of such approaches
are doomed to fail. Indeed, under some cryptographic assumption, the problem
of learning intersections of halfspaces is known to be hard even in the repre-
sentation independent model of learning (see Klivans & Sherstov (2006)). This
implies that, under the same cryptographic assumption, any hypothesis class
which contains intersections of halfspaces cannot be learned efficiently.
A widely used heuristic for training neural networks relies on the SGD frame-
work we studied in Chapter 14. There, we have shown that SGD is a successful
learner if the loss function is convex. In neural networks, the loss function is
highly nonconvex. Nevertheless, we can still implement the SGD algorithm and

hope it will find a reasonable solution (as happens to be the case in several
practical tasks).

20.6 SGD and Backpropagation

The problem of finding a hypothesis in H_{V,E,σ} with a low risk amounts to the
problem of tuning the weights over the edges. In this section we show how to
apply a heuristic search for good weights using the SGD algorithm. Throughout
this section we assume that σ is the sigmoid function, σ(a) = 1/(1 + e^{−a}), but
the derivation holds for any differentiable scalar function.
Since E is a finite set, we can think of the weight function as a vector w ∈ R^{|E|}.
Suppose the network has n input neurons and k output neurons, and denote by
h_w : R^n → R^k the function calculated by the network if the weight function is
defined by w. Let us denote by Δ(h_w(x), y) the loss of predicting h_w(x) when
the target is y ∈ Y. For concreteness, we will take Δ to be the squared loss,
Δ(h_w(x), y) = (1/2) ‖h_w(x) − y‖^2; however, a similar derivation can be obtained for
every differentiable function. Finally, given a distribution D over the examples
domain, R^n × R^k, let L_D(w) be the risk of the network, namely,

L_D(w) = E_{(x,y)∼D}[ Δ(h_w(x), y) ].

Recall the SGD algorithm for minimizing the risk function LD (w). We repeat
the pseudocode from Chapter 14 with a few modifications, which are relevant
to the neural network application because of the nonconvexity of the objective
function. First, while in Chapter 14 we initialized w to be the zero vector, here
we initialize w to be a randomly chosen vector with values close to zero. This
is because an initialization with the zero vector will lead all hidden neurons to
have the same weights (if the network is a full layered network). In addition,
the hope is that if we repeat the SGD procedure several times, where each time
we initialize the process with a new random vector, one of the runs will lead
to a good local minimum. Second, while a fixed step size, η, is guaranteed to
be good enough for convex problems, here we utilize a variable step size, η_t, as
defined in Section 14.4.2. Because of the nonconvexity of the loss function, the
choice of the sequence η_t is more significant, and it is tuned in practice by a trial
and error manner. Third, we output the best performing vector on a validation
set. In addition, it is sometimes helpful to add regularization on the weights,
with parameter λ. That is, we try to minimize L_D(w) + (λ/2) ‖w‖^2. Finally, the
gradient does not have a closed form solution. Instead, it is implemented using
the backpropagation algorithm, which will be described in the sequel.

SGD for Neural Networks

parameters:
  number of iterations τ
  step size sequence η_1, η_2, . . . , η_τ
  regularization parameter λ > 0
input:
  layered graph (V, E)
  differentiable activation function σ : R → R
initialize:
  choose w^{(1)} ∈ R^{|E|} at random
  (from a distribution s.t. w^{(1)} is close enough to 0)
for i = 1, 2, . . . , τ
  sample (x, y) ∼ D
  calculate gradient v_i = backpropagation(x, y, w, (V, E), σ)
  update w^{(i+1)} = w^{(i)} − η_i (v_i + λ w^{(i)})
output:
  w̄ is the best performing w^{(i)} on a validation set

Backpropagation
input:
  example (x, y), weight vector w, layered graph (V, E),
  activation function σ : R → R
initialize:
  denote layers of the graph V_0, . . . , V_T where V_t = {v_{t,1}, . . . , v_{t,k_t}}
  define W_{t,i,j} as the weight of (v_{t,j}, v_{t+1,i})
  (where we set W_{t,i,j} = 0 if (v_{t,j}, v_{t+1,i}) ∉ E)
forward:
  set o_0 = x
  for t = 1, . . . , T
    for i = 1, . . . , k_t
      set a_{t,i} = Σ_{j=1}^{k_{t−1}} W_{t−1,i,j} o_{t−1,j}
      set o_{t,i} = σ(a_{t,i})
backward:
  set δ_T = o_T − y
  for t = T − 1, T − 2, . . . , 1
    for i = 1, . . . , k_t
      δ_{t,i} = Σ_{j=1}^{k_{t+1}} W_{t,j,i} δ_{t+1,j} σ′(a_{t+1,j})
output:
  foreach edge (v_{t−1,j}, v_{t,i}) ∈ E
    set the partial derivative to δ_{t,i} σ′(a_{t,i}) o_{t−1,j}
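A compact NumPy sketch of the two passes, for a fully connected network with sigmoid activations and the squared loss. The list Ws of weight matrices, with Ws[t] of shape (k_{t+1}, k_t), is an illustrative stand-in for the matrices W_t.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backpropagation(x, y, Ws):
    # forward pass: store the output o_t of every layer
    o = [np.asarray(x, dtype=float)]
    for W in Ws:
        o.append(sigmoid(W @ o[-1]))
    # backward pass: delta_T = o_T - y for the squared loss 0.5 * ||o_T - y||^2
    grads = [None] * len(Ws)
    delta = o[-1] - np.asarray(y, dtype=float)
    for t in range(len(Ws) - 1, -1, -1):
        sig_prime = o[t + 1] * (1.0 - o[t + 1])          # sigma'(a) = sigma(a) (1 - sigma(a))
        grads[t] = np.outer(delta * sig_prime, o[t])     # partial derivatives for the edges into layer t+1
        delta = Ws[t].T @ (delta * sig_prime)            # delta_t from delta_{t+1}, as in the pseudocode
    return grads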

Explaining How Backpropagation Calculates the Gradient:


We next explain how the backpropagation algorithm calculates the gradient of
the loss function on an example (x, y) with respect to the vector w. Let us first
recall a few definitions from vector calculus. Each element of the gradient is
the partial derivative with respect to the variable in w corresponding to one of
the edges of the network. Recall the definition of a partial derivative. Given a
function f : R^n → R, the partial derivative with respect to the ith variable at w
is obtained by fixing the values of w_1, . . . , w_{i−1}, w_{i+1}, . . . , w_n, which yields the scalar
function g : R → R defined by g(a) = f((w_1, . . . , w_{i−1}, w_i + a, w_{i+1}, . . . , w_n)),
and then taking the derivative of g at 0. For a function with multiple outputs,
f : R^n → R^m, the Jacobian of f at w ∈ R^n, denoted J_w(f), is the m × n matrix
whose i, j element is the partial derivative of f_i : R^n → R w.r.t. its jth variable
at w. Note that if m = 1 then the Jacobian matrix is the gradient of the function
(represented as a row vector). Two examples of Jacobian calculations, which we
will later use, are as follows.

• Let $f(w) = Aw$ for $A \in \mathbb{R}^{m,n}$. Then $J_w(f) = A$.

• For every $n$, we use the notation $\sigma$ to denote the function from $\mathbb{R}^n$ to $\mathbb{R}^n$ which applies the sigmoid function element-wise. That is, $\alpha = \sigma(\theta)$ means that for every $i$ we have $\alpha_i = \sigma(\theta_i) = \frac{1}{1+\exp(-\theta_i)}$. It is easy to verify that $J_\theta(\sigma)$ is a diagonal matrix whose $(i,i)$ entry is $\sigma'(\theta_i)$, where $\sigma'$ is the derivative function of the (scalar) sigmoid function, namely, $\sigma'(\theta_i) = \frac{1}{(1+\exp(\theta_i))(1+\exp(-\theta_i))}$. We also use the notation $\operatorname{diag}(\sigma'(\theta))$ to denote this matrix.

The chain rule for taking the derivative of a composition of functions can be
written in terms of the Jacobian as follows. Given two functions $f : \mathbb{R}^n \to \mathbb{R}^m$ and $g : \mathbb{R}^k \to \mathbb{R}^n$, we have that the Jacobian of the composition function, $(f \circ g) : \mathbb{R}^k \to \mathbb{R}^m$, at $w$, is

$$J_w(f \circ g) = J_{g(w)}(f)\, J_w(g).$$

For example, for $g(w) = Aw$, where $A \in \mathbb{R}^{n,k}$, we have that

$$J_w(\sigma \circ g) = \operatorname{diag}(\sigma'(Aw))\, A.$$

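As a quick numerical illustration of this chain rule (our own addition, not part of the text), one can compare the analytic Jacobian $\operatorname{diag}(\sigma'(Aw))\,A$ of the map $w \mapsto \sigma(Aw)$ with a finite-difference approximation; the snippet below is a minimal NumPy sketch.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigmoid_deriv(t):
    s = sigmoid(t)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
w = rng.normal(size=4)

# analytic Jacobian of w -> sigmoid(A w), per the chain rule above
J_analytic = np.diag(sigmoid_deriv(A @ w)) @ A

# central finite-difference approximation, column by column
eps = 1e-6
J_numeric = np.zeros_like(J_analytic)
for j in range(len(w)):
    e = np.zeros_like(w)
    e[j] = eps
    J_numeric[:, j] = (sigmoid(A @ (w + e)) - sigmoid(A @ (w - e))) / (2 * eps)

assert np.allclose(J_analytic, J_numeric, atol=1e-6)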
To describe the backpropagation algorithm, let us first decompose $V$ into the layers of the graph, $V = \dot\cup_{t=0}^{T} V_t$. For every $t$, let us write $V_t = \{v_{t,1}, \ldots, v_{t,k_t}\}$, where $k_t = |V_t|$. In addition, for every $t$ denote by $W_t \in \mathbb{R}^{k_{t+1}, k_t}$ a matrix which gives a weight to every potential edge between $V_t$ and $V_{t+1}$. If the edge exists in $E$ then we set $W_{t,i,j}$ to be the weight, according to $w$, of the edge $(v_{t,j}, v_{t+1,i})$. Otherwise, we add a “phantom” edge and set its weight to be zero, $W_{t,i,j} = 0$. Since when calculating the partial derivative with respect to the weight of some edge we fix all other weights, these additional “phantom” edges have no effect on the partial derivative with respect to existing edges. It follows that we can assume, without loss of generality, that all edges exist, that is, $E = \cup_t (V_t \times V_{t+1})$.

Next, we discuss how to calculate the partial derivatives with respect to the
edges from $V_{t-1}$ to $V_t$, namely, with respect to the elements in $W_{t-1}$. Since we fix all other weights of the network, it follows that the outputs of all the neurons in $V_{t-1}$ are fixed numbers which do not depend on the weights in $W_{t-1}$. Denote the corresponding vector by $o_{t-1}$. In addition, let us denote by $\ell_t : \mathbb{R}^{k_t} \to \mathbb{R}$ the loss function of the subnetwork defined by layers $V_t, \ldots, V_T$ as a function of the outputs of the neurons in $V_t$. The input to the neurons of $V_t$ can be written as $a_t = W_{t-1} o_{t-1}$ and the output of the neurons of $V_t$ is $o_t = \sigma(a_t)$. That is, for every $j$ we have $o_{t,j} = \sigma(a_{t,j})$. We obtain that the loss, as a function of $W_{t-1}$, can be written as

$$g_t(W_{t-1}) = \ell_t(o_t) = \ell_t(\sigma(a_t)) = \ell_t(\sigma(W_{t-1} o_{t-1})).$$

It would be convenient to rewrite this as follows. Let $w_{t-1} \in \mathbb{R}^{k_{t-1} k_t}$ be the column vector obtained by concatenating the rows of $W_{t-1}$ and then taking the transpose of the resulting long vector. Define by $O_{t-1}$ the $k_t \times (k_{t-1} k_t)$ matrix

$$O_{t-1} = \begin{pmatrix} o_{t-1}^\top & 0 & \cdots & 0 \\ 0 & o_{t-1}^\top & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & o_{t-1}^\top \end{pmatrix}. \qquad (20.2)$$

Then, $W_{t-1} o_{t-1} = O_{t-1} w_{t-1}$, so we can also write

$$g_t(w_{t-1}) = \ell_t(\sigma(O_{t-1} w_{t-1})).$$

Therefore, applying the chain rule, we obtain that

$$J_{w_{t-1}}(g_t) = J_{\sigma(O_{t-1} w_{t-1})}(\ell_t)\, \operatorname{diag}(\sigma'(O_{t-1} w_{t-1}))\, O_{t-1}.$$

Using our notation we have $o_t = \sigma(O_{t-1} w_{t-1})$ and $a_t = O_{t-1} w_{t-1}$, which yields

$$J_{w_{t-1}}(g_t) = J_{o_t}(\ell_t)\, \operatorname{diag}(\sigma'(a_t))\, O_{t-1}.$$

Let us also denote $\delta_t = J_{o_t}(\ell_t)$. Then, we can further rewrite the preceding as

$$J_{w_{t-1}}(g_t) = \bigl(\delta_{t,1}\, \sigma'(a_{t,1})\, o_{t-1}^\top,\ \ldots,\ \delta_{t,k_t}\, \sigma'(a_{t,k_t})\, o_{t-1}^\top\bigr). \qquad (20.3)$$

It is left to calculate the vector $\delta_t = J_{o_t}(\ell_t)$ for every $t$. This is the gradient of $\ell_t$ at $o_t$. We calculate this in a recursive manner. First observe that for the last layer we have that $\ell_T(u) = \Delta(u, y)$, where $\Delta$ is the loss function. Since we assume that $\Delta(u, y) = \frac{1}{2}\|u - y\|^2$ we obtain that $J_u(\ell_T) = (u - y)$. In particular, $\delta_T = J_{o_T}(\ell_T) = (o_T - y)$. Next, note that

$$\ell_t(u) = \ell_{t+1}(\sigma(W_t u)).$$

Therefore, by the chain rule,

$$J_u(\ell_t) = J_{\sigma(W_t u)}(\ell_{t+1})\, \operatorname{diag}(\sigma'(W_t u))\, W_t.$$

In particular,

$$\begin{aligned} \delta_t = J_{o_t}(\ell_t) &= J_{\sigma(W_t o_t)}(\ell_{t+1})\, \operatorname{diag}(\sigma'(W_t o_t))\, W_t \\ &= J_{o_{t+1}}(\ell_{t+1})\, \operatorname{diag}(\sigma'(a_{t+1}))\, W_t \\ &= \delta_{t+1}\, \operatorname{diag}(\sigma'(a_{t+1}))\, W_t. \end{aligned}$$

In summary, we can first calculate the vectors $\{a_t, o_t\}$ from the bottom of the network to its top. Then, we calculate the vectors $\{\delta_t\}$ from the top of
the network back to its bottom. Once we have all of these vectors, the partial
derivatives are easily obtained using Equation (20.3). We have thus shown that
the pseudocode of backpropagation indeed calculates the gradient.
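To make the computation concrete, here is a minimal NumPy sketch of the forward and backward passes for a fully connected layered network with sigmoid activations and the squared loss, following the pseudocode above. The function names and the list-of-matrices representation of the weights are our own choices, not the book's.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_deriv(a):
    s = sigmoid(a)
    return s * (1.0 - s)

def backprop(x, y, Ws):
    """Ws[t] is the weight matrix W_t of shape (k_{t+1}, k_t); the loss is
    0.5 * ||o_T - y||^2 and every layer uses the sigmoid activation.
    Returns one gradient matrix per layer, matching the shapes in Ws."""
    # forward pass: store pre-activations a_t and outputs o_t
    o = [np.asarray(x, dtype=float)]
    a = [None]
    for W in Ws:
        a.append(W @ o[-1])
        o.append(sigmoid(a[-1]))

    T = len(Ws)
    # backward pass: delta_T = o_T - y, then propagate toward the input
    deltas = [None] * (T + 1)
    deltas[T] = o[T] - np.asarray(y, dtype=float)
    for t in range(T - 1, 0, -1):
        deltas[t] = Ws[t].T @ (deltas[t + 1] * sigmoid_deriv(a[t + 1]))

    # derivative w.r.t. edge (v_{t-1,j}, v_{t,i}) is delta_{t,i} * sigma'(a_{t,i}) * o_{t-1,j},
    # i.e. an outer product per layer
    grads = []
    for t in range(1, T + 1):
        grads.append(np.outer(deltas[t] * sigmoid_deriv(a[t]), o[t - 1]))
    return grads

Feeding these gradients into the SGD loop above (adding the $\lambda w^{(i)}$ term for regularization) gives the full training procedure; comparing a few entries against finite differences is an easy way to validate such an implementation.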

20.7 Summary

Neural networks over graphs of size $s(n)$ can be used to describe hypothesis classes of all predictors that can be implemented in runtime of $O(\sqrt{s(n)})$. We
have also shown that their sample complexity depends polynomially on s(n)
(specifically, it depends on the number of edges in the network). Therefore, classes
of neural network hypotheses seem to be an excellent choice. Regrettably, the
problem of training the network on the basis of training data is computationally
hard. We have presented the SGD framework as a heuristic approach for training
neural networks and described the backpropagation algorithm which efficiently
calculates the gradient of the loss function with respect to the weights over the
edges.

20.8 Bibliographic Remarks

Neural networks were extensively studied in the 1980s and early 1990s, but with
mixed empirical success. In recent years, a combination of algorithmic advance-
ments, as well as increasing computational power and data size, has led to a
breakthrough in the effectiveness of neural networks. In particular, “deep net-
works” (i.e., networks of more than 2 layers) have shown very impressive practical
performance on a variety of domains. A few examples include convolutional net-
works (Lecun & Bengio 1995), restricted Boltzmann machines (Hinton, Osindero
& Teh 2006), auto-encoders (Ranzato, Huang, Boureau & Lecun 2007, Bengio &
LeCun 2007, Collobert & Weston 2008, Lee, Grosse, Ranganath & Ng 2009, Le,
Ranzato, Monga, Devin, Corrado, Chen, Dean & Ng 2012), and sum-product
networks (Livni, Shalev-Shwartz & Shamir 2013, Poon & Domingos 2011). See
also (Bengio 2009) and the references therein.
The expressive power of neural networks and the relation to circuit complexity
have been extensively studied in (Parberry 1994). For the analysis of the sample
complexity of neural networks we refer the reader to (Anthony & Bartlett 1999). Our proof technique of Theorem 20.6 follows the lecture notes of Kakade and Tewari.

Klivans & Sherstov (2006) have shown that for any $c > 0$, intersections of $n^c$ halfspaces over $\{\pm 1\}^n$ are not efficiently PAC learnable, even if we allow repre-
sentation independent learning. This hardness result relies on the cryptographic
assumption that there is no polynomial time solution to the unique-shortest-
vector problem. As we have argued, this implies that there cannot be an efficient
algorithm for training neural networks, even if we allow larger networks or other
activation functions that can be implemented efficiently.
The backpropagation algorithm has been introduced in Rumelhart, Hinton &
Williams (1986).

20.9 Exercises

1. Neural Networks are universal approximators: Let $f : [-1,1]^n \to [-1,1]$ be a $\rho$-Lipschitz function. Fix some $\epsilon > 0$. Construct a neural network $N : [-1,1]^n \to [-1,1]$, with the sigmoid activation function, such that for every $x \in [-1,1]^n$ it holds that $|f(x) - N(x)| \le \epsilon$.
   Hint: Similarly to the proof of Theorem 19.3, partition $[-1,1]^n$ into small
boxes. Use the Lipschitzness of f to show that it is approximately constant
at each box. Finally, show that a neural network can first decide which box
the input vector belongs to, and then predict the averaged value of f at that
box.
2. Prove Theorem 20.5.
   Hint: For every $f : \{-1,1\}^n \to \{-1,1\}$ construct a $1$-Lipschitz function $g : [-1,1]^n \to [-1,1]$ such that if you can approximate $g$ then you can express $f$.
3. Growth function of product: For $i = 1, 2$, let $F_i$ be a set of functions from $\mathcal{X}$ to $\mathcal{Y}_i$. Define $H = F_1 \times F_2$ to be the Cartesian product class. That is, for every $f_1 \in F_1$ and $f_2 \in F_2$, there exists $h \in H$ such that $h(x) = (f_1(x), f_2(x))$. Prove that $\tau_H(m) \le \tau_{F_1}(m)\, \tau_{F_2}(m)$.
4. Growth function of composition: Let $F_1$ be a set of functions from $\mathcal{X}$ to $\mathcal{Z}$ and let $F_2$ be a set of functions from $\mathcal{Z}$ to $\mathcal{Y}$. Let $H = F_2 \circ F_1$ be the composition class. That is, for every $f_1 \in F_1$ and $f_2 \in F_2$, there exists $h \in H$ such that $h(x) = f_2(f_1(x))$. Prove that $\tau_H(m) \le \tau_{F_2}(m)\, \tau_{F_1}(m)$.
5. VC of sigmoidal networks: In this exercise we show that there is a graph $(V, E)$ such that the VC dimension of the class of neural networks over these graphs with the sigmoid activation function is $\Omega(|E|^2)$. Note that for every $\epsilon > 0$, the sigmoid activation function can approximate the threshold activation function, $1_{[\sum_i x_i > 0]}$, up to accuracy $\epsilon$. To simplify the presentation, throughout the exercise we assume that we can exactly implement the activation function $1_{[\sum_i x_i > 0]}$ using a sigmoid activation function.
   Fix some $n$.
   1. Construct a network, $N_1$, with $O(n)$ weights, which implements a function from $\mathbb{R}$ to $\{0,1\}^n$ and satisfies the following property. For every $x \in \{0,1\}^n$, if we feed the network with the real number $0.x_1 x_2 \ldots x_n$, then the output of the network will be $x$.
      Hint: Denote $\alpha = 0.x_1 x_2 \ldots x_n$ and observe that $10^k \alpha - 0.5$ is at least $0.5$ if $x_k = 1$ and is at most $-0.3$ if $x_k = 0$.
   2. Construct a network, $N_2$, with $O(n)$ weights, which implements a function from $[n]$ to $\{0,1\}^n$ such that $N_2(i) = e_i$ for all $i$. That is, upon receiving the input $i$, the network outputs the vector of all zeros except $1$ at the $i$'th neuron.
   3. Let $\alpha_1, \ldots, \alpha_n$ be $n$ real numbers such that every $\alpha_i$ is of the form $0.a_1^{(i)} a_2^{(i)} \ldots a_n^{(i)}$, with $a_j^{(i)} \in \{0,1\}$. Construct a network, $N_3$, with $O(n)$ weights, which implements a function from $[n]$ to $\mathbb{R}$, and satisfies $N_3(i) = \alpha_i$ for every $i \in [n]$.
   4. Combine $N_1, N_3$ to obtain a network that receives $i \in [n]$ and outputs $a^{(i)}$.
   5. Construct a network $N_4$ that receives $(i, j) \in [n] \times [n]$ and outputs $a_j^{(i)}$.
      Hint: Observe that the AND function over $\{0,1\}^2$ can be calculated using $O(1)$ weights.
   6. Conclude that there is a graph with $O(n)$ weights such that the VC dimension of the resulting hypothesis class is $n^2$.
6. Prove Theorem 20.7.
Hint: The proof is similar to the hardness of learning intersections of halfs-
paces – see Exercise 32 in Chapter 8.
Part III
Additional Learning Models
21 Online Learning

In this chapter we describe a different model of learning, which is called online


learning. Previously, we studied the PAC learning model, in which the learner
first receives a batch of training examples, uses the training set to learn a hy-
pothesis, and only when learning is completed uses the learned hypothesis for
predicting the label of new examples. In our papayas learning problem, this
means that we should first buy a bunch of papayas and taste them all. Then, we
use all of this information to learn a prediction rule that determines the taste
of new papayas. In contrast, in online learning there is no separation between a
training phase and a prediction phase. Instead, each time we buy a papaya, it
is first considered a test example since we should predict whether it is going to
taste good. Then, after taking a bite from the papaya, we know the true label,
and the same papaya can be used as a training example that can help us improve
our prediction mechanism for future papayas.

Concretely, online learning takes place in a sequence of consecutive rounds.


On each online round, the learner first receives an instance (the learner buys
a papaya and knows its shape and color, which form the instance). Then, the
learner is required to predict a label (is the papaya tasty?). At the end of the
round, the learner obtains the correct label (he tastes the papaya and then knows
whether it is tasty or not). Finally, the learner uses this information to improve
his future predictions.

To analyze online learning, we follow a similar route to our study of PAC


learning. We start with online binary classification problems. We consider both
the realizable case, in which we assume, as prior knowledge, that all the labels are
generated by some hypothesis from a given hypothesis class, and the unrealizable
case, which corresponds to the agnostic PAC learning model. In particular, we
present an important algorithm called Weighted-Majority. Next, we study online
learning problems in which the loss function is convex. Finally, we present the
Perceptron algorithm as an example of the use of surrogate convex loss functions
in the online learning model.


21.1 Online Classification in the Realizable Case

Online learning is performed in a sequence of consecutive rounds, where at round


t the learner is given an instance, xt , taken from an instance domain X , and is
required to provide its label. We denote the predicted label by pt . After predicting
the label, the correct label, yt 2 {0, 1}, is revealed to the learner. The learner’s
goal is to make as few prediction mistakes as possible during this process. The
learner tries to deduce information from previous rounds so as to improve its
predictions on future rounds.
Clearly, learning is hopeless if there is no correlation between past and present
rounds. Previously in the book, we studied the PAC model in which we assume
that past and present examples are sampled i.i.d. from the same distribution
source. In the online learning model we make no statistical assumptions regard-
ing the origin of the sequence of examples. The sequence is allowed to be deter-
ministic, stochastic, or even adversarially adaptive to the learner’s own behavior
(as in the case of spam e-mail filtering). Naturally, an adversary can make the
number of prediction mistakes of our online learning algorithm arbitrarily large.
For example, the adversary can present the same instance on each online round,
wait for the learner’s prediction, and provide the opposite label as the correct
label.
To make nontrivial statements we must further restrict the problem. The real-
izability assumption is one possible natural restriction. In the realizable case, we
assume that all the labels are generated by some hypothesis, $h^\star : \mathcal{X} \to \mathcal{Y}$. Furthermore, $h^\star$ is taken from a hypothesis class $\mathcal{H}$, which is known to the learner.
This is analogous to the PAC learning model we studied in Chapter 3. With this
restriction on the sequence, the learner should make as few mistakes as possible,
assuming that both h? and the sequence of instances can be chosen by an ad-
versary. For an online learning algorithm, A, we denote by MA (H) the maximal
number of mistakes A might make on a sequence of examples which is labeled by
some h? 2 H. We emphasize again that both h? and the sequence of instances
can be chosen by an adversary. A bound on MA (H) is called a mistake-bound and
we will study how to design algorithms for which MA (H) is minimal. Formally:

definition 21.1 (Mistake Bounds, Online Learnability) Let $\mathcal{H}$ be a hypothesis class and let $A$ be an online learning algorithm. Given any sequence $S = (x_1, h^\star(x_1)), \ldots, (x_T, h^\star(x_T))$, where $T$ is any integer and $h^\star \in \mathcal{H}$, let $M_A(S)$ be the number of mistakes $A$ makes on the sequence $S$. We denote by $M_A(\mathcal{H})$ the supremum of $M_A(S)$ over all sequences of the above form. A bound of the form $M_A(\mathcal{H}) \le B < \infty$ is called a mistake bound. We say that a hypothesis class $\mathcal{H}$ is online learnable if there exists an algorithm $A$ for which $M_A(\mathcal{H}) \le B < \infty$.

Our goal is to study which hypothesis classes are learnable in the online model,
and in particular to find good learning algorithms for a given hypothesis class.
Remark 21.1 Throughout this section and the next, we ignore the computa-

tional aspect of learning, and do not restrict the algorithms to be efficient. In


Section 21.3 and Section 21.4 we study efficient online learning algorithms.
To simplify the presentation, we start with the case of a finite hypothesis class,
namely, $|\mathcal{H}| < \infty$.
In PAC learning, we identified ERM as a good learning algorithm, in the sense
that if H is learnable then it is learnable by the rule ERMH . A natural learning
rule for online learning is to use (at any online round) any ERM hypothesis,
namely, any hypothesis which is consistent with all past examples.
Consistent
input: A finite hypothesis class $\mathcal{H}$
initialize: $V_1 = \mathcal{H}$
for $t = 1, 2, \ldots$
  receive $x_t$
  choose any $h \in V_t$
  predict $p_t = h(x_t)$
  receive true label $y_t = h^\star(x_t)$
  update $V_{t+1} = \{h \in V_t : h(x_t) = y_t\}$

The Consistent algorithm maintains a set, Vt , of all the hypotheses which


are consistent with $(x_1, y_1), \ldots, (x_{t-1}, y_{t-1})$. This set is often called the version
space. It then picks any hypothesis from Vt and predicts according to this hy-
pothesis.
Obviously, whenever Consistent makes a prediction mistake, at least one
hypothesis is removed from $V_t$. Therefore, after making $M$ mistakes we have $|V_t| \le |\mathcal{H}| - M$. Since $V_t$ is always nonempty (by the realizability assumption it contains $h^\star$) we have $1 \le |V_t| \le |\mathcal{H}| - M$. Rearranging, we obtain the following:
corollary 21.2 Let $\mathcal{H}$ be a finite hypothesis class. The Consistent algorithm enjoys the mistake bound $M_{\text{Consistent}}(\mathcal{H}) \le |\mathcal{H}| - 1$.
It is rather easy to construct a hypothesis class and a sequence of examples on
which Consistent will indeed make $|\mathcal{H}| - 1$ mistakes (see Exercise 1). Therefore,
we present a better algorithm in which we choose h 2 Vt in a smarter way. We
shall see that this algorithm is guaranteed to make exponentially fewer mistakes.
Halving
input: A finite hypothesis class $\mathcal{H}$
initialize: $V_1 = \mathcal{H}$
for $t = 1, 2, \ldots$
  receive $x_t$
  predict $p_t = \operatorname{argmax}_{r \in \{0,1\}} |\{h \in V_t : h(x_t) = r\}|$
  (in case of a tie predict $p_t = 1$)
  receive true label $y_t = h^\star(x_t)$
  update $V_{t+1} = \{h \in V_t : h(x_t) = y_t\}$
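For concreteness, here is a small Python sketch of the Halving prediction rule for a finite class given as a list of predictors mapping instances to $\{0,1\}$. The interface is ours, and keeping the version space as an explicit list is only feasible for small classes.

def halving(hypotheses, stream):
    """Run Halving on an iterable of (x_t, y_t) pairs; return the mistake count."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes_for_one = sum(h(x) for h in version_space)
        p = 1 if 2 * votes_for_one >= len(version_space) else 0  # majority; ties -> 1
        if p != y:
            mistakes += 1
        # keep only the hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

On any realizable sequence the returned count is at most $\log_2(|\mathcal{H}|)$, which is exactly the bound proved next.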

theorem 21.3 Let $\mathcal{H}$ be a finite hypothesis class. The Halving algorithm enjoys the mistake bound $M_{\text{Halving}}(\mathcal{H}) \le \log_2(|\mathcal{H}|)$.

Proof We simply note that whenever the algorithm errs we have $|V_{t+1}| \le |V_t|/2$ (hence the name Halving). Therefore, if $M$ is the total number of mistakes, we have

$$1 \le |V_{T+1}| \le |\mathcal{H}|\, 2^{-M}.$$

Rearranging this inequality we conclude our proof.
Of course, Halving’s mistake bound is much better than Consistent’s mistake
bound. We already see that online learning is different from PAC learning—while
in PAC, any ERM hypothesis is good, in online learning choosing an arbitrary
ERM hypothesis is far from being optimal.

21.1.1 Online Learnability


We next take a more general approach, and aim at characterizing online learn-
ability. In particular, we target the following question: What is the optimal online
learning algorithm for a given hypothesis class H?
We present a dimension of hypothesis classes that characterizes the best achiev-
able mistake bound. This measure was proposed by Nick Littlestone and we
therefore refer to it as Ldim(H).
To motivate the definition of Ldim it is convenient to view the online learning
process as a game between two players: the learner versus the environment. On
round t of the game, the environment picks an instance xt , the learner predicts a
label pt 2 {0, 1}, and finally the environment outputs the true label, yt 2 {0, 1}.
Suppose that the environment wants to make the learner err on the first T rounds
of the game. Then, it must output $y_t = 1 - p_t$, and the only question is how it should choose the instances $x_t$ in such a way that ensures that for some $h^\star \in \mathcal{H}$ we have $y_t = h^\star(x_t)$ for all $t \in [T]$.
A strategy for an adversarial environment can be formally described as a
binary tree, as follows. Each node of the tree is associated with an instance from
X . Initially, the environment presents to the learner the instance associated with
the root of the tree. Then, if the learner predicts pt = 1 the environment will
declare that this is a wrong prediction (i.e., yt = 0) and will traverse to the right
child of the current node. If the learner predicts pt = 0 then the environment
will set yt = 1 and will traverse to the left child. This process will continue and
at each round, the environment will present the instance associated with the
current node.
Formally, consider a complete binary tree of depth T (we define the depth of
the tree as the number of edges in a path from the root to a leaf). We have
$2^{T+1} - 1$ nodes in such a tree, and we attach an instance to each node. Let $v_1, \ldots, v_{2^{T+1}-1}$ be these instances. We start from the root of the tree, and set $x_1 = v_1$. At round $t$, we set $x_t = v_{i_t}$ where $i_t$ is the current node. At the end of

[Figure 21.1] An illustration of a shattered tree of depth 2: the root is $v_1$, with children $v_2$ (left) and $v_3$ (right). The dashed path corresponds to the sequence of examples $((v_1, 1), (v_3, 0))$. The tree is shattered by $\mathcal{H} = \{h_1, h_2, h_3, h_4\}$, where the predictions of each hypothesis in $\mathcal{H}$ on the instances $v_1, v_2, v_3$ are given in the following table (the '*' mark means that $h_j(v_i)$ can be either 1 or 0):

       h1  h2  h3  h4
   v1   0   0   1   1
   v2   0   1   *   *
   v3   *   *   0   1

round $t$, we go to the left child of $i_t$ if $y_t = 0$ or to the right child if $y_t = 1$. That is, $i_{t+1} = 2 i_t + y_t$. Unraveling the recursion we obtain $i_t = 2^{t-1} + \sum_{j=1}^{t-1} y_j\, 2^{t-1-j}$. The preceding strategy for the environment succeeds only if for every $(y_1, \ldots, y_T)$ there exists $h \in \mathcal{H}$ such that $y_t = h(x_t)$ for all $t \in [T]$. This leads to the following definition.

definition 21.4 (H Shattered Tree) A shattered tree of depth d is a sequence


of instances $v_1, \ldots, v_{2^d - 1}$ in $\mathcal{X}$ such that for every labeling $(y_1, \ldots, y_d) \in \{0,1\}^d$ there exists $h \in \mathcal{H}$ such that for all $t \in [d]$ we have $h(v_{i_t}) = y_t$ where $i_t = 2^{t-1} + \sum_{j=1}^{t-1} y_j\, 2^{t-1-j}$.

An illustration of a shattered tree of depth 2 is given in Figure 21.1.

definition 21.5 (Littlestone’s Dimension (Ldim)) Ldim(H) is the maximal


integer T such that there exists a shattered tree of depth T , which is shattered
by H.

The definition of Ldim and the discussion above immediately imply the fol-
lowing:

lemma 21.6 No algorithm can have a mistake bound strictly smaller than $\operatorname{Ldim}(\mathcal{H})$; namely, for every algorithm, $A$, we have $M_A(\mathcal{H}) \ge \operatorname{Ldim}(\mathcal{H})$.

Proof Let $T = \operatorname{Ldim}(\mathcal{H})$ and let $v_1, \ldots, v_{2^T - 1}$ be a sequence that satisfies the requirements in the definition of Ldim. If the environment sets $x_t = v_{i_t}$ and $y_t = 1 - p_t$ for all $t \in [T]$, then the learner makes $T$ mistakes while the definition of Ldim implies that there exists a hypothesis $h \in \mathcal{H}$ such that $y_t = h(x_t)$ for all $t$.

Let us now give several examples.


Example 21.2 Let H be a finite hypothesis class. Clearly, any tree that is shat-
tered by $\mathcal{H}$ has depth of at most $\log_2(|\mathcal{H}|)$. Therefore, $\operatorname{Ldim}(\mathcal{H}) \le \log_2(|\mathcal{H}|)$.
Another way to conclude this inequality is by combining Lemma 21.6 with The-
orem 21.3.
Example 21.3 Let $\mathcal{X} = \{1, \ldots, d\}$ and $\mathcal{H} = \{h_1, \ldots, h_d\}$ where $h_j(x) = 1$ iff

x = j. Then, it is easy to show that Ldim(H) = 1 while |H| = d can be arbitrarily


large. Therefore, this example shows that Ldim(H) can be significantly smaller
than log2 (|H|).
Example 21.4 Let $\mathcal{X} = [0,1]$ and $\mathcal{H} = \{x \mapsto 1_{[x<a]} : a \in [0,1]\}$; namely, $\mathcal{H}$ is the class of thresholds on the interval $[0,1]$. Then, $\operatorname{Ldim}(\mathcal{H}) = \infty$. To see this,
consider the tree

1/2

1/4 3/4

1/8 3/8 5/8 7/8

This tree is shattered by H. And, because of the density of the reals, this tree
can be made arbitrarily deep.
Lemma 21.6 states that Ldim(H) lower bounds the mistake bound of any
algorithm. Interestingly, there is a standard algorithm whose mistake bound
matches this lower bound. The algorithm is similar to the Halving algorithm.
Recall that the prediction of Halving is made according to a majority vote of
the hypotheses which are consistent with previous examples. We denoted this
set by Vt . Put another way, Halving partitions Vt into two sets: Vt+ = {h 2 Vt :
h(xt ) = 1} and Vt = {h 2 Vt : h(xt ) = 0}. It then predicts according to the
larger of the two groups. The rationale behind this prediction is that whenever
Halving makes a mistake it ends up with |Vt+1 |  0.5 |Vt |.
The optimal algorithm we present in the following uses the same idea, but
instead of predicting according to the larger class, it predicts according to the
class with larger Ldim.

Standard Optimal Algorithm (SOA)

input: A hypothesis class $\mathcal{H}$
initialize: $V_1 = \mathcal{H}$
for $t = 1, 2, \ldots$
  receive $x_t$
  for $r \in \{0,1\}$ let $V_t^{(r)} = \{h \in V_t : h(x_t) = r\}$
  predict $p_t = \operatorname{argmax}_{r \in \{0,1\}} \operatorname{Ldim}(V_t^{(r)})$
  (in case of a tie predict $p_t = 1$)
  receive true label $y_t$
  update $V_{t+1} = \{h \in V_t : h(x_t) = y_t\}$
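The following Python sketch spells out SOA for a small finite class over a finite domain, together with a brute-force recursive computation of Ldim (exponential time, purely for illustration). We adopt the convention that the empty class has Ldim $-1$, so the algorithm never favors an empty restriction; all names are ours.

def ldim(hypotheses, domain):
    """Littlestone dimension of a small finite class over a finite domain."""
    if not hypotheses:
        return -1              # convention: Ldim of the empty class is -1
    if len(hypotheses) == 1:
        return 0
    best = 0
    for x in domain:
        v0 = [h for h in hypotheses if h(x) == 0]
        v1 = [h for h in hypotheses if h(x) == 1]
        if v0 and v1:
            # a depth-(d+1) shattered tree needs both restrictions to shatter depth-d trees
            best = max(best, 1 + min(ldim(v0, domain), ldim(v1, domain)))
    return best

def soa(hypotheses, domain, stream):
    """SOA: predict the label whose restriction has larger Ldim (ties -> 1)."""
    V = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        V0 = [h for h in V if h(x) == 0]
        V1 = [h for h in V if h(x) == 1]
        p = 1 if ldim(V1, domain) >= ldim(V0, domain) else 0
        if p != y:
            mistakes += 1
        V = V1 if y == 1 else V0
    return mistakes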

The following lemma formally establishes the optimality of the preceding al-
gorithm.

lemma 21.7 SOA enjoys the mistake bound $M_{\text{SOA}}(\mathcal{H}) \le \operatorname{Ldim}(\mathcal{H})$.

Proof It suffices to prove that whenever the algorithm makes a prediction mistake we have $\operatorname{Ldim}(V_{t+1}) \le \operatorname{Ldim}(V_t) - 1$. We prove this claim by assuming the contrary, that is, $\operatorname{Ldim}(V_{t+1}) = \operatorname{Ldim}(V_t)$. If this holds true, then the definition of $p_t$ implies that $\operatorname{Ldim}(V_t^{(r)}) = \operatorname{Ldim}(V_t)$ for both $r = 1$ and $r = 0$. But, then we can construct a shattered tree of depth $\operatorname{Ldim}(V_t) + 1$ for the class $V_t$, which leads to the desired contradiction.

Combining Lemma 21.7 and Lemma 21.6 we obtain:

corollary 21.8 Let H be any hypothesis class. Then, the standard optimal
algorithm enjoys the mistake bound MSOA (H) = Ldim(H) and no other algorithm
can have MA (H) < Ldim(H).

Comparison to VC Dimension
In the PAC learning model, learnability is characterized by the VC dimension of
the class H. Recall that the VC dimension of a class H is the maximal number
d such that there are instances x1 , . . . , xd that are shattered by H. That is, for
any sequence of labels (y1 , . . . , yd ) 2 {0, 1}d there exists a hypothesis h 2 H
that gives exactly this sequence of labels. The following theorem relates the VC
dimension to the Littlestone dimension.

theorem 21.9 For any class $\mathcal{H}$, $\operatorname{VCdim}(\mathcal{H}) \le \operatorname{Ldim}(\mathcal{H})$, and there are classes for which strict inequality holds. Furthermore, the gap can be arbitrarily large.

Proof We first prove that $\operatorname{VCdim}(\mathcal{H}) \le \operatorname{Ldim}(\mathcal{H})$. Suppose $\operatorname{VCdim}(\mathcal{H}) = d$ and let $x_1, \ldots, x_d$ be a shattered set. We now construct a complete binary tree of instances $v_1, \ldots, v_{2^d - 1}$, where all nodes at depth $i$ are set to be $x_i$ – see the
following illustration:

x1

x2 x2

x3 x3 x3 x3

Now, the definition of a shattered set clearly implies that we got a valid shattered
tree of depth $d$, and we conclude that $\operatorname{VCdim}(\mathcal{H}) \le \operatorname{Ldim}(\mathcal{H})$. To show that the
gap can be arbitrarily large simply note that the class given in Example 21.4 has
VC dimension of 1 whereas its Littlestone dimension is infinite.

21.2 Online Classification in the Unrealizable Case

In the previous section we studied online learnability in the realizable case. We


now consider the unrealizable case. Similarly to the agnostic PAC model, we
no longer assume that all labels are generated by some h? 2 H, but we require
the learner to be competitive with the best fixed predictor from H. This is
captured by the regret of the algorithm, which measures how “sorry” the learner
is, in retrospect, not to have followed the predictions of some hypothesis h 2 H.
Formally, the regret of an algorithm A relative to h when running on a sequence
of $T$ examples is defined as

$$\operatorname{Regret}_A(h, T) = \sup_{(x_1,y_1),\ldots,(x_T,y_T)} \left[ \sum_{t=1}^{T} |p_t - y_t| - \sum_{t=1}^{T} |h(x_t) - y_t| \right], \qquad (21.1)$$

and the regret of the algorithm relative to a hypothesis class H is

$$\operatorname{Regret}_A(\mathcal{H}, T) = \sup_{h \in \mathcal{H}} \operatorname{Regret}_A(h, T). \qquad (21.2)$$

We restate the learner’s goal as having the lowest possible regret relative to H.
An interesting question is whether we can derive an algorithm with low regret,
meaning that RegretA (H, T ) grows sublinearly with the number of rounds, T ,
which implies that the di↵erence between the error rate of the learner and the
best hypothesis in H tends to zero as T goes to infinity.
We first show that this is an impossible mission—no algorithm can obtain a
sublinear regret bound even if |H| = 2. Indeed, consider H = {h0 , h1 }, where h0
is the function that always returns 0 and h1 is the function that always returns
1. An adversary can make the number of mistakes of any online algorithm be
equal to T , by simply waiting for the learner’s prediction and then providing
the opposite label as the true label. In contrast, for any sequence of true labels,
y1 , . . . , yT , let b be the majority of labels in y1 , . . . , yT , then the number of
mistakes of hb is at most T /2. Therefore, the regret of any online algorithm
might be at least $T - T/2 = T/2$, which is not sublinear in $T$. This impossibility
result is attributed to Cover (Cover 1965).
To sidestep Cover’s impossibility result, we must further restrict the power
of the adversarial environment. We do so by allowing the learner to randomize
his predictions. Of course, this by itself does not circumvent Cover’s impossibil-
ity result, since in deriving this result we assumed nothing about the learner’s
strategy. To make the randomization meaningful, we force the adversarial envir-
onment to decide on yt without knowing the random coins flipped by the learner
on round t. The adversary can still know the learner’s forecasting strategy and
even the random coin flips of previous rounds, but it does not know the actual
value of the random coin flips used by the learner on round t. With this (mild)
change of game, we analyze the expected number of mistakes of the algorithm,
where the expectation is with respect to the learner’s own randomization. That
is, if the learner outputs ŷt where P[ŷt = 1] = pt , then the expected loss he pays

on round t is

$$\Pr[\hat y_t \ne y_t] = |p_t - y_t|.$$

Put another way, instead of having the predictions of the learner being in {0, 1}
we allow them to be in [0, 1], and interpret pt 2 [0, 1] as the probability to predict
the label 1 on round t.
With this assumption it is possible to derive a low regret algorithm. In partic-
ular, we will prove the following theorem.

theorem 21.10 For every hypothesis class H, there exists an algorithm for
online classification, whose predictions come from [0, 1], that enjoys the regret
bound

$$\forall h \in \mathcal{H}, \quad \sum_{t=1}^{T} |p_t - y_t| - \sum_{t=1}^{T} |h(x_t) - y_t| \le \sqrt{2 \min\{\log(|\mathcal{H}|),\, \operatorname{Ldim}(\mathcal{H}) \log(eT)\}\, T}.$$

Furthermore, no algorithm can achieve an expected regret bound smaller than $\Omega\bigl(\sqrt{\operatorname{Ldim}(\mathcal{H})\, T}\bigr)$.

We will provide a constructive proof of the upper bound part of the preceding
theorem. The proof of the lower bound part can be found in (Ben-David, Pal, &
Shalev-Shwartz 2009).
The proof of Theorem 21.10 relies on the Weighted-Majority algorithm for
learning with expert advice. This algorithm is important by itself and we dedicate
the next subsection to it.

21.2.1 Weighted-Majority
Weighted-majority is an algorithm for the problem of prediction with expert ad-
vice. In this online learning problem, on round t the learner has to choose the
advice of d given experts. We also allow the learner to randomize his choice by
defining a distribution over the $d$ experts, that is, picking a vector $w^{(t)} \in [0,1]^d$, with $\sum_i w^{(t)}_i = 1$, and choosing the $i$th expert with probability $w^{(t)}_i$. After the learner chooses an expert, it receives a vector of costs, $v_t \in [0,1]^d$, where $v_{t,i}$ is the cost of following the advice of the $i$th expert. If the learner's predictions are randomized, then its loss is defined to be the averaged cost, namely, $\sum_i w^{(t)}_i v_{t,i} = \langle w^{(t)}, v_t \rangle$. The algorithm assumes that the number of rounds $T$ is
given. In Exercise 4 we show how to get rid of this dependence using the doubling
trick.

Weighted-Majority
input: number of experts, $d$; number of rounds, $T$
parameter: $\eta = \sqrt{2\log(d)/T}$
initialize: $\tilde w^{(1)} = (1, \ldots, 1)$
for $t = 1, 2, \ldots$
  set $w^{(t)} = \tilde w^{(t)} / Z_t$ where $Z_t = \sum_i \tilde w^{(t)}_i$
  choose expert $i$ at random according to $\Pr[i] = w^{(t)}_i$
  receive costs of all experts $v_t \in [0,1]^d$
  pay cost $\langle w^{(t)}, v_t \rangle$
  update rule $\forall i,\ \tilde w^{(t+1)}_i = \tilde w^{(t)}_i\, e^{-\eta v_{t,i}}$
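A minimal NumPy sketch of this update, assuming the whole cost matrix is available up front (function name and interface are ours):

import numpy as np

def weighted_majority(costs, eta=None):
    """costs[t, i] in [0, 1] is the cost of expert i at round t (shape (T, d)).
    Returns the total expected cost paid by the learner."""
    T, d = costs.shape
    if eta is None:
        eta = np.sqrt(2.0 * np.log(d) / T)
    w_tilde = np.ones(d)
    total = 0.0
    for t in range(T):
        w = w_tilde / w_tilde.sum()          # normalize to a distribution
        total += w @ costs[t]                # expected cost at round t
        w_tilde *= np.exp(-eta * costs[t])   # multiplicative update
    return total

The quantity it returns, minus $\min_i \sum_t \text{costs}[t, i]$, is exactly the regret bounded in the next theorem.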

The following theorem is key for analyzing the regret bound of Weighted-
Majority.

theorem 21.11 Assuming that T > 2 log(d), the Weighted-Majority algo-


rithm enjoys the bound

$$\sum_{t=1}^{T} \langle w^{(t)}, v_t \rangle - \min_{i \in [d]} \sum_{t=1}^{T} v_{t,i} \le \sqrt{2 \log(d)\, T}.$$

Proof We have:

$$\log \frac{Z_{t+1}}{Z_t} = \log \sum_i \frac{\tilde w^{(t)}_i}{Z_t}\, e^{-\eta v_{t,i}} = \log \sum_i w^{(t)}_i\, e^{-\eta v_{t,i}}.$$

Using the inequality $e^{-a} \le 1 - a + a^2/2$, which holds for all $a \in (0,1)$, and the fact that $\sum_i w^{(t)}_i = 1$, we obtain

$$\log \frac{Z_{t+1}}{Z_t} \le \log \sum_i w^{(t)}_i \left(1 - \eta v_{t,i} + \eta^2 v_{t,i}^2/2\right) = \log\Bigl(1 - \underbrace{\sum_i w^{(t)}_i \left(\eta v_{t,i} - \eta^2 v_{t,i}^2/2\right)}_{\stackrel{\text{def}}{=}\, b}\Bigr).$$

Next, note that $b \in (0,1)$. Therefore, taking log of the two sides of the inequality $1 - b \le e^{-b}$ we obtain the inequality $\log(1-b) \le -b$, which holds for all $b \le 1$, and obtain

$$\log \frac{Z_{t+1}}{Z_t} \le -\sum_i w^{(t)}_i \left(\eta v_{t,i} - \eta^2 v_{t,i}^2/2\right) = -\eta \langle w^{(t)}, v_t \rangle + \eta^2 \sum_i w^{(t)}_i v_{t,i}^2 / 2 \le -\eta \langle w^{(t)}, v_t \rangle + \eta^2/2.$$

Summing this inequality over $t$ we get

$$\log(Z_{T+1}) - \log(Z_1) = \sum_{t=1}^{T} \log \frac{Z_{t+1}}{Z_t} \le -\eta \sum_{t=1}^{T} \langle w^{(t)}, v_t \rangle + \frac{T \eta^2}{2}. \qquad (21.3)$$

Next, we lower bound $Z_{T+1}$. For each $i$, we can rewrite $\tilde w^{(T+1)}_i = e^{-\eta \sum_t v_{t,i}}$ and we get that

$$\log Z_{T+1} = \log\Bigl(\sum_i e^{-\eta \sum_t v_{t,i}}\Bigr) \ge \log\Bigl(\max_i e^{-\eta \sum_t v_{t,i}}\Bigr) = -\eta \min_i \sum_t v_{t,i}.$$

Combining the preceding with Equation (21.3) and using the fact that $\log(Z_1) = \log(d)$ we get that

$$-\eta \min_i \sum_t v_{t,i} - \log(d) \le -\eta \sum_{t=1}^{T} \langle w^{(t)}, v_t \rangle + \frac{T \eta^2}{2},$$

which can be rearranged as follows:

$$\sum_{t=1}^{T} \langle w^{(t)}, v_t \rangle - \min_i \sum_t v_{t,i} \le \frac{\log(d)}{\eta} + \frac{\eta T}{2}.$$

Plugging the value of $\eta$ into the equation concludes our proof.
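For completeness, here is the last step written out (a one-line check we add for convenience): with $\eta = \sqrt{2\log(d)/T}$,

$$\frac{\log(d)}{\eta} + \frac{\eta T}{2} = \sqrt{\frac{T\log(d)}{2}} + \sqrt{\frac{T\log(d)}{2}} = \sqrt{2\log(d)\,T}.$$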

Proof of Theorem 21.10


Equipped with the Weighted-Majority algorithm and Theorem 21.11, we are
ready to prove Theorem 21.10. We start with the simpler case, in which H is
a finite class, and let us write H = {h1 , . . . , hd }. In this case, we can refer to
each hypothesis, hi , as an expert, whose advice is to predict hi (xt ), and whose
cost is $v_{t,i} = |h_i(x_t) - y_t|$. The prediction of the algorithm will therefore be $p_t = \sum_i w^{(t)}_i h_i(x_t) \in [0,1]$, and the loss is

$$|p_t - y_t| = \Bigl|\sum_{i=1}^{d} w^{(t)}_i h_i(x_t) - y_t\Bigr| = \Bigl|\sum_{i=1}^{d} w^{(t)}_i (h_i(x_t) - y_t)\Bigr|.$$

Now, if $y_t = 1$, then for all $i$, $h_i(x_t) - y_t \le 0$. Therefore, the above equals $\sum_i w^{(t)}_i |h_i(x_t) - y_t|$. If $y_t = 0$ then for all $i$, $h_i(x_t) - y_t \ge 0$, and the above also equals $\sum_i w^{(t)}_i |h_i(x_t) - y_t|$. All in all, we have shown that

$$|p_t - y_t| = \sum_{i=1}^{d} w^{(t)}_i |h_i(x_t) - y_t| = \langle w^{(t)}, v_t \rangle.$$

Furthermore, for each $i$, $\sum_t v_{t,i}$ is exactly the number of mistakes hypothesis $h_i$ makes. Applying Theorem 21.11 we obtain

corollary 21.12 Let H be a finite hypothesis class. There exists an algorithm


for online classification, whose predictions come from [0, 1], that enjoys the regret
bound
$$\sum_{t=1}^{T} |p_t - y_t| - \min_{h \in \mathcal{H}} \sum_{t=1}^{T} |h(x_t) - y_t| \le \sqrt{2\log(|\mathcal{H}|)\, T}.$$

Next, we consider the case of a general hypothesis class. Previously, we con-


structed an expert for each individual hypothesis. However, if H is infinite this
leads to a vacuous bound. The main idea is to construct a set of experts in a
more sophisticated way. The challenge is how to define a set of experts that, on
one hand, is not excessively large and, on the other hand, contains experts that
give accurate predictions.
We construct the set of experts so that for each hypothesis h 2 H and every
sequence of instances, x1 , x2 , . . . , xT , there exists at least one expert in the set
which behaves exactly as $h$ on these instances. For each $L \le \operatorname{Ldim}(\mathcal{H})$ and each
sequence 1  i1 < i2 < · · · < iL  T we define an expert. The expert simulates
the game between SOA (presented in the previous section) and the environment
on the sequence of instances x1 , x2 , . . . , xT assuming that SOA makes a mistake
precisely in rounds i1 , i2 , . . . , iL . The expert is defined by the following algorithm.

Expert($i_1, i_2, \ldots, i_L$)
input: A hypothesis class $\mathcal{H}$; indices $i_1 < i_2 < \cdots < i_L$
initialize: $V_1 = \mathcal{H}$
for $t = 1, 2, \ldots, T$
  receive $x_t$
  for $r \in \{0,1\}$ let $V_t^{(r)} = \{h \in V_t : h(x_t) = r\}$
  define $\tilde y_t = \operatorname{argmax}_r \operatorname{Ldim}(V_t^{(r)})$
  (in case of a tie set $\tilde y_t = 0$)
  if $t \in \{i_1, i_2, \ldots, i_L\}$
    predict $\hat y_t = 1 - \tilde y_t$
  else
    predict $\hat y_t = \tilde y_t$
  update $V_{t+1} = V_t^{(\hat y_t)}$

Note that each such expert can give us predictions at every round t while only
observing the instances x1 , . . . , xt . Our generic online learning algorithm is now
an application of the Weighted-Majority algorithm with these experts.
To analyze the algorithm we first note that the number of experts is
$$d = \sum_{L=0}^{\operatorname{Ldim}(\mathcal{H})} \binom{T}{L}. \qquad (21.4)$$

It can be shown that when $T \ge \operatorname{Ldim}(\mathcal{H}) + 2$, the right-hand side of the equation is bounded by $(eT / \operatorname{Ldim}(\mathcal{H}))^{\operatorname{Ldim}(\mathcal{H})}$ (the proof can be found in Lemma A.5).

Theorem 21.11 tells us that the expected number of mistakes of Weighted-Majority is at most the number of mistakes of the best expert plus $\sqrt{2\log(d)\, T}$. We will
next show that the number of mistakes of the best expert is at most the number
of mistakes of the best hypothesis in H. The following key lemma shows that,
on any sequence of instances, for each hypothesis h 2 H there exists an expert
with the same behavior.

lemma 21.13 Let $\mathcal{H}$ be any hypothesis class with $\operatorname{Ldim}(\mathcal{H}) < \infty$. Let $x_1, x_2, \ldots, x_T$ be any sequence of instances. For any $h \in \mathcal{H}$, there exists $L \le \operatorname{Ldim}(\mathcal{H})$ and indices $1 \le i_1 < i_2 < \cdots < i_L \le T$ such that when running Expert($i_1, i_2, \ldots, i_L$)
on the sequence x1 , x2 , . . . , xT , the expert predicts h(xt ) on each online round
t = 1, 2, . . . , T .

Proof Fix h 2 H and the sequence x1 , x2 , . . . , xT . We must construct L and the


indices i1 , i2 , . . . , iL . Consider running SOA on the input (x1 , h(x1 )), (x2 , h(x2 )),
. . ., (xT , h(xT )). SOA makes at most Ldim(H) mistakes on such input. We define
L to be the number of mistakes made by SOA and we define {i1 , i2 , . . . , iL } to
be the set of rounds in which SOA made the mistakes.
Now, consider the Expert(i1 , i2 , . . . , iL ) running on the sequence x1 , x2 , . . . , xT .
By construction, the set Vt maintained by Expert(i1 , i2 , . . . , iL ) equals the set Vt
maintained by SOA when running on the sequence (x1 , h(x1 )), . . . , (xT , h(xT )).
The predictions of SOA differ from the predictions of $h$ if and only if the round is in $\{i_1, i_2, \ldots, i_L\}$. Since Expert($i_1, i_2, \ldots, i_L$) predicts exactly like SOA if $t$ is not in $\{i_1, i_2, \ldots, i_L\}$ and the opposite of SOA's predictions if $t$ is in $\{i_1, i_2, \ldots, i_L\}$,
we conclude that the predictions of the expert are always the same as the pre-
dictions of h.

The previous lemma holds in particular for the hypothesis in H that makes the
least number of mistakes on the sequence of examples, and we therefore obtain
the following:

corollary 21.14 Let (x1 , y1 ), (x2 , y2 ), . . . , (xT , yT ) be a sequence of examples


and let $\mathcal{H}$ be a hypothesis class with $\operatorname{Ldim}(\mathcal{H}) < \infty$. There exists $L \le \operatorname{Ldim}(\mathcal{H})$ and indices $1 \le i_1 < i_2 < \cdots < i_L \le T$, such that Expert($i_1, i_2, \ldots, i_L$) makes at most as many mistakes as the best $h \in \mathcal{H}$ does, namely,

$$\min_{h \in \mathcal{H}} \sum_{t=1}^{T} |h(x_t) - y_t|$$

mistakes on the sequence of examples.

Together with Theorem 21.11, the upper bound part of Theorem 21.10 is
proven.

21.3 Online Convex Optimization

In Chapter 12 we studied convex learning problems and showed learnability


results for these problems in the agnostic PAC learning framework. In this section
we show that similar learnability results hold for convex problems in the online
learning framework. In particular, we consider the following problem.

Online Convex Optimization


definitions:
hypothesis class $\mathcal{H}$; domain $Z$; loss function $\ell : \mathcal{H} \times Z \to \mathbb{R}$
assumptions:
  $\mathcal{H}$ is convex
  $\forall z \in Z$, $\ell(\cdot, z)$ is a convex function
for $t = 1, 2, \ldots, T$
  learner predicts a vector $w^{(t)} \in \mathcal{H}$
  environment responds with $z_t \in Z$
  learner suffers loss $\ell(w^{(t)}, z_t)$

As in the online classification problem, we analyze the regret of the algorithm.


Recall that the regret of an online algorithm with respect to a competing hy-
pothesis, which here will be some vector $w^\star \in \mathcal{H}$, is defined as

$$\operatorname{Regret}_A(w^\star, T) = \sum_{t=1}^{T} \ell(w^{(t)}, z_t) - \sum_{t=1}^{T} \ell(w^\star, z_t). \qquad (21.5)$$

As before, the regret of the algorithm relative to a set of competing vectors, H,


is defined as
$$\operatorname{Regret}_A(\mathcal{H}, T) = \sup_{w^\star \in \mathcal{H}} \operatorname{Regret}_A(w^\star, T).$$

In Chapter 14 we have shown that Stochastic Gradient Descent solves convex


learning problems in the agnostic PAC model. We now show that a very similar
algorithm, Online Gradient Descent, solves online convex learning problems.

Online Gradient Descent


parameter: $\eta > 0$
initialize: $w^{(1)} = 0$
for $t = 1, 2, \ldots, T$
  predict $w^{(t)}$
  receive $z_t$ and let $f_t(\cdot) = \ell(\cdot, z_t)$
  choose $v_t \in \partial f_t(w^{(t)})$
  update:
    1. $w^{(t+\frac{1}{2})} = w^{(t)} - \eta v_t$
    2. $w^{(t+1)} = \operatorname{argmin}_{w \in \mathcal{H}} \|w - w^{(t+\frac{1}{2})}\|$
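A minimal Python sketch of the two-step update (gradient step followed by Euclidean projection). The callables `subgradient` and `project` are placeholders we introduce for illustration; projection onto an $\ell_2$ ball is given as one concrete example.

import numpy as np

def ogd(subgradient, project, T, dim, eta):
    """subgradient(t, w) returns an element of the subdifferential of the round-t
    loss at w; project(w) projects onto the convex set H. Returns the iterates."""
    w = np.zeros(dim)
    iterates = []
    for t in range(T):
        iterates.append(w.copy())        # predict w^(t)
        v = subgradient(t, w)            # v_t in the subdifferential of f_t at w^(t)
        w = project(w - eta * v)         # gradient step, then projection onto H
    return iterates

# Example projection: H is the Euclidean ball of radius B.
def project_ball(w, B=1.0):
    norm = np.linalg.norm(w)
    return w if norm <= B else (B / norm) * w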

theorem 21.15 The Online Gradient Descent algorithm enjoys the following
regret bound for every $w^\star \in \mathcal{H}$,

$$\operatorname{Regret}_A(w^\star, T) \le \frac{\|w^\star\|^2}{2\eta} + \frac{\eta}{2} \sum_{t=1}^{T} \|v_t\|^2.$$

If we further assume that $f_t$ is $\rho$-Lipschitz for all $t$, then setting $\eta = 1/\sqrt{T}$ yields

$$\operatorname{Regret}_A(w^\star, T) \le \frac{1}{2}\left(\|w^\star\|^2 + \rho^2\right)\sqrt{T}.$$

If we further assume that $\mathcal{H}$ is $B$-bounded and we set $\eta = \frac{B}{\rho\sqrt{T}}$ then

$$\operatorname{Regret}_A(\mathcal{H}, T) \le B\, \rho\, \sqrt{T}.$$
Proof The analysis is similar to the analysis of Stochastic Gradient Descent with projections. Using the projection lemma, the definition of $w^{(t+\frac{1}{2})}$, and the definition of subgradients, we have that for every $t$,

$$\begin{aligned}
\|w^{(t+1)} - w^\star\|^2 - \|w^{(t)} - w^\star\|^2
&= \|w^{(t+1)} - w^\star\|^2 - \|w^{(t+\frac{1}{2})} - w^\star\|^2 + \|w^{(t+\frac{1}{2})} - w^\star\|^2 - \|w^{(t)} - w^\star\|^2 \\
&\le \|w^{(t+\frac{1}{2})} - w^\star\|^2 - \|w^{(t)} - w^\star\|^2 \\
&= \|w^{(t)} - \eta v_t - w^\star\|^2 - \|w^{(t)} - w^\star\|^2 \\
&= -2\eta \langle w^{(t)} - w^\star, v_t \rangle + \eta^2 \|v_t\|^2 \\
&\le -2\eta\bigl(f_t(w^{(t)}) - f_t(w^\star)\bigr) + \eta^2 \|v_t\|^2.
\end{aligned}$$

Summing over $t$ and observing that the left-hand side is a telescopic sum we obtain that

$$\|w^{(T+1)} - w^\star\|^2 - \|w^{(1)} - w^\star\|^2 \le -2\eta \sum_{t=1}^{T} \bigl(f_t(w^{(t)}) - f_t(w^\star)\bigr) + \eta^2 \sum_{t=1}^{T} \|v_t\|^2.$$

Rearranging the inequality and using the fact that $w^{(1)} = 0$, we get that

$$\sum_{t=1}^{T} \bigl(f_t(w^{(t)}) - f_t(w^\star)\bigr) \le \frac{\|w^{(1)} - w^\star\|^2 - \|w^{(T+1)} - w^\star\|^2}{2\eta} + \frac{\eta}{2}\sum_{t=1}^{T} \|v_t\|^2 \le \frac{\|w^\star\|^2}{2\eta} + \frac{\eta}{2}\sum_{t=1}^{T} \|v_t\|^2.$$

This proves the first bound in the theorem. The second bound follows from the assumption that $f_t$ is $\rho$-Lipschitz, which implies that $\|v_t\| \le \rho$.

21.4 The Online Perceptron Algorithm

The Perceptron is a classic online learning algorithm for binary classification with
the hypothesis class of homogenous halfspaces, namely, $\mathcal{H} = \{x \mapsto \operatorname{sign}(\langle w, x \rangle) : w \in \mathbb{R}^d\}$. In Section 9.1.2 we have presented the batch version of the Perceptron,


which aims to solve the ERM problem with respect to H. We now present an
online version of the Perceptron algorithm.
Let $\mathcal{X} = \mathbb{R}^d$, $\mathcal{Y} = \{-1, 1\}$. On round $t$, the learner receives a vector $x_t \in \mathbb{R}^d$. The learner maintains a weight vector $w^{(t)} \in \mathbb{R}^d$ and predicts $p_t = \operatorname{sign}(\langle w^{(t)}, x_t \rangle)$. Then, it receives $y_t \in \mathcal{Y}$ and pays 1 if $p_t \ne y_t$ and 0 otherwise.
The goal of the learner is to make as few prediction mistakes as possible. In Section 21.1 we characterized the optimal algorithm and showed that the best achievable mistake bound depends on the Littlestone dimension of the class. We show later that if $d \ge 2$ then $\operatorname{Ldim}(\mathcal{H}) = \infty$, which implies that we have no hope of making few prediction mistakes. Indeed, consider the tree for which $v_1 = (\frac{1}{2}, 1, 0, \ldots, 0)$, $v_2 = (\frac{1}{4}, 1, 0, \ldots, 0)$, $v_3 = (\frac{3}{4}, 1, 0, \ldots, 0)$, etc. Because of the density of the reals, this tree is shattered by the subset of $\mathcal{H}$ which contains all hypotheses that are parametrized by $w$ of the form $w = (-1, a, 0, \ldots, 0)$, for $a \in [0,1]$. We conclude that indeed $\operatorname{Ldim}(\mathcal{H}) = \infty$.
To sidestep this impossibility result, the Perceptron algorithm relies on the
technique of surrogate convex losses (see Section 12.3). This is also closely related
to the notion of margin we studied in Chapter 15.
A weight vector w makes a mistake on an example (x, y) whenever the sign of
$\langle w, x \rangle$ does not equal $y$. Therefore, we can write the 0-1 loss function as follows

$$\ell(w, (x, y)) = 1_{[y \langle w, x \rangle \le 0]}.$$

On rounds on which the algorithm makes a prediction mistake, we shall use the hinge-loss as a surrogate convex loss function

$$f_t(w) = \max\{0,\ 1 - y_t \langle w, x_t \rangle\}.$$

The hinge-loss satisfies the two conditions:

• $f_t$ is a convex function
• For all $w$, $f_t(w) \ge \ell(w, (x_t, y_t))$. In particular, this holds for $w^{(t)}$.

On rounds on which the algorithm is correct, we shall define ft (w) = 0. Clearly,


ft is convex in this case as well. Furthermore, ft (w(t) ) = `(w(t) , (xt , yt )) = 0.
Remark 21.5 In Section 12.3 we used the same surrogate loss function for all the
examples. In the online model, we allow the surrogate to depend on the specific
round. It can even depend on w(t) . Our ability to use a round specific surrogate
stems from the worst-case type of analysis we employ in online learning.
Let us now run the Online Gradient Descent algorithm on the sequence of
functions, f1 , . . . , fT , with the hypothesis class being all vectors in Rd (hence,
the projection step is vacuous). Recall that the algorithm initializes w(1) = 0
and its update rule is
$$w^{(t+1)} = w^{(t)} - \eta v_t$$

for some $v_t \in \partial f_t(w^{(t)})$. In our case, if $y_t \langle w^{(t)}, x_t \rangle > 0$ then $f_t$ is the zero

function and we can take $v_t = 0$. Otherwise, it is easy to verify that $v_t = -y_t x_t$ is in $\partial f_t(w^{(t)})$. We therefore obtain the update rule

$$w^{(t+1)} = \begin{cases} w^{(t)} & \text{if } y_t \langle w^{(t)}, x_t \rangle > 0 \\ w^{(t)} + \eta y_t x_t & \text{otherwise} \end{cases}$$

Denote by M the set of rounds in which sign(hw(t) , xt i) 6= yt . Note that on


round $t$, the prediction of the Perceptron can be rewritten as

$$p_t = \operatorname{sign}(\langle w^{(t)}, x_t \rangle) = \operatorname{sign}\Bigl(\eta \sum_{i \in M : i < t} y_i \langle x_i, x_t \rangle\Bigr).$$

This form implies that the predictions of the Perceptron algorithm and the set $M$ do not depend on the actual value of $\eta$ as long as $\eta > 0$. We have therefore
obtained the Perceptron algorithm:
Perceptron
initialize: $w^{(1)} = 0$
for $t = 1, 2, \ldots, T$
  receive $x_t$
  predict $p_t = \operatorname{sign}(\langle w^{(t)}, x_t \rangle)$
  if $y_t \langle w^{(t)}, x_t \rangle \le 0$
    $w^{(t+1)} = w^{(t)} + y_t x_t$
  else
    $w^{(t+1)} = w^{(t)}$
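In code the algorithm is only a few lines; here is a NumPy sketch (interface is ours), processing a sequence of labeled examples with labels in $\{-1, +1\}$:

import numpy as np

def perceptron(X, Y):
    """X has shape (T, d); Y entries are in {-1, +1}.
    Returns the final weight vector and the rounds on which the algorithm erred."""
    T, d = X.shape
    w = np.zeros(d)
    mistake_rounds = []
    for t in range(T):
        margin = Y[t] * (w @ X[t])
        if margin <= 0:                 # prediction mistake (or zero margin)
            w = w + Y[t] * X[t]         # Perceptron update
            mistake_rounds.append(t)
    return w, mistake_rounds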

To analyze the Perceptron, we rely on the analysis of Online Gradient De-


scent given in the previous section. In our case, the subgradient of ft we use
in the Perceptron is $v_t = -1_{[y_t \langle w^{(t)}, x_t \rangle \le 0]}\, y_t x_t$. Indeed, the Perceptron's update is $w^{(t+1)} = w^{(t)} - v_t$, and as discussed before this is equivalent to $w^{(t+1)} = w^{(t)} - \eta v_t$ for every $\eta > 0$. Therefore, Theorem 21.15 tells us that

$$\sum_{t=1}^{T} f_t(w^{(t)}) - \sum_{t=1}^{T} f_t(w^\star) \le \frac{1}{2\eta}\|w^\star\|_2^2 + \frac{\eta}{2} \sum_{t=1}^{T} \|v_t\|_2^2.$$

Since $f_t(w^{(t)})$ is a surrogate for the 0-1 loss we know that $\sum_{t=1}^{T} f_t(w^{(t)}) \ge |M|$. Denote $R = \max_t \|x_t\|$; then we obtain

$$|M| - \sum_{t=1}^{T} f_t(w^\star) \le \frac{1}{2\eta}\|w^\star\|_2^2 + \frac{\eta}{2} |M|\, R^2.$$

Setting $\eta = \frac{\|w^\star\|}{R\sqrt{|M|}}$ and rearranging, we obtain

$$|M| - R\|w^\star\|\sqrt{|M|} - \sum_{t=1}^{T} f_t(w^\star) \le 0. \qquad (21.6)$$

This inequality implies



theorem 21.16 Suppose that the Perceptron algorithm runs on a sequence


$(x_1, y_1), \ldots, (x_T, y_T)$ and let $R = \max_t \|x_t\|$. Let $M$ be the rounds on which the Perceptron errs and let $f_t(w) = 1_{[t \in M]}\, [1 - y_t \langle w, x_t \rangle]_+$. Then, for every $w^\star$

$$|M| \le \sum_t f_t(w^\star) + R\|w^\star\| \sqrt{\sum_t f_t(w^\star)} + R^2 \|w^\star\|^2.$$

In particular, if there exists $w^\star$ such that $y_t \langle w^\star, x_t \rangle \ge 1$ for all $t$ then

$$|M| \le R^2 \|w^\star\|^2.$$

Proof The theorem follows from Equation (21.6) and the following claim: Given $x, b, c \in \mathbb{R}_+$, the inequality $x - b\sqrt{x} - c \le 0$ implies that $x \le c + b^2 + b\sqrt{c}$. The last claim can be easily derived by analyzing the roots of the convex parabola $Q(y) = y^2 - by - c$.

The last assumption of Theorem 21.16 is called separability with large margin
(see Chapter 15). That is, there exists w? that not only satisfies that the point
xt lies on the correct side of the halfspace, it also guarantees that xt is not too
close to the decision boundary. More specifically, the distance from xt to the
decision boundary is at least $\gamma = 1/\|w^\star\|$ and the bound becomes $(R/\gamma)^2$. When the separability assumption does not hold, the bound involves the term $[1 - y_t \langle w^\star, x_t \rangle]_+$ which measures how much the separability with margin requirement is violated.
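For a concrete feel for the bound (the numbers are our own illustration): if every instance satisfies $\|x_t\| \le R = 1$ and some $w^\star$ with $\|w^\star\| = 10$ separates the sequence with margin $\gamma = 1/\|w^\star\| = 0.1$, then the Perceptron makes at most $(R/\gamma)^2 = 100$ mistakes, no matter how long the sequence is.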
As a last remark we note that there can be cases in which there exists some
w? that makes zero errors on the sequence but the Perceptron will make many
errors. Indeed, this is a direct consequence of the fact that $\operatorname{Ldim}(\mathcal{H}) = \infty$. The way we sidestep this impossibility result is by assuming more on the sequence of examples – the bound in Theorem 21.16 will be meaningful only if the cumulative surrogate loss, $\sum_t f_t(w^\star)$, is not excessively large.

21.5 Summary

In this chapter we have studied the online learning model. Many of the results
we derived for the PAC learning model have an analog in the online model. First,
we have shown that a combinatorial dimension, the Littlestone dimension, char-
acterizes online learnability. To show this, we introduced the SOA algorithm (for
the realizable case) and the Weighted-Majority algorithm (for the unrealizable
case). We have also studied online convex optimization and have shown that
online gradient descent is a successful online learner whenever the loss function
is convex and Lipschitz. Finally, we presented the online Perceptron algorithm
as a combination of online gradient descent and the concept of surrogate convex
loss functions.

21.6 Bibliographic Remarks

The Standard Optimal Algorithm was derived by the seminal work of Lit-
tlestone (1988). A generalization to the nonrealizable case, as well as other
variants like margin-based Littlestone’s dimension, were derived in (Ben-David
et al. 2009). Characterizations of online learnability beyond classification have
been obtained in (Abernethy, Bartlett, Rakhlin & Tewari 2008, Rakhlin, Srid-
haran & Tewari 2010, Daniely et al. 2011). The Weighted-Majority algorithm is
due to (Littlestone & Warmuth 1994) and (Vovk 1990).
The term “online convex programming” was introduced by Zinkevich (2003)
but this setting was introduced some years earlier by Gordon (1999). The Per-
ceptron dates back to Rosenblatt (Rosenblatt 1958). An analysis for the re-
alizable case (with margin assumptions) appears in (Agmon 1954, Minsky &
Papert 1969). Freund and Schapire (Freund & Schapire 1999) presented an anal-
ysis for the unrealizable case with a squared-hinge-loss based on a reduction to
the realizable case. A direct analysis for the unrealizable case with the hinge-loss
was given by Gentile (Gentile 2003).
For additional information we refer the reader to Cesa-Bianchi & Lugosi (2006)
and Shalev-Shwartz (2011).

21.7 Exercises

1. Find a hypothesis class H and a sequence of examples on which Consistent


makes $|\mathcal{H}| - 1$ mistakes.
2. Find a hypothesis class H and a sequence of examples on which the mistake
bound of the Halving algorithm is tight.
3. Let $d \ge 2$, $\mathcal{X} = \{1, \ldots, d\}$ and let $\mathcal{H} = \{h_j : j \in [d]\}$, where $h_j(x) = 1_{[x=j]}$.
Calculate MHalving (H) (i.e., derive lower and upper bounds on MHalving (H),
and prove that they are equal).
4. The Doubling Trick:
In Theorem 21.15, the parameter ⌘ depends on the time horizon T . In this
exercise we show how to get rid of this dependence by a simple trick.
Consider an algorithm that enjoys a regret bound of the form $\alpha \sqrt{T}$, but
its parameters require the knowledge of T . The doubling trick, described in
the following, enables us to convert such an algorithm into an algorithm that
does not need to know the time horizon. The idea is to divide the time into
periods of increasing size and run the original algorithm on each period.

The Doubling Trick


input: algorithm A whose parameters depend on the time horizon
for m = 0, 1, 2, . . .
run A on the $2^m$ rounds $t = 2^m, \ldots, 2^{m+1} - 1$

Show that if the regret of A on each period of $2^m$ rounds is at most $\alpha \sqrt{2^m}$, then the total regret is at most

$$\frac{\sqrt{2}}{\sqrt{2} - 1}\, \alpha \sqrt{T}.$$
5. Online-to-batch Conversions: In this exercise we demonstrate how a suc-
cessful online learning algorithm can be used to derive a successful PAC
learner as well.
Consider a PAC learning problem for binary classification parameterized
by an instance domain, X , and a hypothesis class, H. Suppose that there exists
an online learning algorithm, $A$, which enjoys a mistake bound $M_A(\mathcal{H}) < \infty$.
Consider running this algorithm on a sequence of T examples which are sam-
pled i.i.d. from a distribution D over the instance space X , and are labeled by
some h? 2 H. Suppose that for every round t, the prediction of the algorithm
is based on a hypothesis ht : X ! {0, 1}. Show that
$$\mathbb{E}[L_{\mathcal{D}}(h_r)] \le \frac{M_A(\mathcal{H})}{T},$$
where the expectation is over the random choice of the instances as well as a
random choice of r according to the uniform distribution over [T ].
Hint: Use similar arguments to the ones appearing in the proof of Theo-
rem 14.8.
22 Clustering

Clustering is one of the most widely used techniques for exploratory data anal-
ysis. Across all disciplines, from social sciences to biology to computer science,
people try to get a first intuition about their data by identifying meaningful
groups among the data points. For example, computational biologists cluster
genes on the basis of similarities in their expression in different experiments; re-
tailers cluster customers, on the basis of their customer profiles, for the purpose
of targeted marketing; and astronomers cluster stars on the basis of their spatial
proximity.
The first point that one should clarify is, naturally, what is clustering? In-
tuitively, clustering is the task of grouping a set of objects such that similar
objects end up in the same group and dissimilar objects are separated into dif-
ferent groups. Clearly, this description is quite imprecise and possibly ambiguous.
Quite surprisingly, it is not at all clear how to come up with a more rigorous
definition.
There are several sources for this difficulty. One basic problem is that the
two objectives mentioned in the earlier statement may in many cases contradict
each other. Mathematically speaking, similarity (or proximity) is not a transi-
tive relation, while cluster sharing is an equivalence relation and, in particular,
it is a transitive relation. More concretely, it may be the case that there is a
long sequence of objects, x1 , . . . , xm such that each xi is very similar to its two
neighbors, $x_{i-1}$ and $x_{i+1}$, but $x_1$ and $x_m$ are very dissimilar. If we wish to make
sure that whenever two elements are similar they share the same cluster, then
we must put all of the elements of the sequence in the same cluster. However,
in that case, we end up with dissimilar elements (x1 and xm ) sharing a cluster,
thus violating the second requirement.
To illustrate this point further, suppose that we would like to cluster the points
in the following picture into two clusters.

A clustering algorithm that emphasizes not separating close-by points (e.g., the
Single Linkage algorithm that will be described in Section 22.1) will cluster this
input by separating it horizontally according to the two lines:


In contrast, a clustering method that emphasizes not having far-away points


share the same cluster (e.g., the 2-means algorithm that will be described in
Section 22.1) will cluster the same input by dividing it vertically into the right-
hand half and the left-hand half:

Another basic problem is the lack of “ground truth” for clustering, which is a
common problem in unsupervised learning. So far in the book, we have mainly
dealt with supervised learning (e.g., the problem of learning a classifier from
labeled training data). The goal of supervised learning is clear – we wish to
learn a classifier which will predict the labels of future examples as accurately
as possible. Furthermore, a supervised learner can estimate the success, or the
risk, of its hypotheses using the labeled training data by computing the empirical
loss. In contrast, clustering is an unsupervised learning problem; namely, there
are no labels that we try to predict. Instead, we wish to organize the data in
some meaningful way. As a result, there is no clear success evaluation procedure
for clustering. In fact, even on the basis of full knowledge of the underlying data
distribution, it is not clear what is the “correct” clustering for that data or how
to evaluate a proposed clustering.
Consider, for example, the following set of points in R2 :

and suppose we are required to cluster them into two clusters. We have two
highly justifiable solutions:

This phenomenon is not just artificial but occurs in real applications. A given
set of objects can be clustered in various different meaningful ways. This may be due to having different implicit notions of distance (or similarity) between
objects, for example, clustering recordings of speech by the accent of the speaker
versus clustering them by content, clustering movie reviews by movie topic versus
clustering them by the review sentiment, clustering paintings by topic versus
clustering them by style, and so on.
To summarize, there may be several very different conceivable clustering solutions
for a given data set. As a result, there is a wide variety of clustering
algorithms that, on some input data, will output very different clusterings.

A Clustering Model:
Clustering tasks can vary in terms of both the type of input they have and the
type of outcome they are expected to compute. For concreteness, we shall focus
on the following common setup:

Input — a set of elements, X , and a distance function over it. That is, a function
d : X ⇥ X ! R+ that is symmetric, satisfies d(x, x) = 0 for all x 2 X
and often also satisfies the triangle inequality. Alternatively, the function
could be a similarity function s : X ⇥ X ! [0, 1] that is symmetric
and satisfies s(x, x) = 1 for all x 2 X . Additionally, some clustering
algorithms also require an input parameter k (determining the number
of required clusters).
Output — a partition of the domain set X into subsets. That is, C = (C_1, . . . , C_k)
where C_1 ∪ · · · ∪ C_k = X and C_i ∩ C_j = ∅ for all i ≠ j. In some situations the
clustering is “soft,” namely, the partition of X into the different clusters
is probabilistic where the output is a function assigning to each domain
point, x 2 X , a vector (p1 (x), . . . , pk (x)), where pi (x) = P[x 2 Ci ] is
the probability that x belongs to cluster Ci . Another possible output is
a clustering dendrogram (from Greek dendron = tree, gramma = draw-
ing), which is a hierarchical tree of domain subsets, having the singleton
sets in its leaves, and the full domain as its root. We shall discuss this
formulation in more detail in the following.

In the following we survey some of the most popular clustering methods. In
the last section of this chapter we return to the high-level discussion of what
clustering is.

22.1 Linkage-Based Clustering Algorithms

Linkage-based clustering is probably the simplest and most straightforward paradigm


of clustering. These algorithms proceed in a sequence of rounds. They start from
the trivial clustering that has each data point as a single-point cluster. Then,
repeatedly, these algorithms merge the “closest” clusters of the previous cluster-
ing. Consequently, the number of clusters decreases with each such round. If kept
going, such algorithms would eventually result in the trivial clustering in which
all of the domain points share one large cluster. Two parameters, then, need to
be determined to define such an algorithm clearly. First, we have to decide how
to measure (or define) the distance between clusters, and, second, we have to
determine when to stop merging. Recall that the input to a clustering algorithm
is a between-points distance function, d. There are many ways of extending d to
a measure of distance between domain subsets (or clusters). The most common
ways are

1. Single Linkage clustering, in which the between-clusters distance is defined
by the minimum distance between members of the two clusters, namely,

    D(A, B) := min{d(x, y) : x ∈ A, y ∈ B}

2. Average Linkage clustering, in which the distance between two clusters is
defined to be the average distance between a point in one of the clusters and
a point in the other, namely,

    D(A, B) := (1/(|A||B|)) Σ_{x ∈ A, y ∈ B} d(x, y)

3. Max Linkage clustering, in which the distance between two clusters is defined
as the maximum distance between their elements, namely,

    D(A, B) := max{d(x, y) : x ∈ A, y ∈ B}.

The linkage-based clustering algorithms are agglomerative in the sense that they
start from data that is completely fragmented and keep building larger and
larger clusters as they proceed. Without employing a stopping rule, the outcome
of such an algorithm can be described by a clustering dendrogram: that is, a tree
of domain subsets, having the singleton sets in its leaves, and the full domain as
its root. For example, if the input is the elements X = {a, b, c, d, e} ⇢ R2 with
the Euclidean distance as depicted on the left, then the resulting dendrogram is
the one depicted on the right:

[Figure: five points a, b, c, d, e in the plane (left) and the resulting dendrogram (right);
the root {a, b, c, d, e} splits into {a} and {b, c, d, e}, which in turn splits into
{b, c} and {d, e}, with the singletons {a}, {b}, {c}, {d}, {e} at the leaves.]

The single linkage algorithm is closely related to Kruskal’s algorithm for finding
a minimal spanning tree on a weighted graph. Indeed, consider the full graph
whose vertices are elements of X and the weight of an edge (x, y) is the distance
d(x, y). Each merge of two clusters performed by the single linkage algorithm
corresponds to a choice of an edge in the aforementioned graph. It is also possible
to show that the set of edges the single linkage algorithm chooses along its run
forms a minimal spanning tree.
If one wishes to turn a dendrogram into a partition of the space (a clustering),
one needs to employ a stopping criterion. Common stopping criteria include

• Fixed number of clusters – fix some parameter, k, and stop merging clusters
as soon as the number of clusters is k.
• Distance upper bound – fix some r ∈ R+. Stop merging as soon as all the
between-clusters distances are larger than r. We can also set r to be
α · max{d(x, y) : x, y ∈ X} for some α < 1. In that case the stopping
criterion is called “scaled distance upper bound.”
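
To make the procedure concrete, the following is a minimal Python/numpy sketch of single linkage
clustering with the fixed-number-of-clusters stopping rule. The function name and the naive
implementation (quadratic scans in every round) are our own illustrative choices, not part of
the book's presentation:

    import numpy as np

    def single_linkage(D, k):
        """Agglomerative single-linkage clustering.

        D : (m, m) symmetric matrix of pairwise distances d(x_i, x_j).
        k : stop merging once k clusters remain.
        Returns a list of k clusters, each a set of point indices.
        """
        m = D.shape[0]
        clusters = [{i} for i in range(m)]   # start: every point is its own cluster
        while len(clusters) > k:
            best = (np.inf, None, None)
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    # single linkage: between-clusters distance = min over cross pairs
                    dab = min(D[i, j] for i in clusters[a] for j in clusters[b])
                    if dab < best[0]:
                        best = (dab, a, b)
            _, a, b = best
            clusters[a] |= clusters[b]       # merge the two closest clusters
            del clusters[b]
        return clusters

Replacing the min in the inner loop by a mean or a max would give Average Linkage and Max
Linkage, respectively, and stopping once best[0] exceeds a threshold r implements the
distance-upper-bound rule.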

22.2 k-Means and Other Cost Minimization Clusterings

Another popular approach to clustering starts by defining a cost function over a


parameterized set of possible clusterings and the goal of the clustering algorithm
is to find a partitioning (clustering) of minimal cost. Under this paradigm, the
clustering task is turned into an optimization problem. The objective function
is a function from pairs of an input, (X , d), and a proposed clustering solution
C = (C1 , . . . , Ck ), to positive real numbers. Given such an objective function,
which we denote by G, the goal of a clustering algorithm is defined as finding, for
a given input (X , d), a clustering C so that G((X , d), C) is minimized. In order
to reach that goal, one has to apply some appropriate search algorithm.
As it turns out, most of the resulting optimization problems are NP-hard, and
some are even NP-hard to approximate. Consequently, when people talk about,
say, k-means clustering, they often refer to some particular common approxima-
tion algorithm rather than the cost function or the corresponding exact solution
of the minimization problem.
Many common objective functions require the number of clusters, k, as a

parameter. In practice, it is often up to the user of the clustering algorithm to


choose the parameter k that is most suitable for the given clustering problem.
In the following we describe some of the most common objective functions.
• The k-means objective function is one of the most popular clustering
objectives. In k-means the data is partitioned into disjoint sets C1 , . . . , Ck
where each C_i is represented by a centroid μ_i. It is assumed that the input
set X is embedded in some larger metric space (X′, d) (so that X ⊆ X′)
and centroids are members of X′. The k-means objective function measures
the squared distance between each point in X and the centroid of its cluster.
The centroid of C_i is defined to be

    μ_i(C_i) = argmin_{μ ∈ X′} Σ_{x ∈ C_i} d(x, μ)².

Then, the k-means objective is

    G_{k-means}((X, d), (C_1, . . . , C_k)) = Σ_{i=1}^{k} Σ_{x ∈ C_i} d(x, μ_i(C_i))².

This can also be rewritten as

    G_{k-means}((X, d), (C_1, . . . , C_k)) = min_{μ_1,...,μ_k ∈ X′} Σ_{i=1}^{k} Σ_{x ∈ C_i} d(x, μ_i)².        (22.1)

The k-means objective function is relevant, for example, in digital com-


munication tasks, where the members of X may be viewed as a collection
of signals that have to be transmitted. While X may be a very large set
of real valued vectors, digital transmission allows transmitting of only a
finite number of bits for each signal. One way to achieve good transmis-
sion under such constraints is to represent each member of X by a “close”
member of some finite set µ1 , . . . µk , and replace the transmission of any
x 2 X by transmitting the index of the closest µi . The k-means objective
can be viewed as a measure of the distortion created by such a transmission
representation scheme.
• The k-medoids objective function is similar to the k-means objective,
except that it requires the cluster centroids to be members of the input
set. The objective function is defined by
    G_{k-medoid}((X, d), (C_1, . . . , C_k)) = min_{μ_1,...,μ_k ∈ X} Σ_{i=1}^{k} Σ_{x ∈ C_i} d(x, μ_i)².

• The k-median objective function is quite similar to the k-medoids objec-


tive, except that the “distortion” between a data point and the centroid
of its cluster is measured by distance, rather than by the square of the
distance:
    G_{k-median}((X, d), (C_1, . . . , C_k)) = min_{μ_1,...,μ_k ∈ X} Σ_{i=1}^{k} Σ_{x ∈ C_i} d(x, μ_i).

An example where such an objective makes sense is the facility location


problem. Consider the task of locating k fire stations in a city. One can
model houses as data points and aim to place the stations so as to minimize
the average distance between a house and its closest fire station.

The previous examples can all be viewed as center-based objectives. The so-
lution to such a clustering problem is determined by a set of cluster centers,
and the clustering assigns each instance to the center closest to it. More gener-
ally, a center-based objective is determined by choosing some monotonic function
f : R+ → R+ and then defining

    G_f((X, d), (C_1, . . . , C_k)) = min_{μ_1,...,μ_k ∈ X′} Σ_{i=1}^{k} Σ_{x ∈ C_i} f(d(x, μ_i)),

where X′ is either X or some superset of X.


Some objective functions are not center based. For example, the sum of in-
cluster distances (SOD)
    G_{SOD}((X, d), (C_1, . . . , C_k)) = Σ_{i=1}^{k} Σ_{x,y ∈ C_i} d(x, y)

and the MinCut objective that we shall discuss in Section 22.3 are not center-
based objectives.

22.2.1 The k-Means Algorithm


The k-means objective function is quite popular in practical applications of clus-
tering. However, it turns out that finding the optimal k-means solution is of-
ten computationally infeasible (the problem is NP-hard, and even NP-hard to
approximate to within some constant). As an alternative, the following simple
iterative algorithm is often used, so often that, in many cases, the term k-means
Clustering refers to the outcome of this algorithm rather than to the cluster-
ing that minimizes the k-means objective cost. We describe the algorithm with
respect to the Euclidean distance function d(x, y) = ‖x − y‖.

k-Means
input: X ⊂ R^n ; number of clusters k
initialize: Randomly choose initial centroids μ_1, . . . , μ_k
repeat until convergence
    ∀i ∈ [k] set C_i = {x ∈ X : i = argmin_j ‖x − μ_j‖}
      (break ties in some arbitrary manner)
    ∀i ∈ [k] update μ_i = (1/|C_i|) Σ_{x ∈ C_i} x
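
A minimal numpy sketch of this iterative procedure follows. The function name, the convergence
test, and the handling of empty clusters are our own choices, not part of the book's pseudocode:

    import numpy as np

    def k_means(X, k, n_iter=100, seed=0):
        """Lloyd's iterations for the k-means objective.

        X : (m, d) array of data points.
        k : number of clusters.
        Returns (centroids, labels), where labels[i] is the cluster of X[i].
        """
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]  # random initial centroids
        for _ in range(n_iter):
            # assignment step: each point joins its closest centroid
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # update step: each centroid moves to the mean of its cluster
            # (an empty cluster keeps its old centroid -- our own convention)
            new_centroids = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                                      else centroids[i] for i in range(k)])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

As recommended in the text, running this several times with different random seeds and keeping
the solution with the smallest objective value usually improves the result.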

lemma 22.1 Each iteration of the k-means algorithm does not increase the
k-means objective function (as given in Equation (22.1)).

Proof To simplify the notation, let us use the shorthand G(C1 , . . . , Ck ) for the
k-means objective, namely,

    G(C_1, . . . , C_k) = min_{μ_1,...,μ_k ∈ R^n} Σ_{i=1}^{k} Σ_{x ∈ C_i} ‖x − μ_i‖².        (22.2)

It is convenient to define μ(C_i) = (1/|C_i|) Σ_{x ∈ C_i} x and note that
μ(C_i) = argmin_{μ ∈ R^n} Σ_{x ∈ C_i} ‖x − μ‖². Therefore, we can rewrite the k-means
objective as

    G(C_1, . . . , C_k) = Σ_{i=1}^{k} Σ_{x ∈ C_i} ‖x − μ(C_i)‖².        (22.3)

Consider the update at iteration t of the k-means algorithm. Let C_1^{(t−1)}, . . . , C_k^{(t−1)}
be the previous partition, let μ_i^{(t−1)} = μ(C_i^{(t−1)}), and let C_1^{(t)}, . . . , C_k^{(t)} be the
new partition assigned at iteration t. Using the definition of the objective as
given in Equation (22.2) we clearly have that

    G(C_1^{(t)}, . . . , C_k^{(t)}) ≤ Σ_{i=1}^{k} Σ_{x ∈ C_i^{(t)}} ‖x − μ_i^{(t−1)}‖².        (22.4)

In addition, the definition of the new partition (C_1^{(t)}, . . . , C_k^{(t)}) implies that it
minimizes the expression Σ_{i=1}^{k} Σ_{x ∈ C_i} ‖x − μ_i^{(t−1)}‖² over all possible partitions
(C_1, . . . , C_k). Hence,

    Σ_{i=1}^{k} Σ_{x ∈ C_i^{(t)}} ‖x − μ_i^{(t−1)}‖² ≤ Σ_{i=1}^{k} Σ_{x ∈ C_i^{(t−1)}} ‖x − μ_i^{(t−1)}‖².        (22.5)

Using Equation (22.3) we have that the right-hand side of Equation (22.5) equals
G(C_1^{(t−1)}, . . . , C_k^{(t−1)}). Combining this with Equation (22.4) and Equation (22.5),
we obtain that G(C_1^{(t)}, . . . , C_k^{(t)}) ≤ G(C_1^{(t−1)}, . . . , C_k^{(t−1)}), which concludes our
proof.

While the preceding lemma tells us that the k-means objective is monotonically
nonincreasing, there is no guarantee on the number of iterations the k-means al-
gorithm needs in order to reach convergence. Furthermore, there is no nontrivial
upper bound on the gap between the value of the k-means objective of the al-
gorithm's output and the minimum possible value of that objective function (see
Exercise 1). In fact, k-means might converge to a point which is not even a local
minimum (see Exercise 2). To improve the results of k-means it is often recommended
to repeat the procedure several times with different randomly chosen initial centroids (e.g.,
we can choose the initial centroids to be random points from the data).

22.3 Spectral Clustering

Often, a convenient way to represent the relationships between points in a data


set X = {x1 , . . . , xm } is by a similarity graph; each vertex represents a data
point xi , and every two vertices are connected by an edge whose weight is their
similarity, W_{i,j} = s(x_i, x_j), where W ∈ R^{m,m}. For example, we can set
W_{i,j} = exp(−d(x_i, x_j)²/σ²), where d(·, ·) is a distance function and σ is a parameter.
The clustering problem can now be formulated as follows: We want to find a
partition of the graph such that the edges between different groups have low
weights and the edges within a group have high weights.
In the clustering objectives described previously, the focus was on one side
of our intuitive definition of clustering – making sure that points in the same
cluster are similar. We now present objectives that focus on the other requirement
– points separated into different clusters should be nonsimilar.

22.3.1 Graph Cut


Given a graph represented by a similarity matrix W , the simplest and most
direct way to construct a partition of the graph is to solve the mincut problem,
which chooses a partition C1 , . . . , Ck that minimizes the objective
    cut(C_1, . . . , C_k) = Σ_{i=1}^{k} Σ_{r ∈ C_i, s ∉ C_i} W_{r,s}.

For k = 2, the mincut problem can be solved efficiently. However, in practice it


often does not lead to satisfactory partitions. The problem is that in many cases,
the solution of mincut simply separates one individual vertex from the rest of the
graph. Of course, this is not what we want to achieve in clustering, as clusters
should be reasonably large groups of points.
Several solutions to this problem have been suggested. The simplest solution
is to normalize the cut and define the normalized mincut objective as follows:
    RatioCut(C_1, . . . , C_k) = Σ_{i=1}^{k} (1/|C_i|) Σ_{r ∈ C_i, s ∉ C_i} W_{r,s}.

The preceding objective assumes smaller values if the clusters are not too small.
Unfortunately, introducing this balancing makes the problem computationally
hard to solve. Spectral clustering is a way to relax the problem of minimizing
RatioCut.

22.3.2 Graph Laplacian and Relaxed Graph Cuts


The main mathematical object for spectral clustering is the graph Laplacian
matrix. There are several different definitions of graph Laplacian in the literature,
and in the following we describe one particular definition.

definition 22.2 (Unnormalized Graph Laplacian) The unnormalized graph
Laplacian is the m × m matrix L = D − W, where D is a diagonal matrix with
D_{i,i} = Σ_{j=1}^{m} W_{i,j}. The matrix D is called the degree matrix.

The following lemma underscores the relation between RatioCut and the Lapla-
cian matrix.

lemma 22.3 Let C_1, . . . , C_k be a clustering and let H ∈ R^{m,k} be the matrix
such that

    H_{i,j} = (1/√|C_j|) · 1[i ∈ C_j].

Then, the columns of H are orthonormal to each other and

    RatioCut(C_1, . . . , C_k) = trace(H^⊤ L H).

Proof Let h_1, . . . , h_k be the columns of H. The fact that these vectors are
orthonormal is immediate from the definition. Next, by standard algebraic
manipulations, it can be shown that trace(H^⊤ L H) = Σ_{i=1}^{k} h_i^⊤ L h_i and that for
any vector v we have

    v^⊤ L v = (1/2) ( Σ_r D_{r,r} v_r² − 2 Σ_{r,s} v_r v_s W_{r,s} + Σ_s D_{s,s} v_s² ) = (1/2) Σ_{r,s} W_{r,s} (v_r − v_s)².

Applying this with v = h_i and noting that (h_{i,r} − h_{i,s})² is nonzero only if
r ∈ C_i, s ∉ C_i or the other way around, we obtain that

    h_i^⊤ L h_i = (1/|C_i|) Σ_{r ∈ C_i, s ∉ C_i} W_{r,s}.

Therefore, to minimize RatioCut we can search for a matrix H whose columns
are orthonormal and such that each H_{i,j} is either 0 or 1/√|C_j|. Unfortunately,
this is an integer programming problem which we cannot solve efficiently. Instead,
we relax the latter requirement and simply search for an orthonormal matrix H ∈
R^{m,k} that minimizes trace(H^⊤ L H). As we will see in the next chapter about
PCA (particularly, the proof of Theorem 23.2), the solution to this problem is
to set U to be the matrix whose columns are the eigenvectors corresponding to
the k minimal eigenvalues of L. The resulting algorithm is called Unnormalized
Spectral Clustering.

22.3.3 Unnormalized Spectral Clustering


Unnormalized Spectral Clustering
Input: W 2 Rm,m ; Number of clusters k
Initialize: Compute the unnormalized graph Laplacian L
Let U 2 Rm,k be the matrix whose columns are the eigenvectors of L
corresponding to the k smallest eigenvalues
Let v1 , . . . , vm be the rows of U
Cluster the points v1 , . . . , vm using k-means
Output: Clusters C_1, . . . , C_k of the k-means algorithm

The spectral clustering algorithm starts with finding the matrix H of the k
eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian
matrix. It then represents points according to the rows of H. It is due to the
properties of the graph Laplacians that this change of representation is useful.
In many situations, this change of representation enables the simple k-means
algorithm to detect the clusters seamlessly. Intuitively, if H is as defined in
Lemma 22.3 then each point in the new representation is an indicator vector
whose value is nonzero only on the element corresponding to the cluster it belongs
to.
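
A minimal numpy sketch of this pipeline (building a Gaussian similarity graph and then running
unnormalized spectral clustering) might look as follows. The function names, the use of
numpy.linalg.eigh, and the reuse of the k_means sketch from Section 22.2.1 are our own
illustrative choices:

    import numpy as np

    def gaussian_similarity(X, sigma):
        """W[i, j] = exp(-||x_i - x_j||^2 / sigma^2)."""
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
        return np.exp(-sq_dists / sigma ** 2)

    def unnormalized_spectral_clustering(W, k):
        """Cluster m points given their m x m symmetric similarity matrix W."""
        D = np.diag(W.sum(axis=1))      # degree matrix
        L = D - W                        # unnormalized graph Laplacian
        # eigh returns eigenvalues in ascending order for symmetric matrices,
        # so the first k columns span the relaxed RatioCut solution.
        _, eigvecs = np.linalg.eigh(L)
        U = eigvecs[:, :k]               # row i of U is the new representation of point i
        _, labels = k_means(U, k)        # k_means from the sketch in Section 22.2.1
        return labels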

22.4 Information Bottleneck*

The information bottleneck method is a clustering technique introduced by


Tishby, Pereira, and Bialek. It relies on notions from information theory. To
illustrate the method, consider the problem of clustering text documents where
each document is represented as a bag-of-words; namely, each document is a
vector x ∈ {0, 1}^n, where n is the size of the dictionary and x_i = 1 iff the word
corresponding to index i appears in the document. Given a set of m documents,
we can interpret the bag-of-words representation of the m documents as a joint
probability over a random variable x, indicating the identity of a document (thus
taking values in [m]), and a random variable y, indicating the identity of a word
in the dictionary (thus taking values in [n]).
With this interpretation, the information bottleneck refers to the identity of
a clustering as another random variable, denoted C, that takes values in [k]
(where k will be set by the method as well). Once we have formulated x, y, C
as random variables, we can use tools from information theory to express a
clustering objective. In particular, the information bottleneck objective is

    min_{p(C|x)}  I(x; C) − β I(C; y),

where I(·; ·) is the mutual information between two random variables,¹ β is a
parameter, and the minimization is over all possible probabilistic assignments of


points to clusters. Intuitively, we would like to achieve two contradictory goals.
On one hand, we would like the mutual information between the identity of
the document and the identity of the cluster to be as small as possible. This
reflects the fact that we would like a strong compression of the original data. On
the other hand, we would like high mutual information between the clustering
variable and the identity of the words, which reflects the goal that the “relevant”
information about the document (as reflected by the words that appear in the
document) is retained. This generalizes the classical notion of minimal sufficient
statistics2 used in parametric statistics to arbitrary distributions.
Solving the optimization problem associated with the information bottleneck
principle is hard in the general case. Some of the proposed methods are similar
to the EM principle, which we will discuss in Chapter 24.

22.5 A High Level View of Clustering

So far, we have mainly listed various useful clustering tools. However, some fun-
damental questions remain unaddressed. First and foremost, what is clustering?
What is it that distinguishes a clustering algorithm from any arbitrary function
that takes an input space and outputs a partition of that space? Are there any
basic properties of clustering that are independent of any specific algorithm or
task?
One method for addressing such questions is via an axiomatic approach. There
have been several attempts to provide an axiomatic definition of clustering. Let
us demonstrate this approach by presenting the attempt made by Kleinberg
(2003).
Consider a clustering function, F , that takes as input any finite domain X
with a dissimilarity function d over its pairs and returns a partition of X .
Consider the following three properties of such a function:

Scale Invariance (SI) For any domain set X, dissimilarity function d, and
any α > 0, the following should hold: F(X, d) = F(X, αd) (where
(αd)(x, y) := α · d(x, y)).
Richness (Ri) For any finite X and every partition C = (C1 , . . . Ck ) of X (into
nonempty subsets) there exists some dissimilarity function d over X such
that F (X , d) = C.
1 That is, given a probability function p over the pairs (x, C),
I(x; C) = Σ_a Σ_b p(a, b) log( p(a, b) / (p(a) p(b)) ), where the sum is over all values x can take and all
values C can take.
2 A sufficient statistic is a function of the data which has the property of sufficiency with
respect to a statistical model and its associated unknown parameter, meaning that “no
other statistic which can be calculated from the same sample provides any additional
information as to the value of the parameter.” For example, if we assume that a variable is
distributed normally with a unit variance and an unknown expectation, then the average
function is a sufficient statistic.

Consistency (Co) If d and d′ are dissimilarity functions over X, such that
for every x, y ∈ X, if x, y belong to the same cluster in F(X, d) then
d′(x, y) ≤ d(x, y), and if x, y belong to different clusters in F(X, d) then
d′(x, y) ≥ d(x, y), then F(X, d) = F(X, d′).
A moment of reflection reveals that the Scale Invariance is a very natural
requirement – it would be odd to have the result of a clustering function depend
on the units used to measure between-point distances. The Richness requirement
basically states that the outcome of the clustering function is fully controlled by
the function d, which is also a very intuitive feature. The third requirement,
Consistency, is the only requirement that refers to the basic (informal) definition
of clustering – we wish that similar points will be clustered together and that
dissimilar points will be separated into different clusters, and therefore, if points
that already share a cluster become more similar, and points that are already
separated become even less similar to each other, the clustering function should
have even stronger “support” of its previous clustering decisions.
However, Kleinberg (2003) has shown the following “impossibility” result:
theorem 22.4 There exists no function, F , that satisfies all the three proper-
ties: Scale Invariance, Richness, and Consistency.
Proof Assume, by way of contradiction, that some F does satisfy all three
properties. Pick some domain set X with at least three points. By Richness,
there must be some d1 such that F (X , d1 ) = {{x} : x 2 X } and there also exists
some d_2 such that F(X, d_2) ≠ F(X, d_1).
Let α ∈ R+ be such that for every x, y ∈ X, αd_2(x, y) ≥ d_1(x, y). Let d_3 =
αd_2. Consider F(X, d_3). By the Scale Invariance property of F, we should have
F(X, d_3) = F(X, d_2). On the other hand, since all distinct x, y ∈ X reside in
different clusters w.r.t. F(X, d_1), and d_3(x, y) ≥ d_1(x, y), the Consistency of F
implies that F(X, d_3) = F(X, d_1). This is a contradiction, since we chose d_1, d_2
so that F(X, d_2) ≠ F(X, d_1).
It is important to note that there is no single “bad property” among the three
properties. For every pair of the three axioms, there exist natural clustering
functions that satisfy the two properties in that pair (one can even construct such
examples just by varying the stopping criteria for the Single Linkage clustering
function). On the other hand, Kleinberg shows that any clustering algorithm
that minimizes any center-based objective function inevitably fails the consis-
tency property (yet, the k-sum-of-in-cluster-distances minimization clustering
does satisfy Consistency).
The Kleinberg impossibility result can be easily circumvented by varying the
properties. For example, if one wishes to discuss clustering functions that have
a fixed number-of-clusters parameter, then it is natural to replace Richness by
k-Richness (namely, the requirement that every partition of the domain into k
subsets is attainable by the clustering function). k-Richness, Scale Invariance
and Consistency all hold for the k-means clustering and are therefore consistent.

Alternatively, one can relax the Consistency property. For example, say that two
clusterings C = (C_1, . . . , C_k) and C′ = (C′_1, . . . , C′_l) are compatible if for every
pair of clusters C_i ∈ C and C′_j ∈ C′, either C_i ⊆ C′_j or C′_j ⊆ C_i or C_i ∩ C′_j = ∅ (it is
worthwhile noting that for every dendrogram, every two clusterings that are ob-
tained by trimming that dendrogram are compatible). “Refinement Consistency”
is the requirement that, under the assumptions of the Consistency property, the
new clustering F(X, d′) is compatible with the old clustering F(X, d). Many
common clustering functions satisfy this requirement as well as Scale Invariance
and Richness. Furthermore, one can come up with many other, different, prop-
erties of clustering functions that sound intuitive and desirable and are satisfied
by some common clustering functions.
There are many ways to interpret these results. We suggest viewing them as indi-
cating that there is no “ideal” clustering function. Every clustering function will
inevitably have some “undesirable” properties. The choice of a clustering func-
tion for any given task must therefore take into account the specific properties
of that task. There is no generic clustering solution, just as there is no clas-
sification algorithm that will learn every learnable task (as the No-Free-Lunch
theorem shows). Clustering, just like classification prediction, must take into
account some prior knowledge about the specific task at hand.

22.6 Summary

Clustering is an unsupervised learning problem, in which we wish to partition


a set of points into “meaningful” subsets. We presented several clustering ap-
proaches including linkage-based algorithms, the k-means family, spectral clus-
tering, and the information bottleneck. We discussed the difficulty of formalizing
the intuitive meaning of clustering.

22.7 Bibliographic Remarks

The k-means algorithm is sometimes named Lloyd’s algorithm, after Stuart


Lloyd, who proposed the method in 1957. For a more complete overview of
spectral clustering we refer the reader to the excellent tutorial by Von Luxburg
(2007). The information bottleneck method was introduced by Tishby, Pereira
& Bialek (1999). For an additional discussion on the axiomatic approach see
Ackerman & Ben-David (2008).

22.8 Exercises

1. Suboptimality of k-Means: For every parameter t > 1, show that there


exists an instance of the k-means problem for which the k-means algorithm

(might) find a solution whose k-means objective is at least t · OPT, where


OPT is the minimum k-means objective.
2. k-Means Might Not Necessarily Converge to a Local Minimum:
Show that the k-means algorithm might converge to a point which is not
a local minimum. Hint: Suppose that k = 2 and the sample points are
{1, 2, 3, 4} ⊂ R; suppose we initialize the k-means with the centers {2, 4};
and suppose we break ties in the definition of C_i by assigning i to be the
smallest value in argmin_j ‖x − μ_j‖.
3. Given a metric space (X , d), where |X | < 1, and k 2 N, we would like to find
a partition of X into C1 , . . . , Ck which minimizes the expression

    G_{k-diam}((X, d), (C_1, . . . , C_k)) = max_{j ∈ [k]} diam(C_j),

where diam(C_j) = max_{x,x′ ∈ C_j} d(x, x′) (we use the convention diam(C_j) = 0
if |C_j| < 2).
Similarly to the k-means objective, it is NP-hard to minimize the k-
diam objective. Fortunately, we have a very simple approximation algorithm:
Initially, we pick some x 2 X and set µ1 = x. Then, the algorithm iteratively
sets
    ∀j ∈ {2, . . . , k},   μ_j = argmax_{x ∈ X} min_{i ∈ [j−1]} d(x, μ_i).

Finally, we set

    ∀i ∈ [k],   C_i = {x ∈ X : i = argmin_{j ∈ [k]} d(x, μ_j)}.

Prove that the algorithm described is a 2-approximation algorithm. That


is, if we denote its output by Ĉ_1, . . . , Ĉ_k, and denote the optimal solution by
C*_1, . . . , C*_k, then

    G_{k-diam}((X, d), (Ĉ_1, . . . , Ĉ_k)) ≤ 2 · G_{k-diam}((X, d), (C*_1, . . . , C*_k)).

Hint: Consider the point μ_{k+1} (in other words, the next center we would have
chosen, if we wanted k + 1 clusters). Let r = min_{j ∈ [k]} d(μ_j, μ_{k+1}). Prove the
following inequalities:

    G_{k-diam}((X, d), (Ĉ_1, . . . , Ĉ_k)) ≤ 2r
    G_{k-diam}((X, d), (C*_1, . . . , C*_k)) ≥ r.

4. Recall that a clustering function, F , is called Center-Based Clustering if, for


some monotonic function f : R+ → R+, on every given input (X, d), F(X, d)
is a clustering that minimizes the objective

    G_f((X, d), (C_1, . . . , C_k)) = min_{μ_1,...,μ_k ∈ X′} Σ_{i=1}^{k} Σ_{x ∈ C_i} f(d(x, μ_i)),

where X′ is either X or some superset of X.



Prove that for every k > 1 the k-diam clustering function defined in the
previous exercise is not a center-based clustering function.
Hint: Given a clustering input (X, d), with |X| > 2, consider the effect of
adding many close-by points to some (but not all) of the members of X , on
either the k-diam clustering or any given center-based clustering.
5. Recall that we discussed three clustering “properties”: Scale Invariance, Rich-
ness, and Consistency. Consider the Single Linkage clustering algorithm.
1. Find which of the three properties is satisfied by Single Linkage with the
Fixed Number of Clusters (any fixed nonzero number) stopping rule.
2. Find which of the three properties is satisfied by Single Linkage with the
Distance Upper Bound (any fixed nonzero upper bound) stopping rule.
3. Show that for any pair of these properties there exists a stopping criterion
for Single Linkage clustering, under which these two axioms are satisfied.
6. Given some number k, let k-Richness be the following requirement:
For any finite X and every partition C = (C1 , . . . Ck ) of X (into nonempty subsets)
there exists some dissimilarity function d over X such that F (X , d) = C.
Prove that, for every number k, there exists a clustering function that
satisfies the three properties: Scale Invariance, k-Richness, and Consistency.
23 Dimensionality Reduction

Dimensionality reduction is the process of taking data in a high dimensional


space and mapping it into a new space whose dimensionality is much smaller.
This process is closely related to the concept of (lossy) compression in infor-
mation theory. There are several reasons to reduce the dimensionality of the
data. First, high dimensional data impose computational challenges. Moreover,
in some situations high dimensionality might lead to poor generalization abili-
ties of the learning algorithm (for example, in Nearest Neighbor classifiers the
sample complexity increases exponentially with the dimension—see Chapter 19).
Finally, dimensionality reduction can be used for interpretability of the data, for
finding meaningful structure of the data, and for illustration purposes.
In this chapter we describe popular methods for dimensionality reduction. In
those methods, the reduction is performed by applying a linear transformation
to the original data. That is, if the original data is in Rd and we want to embed
it into Rn (n < d) then we would like to find a matrix W 2 Rn,d that induces
the mapping x 7! W x. A natural criterion for choosing W is in a way that will
enable a reasonable recovery of the original x. It is not hard to show that in
general, exact recovery of x from W x is impossible (see Exercise 1).
The first method we describe is called Principal Component Analysis (PCA).
In PCA, both the compression and the recovery are performed by linear transfor-
mations and the method finds the linear transformations for which the differences
between the recovered vectors and the original vectors are minimal in the least
squared sense.
Next, we describe dimensionality reduction using random matrices W . We
derive an important lemma, often called the “Johnson-Lindenstrauss lemma,”
which analyzes the distortion caused by such a random dimensionality reduction
technique.
Last, we show how one can reduce the dimension of all sparse vectors using
again a random matrix. This process is known as Compressed Sensing. In this
case, the recovery process is nonlinear but can still be implemented efficiently
using linear programming.
We conclude by underscoring the underlying “prior assumptions” behind PCA
and compressed sensing, which can help us understand the merits and pitfalls of
the two methods.

Understanding Machine Learning, c 2014 by Shai Shalev-Shwartz and Shai Ben-David


Published 2014 by Cambridge University Press.
Personal use only. Not for distribution. Do not post.
Please link to http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning
324 Dimensionality Reduction

23.1 Principal Component Analysis (PCA)

Let x1 , . . . , xm be m vectors in Rd . We would like to reduce the dimensional-


ity of these vectors using a linear transformation. A matrix W 2 Rn,d , where
n < d, induces a mapping x 7! W x, where W x 2 Rn is the lower dimensionality
representation of x. Then, a second matrix U 2 Rd,n can be used to (approxi-
mately) recover each original vector x from its compressed version. That is, for
a compressed vector y = W x, where y is in the low dimensional space Rn , we
can construct x̃ = U y, so that x̃ is the recovered version of x and resides in the
original high dimensional space Rd .
In PCA, we find the compression matrix W and the recovering matrix U so
that the total squared distance between the original and recovered vectors is
minimal; namely, we aim at solving the problem
    argmin_{W ∈ R^{n,d}, U ∈ R^{d,n}}  Σ_{i=1}^{m} ‖x_i − U W x_i‖₂².        (23.1)

To solve this problem we first show that the optimal solution takes a specific
form.
lemma 23.1 Let (U, W) be a solution to Equation (23.1). Then the columns of
U are orthonormal (namely, U^⊤U is the identity matrix of R^n) and W = U^⊤.

Proof Fix any U, W and consider the mapping x ↦ UWx. The range of this
mapping, R = {UWx : x ∈ R^d}, is an n dimensional linear subspace of R^d. Let
V ∈ R^{d,n} be a matrix whose columns form an orthonormal basis of this subspace,
namely, the range of V is R and V^⊤V = I. Therefore, each vector in R can be
written as Vy where y ∈ R^n. For every x ∈ R^d and y ∈ R^n we have

    ‖x − Vy‖₂² = ‖x‖² + y^⊤V^⊤Vy − 2y^⊤V^⊤x = ‖x‖² + ‖y‖² − 2y^⊤(V^⊤x),

where we used the fact that V^⊤V is the identity matrix of R^n. Minimizing the
preceding expression with respect to y by comparing the gradient with respect
to y to zero gives that y = V^⊤x. Therefore, for each x we have that

    VV^⊤x = argmin_{x̃ ∈ R} ‖x − x̃‖₂².

In particular this holds for x_1, . . . , x_m and therefore we can replace U, W by
V, V^⊤ and by that do not increase the objective:

    Σ_{i=1}^{m} ‖x_i − UWx_i‖₂² ≥ Σ_{i=1}^{m} ‖x_i − VV^⊤x_i‖₂².

Since this holds for every U, W the proof of the lemma follows.
On the basis of the preceding lemma, we can rewrite the optimization problem
given in Equation (23.1) as follows:
    argmin_{U ∈ R^{d,n} : U^⊤U = I}  Σ_{i=1}^{m} ‖x_i − U U^⊤ x_i‖₂².        (23.2)

We further simplify the optimization problem by using the following elementary
algebraic manipulations. For every x ∈ R^d and a matrix U ∈ R^{d,n} such that
U^⊤U = I we have

    ‖x − UU^⊤x‖² = ‖x‖² − 2x^⊤UU^⊤x + x^⊤UU^⊤UU^⊤x
                 = ‖x‖² − x^⊤UU^⊤x
                 = ‖x‖² − trace(U^⊤ x x^⊤ U),        (23.3)

where the trace of a matrix is the sum of its diagonal entries. Since the trace is
a linear operator, this allows us to rewrite Equation (23.2) as follows:

    argmax_{U ∈ R^{d,n} : U^⊤U = I}  trace( U^⊤ Σ_{i=1}^{m} x_i x_i^⊤ U ).        (23.4)
Let A = Σ_{i=1}^{m} x_i x_i^⊤. The matrix A is symmetric and therefore it can be
written using its spectral decomposition as A = VDV^⊤, where D is diagonal and
V^⊤V = VV^⊤ = I. Here, the elements on the diagonal of D are the eigenvalues of
A and the columns of V are the corresponding eigenvectors. We assume without
loss of generality that D_{1,1} ≥ D_{2,2} ≥ · · · ≥ D_{d,d}. Since A is positive semidefinite
it also holds that D_{d,d} ≥ 0. We claim that the solution to Equation (23.4) is
the matrix U whose columns are the n eigenvectors of A corresponding to the
largest n eigenvalues.
theorem 23.2 Let x_1, . . . , x_m be arbitrary vectors in R^d, let A = Σ_{i=1}^{m} x_i x_i^⊤,
and let u_1, . . . , u_n be n eigenvectors of the matrix A corresponding to the largest
n eigenvalues of A. Then, the solution to the PCA optimization problem given
in Equation (23.1) is to set U to be the matrix whose columns are u_1, . . . , u_n
and to set W = U^⊤.
Proof Let VDV^⊤ be the spectral decomposition of A. Fix some matrix U ∈ R^{d,n}
with orthonormal columns and let B = V^⊤U. Then, VB = VV^⊤U = U. It
follows that

    U^⊤AU = B^⊤V^⊤VDV^⊤VB = B^⊤DB,

and therefore

    trace(U^⊤AU) = Σ_{j=1}^{d} D_{j,j} Σ_{i=1}^{n} B_{j,i}².

Note that B^⊤B = U^⊤VV^⊤U = U^⊤U = I. Therefore, the columns of B are
also orthonormal, which implies that Σ_{j=1}^{d} Σ_{i=1}^{n} B_{j,i}² = n. In addition, let B̃ ∈ R^{d,d}
be a matrix such that its first n columns are the columns of B and in
addition B̃^⊤B̃ = I. Then, for every j we have Σ_{i=1}^{d} B̃_{j,i}² = 1, which implies that
Σ_{i=1}^{n} B_{j,i}² ≤ 1. It follows that

    trace(U^⊤AU) ≤ max_{β ∈ [0,1]^d : ‖β‖₁ ≤ n}  Σ_{j=1}^{d} D_{j,j} β_j.

It is not hard to verify (see Exercise 2) that the right-hand side equals Σ_{j=1}^{n} D_{j,j}.
We have therefore shown that for every matrix U ∈ R^{d,n} with orthonormal
columns it holds that trace(U^⊤AU) ≤ Σ_{j=1}^{n} D_{j,j}. On the other hand,
if we set U to be the matrix whose columns are the n leading eigenvectors of A
we obtain that trace(U^⊤AU) = Σ_{j=1}^{n} D_{j,j}, and this concludes our proof.

Remark 23.1 The proof of Theorem 23.2 also tells us that the value of the
objective of Equation (23.4) is Σ_{i=1}^{n} D_{i,i}. Combining this with Equation (23.3)
and noting that Σ_{i=1}^{m} ‖x_i‖² = trace(A) = Σ_{i=1}^{d} D_{i,i}, we obtain that the optimal
objective value of Equation (23.1) is Σ_{i=n+1}^{d} D_{i,i}.

Remark 23.2 It is a common practice to “center” the examples before applying
PCA. That is, we first calculate μ = (1/m) Σ_{i=1}^{m} x_i and then apply PCA on the
vectors (x_1 − μ), . . . , (x_m − μ). This is also related to the interpretation of PCA
as variance maximization (see Exercise 4).

23.1.1 A More Efficient Solution for the Case d ≫ m


In some situations the original dimensionality of the data is much larger than
the number of examples m. The computational complexity of calculating the
PCA solution as described previously is O(d³) (for calculating eigenvalues of A)
plus O(md²) (for constructing the matrix A). We now show a simple trick that
enables us to calculate the PCA solution more efficiently when d ≫ m.
Recall that the matrix A is defined to be Σ_{i=1}^{m} x_i x_i^⊤. It is convenient to rewrite
A = X^⊤X where X ∈ R^{m,d} is a matrix whose ith row is x_i^⊤. Consider the
matrix B = XX^⊤. That is, B ∈ R^{m,m} is the matrix whose i, j element equals
⟨x_i, x_j⟩. Suppose that u is an eigenvector of B: That is, Bu = λu for some
λ ∈ R. Multiplying the equality by X^⊤ and using the definition of B we obtain
X^⊤XX^⊤u = λX^⊤u. But, using the definition of A, we get that A(X^⊤u) =
λ(X^⊤u). Thus, X^⊤u/‖X^⊤u‖ is an eigenvector of A with eigenvalue λ.
We can therefore calculate the PCA solution by calculating the eigenvalues of
B instead of A. The complexity is O(m³) (for calculating eigenvalues of B) plus
O(m²d) (for constructing the matrix B).

Remark 23.3 The previous discussion also implies that to calculate the PCA
solution we only need to know how to calculate inner products between vectors.
This enables us to calculate PCA implicitly even when d is very large (or even
infinite) using kernels, which yields the kernel PCA algorithm.

23.1.2 Implementation and Demonstration


A pseudocode of PCA is given in the following.


Figure 23.1 A set of vectors in R2 (blue x’s) and their reconstruction after
dimensionality reduction to R1 using PCA (red circles).

PCA
input
  A matrix of m examples X ∈ R^{m,d}
  number of components n
if (m > d)
  A = X^⊤X
  Let u_1, . . . , u_n be the eigenvectors of A with largest eigenvalues
else
  B = XX^⊤
  Let v_1, . . . , v_n be the eigenvectors of B with largest eigenvalues
  for i = 1, . . . , n set u_i = (1/‖X^⊤v_i‖) X^⊤v_i
output: u_1, . . . , u_n
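
A minimal numpy version of this pseudocode might look as follows; the function name and the
optional centering flag are our own additions for illustration:

    import numpy as np

    def pca(X, n_components, center=True):
        """Return the n leading principal directions u_1, ..., u_n as columns.

        X : (m, d) matrix whose rows are the examples.
        Uses the d x d matrix A = X^T X when m > d and the m x m matrix
        B = X X^T otherwise, as in the pseudocode above.
        """
        if center:
            X = X - X.mean(axis=0)                    # see Remark 23.2
        m, d = X.shape
        if m > d:
            A = X.T @ X
            _, eigvecs = np.linalg.eigh(A)            # eigenvalues in ascending order
            U = eigvecs[:, -n_components:][:, ::-1]   # n leading eigenvectors of A
        else:
            B = X @ X.T
            _, eigvecs = np.linalg.eigh(B)
            V = eigvecs[:, -n_components:][:, ::-1]   # n leading eigenvectors of B
            U = X.T @ V
            U = U / np.linalg.norm(U, axis=0)         # u_i = X^T v_i / ||X^T v_i||
        return U                                      # compress: X @ U, recover: (X @ U) @ U.T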

To illustrate how PCA works, let us generate vectors in R² that approximately
reside on a line, namely, on a one dimensional subspace of R². For example,
suppose that each example is of the form (x, x + y) where x is chosen uniformly
at random from [−1, 1] and y is sampled from a Gaussian distribution with mean
0 and standard deviation of 0.1. Suppose we apply PCA on this data. Then, the
eigenvector corresponding to the largest eigenvalue will be close to the vector
(1/√2, 1/√2). When projecting a point (x, x + y) on this principal component
we will obtain the scalar (2x + y)/√2. The reconstruction of the original vector
will be ((x + y/2), (x + y/2)). In Figure 23.1 we depict the original versus
reconstructed data.
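
This demonstration can be reproduced with the pca sketch above; the sample size, random seed,
and tolerance of the comparison are arbitrary choices of ours:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=1000)
    y = rng.normal(0, 0.1, size=1000)
    X = np.column_stack([x, x + y])        # points of the form (x, x + y)

    U = pca(X, n_components=1)             # first principal direction
    print(U.ravel())                       # roughly (1/sqrt(2), 1/sqrt(2)), up to sign

    Xc = X - X.mean(axis=0)
    X_rec = (Xc @ U) @ U.T + X.mean(axis=0)   # reconstructed points, as in Figure 23.1
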
Next, we demonstrate the effectiveness of PCA on a data set of faces. We
extracted images of faces from the Yale data set (Georghiades, Belhumeur &
Kriegman 2001). Each image contains 50 × 50 = 2500 pixels; therefore the original
dimensionality is very high.


Figure 23.2 Images of faces extracted from the Yale data set. Top-Left: the original
images in R^{50×50}. Top-Right: the images after dimensionality reduction to R^{10} and
reconstruction. Middle row: an enlarged version of one of the images before and after
PCA. Bottom: The images after dimensionality reduction to R². The different marks
indicate different individuals.

Some images of faces are depicted on the top-left side of Figure 23.2. Using
PCA, we reduced the dimensionality to R^{10} and reconstructed back to the
original dimension, which is 50². The resulting reconstructed images are depicted
on the top-right side of Figure 23.2. Finally, on the bottom of Figure 23.2 we
depict a 2 dimensional representation of the images. As can be seen, even from a
2 dimensional representation of the images we can still roughly separate different
individuals.

23.2 Random Projections

In this section we show that reducing the dimension by using a random linear
transformation leads to a simple compression scheme with a surprisingly low
distortion. The transformation x 7! W x, when W is a random matrix, is often
referred to as a random projection. In particular, we provide a variant of a famous
lemma due to Johnson and Lindenstrauss, showing that random projections do
not distort Euclidean distances too much.
Let x_1, x_2 be two vectors in R^d. A matrix W does not distort too much the
distance between x_1 and x_2 if the ratio

    ‖Wx_1 − Wx_2‖ / ‖x_1 − x_2‖

is close to 1. In other words, the distances between x_1 and x_2 before and after
the transformation are almost the same. To show that ‖Wx_1 − Wx_2‖ is not too
far away from ‖x_1 − x_2‖ it suffices to show that W does not distort the norm of
the difference vector x = x_1 − x_2. Therefore, from now on we focus on the ratio
‖Wx‖/‖x‖.
We start with analyzing the distortion caused by applying a random projection
to a single vector.

lemma 23.3 Fix some x ∈ R^d. Let W ∈ R^{n,d} be a random matrix such that
each W_{i,j} is an independent normal random variable. Then, for every ε ∈ (0, 3)
we have

    P[ | ‖(1/√n) Wx‖² / ‖x‖² − 1 | > ε ] ≤ 2 e^{−ε²n/6}.

Proof Without loss of generality we can assume that ‖x‖² = 1. Therefore, an
equivalent inequality is

    P[ (1 − ε)n ≤ ‖Wx‖² ≤ (1 + ε)n ] ≥ 1 − 2e^{−ε²n/6}.

Let w_i be the ith row of W. The random variable ⟨w_i, x⟩ is a weighted sum of
d independent normal random variables and therefore it is normally distributed
with zero mean and variance Σ_j x_j² = ‖x‖² = 1. Therefore, the random variable
‖Wx‖² = Σ_{i=1}^{n} (⟨w_i, x⟩)² has a χ²_n distribution. The claim now follows
directly from a measure concentration property of χ² random variables stated in
Lemma B.12 given in Section B.7.

The Johnson-Lindenstrauss lemma follows from this using a simple union


bound argument.

lemma 23.4 (Johnson-Lindenstrauss Lemma) Let Q be a finite set of vectors
in R^d. Let δ ∈ (0, 1) and n be an integer such that

    ε = √( 6 log(2|Q|/δ) / n ) ≤ 3.

Then, with probability of at least 1 − δ over a choice of a random matrix W ∈ R^{n,d}
such that each element of W is distributed normally with zero mean and variance
of 1/n we have

    sup_{x ∈ Q} | ‖Wx‖²/‖x‖² − 1 | < ε.

Proof Combining Lemma 23.3 and the union bound we have that for every
ε ∈ (0, 3):

    P[ sup_{x ∈ Q} | ‖Wx‖²/‖x‖² − 1 | > ε ] ≤ 2 |Q| e^{−ε²n/6}.

Let δ denote the right-hand side of the inequality; thus we obtain that

    ε = √( 6 log(2|Q|/δ) / n ).

Interestingly, the bound given in Lemma 23.4 does not depend on the original
dimension of x. In fact, the bound holds even if x is in an infinite dimensional
Hilbert space.
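
The phenomenon is easy to check numerically. The following sketch is our own illustration,
with arbitrary choices of d, n, and the number of test vectors; it draws a random W with
N(0, 1/n) entries and reports the worst relative distortion of the squared norms over a
finite set Q:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, num_vectors = 10_000, 400, 50

    Q = rng.normal(size=(num_vectors, d))                    # a finite set of vectors in R^d
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, d))      # entries ~ N(0, 1/n)

    before = np.sum(Q ** 2, axis=1)                          # ||x||^2
    after = np.sum((Q @ W.T) ** 2, axis=1)                   # ||W x||^2
    worst_distortion = np.max(np.abs(after / before - 1.0))
    print(worst_distortion)    # typically on the order of sqrt(6 log(2|Q|/delta) / n)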

23.3 Compressed Sensing

Compressed sensing is a dimensionality reduction technique which utilizes a prior


assumption that the original vector is sparse in some basis. To motivate com-
pressed sensing, consider a vector x 2 Rd that has at most s nonzero elements.
That is,
    ‖x‖₀ := |{i : x_i ≠ 0}| ≤ s.
Clearly, we can compress x by representing it using s (index,value) pairs. Fur-
thermore, this compression is lossless – we can reconstruct x exactly from the s
(index,value) pairs. Now, let's take one step forward and assume that x = Uα,
where α is a sparse vector, ‖α‖₀ ≤ s, and U is a fixed orthonormal matrix. That
is, x has a sparse representation in another basis. It turns out that many nat-
ural vectors are (at least approximately) sparse in some representation. In fact,
this assumption underlies many modern compression schemes. For example, the
JPEG-2000 format for image compression relies on the fact that natural images
are approximately sparse in a wavelet basis.
Can we still compress x into roughly s numbers? Well, one simple way to do
this is to multiply x by U^⊤, which yields the sparse vector α, and then represent
α by its s (index,value) pairs. However, this requires us first to “sense” x, to
store it, and then to multiply it by U^⊤. This raises a very natural question: Why
go to so much effort to acquire all the data when most of what we get will be
thrown away? Can't we just directly measure the part that will not end up
being thrown away?

Compressed sensing is a technique that simultaneously acquires and com-


presses the data. The key result is that a random linear transformation can
compress x without losing information. The number of measurements needed is
order of s log(d). That is, we roughly acquire only the important information
about the signal. As we will see later, the price we pay is a slower reconstruction
phase. In some situations, it makes sense to save time in compression even at
the price of a slower reconstruction. For example, a security camera should sense
and compress a large amount of images while most of the time we do not need to
decode the compressed data at all. Furthermore, in many practical applications,
compression by a linear transformation is advantageous because it can be per-
formed efficiently in hardware. For example, a team led by Baraniuk and Kelly
has proposed a camera architecture that employs a digital micromirror array to
perform optical calculations of a linear transformation of an image. In this case,
obtaining each compressed measurement is as easy as obtaining a single raw
measurement. Another important application of compressed sensing is medical
imaging, in which requiring fewer measurements translates to less radiation for
the patient.
Informally, the main premise of compressed sensing is the following three “sur-
prising” results:

1. It is possible to reconstruct any sparse signal fully if it was compressed by
x ↦ Wx, where W is a matrix which satisfies a condition called the Restricted
Isometry Property (RIP). A matrix that satisfies this property is guaranteed
to have a low distortion of the norm of any sparse representable vector.
2. The reconstruction can be calculated in polynomial time by solving a linear
program.
3. A random n × d matrix is likely to satisfy the RIP condition provided that n
is greater than an order of s log(d).

Formally,

definition 23.5 (RIP) A matrix W ∈ R^{n,d} is (ε, s)-RIP if for all x ≠ 0 s.t.
‖x‖₀ ≤ s we have

    | ‖Wx‖₂² / ‖x‖₂² − 1 | ≤ ε.
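
Verifying the RIP condition exactly is intractable in general, but a crude sampling-based
sanity check can estimate the distortion over random s-sparse vectors. The following sketch
is entirely our own illustration; it only samples sparse vectors and therefore lower-bounds
the true worst-case distortion rather than certifying RIP:

    import numpy as np

    def estimate_rip_distortion(W, s, trials=1000, seed=0):
        """Estimate max | ||W x||^2 / ||x||^2 - 1 | over random s-sparse x.

        A sanity check only: it samples sparse vectors, so it does not prove
        that W is (eps, s)-RIP.
        """
        rng = np.random.default_rng(seed)
        n, d = W.shape
        worst = 0.0
        for _ in range(trials):
            x = np.zeros(d)
            support = rng.choice(d, size=s, replace=False)   # random support of size s
            x[support] = rng.normal(size=s)
            ratio = np.sum((W @ x) ** 2) / np.sum(x ** 2)
            worst = max(worst, abs(ratio - 1.0))
        return worst
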
The first theorem establishes that RIP matrices yield a lossless compression
scheme for sparse vectors. It also provides a (nonefficient) reconstruction scheme.

theorem 23.6 Let ε < 1 and let W be an (ε, 2s)-RIP matrix. Let x be a vector
s.t. ‖x‖₀ ≤ s, let y = Wx be the compression of x, and let

    x̃ ∈ argmin_{v : Wv = y} ‖v‖₀

be a reconstructed vector. Then, x̃ = x.


332 Dimensionality Reduction

Proof We assume, by way of contradiction, that x̃ ≠ x. Since x satisfies the
constraints in the optimization problem that defines x̃ we clearly have that
‖x̃‖₀ ≤ ‖x‖₀ ≤ s. Therefore, ‖x − x̃‖₀ ≤ 2s and we can apply the RIP inequality
on the vector x − x̃. But, since W(x − x̃) = 0 we get that |0 − 1| ≤ ε,
which leads to a contradiction.

The reconstruction scheme given in Theorem 23.6 seems to be nonefficient


because we need to minimize a combinatorial objective (the sparsity of v). Quite
surprisingly, it turns out that we can replace the combinatorial objective, kvk0 ,
with a convex objective, kvk1 , which leads to a linear programming problem that
can be solved efficiently. This is stated formally in the following theorem.

theorem 23.7 Assume that the conditions of Theorem 23.6 hold and that
ε < 1/(1 + √2). Then,

    x = argmin_{v : Wv = y} ‖v‖₀ = argmin_{v : Wv = y} ‖v‖₁.
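
Concretely, the ℓ1 minimization in Theorem 23.7 can be cast as a linear program by splitting
v into its positive and negative parts. The sketch below is our own illustration (it assumes
scipy.optimize.linprog is available; the function name and the example dimensions are ours):

    import numpy as np
    from scipy.optimize import linprog

    def l1_reconstruct(W, y):
        """Solve argmin ||v||_1 subject to W v = y via a linear program.

        Write v = p - q with p, q >= 0; minimizing sum(p) + sum(q) subject to
        W p - W q = y gives ||v||_1 at the optimum.
        """
        n, d = W.shape
        c = np.ones(2 * d)                 # objective: sum of p and q entries
        A_eq = np.hstack([W, -W])          # W p - W q = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
        p, q = res.x[:d], res.x[d:]
        return p - q

    # Example: a random Gaussian W with n of order s*log(d) typically recovers x exactly.
    rng = np.random.default_rng(0)
    d, n, s = 200, 60, 5
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, d))
    x = np.zeros(d)
    x[rng.choice(d, s, replace=False)] = rng.normal(size=s)
    x_hat = l1_reconstruct(W, W @ x)
    print(np.max(np.abs(x_hat - x)))       # close to 0 when recovery succeeds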

In fact, we will prove a stronger result, which holds even if x is not a sparse
vector.

theorem 23.8 Let ε < 1/(1 + √2) and let W be an (ε, 2s)-RIP matrix. Let x be an
arbitrary vector and denote

    x_s ∈ argmin_{v : ‖v‖₀ ≤ s} ‖x − v‖₁.

That is, x_s is the vector which equals x on the s largest elements of x and equals
0 elsewhere. Let y = Wx be the compression of x and let

    x⋆ ∈ argmin_{v : Wv = y} ‖v‖₁

be the reconstructed vector. Then,

    ‖x⋆ − x‖₂ ≤ 2 · ((1 + ρ)/(1 − ρ)) · s^{−1/2} ‖x − x_s‖₁,

where ρ = √2 ε/(1 − ε).

Note that in the special case that x = x_s we get an exact recovery, x⋆ = x, so
Theorem 23.7 is a special case of Theorem 23.8. The proof of Theorem 23.8 is
given in Section 23.3.1.
Finally, the third result tells us that random matrices with n ≥ Ω(s log(d)) are
likely to be RIP. In fact, the theorem shows that multiplying a random matrix
by an orthonormal matrix also provides an RIP matrix. This is important for
compressing signals of the form x = Uα where x is not sparse but α is sparse.
In that case, if W is a random matrix and we compress using y = Wx then this
is the same as compressing α by y = (WU)α, and since WU is also RIP we can
reconstruct α (and thus also x) from y.

theorem 23.9 Let U be an arbitrary fixed d × d orthonormal matrix, let ε, δ
be scalars in (0, 1), let s be an integer in [d], and let n be an integer that satisfies

    n ≥ 100 s log(40d/(δ ε)) / ε².

Let W ∈ R^{n,d} be a matrix s.t. each element of W is distributed normally with
zero mean and variance of 1/n. Then, with probability of at least 1 − δ over the
choice of W, the matrix WU is (ε, s)-RIP.

23.3.1 Proofs*
Proof of Theorem 23.8
We follow a proof due to Candès (2008).
Let h = x⋆ − x. Given a vector v and a set of indices I we denote by v_I the
vector whose ith element is v_i if i ∈ I and 0 otherwise.
The first trick we use is to partition the set of indices [d] = {1, . . . , d} into
disjoint sets of size s. That is, we will write [d] = T_0 ∪ T_1 ∪ T_2 ∪ · · · ∪ T_{d/s−1} where
for all i, |T_i| = s, and we assume for simplicity that d/s is an integer. We define
the partition as follows. In T_0 we put the s indices corresponding to the s largest
elements in absolute values of x (ties are broken arbitrarily). Let T_0^c = [d] \ T_0.
Next, T_1 will be the s indices corresponding to the s largest elements in absolute
value of h_{T_0^c}. Let T_{0,1} = T_0 ∪ T_1 and T_{0,1}^c = [d] \ T_{0,1}. Next, T_2 will correspond to
the s largest elements in absolute value of h_{T_{0,1}^c}. And, we will construct T_3, T_4, . . .
in the same way.


To prove the theorem we first need the following lemma, which shows that
RIP also implies approximate orthogonality.

lemma 23.10 Let W be an (ε, 2s)-RIP matrix. Then, for any two disjoint sets
I, J, both of size at most s, and for any vector u we have that ⟨Wu_I, Wu_J⟩ ≤
ε‖u_I‖₂‖u_J‖₂.

Proof W.l.o.g. assume ‖u_I‖₂ = ‖u_J‖₂ = 1. Then,

    ⟨Wu_I, Wu_J⟩ = ( ‖Wu_I + Wu_J‖₂² − ‖Wu_I − Wu_J‖₂² ) / 4.

But, since |J ∪ I| ≤ 2s we get from the RIP condition that ‖Wu_I + Wu_J‖₂² ≤
(1 + ε)(‖u_I‖₂² + ‖u_J‖₂²) = 2(1 + ε) and that ‖Wu_I − Wu_J‖₂² ≥ (1 − ε)(‖u_I‖₂² +
‖u_J‖₂²) = 2(1 − ε), which concludes our proof.
We are now ready to prove the theorem. Clearly,

    ‖h‖₂ = ‖h_{T_{0,1}} + h_{T_{0,1}^c}‖₂ ≤ ‖h_{T_{0,1}}‖₂ + ‖h_{T_{0,1}^c}‖₂.        (23.5)

To prove the theorem we will show the following two claims:
Claim 1: ‖h_{T_{0,1}^c}‖₂ ≤ ‖h_{T_0}‖₂ + 2s^{−1/2}‖x − x_s‖₁.
Claim 2: ‖h_{T_{0,1}}‖₂ ≤ (2ρ/(1 − ρ)) s^{−1/2} ‖x − x_s‖₁.

Combining these two claims with Equation (23.5) we get that

    ‖h‖₂ ≤ ‖h_{T_{0,1}}‖₂ + ‖h_{T_{0,1}^c}‖₂ ≤ 2‖h_{T_{0,1}}‖₂ + 2s^{−1/2}‖x − x_s‖₁
         ≤ 2( 2ρ/(1 − ρ) + 1 ) s^{−1/2} ‖x − x_s‖₁
         = 2 ((1 + ρ)/(1 − ρ)) s^{−1/2} ‖x − x_s‖₁,
and this will conclude our proof.

Proving Claim 1:
To prove this claim we do not use the RIP condition at all but only use the fact
that x⋆ minimizes the ℓ1 norm. Take j > 1. For each i ∈ T_j and i′ ∈ T_{j−1} we
have that |h_i| ≤ |h_{i′}|. Therefore, ‖h_{T_j}‖_∞ ≤ ‖h_{T_{j−1}}‖₁/s. Thus,

    ‖h_{T_j}‖₂ ≤ s^{1/2} ‖h_{T_j}‖_∞ ≤ s^{−1/2} ‖h_{T_{j−1}}‖₁.

Summing this over j = 2, 3, . . . and using the triangle inequality we obtain that

    ‖h_{T_{0,1}^c}‖₂ ≤ Σ_{j ≥ 2} ‖h_{T_j}‖₂ ≤ s^{−1/2} ‖h_{T_0^c}‖₁.        (23.6)

Next, we show that ‖h_{T_0^c}‖₁ cannot be large. Indeed, from the definition of x⋆
we have that ‖x‖₁ ≥ ‖x⋆‖₁ = ‖x + h‖₁. Thus, using the triangle inequality we
obtain that

    ‖x‖₁ ≥ ‖x + h‖₁ = Σ_{i ∈ T_0} |x_i + h_i| + Σ_{i ∈ T_0^c} |x_i + h_i| ≥ ‖x_{T_0}‖₁ − ‖h_{T_0}‖₁ + ‖h_{T_0^c}‖₁ − ‖x_{T_0^c}‖₁        (23.7)

and since ‖x_{T_0^c}‖₁ = ‖x − x_s‖₁ = ‖x‖₁ − ‖x_{T_0}‖₁ we get that

    ‖h_{T_0^c}‖₁ ≤ ‖h_{T_0}‖₁ + 2‖x_{T_0^c}‖₁.        (23.8)

Combining this with Equation (23.6) we get that

    ‖h_{T_{0,1}^c}‖₂ ≤ s^{−1/2}( ‖h_{T_0}‖₁ + 2‖x_{T_0^c}‖₁ ) ≤ ‖h_{T_0}‖₂ + 2s^{−1/2}‖x_{T_0^c}‖₁,

which concludes the proof of claim 1.

Proving Claim 2:
For the second claim we use the RIP condition to get that

    (1 − ε)‖h_{T_{0,1}}‖₂² ≤ ‖W h_{T_{0,1}}‖₂².        (23.9)

Since W h_{T_{0,1}} = W h − Σ_{j ≥ 2} W h_{T_j} = −Σ_{j ≥ 2} W h_{T_j} we have that

    ‖W h_{T_{0,1}}‖₂² = −Σ_{j ≥ 2} ⟨W h_{T_{0,1}}, W h_{T_j}⟩ = −Σ_{j ≥ 2} ⟨W h_{T_0} + W h_{T_1}, W h_{T_j}⟩.

From the RIP condition on inner products we obtain that for all i ∈ {0, 1} and
j ≥ 2 we have

    |⟨W h_{T_i}, W h_{T_j}⟩| ≤ ε‖h_{T_i}‖₂ ‖h_{T_j}‖₂.

Since ‖h_{T_0}‖₂ + ‖h_{T_1}‖₂ ≤ √2 ‖h_{T_{0,1}}‖₂ we therefore get that

    ‖W h_{T_{0,1}}‖₂² ≤ √2 ε ‖h_{T_{0,1}}‖₂ Σ_{j ≥ 2} ‖h_{T_j}‖₂.

Combining this with Equation (23.6) and Equation (23.9) we obtain

    (1 − ε)‖h_{T_{0,1}}‖₂² ≤ √2 ε ‖h_{T_{0,1}}‖₂ s^{−1/2} ‖h_{T_0^c}‖₁.

Rearranging the inequality gives

    ‖h_{T_{0,1}}‖₂ ≤ (√2 ε/(1 − ε)) s^{−1/2} ‖h_{T_0^c}‖₁.

Finally, using Equation (23.8) we get that

    ‖h_{T_{0,1}}‖₂ ≤ ρ s^{−1/2}( ‖h_{T_0}‖₁ + 2‖x_{T_0^c}‖₁ ) ≤ ρ‖h_{T_0}‖₂ + 2ρ s^{−1/2}‖x_{T_0^c}‖₁,

but since ‖h_{T_0}‖₂ ≤ ‖h_{T_{0,1}}‖₂ this implies

    ‖h_{T_{0,1}}‖₂ ≤ (2ρ/(1 − ρ)) s^{−1/2} ‖x_{T_0^c}‖₁,

which concludes the proof of the second claim.

Proof of Theorem 23.9


To prove the theorem we follow an approach due to (Baraniuk, Davenport, De-
Vore & Wakin 2008). The idea is to combine the Johnson-Lindenstrauss (JL)
lemma with a simple covering argument.
We start with a covering property of the unit ball.
lemma 23.11  Let $\epsilon \in (0, 1)$. There exists a finite set $Q \subset \mathbb{R}^d$ of size $|Q| \le \left(\frac{3}{\epsilon}\right)^d$
such that
$$\sup_{x : \|x\| \le 1}\; \min_{v \in Q} \|x - v\| \le \epsilon.$$

Proof  Let $k$ be an integer and let
$$Q' = \left\{x \in \mathbb{R}^d : \forall j \in [d],\ \exists i \in \{-k, -k+1, \ldots, k\} \text{ s.t. } x_j = \tfrac{i}{k}\right\}.$$
Clearly, $|Q'| = (2k+1)^d$. We shall set $Q = Q' \cap B_2(1)$, where $B_2(1)$ is the unit
$\ell_2$ ball of $\mathbb{R}^d$. Since the points in $Q'$ are distributed evenly on the unit $\ell_\infty$ ball,
the size of $Q$ is the size of $Q'$ times the ratio between the volumes of the unit $\ell_2$
and $\ell_\infty$ balls. The volume of the $\ell_\infty$ ball is $2^d$ and the volume of $B_2(1)$ is
$$\frac{\pi^{d/2}}{\Gamma(1 + d/2)}.$$
For simplicity, assume that $d$ is even and therefore
$$\Gamma(1 + d/2) = (d/2)! \ge \left(\frac{d}{2e}\right)^{d/2},$$
where in the last inequality we used Stirling's approximation. Overall we obtained
that
$$|Q| \le (2k+1)^d\, (\pi/e)^{d/2}\, (d/2)^{-d/2}\, 2^{-d}. \qquad (23.10)$$
Now let us specify $k$. For each $x \in B_2(1)$ let $v \in Q$ be the vector whose $i$th element
is $\mathrm{sign}(x_i)\, \lfloor |x_i|\, k\rfloor / k$. Then, for each element we have that $|x_i - v_i| \le 1/k$ and
thus
$$\|x - v\| \le \frac{\sqrt{d}}{k}.$$
To ensure that the right-hand side will be at most $\epsilon$ we shall set $k = \lceil \sqrt{d}/\epsilon \rceil$.
Plugging this value into Equation (23.10) we conclude that
$$|Q| \le \left(\frac{3\sqrt{d}}{2\epsilon}\right)^d (\pi/e)^{d/2} (d/2)^{-d/2} = \left(\frac{3}{\epsilon}\sqrt{\frac{\pi}{2e}}\right)^d \le \left(\frac{3}{\epsilon}\right)^d.$$

Let x be a vector that can be written as x = U ↵ with U being some orthonor-


mal matrix and k↵k0  s. Combining the earlier covering property and the JL
lemma (Lemma 23.4) enables us to show that a random W will not distort any
such x.

lemma 23.12  Let $U$ be an orthonormal $d \times d$ matrix and let $I \subset [d]$ be a set
of indices of size $|I| = s$. Let $S$ be the span of $\{U_i : i \in I\}$, where $U_i$ is the $i$th
column of $U$. Let $\delta \in (0, 1)$, $\epsilon \in (0, 1)$, and $n \in \mathbb{N}$ such that
$$n \ge 24\, \frac{\log(2/\delta) + s \log(12/\epsilon)}{\epsilon^2}.$$
Then, with probability of at least $1 - \delta$ over a choice of a random matrix $W \in \mathbb{R}^{n,d}$
such that each element of $W$ is independently distributed according to $N(0, 1/n)$,
we have
$$\sup_{x \in S} \left| \frac{\|W x\|}{\|x\|} - 1 \right| < \epsilon.$$
Proof  It suffices to prove the lemma for all $x \in S$ with $\|x\| = 1$. We can write
$x = U_I \alpha$ where $\alpha \in \mathbb{R}^s$, $\|\alpha\|_2 = 1$, and $U_I$ is the matrix whose columns are
$\{U_i : i \in I\}$. Using Lemma 23.11 we know that there exists a set $Q$ of size
$|Q| \le (12/\epsilon)^s$ such that
$$\sup_{\alpha : \|\alpha\| = 1}\; \min_{v \in Q} \|\alpha - v\| \le \epsilon/4.$$
But since $U$ is orthogonal we also have that
$$\sup_{\alpha : \|\alpha\| = 1}\; \min_{v \in Q} \|U_I \alpha - U_I v\| \le \epsilon/4.$$
Applying Lemma 23.4 on the set $\{U_I v : v \in Q\}$ we obtain that for $n$ satisfying
the condition given in the lemma, the following holds with probability of at least
$1 - \delta$:
$$\sup_{v \in Q} \left| \frac{\|W U_I v\|^2}{\|U_I v\|^2} - 1 \right| \le \epsilon/2.$$
This also implies that
$$\sup_{v \in Q} \left| \frac{\|W U_I v\|}{\|U_I v\|} - 1 \right| \le \epsilon/2.$$
Let $a$ be the smallest number such that
$$\forall x \in S, \quad \frac{\|W x\|}{\|x\|} \le 1 + a.$$
Clearly $a < \infty$. Our goal is to show that $a \le \epsilon$. This follows from the fact that
for any $x \in S$ of unit norm there exists $v \in Q$ such that $\|x - U_I v\| \le \epsilon/4$ and
therefore
$$\|W x\| \le \|W U_I v\| + \|W (x - U_I v)\| \le 1 + \epsilon/2 + (1 + a)\epsilon/4.$$
Thus,
$$\forall x \in S, \quad \frac{\|W x\|}{\|x\|} \le 1 + \left(\epsilon/2 + (1 + a)\epsilon/4\right).$$
But the definition of $a$ implies that
$$a \le \epsilon/2 + (1 + a)\epsilon/4 \;\Rightarrow\; a \le \frac{\epsilon/2 + \epsilon/4}{1 - \epsilon/4} \le \epsilon.$$
This proves that for all $x \in S$ we have $\frac{\|W x\|}{\|x\|} - 1 \le \epsilon$. The other side follows from
this as well since
$$\|W x\| \ge \|W U_I v\| - \|W (x - U_I v)\| \ge 1 - \epsilon/2 - (1 + \epsilon)\epsilon/4 \ge 1 - \epsilon.$$

The preceding lemma tells us that for $x \in S$ of unit norm we have
$$(1 - \epsilon) \le \|W x\| \le (1 + \epsilon),$$
which implies that
$$(1 - 2\epsilon) \le \|W x\|^2 \le (1 + 3\epsilon).$$

The proof of Theorem 23.9 follows from this by a union bound over all choices
of I.

23.4 PCA or Compressed Sensing?

Suppose we would like to apply a dimensionality reduction technique to a given


set of examples. Which method should we use, PCA or compressed sensing? In
this section we tackle this question, by underscoring the underlying assumptions
behind the two methods.
It is helpful first to understand when each of the methods can guarantee per-
fect recovery. PCA guarantees perfect recovery whenever the set of examples is
contained in an n dimensional subspace of Rd . Compressed sensing guarantees
perfect recovery whenever the set of examples is sparse (in some basis). On the
basis of these observations, we can describe cases in which PCA will be better
than compressed sensing and vice versa.
As a first example, suppose that the examples are the vectors of the standard
basis of Rd , namely, e1 , . . . , ed , where each ei is the all zeros vector except 1 in the
ith coordinate. In this case, the examples are 1-sparse. Hence, compressed sensing
will yield a perfect recovery whenever $n \ge \Omega(\log(d))$. On the other hand, PCA
will lead to poor performance, since the data is far from being in an n dimensional
subspace, as long as n < d. Indeed, it is easy to verify that in such a case, the
averaged recovery error of PCA (i.e., the objective of Equation (23.1) divided by
m) will be $(d - n)/d$, which is larger than 1/2 whenever $n \le d/2$.
We next show a case where PCA is better than compressed sensing. Consider
m examples that are exactly on an n dimensional subspace. Clearly, in such a
case, PCA will lead to perfect recovery. As to compressed sensing, note that
the examples are n-sparse in any orthonormal basis whose first n vectors span
the subspace. Therefore, compressed sensing would also work if we reduce
the dimension to $\Omega(n \log(d))$. However, with exactly n dimensions, compressed
sensing might fail. PCA also has better resilience to certain types of noise. See
(Chang, Weiss & Freeman 2009) for a discussion.
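The 1-sparse example above is easy to check numerically. The following NumPy sketch (the dimensions d = 100 and n = 20 are arbitrary illustrative choices, not values from the text) computes the averaged PCA recovery error for the standard basis vectors and reproduces the value (d - n)/d:

    import numpy as np

    d, n = 100, 20
    X = np.eye(d)                      # m = d examples: the standard basis e_1, ..., e_d

    # PCA: project onto n principal directions and reconstruct.
    # For this data every direction is equally good, so any n orthonormal directions are "principal".
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U = Vt[:n].T                       # d x n matrix of principal directions
    X_rec = X @ U @ U.T                # compress to n dimensions and reconstruct

    avg_err = np.mean(np.sum((X - X_rec) ** 2, axis=1))
    print(avg_err, (d - n) / d)        # both print 0.8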

23.5 Summary

We introduced two methods for dimensionality reduction using linear transfor-


mations: PCA and random projections. We have shown that PCA is optimal in
the sense of averaged squared reconstruction error, if we restrict the reconstruc-
tion procedure to be linear as well. However, if we allow nonlinear reconstruction,
PCA is not necessarily the optimal procedure. In particular, for sparse data, ran-
dom projections can significantly outperform PCA. This fact is at the heart of
the compressed sensing method.

23.6 Bibliographic Remarks

PCA is equivalent to best subspace approximation using singular value decom-


position (SVD). The SVD method is described in Appendix C. SVD dates back
to Eugenio Beltrami (1873) and Camille Jordan (1874). It has been rediscovered
many times. In the statistical literature, it was introduced by Pearson (1901). Be-
sides PCA and SVD, there are additional names that refer to the same idea and
are being used in di↵erent scientific communities. A few examples are the Eckart-
Young theorem (after Carl Eckart and Gale Young who analyzed the method in
1936), the Schmidt-Mirsky theorem, factor analysis, and the Hotelling transform.
Compressed sensing was introduced in Donoho (2006) and in (Candes & Tao
2005). See also Candes (2006).

23.7 Exercises

1. In this exercise we show that in the general case, exact recovery of a linear
compression scheme is impossible.
1. let A 2 Rn,d be an arbitrary compression matrix where n  d 1. Show
that there exists u, v 2 Rn , u 6= v such that Au = Av.
2. Conclude that exact recovery of a linear compression scheme is impossible.
2. Let $\alpha \in \mathbb{R}^d$ such that $\alpha_1 \ge \alpha_2 \ge \cdots \ge \alpha_d \ge 0$. Show that
   $$\max_{\beta \in [0,1]^d : \|\beta\|_1 \le n}\; \sum_{j=1}^{d} \alpha_j \beta_j = \sum_{j=1}^{n} \alpha_j.$$
   Hint: Take every vector $\beta \in [0,1]^d$ such that $\|\beta\|_1 \le n$. Let $i$ be the minimal
   index for which $\beta_i < 1$. If $i = n+1$ we are done. Otherwise, show that we can
   increase $\beta_i$, while possibly decreasing $\beta_j$ for some $j > i$, and obtain a better
   solution. This will imply that the optimal solution is to set $\beta_i = 1$ for $i \le n$
   and $\beta_i = 0$ for $i > n$.
3. Kernel PCA: In this exercise we show how PCA can be used for construct-
ing nonlinear dimensionality reduction on the basis of the kernel trick (see
Chapter 16).
Let X be some instance space and let S = {x1 , . . . , xm } be a set of points
in X . Consider a feature mapping : X ! V , where V is some Hilbert space
(possibly of infinite dimension). Let K : X ⇥ X be a kernel function, that is,
k(x, x0 ) = h (x), (x0 )i. Kernel PCA is the process of mapping the elements
in S into V using , and then applying PCA over { (x1 ), . . . , (xm )} into
Rn . The output of this process is the set of reduced elements.
Show how this process can be done in polynomial time in terms of m
and n, assuming that each evaluation of K(·, ·) can be calculated in a con-
stant time. In particular, if your implementation requires multiplication of
two matrices A and B, verify that their product can be computed. Similarly,

if an eigenvalue decomposition of some matrix C is required, verify that this


decomposition can be computed.
4. An Interpretation of PCA as Variance Maximization:
Let x1 , . . . , xm be m vectors in Rd , and let x be a random vector distributed
according to the uniform distribution over x1 , . . . , xm . Assume that E[x] = 0.
1. Consider the problem of finding a unit vector, w 2 Rd , such that the
random variable hw, xi has maximal variance. That is, we would like to
solve the problem
      $$\operatorname*{argmax}_{w : \|w\|=1} \mathrm{Var}[\langle w, x\rangle] = \operatorname*{argmax}_{w : \|w\|=1} \frac{1}{m}\sum_{i=1}^{m} (\langle w, x_i\rangle)^2.$$
      Show that the solution of the problem is to set w to be the first principal
vector of x1 , . . . , xm .
2. Let w1 be the first principal component as in the previous question. Now,
suppose we would like to find a second unit vector, w2 2 Rd , that maxi-
mizes the variance of hw2 , xi, but is also uncorrelated to hw1 , xi. That is,
we would like to solve:

argmax Var[hw, xi].


w:kwk=1, E[(hw1 ,xi)(hw,xi)]=0

Show that the solution to this problem is to set w to be the second principal
component of x1 , . . . , xm .
Hint: Note that

E[(hw1 , xi)(hw, xi)] = w1> E[xx> ]w = mw1> Aw,


P >
where A = i xi xi . Since w is an eigenvector of A we have that the
constraint E[(hw1 , xi)(hw, xi)] = 0 is equivalent to the constraint

hw1 , wi = 0.

5. The Relation between SVD and PCA: Use the SVD theorem (Corol-
lary C.6) for providing an alternative proof of Theorem 23.2.
6. Random Projections Preserve Inner Products: The Johnson-Lindenstrauss
lemma tells us that a random projection preserves distances between a finite
set of vectors. In this exercise you need to prove that if the set of vectors are
within the unit ball, then not only are the distances between any two vectors
preserved, but the inner product is also preserved.
   Let $Q$ be a finite set of vectors in $\mathbb{R}^d$ and assume that for every $x \in Q$ we
   have $\|x\| \le 1$.
   1. Let $\delta \in (0, 1)$ and $n$ be an integer such that
      $$\epsilon = \sqrt{\frac{6 \log(|Q|^2/\delta)}{n}} \le 3.$$
      Prove that with probability of at least $1 - \delta$ over a choice of a random
      matrix $W \in \mathbb{R}^{n,d}$, where each element of $W$ is independently distributed
      according to $N(0, 1/n)$, we have
      $$|\langle W u, W v\rangle - \langle u, v\rangle| \le \epsilon$$
      for every $u, v \in Q$.
      Hint: Use the JL lemma to bound both $\frac{\|W(u+v)\|}{\|u+v\|}$ and $\frac{\|W(u-v)\|}{\|u-v\|}$.
   2. (*) Let $x_1, \ldots, x_m$ be a set of vectors in $\mathbb{R}^d$ of norm at most 1, and assume
      that these vectors are linearly separable with margin of $\gamma$. Assume that
      $d \ge 1/\gamma^2$. Show that there exists a constant $c > 0$ such that if we randomly
      project these vectors into $\mathbb{R}^n$, for $n = c/\gamma^2$, then with probability of at least
      99% it holds that the projected vectors are linearly separable with margin
      $\gamma/2$.
24 Generative Models

We started this book with a distribution free learning framework; namely, we


did not impose any assumptions on the underlying distribution over the data.
Furthermore, we followed a discriminative approach in which our goal is not to
learn the underlying distribution but rather to learn an accurate predictor. In
this chapter we describe a generative approach, in which it is assumed that the
underlying distribution over the data has a specific parametric form and our goal
is to estimate the parameters of the model. This task is called parametric density
estimation.
The discriminative approach has the advantage of directly optimizing the
quantity of interest (the prediction accuracy) instead of learning the underly-
ing distribution. This was phrased as follows by Vladimir Vapnik in his principle
for solving problems using a restricted amount of information:

When solving a given problem, try to avoid a more general problem as an intermediate
step.

Of course, if we succeed in learning the underlying distribution accurately,


we are considered to be “experts” in the sense that we can predict by using
the Bayes optimal classifier. The problem is that it is usually more difficult to
learn the underlying distribution than to learn an accurate predictor. However,
in some situations, it is reasonable to adopt the generative learning approach.
For example, sometimes it is easier (computationally) to estimate the parameters
of the model than to learn a discriminative predictor. Additionally, in some cases
we do not have a specific task at hand but rather would like to model the data
either for making predictions at a later time without having to retrain a predictor
or for the sake of interpretability of the data.
We start with a popular statistical method for estimating the parameters of
the data, which is called the maximum likelihood principle. Next, we describe two
generative assumptions which greatly simplify the learning process. We also de-
scribe the EM algorithm for calculating the maximum likelihood in the presence
of latent variables. We conclude with a brief description of Bayesian reasoning.


24.1 Maximum Likelihood Estimator

Let us start with a simple example. A drug company developed a new drug to
treat some deadly disease. We would like to estimate the probability of survival
when using the drug. To do so, the drug company sampled a training set of m
people and gave them the drug. Let S = (x1 , . . . , xm ) denote the training set,
where for each i, xi = 1 if the ith person survived and xi = 0 otherwise. We can
model the underlying distribution using a single parameter, ✓ 2 [0, 1], indicating
the probability of survival.
We now would like to estimate the parameter ✓ on the basis of the training
set S. A natural idea is to use the average number of 1’s in S as an estimator.
That is,
$$\hat{\theta} = \frac{1}{m} \sum_{i=1}^{m} x_i. \qquad (24.1)$$

Clearly, $\mathbb{E}_S[\hat{\theta}] = \theta$. That is, $\hat{\theta}$ is an unbiased estimator of $\theta$. Furthermore, since $\hat{\theta}$ is
the average of m i.i.d. binary random variables we can use Hoeffding's inequality
to get that with probability of at least $1 - \delta$ over the choice of S we have that
$$|\hat{\theta} - \theta| \le \sqrt{\frac{\log(2/\delta)}{2m}}. \qquad (24.2)$$

Another interpretation of ✓ˆ is as the Maximum Likelihood Estimator, as we


formally explain now. We first write the probability of generating the sample S:
$$\mathbb{P}[S = (x_1, \ldots, x_m)] = \prod_{i=1}^{m} \theta^{x_i} (1 - \theta)^{1 - x_i} = \theta^{\sum_i x_i} (1 - \theta)^{\sum_i (1 - x_i)}.$$

We define the log likelihood of S, given the parameter $\theta$, as the log of the preceding
expression:
$$L(S; \theta) = \log\left(\mathbb{P}[S = (x_1, \ldots, x_m)]\right) = \log(\theta) \sum_i x_i + \log(1 - \theta) \sum_i (1 - x_i).$$

The maximum likelihood estimator is the parameter that maximizes the likeli-
hood
✓ˆ 2 argmax L(S; ✓). (24.3)

Next, we show that in our case, Equation (24.1) is a maximum likelihood esti-
mator. To see this, we take the derivative of L(S; ✓) with respect to ✓ and equate
it to zero:
$$\frac{\sum_i x_i}{\theta} - \frac{\sum_i (1 - x_i)}{1 - \theta} = 0.$$
Solving the equation for ✓ we obtain the estimator given in Equation (24.1).
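As a quick numerical illustration, the following NumPy sketch (the values $\theta = 0.3$ and m = 1000 are arbitrary choices, not from the text) computes $\hat{\theta}$ and checks on a grid that it indeed maximizes the log likelihood $L(S; \theta)$:

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = 0.3
    S = rng.binomial(1, theta_true, size=1000)   # training set of 0/1 outcomes

    theta_hat = S.mean()                         # the estimator of Equation (24.1)

    def log_likelihood(theta, S):
        # L(S; theta) = log(theta) * sum(x_i) + log(1 - theta) * sum(1 - x_i)
        return np.log(theta) * S.sum() + np.log(1 - theta) * (len(S) - S.sum())

    # A grid search confirms that theta_hat (approximately) maximizes the likelihood.
    grid = np.linspace(0.01, 0.99, 99)
    best = grid[np.argmax([log_likelihood(t, S) for t in grid])]
    print(theta_hat, best)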

24.1.1 Maximum Likelihood Estimation for Continuous Random Variables


Let X be a continuous random variable. Then, for most x 2 R we have P[X =
x] = 0 and therefore the definition of likelihood as given before is trivialized. To
overcome this technical problem we define the likelihood as log of the density of
the probability of X at x. That is, given an i.i.d. training set S = (x1 , . . . , xm )
sampled according to a density distribution P✓ we define the likelihood of S given
✓ as
$$L(S; \theta) = \log\left(\prod_{i=1}^{m} P_\theta(x_i)\right) = \sum_{i=1}^{m} \log(P_\theta(x_i)).$$

As before, the maximum likelihood estimator is a maximizer of L(S; ✓) with


respect to ✓.
As an example, consider a Gaussian random variable, for which the density
function of X is parameterized by $\theta = (\mu, \sigma)$ and is defined as follows:
$$P_\theta(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right).$$
We can rewrite the likelihood as
$$L(S; \theta) = -\frac{1}{2\sigma^2} \sum_{i=1}^{m} (x_i - \mu)^2 - m \log\left(\sigma\sqrt{2\pi}\right).$$

To find a parameter $\theta = (\mu, \sigma)$ that optimizes this we take the derivative of the
likelihood w.r.t. $\mu$ and w.r.t. $\sigma$ and compare it to 0. We obtain the following two
equations:
$$\frac{d}{d\mu} L(S; \theta) = \frac{1}{\sigma^2} \sum_{i=1}^{m} (x_i - \mu) = 0$$
$$\frac{d}{d\sigma} L(S; \theta) = \frac{1}{\sigma^3} \sum_{i=1}^{m} (x_i - \mu)^2 - \frac{m}{\sigma} = 0$$
Solving the preceding equations we obtain the maximum likelihood estimates:
$$\hat{\mu} = \frac{1}{m}\sum_{i=1}^{m} x_i \qquad \text{and} \qquad \hat{\sigma} = \sqrt{\frac{1}{m}\sum_{i=1}^{m} (x_i - \hat{\mu})^2}$$

Note that the maximum likelihood estimate is not always an unbiased estimator.
For example, while µ̂ is unbiased, it is possible to show that the estimate ˆ of
the variance is biased (Exercise 1).
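The bias of $\hat{\sigma}$ can also be seen empirically. The following sketch (the parameter values, the small sample size m = 5, and the number of repetitions are arbitrary choices) averages the maximum likelihood estimates over many independent samples:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, m = 2.0, 1.5, 5

    sigma_hats = []
    for _ in range(50_000):
        S = rng.normal(mu, sigma, size=m)
        mu_hat = S.mean()
        sigma_hats.append(np.sqrt(np.mean((S - mu_hat) ** 2)))   # the ML estimate of sigma

    # The averaged ML estimate is noticeably below the true sigma for small m.
    print(np.mean(sigma_hats), sigma)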

Simplifying Notation
To simplify our notation, we use P[X = x] in this chapter to describe both the
probability that X = x (for discrete random variables) and the density of the
distribution at x (for continuous variables).

24.1.2 Maximum Likelihood and Empirical Risk Minimization


The maximum likelihood estimator shares some similarity with the Empirical
Risk Minimization (ERM) principle, which we studied extensively in previous
chapters. Recall that in the ERM principle we have a hypothesis class H and
we use the training set for choosing a hypothesis h 2 H that minimizes the
empirical risk. We now show that the maximum likelihood estimator is an ERM
for a particular loss function.
Given a parameter $\theta$ and an observation x, we define the loss of $\theta$ on x as
$$\ell(\theta, x) = -\log(P_\theta[x]). \qquad (24.4)$$

That is, `(✓, x) is the negation of the log-likelihood of the observation x, assuming
the data is distributed according to P✓ . This loss function is often referred to as
the log-loss. On the basis of this definition it is immediate that the maximum
likelihood principle is equivalent to minimizing the empirical risk with respect
to the loss function given in Equation (24.4). That is,
$$\operatorname*{argmin}_\theta \sum_{i=1}^{m} \left(-\log(P_\theta[x_i])\right) = \operatorname*{argmax}_\theta \sum_{i=1}^{m} \log(P_\theta[x_i]).$$

Assuming that the data is distributed according to a distribution P (not neces-


sarily of the parametric form we employ), the true risk of a parameter $\theta$ becomes
$$\mathbb{E}_x[\ell(\theta, x)] = -\sum_x \mathbb{P}[x] \log(P_\theta[x])
 = \underbrace{\sum_x \mathbb{P}[x] \log\left(\frac{\mathbb{P}[x]}{P_\theta[x]}\right)}_{D_{\mathrm{RE}}[\mathbb{P}\,\|\,P_\theta]} + \underbrace{\sum_x \mathbb{P}[x] \log\left(\frac{1}{\mathbb{P}[x]}\right)}_{H(\mathbb{P})}, \qquad (24.5)$$

where DRE is called the relative entropy, and H is called the entropy func-
tion. The relative entropy is a divergence measure between two probabilities.
For discrete variables, it is always nonnegative and is equal to 0 only if the two
distributions are the same. It follows that the true risk is minimal when P✓ = P.
The expression given in Equation (24.5) underscores how our generative as-
sumption a↵ects our density estimation, even in the limit of infinite data. It
shows that if the underlying distribution is indeed of a parametric form, then by
choosing the correct parameter we can make the risk be the entropy of the distri-
bution. However, if the distribution is not of the assumed parametric form, even
the best parameter leads to an inferior model and the suboptimality is measured
by the relative entropy divergence.

24.1.3 Generalization Analysis


How good is the maximum likelihood estimator when we learn from a finite
training set?

To answer this question we need to define how we assess the quality of an approxi-
mated solution of the density estimation problem. Unlike discriminative learning,
where there is a clear notion of “loss,” in generative learning there are various
ways to define the loss of a model. On the basis of the previous subsection, one
natural candidate is the expected log-loss as given in Equation (24.5).
In some situations, it is easy to prove that the maximum likelihood principle
guarantees low true risk as well. For example, consider the problem of estimating
the mean of a Gaussian variable of unit variance. We saw previously that the
maximum likelihood estimator is the average: $\hat{\mu} = \frac{1}{m}\sum_i x_i$. Let $\mu^\star$ be the optimal
parameter. Then,
$$\begin{aligned}
\mathbb{E}_{x \sim N(\mu^\star, 1)}[\ell(\hat{\mu}, x) - \ell(\mu^\star, x)]
 &= \mathbb{E}_{x \sim N(\mu^\star, 1)}\left[\log\left(\frac{P_{\mu^\star}[x]}{P_{\hat{\mu}}[x]}\right)\right] \\
 &= \mathbb{E}_{x \sim N(\mu^\star, 1)}\left[-\tfrac{1}{2}(x - \mu^\star)^2 + \tfrac{1}{2}(x - \hat{\mu})^2\right] \\
 &= \frac{\hat{\mu}^2 - (\mu^\star)^2}{2} + (\mu^\star - \hat{\mu})\, \mathbb{E}_{x \sim N(\mu^\star, 1)}[x] \\
 &= \frac{\hat{\mu}^2 - (\mu^\star)^2}{2} + (\mu^\star - \hat{\mu})\, \mu^\star \\
 &= \frac{1}{2}(\hat{\mu} - \mu^\star)^2. \qquad (24.6)
\end{aligned}$$
Next, we note that $\hat{\mu}$ is the average of m Gaussian variables and therefore it is
also distributed normally with mean $\mu^\star$ and variance $1/m$. From this fact we
can derive bounds of the form: with probability of at least $1 - \delta$ we have that
$|\hat{\mu} - \mu^\star| \le \epsilon$ where $\epsilon$ depends on $1/m$ and on $\delta$.
In some situations, the maximum likelihood estimator clearly overfits. For
example, consider a Bernoulli random variable X and let $\mathbb{P}[X = 1] = \theta^\star$. As
we saw previously, using Hoeffding's inequality we can easily derive a guarantee
on $|\theta^\star - \hat{\theta}|$ that holds with high probability (see Equation (24.2)). However, if
our goal is to obtain a small value of the expected log-loss function as defined in
Equation (24.5) we might fail. For example, assume that $\theta^\star$ is nonzero but very
small. Then, the probability that no element of a sample of size m will be 1 is
$(1 - \theta^\star)^m$, which is greater than $e^{-2\theta^\star m}$. It follows that whenever $m \le \frac{\log(2)}{2\theta^\star}$,
the probability that the sample is all zeros is at least 50%, and in that case, the
maximum likelihood rule will set $\hat{\theta} = 0$. But the true risk of the estimate $\hat{\theta} = 0$
is
$$\mathbb{E}_{x \sim \theta^\star}[\ell(\hat{\theta}, x)] = \theta^\star \ell(\hat{\theta}, 1) + (1 - \theta^\star)\ell(\hat{\theta}, 0)
 = \theta^\star \log(1/\hat{\theta}) + (1 - \theta^\star)\log(1/(1 - \hat{\theta}))
 = \theta^\star \log(1/0) = \infty.$$

This simple example shows that we should be careful in applying the maximum
likelihood principle.
To overcome overfitting, we can use the variety of tools we encountered pre-

viously in the book. A simple regularization technique is outlined in Exercise


2.

24.2 Naive Bayes

The Naive Bayes classifier is a classical demonstration of how generative as-


sumptions and parameter estimations simplify the learning process. Consider
the problem of predicting a label y 2 {0, 1} on the basis of a vector of features
x = (x1 , . . . , xd ), where we assume that each xi is in {0, 1}. Recall that the Bayes
optimal classifier is
hBayes (x) = argmax P[Y = y|X = x].
y2{0,1}

To describe the probability function $\mathbb{P}[Y = y | X = x]$ we need $2^d$ parameters,
each of which corresponds to $\mathbb{P}[Y = 1 | X = x]$ for a certain value of $x \in \{0, 1\}^d$.
This implies that the number of examples we need grows exponentially with the
number of features.
In the Naive Bayes approach we make the (rather naive) generative assumption
that given the label, the features are independent of each other. That is,
$$\mathbb{P}[X = x | Y = y] = \prod_{i=1}^{d} \mathbb{P}[X_i = x_i | Y = y].$$

With this assumption and using Bayes’ rule, the Bayes optimal classifier can be
further simplified:
$$\begin{aligned}
h_{\mathrm{Bayes}}(x) &= \operatorname*{argmax}_{y \in \{0,1\}} \mathbb{P}[Y = y | X = x] \\
 &= \operatorname*{argmax}_{y \in \{0,1\}} \mathbb{P}[Y = y]\, \mathbb{P}[X = x | Y = y] \\
 &= \operatorname*{argmax}_{y \in \{0,1\}} \mathbb{P}[Y = y] \prod_{i=1}^{d} \mathbb{P}[X_i = x_i | Y = y]. \qquad (24.7)
\end{aligned}$$

That is, now the number of parameters we need to estimate is only 2d + 1.


Here, the generative assumption we made reduced significantly the number of
parameters we need to learn.
When we also estimate the parameters using the maximum likelihood princi-
ple, the resulting classifier is called the Naive Bayes classifier.
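A minimal implementation for binary features might look as follows. The sketch estimates the 2d + 1 parameters by empirical frequencies and predicts with Equation (24.7) in log space; the smoothing constant alpha (a Laplace-style correction) and the function names are implementation conveniences, not part of the derivation above:

    import numpy as np

    def train_naive_bayes(X, y, alpha=1.0):
        # ML (smoothed) estimates of P[Y=1] and P[X_i = 1 | Y = c] for binary features.
        p_y1 = y.mean()
        p_x_given_y = np.zeros((2, X.shape[1]))
        for c in (0, 1):
            Xc = X[y == c]
            p_x_given_y[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        return p_y1, p_x_given_y

    def predict_naive_bayes(x, p_y1, p_x_given_y):
        # Equation (24.7): argmax_y P[Y=y] * prod_i P[X_i = x_i | Y=y], computed in log space.
        scores = []
        for c in (0, 1):
            prior = np.log(p_y1 if c == 1 else 1 - p_y1)
            lik = np.sum(x * np.log(p_x_given_y[c]) + (1 - x) * np.log(1 - p_x_given_y[c]))
            scores.append(prior + lik)
        return int(np.argmax(scores))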

24.3 Linear Discriminant Analysis

Linear discriminant analysis (LDA) is another demonstration of how generative


assumptions simplify the learning process. As in the Naive Bayes classifier we
consider again the problem of predicting a label y 2 {0, 1} on the basis of a

vector of features x = (x1 , . . . , xd ). But now the generative assumption is as


follows. First, we assume that P[Y = 1] = P[Y = 0] = 1/2. Second, we assume
that the conditional probability of X given Y is a Gaussian distribution. Finally,
the covariance matrix of the Gaussian distribution is the same for both values
of the label. Formally, let µ0 , µ1 2 Rd and let ⌃ be a covariance matrix. Then,
the density distribution is given by
$$\mathbb{P}[X = x | Y = y] = \frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_y)^T \Sigma^{-1} (x - \mu_y)\right).$$
As we have shown in the previous section, using Bayes' rule we can write
$$h_{\mathrm{Bayes}}(x) = \operatorname*{argmax}_{y \in \{0,1\}} \mathbb{P}[Y = y]\, \mathbb{P}[X = x | Y = y].$$
This means that we will predict $h_{\mathrm{Bayes}}(x) = 1$ iff
$$\log\left(\frac{\mathbb{P}[Y = 1]\,\mathbb{P}[X = x | Y = 1]}{\mathbb{P}[Y = 0]\,\mathbb{P}[X = x | Y = 0]}\right) > 0.$$
This ratio is often called the log-likelihood ratio.
In our case, the log-likelihood ratio becomes
$$\frac{1}{2}(x - \mu_0)^T \Sigma^{-1} (x - \mu_0) - \frac{1}{2}(x - \mu_1)^T \Sigma^{-1} (x - \mu_1).$$
We can rewrite this as $\langle w, x\rangle + b$ where
$$w = (\mu_1 - \mu_0)^T \Sigma^{-1} \quad \text{and} \quad b = \frac{1}{2}\left(\mu_0^T \Sigma^{-1} \mu_0 - \mu_1^T \Sigma^{-1} \mu_1\right). \qquad (24.8)$$

As a result of the preceding derivation we obtain that under the aforemen-


tioned generative assumptions, the Bayes optimal classifier is a linear classifier.
Additionally, one may train the classifier by estimating the parameter µ0 , µ1
and ⌃ from the data, using, for example, the maximum likelihood estimator.
With those estimators at hand, the values of w and b can be calculated as in
Equation (24.8).
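A plug-in version of this derivation might look as follows. The pooled covariance estimate below is one natural way to implement "estimating Σ from the data"; the function name and this particular estimator are illustrative choices, not mandated by the text:

    import numpy as np

    def train_lda(X, y):
        # Estimate mu_0, mu_1 and a shared covariance, then apply Equation (24.8).
        mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        Z = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])   # pooled, as the model assumes one Sigma
        Sigma = Z.T @ Z / len(X)
        Sigma_inv = np.linalg.inv(Sigma)
        w = Sigma_inv @ (mu1 - mu0)
        b = 0.5 * (mu0 @ Sigma_inv @ mu0 - mu1 @ Sigma_inv @ mu1)
        return w, b

    # Predict label 1 iff <w, x> + b > 0.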

24.4 Latent Variables and the EM Algorithm

In generative models we assume that the data is generated by sampling from


a specific parametric distribution over our instance space X . Sometimes, it is
convenient to express this distribution using latent random variables. A natural
example is a mixture of k Gaussian distributions. That is, X = Rd and we
assume that each x is generated as follows. First, we choose a random number in
{1, . . . , k}. Let Y be a random variable corresponding to this choice, and denote
P[Y = y] = cy . Second, we choose x on the basis of the value of Y according to
a Gaussian distribution
$$\mathbb{P}[X = x | Y = y] = \frac{1}{(2\pi)^{d/2}|\Sigma_y|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_y)^T \Sigma_y^{-1} (x - \mu_y)\right). \qquad (24.9)$$

Therefore, the density of X can be written as:
$$\mathbb{P}[X = x] = \sum_{y=1}^{k} \mathbb{P}[Y = y]\, \mathbb{P}[X = x | Y = y]
 = \sum_{y=1}^{k} c_y\, \frac{1}{(2\pi)^{d/2}|\Sigma_y|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_y)^T \Sigma_y^{-1} (x - \mu_y)\right).$$

Note that Y is a hidden variable that we do not observe in our data. Neverthe-
less, we introduce Y since it helps us describe a simple parametric form of the
probability of X.
More generally, let ✓ be the parameters of the joint distribution of X and Y
(e.g., in the preceding example, ✓ consists of cy , µy , and ⌃y , for all y = 1, . . . , k).
Then, the log-likelihood of an observation x can be written as
$$\log\left(P_\theta[X = x]\right) = \log\left(\sum_{y=1}^{k} P_\theta[X = x, Y = y]\right).$$

Given an i.i.d. sample, S = (x1 , . . . , xm ), we would like to find ✓ that maxi-


mizes the log-likelihood of S,
$$L(\theta) = \log \prod_{i=1}^{m} P_\theta[X = x_i] = \sum_{i=1}^{m} \log P_\theta[X = x_i] = \sum_{i=1}^{m} \log\left(\sum_{y=1}^{k} P_\theta[X = x_i, Y = y]\right).$$
The maximum-likelihood estimator is therefore the solution of the maximization
problem
$$\operatorname*{argmax}_\theta L(\theta) = \operatorname*{argmax}_\theta \sum_{i=1}^{m} \log\left(\sum_{y=1}^{k} P_\theta[X = x_i, Y = y]\right).$$

In many situations, the summation inside the log makes the preceding opti-
mization problem computationally hard. The Expectation-Maximization (EM)
algorithm, due to Dempster, Laird, and Rubin, is an iterative procedure for
searching a (local) maximum of L(✓). While EM is not guaranteed to find the
global maximum, it often works reasonably well in practice.
EM is designed for those cases in which, had we known the values of the latent
variables Y , then the maximum likelihood optimization problem would have been
tractable. More precisely, define the following function over m ⇥ k matrices and
the set of parameters ✓:
$$F(Q, \theta) = \sum_{i=1}^{m} \sum_{y=1}^{k} Q_{i,y} \log\left(P_\theta[X = x_i, Y = y]\right).$$

If each row of Q defines a probability over the ith latent variable given X = xi ,
then we can interpret F (Q, ✓) as the expected log-likelihood of a training set
(x1 , y1 ), . . . , (xm , ym ), where the expectation is with respect to the choice of
each yi on the basis of the ith row of Q. In the definition of F , the summation is
outside the log, and we assume that this makes the optimization problem with
respect to ✓ tractable:
assumption 24.1  For any matrix $Q \in [0, 1]^{m,k}$, such that each row of Q sums
to 1, the optimization problem
$$\operatorname*{argmax}_\theta F(Q, \theta)$$
is tractable.
The intuitive idea of EM is that we have a “chicken and egg” problem. On one
hand, had we known Q, then by our assumption, the optimization problem of
finding the best ✓ is tractable. On the other hand, had we known the parameters
✓ we could have set Qi,y to be the probability of Y = y given that X = xi .
The EM algorithm therefore alternates between finding ✓ given Q and finding Q
given ✓. Formally, EM finds a sequence of solutions (Q(1) , ✓ (1) ), (Q(2) , ✓ (2) ), . . .
where at iteration t, we construct (Q(t+1) , ✓ (t+1) ) by performing two steps.

• Expectation Step: Set
  $$Q^{(t+1)}_{i,y} = P_{\theta^{(t)}}[Y = y | X = x_i]. \qquad (24.10)$$
  This step is called the Expectation step, because it yields a new probabil-
  ity over the latent variables, which defines a new expected log-likelihood
  function over $\theta$.
• Maximization Step: Set $\theta^{(t+1)}$ to be the maximizer of the expected log-
  likelihood, where the expectation is according to $Q^{(t+1)}$:
  $$\theta^{(t+1)} = \operatorname*{argmax}_\theta F(Q^{(t+1)}, \theta). \qquad (24.11)$$

By our assumption, it is possible to solve this optimization problem effi-


ciently.
The initial values of ✓ (1) and Q(1) are usually chosen at random and the
procedure terminates after the improvement in the likelihood value stops being
significant.

24.4.1 EM as an Alternate Maximization Algorithm


To analyze the EM algorithm, we first view it as an alternate maximization
algorithm. Define the following objective function
$$G(Q, \theta) = F(Q, \theta) - \sum_{i=1}^{m} \sum_{y=1}^{k} Q_{i,y} \log(Q_{i,y}).$$

The second term is the sum of the entropies of the rows of Q. Let
$$\mathcal{Q} = \left\{ Q \in [0, 1]^{m,k} : \forall i,\; \sum_{y=1}^{k} Q_{i,y} = 1 \right\}$$

be the set of matrices whose rows define probabilities over [k]. The following
lemma shows that EM performs alternate maximization iterations for maximiz-
ing G.

lemma 24.2  The EM procedure can be rewritten as:
$$Q^{(t+1)} = \operatorname*{argmax}_{Q \in \mathcal{Q}} G(Q, \theta^{(t)})$$
$$\theta^{(t+1)} = \operatorname*{argmax}_{\theta} G(Q^{(t+1)}, \theta).$$
Furthermore, $G(Q^{(t+1)}, \theta^{(t)}) = L(\theta^{(t)})$.

Proof  Given $Q^{(t+1)}$ we clearly have that
$$\operatorname*{argmax}_\theta G(Q^{(t+1)}, \theta) = \operatorname*{argmax}_\theta F(Q^{(t+1)}, \theta).$$
Therefore, we only need to show that for any $\theta$, the solution of $\operatorname*{argmax}_{Q \in \mathcal{Q}} G(Q, \theta)$
is to set $Q_{i,y} = P_\theta[Y = y | X = x_i]$. Indeed, by Jensen's inequality, for any $Q \in \mathcal{Q}$
we have that
$$\begin{aligned}
G(Q, \theta) &= \sum_{i=1}^{m} \sum_{y=1}^{k} Q_{i,y} \log\left(\frac{P_\theta[X = x_i, Y = y]}{Q_{i,y}}\right) \\
 &\le \sum_{i=1}^{m} \log\left(\sum_{y=1}^{k} Q_{i,y}\, \frac{P_\theta[X = x_i, Y = y]}{Q_{i,y}}\right) \\
 &= \sum_{i=1}^{m} \log\left(\sum_{y=1}^{k} P_\theta[X = x_i, Y = y]\right) \\
 &= \sum_{i=1}^{m} \log\left(P_\theta[X = x_i]\right) = L(\theta),
\end{aligned}$$
while for $Q_{i,y} = P_\theta[Y = y | X = x_i]$ we have
$$\begin{aligned}
G(Q, \theta) &= \sum_{i=1}^{m} \sum_{y=1}^{k} P_\theta[Y = y | X = x_i] \log\left(\frac{P_\theta[X = x_i, Y = y]}{P_\theta[Y = y | X = x_i]}\right) \\
 &= \sum_{i=1}^{m} \sum_{y=1}^{k} P_\theta[Y = y | X = x_i] \log\left(P_\theta[X = x_i]\right) \\
 &= \sum_{i=1}^{m} \log\left(P_\theta[X = x_i]\right) \sum_{y=1}^{k} P_\theta[Y = y | X = x_i] \\
 &= \sum_{i=1}^{m} \log\left(P_\theta[X = x_i]\right) = L(\theta).
\end{aligned}$$
This shows that setting $Q_{i,y} = P_\theta[Y = y | X = x_i]$ maximizes $G(Q, \theta)$ over $Q \in \mathcal{Q}$
and shows that $G(Q^{(t+1)}, \theta^{(t)}) = L(\theta^{(t)})$.

The preceding lemma immediately implies:

theorem 24.3  The EM procedure never decreases the log-likelihood; namely,
for all t,
$$L(\theta^{(t+1)}) \ge L(\theta^{(t)}).$$

Proof  By the lemma we have
$$L(\theta^{(t+1)}) = G(Q^{(t+2)}, \theta^{(t+1)}) \ge G(Q^{(t+1)}, \theta^{(t)}) = L(\theta^{(t)}).$$

24.4.2 EM for Mixture of Gaussians (Soft k-Means)


Consider the case of a mixture of k Gaussians in which ✓ is a triplet (c, {µ1 , . . . , µk }, {⌃1 , . . . , ⌃k })
where P✓ [Y = y] = cy and P✓ [X = x|Y = y] is as given in Equation (24.9). For
simplicity, we assume that ⌃1 = ⌃2 = · · · = ⌃k = I, where I is the identity
matrix. Specifying the EM algorithm for this case we obtain the following:

• Expectation step: For each $i \in [m]$ and $y \in [k]$ we have that
  $$P_{\theta^{(t)}}[Y = y | X = x_i] = \frac{1}{Z_i}\, P_{\theta^{(t)}}[Y = y]\, P_{\theta^{(t)}}[X = x_i | Y = y]
   = \frac{1}{Z_i}\, c_y^{(t)} \exp\left(-\frac{1}{2}\left\|x_i - \mu_y^{(t)}\right\|^2\right), \qquad (24.12)$$
  where $Z_i$ is a normalization factor which ensures that $\sum_y P_{\theta^{(t)}}[Y = y | X = x_i]$ sums to 1.
• Maximization step: We need to set $\theta^{(t+1)}$ to be a maximizer of Equation (24.11),
  which in our case amounts to maximizing the following expression w.r.t. c
  and $\mu$:
  $$\sum_{i=1}^{m} \sum_{y=1}^{k} P_{\theta^{(t)}}[Y = y | X = x_i] \left(\log(c_y) - \frac{1}{2}\|x_i - \mu_y\|^2\right). \qquad (24.13)$$
  Comparing the derivative of Equation (24.13) w.r.t. $\mu_y$ to zero and rear-
  ranging terms we obtain:
  $$\mu_y = \frac{\sum_{i=1}^{m} P_{\theta^{(t)}}[Y = y | X = x_i]\, x_i}{\sum_{i=1}^{m} P_{\theta^{(t)}}[Y = y | X = x_i]}.$$
  That is, $\mu_y$ is a weighted average of the $x_i$ where the weights are according
  to the probabilities calculated in the E step. To find the optimal c we need
  to be more careful since we must ensure that c is a probability vector. In
  Exercise 3 we show that the solution is:
  $$c_y = \frac{\sum_{i=1}^{m} P_{\theta^{(t)}}[Y = y | X = x_i]}{\sum_{y'=1}^{k} \sum_{i=1}^{m} P_{\theta^{(t)}}[Y = y' | X = x_i]}. \qquad (24.14)$$

It is interesting to compare the preceding algorithm to the k-means algorithm


described in Chapter 22. In the k-means algorithm, we first assign each example
to a cluster according to the distance kxi µy k. Then, we update each center
µy according to the average of the examples assigned to this cluster. In the EM
approach, however, we determine the probability that each example belongs to
each cluster. Then, we update the centers on the basis of a weighted sum over
the entire sample. For this reason, the EM approach for k-means is sometimes
called “soft k-means.”
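Putting Equations (24.12)-(24.14) together, a soft k-means sketch for the identity-covariance case could be written as follows; the random initialization, the fixed number of iterations, and the function name are arbitrary implementation choices rather than part of the algorithm's derivation:

    import numpy as np

    def soft_kmeans(X, k, n_iters=100, seed=0):
        # EM for a mixture of k unit-covariance Gaussians (Equations (24.12)-(24.14)).
        rng = np.random.default_rng(seed)
        m, d = X.shape
        mu = X[rng.choice(m, k, replace=False)]          # initialize centers at random data points
        c = np.full(k, 1.0 / k)
        for _ in range(n_iters):
            # E step: Q[i, y] proportional to c_y * exp(-0.5 * ||x_i - mu_y||^2)
            dist2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
            logits = np.log(c)[None, :] - 0.5 * dist2
            logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
            Q = np.exp(logits)
            Q /= Q.sum(axis=1, keepdims=True)
            # M step: weighted means and mixture weights
            mu = (Q.T @ X) / Q.sum(axis=0)[:, None]
            c = Q.sum(axis=0) / m
        return c, mu, Q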

24.5 Bayesian Reasoning

The maximum likelihood estimator follows a frequentist approach. This means


that we refer to the parameter ✓ as a fixed parameter and the only problem is
that we do not know its value. A di↵erent approach to parameter estimation
is called Bayesian reasoning. In the Bayesian approach, our uncertainty about
✓ is also modeled using probability theory. That is, we think of ✓ as a random
variable as well and refer to the distribution P[✓] as a prior distribution. As its
name indicates, the prior distribution should be defined by the learner prior to
observing the data.
As an example, let us consider again the drug company which developed a
new drug. On the basis of past experience, the statisticians at the drug company
believe that whenever a drug has reached the level of clinic experiments on
people, it is likely to be e↵ective. They model this prior belief by defining a
density distribution on $\theta$ such that
$$\mathbb{P}[\theta] = \begin{cases} 0.8 & \text{if } \theta > 0.5 \\ 0.2 & \text{if } \theta \le 0.5 \end{cases} \qquad (24.15)$$

As before, given a specific value of $\theta$, it is assumed that the conditional proba-
bility, $\mathbb{P}[X = x | \theta]$, is known. In the drug company example, X takes values in
$\{0, 1\}$ and $\mathbb{P}[X = x | \theta] = \theta^x (1 - \theta)^{1 - x}$.
Once the prior distribution over ✓ and the conditional distribution over X
given ✓ are defined, we again have complete knowledge of the distribution over
X. This is because we can write the probability over X as a marginal probability
$$\mathbb{P}[X = x] = \sum_\theta \mathbb{P}[X = x, \theta] = \sum_\theta \mathbb{P}[\theta]\,\mathbb{P}[X = x | \theta],$$
where the last equality follows from the definition of conditional probability. If
$\theta$ is continuous we replace $\mathbb{P}[\theta]$ with the density function and the sum becomes
an integral:
$$\mathbb{P}[X = x] = \int \mathbb{P}[\theta]\,\mathbb{P}[X = x | \theta]\, d\theta.$$

Seemingly, once we know P[X = x], a training set S = (x1 , . . . , xm ) tells us


nothing as we are already experts who know the distribution over a new point
X. However, the Bayesian view introduces dependency between S and X. This is
because we now refer to ✓ as a random variable. A new point X and the previous
points in S are independent only conditioned on ✓. This is di↵erent from the
frequentist philosophy in which ✓ is a parameter that we might not know, but
since it is just a parameter of the distribution, a new point X and previous points
S are always independent.
In the Bayesian framework, since X and S are not independent anymore, what
we would like to calculate is the probability of X given S, which by the chain
rule can be written as follows:
$$\mathbb{P}[X = x | S] = \sum_\theta \mathbb{P}[X = x | \theta, S]\, \mathbb{P}[\theta | S] = \sum_\theta \mathbb{P}[X = x | \theta]\, \mathbb{P}[\theta | S].$$
The second equality follows from the assumption that X and S are independent
when we condition on $\theta$. Using Bayes' rule we have
$$\mathbb{P}[\theta | S] = \frac{\mathbb{P}[S | \theta]\, \mathbb{P}[\theta]}{\mathbb{P}[S]},$$
and together with the assumption that points are independent conditioned on $\theta$,
we can write
$$\mathbb{P}[\theta | S] = \frac{\mathbb{P}[S | \theta]\, \mathbb{P}[\theta]}{\mathbb{P}[S]} = \frac{1}{\mathbb{P}[S]} \prod_{i=1}^{m} \mathbb{P}[X = x_i | \theta]\, \mathbb{P}[\theta].$$
We therefore obtain the following expression for Bayesian prediction:
$$\mathbb{P}[X = x | S] = \frac{1}{\mathbb{P}[S]} \sum_\theta \mathbb{P}[X = x | \theta] \prod_{i=1}^{m} \mathbb{P}[X = x_i | \theta]\, \mathbb{P}[\theta]. \qquad (24.16)$$

Getting back to our drug company example, we can rewrite $\mathbb{P}[X = x | S]$ as
$$\mathbb{P}[X = x | S] = \frac{1}{\mathbb{P}[S]} \int \theta^{x + \sum_i x_i} (1 - \theta)^{1 - x + \sum_i (1 - x_i)}\, \mathbb{P}[\theta]\, d\theta.$$

It is interesting to note that when $\mathbb{P}[\theta]$ is uniform we obtain that
$$\mathbb{P}[X = x | S] \propto \int \theta^{x + \sum_i x_i} (1 - \theta)^{1 - x + \sum_i (1 - x_i)}\, d\theta.$$
Solving the preceding integral (using integration by parts) we obtain
$$\mathbb{P}[X = 1 | S] = \frac{\left(\sum_i x_i\right) + 1}{m + 2}.$$
Recall that the prediction according to the maximum likelihood principle in this
case is $\mathbb{P}[X = 1 | \hat{\theta}] = \frac{\sum_i x_i}{m}$. The Bayesian prediction with uniform prior is rather
similar to the maximum likelihood prediction, except it adds “pseudoexamples”
to the training set, thus biasing the prediction toward the uniform prior.
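The difference is easiest to see on an all-zeros sample, as in the overfitting example of Section 24.1.3. The following short sketch (the sample itself is an arbitrary illustration) contrasts the two predictions:

    import numpy as np

    S = np.array([0, 0, 0, 0, 0])              # a small sample with no positive outcomes

    theta_ml = S.mean()                        # maximum likelihood: 0
    p_bayes = (S.sum() + 1) / (len(S) + 2)     # Bayesian prediction with a uniform prior

    print(theta_ml, p_bayes)                   # 0.0 versus about 0.14
    # The Bayesian predictor never assigns probability 0, so its log-loss stays finite.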

Maximum A Posteriori
In many situations, it is difficult to find a closed form solution to the integral
given in Equation (24.16). Several numerical methods can be used to approxi-
mate this integral. Another popular solution is to find a single ✓ which maximizes
P[✓|S]. The value of ✓ which maximizes P[✓|S] is called the Maximum A Poste-
riori estimator. Once this value is found, we can calculate the probability that
X = x given the maximum a posteriori estimator and independently on S.

24.6 Summary

In the generative approach to machine learning we aim at modeling the distri-


bution over the data. In particular, in parametric density estimation we further
assume that the underlying distribution over the data has a specific paramet-
ric form and our goal is to estimate the parameters of the model. We have
described several principles for parameter estimation, including maximum like-
lihood, Bayesian estimation, and maximum a posteriori. We have also described
several specific algorithms for implementing the maximum likelihood under dif-
ferent assumptions on the underlying data distribution, in particular, Naive
Bayes, LDA, and EM.

24.7 Bibliographic Remarks

The maximum likelihood principle was studied by Ronald Fisher in the beginning
of the 20th century. Bayesian statistics follow Bayes’ rule, which is named after
the 18th century English mathematician Thomas Bayes.
There are many excellent books on the generative and Bayesian approaches
to machine learning. See, for example, (Bishop 2006, Koller & Friedman 2009,
MacKay 2003, Murphy 2012, Barber 2012).

24.8 Exercises

1. Prove that the maximum likelihood estimator of the variance of a Gaussian


variable is biased.
2. Regularization for Maximum Likelihood: Consider the following regularized
   loss minimization:
   $$\frac{1}{m}\sum_{i=1}^{m} \log(1/P_\theta[x_i]) + \frac{1}{m}\left(\log(1/\theta) + \log(1/(1 - \theta))\right).$$
   • Show that the preceding objective is equivalent to the usual empirical error
     had we added two pseudoexamples to the training set. Conclude that
     the regularized maximum likelihood estimator would be
     $$\hat{\theta} = \frac{1}{m + 2}\left(1 + \sum_{i=1}^{m} x_i\right).$$
   • Derive a high probability bound on $|\hat{\theta} - \theta^\star|$. Hint: Rewrite this as $|\hat{\theta} - \mathbb{E}[\hat{\theta}] + \mathbb{E}[\hat{\theta}] - \theta^\star|$ and then use the triangle inequality and Hoeffding's inequality.
   • Use this to bound the true risk. Hint: Use the fact that now $\hat{\theta} \ge \frac{1}{m+2}$ to
     relate $|\hat{\theta} - \theta^\star|$ to the relative entropy.
3. • Consider a general optimization problem of the form:
     $$\max_c \sum_{y=1}^{k} \nu_y \log(c_y) \quad \text{s.t.} \quad c_y > 0,\; \sum_y c_y = 1,$$
     where $\nu \in \mathbb{R}^k_+$ is a vector of nonnegative weights. Verify that the M step
     of soft k-means involves solving such an optimization problem.
   • Let $c^\star = \frac{1}{\sum_y \nu_y}\, \nu$. Show that $c^\star$ is a probability vector.
   • Show that the optimization problem is equivalent to the problem:
     $$\min_c D_{\mathrm{RE}}(c^\star \,\|\, c) \quad \text{s.t.} \quad c_y > 0,\; \sum_y c_y = 1.$$
   • Using properties of the relative entropy, conclude that $c^\star$ is the solution to
     the optimization problem.
25 Feature Selection and Generation

In the beginning of the book, we discussed the abstract model of learning, in


which the prior knowledge utilized by the learner is fully encoded by the choice
of the hypothesis class. However, there is another modeling choice, which we
have so far ignored: How do we represent the instance space X ? For example, in
the papayas learning problem, we proposed the hypothesis class of rectangles in
the softness-color two dimensional plane. That is, our first modeling choice was
to represent a papaya as a two dimensional point corresponding to its softness
and color. Only after that did we choose the hypothesis class of rectangles as a
class of mappings from the plane into the label set. The transformation from the
real world object “papaya” into the scalar representing its softness or its color
is called a feature function or a feature for short; namely, any measurement of
the real world object can be regarded as a feature. If X is a subset of a vector
space, each x 2 X is sometimes referred to as a feature vector. It is important to
understand that the way we encode real world objects as an instance space X is
by itself prior knowledge about the problem.
Furthermore, even when we already have an instance space X which is rep-
resented as a subset of a vector space, we might still want to change it into a
di↵erent representation and apply a hypothesis class on top of it. That is, we
may define a hypothesis class on X by composing some class H on top of a
feature function which maps X into some other vector space X 0 . We have al-
ready encountered examples of such compositions – in Chapter 15 we saw that
kernel-based SVM learns a composition of the class of halfspaces over a feature
mapping that maps each original instance in X into some Hilbert space. And,
indeed, the choice of is another form of prior knowledge we impose on the
problem.
In this chapter we study several methods for constructing a good feature set.
We start with the problem of feature selection, in which we have a large pool
of features and our goal is to select a small number of features that will be
used by our predictor. Next, we discuss feature manipulations and normalization.
These include simple transformations that we apply on our original features. Such
transformations may decrease the sample complexity of our learning algorithm,
its bias, or its computational complexity. Last, we discuss several approaches for
feature learning. In these methods, we try to automate the process of feature
construction.


We emphasize that while there are some common techniques for feature learn-
ing one may want to try, the No-Free-Lunch theorem implies that there is no ulti-
mate feature learner. Any feature learning algorithm might fail on some problem.
In other words, the success of each feature learner relies (sometimes implicitly)
on some form of prior assumption on the data distribution. Furthermore, the
relative quality of features highly depends on the learning algorithm we are later
going to apply using these features. This is illustrated in the following example.
Example 25.1 Consider a regression problem in which X = R2 , Y = R, and
the loss function is the squared loss. Suppose that the underlying distribution
is such that an example (x, y) is generated as follows: First, we sample $x_1$ from
the uniform distribution over $[-1, 1]$. Then, we deterministically set $y = x_1^2$.
Finally, the second feature is set to be $x_2 = y + z$, where z is sampled from the
uniform distribution over $[-0.01, 0.01]$. Suppose we would like to choose a single
feature. Intuitively, the first feature should be preferred over the second feature
as the target can be perfectly predicted based on the first feature alone, while it
cannot be perfectly predicted based on the second feature. Indeed, choosing the
first feature would be the right choice if we are later going to apply polynomial
regression of degree at least 2. However, if the learner is going to be a linear
regressor, then we should prefer the second feature over the first one, since the
optimal linear predictor based on the first feature will have a larger risk than
the optimal linear predictor based on the second feature.
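The point of the example can be checked with a short simulation (a sketch; the sample size and helper name are arbitrary choices). We fit the best linear predictor on each single feature and compare empirical risks:

    import numpy as np

    rng = np.random.default_rng(0)
    m = 10_000
    x1 = rng.uniform(-1, 1, m)
    y = x1 ** 2
    x2 = y + rng.uniform(-0.01, 0.01, m)

    def lin_reg_risk(v, y):
        # squared risk of the best linear predictor a*v + b, fit by least squares
        A = np.column_stack([v, np.ones_like(v)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.mean((A @ coef - y) ** 2)

    print(lin_reg_risk(x1, y))          # large: y is not a linear function of x1
    print(lin_reg_risk(x2, y))          # tiny: x2 is almost equal to y
    print(lin_reg_risk(x1 ** 2, y))     # essentially zero: a degree-2 predictor on x1 is perfect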

25.1 Feature Selection

Throughout this section we assume that X = Rd . That is, each instance is repre-
sented as a vector of d features. Our goal is to learn a predictor that only relies
on $k \ll d$ features. Predictors that use only a small subset of features require a
smaller memory footprint and can be applied faster. Furthermore, in applications
such as medical diagnostics, obtaining each possible “feature” (e.g., test result)
can be costly; therefore, a predictor that uses only a small number of features
is desirable even at the cost of a small degradation in performance, relative to
a predictor that uses more features. Finally, constraining the hypothesis class to
use a small subset of features can reduce its estimation error and thus prevent
overfitting.
Ideally, we could have tried all subsets of k out of d features and choose the
subset which leads to the best performing predictor. However, such an exhaustive
search is usually computationally intractable. In the following we describe three
computationally feasible approaches for feature selection. While these methods
cannot guarantee finding the optimal subset, they often work reasonably well in
practice. Some of the methods come with formal guarantees on the quality of the
selected subsets under certain assumptions. We do not discuss these guarantees
here.

25.1.1 Filters
Maybe the simplest approach for feature selection is the filter method, in which
we assess individual features, independently of other features, according to some
quality measure. We can then select the k features that achieve the highest score
(alternatively, decide also on the number of features to select according to the
value of their scores).
Many quality measures for features have been proposed in the literature.
Maybe the most straightforward approach is to set the score of a feature ac-
cording to the error rate of a predictor that is trained solely by that feature.
To illustrate this, consider a linear regression problem with the squared loss.
Let v = (x1,j , . . . , xm,j ) 2 Rm be a vector designating the values of the jth
feature on a training set of m examples and let y = (y1 , . . . , ym ) 2 Rm be the
values of the target on the same m examples. The empirical squared loss of an
ERM linear predictor that uses only the jth feature would be
$$\min_{a, b \in \mathbb{R}} \frac{1}{m} \|a v + b - y\|^2,$$
where the meaning of adding a scalar b to a vector v is adding b to all coordinates
of v. To solve this problem, let $\bar{v} = \frac{1}{m}\sum_{i=1}^{m} v_i$ be the averaged value of the
feature and let $\bar{y} = \frac{1}{m}\sum_{i=1}^{m} y_i$ be the averaged value of the target. Clearly (see
Exercise 1),
$$\min_{a, b \in \mathbb{R}} \frac{1}{m} \|a v + b - y\|^2 = \min_{a, b \in \mathbb{R}} \frac{1}{m} \|a(v - \bar{v}) + b - (y - \bar{y})\|^2. \qquad (25.1)$$
Taking the derivative of the right-hand side objective with respect to b and
comparing it to zero we obtain that b = 0. Similarly, solving for a (once we know
that b = 0) yields $a = \langle v - \bar{v}, y - \bar{y}\rangle / \|v - \bar{v}\|^2$. Plugging this value back into the
objective we obtain the value
$$\|y - \bar{y}\|^2 - \frac{(\langle v - \bar{v}, y - \bar{y}\rangle)^2}{\|v - \bar{v}\|^2}.$$
Ranking the features according to the minimal loss they achieve is equivalent
to ranking them according to the absolute value of the following score (where
now a higher score yields a better feature):
$$\frac{\langle v - \bar{v}, y - \bar{y}\rangle}{\|v - \bar{v}\|\;\|y - \bar{y}\|} = \frac{\frac{1}{m}\langle v - \bar{v}, y - \bar{y}\rangle}{\sqrt{\frac{1}{m}\|v - \bar{v}\|^2}\;\sqrt{\frac{1}{m}\|y - \bar{y}\|^2}}. \qquad (25.2)$$

The preceding expression is known as Pearson's correlation coefficient. The nu-
merator is the empirical estimate of the covariance of the jth feature and the
target value, $\mathbb{E}[(v - \mathbb{E} v)(y - \mathbb{E} y)]$, while the denominator is the square root of
the empirical estimate for the variance of the jth feature, $\mathbb{E}[(v - \mathbb{E} v)^2]$, times
the square root of the empirical estimate for the variance of the target. Pearson's
coefficient ranges from $-1$ to $1$, where if the Pearson's coefficient is either $1$ or $-1$,
there is a linear mapping from v to y with zero empirical risk.
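In code, the filter method with this score takes only a few lines. The following NumPy sketch (function names are illustrative, and it assumes no feature column is constant, so that no norm in the denominator vanishes) computes Equation (25.2) for each column of a data matrix and keeps the k columns with the largest absolute score:

    import numpy as np

    def pearson_scores(X, y):
        # Equation (25.2) for every column of X; assumes no column of X is constant.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        return (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

    def select_k_features(X, y, k):
        # indices of the k features with the largest |Pearson score|
        return np.argsort(-np.abs(pearson_scores(X, y)))[:k]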

If Pearson’s coefficient equals zero it means that the optimal linear function
from v to y is the all-zeros function, which means that v alone is useless for
predicting y. However, this does not mean that v is a bad feature, as it might
be the case that together with other features v can perfectly predict y. Indeed,
consider a simple example in which the target is generated by the function $y =
x_1 + 2x_2$. Assume also that $x_1$ is generated from the uniform distribution over
$\{\pm 1\}$, and $x_2 = -\frac{1}{2}x_1 + \frac{1}{2}z$, where z is also generated i.i.d. from the uniform
distribution over $\{\pm 1\}$. Then, $\mathbb{E}[x_1] = \mathbb{E}[x_2] = \mathbb{E}[y] = 0$, and we also have
$$\mathbb{E}[y x_1] = \mathbb{E}[x_1^2] + 2\,\mathbb{E}[x_2 x_1] = \mathbb{E}[x_1^2] - \mathbb{E}[x_1^2] + \mathbb{E}[z x_1] = 0.$$
Therefore, for a large enough training set, the first feature is likely to have a
Pearson’s correlation coefficient that is close to zero, and hence it will most
probably not be selected. However, no function can predict the target value well
without knowing the first feature.
There are many other score functions that can be used by a filter method.
Notable examples are estimators of the mutual information or the area under
the receiver operating characteristic (ROC) curve. All of these score functions
su↵er from similar problems to the one illustrated previously. We refer the reader
to Guyon & Elissee↵ (2003).

25.1.2 Greedy Selection Approaches


Greedy selection is another popular approach for feature selection. Unlike filter
methods, greedy selection approaches are coupled with the underlying learning
algorithm. The simplest instance of greedy selection is forward greedy selection.
We start with an empty set of features, and then we gradually add one feature
at a time to the set of selected features. Given that our current set of selected
features is I, we go over all i 2
/ I, and apply the learning algorithm on the set
of features I [ {i}. Each such application yields a di↵erent predictor, and we
choose to add the feature that yields the predictor with the smallest risk (on
the training set or on a validation set). This process continues until we either
select k features, where k is a predefined budget of allowed features, or achieve
an accurate enough predictor.
Example 25.2 (Orthogonal Matching Pursuit) To illustrate the forward
greedy selection approach, we specify it to the problem of linear regression with
the squared loss. Let X 2 Rm,d be a matrix whose rows are the m training
instances. Let y 2 Rm be the vector of the m labels. For every i 2 [d], let Xi
be the ith column of X. Given a set I ⇢ [d] we denote by XI the matrix whose
columns are {Xi : i 2 I}.
The forward greedy selection method starts with $I_0 = \emptyset$. At iteration t, we
look for the feature index $j_t$, which is in
$$\operatorname*{argmin}_j\; \min_{w \in \mathbb{R}^t} \|X_{I_{t-1} \cup \{j\}}\, w - y\|^2.$$
Then, we update $I_t = I_{t-1} \cup \{j_t\}$.


We now describe a more efficient implementation of the forward greedy selec-
tion approach for linear regression which is called Orthogonal Matching Pursuit
(OMP). The idea is to keep an orthogonal basis of the features aggregated so
far. Let Vt be a matrix whose columns form an orthonormal basis of the columns
of XIt .
Clearly,
$$\min_w \|X_{I_t} w - y\|^2 = \min_{\theta \in \mathbb{R}^t} \|V_t \theta - y\|^2.$$

We will maintain a vector $\theta_t$ which minimizes the right-hand side of the equation.
Initially, we set $I_0 = \emptyset$, $V_0 = \emptyset$, and $\theta_1$ to be the empty vector. At round t, for
every j, we decompose $X_j = v_j + u_j$ where $v_j = V_{t-1} V_{t-1}^\top X_j$ is the projection
of $X_j$ onto the subspace spanned by $V_{t-1}$ and $u_j$ is the part of $X_j$ orthogonal to
$V_{t-1}$ (see Appendix C). Then, since $u_j$ is orthogonal to the range of $V_{t-1}$,
$$\begin{aligned}
\min_{\theta, \alpha}\; \|V_{t-1}\theta + \alpha u_j - y\|^2
 &= \min_{\theta, \alpha}\left[\|V_{t-1}\theta - y\|^2 + \alpha^2 \|u_j\|^2 + 2\alpha \langle u_j, V_{t-1}\theta - y\rangle\right] \\
 &= \min_{\theta, \alpha}\left[\|V_{t-1}\theta - y\|^2 + \alpha^2 \|u_j\|^2 - 2\alpha \langle u_j, y\rangle\right] \\
 &= \min_\theta\left[\|V_{t-1}\theta - y\|^2\right] + \min_\alpha\left[\alpha^2 \|u_j\|^2 - 2\alpha \langle u_j, y\rangle\right] \\
 &= \|V_{t-1}\theta_{t-1} - y\|^2 + \min_\alpha\left[\alpha^2 \|u_j\|^2 - 2\alpha \langle u_j, y\rangle\right] \\
 &= \|V_{t-1}\theta_{t-1} - y\|^2 - \frac{(\langle u_j, y\rangle)^2}{\|u_j\|^2}.
\end{aligned}$$

It follows that we should select the feature
$$j_t = \operatorname*{argmax}_j \frac{(\langle u_j, y\rangle)^2}{\|u_j\|^2}.$$
The rest of the update is to set
$$V_t = \left[V_{t-1},\; \frac{u_{j_t}}{\|u_{j_t}\|_2}\right], \qquad \theta_t = \left[\theta_{t-1};\; \frac{\langle u_{j_t}, y\rangle}{\|u_{j_t}\|_2}\right].$$

The OMP procedure maintains an orthonormal basis of the selected features,


where in the preceding description, the orthonormalization property is obtained
by a procedure similar to Gram-Schmidt orthonormalization. In practice, the
Gram-Schmidt procedure is often numerically unstable. In the pseudocode that
follows we use SVD (see Section C.4) at the end of each round to obtain an
orthonormal basis in a numerically stable manner.

Orthogonal Matching Pursuit (OMP)

input:
  data matrix $X \in \mathbb{R}^{m,d}$, labels vector $y \in \mathbb{R}^m$,
  budget of features T
initialize: $I_1 = \emptyset$
for $t = 1, \ldots, T$
  use SVD to find an orthonormal basis $V \in \mathbb{R}^{m, t-1}$ of $X_{I_t}$
  (for $t = 1$ set V to be the all zeros matrix)
  foreach $j \in [d] \setminus I_t$ let $u_j = X_j - V V^\top X_j$
  let $j_t = \operatorname*{argmax}_{j \notin I_t : \|u_j\| > 0} \frac{(\langle u_j, y\rangle)^2}{\|u_j\|^2}$
  update $I_{t+1} = I_t \cup \{j_t\}$
output $I_{T+1}$
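A NumPy rendering of this pseudocode might look as follows; the small numerical threshold used to skip residual columns with (numerically) zero norm is an implementation detail, not part of the pseudocode:

    import numpy as np

    def omp(X, y, T):
        # Orthogonal Matching Pursuit, following the pseudocode above.
        m, d = X.shape
        I = []
        V = np.zeros((m, 0))                     # orthonormal basis of the selected columns (empty at t = 1)
        for _ in range(T):
            U = X - V @ (V.T @ X)                # parts of the columns orthogonal to span(V)
            norms = np.linalg.norm(U, axis=0)
            scores = np.full(d, -np.inf)
            ok = norms > 1e-12                   # skip columns whose residual is numerically zero
            scores[ok] = (U[:, ok].T @ y) ** 2 / norms[ok] ** 2
            scores[I] = -np.inf                  # never reselect an already chosen feature
            I.append(int(np.argmax(scores)))
            # re-orthonormalize via SVD for numerical stability, as in the pseudocode
            V, _, _ = np.linalg.svd(X[:, I], full_matrices=False)
        return I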

More Efficient Greedy Selection Criteria


Let R(w) be the empirical risk of a vector w. At each round of the forward
greedy selection method, and for every possible j, we should minimize R(w)
over the vectors w whose support is It 1 [ {j}. This might be time consuming.
A simpler approach is to choose $j_t$ to be in
$$\operatorname*{argmin}_j\; \min_{\eta \in \mathbb{R}} R(w_{t-1} + \eta e_j),$$

where ej is the all zeros vector except 1 in the jth element. That is, we keep
the weights of the previously chosen coordinates intact and only optimize over
the new variable. Therefore, for each j we need to solve an optimization problem
over a single variable, which is a much easier task than optimizing over t.
An even simpler approach is to upper bound R(w) using a "simple" function
and then choose the feature which leads to the largest decrease in this upper
bound. For example, if R is a $\beta$-smooth function (see Equation (12.5) in Chap-
ter 12), then
$$R(w + \eta e_j) \le R(w) + \eta\, \frac{\partial R(w)}{\partial w_j} + \beta \eta^2 / 2.$$
Minimizing the right-hand side over $\eta$ yields $\eta = -\frac{\partial R(w)}{\partial w_j} \cdot \frac{1}{\beta}$, and plugging this
value into the above yields
$$R(w + \eta e_j) \le R(w) - \frac{1}{2\beta}\left(\frac{\partial R(w)}{\partial w_j}\right)^2.$$

This value is minimized if the partial derivative of R(w) with respect to wj is


maximal. We can therefore choose jt to be the index of the largest coordinate of
the gradient of R(w) at w.
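In code, this criterion reduces to a single argmax over the absolute entries of the gradient (a sketch; computing the gradient itself depends on the chosen loss, and the function name is illustrative):

    import numpy as np

    def next_feature_by_gradient(grad, I):
        # Pick the unselected coordinate with the largest |partial derivative| of R at w.
        scores = np.abs(np.asarray(grad, dtype=float))
        scores[list(I)] = -np.inf                # exclude already selected features
        return int(np.argmax(scores))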
Remark 25.3 (AdaBoost as a Forward Greedy Selection Procedure) It is pos-
sible to interpret the AdaBoost algorithm from Chapter 10 as a forward greedy

selection procedure with respect to the function
$$R(w) = \log\left(\sum_{i=1}^{m} \exp\left(-y_i \sum_{j=1}^{d} w_j h_j(x_i)\right)\right). \qquad (25.3)$$

See Exercise 3.

Backward Elimination
Another popular greedy selection approach is backward elimination. Here, we
start with the full set of features, and then we gradually remove one feature at a
time from the set of features. Given that our current set of selected features is I,
we go over all i 2 I, and apply the learning algorithm on the set of features I \{i}.
Each such application yields a di↵erent predictor, and we choose to remove the
feature i for which the predictor obtained from I \ {i} has the smallest risk (on
the training set or on a validation set).
Naturally, there are many possible variants of the backward elimination idea.
It is also possible to combine forward and backward greedy steps.

25.1.3 Sparsity-Inducing Norms


The problem of minimizing the empirical risk subject to a budget of k features
can be written as
$$\min_w L_S(w) \quad \text{s.t.} \quad \|w\|_0 \le k,$$
where$^1$
$$\|w\|_0 = |\{i : w_i \ne 0\}|.$$

In other words, we want w to be sparse, which implies that we only need to


measure the features corresponding to nonzero elements of w.
Solving this optimization problem is computationally hard (Natarajan 1995,
Davis, Mallat & Avellaneda 1997). A possible relaxation is to replace the non-
convex function $\|w\|_0$ with the $\ell_1$ norm, $\|w\|_1 = \sum_{i=1}^{d} |w_i|$, and to solve the
problem
$$\min_w L_S(w) \quad \text{s.t.} \quad \|w\|_1 \le k_1, \qquad (25.4)$$
where $k_1$ is a parameter. Since the $\ell_1$ norm is a convex function, this problem
can be solved efficiently as long as the loss function is convex. A related problem
is minimizing the sum of $L_S(w)$ plus an $\ell_1$ norm regularization term,
$$\min_w \left(L_S(w) + \lambda \|w\|_1\right), \qquad (25.5)$$
where $\lambda$ is a regularization parameter. Since for any $k_1$ there exists a $\lambda$ such that
$^1$ The function $\|\cdot\|_0$ is often referred to as the $\ell_0$ norm. Despite the use of the "norm"
  notation, $\|\cdot\|_0$ is not really a norm; for example, it does not satisfy the positive
  homogeneity property of norms, $\|a w\|_0 \ne |a|\, \|w\|_0$.

Equation (25.4) and Equation (25.5) lead to the same solution, the two problems
are in some sense equivalent.
The $\ell_1$ regularization often induces sparse solutions. To illustrate this, let us
start with the simple optimization problem
$$\min_{w \in \mathbb{R}} \left( \tfrac{1}{2} w^2 - xw + \lambda |w| \right). \tag{25.6}$$
It is easy to verify (see Exercise 2) that the solution to this problem is the "soft
thresholding" operator
$$w = \operatorname{sign}(x)\,\bigl[\,|x| - \lambda\,\bigr]_{+}, \tag{25.7}$$
where $[a]_+ \stackrel{\text{def}}{=} \max\{a, 0\}$. That is, as long as the absolute value of $x$ is smaller
than $\lambda$, the optimal solution will be zero.
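As a quick illustration (ours, not from the book), the soft thresholding operator of Equation (25.7) is a one-liner in NumPy; inputs with magnitude below $\lambda$ are mapped to exactly zero, and larger inputs are shrunk toward zero.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft thresholding operator: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Entries with |x| <= 0.5 become zero; the rest are shrunk by 0.5.
print(soft_threshold(np.array([-2.0, -0.3, 0.1, 0.75, 3.0]), lam=0.5))
```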
Next, consider a one-dimensional regression problem with respect to the squared
loss:
$$\operatorname*{argmin}_{w \in \mathbb{R}} \left( \frac{1}{2m} \sum_{i=1}^{m} (x_i w - y_i)^2 + \lambda |w| \right).$$
We can rewrite the problem as
$$\operatorname*{argmin}_{w \in \mathbb{R}} \left( \frac{1}{2}\Bigl(\tfrac{1}{m}\sum_{i} x_i^2\Bigr) w^2 - \Bigl(\tfrac{1}{m}\sum_{i=1}^{m} x_i y_i\Bigr) w + \lambda |w| \right).$$
For simplicity let us assume that $\frac{1}{m}\sum_i x_i^2 = 1$, and denote $\langle x, y\rangle = \sum_{i=1}^{m} x_i y_i$;
then the optimal solution is
$$w = \operatorname{sign}(\langle x, y\rangle)\,\bigl[\,|\langle x, y\rangle|/m - \lambda\,\bigr]_{+}.$$
That is, the solution will be zero unless the correlation between the feature $x$
and the labels vector $y$ is larger than $\lambda$.
Remark 25.4 Unlike the $\ell_1$ norm, the $\ell_2$ norm does not induce sparse solutions.
Indeed, consider the problem above with an $\ell_2$ regularization, namely,
$$\operatorname*{argmin}_{w \in \mathbb{R}} \left( \frac{1}{2m} \sum_{i=1}^{m} (x_i w - y_i)^2 + \lambda w^2 \right).$$
Then, the optimal solution is
$$w = \frac{\langle x, y\rangle / m}{\|x\|^2/m + 2\lambda}.$$
This solution will be nonzero even if the correlation between $x$ and $y$ is very small.
In contrast, as we have shown before, when using $\ell_1$ regularization, $w$ will be
nonzero only if the correlation between $x$ and $y$ is larger than the regularization
parameter $\lambda$.

Adding $\ell_1$ regularization to a linear regression problem with the squared loss
yields the LASSO algorithm, defined as
$$\operatorname*{argmin}_{w} \left( \frac{1}{2m} \|Xw - y\|^2 + \lambda \|w\|_1 \right). \tag{25.8}$$
Under some assumptions on the distribution and the regularization parameter
$\lambda$, the LASSO will find sparse solutions (see, for example, (Zhao & Yu 2006)
and the references therein). Another advantage of the $\ell_1$ norm is that a vector
with low $\ell_1$ norm can be "sparsified" (see, for example, (Shalev-Shwartz, Zhang
& Srebro 2010) and the references therein).
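The soft thresholding operator also gives a simple, if not the fastest, way to solve Equation (25.8): proximal gradient descent (often called ISTA) alternates a gradient step on the squared loss with coordinate-wise soft thresholding. The sketch below is ours, not the book's; the step size is the inverse smoothness constant of the squared-loss term.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iters=500):
    """Minimize (1/(2m))||Xw - y||^2 + lam * ||w||_1 by iterative soft
    thresholding (a sketch, not a production solver)."""
    m, d = X.shape
    step = m / (np.linalg.norm(X, 2) ** 2)     # 1/L, L = ||X||_2^2 / m
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / m            # gradient of the squared loss
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return w
```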

25.2 Feature Manipulation and Normalization

Feature manipulations or normalization include simple transformations that we


apply on each of our original features. Such transformations may decrease the
approximation or estimation errors of our hypothesis class or can yield a faster
algorithm. Similarly to the problem of feature selection, here again there are no
absolute “good” and “bad” transformations, but rather each transformation that
we apply should be related to the learning algorithm we are going to apply on
the resulting feature vector as well as to our prior assumptions on the problem.
To motivate normalization, consider a linear regression problem with the
squared loss. Let $X \in \mathbb{R}^{m,d}$ be a matrix whose rows are the instance vectors
and let $y \in \mathbb{R}^m$ be a vector of target values. Recall that ridge regression returns
the vector
$$\operatorname*{argmin}_{w} \left( \frac{1}{m} \|Xw - y\|^2 + \lambda\|w\|^2 \right) = (2\lambda m I + X^{\top}X)^{-1} X^{\top} y.$$
Suppose that $d = 2$ and the underlying data distribution is as follows. First we
sample $y$ uniformly at random from $\{\pm 1\}$. Then, we set $x_1$ to be $y + 0.5\alpha$, where
$\alpha$ is sampled uniformly at random from $\{\pm 1\}$, and we set $x_2$ to be $0.0001y$. Note
that the optimal weight vector is $w^\star = [0; 10000]$, and $L_{\mathcal D}(w^\star) = 0$. However,
the objective of ridge regression at $w^\star$ is $\lambda\, 10^8$. In contrast, the objective of ridge
regression at $w = [1; 0]$ is likely to be close to $0.25 + \lambda$. It follows that whenever
$\lambda > \frac{0.25}{10^8 - 1} \approx 0.25 \times 10^{-8}$, the objective of ridge regression is smaller at the
suboptimal solution $w = [1; 0]$. Since typically $\lambda$ should be at least $1/m$ (see
the analysis in Chapter 13), it follows that in the aforementioned example, if the
number of examples is smaller than $10^8$ then we are likely to output a suboptimal
solution.
The crux of the preceding example is that the two features have completely
different scales. Feature normalization can overcome this problem. There are
many ways to perform feature normalization, and one of the simplest approaches
is simply to make sure that each feature receives values between $-1$ and $1$. In
the preceding example, if we divide each feature by the maximal value it attains
we will obtain that $x_1 = \frac{y + 0.5\alpha}{1.5}$ and $x_2 = y$. Then, for $\lambda \le 10^{-3}$ the solution of
ridge regression is quite close to $w^\star$.
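A small numerical experiment (ours, not from the book) makes this concrete: with the raw features, ridge regression with a moderate $\lambda$ prefers the suboptimal direction, while after rescaling each feature by the maximal absolute value it attains the solution is close to the true predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10_000
y = rng.choice([-1.0, 1.0], size=m)
alpha = rng.choice([-1.0, 1.0], size=m)
X = np.column_stack([y + 0.5 * alpha, 0.0001 * y])   # two badly scaled features

def ridge(X, y, lam):
    m, d = X.shape
    # Closed form from the text: (2*lam*m*I + X^T X)^{-1} X^T y
    return np.linalg.solve(2 * lam * m * np.eye(d) + X.T @ X, X.T @ y)

lam = 1.0 / m
print("raw features:       ", ridge(X, y, lam))        # far from w* = [0, 10000]
X_norm = X / np.abs(X).max(axis=0)                      # divide by max value attained
print("normalized features:", ridge(X_norm, y, lam))    # close to [0, 1]
```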
Moreover, the generalization bounds we have derived in Chapter 13 for regularized loss minimization depend on the norm of the optimal vector $w^\star$ and
on the maximal norm of the instance vectors.$^2$ Therefore, in the aforementioned
example, before we normalize the features we have that $\|w^\star\|^2 = 10^8$, while after we normalize the features we have that $\|w^\star\|^2 = 1$. The maximal norm of
the instance vector remains roughly the same; hence the normalization greatly
improves the estimation error.
Feature normalization can also improve the runtime of the learning algorithm.
For example, in Section 14.5.3 we have shown how to use the Stochastic Gradient
Descent (SGD) optimization algorithm for solving the regularized loss minimiza-
tion problem. The number of iterations required by SGD to converge also depends
on the norm of w? and on the maximal norm of kxk. Therefore, as before, using
normalization can greatly decrease the runtime of SGD.
Next, we demonstrate how a simple transformation of the features, such as
clipping, can sometimes decrease the approximation error of our hypothesis class.
Consider again linear regression with the squared loss. Let $a > 1$ be a large
number, suppose that the target $y$ is chosen uniformly at random from $\{\pm 1\}$,
and then the single feature $x$ is set to be $y$ with probability $(1 - 1/a)$ and set to
be $ay$ with probability $1/a$. That is, most of the time our feature is bounded but
with a very small probability it gets a very high value. Then, for any $w$, the
expected squared loss of $w$ is
$$L_{\mathcal D}(w) = \tfrac{1}{2}\,\mathbb{E}\,(wx - y)^2 = \left(1 - \tfrac{1}{a}\right)\tfrac{1}{2}(wy - y)^2 + \tfrac{1}{a}\cdot\tfrac{1}{2}(awy - y)^2.$$
Solving for $w$ we obtain that $w^\star = \frac{2a - 1}{a^2 + a - 1}$, which goes to zero as $a$ goes to infinity. Therefore, the objective at $w^\star$ goes to $0.5$ as $a$ goes to infinity. For example,
for $a = 100$ we will obtain $L_{\mathcal D}(w^\star) \approx 0.48$. Next, suppose we apply a "clipping"
transformation; that is, we use the transformation $x \mapsto \operatorname{sign}(x)\min\{1, |x|\}$. Then,
following this transformation, $w^\star$ becomes $1$ and $L_{\mathcal D}(w^\star) = 0$. This simple example shows that a simple transformation can have a significant influence on the
approximation error.
Of course, it is not hard to think of examples in which the same feature trans-
formation actually hurts performance and increases the approximation error.
This is not surprising, as we have already argued that feature transformations
$^2$ More precisely, the bounds we derived in Chapter 13 for regularized loss minimization
depend on $\|w^\star\|^2$ and on either the Lipschitzness or the smoothness of the loss function.
For linear predictors and loss functions of the form $\ell(w,(x,y)) = \phi(\langle w, x\rangle, y)$, where $\phi$ is
convex and either $1$-Lipschitz or $1$-smooth with respect to its first argument, we have that
$\ell$ is either $\|x\|$-Lipschitz or $\|x\|^2$-smooth. For example, for the squared loss,
$\phi(a,y) = \tfrac{1}{2}(a - y)^2$, and $\ell(w,(x,y)) = \tfrac{1}{2}(\langle w, x\rangle - y)^2$ is $\|x\|^2$-smooth with respect to its
first argument.

should rely on our prior assumptions on the problem. In the aforementioned ex-
ample, a prior assumption that may lead us to use the “clipping” transformation
is that features that get values larger than a predefined threshold value give us no
additional useful information, and therefore we can clip them to the predefined
threshold.

25.2.1 Examples of Feature Transformations


We now list several common techniques for feature transformations. Usually, it
is helpful to combine some of these transformations (e.g., centering + scaling).
In the following, we denote by $f = (f_1, \ldots, f_m) \in \mathbb{R}^m$ the values of a feature $f$
over the $m$ training examples. Also, we denote by $\bar f = \frac{1}{m}\sum_{i=1}^{m} f_i$ the empirical
mean of the feature over all examples.

Centering:
This transformation makes the feature have zero mean, by setting $f_i \leftarrow f_i - \bar f$.

Unit Range:
This transformation makes the range of each feature be $[0,1]$. Formally, let
$f_{\max} = \max_i f_i$ and $f_{\min} = \min_i f_i$. Then, we set $f_i \leftarrow \frac{f_i - f_{\min}}{f_{\max} - f_{\min}}$. Similarly,
we can make the range of each feature be $[-1,1]$ by the transformation $f_i \leftarrow
2\,\frac{f_i - f_{\min}}{f_{\max} - f_{\min}} - 1$. Of course, it is easy to make the range $[0,b]$ or $[-b,b]$, where $b$ is
a user-specified parameter.

Standardization:
This transformation makes all features have a zero mean and unit variance.
Formally, let $\nu = \frac{1}{m}\sum_{i=1}^{m}(f_i - \bar f)^2$ be the empirical variance of the feature.
Then, we set $f_i \leftarrow \frac{f_i - \bar f}{\sqrt{\nu}}$.

Clipping:
This transformation clips high or low values of the feature. For example, $f_i \leftarrow
\operatorname{sign}(f_i)\min\{b, |f_i|\}$, where $b$ is a user-specified parameter.

Sigmoidal Transformation:
As its name indicates, this transformation applies a sigmoid function on the
feature. For example, $f_i \leftarrow \frac{1}{1 + \exp(b f_i)}$, where $b$ is a user-specified parameter.
This transformation can be thought of as a "soft" version of clipping: It has a
small effect on values close to zero and behaves similarly to clipping on values
far away from zero.

Logarithmic Transformation:
The transformation is $f_i \leftarrow \log(b + f_i)$, where $b$ is a user-specified parameter. This
is widely used when the feature is a "counting" feature. For example, suppose
that the feature represents the number of appearances of a certain word in a
text document. Then, the difference between zero occurrences of the word and
a single occurrence is much more important than the difference between 1000
occurrences and 1001 occurrences.
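The transformations listed above are one-liners on a feature column. A minimal NumPy sketch (function names are ours, not the book's):

```python
import numpy as np

def center(f):           return f - f.mean()
def unit_range(f):       return (f - f.min()) / (f.max() - f.min())
def standardize(f):      return (f - f.mean()) / f.std()
def clip(f, b):          return np.sign(f) * np.minimum(b, np.abs(f))
def sigmoid(f, b):       return 1.0 / (1.0 + np.exp(b * f))
def log_transform(f, b): return np.log(b + f)

# Example: standardize a counting feature after a log transform.
counts = np.array([0.0, 1.0, 3.0, 1000.0, 1001.0])
print(standardize(log_transform(counts, b=1.0)))
```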
Remark 25.5 In the aforementioned transformations, each feature is trans-
formed on the basis of the values it obtains on the training set, independently
of other features’ values. In some situations we would like to set the parameter
of the transformation on the basis of other features as well. A notable example
is a transformation in which one applies a scaling to the features so that the
empirical average of some norm of the instances becomes 1.

25.3 Feature Learning

So far we have discussed feature selection and manipulations. In these cases, we


start with a predefined vector space Rd , representing our features. Then, we select
a subset of features (feature selection) or transform individual features (feature
transformation). In this section we describe feature learning, in which we start
with some instance space, $\mathcal X$, and would like to learn a function, $\psi : \mathcal X \to \mathbb{R}^d$,
which maps instances in $\mathcal X$ into a representation as $d$-dimensional feature vectors.
The idea of feature learning is to automate the process of finding a good rep-
resentation of the input space. As mentioned before, the No-Free-Lunch theorem
tells us that we must incorporate some prior knowledge on the data distribution
in order to build a good feature representation. In this section we present a few
feature learning approaches and demonstrate conditions on the underlying data
distribution in which these methods can be useful.
Throughout the book we have already seen several useful feature construc-
tions. For example, in the context of polynomial regression, we have mapped the
original instances into the vector space of all their monomials (see Section 9.2.2
in Chapter 9). After performing this mapping, we trained a linear predictor on
top of the constructed features. Automation of this process would be to learn
a transformation $\psi : \mathcal X \to \mathbb{R}^d$, such that the composition of the class of linear
predictors on top of $\psi$ yields a good hypothesis class for the task at hand.
In the following we describe a technique of feature construction called dictio-
nary learning.

25.3.1 Dictionary Learning Using Auto-Encoders


The motivation of dictionary learning stems from a commonly used represen-
tation of documents as a “bag-of-words”: Given a dictionary of words D =
{w1 , . . . , wk }, where each wi is a string representing a word in the dictionary,

and given a document, (p1 , . . . , pd ), where each pi is a word in the document,


we represent the document as a vector x 2 {0, 1}k , where xi is 1 if wi = pj for
some j 2 [d], and xi = 0 otherwise. It was empirically observed in many text
processing tasks that linear predictors are quite powerful when applied on this
representation. Intuitively, we can think of each word as a feature that measures
some aspect of the document. Given labeled examples (e.g., topics of the doc-
uments), a learning algorithm searches for a linear predictor that weights these
features so that a right combination of appearances of words is indicative of the
label.
While in text processing there is a natural meaning to words and to the dic-
tionary, in other applications we do not have such an intuitive representation
of an instance. For example, consider the computer vision application of object
recognition. Here, the instance is an image and the goal is to recognize which
object appears in the image. Applying a linear predictor on the pixel-based rep-
resentation of the image does not yield a good classifier. What we would like
to have is a mapping that would take the pixel-based representation of the
image and would output a bag of “visual words,” representing the content of the
image. For example, a “visual word” can be “there is an eye in the image.” If
we had such representation, we could have applied a linear predictor on top of
this representation to train a classifier for, say, face recognition. Our question is,
therefore, how can we learn a dictionary of “visual words” such that a bag-of-
words representation of an image would be helpful for predicting which object
appears in the image?
A first naive approach for dictionary learning relies on a clustering algorithm
(see Chapter 22). Suppose that we learn a function c : X ! {1, . . . , k}, where
c(x) is the cluster to which x belongs. Then, we can think of the clusters as
“words,” and of instances as “documents,” where a document x is mapped to
the vector (x) 2 {0, 1}k , where (x)i is 1 if and only if x belongs to the ith
cluster. Now, it is straightforward to see that applying a linear predictor on (x)
is equivalent to assigning the same target value to all instances that belong to
the same cluster. Furthermore, if the clustering is based on distances from a
class center (e.g., k-means), then a linear predictor on (x) yields a piece-wise
constant predictor on x.
Both the $k$-means and PCA approaches can be regarded as special cases of a
more general approach for dictionary learning which is called auto-encoders. In an
auto-encoder we learn a pair of functions: an "encoder" function, $\psi : \mathbb{R}^d \to \mathbb{R}^k$,
and a "decoder" function, $\phi : \mathbb{R}^k \to \mathbb{R}^d$. The goal of the learning process is to
find a pair of functions such that the reconstruction error, $\sum_i \|x_i - \phi(\psi(x_i))\|^2$,
is small. Of course, we can trivially set $k = d$ and both $\psi, \phi$ to be the identity
mapping, which yields a perfect reconstruction. We therefore must restrict $\psi$ and
$\phi$ in some way. In PCA, we constrain $k < d$ and further restrict $\psi$ and $\phi$ to be
linear functions. In $k$-means, $k$ is not restricted to be smaller than $d$, but now $\psi$
and $\phi$ rely on $k$ centroids, $\mu_1, \ldots, \mu_k$, and $\psi(x)$ returns an indicator vector
in $\{0,1\}^k$ that indicates the closest centroid to $x$, while $\phi$ takes as input an
indicator vector and returns the centroid representing this vector.
An important property of the $k$-means construction, which is key in allowing
$k$ to be larger than $d$, is that $\psi$ maps instances into sparse vectors. In fact, in
$k$-means only a single coordinate of $\psi(x)$ is nonzero. An immediate extension of
the $k$-means construction is therefore to restrict the range of $\psi$ to be vectors with
at most $s$ nonzero elements, where $s$ is a small integer. In particular, let $\psi$ and $\phi$
be functions that depend on $\mu_1, \ldots, \mu_k$. The function $\psi$ maps an instance vector
$x$ to a vector $\psi(x) \in \mathbb{R}^k$, where $\psi(x)$ should have at most $s$ nonzero elements.
The function $\phi(v)$ is defined to be $\sum_{i=1}^{k} v_i \mu_i$. As before, our goal is to have a
small reconstruction error, and therefore we can define
$$\psi(x) = \operatorname*{argmin}_{v} \|x - \phi(v)\|^2 \quad \text{s.t.} \quad \|v\|_0 \le s,$$
where $\|v\|_0 = |\{j : v_j \neq 0\}|$. Note that when $s = 1$ and we further restrict $\|v\|_1 =
1$ then we obtain the $k$-means encoding function; that is, $\psi(x)$ is the indicator
vector of the centroid closest to $x$. For larger values of $s$, the optimization problem
in the preceding definition of $\psi$ becomes computationally difficult. Therefore, in
practice, we sometimes use $\ell_1$ regularization instead of the sparsity constraint and
define $\psi$ to be
$$\psi(x) = \operatorname*{argmin}_{v} \Bigl[\, \|x - \phi(v)\|^2 + \lambda \|v\|_1 \,\Bigr],$$

where $\lambda > 0$ is a regularization parameter. In any case, the dictionary learning
problem is now to find the vectors $\mu_1, \ldots, \mu_k$ such that the reconstruction error,
$\sum_{i=1}^{m} \|x_i - \phi(\psi(x_i))\|^2$, is as small as possible. Even if $\psi$ is defined using
the $\ell_1$ regularization, this is still a computationally hard problem (similar to
the $k$-means problem). However, several heuristic search algorithms may give
reasonably good solutions. These algorithms are beyond the scope of this book.
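For a fixed dictionary, the $\ell_1$-regularized encoder above is itself a LASSO-type problem, so the same iterative soft thresholding idea applies. The sketch below is ours (not the book's); it assumes the dictionary columns are the vectors $\mu_1, \ldots, \mu_k$ and computes $\psi(x)$ and the reconstruction $\phi(\psi(x))$.

```python
import numpy as np

def encode(x, D, lam, n_iters=200):
    """psi(x): approximately minimize ||x - D v||^2 + lam * ||v||_1
    for a fixed dictionary D (columns are mu_1, ..., mu_k)."""
    step = 1.0 / (2 * np.linalg.norm(D, 2) ** 2)   # inverse smoothness constant
    v = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = 2 * D.T @ (D @ v - x)
        z = v - step * grad
        v = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return v

def decode(v, D):
    """phi(v) = sum_i v_i * mu_i."""
    return D @ v

# Reconstruction error ||x - phi(psi(x))||^2 for one instance x:
# err = np.sum((x - decode(encode(x, D, lam=0.1), D)) ** 2)
```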

25.4 Summary

Many machine learning algorithms take the feature representation of instances


for granted. Yet the choice of representation requires careful attention. We dis-
cussed approaches for feature selection, introducing filters, greedy selection al-
gorithms, and sparsity-inducing norms. Next we presented several examples for
feature transformations and demonstrated their usefulness. Last, we discussed
feature learning, and in particular dictionary learning. We have shown that fea-
ture selection, manipulation, and learning all depend on some prior knowledge
on the data.

25.5 Bibliographic Remarks

Guyon & Elisseeff (2003) surveyed several feature selection procedures, including
many types of filters.
Forward greedy selection procedures for minimizing a convex objective sub-
ject to a polyhedron constraint date back to the Frank-Wolfe algorithm (Frank
& Wolfe 1956). The relation to boosting has been studied by several authors,
including, (Warmuth, Liao & Ratsch 2006, Warmuth, Glocer & Vishwanathan
2008, Shalev-Shwartz & Singer 2008). Matching pursuit has been studied in the
signal processing community (Mallat & Zhang 1993). Several papers analyzed
greedy selection methods under various conditions. See, for example, Shalev-
Shwartz, Zhang & Srebro (2010) and the references therein.
The use of the `1 -norm as a surrogate for sparsity has a long history (e.g. Tib-
shirani (1996) and the references therein), and much work has been done on un-
derstanding the relationship between the `1 -norm and sparsity. It is also closely
related to compressed sensing (see Chapter 23). The ability to sparsify low `1
norm predictors dates back to Maurey (Pisier 1980-1981). In Section 26.4 we
also show that low `1 norm can be used to bound the estimation error of our
predictor.
Feature learning and dictionary learning have been extensively studied recently
in the context of deep neural networks. See, for example, (Lecun & Bengio 1995,
Hinton et al. 2006, Ranzato et al. 2007, Collobert & Weston 2008, Lee et al.
2009, Le et al. 2012, Bengio 2009) and the references therein.

25.6 Exercises

1. Prove the equality given in Equation (25.1). Hint: Let a⇤ , b⇤ be minimizers of


the left-hand side. Find a, b such that the objective value of the right-hand
side is smaller than that of the left-hand side. Do the same for the other
direction.
2. Show that Equation (25.7) is the solution of Equation (25.6).
3. AdaBoost as a Forward Greedy Selection Algorithm: Recall the Ad-
aBoost algorithm from Chapter 10. In this section we give another interpre-
tation of AdaBoost as a forward greedy selection algorithm.
• Given a set of m instances x1 , . . . , xm , and a hypothesis class H of finite
VC dimension, show that there exist d and h1 , . . . , hd such that for every
h 2 H there exists i 2 [d] with hi (xj ) = h(xj ) for every j 2 [m].
• Let $R(w)$ be as defined in Equation (25.3). Given some $w$, define $f_w$ to be
the function
$$f_w(\cdot) = \sum_{i=1}^{d} w_i h_i(\cdot).$$

Let $D$ be the distribution over $[m]$ defined by
$$D_i = \frac{\exp(-y_i f_w(x_i))}{Z},$$
where $Z$ is a normalization factor that ensures that $D$ is a probability
vector. Show that
$$\frac{\partial R(w)}{\partial w_j} = -\sum_{i=1}^{m} D_i\, y_i\, h_j(x_i).$$
Furthermore, denoting $\epsilon_j = \sum_{i=1}^{m} D_i\, 1_{[h_j(x_i) \neq y_i]}$, show that
$$\frac{\partial R(w)}{\partial w_j} = 2\epsilon_j - 1.$$
Conclude that if $\epsilon_j \le 1/2 - \gamma$ then $\frac{\partial R(w)}{\partial w_j} \le -2\gamma$.
• Show that the update of AdaBoost guarantees $R(w^{(t+1)}) - R(w^{(t)}) \le
\log\bigl(\sqrt{1 - 4\gamma^2}\bigr)$. Hint: Use the proof of Theorem 10.2.
Part IV
Advanced Theory
26 Rademacher Complexities

In Chapter 4 we have shown that uniform convergence is a sufficient condition


for learnability. In this chapter we study the Rademacher complexity, which
measures the rate of uniform convergence. We will provide generalization bounds
based on this measure.

26.1 The Rademacher Complexity

Recall the definition of an ✏-representative sample from Chapter 4, repeated here


for convenience.
definition 26.1 ($\epsilon$-Representative Sample) A training set $S$ is called $\epsilon$-representative
(w.r.t. domain $Z$, hypothesis class $\mathcal H$, loss function $\ell$, and distribution $\mathcal D$) if
$$\sup_{h \in \mathcal H} |L_{\mathcal D}(h) - L_S(h)| \le \epsilon.$$

We have shown that if $S$ is an $\epsilon/2$-representative sample then the ERM rule
is $\epsilon$-consistent, namely, $L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) \le \min_{h \in \mathcal H} L_{\mathcal D}(h) + \epsilon$.
To simplify our notation, let us denote
$$\mathcal F \stackrel{\text{def}}{=} \ell \circ \mathcal H = \{z \mapsto \ell(h,z) : h \in \mathcal H\},$$
and given $f \in \mathcal F$, we define
$$L_{\mathcal D}(f) = \operatorname*{\mathbb{E}}_{z \sim \mathcal D}[f(z)], \qquad L_S(f) = \frac{1}{m}\sum_{i=1}^{m} f(z_i).$$

We define the representativeness of $S$ with respect to $\mathcal F$ as the largest gap between the true error of a function $f$ and its empirical error, namely,
$$\operatorname{Rep}_{\mathcal D}(\mathcal F, S) \stackrel{\text{def}}{=} \sup_{f \in \mathcal F}\bigl(L_{\mathcal D}(f) - L_S(f)\bigr). \tag{26.1}$$

Now, suppose we would like to estimate the representativeness of $S$ using the
sample $S$ only. One simple idea is to split $S$ into two disjoint sets, $S = S_1 \cup S_2$;
refer to $S_1$ as a validation set and to $S_2$ as a training set. We can then estimate
the representativeness of $S$ by
$$\sup_{f \in \mathcal F}\bigl(L_{S_1}(f) - L_{S_2}(f)\bigr). \tag{26.2}$$


This can be written more compactly by defining $\sigma = (\sigma_1, \ldots, \sigma_m) \in \{\pm 1\}^m$ to
be a vector such that $S_1 = \{z_i : \sigma_i = 1\}$ and $S_2 = \{z_i : \sigma_i = -1\}$. Then, if we
further assume that $|S_1| = |S_2|$ then Equation (26.2) can be rewritten as
$$\frac{2}{m}\sup_{f \in \mathcal F}\sum_{i=1}^{m} \sigma_i f(z_i). \tag{26.3}$$
The Rademacher complexity measure captures this idea by considering the expectation of the above with respect to a random choice of $\sigma$. Formally, let $\mathcal F \circ S$
be the set of all possible evaluations a function $f \in \mathcal F$ can achieve on a sample
$S$, namely,
$$\mathcal F \circ S = \{(f(z_1), \ldots, f(z_m)) : f \in \mathcal F\}.$$
Let the variables in $\sigma$ be distributed i.i.d. according to $\mathbb{P}[\sigma_i = 1] = \mathbb{P}[\sigma_i = -1] =
\frac{1}{2}$. Then, the Rademacher complexity of $\mathcal F$ with respect to $S$ is defined as follows:
$$R(\mathcal F \circ S) \stackrel{\text{def}}{=} \frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma \sim \{\pm 1\}^m}\left[\sup_{f \in \mathcal F}\sum_{i=1}^{m} \sigma_i f(z_i)\right]. \tag{26.4}$$
More generally, given a set of vectors, $A \subset \mathbb{R}^m$, we define
$$R(A) \stackrel{\text{def}}{=} \frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\left[\sup_{a \in A}\sum_{i=1}^{m} \sigma_i a_i\right]. \tag{26.5}$$

The following lemma bounds the expected value of the representativeness of
$S$ by twice the expected Rademacher complexity.
lemma 26.2
$$\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m}[\operatorname{Rep}_{\mathcal D}(\mathcal F, S)] \le 2\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m} R(\mathcal F \circ S).$$
S⇠D m S⇠D m

Proof Let $S' = \{z'_1, \ldots, z'_m\}$ be another i.i.d. sample. Clearly, for all $f \in \mathcal F$,
$L_{\mathcal D}(f) = \mathbb{E}_{S'}[L_{S'}(f)]$. Therefore, for every $f \in \mathcal F$ we have
$$L_{\mathcal D}(f) - L_S(f) = \operatorname*{\mathbb{E}}_{S'}[L_{S'}(f)] - L_S(f) = \operatorname*{\mathbb{E}}_{S'}[L_{S'}(f) - L_S(f)].$$
Taking supremum over $f \in \mathcal F$ of both sides, and using the fact that the supremum
of expectation is smaller than expectation of the supremum we obtain
$$\sup_{f \in \mathcal F}\bigl(L_{\mathcal D}(f) - L_S(f)\bigr) = \sup_{f \in \mathcal F}\operatorname*{\mathbb{E}}_{S'}[L_{S'}(f) - L_S(f)] \le \operatorname*{\mathbb{E}}_{S'}\Bigl[\sup_{f \in \mathcal F}\bigl(L_{S'}(f) - L_S(f)\bigr)\Bigr].$$
Taking expectation over $S$ on both sides we obtain
$$\operatorname*{\mathbb{E}}_{S}\Bigl[\sup_{f \in \mathcal F}\bigl(L_{\mathcal D}(f) - L_S(f)\bigr)\Bigr] \le \operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\bigl(L_{S'}(f) - L_S(f)\bigr)\Bigr] = \frac{1}{m}\operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\sum_{i=1}^{m}\bigl(f(z'_i) - f(z_i)\bigr)\Bigr]. \tag{26.6}$$
Next, we note that for each $j$, $z_j$ and $z'_j$ are i.i.d. variables. Therefore, we can
replace them without affecting the expectation:
$$\operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\Bigl((f(z'_j) - f(z_j)) + \sum_{i \neq j}(f(z'_i) - f(z_i))\Bigr)\Bigr] = \operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\Bigl((f(z_j) - f(z'_j)) + \sum_{i \neq j}(f(z'_i) - f(z_i))\Bigr)\Bigr]. \tag{26.7}$$
Let $\sigma_j$ be a random variable such that $\mathbb{P}[\sigma_j = 1] = \mathbb{P}[\sigma_j = -1] = 1/2$. From
Equation (26.7) we obtain that
$$\begin{aligned} \operatorname*{\mathbb{E}}_{S,S',\sigma_j}\Bigl[\sup_{f \in \mathcal F}\Bigl(\sigma_j(f(z'_j) - f(z_j)) + \sum_{i \neq j}(f(z'_i) - f(z_i))\Bigr)\Bigr] &= \tfrac{1}{2}\,(\text{l.h.s. of Equation (26.7)}) + \tfrac{1}{2}\,(\text{r.h.s. of Equation (26.7)}) \\ &= \operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\Bigl((f(z'_j) - f(z_j)) + \sum_{i \neq j}(f(z'_i) - f(z_i))\Bigr)\Bigr]. \end{aligned} \tag{26.8}$$
Repeating this for all $j$ we obtain that
$$\operatorname*{\mathbb{E}}_{S,S'}\Bigl[\sup_{f \in \mathcal F}\sum_{i=1}^{m}(f(z'_i) - f(z_i))\Bigr] = \operatorname*{\mathbb{E}}_{S,S',\sigma}\Bigl[\sup_{f \in \mathcal F}\sum_{i=1}^{m}\sigma_i(f(z'_i) - f(z_i))\Bigr]. \tag{26.9}$$

Finally,
$$\sup_{f \in \mathcal F}\sum_i \sigma_i(f(z'_i) - f(z_i)) \le \sup_{f \in \mathcal F}\sum_i \sigma_i f(z'_i) + \sup_{f \in \mathcal F}\sum_i (-\sigma_i) f(z_i),$$
and since the probability of $\sigma$ is the same as the probability of $-\sigma$, the right-hand
side of Equation (26.9) can be bounded by
$$\operatorname*{\mathbb{E}}_{S,S',\sigma}\Bigl[\sup_{f \in \mathcal F}\sum_i \sigma_i f(z'_i) + \sup_{f \in \mathcal F}\sum_i (-\sigma_i) f(z_i)\Bigr] = m\operatorname*{\mathbb{E}}_{S'}[R(\mathcal F \circ S')] + m\operatorname*{\mathbb{E}}_{S}[R(\mathcal F \circ S)] = 2m\operatorname*{\mathbb{E}}_{S}[R(\mathcal F \circ S)].$$

The lemma immediately yields that, in expectation, the ERM rule finds a
hypothesis which is close to the optimal hypothesis in $\mathcal H$.
theorem 26.3 We have
$$\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m}[L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) - L_S(\mathrm{ERM}_{\mathcal H}(S))] \le 2\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m} R(\ell \circ \mathcal H \circ S).$$
Furthermore, for any $h^\star \in \mathcal H$
$$\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m}[L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) - L_{\mathcal D}(h^\star)] \le 2\operatorname*{\mathbb{E}}_{S \sim \mathcal D^m} R(\ell \circ \mathcal H \circ S).$$
Furthermore, if $h^\star = \operatorname{argmin}_h L_{\mathcal D}(h)$ then for each $\delta \in (0,1)$ with probability of
at least $1 - \delta$ over the choice of $S$ we have
$$L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) - L_{\mathcal D}(h^\star) \le \frac{2\operatorname*{\mathbb{E}}_{S' \sim \mathcal D^m} R(\ell \circ \mathcal H \circ S')}{\delta}.$$
Proof The first inequality follows directly from Lemma 26.2. The second inequality follows because for any fixed $h^\star$,
$$L_{\mathcal D}(h^\star) = \operatorname*{\mathbb{E}}_{S}[L_S(h^\star)] \ge \operatorname*{\mathbb{E}}_{S}[L_S(\mathrm{ERM}_{\mathcal H}(S))].$$
The third inequality follows from the previous inequality by relying on Markov's
inequality (note that the random variable $L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) - L_{\mathcal D}(h^\star)$ is nonnegative).

Next, we derive bounds similar to the bounds in Theorem 26.3 with a better
dependence on the confidence parameter $\delta$. To do so, we first introduce the
following bounded differences concentration inequality.

lemma 26.4 (McDiarmid's Inequality) Let $V$ be some set and let $f : V^m \to \mathbb{R}$
be a function of $m$ variables such that for some $c > 0$, for all $i \in [m]$ and for all
$x_1, \ldots, x_m, x'_i \in V$ we have
$$|f(x_1, \ldots, x_m) - f(x_1, \ldots, x_{i-1}, x'_i, x_{i+1}, \ldots, x_m)| \le c.$$
Let $X_1, \ldots, X_m$ be $m$ independent random variables taking values in $V$. Then,
with probability of at least $1 - \delta$ we have
$$|f(X_1, \ldots, X_m) - \mathbb{E}[f(X_1, \ldots, X_m)]| \le c\sqrt{\ln\bigl(\tfrac{2}{\delta}\bigr)\, m/2}.$$

On the basis of the McDiarmid inequality we can derive generalization bounds
with a better dependence on the confidence parameter.

theorem 26.5 Assume that for all $z$ and $h \in \mathcal H$ we have that $|\ell(h,z)| \le c$.
Then,

1. With probability of at least $1 - \delta$, for all $h \in \mathcal H$,
$$L_{\mathcal D}(h) - L_S(h) \le 2\operatorname*{\mathbb{E}}_{S' \sim \mathcal D^m} R(\ell \circ \mathcal H \circ S') + c\sqrt{\frac{2\ln(2/\delta)}{m}}.$$
In particular, this holds for $h = \mathrm{ERM}_{\mathcal H}(S)$.
2. With probability of at least $1 - \delta$, for all $h \in \mathcal H$,
$$L_{\mathcal D}(h) - L_S(h) \le 2R(\ell \circ \mathcal H \circ S) + 4c\sqrt{\frac{2\ln(4/\delta)}{m}}.$$
In particular, this holds for $h = \mathrm{ERM}_{\mathcal H}(S)$.
3. For any $h^\star$, with probability of at least $1 - \delta$,
$$L_{\mathcal D}(\mathrm{ERM}_{\mathcal H}(S)) - L_{\mathcal D}(h^\star) \le 2R(\ell \circ \mathcal H \circ S) + 5c\sqrt{\frac{2\ln(8/\delta)}{m}}.$$
Proof First note that the random variable $\operatorname{Rep}_{\mathcal D}(\mathcal F, S) = \sup_{h \in \mathcal H}(L_{\mathcal D}(h) - L_S(h))$
satisfies the bounded differences condition of Lemma 26.4 with a constant $2c/m$.
Combining the bounds in Lemma 26.4 with Lemma 26.2 we obtain that with
probability of at least $1 - \delta$,
$$\operatorname{Rep}_{\mathcal D}(\mathcal F, S) \le \operatorname*{\mathbb{E}}\operatorname{Rep}_{\mathcal D}(\mathcal F, S) + c\sqrt{\frac{2\ln(2/\delta)}{m}} \le 2\operatorname*{\mathbb{E}}_{S'} R(\ell \circ \mathcal H \circ S') + c\sqrt{\frac{2\ln(2/\delta)}{m}}.$$
The first inequality of the theorem follows from the definition of $\operatorname{Rep}_{\mathcal D}(\mathcal F, S)$.
For the second inequality we note that the random variable $R(\ell \circ \mathcal H \circ S)$ also
satisfies the bounded differences condition of Lemma 26.4 with a constant $2c/m$.
Therefore, the second inequality follows from the first inequality, Lemma 26.4,
and the union bound. Finally, for the last inequality, denote $h_S = \mathrm{ERM}_{\mathcal H}(S)$
and note that
$$\begin{aligned} L_{\mathcal D}(h_S) - L_{\mathcal D}(h^\star) &= L_{\mathcal D}(h_S) - L_S(h_S) + L_S(h_S) - L_S(h^\star) + L_S(h^\star) - L_{\mathcal D}(h^\star) \\ &\le (L_{\mathcal D}(h_S) - L_S(h_S)) + (L_S(h^\star) - L_{\mathcal D}(h^\star)). \end{aligned} \tag{26.10}$$
The first summand on the right-hand side is bounded by the second inequality of
the theorem. For the second summand, we use the fact that $h^\star$ does not depend
on $S$; hence by using Hoeffding's inequality we obtain that with probability of at
least $1 - \delta/2$,
$$L_S(h^\star) - L_{\mathcal D}(h^\star) \le c\sqrt{\frac{\ln(4/\delta)}{2m}}. \tag{26.11}$$
Combining this with the union bound we conclude our proof.

The preceding theorem tells us that if the quantity $R(\ell \circ \mathcal H \circ S)$ is small then it
is possible to learn the class $\mathcal H$ using the ERM rule. It is important to emphasize
that the last two bounds given in the theorem depend on the specific training
set $S$. That is, we use $S$ both for learning a hypothesis from $\mathcal H$ as well as for
estimating the quality of it. This type of bound is called a data-dependent bound.

26.1.1 Rademacher Calculus


Let us now discuss some properties of the Rademacher complexity measure.
These properties will help us in deriving some simple bounds on R(` H S) for
specific cases of interest.
The following lemma is immediate from the definition.

lemma 26.6 For any A ⇢ Rm , scalar c 2 R, and vector a0 2 Rm , we have

R({c a + a0 : a 2 A})  |c| R(A).

The following lemma tells us that the convex hull of $A$ has the same complexity
as $A$.

lemma 26.7 Let $A$ be a subset of $\mathbb{R}^m$ and let $A' = \bigl\{\sum_{j=1}^{N}\alpha_j a^{(j)} : N \in \mathbb{N},\ \forall j,\ a^{(j)} \in A,\ \alpha_j \ge 0,\ \|\alpha\|_1 = 1\bigr\}$. Then, $R(A') = R(A)$.

Proof The main idea follows from the fact that for any vector $v$ we have
$$\sup_{\alpha \ge 0 : \|\alpha\|_1 = 1}\sum_{j=1}^{N}\alpha_j v_j = \max_j v_j.$$
Therefore,
$$\begin{aligned} m\,R(A') &= \operatorname*{\mathbb{E}}_{\sigma}\sup_{\alpha \ge 0 : \|\alpha\|_1 = 1}\ \sup_{a^{(1)},\ldots,a^{(N)}}\ \sum_{i=1}^{m}\sigma_i\sum_{j=1}^{N}\alpha_j a^{(j)}_i \\ &= \operatorname*{\mathbb{E}}_{\sigma}\sup_{\alpha \ge 0 : \|\alpha\|_1 = 1}\ \sum_{j=1}^{N}\alpha_j\sup_{a^{(j)}}\sum_{i=1}^{m}\sigma_i a^{(j)}_i \\ &= \operatorname*{\mathbb{E}}_{\sigma}\sup_{a \in A}\sum_{i=1}^{m}\sigma_i a_i = m\,R(A), \end{aligned}$$
and we conclude our proof.
The next lemma, due to Massart, states that the Rademacher complexity of
a finite set grows logarithmically with the size of the set.

lemma 26.8 (Massart lemma) Let $A = \{a_1, \ldots, a_N\}$ be a finite set of vectors
in $\mathbb{R}^m$. Define $\bar a = \frac{1}{N}\sum_{i=1}^{N} a_i$. Then,
$$R(A) \le \max_{a \in A}\|a - \bar a\|\,\frac{\sqrt{2\log(N)}}{m}.$$
Proof Based on Lemma 26.6, we can assume without loss of generality that
$\bar a = 0$. Let $\lambda > 0$ and let $A' = \{\lambda a_1, \ldots, \lambda a_N\}$. We upper bound the Rademacher
complexity as follows:
$$\begin{aligned} m\,R(A') &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\max_{a \in A'}\langle\sigma, a\rangle\Bigr] = \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\log\Bigl(\max_{a \in A'} e^{\langle\sigma, a\rangle}\Bigr)\Bigr] \\ &\le \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\log\Bigl(\sum_{a \in A'} e^{\langle\sigma, a\rangle}\Bigr)\Bigr] \\ &\le \log\Bigl(\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sum_{a \in A'} e^{\langle\sigma, a\rangle}\Bigr]\Bigr) \qquad \text{// Jensen's inequality} \\ &= \log\Bigl(\sum_{a \in A'}\prod_{i=1}^{m}\operatorname*{\mathbb{E}}_{\sigma_i}\bigl[e^{\sigma_i a_i}\bigr]\Bigr), \end{aligned}$$
where the last equality occurs because the Rademacher variables are independent. Next, using Lemma A.6 we have that for all $a_i \in \mathbb{R}$,
$$\operatorname*{\mathbb{E}}_{\sigma_i} e^{\sigma_i a_i} = \frac{\exp(a_i) + \exp(-a_i)}{2} \le \exp(a_i^2/2),$$
and therefore
$$m\,R(A') \le \log\Bigl(\sum_{a \in A'}\prod_{i=1}^{m}\exp\bigl(a_i^2/2\bigr)\Bigr) = \log\Bigl(\sum_{a \in A'}\exp\bigl(\|a\|^2/2\bigr)\Bigr) \le \log\bigl(|A'|\max_{a \in A'}\exp(\|a\|^2/2)\bigr) = \log(|A'|) + \max_{a \in A'}(\|a\|^2/2).$$
Since $R(A) = \frac{1}{\lambda}R(A')$ we obtain from the equation that
$$R(A) \le \frac{\log(|A|) + \lambda^2\max_{a \in A}(\|a\|^2/2)}{\lambda\,m}.$$
Setting $\lambda = \sqrt{2\log(|A|)}/\max_{a \in A}\|a\|$ and rearranging terms we conclude our
proof.
The following lemma shows that composing $A$ with a Lipschitz function does
not blow up the Rademacher complexity. The proof is due to Kakade and Tewari.

lemma 26.9 (Contraction lemma) For each $i \in [m]$, let $\phi_i : \mathbb{R} \to \mathbb{R}$ be a $\rho$-Lipschitz function, namely for all $\alpha, \beta \in \mathbb{R}$ we have $|\phi_i(\alpha) - \phi_i(\beta)| \le \rho\,|\alpha - \beta|$.
For $a \in \mathbb{R}^m$ let $\phi(a)$ denote the vector $(\phi_1(a_1), \ldots, \phi_m(a_m))$. Let $\phi \circ A = \{\phi(a) :
a \in A\}$. Then,
$$R(\phi \circ A) \le \rho\,R(A).$$
Proof For simplicity, we prove the lemma for the case $\rho = 1$. The case $\rho \neq 1$
will follow by defining $\phi' = \frac{1}{\rho}\phi$ and then using Lemma 26.6. Let $A_i =
\{(a_1, \ldots, a_{i-1}, \phi_i(a_i), a_{i+1}, \ldots, a_m) : a \in A\}$. Clearly, it suffices to prove that
for any set $A$ and all $i$ we have $R(A_i) \le R(A)$. Without loss of generality we will
prove the latter claim for $i = 1$ and to simplify notation we omit the subscript
from $\phi_1$. We have
$$\begin{aligned} m\,R(A_1) &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in A_1}\sum_{i=1}^{m}\sigma_i a_i\Bigr] \\ &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in A}\Bigl(\sigma_1\phi(a_1) + \sum_{i=2}^{m}\sigma_i a_i\Bigr)\Bigr] \\ &= \frac{1}{2}\operatorname*{\mathbb{E}}_{\sigma_2,\ldots,\sigma_m}\Bigl[\sup_{a \in A}\Bigl(\phi(a_1) + \sum_{i=2}^{m}\sigma_i a_i\Bigr) + \sup_{a \in A}\Bigl(-\phi(a_1) + \sum_{i=2}^{m}\sigma_i a_i\Bigr)\Bigr] \\ &= \frac{1}{2}\operatorname*{\mathbb{E}}_{\sigma_2,\ldots,\sigma_m}\Bigl[\sup_{a,a' \in A}\Bigl(\phi(a_1) - \phi(a'_1) + \sum_{i=2}^{m}\sigma_i a_i + \sum_{i=2}^{m}\sigma_i a'_i\Bigr)\Bigr] \\ &\le \frac{1}{2}\operatorname*{\mathbb{E}}_{\sigma_2,\ldots,\sigma_m}\Bigl[\sup_{a,a' \in A}\Bigl(|a_1 - a'_1| + \sum_{i=2}^{m}\sigma_i a_i + \sum_{i=2}^{m}\sigma_i a'_i\Bigr)\Bigr], \end{aligned} \tag{26.12}$$
where in the last inequality we used the assumption that $\phi$ is Lipschitz. Next,
we note that the absolute value on $|a_1 - a'_1|$ in the preceding expression can
be omitted since both $a$ and $a'$ are from the same set $A$ and the rest of the
expression in the supremum is not affected by replacing $a$ and $a'$. Therefore,
$$m\,R(A_1) \le \frac{1}{2}\operatorname*{\mathbb{E}}_{\sigma_2,\ldots,\sigma_m}\Bigl[\sup_{a,a' \in A}\Bigl(a_1 - a'_1 + \sum_{i=2}^{m}\sigma_i a_i + \sum_{i=2}^{m}\sigma_i a'_i\Bigr)\Bigr]. \tag{26.13}$$
But, using the same equalities as in Equation (26.12), it is easy to see that the
right-hand side of Equation (26.13) exactly equals $m\,R(A)$, which concludes our
proof.

26.2 Rademacher Complexity of Linear Classes

In this section we analyze the Rademacher complexity of linear classes. To simplify the derivation we first define the following two classes:
$$\mathcal H_1 = \{x \mapsto \langle w, x\rangle : \|w\|_1 \le 1\}, \qquad \mathcal H_2 = \{x \mapsto \langle w, x\rangle : \|w\|_2 \le 1\}. \tag{26.14}$$

The following lemma bounds the Rademacher complexity of H2 . We allow


the xi to be vectors in any Hilbert space (even infinite dimensional), and the
bound does not depend on the dimensionality of the Hilbert space. This property
becomes useful when analyzing kernel methods.

lemma 26.10 Let $S = (x_1, \ldots, x_m)$ be vectors in a Hilbert space. Define $\mathcal H_2 \circ
S = \{(\langle w, x_1\rangle, \ldots, \langle w, x_m\rangle) : \|w\|_2 \le 1\}$. Then,
$$R(\mathcal H_2 \circ S) \le \frac{\max_i \|x_i\|_2}{\sqrt{m}}.$$
Proof Using the Cauchy-Schwartz inequality we know that for any vectors $w, v$ we
have $\langle w, v\rangle \le \|w\|\,\|v\|$. Therefore,
$$\begin{aligned} m\,R(\mathcal H_2 \circ S) &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in \mathcal H_2 \circ S}\sum_{i=1}^{m}\sigma_i a_i\Bigr] \\ &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{w : \|w\| \le 1}\sum_{i=1}^{m}\sigma_i\langle w, x_i\rangle\Bigr] \\ &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{w : \|w\| \le 1}\Bigl\langle w, \sum_{i=1}^{m}\sigma_i x_i\Bigr\rangle\Bigr] \\ &\le \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_2\Bigr]. \end{aligned} \tag{26.15}$$
Next, using Jensen's inequality we have that
$$\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_2\Bigr] = \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl(\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_2^2\Bigr)^{1/2}\Bigr] \le \Bigl(\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_2^2\Bigr]\Bigr)^{1/2}. \tag{26.16}$$
Finally, since the variables $\sigma_1, \ldots, \sigma_m$ are independent we have
$$\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_2^2\Bigr] = \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sum_{i,j}\sigma_i\sigma_j\langle x_i, x_j\rangle\Bigr] = \sum_{i \neq j}\langle x_i, x_j\rangle\operatorname*{\mathbb{E}}_{\sigma}[\sigma_i\sigma_j] + \sum_{i=1}^{m}\langle x_i, x_i\rangle\operatorname*{\mathbb{E}}_{\sigma}\bigl[\sigma_i^2\bigr] = \sum_{i=1}^{m}\|x_i\|_2^2 \le m\max_i\|x_i\|_2^2.$$
Combining this with Equation (26.15) and Equation (26.16) we conclude our
proof.
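Because the supremum over the $\ell_2$ ball has the closed form $\sup_{\|w\|_2 \le 1}\langle w, \sum_i \sigma_i x_i\rangle = \|\sum_i \sigma_i x_i\|_2$, the empirical Rademacher complexity of $\mathcal H_2 \circ S$ can be estimated directly and compared against the bound of Lemma 26.10. A small sketch (ours, not the book's):

```python
import numpy as np

def rademacher_l2_ball(X, n_samples=10_000, seed=0):
    """Estimate R(H2 o S) for instances X (shape m x d): for each sign vector,
    the supremum over the unit l2 ball equals || sum_i sigma_i x_i ||_2."""
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    sigma = rng.choice([-1.0, 1.0], size=(n_samples, m))
    return np.linalg.norm(sigma @ X, axis=1).mean() / m

X = np.random.default_rng(1).normal(size=(200, 5))
print("estimate:", rademacher_l2_ball(X))
print("bound:   ", np.linalg.norm(X, axis=1).max() / np.sqrt(X.shape[0]))
```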
Next we bound the Rademacher complexity of $\mathcal H_1 \circ S$.

lemma 26.11 Let $S = (x_1, \ldots, x_m)$ be vectors in $\mathbb{R}^n$. Then,
$$R(\mathcal H_1 \circ S) \le \max_i \|x_i\|_{\infty}\sqrt{\frac{2\log(2n)}{m}}.$$
Proof Using Holder's inequality we know that for any vectors $w, v$ we have
$\langle w, v\rangle \le \|w\|_1\|v\|_{\infty}$. Therefore,
$$\begin{aligned} m\,R(\mathcal H_1 \circ S) &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in \mathcal H_1 \circ S}\sum_{i=1}^{m}\sigma_i a_i\Bigr] \\ &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{w : \|w\|_1 \le 1}\sum_{i=1}^{m}\sigma_i\langle w, x_i\rangle\Bigr] \\ &= \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{w : \|w\|_1 \le 1}\Bigl\langle w, \sum_{i=1}^{m}\sigma_i x_i\Bigr\rangle\Bigr] \\ &\le \operatorname*{\mathbb{E}}_{\sigma}\Bigl[\Bigl\|\sum_{i=1}^{m}\sigma_i x_i\Bigr\|_{\infty}\Bigr]. \end{aligned} \tag{26.17}$$
For each $j \in [n]$, let $v_j = (x_{1,j}, \ldots, x_{m,j}) \in \mathbb{R}^m$. Note that $\|v_j\|_2 \le \sqrt{m}\max_i\|x_i\|_{\infty}$.
Let $V = \{v_1, \ldots, v_n, -v_1, \ldots, -v_n\}$. The right-hand side of Equation (26.17) is
$m\,R(V)$. Using the Massart lemma (Lemma 26.8) we have that
$$R(V) \le \max_i \|x_i\|_{\infty}\sqrt{2\log(2n)/m},$$
which concludes our proof.

26.3 Generalization Bounds for SVM

In this section we use Rademacher complexity to derive generalization bounds


for generalized linear predictors with Euclidean norm constraint. We will show
how this leads to generalization bounds for hard-SVM and soft-SVM.
We shall consider the following general constraint-based formulation. Let $\mathcal H =
\{w : \|w\|_2 \le B\}$ be our hypothesis class, and let $Z = \mathcal X \times \mathcal Y$ be the examples
domain. Assume that the loss function $\ell : \mathcal H \times Z \to \mathbb{R}$ is of the form
$$\ell(w, (x,y)) = \phi(\langle w, x\rangle, y), \tag{26.18}$$
where $\phi : \mathbb{R} \times \mathcal Y \to \mathbb{R}$ is such that for all $y \in \mathcal Y$, the scalar function $a \mapsto \phi(a,y)$
is $\rho$-Lipschitz. For example, the hinge-loss function, $\ell(w,(x,y)) = \max\{0, 1 -
y\langle w, x\rangle\}$, can be written as in Equation (26.18) using $\phi(a,y) = \max\{0, 1 -
ya\}$, and note that $\phi$ is $1$-Lipschitz for all $y \in \{\pm 1\}$. Another example is the
absolute loss function, $\ell(w,(x,y)) = |\langle w, x\rangle - y|$, which can be written as in
Equation (26.18) using $\phi(a,y) = |a - y|$, which is also $1$-Lipschitz for all $y \in \mathbb{R}$.
The following theorem bounds the generalization error of all predictors in H
using their empirical error.

theorem 26.12 Suppose that $\mathcal D$ is a distribution over $\mathcal X \times \mathcal Y$ such that with
probability 1 we have that $\|x\|_2 \le R$. Let $\mathcal H = \{w : \|w\|_2 \le B\}$ and let
$\ell : \mathcal H \times Z \to \mathbb{R}$ be a loss function of the form given in Equation (26.18)
such that for all $y \in \mathcal Y$, $a \mapsto \phi(a,y)$ is a $\rho$-Lipschitz function and such that
$\max_{a \in [-BR, BR]}|\phi(a,y)| \le c$. Then, for any $\delta \in (0,1)$, with probability of at least
$1 - \delta$ over the choice of an i.i.d. sample of size $m$,
$$\forall w \in \mathcal H, \quad L_{\mathcal D}(w) \le L_S(w) + \frac{2\rho B R}{\sqrt{m}} + c\sqrt{\frac{2\ln(2/\delta)}{m}}.$$
Proof Let $\mathcal F = \{(x,y) \mapsto \phi(\langle w, x\rangle, y) : w \in \mathcal H\}$. We will show that with
probability 1, $R(\mathcal F \circ S) \le \rho B R/\sqrt{m}$ and then the theorem will follow from
Theorem 26.5. Indeed, the set $\mathcal F \circ S$ can be written as
$$\mathcal F \circ S = \{(\phi(\langle w, x_1\rangle, y_1), \ldots, \phi(\langle w, x_m\rangle, y_m)) : w \in \mathcal H\},$$
and the bound on $R(\mathcal F \circ S)$ follows directly by combining Lemma 26.9, Lemma 26.10,
and the assumption that $\|x\|_2 \le R$ with probability 1.

We next derive a generalization bound for hard-SVM based on the previous
theorem. For simplicity, we do not allow a bias term and consider the hard-SVM
problem:
$$\operatorname*{argmin}_{w}\ \|w\|^2 \quad \text{s.t.} \quad \forall i,\ y_i\langle w, x_i\rangle \ge 1. \tag{26.19}$$
theorem 26.13 Consider a distribution $\mathcal D$ over $\mathcal X \times \{\pm 1\}$ such that there exists
some vector $w^\star$ with $\mathbb{P}_{(x,y)\sim\mathcal D}[y\langle w^\star, x\rangle \ge 1] = 1$ and such that $\|x\|_2 \le R$ with
probability 1. Let $w_S$ be the output of Equation (26.19). Then, with probability
of at least $1 - \delta$ over the choice of $S \sim \mathcal D^m$, we have that
$$\operatorname*{\mathbb{P}}_{(x,y)\sim\mathcal D}[y \neq \operatorname{sign}(\langle w_S, x\rangle)] \le \frac{2R\|w^\star\|}{\sqrt{m}} + (1 + R\|w^\star\|)\sqrt{\frac{2\ln(2/\delta)}{m}}.$$
Proof Throughout the proof, let the loss function be the ramp loss (see Section 15.2.3). Note that the range of the ramp loss is $[0,1]$ and that it is a
$1$-Lipschitz function. Since the ramp loss upper bounds the zero-one loss, we
have that
$$\operatorname*{\mathbb{P}}_{(x,y)\sim\mathcal D}[y \neq \operatorname{sign}(\langle w_S, x\rangle)] \le L_{\mathcal D}(w_S).$$
Let $B = \|w^\star\|_2$ and consider the set $\mathcal H = \{w : \|w\|_2 \le B\}$. By the definition of
hard-SVM and our assumption on the distribution, we have that $w_S \in \mathcal H$ with
probability 1 and that $L_S(w_S) = 0$. Therefore, using Theorem 26.12 we have
that
$$L_{\mathcal D}(w_S) \le L_S(w_S) + \frac{2BR}{\sqrt{m}} + \sqrt{\frac{2\ln(2/\delta)}{m}}.$$
Remark 26.1 Theorem 26.13 implies that the sample complexity of hard-SVM
grows like $\frac{R^2\|w^\star\|^2}{\epsilon^2}$. Using a more delicate analysis and the separability assumption, it is possible to improve the bound to an order of $\frac{R^2\|w^\star\|^2}{\epsilon}$.
The bound in the preceding theorem depends on $\|w^\star\|$, which is unknown.
In the following we derive a bound that depends on the norm of the output of
SVM; hence it can be calculated from the training set itself. The proof is similar
to the derivation of bounds for structural risk minimization (SRM).

theorem 26.14 Assume that the conditions of Theorem 26.13 hold. Then,
with probability of at least $1 - \delta$ over the choice of $S \sim \mathcal D^m$, we have that
$$\operatorname*{\mathbb{P}}_{(x,y)\sim\mathcal D}[y \neq \operatorname{sign}(\langle w_S, x\rangle)] \le \frac{4R\|w_S\|}{\sqrt{m}} + \sqrt{\frac{\ln\bigl(\frac{4\log_2(\|w_S\|)}{\delta}\bigr)}{m}}.$$

Proof For any integer $i$, let $B_i = 2^i$, $\mathcal H_i = \{w : \|w\| \le B_i\}$, and let $\delta_i = \frac{\delta}{2i^2}$.
Fix $i$, then using Theorem 26.12 we have that with probability of at least $1 - \delta_i$
$$\forall w \in \mathcal H_i, \quad L_{\mathcal D}(w) \le L_S(w) + \frac{2B_i R}{\sqrt{m}} + \sqrt{\frac{2\ln(2/\delta_i)}{m}}.$$
Applying the union bound and using $\sum_{i=1}^{\infty}\delta_i \le \delta$ we obtain that with probability
of at least $1 - \delta$ this holds for all $i$. Therefore, for all $w$, if we let $i = \lceil\log_2(\|w\|)\rceil$
then $w \in \mathcal H_i$, $B_i \le 2\|w\|$, and $\frac{2}{\delta_i} = \frac{(2i)^2}{\delta} \le \frac{(4\log_2(\|w\|))^2}{\delta}$. Therefore,
$$L_{\mathcal D}(w) \le L_S(w) + \frac{2B_i R}{\sqrt{m}} + \sqrt{\frac{2\ln(2/\delta_i)}{m}} \le L_S(w) + \frac{4\|w\|R}{\sqrt{m}} + \sqrt{\frac{4(\ln(4\log_2(\|w\|)) + \ln(1/\delta))}{m}}.$$
In particular, it holds for $w_S$, which concludes our proof.

Remark 26.2 Note that all the bounds we have derived do not depend on the
dimension of w. This property is utilized when learning SVM with kernels, where
the dimension of w can be extremely large.

26.4 Generalization Bounds for Predictors with Low `1 Norm

In the previous section we derived generalization bounds for linear predictors
with an $\ell_2$-norm constraint. In this section we consider the following general $\ell_1$-norm constraint formulation. Let $\mathcal H = \{w : \|w\|_1 \le B\}$ be our hypothesis class,
and let $Z = \mathcal X \times \mathcal Y$ be the examples domain. Assume that the loss function,
$\ell : \mathcal H \times Z \to \mathbb{R}$, is of the same form as in Equation (26.18), with $\phi : \mathbb{R} \times \mathcal Y \to \mathbb{R}$
being $\rho$-Lipschitz w.r.t. its first argument. The following theorem bounds the
generalization error of all predictors in $\mathcal H$ using their empirical error.

theorem 26.15 Suppose that $\mathcal D$ is a distribution over $\mathcal X \times \mathcal Y$ such that with
probability 1 we have that $\|x\|_\infty \le R$. Let $\mathcal H = \{w \in \mathbb{R}^d : \|w\|_1 \le B\}$ and
let $\ell : \mathcal H \times Z \to \mathbb{R}$ be a loss function of the form given in Equation (26.18)
such that for all $y \in \mathcal Y$, $a \mapsto \phi(a,y)$ is a $\rho$-Lipschitz function and such that
$\max_{a \in [-BR,BR]}|\phi(a,y)| \le c$. Then, for any $\delta \in (0,1)$, with probability of at least
$1 - \delta$ over the choice of an i.i.d. sample of size $m$,
$$\forall w \in \mathcal H, \quad L_{\mathcal D}(w) \le L_S(w) + 2\rho B R\sqrt{\frac{2\log(2d)}{m}} + c\sqrt{\frac{2\ln(2/\delta)}{m}}.$$
Proof The proof is identical to the proof of Theorem 26.12, while relying on
Lemma 26.11 instead of relying on Lemma 26.10.
It is interesting to compare the two bounds given in Theorem 26.12 and Theorem 26.15. Apart from the extra $\log(d)$ factor that appears in Theorem 26.15,
both bounds look similar. However, the parameters $B, R$ have different meanings
in the two bounds. In Theorem 26.12, the parameter $B$ imposes an $\ell_2$ constraint
on $w$ and the parameter $R$ captures a low $\ell_2$-norm assumption on the instances.
In contrast, in Theorem 26.15 the parameter $B$ imposes an $\ell_1$ constraint on $w$
(which is stronger than an $\ell_2$ constraint) while the parameter $R$ captures a low
$\ell_\infty$-norm assumption on the instances (which is weaker than a low $\ell_2$-norm assumption). Therefore, the choice of the constraint should depend on our prior
knowledge of the set of instances and on prior assumptions on good predictors.

26.5 Bibliographic Remarks

The use of Rademacher complexity for bounding the uniform convergence is


due to (Koltchinskii & Panchenko 2000, Bartlett & Mendelson 2001, Bartlett
& Mendelson 2002). For additional reading see, for example, (Bousquet 2002,
Boucheron, Bousquet & Lugosi 2005, Bartlett, Bousquet & Mendelson 2005).

Our proof of the contraction lemma is due to Kakade and Tewari lecture
notes. Kakade, Sridharan & Tewari (2008) gave a unified framework for deriving
bounds on the Rademacher complexity of linear classes with respect to different
assumptions on the norms.
27 Covering Numbers

In this chapter we describe another way to measure the complexity of sets, which
is called covering numbers.

27.1 Covering

definition 27.1 (Covering) Let $A \subset \mathbb{R}^m$ be a set of vectors. We say that $A$
is $r$-covered by a set $A'$, with respect to the Euclidean metric, if for all $a \in A$
there exists $a' \in A'$ with $\|a - a'\| \le r$. We define by $N(r, A)$ the cardinality of
the smallest $A'$ that $r$-covers $A$.

Example 27.1 (Subspace) Suppose that $A \subset \mathbb{R}^m$, let $c = \max_{a \in A}\|a\|$, and assume that $A$ lies in a $d$-dimensional subspace of $\mathbb{R}^m$. Then, $N(r, A) \le (2c\sqrt{d}/r)^d$.
To see this, let $v_1, \ldots, v_d$ be an orthonormal basis of the subspace. Then, any
$a \in A$ can be written as $a = \sum_{i=1}^{d}\alpha_i v_i$ with $\|\alpha\|_\infty \le \|\alpha\|_2 = \|a\|_2 \le c$. Let
$\epsilon \in \mathbb{R}$ and consider the set
$$A' = \Bigl\{\sum_{i=1}^{d}\alpha'_i v_i : \forall i,\ \alpha'_i \in \{-c, -c+\epsilon, -c+2\epsilon, \ldots, c\}\Bigr\}.$$
Given $a \in A$ s.t. $a = \sum_{i=1}^{d}\alpha_i v_i$ with $\|\alpha\|_\infty \le c$, there exists $a' \in A'$ such that
$$\|a - a'\|^2 = \Bigl\|\sum_i(\alpha'_i - \alpha_i)v_i\Bigr\|^2 \le \epsilon^2\sum_i\|v_i\|^2 = \epsilon^2 d.$$
Choose $\epsilon = r/\sqrt{d}$; then $\|a - a'\| \le r$ and therefore $A'$ is an $r$-cover of $A$. Hence,
$$N(r, A) \le |A'| = \left(\frac{2c}{\epsilon}\right)^d = \left(\frac{2c\sqrt{d}}{r}\right)^d.$$

27.1.1 Properties
The following lemma is immediate from the definition.

lemma 27.2 For any $A \subset \mathbb{R}^m$, scalar $c > 0$, and vector $a_0 \in \mathbb{R}^m$, we have
$$\forall r > 0, \quad N(c\,r, \{c\,a + a_0 : a \in A\}) \le N(r, A).$$


Next, we derive a contraction principle.

lemma 27.3 For each $i \in [m]$, let $\phi_i : \mathbb{R} \to \mathbb{R}$ be a $\rho$-Lipschitz function;
namely, for all $\alpha, \beta \in \mathbb{R}$ we have $|\phi_i(\alpha) - \phi_i(\beta)| \le \rho\,|\alpha - \beta|$. For $a \in \mathbb{R}^m$ let
$\phi(a)$ denote the vector $(\phi_1(a_1), \ldots, \phi_m(a_m))$. Let $\phi \circ A = \{\phi(a) : a \in A\}$. Then,
$$N(\rho\,r, \phi \circ A) \le N(r, A).$$
Proof Define $B = \phi \circ A$. Let $A'$ be an $r$-cover of $A$ and define $B' = \phi \circ A'$.
Then, for all $a \in A$ there exists $a' \in A'$ with $\|a - a'\| \le r$. So,
$$\|\phi(a) - \phi(a')\|^2 = \sum_i(\phi_i(a_i) - \phi_i(a'_i))^2 \le \rho^2\sum_i(a_i - a'_i)^2 \le (\rho r)^2.$$
Hence, $B'$ is a $(\rho\,r)$-cover of $B$.

27.2 From Covering to Rademacher Complexity via Chaining

The following lemma bounds the Rademacher complexity of $A$ based on the
covering numbers $N(r, A)$. This technique is called Chaining and is attributed
to Dudley.

lemma 27.4 Let $c = \min_{\bar a}\max_{a \in A}\|a - \bar a\|$. Then, for any integer $M > 0$,
$$R(A) \le \frac{c\,2^{-M}}{\sqrt{m}} + \frac{6c}{m}\sum_{k=1}^{M} 2^{-k}\sqrt{\log\bigl(N(c\,2^{-k}, A)\bigr)}.$$

Proof Let $\bar a$ be a minimizer of the objective function given in the definition
of $c$. On the basis of Lemma 26.6, we can analyze the Rademacher complexity
assuming that $\bar a = 0$.
Consider the set $B_0 = \{0\}$ and note that it is a $c$-cover of $A$. Let $B_1, \ldots, B_M$
be sets such that each $B_k$ corresponds to a minimal $(c\,2^{-k})$-cover of $A$. Let
$a^\ast = \operatorname{argmax}_{a \in A}\langle\sigma, a\rangle$ (where if there is more than one maximizer, choose one
in an arbitrary way, and if a maximizer does not exist, choose $a^\ast$ such that
$\langle\sigma, a^\ast\rangle$ is close enough to the supremum). Note that $a^\ast$ is a function of $\sigma$. For
each $k$, let $b^{(k)}$ be the nearest neighbor of $a^\ast$ in $B_k$ (hence $b^{(k)}$ is also a function
of $\sigma$). Using the triangle inequality,
$$\|b^{(k)} - b^{(k-1)}\| \le \|b^{(k)} - a^\ast\| + \|a^\ast - b^{(k-1)}\| \le c\,(2^{-k} + 2^{-(k-1)}) = 3c\,2^{-k}.$$
For each $k$ define the set
$$\hat B_k = \{(a - a') : a \in B_k,\ a' \in B_{k-1},\ \|a - a'\| \le 3c\,2^{-k}\}.$$
We can now write
$$\begin{aligned} R(A) &= \frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\langle\sigma, a^\ast\rangle \\ &= \frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\langle\sigma, a^\ast - b^{(M)}\rangle + \sum_{k=1}^{M}\langle\sigma, b^{(k)} - b^{(k-1)}\rangle\Bigr] \\ &\le \frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\bigl[\|\sigma\|\,\|a^\ast - b^{(M)}\|\bigr] + \sum_{k=1}^{M}\frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in \hat B_k}\langle\sigma, a\rangle\Bigr]. \end{aligned}$$
Since $\|\sigma\| = \sqrt{m}$ and $\|a^\ast - b^{(M)}\| \le c\,2^{-M}$, the first summand is at most
$\frac{c\,2^{-M}}{\sqrt{m}}$. Additionally, by the Massart lemma,
$$\frac{1}{m}\operatorname*{\mathbb{E}}_{\sigma}\Bigl[\sup_{a \in \hat B_k}\langle\sigma, a\rangle\Bigr] \le 3c\,2^{-k}\,\frac{\sqrt{2\log\bigl(N(c\,2^{-k}, A)^2\bigr)}}{m} = 6c\,2^{-k}\,\frac{\sqrt{\log\bigl(N(c\,2^{-k}, A)\bigr)}}{m}.$$
Therefore,
$$R(A) \le \frac{c\,2^{-M}}{\sqrt{m}} + \frac{6c}{m}\sum_{k=1}^{M} 2^{-k}\sqrt{\log\bigl(N(c\,2^{-k}, A)\bigr)}.$$

As a corollary we obtain the following:

lemma 27.5 Assume that there are $\alpha, \beta > 0$ such that for any $k \ge 1$ we have
$$\sqrt{\log\bigl(N(c\,2^{-k}, A)\bigr)} \le \alpha + \beta k.$$
Then,
$$R(A) \le \frac{6c}{m}(\alpha + 2\beta).$$
Proof The bound follows from Lemma 27.4 by taking $M \to \infty$ and noting that
$\sum_{k=1}^{\infty} 2^{-k} = 1$ and $\sum_{k=1}^{\infty} k\,2^{-k} = 2$.

Example 27.2 Consider a set $A$ which lies in a $d$-dimensional subspace of $\mathbb{R}^m$
and such that $c = \max_{a \in A}\|a\|$. We have shown that $N(r, A) \le \bigl(\frac{2c\sqrt{d}}{r}\bigr)^d$. Therefore, for any $k$,
$$\sqrt{\log\bigl(N(c\,2^{-k}, A)\bigr)} \le \sqrt{d\log\bigl(2^{k+1}\sqrt{d}\bigr)} \le \sqrt{d\log(2\sqrt{d})} + \sqrt{d\,k} \le \sqrt{d\log(2\sqrt{d})} + \sqrt{d}\,k.$$
Hence Lemma 27.5 yields
$$R(A) \le \frac{6c}{m}\left(\sqrt{d\log(2\sqrt{d})} + 2\sqrt{d}\right) = O\!\left(\frac{c\sqrt{d\log(d)}}{m}\right).$$

27.3 Bibliographic Remarks

The chaining technique is due to Dudley (1987). For an extensive study of covering numbers as well as other complexity measures that can be used to bound the
rate of uniform convergence we refer the reader to (Anthony & Bartlett 1999).
28 Proof of the Fundamental Theorem
of Learning Theory

In this chapter we prove Theorem 6.8 from Chapter 6. We remind the reader
of the conditions of the theorem, which will hold throughout this chapter: $\mathcal H$ is a
hypothesis class of functions from a domain $\mathcal X$ to $\{0,1\}$, the loss function is the
$0{-}1$ loss, and $\mathrm{VCdim}(\mathcal H) = d < \infty$.
We shall prove the upper bound for both the realizable and agnostic cases
and shall prove the lower bound for the agnostic case. The lower bound for the
realizable case is left as an exercise.

28.1 The Upper Bound for the Agnostic Case

For the upper bound we need to prove that there exists $C$ such that $\mathcal H$ is agnostic
PAC learnable with sample complexity
$$m_{\mathcal H}(\epsilon, \delta) \le C\,\frac{d + \ln(1/\delta)}{\epsilon^2}.$$
We will prove the slightly looser bound:
$$m_{\mathcal H}(\epsilon, \delta) \le C\,\frac{d\log(d/\epsilon) + \ln(1/\delta)}{\epsilon^2}. \tag{28.1}$$
The tighter bound in the theorem statement requires a more involved proof, in
which a more careful analysis of the Rademacher complexity using a technique
called "chaining" should be used. This is beyond the scope of this book.
To prove Equation (28.1), it suffices to show that applying the ERM with a
sample size
$$m \ge 4\,\frac{32d}{\epsilon^2}\log\left(\frac{64d}{\epsilon^2}\right) + \frac{8}{\epsilon^2}\bigl(8d\log(e/d) + 2\log(4/\delta)\bigr)$$
yields an $(\epsilon, \delta)$-learner for $\mathcal H$. We prove this result on the basis of Theorem 26.5.
Let $(x_1, y_1), \ldots, (x_m, y_m)$ be a classification training set. Recall that the Sauer-Shelah lemma tells us that if $\mathrm{VCdim}(\mathcal H) = d$ then
$$|\{(h(x_1), \ldots, h(x_m)) : h \in \mathcal H\}| \le \left(\frac{em}{d}\right)^d.$$
Denote $A = \{(1_{[h(x_1)\neq y_1]}, \ldots, 1_{[h(x_m)\neq y_m]}) : h \in \mathcal H\}$. This clearly implies that
$$|A| \le \left(\frac{em}{d}\right)^d.$$


Combining this with Lemma 26.8 we obtain the following bound on the Rademacher
complexity:
$$R(A) \le \sqrt{\frac{2d\log(em/d)}{m}}.$$
Using Theorem 26.5 we obtain that with probability of at least $1 - \delta$, for every
$h \in \mathcal H$ we have that
$$L_{\mathcal D}(h) - L_S(h) \le \sqrt{\frac{8d\log(em/d)}{m}} + \sqrt{\frac{2\log(2/\delta)}{m}}.$$
Repeating the previous argument for minus the zero-one loss and applying the
union bound we obtain that with probability of at least $1 - \delta$, for every $h \in \mathcal H$
it holds that
$$|L_{\mathcal D}(h) - L_S(h)| \le \sqrt{\frac{8d\log(em/d)}{m}} + \sqrt{\frac{2\log(4/\delta)}{m}} \le 2\sqrt{\frac{8d\log(em/d) + 2\log(4/\delta)}{m}}.$$
To ensure that this is smaller than $\epsilon$ we need
$$m \ge \frac{4}{\epsilon^2}\bigl(8d\log(m) + 8d\log(e/d) + 2\log(4/\delta)\bigr).$$
Using Lemma A.2, a sufficient condition for the inequality to hold is that
$$m \ge 4\,\frac{32d}{\epsilon^2}\log\left(\frac{64d}{\epsilon^2}\right) + \frac{8}{\epsilon^2}\bigl(8d\log(e/d) + 2\log(4/\delta)\bigr).$$

28.2 The Lower Bound for the Agnostic Case

Here, we prove that there exists $C$ such that $\mathcal H$ is agnostic PAC learnable with
sample complexity
$$m_{\mathcal H}(\epsilon, \delta) \ge C\,\frac{d + \ln(1/\delta)}{\epsilon^2}.$$
We will prove the lower bound in two parts. First, we will show that $m(\epsilon, \delta) \ge
0.5\log(1/(4\delta))/\epsilon^2$, and second we will show that for every $\delta \le 1/8$ we have that
$m(\epsilon, \delta) \ge 8d/\epsilon^2$. These two bounds will conclude the proof.

28.2.1 Showing That $m(\epsilon, \delta) \ge 0.5\log(1/(4\delta))/\epsilon^2$

We first show that for any $\epsilon < 1/\sqrt{2}$ and any $\delta \in (0,1)$, we have that $m(\epsilon, \delta) \ge
0.5\log(1/(4\delta))/\epsilon^2$. To do so, we show that for $m \le 0.5\log(1/(4\delta))/\epsilon^2$, $\mathcal H$ is not
learnable.
Choose one example that is shattered by $\mathcal H$. That is, let $c$ be an example such
that there are $h^+, h^- \in \mathcal H$ for which $h^+(c) = 1$ and $h^-(c) = -1$. Define two
distributions, $\mathcal D^+$ and $\mathcal D^-$, such that for $b \in \{\pm 1\}$ we have
$$\mathcal D^b(\{(x,y)\}) = \begin{cases} \frac{1 + y b \epsilon}{2} & \text{if } x = c \\ 0 & \text{otherwise.} \end{cases}$$
That is, all the distribution mass is concentrated on two examples $(c, 1)$ and
$(c, -1)$, where the probability of $(c, b)$ is $\frac{1+\epsilon}{2}$ and the probability of $(c, -b)$ is
$\frac{1-\epsilon}{2}$.
Let A be an arbitrary algorithm. Any training set sampled from Db has the
form S = (c, y1 ), . . . , (c, ym ). Therefore, it is fully characterized by the vector
y = (y1 , . . . , ym ) 2 {±1}m . Upon receiving a training set S, the algorithm A
returns a hypothesis h : X ! {±1}. Since the error of A w.r.t. Db only depends
on h(c), we can think of A as a mapping from {±1}m into {±1}. Therefore,
we denote by A(y) the value in {±1} corresponding to the prediction of h(c),
where h is the hypothesis that A outputs upon receiving the training set S =
(c, y1 ), . . . , (c, ym ).
Note that for any hypothesis $h$ we have
$$L_{\mathcal D^b}(h) = \frac{1 - h(c)\,b\,\epsilon}{2}.$$
In particular, the Bayes optimal hypothesis is $h^b$ and
$$L_{\mathcal D^b}(A(y)) - L_{\mathcal D^b}(h^b) = \frac{1 - A(y)\,b\,\epsilon}{2} - \frac{1-\epsilon}{2} = \begin{cases} \epsilon & \text{if } A(y) \neq b \\ 0 & \text{otherwise.} \end{cases}$$

Fix $A$. For $b \in \{\pm 1\}$, let $Y^b = \{y \in \{\pm 1\}^m : A(y) \neq b\}$. The distribution $\mathcal D^b$
induces a probability $P_b$ over $\{\pm 1\}^m$. Hence,
$$\operatorname*{\mathbb{P}}[L_{\mathcal D^b}(A(y)) - L_{\mathcal D^b}(h^b) = \epsilon] = \mathcal D^b(Y^b) = \sum_{y} P_b[y]\,1_{[A(y)\neq b]}.$$
Denote $N^+ = \{y : |\{i : y_i = 1\}| \ge m/2\}$ and $N^- = \{\pm 1\}^m \setminus N^+$. Note that for
any $y \in N^+$ we have $P_+[y] \ge P_-[y]$ and for any $y \in N^-$ we have $P_-[y] \ge P_+[y]$.
Therefore,
$$\begin{aligned} \max_{b \in \{\pm 1\}}\operatorname*{\mathbb{P}}[L_{\mathcal D^b}(A(y)) - L_{\mathcal D^b}(h^b) = \epsilon] &= \max_{b \in \{\pm 1\}}\sum_{y} P_b[y]\,1_{[A(y)\neq b]} \\ &\ge \frac{1}{2}\sum_{y} P_+[y]\,1_{[A(y)\neq +]} + \frac{1}{2}\sum_{y} P_-[y]\,1_{[A(y)\neq -]} \\ &= \frac{1}{2}\sum_{y \in N^+}\bigl(P_+[y]\,1_{[A(y)\neq +]} + P_-[y]\,1_{[A(y)\neq -]}\bigr) + \frac{1}{2}\sum_{y \in N^-}\bigl(P_+[y]\,1_{[A(y)\neq +]} + P_-[y]\,1_{[A(y)\neq -]}\bigr) \\ &\ge \frac{1}{2}\sum_{y \in N^+}\bigl(P_-[y]\,1_{[A(y)\neq +]} + P_-[y]\,1_{[A(y)\neq -]}\bigr) + \frac{1}{2}\sum_{y \in N^-}\bigl(P_+[y]\,1_{[A(y)\neq +]} + P_+[y]\,1_{[A(y)\neq -]}\bigr) \\ &= \frac{1}{2}\sum_{y \in N^+} P_-[y] + \frac{1}{2}\sum_{y \in N^-} P_+[y]. \end{aligned}$$
Next note that $\sum_{y \in N^+} P_-[y] = \sum_{y \in N^-} P_+[y]$, and both values are the probability that a Binomial$(m, (1-\epsilon)/2)$ random variable will have value greater
than $m/2$. Using Lemma B.11, this probability is lower bounded by
$$\frac{1}{2}\Bigl(1 - \sqrt{1 - \exp(-m\epsilon^2/(1-\epsilon^2))}\Bigr) \ge \frac{1}{2}\Bigl(1 - \sqrt{1 - \exp(-2m\epsilon^2)}\Bigr),$$
where we used the assumption that $\epsilon^2 \le 1/2$. It follows that if $m \le 0.5\log(1/(4\delta))/\epsilon^2$
then there exists $b$ such that
$$\operatorname*{\mathbb{P}}[L_{\mathcal D^b}(A(y)) - L_{\mathcal D^b}(h^b) = \epsilon] \ge \frac{1}{2}\Bigl(1 - \sqrt{1 - 4\delta}\Bigr) \ge \delta,$$
where the last inequality follows by standard algebraic manipulations. This concludes our proof.

28.2.2 Showing That $m(\epsilon, 1/8) \ge 8d/\epsilon^2$

We shall now prove that for every $\epsilon < 1/(8\sqrt{2})$ we have that $m(\epsilon, \delta) \ge 8d/\epsilon^2$.
Let $\rho = 8\epsilon$ and note that $\rho \in (0, 1/\sqrt{2})$. We will construct a family of distributions as follows. First, let $C = \{c_1, \ldots, c_d\}$ be a set of $d$ instances which are
shattered by $\mathcal H$. Second, for each vector $(b_1, \ldots, b_d) \in \{\pm 1\}^d$, define a distribution $\mathcal D_b$ such that
$$\mathcal D_b(\{(x,y)\}) = \begin{cases} \frac{1}{d}\cdot\frac{1 + y b_i \rho}{2} & \text{if } \exists i : x = c_i \\ 0 & \text{otherwise.} \end{cases}$$
That is, to sample an example according to $\mathcal D_b$, we first sample an element $c_i \in C$
uniformly at random, and then set the label to be $b_i$ with probability $(1+\rho)/2$
or $-b_i$ with probability $(1-\rho)/2$.
It is easy to verify that the Bayes optimal predictor for $\mathcal D_b$ is the hypothesis
It is easy to verify that the Bayes optimal predictor for Db is the hypothesis
$h \in \mathcal H$ such that $h(c_i) = b_i$ for all $i \in [d]$, and its error is $\frac{1-\rho}{2}$. In addition, for
any other function $f : \mathcal X \to \{\pm 1\}$, it is easy to verify that
$$L_{\mathcal D_b}(f) = \frac{1+\rho}{2}\cdot\frac{|\{i \in [d] : f(c_i) \neq b_i\}|}{d} + \frac{1-\rho}{2}\cdot\frac{|\{i \in [d] : f(c_i) = b_i\}|}{d}.$$
Therefore,
$$L_{\mathcal D_b}(f) - \min_{h \in \mathcal H} L_{\mathcal D_b}(h) = \rho\cdot\frac{|\{i \in [d] : f(c_i) \neq b_i\}|}{d}. \tag{28.2}$$
Next, fix some learning algorithm $A$. As in the proof of the No-Free-Lunch
theorem, we have that
$$\max_{\mathcal D_b : b \in \{\pm 1\}^d}\ \operatorname*{\mathbb{E}}_{S \sim \mathcal D_b^m}\Bigl[L_{\mathcal D_b}(A(S)) - \min_{h \in \mathcal H} L_{\mathcal D_b}(h)\Bigr] \tag{28.3}$$
$$\ge \operatorname*{\mathbb{E}}_{\mathcal D_b : b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{S \sim \mathcal D_b^m}\Bigl[L_{\mathcal D_b}(A(S)) - \min_{h \in \mathcal H} L_{\mathcal D_b}(h)\Bigr] \tag{28.4}$$
$$= \operatorname*{\mathbb{E}}_{\mathcal D_b : b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{S \sim \mathcal D_b^m}\Bigl[\rho\cdot\frac{|\{i \in [d] : A(S)(c_i) \neq b_i\}|}{d}\Bigr] \tag{28.5}$$
$$= \frac{\rho}{d}\sum_{i=1}^{d}\operatorname*{\mathbb{E}}_{\mathcal D_b : b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{S \sim \mathcal D_b^m} 1_{[A(S)(c_i)\neq b_i]}, \tag{28.6}$$
where the first equality follows from Equation (28.2). In addition, using the
definition of $\mathcal D_b$, to sample $S \sim \mathcal D_b$ we can first sample $(j_1, \ldots, j_m) \sim U([d])^m$, set
$x_r = c_{j_r}$, and finally sample $y_r$ such that $\mathbb{P}[y_r = b_{j_r}] = (1+\rho)/2$. Let us simplify
the notation and use $y \sim b$ to denote sampling according to $\mathbb{P}[y = b] = (1+\rho)/2$.
Therefore, the right-hand side of Equation (28.6) equals
$$\frac{\rho}{d}\sum_{i=1}^{d}\operatorname*{\mathbb{E}}_{j \sim U([d])^m}\ \operatorname*{\mathbb{E}}_{b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{\forall r, y_r \sim b_{j_r}} 1_{[A(S)(c_i)\neq b_i]}. \tag{28.7}$$

We now proceed in two steps. First, we show that among all learning algorithms,
$A$, the one which minimizes Equation (28.7) (and hence also Equation (28.4))
is the Maximum-Likelihood learning rule, denoted $A_{ML}$. Formally, for each $i$,
$A_{ML}(S)(c_i)$ is the majority vote among the set $\{y_r : r \in [m], x_r = c_i\}$. Second,
we lower bound Equation (28.7) for $A_{ML}$.

lemma 28.1 Among all algorithms, Equation (28.4) is minimized for $A$ being
the Maximum-Likelihood algorithm, $A_{ML}$, defined as
$$\forall i, \quad A_{ML}(S)(c_i) = \operatorname{sign}\Bigl(\sum_{r : x_r = c_i} y_r\Bigr).$$
Proof Fix some $j \in [d]^m$. Note that given $j$ and $y \in \{\pm 1\}^m$, the training set
$S$ is fully determined. Therefore, we can write $A(j, y)$ instead of $A(S)$. Let us
also fix $i \in [d]$. Denote $b_{\neg i}$ the sequence $(b_1, \ldots, b_{i-1}, b_{i+1}, \ldots, b_d)$. Also, for any
$y \in \{\pm 1\}^m$, let $y^I$ denote the elements of $y$ corresponding to indices for which
$j_r = i$ and let $y^{\neg I}$ be the rest of the elements of $y$. We have
$$\begin{aligned} &\operatorname*{\mathbb{E}}_{b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{\forall r, y_r \sim b_{j_r}} 1_{[A(S)(c_i)\neq b_i]} \\ &= \operatorname*{\mathbb{E}}_{b_{\neg i}\sim U(\{\pm 1\}^{d-1})}\ \frac{1}{2}\sum_{b_i \in \{\pm 1\}}\sum_{y} P[y \mid b_{\neg i}, b_i]\,1_{[A(j,y)(c_i)\neq b_i]} \\ &= \operatorname*{\mathbb{E}}_{b_{\neg i}\sim U(\{\pm 1\}^{d-1})}\ \sum_{y^{\neg I}} P[y^{\neg I}\mid b_{\neg i}]\left(\frac{1}{2}\sum_{y^I}\sum_{b_i \in \{\pm 1\}} P[y^I \mid b_i]\,1_{[A(j,y)(c_i)\neq b_i]}\right). \end{aligned}$$
The sum within the parentheses is minimized when $A(j,y)(c_i)$ is the maximizer
of $P[y^I \mid b_i]$ over $b_i \in \{\pm 1\}$, which is exactly the Maximum-Likelihood rule. Repeating the same argument for all $i$ we conclude our proof.

Fix $i$. For every $j$, let $n_i(j) = |\{t : j_t = i\}|$ be the number of instances in which
the instance is $c_i$. For the Maximum-Likelihood rule, we have that the quantity
$$\operatorname*{\mathbb{E}}_{b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{\forall r, y_r \sim b_{j_r}} 1_{[A_{ML}(S)(c_i)\neq b_i]}$$
is exactly the probability that a binomial $(n_i(j), (1-\rho)/2)$ random variable will
be larger than $n_i(j)/2$. Using Lemma B.11, and the assumption $\rho^2 \le 1/2$, we
have that
$$\operatorname*{\mathbb{P}}[B \ge n_i(j)/2] \ge \frac{1}{2}\Bigl(1 - \sqrt{1 - e^{-2 n_i(j)\rho^2}}\Bigr).$$
We have thus shown that
$$\begin{aligned} \frac{\rho}{d}\sum_{i=1}^{d}\operatorname*{\mathbb{E}}_{j \sim U([d])^m}\ \operatorname*{\mathbb{E}}_{b \sim U(\{\pm 1\}^d)}\ \operatorname*{\mathbb{E}}_{\forall r, y_r \sim b_{j_r}} 1_{[A(S)(c_i)\neq b_i]} &\ge \frac{\rho}{2d}\sum_{i=1}^{d}\operatorname*{\mathbb{E}}_{j \sim U([d])^m}\Bigl(1 - \sqrt{1 - e^{-2\rho^2 n_i(j)}}\Bigr) \\ &\ge \frac{\rho}{2d}\sum_{i=1}^{d}\operatorname*{\mathbb{E}}_{j \sim U([d])^m}\Bigl(1 - \sqrt{2\rho^2 n_i(j)}\Bigr), \end{aligned}$$
where in the last inequality we used the inequality $1 - e^{-a} \le a$.
Since the square root function is concave, we can apply Jensen's inequality to
obtain that the above is lower bounded by
$$\frac{\rho}{2d}\sum_{i=1}^{d}\Bigl(1 - \sqrt{2\rho^2\operatorname*{\mathbb{E}}_{j \sim U([d])^m} n_i(j)}\Bigr) = \frac{\rho}{2d}\sum_{i=1}^{d}\Bigl(1 - \sqrt{2\rho^2 m/d}\Bigr) = \frac{\rho}{2}\Bigl(1 - \sqrt{2\rho^2 m/d}\Bigr).$$

As long as m < d/(8ρ²), this term would be larger than ρ/4.

In summary, we have shown that if m < d/(8ρ²) then for any algorithm there
exists a distribution such that

  E_{S∼D^m}[ L_D(A(S)) − min_{h∈H} L_D(h) ] ≥ ρ/4.

Finally, let θ = (1/ρ)( L_D(A(S)) − min_{h∈H} L_D(h) ) and note that θ ∈ [0, 1] (see
Equation (28.5)). Therefore, using Lemma B.1, we get that

  P[ L_D(A(S)) − min_{h∈H} L_D(h) > ε ] = P[ θ > ε/ρ ] ≥ E[θ] − ε/ρ ≥ 1/4 − ε/ρ.

Choosing ρ = 8ε we conclude that if m < d/(512 ε²), then with probability of at least
1/8 we will have L_D(A(S)) − min_{h∈H} L_D(h) ≥ ε.
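The construction above is easy to simulate. The following sketch (illustrative Python, not part of the original text; the parameter values are arbitrary) draws the hidden labeling b uniformly, samples m noisy examples from D_b, applies the majority-vote Maximum-Likelihood rule, and estimates the excess error ρ·|{i : A(S)(c_i) ≠ b_i}|/d; with m well below d/(8ρ²) the average excess error typically stays above ρ/4.

    import numpy as np

    # Minimal simulation of the lower-bound construction (illustration only).
    rng = np.random.default_rng(0)
    d, rho, m, trials = 100, 0.1, 50, 200        # m is far below d/(8*rho^2) = 1250

    excess = []
    for _ in range(trials):
        b = rng.choice([-1, 1], size=d)          # hidden labeling b in {±1}^d
        idx = rng.integers(0, d, size=m)         # x_r = c_{j_r}, with j_r ~ U([d])
        flip = rng.random(m) < (1 - rho) / 2     # label is -b_{j_r} with prob (1-rho)/2
        y = np.where(flip, -b[idx], b[idx])
        votes = np.zeros(d)
        np.add.at(votes, idx, y)                 # sum of observed labels per point
        pred = np.where(votes >= 0, 1, -1)       # majority vote (ties broken to +1)
        excess.append(rho * np.mean(pred != b))  # rho * fraction of mistakes

    print(f"average excess error {np.mean(excess):.3f}  vs  rho/4 = {rho/4:.3f}")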

28.3 The Upper Bound for the Realizable Case

Here we prove that there exists C such that H is PAC learnable with sample
complexity

  m_H(ε, δ) ≤ C · ( d ln(1/ε) + ln(1/δ) ) / ε.

We do so by showing that for m ≥ C ( d ln(1/ε) + ln(1/δ) ) / ε, H is learnable using the
ERM rule. We prove this claim based on the notion of ε-nets.

definition 28.2  (ε-net)  Let X be a domain. S ⊂ X is an ε-net for H ⊆ 2^X
with respect to a distribution D over X if

  ∀h ∈ H :  D(h) ≥ ε  ⇒  h ∩ S ≠ ∅.
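For a finite domain and a finite class the definition can be checked directly. The sketch below (illustrative Python, not from the text; the toy class of intervals and the function name are made up for the example) tests whether a given sample is an ε-net for hypotheses represented as subsets of the domain.

    import numpy as np

    def is_eps_net(sample, hypotheses, probs, eps):
        """sample: list of domain points; hypotheses: list of sets; probs: dict point -> mass."""
        for h in hypotheses:
            mass = sum(probs[x] for x in h)
            if mass >= eps and not (h & set(sample)):
                return False        # a heavy hypothesis that the sample misses
        return True

    # Toy example: domain {0,...,9}, uniform D, all intervals as hypotheses.
    domain = list(range(10))
    probs = {x: 0.1 for x in domain}
    hypotheses = [set(range(i, j)) for i in range(10) for j in range(i + 1, 11)]
    rng = np.random.default_rng(0)
    sample = rng.choice(domain, size=8).tolist()
    print(is_eps_net(sample, hypotheses, probs, eps=0.3))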

theorem 28.3  Let H ⊆ 2^X with VCdim(H) = d. Fix ε ∈ (0, 1), δ ∈ (0, 1/4)
and let

  m ≥ (8/ε)( 2d log(16e/ε) + log(2/δ) ).

Then, with probability of at least 1 − δ over a choice of S ∼ D^m we have that S
is an ε-net for H.

Proof  Let

  B = { S ⊂ X : |S| = m, ∃h ∈ H, D(h) ≥ ε, h ∩ S = ∅ }

be the set of sets which are not ε-nets. We need to bound P[S ∈ B]. Define

  B′ = { (S, T) ⊂ X : |S| = |T| = m, ∃h ∈ H, D(h) ≥ ε, h ∩ S = ∅, |T ∩ h| > εm/2 }.

Claim 1:  P[S ∈ B] ≤ 2 P[(S, T) ∈ B′].

Proof of Claim 1:  Since S and T are chosen independently we can write

  P[(S, T) ∈ B′] = E_{(S,T)∼D^{2m}}[ 1_{[(S,T)∈B′]} ] = E_{S∼D^m} E_{T∼D^m}[ 1_{[(S,T)∈B′]} ].

Note that (S, T) ∈ B′ implies S ∈ B and therefore 1_{[(S,T)∈B′]} = 1_{[(S,T)∈B′]} 1_{[S∈B]},
which gives

  P[(S, T) ∈ B′] = E_{S∼D^m} E_{T∼D^m}[ 1_{[(S,T)∈B′]} 1_{[S∈B]} ]
                 = E_{S∼D^m}[ 1_{[S∈B]} E_{T∼D^m}[ 1_{[(S,T)∈B′]} ] ].

Fix some S. Then, either 1_{[S∈B]} = 0 or S ∈ B and then there exists h_S such that D(h_S) ≥ ε
and |h_S ∩ S| = 0. It follows that a sufficient condition for (S, T) ∈ B′ is that
|T ∩ h_S| > εm/2. Therefore, whenever S ∈ B we have

  E_{T∼D^m}[ 1_{[(S,T)∈B′]} ] ≥ P_{T∼D^m}[ |T ∩ h_S| > εm/2 ].

But, since we now assume S ∈ B we know that D(h_S) = ρ ≥ ε. Therefore,
|T ∩ h_S| is a binomial random variable with parameters ρ (probability of success
for a single try) and m (number of tries). Chernoff’s inequality implies

  P[ |T ∩ h_S| ≤ ρm/2 ] ≤ e^{−mρ/8} ≤ e^{−mε/8} ≤ 1/2,

where the last inequality holds because the assumption on m in the theorem implies
in particular mε ≥ 8 log(2). Thus,

  P[ |T ∩ h_S| > εm/2 ] ≥ 1 − P[ |T ∩ h_S| ≤ ρm/2 ] ≥ 1/2.

Combining all the preceding we conclude the proof of Claim 1.

Claim 2 (Symmetrization):  P[(S, T) ∈ B′] ≤ e^{−εm/4} τ_H(2m).

Proof of Claim 2:  To simplify notation, let α = mε/2 and for a sequence A =
(x_1, . . . , x_{2m}) let A_0 = (x_1, . . . , x_m). Using the definition of B′ we get that

  P[A ∈ B′] = E_{A∼D^{2m}}[ max_{h∈H} 1_{[D(h)≥ε]} 1_{[|h∩A_0|=0]} 1_{[|h∩A|≥α]} ]
            ≤ E_{A∼D^{2m}}[ max_{h∈H} 1_{[|h∩A_0|=0]} 1_{[|h∩A|≥α]} ].

Now, let us define by H_A the effective number of different hypotheses on A,
namely, H_A = { h ∩ A : h ∈ H }. It follows that

  P[A ∈ B′] ≤ E_{A∼D^{2m}}[ max_{h∈H_A} 1_{[|h∩A_0|=0]} 1_{[|h∩A|≥α]} ]
            ≤ E_{A∼D^{2m}}[ Σ_{h∈H_A} 1_{[|h∩A_0|=0]} 1_{[|h∩A|≥α]} ].

Let J = { j ⊂ [2m] : |j| = m }. For any j ∈ J and A = (x_1, . . . , x_{2m}) define
A_j = (x_{j_1}, . . . , x_{j_m}). Since the elements of A are chosen i.i.d., we have that
for any j ∈ J and any function f(A, A_0) it holds that E_{A∼D^{2m}}[f(A, A_0)] =

E_{A∼D^{2m}}[f(A, A_j)]. Since this holds for any j it also holds for the expectation of
j chosen at random from J. In particular, it holds for the function f(A, A_0) =
Σ_{h∈H_A} 1_{[|h∩A_0|=0]} 1_{[|h∩A|≥α]}. We therefore obtain that

  P[A ∈ B′] ≤ E_{A∼D^{2m}} E_{j∼J}[ Σ_{h∈H_A} 1_{[|h∩A_j|=0]} 1_{[|h∩A|≥α]} ]
            = E_{A∼D^{2m}}[ Σ_{h∈H_A} 1_{[|h∩A|≥α]} E_{j∼J}[ 1_{[|h∩A_j|=0]} ] ].

Now, fix some A s.t. |h ∩ A| ≥ α. Then, E_j[ 1_{[|h∩A_j|=0]} ] is the probability that
when choosing m balls from a bag with at least α red balls, we will never choose
a red ball. This probability is at most

  (1 − α/(2m))^m = (1 − ε/4)^m ≤ e^{−εm/4}.

We therefore get that

  P[A ∈ B′] ≤ E_{A∼D^{2m}}[ Σ_{h∈H_A} e^{−εm/4} ] ≤ e^{−εm/4} E_{A∼D^{2m}}[ |H_A| ].

Using the definition of the growth function we conclude the proof of Claim 2.
Completing the Proof:  By Sauer’s lemma we know that τ_H(2m) ≤ (2em/d)^d.
Combining this with the two claims we obtain that

  P[S ∈ B] ≤ 2 (2em/d)^d e^{−εm/4}.

We would like the right-hand side of the inequality to be at most δ; that is,

  2 (2em/d)^d e^{−εm/4} ≤ δ.

Rearranging, we obtain the requirement

  m ≥ (4/ε)( d log(2em/d) + log(2/δ) ) = (4d/ε) log(m) + (4/ε)( d log(2e/d) + log(2/δ) ).

Using Lemma A.2, a sufficient condition for the preceding to hold is that

  m ≥ (16d/ε) log(8d/ε) + (8/ε)( d log(2e/d) + log(2/δ) ).

A sufficient condition for this is that

  m ≥ (16d/ε) log(8d/ε) + (16/ε)( d log(2e/d) + (1/2) log(2/δ) )
    = (16d/ε) log( (8d/ε) · (2e/d) ) + (8/ε) log(2/δ)
    = (8/ε)( 2d log(16e/ε) + log(2/δ) ),

and this concludes our proof.

28.3.1  From ε-Nets to PAC Learnability

theorem 28.4  Let H be a hypothesis class over X with VCdim(H) = d. Let
D be a distribution over X and let c ∈ H be a target hypothesis. Fix ε, δ ∈ (0, 1)
and let m be as defined in Theorem 28.3. Then, with probability of at least 1 − δ
over a choice of m i.i.d. instances from X with labels according to c we have that
any ERM hypothesis has a true error of at most ε.

Proof  Define the class H_c = { c △ h : h ∈ H }, where c △ h = (h \ c) ∪ (c \ h). It is
easy to verify that if some A ⊂ X is shattered by H then it is also shattered by H_c
and vice versa. Hence, VCdim(H) = VCdim(H_c). Therefore, using Theorem 28.3
we know that with probability of at least 1 − δ, the sample S is an ε-net for H_c.
Note that L_D(h) = D(h △ c). Therefore, for any h ∈ H with L_D(h) ≥ ε we have
that |(h △ c) ∩ S| > 0, which implies that h cannot be an ERM hypothesis, which
concludes our proof.
29 Multiclass Learnability

In Chapter 17 we have introduced the problem of multiclass categorization, in


which the goal is to learn a predictor h : X ! [k]. In this chapter we address PAC
learnability of multiclass predictors with respect to the 0-1 loss. As in Chapter 6,
the main goal of this chapter is to:

• Characterize which classes of multiclass hypotheses are learnable in the (mul-


ticlass) PAC model.
• Quantify the sample complexity of such hypothesis classes.

In view of the fundamental theorem of learning theory (Theorem 6.8), it is natu-


ral to seek a generalization of the VC dimension to multiclass hypothesis classes.
In Section 29.1 we show such a generalization, called the Natarajan dimension,
and state a generalization of the fundamental theorem based on the Natarajan
dimension. Then, we demonstrate how to calculate the Natarajan dimension of
several important hypothesis classes.
Recall that the main message of the fundamental theorem of learning theory
is that a hypothesis class of binary classifiers is learnable (with respect to the
0-1 loss) if and only if it has the uniform convergence property, and then it
is learnable by any ERM learner. In Chapter 13, Exercise 2, we have shown
that this equivalence breaks down for a certain convex learning problem. The
last section of this chapter is devoted to showing that the equivalence between
learnability and uniform convergence breaks down even in multiclass problems
with the 0-1 loss, which are very similar to binary classification. Indeed, we
construct a hypothesis class which is learnable by a specific ERM learner, but
for which other ERM learners might fail and the uniform convergence property
does not hold.

29.1 The Natarajan Dimension

In this section we define the Natarajan dimension, which is a generalization of


the VC dimension to classes of multiclass predictors. Throughout this section,
let H be a hypothesis class of multiclass predictors; namely, each h 2 H is a
function from X to [k].


To define the Natarajan dimension, we first generalize the definition of shat-


tering.
definition 29.1  (Shattering (Multiclass Version))  We say that a set C ⊂ X
is shattered by H if there exist two functions f_0, f_1 : C → [k] such that

• For every x ∈ C, f_0(x) ≠ f_1(x).
• For every B ⊆ C, there exists a function h ∈ H such that

    ∀x ∈ B, h(x) = f_0(x)   and   ∀x ∈ C \ B, h(x) = f_1(x).

definition 29.2  (Natarajan Dimension)  The Natarajan dimension of H, de-
noted Ndim(H), is the maximal size of a shattered set C ⊂ X.
It is not hard to see that in the case that there are exactly two classes,
Ndim(H) = VCdim(H). Therefore, the Natarajan dimension generalizes the VC
dimension. We next show that the Natarajan dimension allows us to general-
ize the fundamental theorem of statistical learning from binary classification to
multiclass classification.
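For small finite classes the definition can be verified by brute force. The sketch below (illustrative Python, not from the text; function names are made up for the example) tests whether a set C is shattered in the multiclass sense and uses this to compute the Natarajan dimension of a toy class.

    from itertools import combinations, product

    def shattered(C, H):
        """C: tuple of points; H: list of dicts mapping point -> label."""
        labels_at = [sorted({h[x] for h in H}) for x in C]
        for f0 in product(*labels_at):
            for f1 in product(*labels_at):
                if any(a == b for a, b in zip(f0, f1)):
                    continue                      # need f0(x) != f1(x) for every x in C
                ok = True
                for B in range(2 ** len(C)):      # every subset B of C
                    want = {x: (f0[i] if (B >> i) & 1 else f1[i]) for i, x in enumerate(C)}
                    if not any(all(h[x] == want[x] for x in C) for h in H):
                        ok = False
                        break
                if ok:
                    return True
        return False

    def natarajan_dim(X, H):
        d = 0
        for size in range(1, len(X) + 1):
            if any(shattered(C, H) for C in combinations(X, size)):
                d = size
        return d

    # Toy class: all functions from {0,1} to {0,1,2}.
    X = [0, 1]
    H = [{0: a, 1: b} for a in range(3) for b in range(3)]
    print(natarajan_dim(X, H))    # prints 2: the full class over two points is shattered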

29.2 The Multiclass Fundamental Theorem

theorem 29.3  (The Multiclass Fundamental Theorem)  There exist absolute
constants C_1, C_2 > 0 such that the following holds. For every hypothesis class H
of functions from X to [k], such that the Natarajan dimension of H is d, we have

1. H has the uniform convergence property with sample complexity

     C_1 ( d + log(1/δ) ) / ε²  ≤  m^UC_H(ε, δ)  ≤  C_2 ( d log(k) + log(1/δ) ) / ε².

2. H is agnostic PAC learnable with sample complexity

     C_1 ( d + log(1/δ) ) / ε²  ≤  m_H(ε, δ)  ≤  C_2 ( d log(k) + log(1/δ) ) / ε².

3. H is PAC learnable (assuming realizability) with sample complexity

     C_1 ( d + log(1/δ) ) / ε  ≤  m_H(ε, δ)  ≤  C_2 ( d log(kd/ε) + log(1/δ) ) / ε.

29.2.1 On the Proof of Theorem 29.3


The lower bounds in Theorem 29.3 can be deduced by a reduction from the
binary fundamental theorem (see Exercise 5).
The upper bounds in Theorem 29.3 can be proved along the same lines of the
proof of the fundamental theorem for binary classification, given in Chapter 28
(see Exercise 4). The sole ingredient of that proof that should be modified in a
nonstraightforward manner is Sauer’s lemma. It applies only to binary classes
and therefore must be replaced. An appropriate substitute is Natarajan’s lemma:

lemma 29.4  (Natarajan)  |H| ≤ |X|^{Ndim(H)} · k^{2 Ndim(H)}.

The proof of Natarajan’s lemma shares the same spirit of the proof of Sauer’s
lemma and is left as an exercise (see Exercise 3).

29.3 Calculating the Natarajan Dimension

In this section we show how to calculate (or estimate) the Natarajan dimen-
sion of several popular classes, some of which were studied in Chapter 17. As
these calculations indicate, the Natarajan dimension is often proportional to the
number of parameters required to define a hypothesis.

29.3.1 One-versus-All Based Classes


In Chapter 17 we have seen two reductions of multiclass categorization to bi-
nary classification: One-versus-All and All-Pairs. In this section we calculate the
Natarajan dimension of the One-versus-All method.
Recall that in One-versus-All we train, for each label, a binary classifier that
distinguishes between that label and the rest of the labels. This naturally sug-
gests considering multiclass hypothesis classes of the following form. Let H_bin ⊆
{0, 1}^X be a binary hypothesis class. For every h̄ = (h_1, . . . , h_k) ∈ (H_bin)^k define
T(h̄) : X → [k] by

  T(h̄)(x) = argmax_{i∈[k]} h_i(x).

If there are two labels that maximize h_i(x), we choose the smaller one. Also, let

  H^{OvA,k}_bin = { T(h̄) : h̄ ∈ (H_bin)^k }.
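A predictor of this form is only a few lines of code. The sketch below (illustrative Python, not from the text; the threshold classifiers are made up for the example) wraps k binary scoring functions and applies the argmax rule, breaking ties toward the smaller label.

    import numpy as np

    def ova_predict(binary_preds, x):
        """binary_preds: list of k functions X -> {0,1}; returns a label in {0,...,k-1}."""
        scores = np.array([h(x) for h in binary_preds])
        return int(np.argmax(scores))    # argmax already breaks ties toward the smaller index

    # Toy example: three one-dimensional threshold classifiers.
    hs = [lambda x: int(x < 0), lambda x: int(0 <= x < 1), lambda x: int(x >= 1)]
    print([ova_predict(hs, x) for x in (-0.5, 0.3, 2.0)])    # [0, 1, 2]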
What “should” be the Natarajan dimension of H^{OvA,k}_bin? Intuitively, to specify a
hypothesis in H_bin we need d = VCdim(H_bin) parameters. To specify a hypothe-
sis in H^{OvA,k}_bin, we need to specify k hypotheses in H_bin. Therefore, kd parameters
should suffice. The following lemma establishes this intuition.

lemma 29.5  If d = VCdim(H_bin) then

  Ndim(H^{OvA,k}_bin) ≤ 3kd log(kd).

Proof  Let C ⊂ X be a shattered set. By the definition of shattering (for mul-
ticlass hypotheses)

  |(H^{OvA,k}_bin)_C| ≥ 2^{|C|}.

On the other hand, each hypothesis in H^{OvA,k}_bin is determined by using k hypothe-
ses from H_bin. Therefore,

  |(H^{OvA,k}_bin)_C| ≤ |(H_bin)_C|^k.

By Sauer’s lemma, |(H_bin)_C| ≤ |C|^d. We conclude that

  2^{|C|} ≤ |(H^{OvA,k}_bin)_C| ≤ |C|^{dk}.

The proof follows by taking the logarithm and applying Lemma A.1.

How tight is Lemma 29.5? It is not hard to see that for some classes, Ndim(H^{OvA,k}_bin)
can be much smaller than dk (see Exercise 1). However there are several natural
binary classes, H_bin (e.g., halfspaces), for which Ndim(H^{OvA,k}_bin) = Ω(dk) (see
Exercise 6).

29.3.2 General Multiclass-to-Binary Reductions


The same reasoning used to establish Lemma 29.5 can be used to upper bound
the Natarajan dimension of more general multiclass-to-binary reductions. These
reductions train several binary classifiers on the data. Then, given a new in-
stance, they predict its label by using some rule that takes into account the
labels predicted by the binary classifiers. These reductions include One-versus-
All and All-Pairs.
Suppose that such a method trains l binary classifiers from a binary class H_bin,
and r : {0, 1}^l → [k] is the rule that determines the (multiclass) label according
to the predictions of the binary classifiers. The hypothesis class corresponding
to this method can be defined as follows. For every h̄ = (h_1, . . . , h_l) ∈ (H_bin)^l
define R(h̄) : X → [k] by

  R(h̄)(x) = r(h_1(x), . . . , h_l(x)).

Finally, let

  H^r_bin = { R(h̄) : h̄ ∈ (H_bin)^l }.

Similarly to Lemma 29.5 it can be proven that:

lemma 29.6  If d = VCdim(H_bin) then

  Ndim(H^r_bin) ≤ 3 l d log(l d).
The proof is left as Exercise 2.

29.3.3 Linear Multiclass Predictors


Next, we consider the class of linear multiclass predictors (see Section 17.2). Let
Ψ : X × [k] → R^d be some class-sensitive feature mapping and let

  H_Ψ = { x ↦ argmax_{i∈[k]} ⟨w, Ψ(x, i)⟩  :  w ∈ R^d }.          (29.1)

Each hypothesis in H_Ψ is determined by d parameters, namely, a vector w ∈
R^d. Therefore, we would expect that the Natarajan dimension would be upper
bounded by d. Indeed:

theorem 29.7  Ndim(H_Ψ) ≤ d.

Proof  Let C ⊂ X be a shattered set, and let f_0, f_1 : C → [k] be the two
functions that witness the shattering. We need to show that |C| ≤ d. For every
x ∈ C let ρ(x) = Ψ(x, f_0(x)) − Ψ(x, f_1(x)). We claim that the set ρ(C) =
{ρ(x) : x ∈ C} consists of |C| elements (i.e., ρ is one to one) and is shattered
by the binary hypothesis class of homogeneous linear separators on R^d,

  H = { x ↦ sign(⟨w, x⟩) : w ∈ R^d }.

Since VCdim(H) = d, it will follow that |C| = |ρ(C)| ≤ d, as required.
To establish our claim it is enough to show that |H_{ρ(C)}| = 2^{|C|}. Indeed, given
a subset B ⊆ C, by the definition of shattering, there exists h_B ∈ H_Ψ for which

  ∀x ∈ B, h_B(x) = f_0(x)   and   ∀x ∈ C \ B, h_B(x) = f_1(x).

Let w_B ∈ R^d be a vector that defines h_B. We have that, for every x ∈ B,

  ⟨w_B, Ψ(x, f_0(x))⟩ > ⟨w_B, Ψ(x, f_1(x))⟩   ⇒   ⟨w_B, ρ(x)⟩ > 0.

Similarly, for every x ∈ C \ B,

  ⟨w_B, ρ(x)⟩ < 0.

It follows that the hypothesis g_B ∈ H defined by the same w_B ∈ R^d labels the
points in ρ(B) by 1 and the points in ρ(C \ B) by 0. Since this holds for every
B ⊆ C we obtain that |ρ(C)| = |C| and |H_{ρ(C)}| = 2^{|C|}, which concludes our
proof.
The theorem is tight in the sense that there are mappings Ψ for which Ndim(H_Ψ) =
Ω(d). For example, this is true for the multivector construction (see Section 17.2
and the Bibliographic Remarks at the end of this chapter). We therefore con-
clude:

corollary 29.8  Let X = R^n and let Ψ : X × [k] → R^{nk} be the class sensitive
feature mapping for the multi-vector construction:

  Ψ(x, y) = [ 0, . . . , 0,  x_1, . . . , x_n,  0, . . . , 0 ],

where the first block of zeros lies in R^{(y−1)n}, the block x_1, . . . , x_n lies in R^n, and the
last block of zeros lies in R^{(k−y)n}. Let H_Ψ be as defined in Equation (29.1). Then, the
Natarajan dimension of H_Ψ satisfies

  (k − 1)(n − 1) ≤ Ndim(H_Ψ) ≤ kn.
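To make the multivector construction concrete, the following sketch (illustrative Python, not from the text; 0-indexed labels, and the weight vector is random just for the example) builds Ψ(x, y) by placing x in the y-th block of a kn-dimensional vector and predicts with the argmax rule of Equation (29.1).

    import numpy as np

    def psi(x, y, k):
        """Multivector class-sensitive feature map: x sits in block y of a k*n vector."""
        n = len(x)
        out = np.zeros(k * n)
        out[y * n:(y + 1) * n] = x
        return out

    def predict(w, x, k):
        scores = [np.dot(w, psi(x, y, k)) for y in range(k)]
        return int(np.argmax(scores))

    # Toy example with n = 2 features and k = 3 classes.
    rng = np.random.default_rng(0)
    w = rng.normal(size=3 * 2)
    print(predict(w, np.array([1.0, -0.5]), k=3))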

29.4 On Good and Bad ERMs

In this section we present an example of a hypothesis class with the property


that not all ERMs for the class are equally successful. Furthermore, if we allow
an infinite number of labels, we will also obtain an example of a class that is

learnable by some ERM, but other ERMs will fail to learn it. Clearly, this also
implies that the class is learnable but it does not have the uniform convergence
property. For simplicity, we consider only the realizable case.
The class we consider is defined as follows. The instance space X will be any
finite or countable set. Let Pf (X ) be the collection of all finite and cofinite
subsets of X (that is, for each A 2 Pf (X ), either A or X \ A must be finite).
Instead of [k], the label set is Y = Pf (X ) [ {⇤}, where ⇤ is some special label.
For every A ∈ P_f(X) define h_A : X → Y by

  h_A(x) = A  if x ∈ A,   and   h_A(x) = ∗  if x ∉ A.

Finally, the hypothesis class we take is

  H = { h_A : A ∈ P_f(X) }.

Let A be some ERM algorithm for H. Assume that A operates on a sample


labeled by hA 2 H. Since hA is the only hypothesis in H that might return
the label A, if A observes the label A, it “knows” that the learned hypothesis
is hA , and, as an ERM, must return it (note that in this case the error of the
returned hypothesis is 0). Therefore, to specify an ERM, we should only specify
the hypothesis it returns upon receiving a sample of the form

S = {(x1 , ⇤), . . . , (xm , ⇤)}.

We consider two ERMs: The first, A_good, is defined by

  A_good(S) = h_∅;

that is, it outputs the hypothesis which predicts ‘∗’ for every x ∈ X. The second
ERM, A_bad, is defined by

  A_bad(S) = h_{ {x_1, . . . , x_m}^c }.

The following claim shows that the sample complexity of A_bad is about |X|-times
larger than the sample complexity of A_good. This establishes a gap between
different ERMs. If X is infinite, we even obtain a learnable class that is not
learnable by every ERM.
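The two ERMs are easy to state in code. The sketch below (illustrative Python, not from the text) represents the hypothesis h_A by the finite or cofinite set A itself and implements both learners on a sample labeled entirely by ∗.

    def h_A(A, x):
        """Hypothesis h_A: returns A itself on points of A and '*' elsewhere."""
        return A if x in A else "*"

    def A_good(sample, domain):
        # If an example reveals a label A, any ERM must return h_A; otherwise return h_emptyset.
        for x, y in sample:
            if y != "*":
                return frozenset(y)
        return frozenset()

    def A_bad(sample, domain):
        for x, y in sample:
            if y != "*":
                return frozenset(y)
        return frozenset(domain) - {x for x, _ in sample}   # complement of the sample points

    domain = range(10)
    sample = [(1, "*"), (4, "*"), (7, "*")]
    print(sorted(A_good(sample, domain)), sorted(A_bad(sample, domain)))
    # A_good predicts '*' everywhere; A_bad puts every unseen point inside its returned set.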

claim 29.9

1. Let ε, δ > 0, D a distribution over X and h_A ∈ H. Let S be an i.i.d. sample
   consisting of m ≥ (1/ε) log(1/δ) examples, sampled according to D and labeled by
   h_A. Then, with probability of at least 1 − δ, the hypothesis returned by A_good
   will have an error of at most ε.
2. There exists a constant a > 0 such that for every 0 < ε < a there exists a
   distribution D over X and h_A ∈ H such that the following holds. The hypoth-
   esis returned by A_bad upon receiving a sample of size m ≤ (|X| − 1)/(6ε), sampled
   according to D and labeled by h_A, will have error ≥ ε with probability ≥ e^{−1/6}.

Proof  Let D be a distribution over X and suppose that the correct labeling
is h_A. For any sample, A_good returns either h_∅ or h_A. If it returns h_A then its
true error is zero. Thus, it returns a hypothesis with error ≥ ε only if all the m
examples in the sample are from X \ A while the error of h_∅, L_D(h_∅) = P_D[A],
is ≥ ε. Assume m ≥ (1/ε) log(1/δ); then the probability of the latter event is no more
than (1 − ε)^m ≤ e^{−εm} ≤ δ. This establishes item 1.
Next we prove item 2. We restrict the proof to the case that |X| = d < ∞.
The proof for infinite X is similar. Suppose that X = {x_0, . . . , x_{d−1}}.
Let a > 0 be small enough such that 1 − 2ε ≥ e^{−4ε} for every ε < a and fix
some ε < a. Define a distribution on X by setting P[x_0] = 1 − 2ε and for all
1 ≤ i ≤ d − 1, P[x_i] = 2ε/(d − 1). Suppose that the correct hypothesis is h_∅ and let the
sample size be m. Clearly, the hypothesis returned by A_bad will err on all the
examples from X which are not in the sample. By Chernoff’s bound, if m ≤ (d − 1)/(6ε),
then with probability ≥ e^{−1/6}, the sample will include no more than (d − 1)/2 examples
from X \ {x_0}. Thus the returned hypothesis will have error ≥ ε.

The conclusion of the example presented is that in multiclass classification,
the sample complexity of different ERMs may differ. Are there “good” ERMs
for every hypothesis class? The following conjecture asserts that the answer is
yes.

conjecture 29.10  The realizable sample complexity of every hypothesis class
H ⊆ [k]^X is

  m_H(ε, δ) = Õ( Ndim(H) / ε ).

We emphasize that the Õ notation may hide only poly-log factors of ε, δ, and
Ndim(H), but no factor of k.

29.5 Bibliographic Remarks

The Natarajan dimension is due to Natarajan (1989). That paper also established
the Natarajan lemma and the generalization of the fundamental theorem. Gen-
eralizations and sharper versions of the Natarajan lemma are studied in Haussler
& Long (1995). Ben-David, Cesa-Bianchi, Haussler & Long (1995) defined a large
family of notions of dimensions, all of which generalize the VC dimension and
may be used to estimate the sample complexity of multiclass classification.
The calculation of the Natarajan dimension, presented here, together with
calculation of other classes, can be found in Daniely et al. (2012). The example
of good and bad ERMs, as well as conjecture 29.10, are from Daniely et al.
(2011).

29.6  Exercises

1. Let d, k > 0. Show that there exists a binary hypothesis class H_bin of VC dimension
   d such that Ndim(H^{OvA,k}_bin) = d.
2. Prove Lemma 29.6.
3. Prove Natarajan’s lemma.
   Hint: Fix some x_0 ∈ X. For i, j ∈ [k], denote by H_{ij} all the functions f :
   X \ {x_0} → [k] that can be extended to a function in H both by defining
   f(x_0) = i and by defining f(x_0) = j. Show that |H| ≤ |H_{X\{x_0}}| + Σ_{i≠j} |H_{ij}|
   and use induction.
4. Adapt the proof of the binary fundamental theorem and Natarajan’s lemma
   to prove that, for some universal constant C > 0 and for every hypothesis
   class of Natarajan dimension d, the agnostic sample complexity of H is

     m_H(ε, δ) ≤ C ( d log(kd/ε) + log(1/δ) ) / ε².

5. Prove that, for some universal constant C > 0 and for every hypothesis class
   of Natarajan dimension d, the agnostic sample complexity of H is

     m_H(ε, δ) ≥ C ( d + log(1/δ) ) / ε².

   Hint: Deduce it from the binary fundamental theorem.
6. Let H be the binary hypothesis class of (nonhomogenous) halfspaces in R^d.
   The goal of this exercise is to prove that Ndim(H^{OvA,k}) ≥ (d − 1) · (k − 1).
   1. Let H_discrete be the class of all functions f : [k − 1] × [d − 1] → {0, 1} for
      which there exists some i_0 such that, for every j ∈ [d − 1],

        ∀i < i_0, f(i, j) = 1   while   ∀i > i_0, f(i, j) = 0.

      Show that Ndim(H^{OvA,k}_discrete) = (d − 1) · (k − 1).
   2. Show that H_discrete can be realized by H. That is, show that there exists
      a mapping ψ : [k − 1] × [d − 1] → R^d such that

        H_discrete ⊂ { h ∘ ψ : h ∈ H }.

      Hint: You can take ψ(i, j) to be the vector whose j-th coordinate is 1, whose
      last coordinate is i and the rest are zeros.
   3. Conclude that Ndim(H^{OvA,k}) ≥ (d − 1) · (k − 1).
30 Compression Bounds

Throughout the book, we have tried to characterize the notion of learnability


using di↵erent approaches. At first we have shown that the uniform conver-
gence property of a hypothesis class guarantees successful learning. Later on we
introduced the notion of stability and have shown that stable algorithms are
guaranteed to be good learners. Yet there are other properties which may be
sufficient for learning, and in this chapter and its sequel we will introduce two
approaches to this issue: compression bounds and the PAC-Bayes approach.
In this chapter we study compression bounds. Roughly speaking, we shall see
that if a learning algorithm can express the output hypothesis using a small sub-
set of the training set, then the error of the hypothesis on the rest of the examples
estimates its true error. In other words, an algorithm that can “compress” its
output is a good learner.

30.1 Compression Bounds

To motivate the results, let us first consider the following learning protocol.
First, we sample a sequence of k examples denoted T. On the basis of these
examples, we construct a hypothesis denoted h_T. Now we would like to estimate
the performance of h_T so we sample a fresh sequence of m − k examples, denoted
V, and calculate the error of h_T on V. Since V and T are independent, we
immediately get the following from Bernstein’s inequality (see Lemma B.10).

lemma 30.1  Assume that the range of the loss function is [0, 1]. Then,

  P[ L_D(h_T) ≥ L_V(h_T) + √( 2 L_V(h_T) log(1/δ) / |V| ) + 4 log(1/δ) / |V| ] ≤ δ.
To derive this bound, all we needed was independence between T and V .
Therefore, we can redefine the protocol as follows. First, we agree on a sequence
of k indices I = (i1 , . . . , ik ) 2 [m]k . Then, we sample a sequence of m examples
S = (z1 , . . . , zm ). Now, define T = SI = (zi1 , . . . , zik ) and define V to be the
rest of the examples in S. Note that this protocol is equivalent to the protocol
we defined before – hence Lemma 30.1 still holds.
Applying a union bound over the choice of the sequence of indices we obtain
the following theorem.


theorem 30.2  Let k be an integer and let B : Z^k → H be a mapping from
sequences of k examples to the hypothesis class. Let m ≥ 2k be a training set
size and let A : Z^m → H be a learning rule that receives a training sequence S
of size m and returns a hypothesis such that A(S) = B(z_{i_1}, . . . , z_{i_k}) for some
(i_1, . . . , i_k) ∈ [m]^k. Let V = { z_j : j ∉ (i_1, . . . , i_k) } be the set of examples which
were not selected for defining A(S). Then, with probability of at least 1 − δ over
the choice of S we have

  L_D(A(S)) ≤ L_V(A(S)) + √( L_V(A(S)) · 4k log(m/δ) / m ) + 8k log(m/δ) / m.
Proof  For any I ∈ [m]^k let h_I = B(z_{i_1}, . . . , z_{i_k}). Let n = m − k. Combining
Lemma 30.1 with the union bound we have

  P[ ∃I ∈ [m]^k s.t. L_D(h_I) ≥ L_V(h_I) + √( 2 L_V(h_I) log(1/δ) / n ) + 4 log(1/δ) / n ]
    ≤ Σ_{I∈[m]^k}  P[ L_D(h_I) ≥ L_V(h_I) + √( 2 L_V(h_I) log(1/δ) / n ) + 4 log(1/δ) / n ]
    ≤ m^k δ.

Denote δ′ = m^k δ. Using the assumption k ≤ m/2, which implies that n =
m − k ≥ m/2, the above implies that with probability of at least 1 − δ′ we have
that

  L_D(A(S)) ≤ L_V(A(S)) + √( L_V(A(S)) · 4k log(m/δ′) / m ) + 8k log(m/δ′) / m,

which concludes our proof.

As a direct corollary we obtain:

corollary 30.3  Assuming the conditions of Theorem 30.2, and further as-
suming that L_V(A(S)) = 0, then, with probability of at least 1 − δ over the choice
of S we have

  L_D(A(S)) ≤ 8k log(m/δ) / m.
These results motivate the following definition:

definition 30.4 (Compression Scheme) Let H be a hypothesis class of


functions from X to Y and let k be an integer. We say that H has a compression
scheme of size k if the following holds:
For all m there exists A : Z m ! [m]k and B : Z k ! H such that for all h 2 H,
if we feed any training set of the form (x1 , h(x1 )), . . . , (xm , h(xm )) into A and
then feed (xi1 , h(xi1 )), . . . , (xik , h(xik )) into B, where (i1 , . . . , ik ) is the output
of A, then the output of B, denoted h0 , satisfies LS (h0 ) = 0.

It is possible to generalize the definition for unrealizable sequences as follows.



definition 30.5 (Compression Scheme for Unrealizable Sequences)


Let H be a hypothesis class of functions from X to Y and let k be an integer.
We say that H has a compression scheme of size k if the following holds:
For all m there exists A : Z m ! [m]k and B : Z k ! H such that for all h 2 H,
if we feed any training set of the form (x1 , y1 ), . . . , (xm , ym ) into A and then
feed (xi1 , yi1 ), . . . , (xik , yik ) into B, where (i1 , . . . , ik ) is the output of A, then
the output of B, denoted h0 , satisfies LS (h0 )  LS (h).

The following lemma shows that the existence of a compression scheme for
the realizable case also implies the existence of a compression scheme for the
unrealizable case.

lemma 30.6 Let H be a hypothesis class for binary classification, and assume
it has a compression scheme of size k in the realizable case. Then, it has a
compression scheme of size k for the unrealizable case as well.

Proof Consider the following scheme: First, find an ERM hypothesis and denote
it by h. Then, discard all the examples on which h errs. Now, apply the realizable
compression scheme on the examples that have not been removed. The output of
the realizable compression scheme, denoted h0 , must be correct on the examples
that have not been removed. Since h errs on the removed examples it follows
that the error of h0 cannot be larger than the error of h; hence h0 is also an ERM
hypothesis.

30.2 Examples

In the examples that follows, we present compression schemes for several hy-
pothesis classes for binary classification. In light of Lemma 30.6 we focus on the
realizable case. Therefore, to show that a certain hypothesis class has a com-
pression scheme, it is necessary to show that there exist A, B, and k for which
LS (h0 ) = 0.

30.2.1 Axis Aligned Rectangles


Note that this is an uncountable infinite class. We show that there is a simple
compression scheme. Consider the algorithm A that works as follows: For each
dimension, choose the two positive examples with extremal values at this dimen-
sion. Define B to be the function that returns the minimal enclosing rectangle.
Then, for k = 2d, we have that in the realizable case, LS (B(A(S))) = 0.
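This scheme is only a few lines of code. The sketch below (illustrative Python, not from the text; function names and the toy data are made up) implements A (keep, per dimension, the two positive examples with extremal coordinates) and B (return the minimal enclosing axis-aligned rectangle), and checks that the reconstructed rectangle makes no mistakes on the sample in the realizable case.

    import numpy as np

    def compress(X, y):
        """A: indices of at most 2d positive examples with extremal coordinates per dimension."""
        pos = np.where(y == 1)[0]
        keep = set()
        for j in range(X.shape[1]):
            keep.add(pos[np.argmin(X[pos, j])])
            keep.add(pos[np.argmax(X[pos, j])])
        return sorted(keep)

    def decompress(X_kept):
        """B: minimal enclosing rectangle of the kept points, as (lower, upper) corners."""
        return X_kept.min(axis=0), X_kept.max(axis=0)

    def predict(rect, X):
        lo, hi = rect
        return np.where(np.all((X >= lo) & (X <= hi), axis=1), 1, -1)

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.where(np.all(np.abs(X) <= 0.5, axis=1), 1, -1)   # labels given by a true rectangle
    idx = compress(X, y)
    rect = decompress(X[idx])
    print(len(idx), np.all(predict(rect, X) == y))          # at most 2d indices, zero sample error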

30.2.2  Halfspaces

Let X = R^d and consider the class of homogeneous halfspaces, { x ↦ sign(⟨w, x⟩) :
w ∈ R^d }.

A Compression Scheme:
W.l.o.g. assume all labels are positive (otherwise, replace x_i by y_i x_i). The com-
pression scheme we propose is as follows. First, A finds the vector w which is
in the convex hull of {x_1, . . . , x_m} and has minimal norm. Then, it represents it
as a convex combination of d points in the sample (it will be shown later that
this is always possible). The output of A are these d points. The algorithm B
receives these d points and sets w to be the point in their convex hull of minimal
norm.
Next we prove that this indeed is a compression scheme. Since the data is
linearly separable, the convex hull of {x_1, . . . , x_m} does not contain the origin.
Consider the point w in this convex hull closest to the origin. (This is a unique
point which is the Euclidean projection of the origin onto this convex hull.) We
claim that w separates the data.¹ To see this, assume by contradiction that
⟨w, x_i⟩ ≤ 0 for some i. Take w′ = (1 − α)w + αx_i for α = ‖w‖² / (‖x_i‖² + ‖w‖²) ∈ (0, 1).

Then w′ is also in the convex hull and

  ‖w′‖² = (1 − α)² ‖w‖² + α² ‖x_i‖² + 2α(1 − α)⟨w, x_i⟩
        ≤ (1 − α)² ‖w‖² + α² ‖x_i‖²
        = ( ‖x_i‖⁴ ‖w‖² + ‖x_i‖² ‖w‖⁴ ) / ( ‖w‖² + ‖x_i‖² )²
        = ‖x_i‖² ‖w‖² / ( ‖w‖² + ‖x_i‖² )
        = ‖w‖² · 1 / ( ‖w‖²/‖x_i‖² + 1 )
        < ‖w‖²,

which leads to a contradiction.


We have thus shown that w is also an ERM. Finally, since w is in the convex
hull of the examples, we can apply Caratheodory’s theorem to obtain that w is
also in the convex hull of a subset of d + 1 points of the polygon. Furthermore,
the minimality of w implies that w must be on a face of the polygon and this
implies it can be represented as a convex combination of d points.
It remains to show that w is also the projection onto the polygon defined by the
d points. But this must be true: On one hand, the smaller polygon is a subset of
the larger one; hence the projection onto the smaller cannot be smaller in norm.
On the other hand, w itself is a valid solution. The uniqueness of projection
concludes our proof.

30.2.3 Separating Polynomials


Let X = Rd and consider the class x 7! sign(p(x)) where p is a degree r polyno-
mial.
1 It can be shown that w is the direction of the max-margin solution.

Note that p(x) can be rewritten as ⟨w, ψ(x)⟩ where the elements of ψ(x) are all
the monomials of x up to degree r. Therefore, the problem of constructing a com-
pression scheme for p(x) reduces to the problem of constructing a compression
scheme for halfspaces in R^{d′} where d′ = O(d^r).

30.2.4  Separation with Margin

Suppose that a training set is separated with margin γ. The Perceptron algorithm
guarantees to make at most 1/γ² updates before converging to a solution that
makes no mistakes on the entire training set. Hence, we have a compression
scheme of size k ≤ 1/γ².
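This margin-based scheme can be illustrated directly with the Perceptron: the compressed sequence is the multiset of examples on which the algorithm made an update, and B simply replays those updates. A minimal sketch (illustrative Python, not from the text; the synthetic data is made up for the example):

    import numpy as np

    def perceptron_compress(X, y, max_epochs=1000):
        """A: run the Perceptron; return the indices of the examples used for updates."""
        w = np.zeros(X.shape[1])
        updates = []
        for _ in range(max_epochs):
            mistake = False
            for i in range(len(y)):
                if y[i] * np.dot(w, X[i]) <= 0:
                    w += y[i] * X[i]
                    updates.append(i)
                    mistake = True
            if not mistake:
                return updates
        raise RuntimeError("data may not be linearly separable")

    def perceptron_decompress(X, y, updates):
        """B: replay the recorded updates to rebuild the same halfspace."""
        w = np.zeros(X.shape[1])
        for i in updates:
            w += y[i] * X[i]
        return w

    rng = np.random.default_rng(0)
    w_star = np.array([1.0, -2.0])
    X = rng.normal(size=(100, 2))
    y = np.sign(X @ w_star)
    updates = perceptron_compress(X, y)
    w = perceptron_decompress(X, y, updates)
    print(len(updates), np.all(np.sign(X @ w) == y))   # number of updates, zero sample error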

30.3 Bibliographic Remarks

Compression schemes and their relation to learning were introduced by Little-


stone & Warmuth (1986). As we have shown, if a class has a compression scheme
then it is learnable. For binary classification problems, it follows from the funda-
mental theorem of learning that the class has a finite VC dimension. The other
direction, namely, whether every hypothesis class of finite VC dimension has a
compression scheme of finite size, is an open problem posed by Manfred War-
muth and is still open (see also (Floyd 1989, Floyd & Warmuth 1995, Ben-David
& Litman 1998, Livni & Simon 2013).
31 PAC-Bayes

The Minimum Description Length (MDL) and Occam’s razor principles allow a
potentially very large hypothesis class but define a hierarchy over hypotheses and
prefer to choose hypotheses that appear higher in the hierarchy. In this chapter
we describe the PAC-Bayesian approach that further generalizes this idea. In
the PAC-Bayesian approach, one expresses the prior knowledge by defining prior
distribution over the hypothesis class.

31.1 PAC-Bayes Bounds

As in the MDL paradigm, we define a hierarchy over hypotheses in our class H.


Now, the hierarchy takes the form of a prior distribution over H. That is, we
assign a probability (or density if H is continuous) P (h) 0 for each h 2 H
and refer to P (h) as the prior score of h. Following the Bayesian reasoning
approach, the output of the learning algorithm is not necessarily a single hy-
pothesis. Instead, the learning process defines a posterior probability over H,
which we denote by Q. In the context of a supervised learning problem, where
H contains functions from X to Y, one can think of Q as defining a randomized
prediction rule as follows. Whenever we get a new instance x, we randomly pick
a hypothesis h 2 H according to Q and predict h(x). We define the loss of Q on
an example z to be
def
`(Q, z) = E [`(h, z)].
h⇠Q

By the linearity of expectation, the generalization loss and training loss of Q can
be written as
def def
LD (Q) = E [LD (h)] and LS (Q) = E [LS (h)].
h⇠Q h⇠Q

The following theorem tells us that the di↵erence between the generalization
loss and the empirical loss of a posterior Q is bounded by an expression that
depends on the Kullback-Leibler divergence between Q and the prior distribu-
tion P . The Kullback-Leibler is a natural measure of the distance between two
distributions. The theorem suggests that if we would like to minimize the gen-
eralization loss of Q, we should jointly minimize both the empirical loss of Q
and the Kullback-Leibler distance between Q and the prior distribution. We will


later show how in some cases this idea leads to the regularized risk minimization
principle.

theorem 31.1  Let D be an arbitrary distribution over an example domain Z.
Let H be a hypothesis class and let ℓ : H × Z → [0, 1] be a loss function. Let P be
a prior distribution over H and let δ ∈ (0, 1). Then, with probability of at least
1 − δ over the choice of an i.i.d. training set S = {z_1, . . . , z_m} sampled according
to D, for all distributions Q over H (even such that depend on S), we have

  L_D(Q) ≤ L_S(Q) + √( ( D(Q‖P) + ln(m/δ) ) / ( 2(m − 1) ) ),

where

  D(Q‖P) := E_{h∼Q}[ ln( Q(h)/P(h) ) ]

is the Kullback-Leibler divergence.

Proof  For any function f(S), using Markov’s inequality:

  P_S[ f(S) ≥ ε ] = P_S[ e^{f(S)} ≥ e^ε ] ≤ E_S[ e^{f(S)} ] / e^ε.          (31.1)

Let Δ(h) = L_D(h) − L_S(h). We will apply Equation (31.1) with the function

  f(S) = sup_Q ( 2(m − 1) E_{h∼Q}[ (Δ(h))² ] − D(Q‖P) ).

We now turn to bound E_S[e^{f(S)}]. The main trick is to upper bound f(S) by
using an expression that does not depend on Q but rather depends on the prior
probability P. To do so, fix some S and note that from the definition of D(Q‖P)
we get that for all Q,

  2(m − 1) E_{h∼Q}[ (Δ(h))² ] − D(Q‖P) = E_{h∼Q}[ ln( e^{2(m−1)Δ(h)²} P(h)/Q(h) ) ]
                                        ≤ ln E_{h∼Q}[ e^{2(m−1)Δ(h)²} P(h)/Q(h) ]
                                        = ln E_{h∼P}[ e^{2(m−1)Δ(h)²} ],          (31.2)

where the inequality follows from Jensen’s inequality and the concavity of the
log function. Therefore,

  E_S[ e^{f(S)} ] ≤ E_S E_{h∼P}[ e^{2(m−1)Δ(h)²} ].          (31.3)

The advantage of the expression on the right-hand side stems from the fact that
we can switch the order of expectations (because P is a prior that does not
depend on S), which yields

  E_S[ e^{f(S)} ] ≤ E_{h∼P} E_S[ e^{2(m−1)Δ(h)²} ].          (31.4)

Next, we claim that for all h we have E_S[ e^{2(m−1)Δ(h)²} ] ≤ m. To do so, recall that
Hoeffding’s inequality tells us that

  P_S[ Δ(h) ≥ ε ] ≤ e^{−2mε²}.

This implies that E_S[ e^{2(m−1)Δ(h)²} ] ≤ m (see Exercise 1). Combining this with
Equation (31.4) and plugging into Equation (31.1) we get

  P_S[ f(S) ≥ ε ] ≤ m / e^ε.          (31.5)

Denote the right-hand side of the above δ, thus ε = ln(m/δ), and we therefore
obtain that with probability of at least 1 − δ we have that for all Q,

  2(m − 1) E_{h∼Q}[ (Δ(h))² ] − D(Q‖P) ≤ ε = ln(m/δ).

Rearranging the inequality and using Jensen’s inequality again (the function x²
is convex) we conclude that

  ( E_{h∼Q}[ Δ(h) ] )² ≤ E_{h∼Q}[ (Δ(h))² ] ≤ ( ln(m/δ) + D(Q‖P) ) / ( 2(m − 1) ).          (31.6)

Remark 31.1  (Regularization)  The PAC-Bayes bound leads to the following
learning rule:

  Given a prior P, return a posterior Q that minimizes the function

    L_S(Q) + √( ( D(Q‖P) + ln(m/δ) ) / ( 2(m − 1) ) ).          (31.7)

This rule is similar to the regularized risk minimization principle. That is, we
jointly minimize the empirical loss of Q on the sample and the Kullback-Leibler
“distance” between Q and P.
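For a finite hypothesis class the objective in Equation (31.7) is easy to evaluate. The sketch below (illustrative Python, not from the text; the losses and distributions are toy numbers) computes the right-hand side of (31.7) for given empirical losses, a uniform prior and an arbitrary posterior.

    import numpy as np

    def pac_bayes_bound(train_losses, Q, P, m, delta):
        """Right-hand side of the PAC-Bayes bound for a finite hypothesis class."""
        Q, P = np.asarray(Q, float), np.asarray(P, float)
        kl = np.sum(np.where(Q > 0, Q * np.log(Q / P), 0.0))   # D(Q||P)
        ls_q = np.dot(Q, train_losses)                          # L_S(Q)
        return ls_q + np.sqrt((kl + np.log(m / delta)) / (2 * (m - 1)))

    # Toy numbers: 4 hypotheses, uniform prior, posterior concentrated on the best one.
    train_losses = np.array([0.10, 0.25, 0.30, 0.40])
    P = np.full(4, 0.25)
    Q = np.array([0.97, 0.01, 0.01, 0.01])
    print(pac_bayes_bound(train_losses, Q, P, m=1000, delta=0.05))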

31.2 Bibliographic Remarks

PAC-Bayes bounds were first introduced by McAllester (1998). See also (McAllester
1999, McAllester 2003, Seeger 2003, Langford & Shawe-Taylor 2003, Langford
2006).

31.3  Exercises

1. Let X be a random variable that satisfies P[X ≥ ε] ≤ e^{−2mε²}. Prove that

     E[ e^{2(m−1)X²} ] ≤ m.

2. • Suppose that H is a finite hypothesis class, set the prior to be uniform over
     H, and set the posterior to be Q(h_S) = 1 for some h_S and Q(h) = 0 for
     all other h ∈ H. Show that

       L_D(h_S) ≤ L_S(h_S) + √( ( ln(|H|) + ln(m/δ) ) / ( 2(m − 1) ) ).

     Compare to the bounds we derived using uniform convergence.
   • Derive a bound similar to the Occam bound given in Chapter 7 using the
     PAC-Bayes bound.
Appendix A Technical Lemmas

lemma A.1  Let a > 0. Then: x ≥ 2a log(a) ⇒ x ≥ a log(x). It follows that a
necessary condition for the inequality x < a log(x) to hold is that x < 2a log(a).

Proof  First note that for a ∈ (0, √e] the inequality x ≥ a log(x) holds uncon-
ditionally and therefore the claim is trivial. From now on, assume that a > √e.
Consider the function f(x) = x − a log(x). The derivative is f′(x) = 1 − a/x.
Thus, for x > a the derivative is positive and the function increases. In addition,

  f(2a log(a)) = 2a log(a) − a log(2a log(a))
               = 2a log(a) − a log(a) − a log(2 log(a))
               = a log(a) − a log(2 log(a)).

Since a − 2 log(a) > 0 for all a > 0, the proof follows.

lemma A.2  Let a ≥ 1 and b > 0. Then: x ≥ 4a log(2a) + 2b ⇒ x ≥ a log(x) + b.

Proof  It suffices to prove that x ≥ 4a log(2a) + 2b implies that both x ≥
2a log(x) and x ≥ 2b. Since we assume a ≥ 1 we clearly have that x ≥ 2b.
In addition, since b > 0 we have that x ≥ 4a log(2a) which using Lemma A.1
implies that x ≥ 2a log(x). This concludes our proof.

lemma A.3  Let X be a random variable and x_0 ∈ R be a scalar and assume
that there exists a > 0 such that for all t ≥ 0 we have P[|X − x_0| > t] ≤ 2e^{−t²/a²}.
Then, E[|X − x_0|] ≤ 4a.

Proof  For all i = 0, 1, 2, . . . denote t_i = a·i. Since t_i is monotonically increasing
we have that E[|X − x_0|] is at most Σ_{i=1}^∞ t_i P[|X − x_0| > t_{i−1}]. Combining this
with the assumption in the lemma we get that E[|X − x_0|] ≤ 2a Σ_{i=1}^∞ i e^{−(i−1)²}.
The proof now follows from the inequalities

  Σ_{i=1}^∞ i e^{−(i−1)²} ≤ Σ_{i=1}^5 i e^{−(i−1)²} + ∫_5^∞ x e^{−(x−1)²} dx < 1.8 + 10^{−7} < 2.

lemma A.4  Let X be a random variable and x_0 ∈ R be a scalar and assume
that there exists a > 0 and b ≥ e such that for all t ≥ 0 we have P[|X − x_0| > t] ≤ 2b e^{−t²/a²}.
Then, E[|X − x_0|] ≤ a( 2 + √(log(b)) ).


Proof  For all i = 0, 1, 2, . . . denote t_i = a( i + √(log(b)) ). Since t_i is monotonically
increasing we have that

  E[|X − x_0|] ≤ a√(log(b)) + Σ_{i=1}^∞ t_i P[|X − x_0| > t_{i−1}].

Using the assumption in the lemma we have

  Σ_{i=1}^∞ t_i P[|X − x_0| > t_{i−1}] ≤ 2ab Σ_{i=1}^∞ ( i + √(log(b)) ) e^{−(i−1+√(log(b)))²}
    ≤ 2ab ∫_{1+√(log(b))}^∞ x e^{−(x−1)²} dx
    = 2ab ∫_{√(log(b))}^∞ (y + 1) e^{−y²} dy
    ≤ 4ab ∫_{√(log(b))}^∞ y e^{−y²} dy
    = 2ab e^{−log(b)}
    = 2ab/b = 2a.

Combining the preceding inequalities we conclude our proof.

lemma A.5  Let m, d be two positive integers such that d ≤ m − 2. Then,

  Σ_{k=0}^d (m choose k) ≤ (em/d)^d.

Proof We prove the claim by induction. For d = 1 the left-hand side equals
1 + m while the right-hand side equals em; hence the claim is true. Assume that
the claim holds for d and let us prove it for d + 1. By the induction assumption
we have

d+1 ✓ ◆
X ⇣ e m ⌘d ◆✓
m m
 +
k d d+1
k=0
✓ ◆d !
⇣ e m ⌘d d m(m 1)(m 2) · · · (m d)
= 1+
d em (d + 1)d!
✓ ◆d !
⇣ em ⌘d d (m d)
 1+ .
d e (d + 1)d!

Using Stirling’s approximation we further have that


✓ ◆d !
⇣ e m ⌘d d (m d)
 1+ p
d e (d + 1) 2⇡d(d/e)d
⇣ e m ⌘d ✓ m d

= 1+ p
d 2⇡d(d + 1)
⇣ e m ⌘d d + 1 + (m d)/p2⇡d
= ·
d d+1
⇣ e m ⌘d d + 1 + (m d)/2
 ·
d d+1
⇣ e m ⌘d d/2 + 1 + m/2
= ·
d d+1
⇣ e m ⌘d m
 · ,
d d+1
where in the last inequality we used the assumption that d  m 2. On the
other hand,
✓ ◆d+1 ⇣ ✓ ◆d
em e m ⌘d em d
= · ·
d+1 d d+1 d+1
⇣ e m ⌘d em 1
= · ·
d d + 1 (1 + 1/d)d
⇣ e m ⌘d em 1
· ·
d d+1 e
⇣ e m ⌘d m
= · ,
d d+1
which proves our inductive argument.
lemma A.6  For all a ∈ R we have

  ( e^a + e^{−a} ) / 2 ≤ e^{a²/2}.

Proof  Observe that

  e^a = Σ_{n=0}^∞ a^n / n!.

Therefore,

  ( e^a + e^{−a} ) / 2 = Σ_{n=0}^∞ a^{2n} / (2n)!,

and

  e^{a²/2} = Σ_{n=0}^∞ a^{2n} / (2^n n!).

Observing that (2n)! ≥ 2^n n! for every n ≥ 0 we conclude our proof.


Appendix B Measure Concentration

Let Z_1, . . . , Z_m be an i.i.d. sequence of random variables and let µ be their mean.
The strong law of large numbers states that when m tends to infinity, the em-
pirical average, (1/m) Σ_{i=1}^m Z_i, converges to the expected value µ, with probability
1. Measure concentration inequalities quantify the deviation of the empirical
average from the expectation when m is finite.

B.1 Markov’s Inequality

We start with an inequality which is called Markov’s inequality. Let Z be a
nonnegative random variable. The expectation of Z can be written as follows:

  E[Z] = ∫_0^∞ P[Z ≥ x] dx.          (B.1)

Since P[Z ≥ x] is monotonically nonincreasing we obtain

  ∀a ≥ 0,  E[Z] ≥ ∫_0^a P[Z ≥ x] dx ≥ ∫_0^a P[Z ≥ a] dx = a P[Z ≥ a].          (B.2)

Rearranging the inequality yields Markov’s inequality:

  ∀a > 0,  P[Z ≥ a] ≤ E[Z] / a.          (B.3)
For random variables that take values in [0, 1], we can derive from Markov’s
inequality the following.

lemma B.1  Let Z be a random variable that takes values in [0, 1]. Assume that
E[Z] = µ. Then, for any a ∈ (0, 1),

  P[Z > 1 − a] ≥ ( µ − (1 − a) ) / a.

This also implies that for every a ∈ (0, 1),

  P[Z > a] ≥ ( µ − a ) / ( 1 − a ) ≥ µ − a.

Proof  Let Y = 1 − Z. Then Y is a nonnegative random variable with E[Y] =
1 − E[Z] = 1 − µ. Applying Markov’s inequality on Y we obtain

  P[Z ≤ 1 − a] = P[1 − Z ≥ a] = P[Y ≥ a] ≤ E[Y] / a = (1 − µ) / a.


Therefore,

  P[Z > 1 − a] ≥ 1 − (1 − µ)/a = ( a + µ − 1 ) / a.

B.2 Chebyshev’s Inequality

Applying Markov’s inequality on the random variable (Z − E[Z])² we obtain
Chebyshev’s inequality:

  ∀a > 0,  P[ |Z − E[Z]| ≥ a ] = P[ (Z − E[Z])² ≥ a² ] ≤ Var[Z] / a²,          (B.4)

where Var[Z] = E[(Z − E[Z])²] is the variance of Z.
Consider the random variable (1/m) Σ_{i=1}^m Z_i. Since Z_1, . . . , Z_m are i.i.d. it is easy
to verify that

  Var[ (1/m) Σ_{i=1}^m Z_i ] = Var[Z_1] / m.
m i=1 m

Applying Chebyshev’s inequality, we obtain the following:

lemma B.2  Let Z_1, . . . , Z_m be a sequence of i.i.d. random variables and assume
that E[Z_1] = µ and Var[Z_1] ≤ 1. Then, for any δ ∈ (0, 1), with probability of at
least 1 − δ we have

  | (1/m) Σ_{i=1}^m Z_i − µ | ≤ √( 1 / (δ m) ).

Proof  Applying Chebyshev’s inequality we obtain that for all a > 0

  P[ | (1/m) Σ_{i=1}^m Z_i − µ | > a ] ≤ Var[Z_1] / (m a²) ≤ 1 / (m a²).

The proof follows by denoting the right-hand side δ and solving for a.

The deviation between the empirical average and the mean given previously
decreases polynomially with m. It is possible to obtain a significantly faster
decrease. In the sections that follow we derive bounds that decrease exponentially
fast.

B.3  Chernoff’s Bounds

Let Z_1, . . . , Z_m be independent Bernoulli variables where for every i, P[Z_i = 1] =
p_i and P[Z_i = 0] = 1 − p_i. Let p = Σ_{i=1}^m p_i and let Z = Σ_{i=1}^m Z_i. Using the

monotonicity of the exponent function and Markov’s inequality, we have that for
every t > 0

E[etZ ]
P[Z > (1 + )p] = P[etZ > et(1+ )p
] . (B.5)
e(1+ )tp

Next,
P Y
E[etZ ] = E[et i Zi
] = E[ etZi ]
i
Y
tZi
= E[e ] by independence
i
Y
= pi et + (1 pi )e0
i
Y
= 1 + pi (et 1)
i
Y t
 epi (e 1)
using 1 + x  ex
i
P
pi (et 1)
=e i

t
= e(e 1)p
.

Combining the above with Equation (B.5) and choosing t = log(1 + ) we obtain

lemma B.3 Let Z1 , . . . , Zm be independent Bernoulli variables where for every


Pm Pm
i, P[Zi = 1] = pi and P[Zi = 0] = 1 pi . Let p = i=1 pi and let Z = i=1 Zi .
Then, for any > 0,

h( ) p
P[Z > (1 + )p]  e ,

where

h( ) = (1 + ) log(1 + ) .

Using the inequality h(a) a2 /(2 + 2a/3) we obtain

lemma B.4 Using the notation of Lemma B.3 we also have

2
p
P[Z > (1 + )p]  e 2+2 /3 .

For the other direction, we apply similar calculations:

  P[Z < (1 − δ)p] = P[ −Z > −(1 − δ)p ] = P[ e^{−tZ} > e^{−t(1−δ)p} ] ≤ E[e^{−tZ}] / e^{−(1−δ)tp},          (B.6)

and,

  E[e^{−tZ}] = E[ e^{−t Σ_i Z_i} ] = E[ Π_i e^{−tZ_i} ]
             = Π_i E[ e^{−tZ_i} ]                     (by independence)
             = Π_i ( 1 + p_i (e^{−t} − 1) )
             ≤ Π_i e^{p_i (e^{−t} − 1)}               (using 1 + x ≤ e^x)
             = e^{(e^{−t} − 1) p}.

Setting t = log(1/(1 − δ)) yields

  P[Z < (1 − δ)p] ≤ e^{−δp} / e^{(1−δ) log(1−δ) p} = e^{−p h(−δ)}.

It is easy to verify that h(−δ) ≥ h(δ) and hence

lemma B.5  Using the notation of Lemma B.3 we also have

  P[Z < (1 − δ)p] ≤ e^{−p h(−δ)} ≤ e^{−p h(δ)} ≤ e^{− δ² p / (2 + 2δ/3)}.

B.4  Hoeffding’s Inequality

lemma B.6  (Hoeffding’s inequality)  Let Z_1, . . . , Z_m be a sequence of i.i.d.
random variables and let Z̄ = (1/m) Σ_{i=1}^m Z_i. Assume that E[Z̄] = µ and P[a ≤
Z_i ≤ b] = 1 for every i. Then, for any ε > 0

  P[ | (1/m) Σ_{i=1}^m Z_i − µ | > ε ] ≤ 2 exp( −2 m ε² / (b − a)² ).
Proof  Denote X_i = Z_i − E[Z_i] and X̄ = (1/m) Σ_i X_i. Using the monotonicity of
the exponent function and Markov’s inequality, we have that for every λ > 0
and ε > 0,

  P[X̄ ≥ ε] = P[ e^{λX̄} ≥ e^{λε} ] ≤ e^{−λε} E[ e^{λX̄} ].

Using the independence assumption we also have

  E[ e^{λX̄} ] = E[ Π_i e^{λX_i/m} ] = Π_i E[ e^{λX_i/m} ].

By Hoeffding’s lemma (Lemma B.7 later), for every i we have

  E[ e^{λX_i/m} ] ≤ e^{ λ²(b−a)² / (8m²) }.

Therefore,

  P[X̄ ≥ ε] ≤ e^{−λε} Π_i e^{ λ²(b−a)² / (8m²) } = e^{ −λε + λ²(b−a)²/(8m) }.

Setting λ = 4mε/(b − a)² we obtain

  P[X̄ ≥ ε] ≤ e^{ −2mε² / (b−a)² }.

Applying the same arguments on the variable −X̄ we obtain that P[X̄ ≤ −ε] ≤
e^{−2mε²/(b−a)²}. The theorem follows by applying the union bound on the two cases.

lemma B.7  (Hoeffding’s lemma)  Let X be a random variable that takes values
in the interval [a, b] and such that E[X] = 0. Then, for every λ > 0,

  E[ e^{λX} ] ≤ e^{ λ²(b−a)²/8 }.

Proof  Since f(x) = e^{λx} is a convex function, we have that for every α ∈ (0, 1),
and x ∈ [a, b],

  f(x) ≤ α f(a) + (1 − α) f(b).

Setting α = (b − x)/(b − a) ∈ [0, 1] yields

  e^{λx} ≤ ((b − x)/(b − a)) e^{λa} + ((x − a)/(b − a)) e^{λb}.

Taking the expectation, we obtain that

  E[ e^{λX} ] ≤ ((b − E[X])/(b − a)) e^{λa} + ((E[X] − a)/(b − a)) e^{λb} = (b/(b − a)) e^{λa} − (a/(b − a)) e^{λb},

where we used the fact that E[X] = 0. Denote h = λ(b − a), p = −a/(b − a), and
L(h) = −hp + log(1 − p + p e^h). Then, the expression on the right-hand side of
the above can be rewritten as e^{L(h)}. Therefore, to conclude our proof it suffices
to show that L(h) ≤ h²/8. This follows from Taylor’s theorem using the facts:
L(0) = L′(0) = 0 and L″(h) ≤ 1/4 for all h.
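Hoeffding’s inequality is easy to check by simulation. The sketch below (illustrative Python, not from the text; the parameters are arbitrary) compares the empirical tail probability of the average of Bernoulli variables with the bound 2 exp(−2mε²).

    import numpy as np

    rng = np.random.default_rng(0)
    m, p, eps, trials = 100, 0.5, 0.1, 100_000
    means = rng.binomial(m, p, size=trials) / m          # i.i.d. averages of Bernoulli(p)
    empirical = np.mean(np.abs(means - p) > eps)
    bound = 2 * np.exp(-2 * m * eps**2)                  # (b - a) = 1 for Bernoulli variables
    print(f"empirical tail {empirical:.4f}  <=  Hoeffding bound {bound:.4f}")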

B.5 Bennet’s and Bernstein’s Inequalities

Bennet’s and Bernstein’s inequalities are similar to Chernoff’s bounds, but they
hold for any sequence of independent random variables. We state the inequalities
without proof, which can be found, for example, in Cesa-Bianchi & Lugosi (2006).

lemma B.8  (Bennet’s inequality)  Let Z_1, . . . , Z_m be independent random vari-
ables with zero mean, and assume that Z_i ≤ 1 with probability 1. Let

  σ² ≥ (1/m) Σ_{i=1}^m E[Z_i²].

Then for all ε > 0,

  P[ Σ_{i=1}^m Z_i > ε ] ≤ e^{ −m σ² h( ε / (m σ²) ) },

where

  h(a) = (1 + a) log(1 + a) − a.
By using the inequality h(a) ≥ a²/(2 + 2a/3) it is possible to derive the
following:

lemma B.9  (Bernstein’s inequality)  Let Z_1, . . . , Z_m be i.i.d. random variables
with a zero mean. If for all i, P(|Z_i| < M) = 1, then for all t > 0 :

  P[ Σ_{i=1}^m Z_i > t ] ≤ exp( − (t²/2) / ( Σ E[Z_j²] + M t/3 ) ).

B.5.1  Application

Bernstein’s inequality can be used to interpolate between the rate 1/ε we derived
for PAC learning in the realizable case (in Chapter 2) and the rate 1/ε² we derived
for the unrealizable case (in Chapter 4).

lemma B.10  Let ℓ : H × Z → [0, 1] be a loss function. Let D be an arbitrary
distribution over Z. Fix some h. Then, for any δ ∈ (0, 1) we have

  1.  P_{S∼D^m}[ L_S(h) ≥ L_D(h) + √( 2 L_D(h) log(1/δ) / m ) + 2 log(1/δ) / (3m) ] ≤ δ

  2.  P_{S∼D^m}[ L_D(h) ≥ L_S(h) + √( 2 L_S(h) log(1/δ) / m ) + 4 log(1/δ) / m ] ≤ δ

Proof  Define random variables α_1, . . . , α_m s.t. α_i = ℓ(h, z_i) − L_D(h). Note that
E[α_i] = 0 and that

  E[α_i²] = E[ℓ(h, z_i)²] − 2 L_D(h) E[ℓ(h, z_i)] + L_D(h)²
          = E[ℓ(h, z_i)²] − L_D(h)²
          ≤ E[ℓ(h, z_i)²]
          ≤ E[ℓ(h, z_i)] = L_D(h),

where in the last inequality we used the fact that ℓ(h, z_i) ∈ [0, 1] and thus
ℓ(h, z_i)² ≤ ℓ(h, z_i). Applying Bernstein’s inequality over the α_i’s yields

  P[ Σ_{i=1}^m α_i > t ] ≤ exp( − (t²/2) / ( Σ E[α_j²] + t/3 ) )
                        ≤ exp( − (t²/2) / ( m L_D(h) + t/3 ) ) := δ.

Solving for t yields

  t²/2 = log(1/δ) ( m L_D(h) + t/3 )
  ⇒  t²/2 − (log(1/δ)/3) t − log(1/δ) m L_D(h) = 0
  ⇒  t = log(1/δ)/3 + √( log²(1/δ)/3² + 2 log(1/δ) m L_D(h) )
       ≤ (2/3) log(1/δ) + √( 2 log(1/δ) m L_D(h) ).

Since (1/m) Σ_i α_i = L_S(h) − L_D(h), it follows that with probability of at least 1 − δ,

  L_S(h) − L_D(h) ≤ 2 log(1/δ) / (3m) + √( 2 log(1/δ) L_D(h) / m ),

which proves the first inequality. The second part of the lemma follows in a
similar way.

B.6  Slud’s Inequality

Let X be a (m, p) binomial variable. That is, X = Σ_{i=1}^m Z_i, where each Z_i is 1
with probability p and 0 with probability 1 − p. Assume that p = (1 − ε)/2. Slud’s
inequality (Slud 1977) tells us that P[X ≥ m/2] is lower bounded by the proba-
bility that a normal variable will be greater than or equal to √( m ε² / (1 − ε²) ). The
following lemma follows by standard tail bounds for the normal distribution.

lemma B.11  Let X be a (m, p) binomial variable and assume that p = (1 − ε)/2.
Then,

  P[X ≥ m/2] ≥ (1/2)( 1 − √( 1 − exp( −m ε² / (1 − ε²) ) ) ).

B.7  Concentration of χ² Variables

Let X_1, . . . , X_k be k independent normally distributed random variables. That
is, for all i, X_i ∼ N(0, 1). The distribution of the random variable X_i² is called
χ² (chi square) and the distribution of the random variable Z = X_1² + · · · + X_k²
is called χ²_k (chi square with k degrees of freedom). Clearly, E[X_i²] = 1 and
E[Z] = k. The following lemma states that a χ²_k variable is concentrated around its mean.
lemma B.12  Let Z ∼ χ²_k. Then, for all ε > 0 we have

  P[Z ≤ (1 − ε)k] ≤ e^{−ε²k/6},

and for all ε ∈ (0, 3) we have

  P[Z ≥ (1 + ε)k] ≤ e^{−ε²k/6}.

Finally, for all ε ∈ (0, 3),

  P[ (1 − ε)k ≤ Z ≤ (1 + ε)k ] ≥ 1 − 2 e^{−ε²k/6}.
Pk
Proof Let us write Z = i=1 Xi2 where Xi ⇠ N (0, 1). To prove both bounds
we use Cherno↵’s bounding method. For the first inequality, we first bound
2 2
E[e X1 ], where > 0 will be specified later. Since e a  1 a + a2 for all a 0
we have that
2
X12
E[e ]  1 E[X12 ] + E[X14 ].
2
Using the well known equalities, E[X12 ] = 1 and E[X14 ] = 3, and the fact that
1 a  e a we obtain that
X12 2 + 32 2
E[e ]  1 + 3
2 e .
Now, applying Cherno↵’s bounding method we get that
h i
P[ Z (1 ✏)k] = P e Z e (1 ✏)k
⇥ ⇤
 e(1 ✏)k E e Z
⇣ h 2
i⌘k
= e(1 ✏)k E e X1
2
k+ 32
 e(1 ✏)k
e k
3 2
✏k + 2 k
=e .
Choose = ✏/3 we obtain the first inequality stated in the lemma.
For the second inequality, we use a known closed form expression for the
moment generating function of a 2k distributed random variable:
h 2
i
8 < 12 , E e Z = (1 2 ) k/2 . (B.7)

On the basis of the equation and using Cherno↵’s bounding method we have
h i
P[Z (1 + ✏)k)] = P e Z e(1+✏)k
⇥ ⇤
 e (1+✏)k E e Z
(1+✏)k k/2
=e (1 2 )
(1+✏)k
e ek = e ✏k
,
where the last inequality occurs because (1 a)  e a . Setting = ✏/6 (which
is in (0, 1/2) by our assumption) we obtain the second inequality stated in the
lemma.
Finally, the last inequality follows from the first two inequalities and the union
bound.
Appendix C Linear Algebra

C.1 Basic Definitions

In this chapter we only deal with linear algebra over finite dimensional Euclidean
spaces. We refer to vectors as column vectors.
Given two d dimensional vectors u, v ∈ R^d, their inner product is

  ⟨u, v⟩ = Σ_{i=1}^d u_i v_i.

The Euclidean norm (a.k.a. the ℓ₂ norm) is ‖u‖ = √(⟨u, u⟩). We also use the ℓ₁
norm, ‖u‖₁ = Σ_{i=1}^d |u_i|, and the ℓ∞ norm ‖u‖∞ = max_i |u_i|.
A subspace of R^d is a subset of R^d which is closed under addition and scalar
multiplication. The span of a set of vectors u_1, . . . , u_k is the subspace containing
all vectors of the form

  Σ_{i=1}^k α_i u_i,

where for all i, α_i ∈ R.


A set of vectors U = {u1 , . . . , uk } is independent if for every i, ui is not in the
span of u1 , . . . , ui 1 , ui+1 , . . . , uk . We say that U spans a subspace V if V is the
span of the vectors in U . We say that U is a basis of V if it is both independent
and spans V. The dimension of V is the size of a basis of V (and it can be verified
that all bases of V have the same size). We say that U is an orthogonal set if for
all i 6= j, hui , uj i = 0. We say that U is an orthonormal set if it is orthogonal
and if for every i, kui k = 1.
Given a matrix A 2 Rn,d , the range of A is the span of its columns and the
null space of A is the subspace of all vectors that satisfy Au = 0. The rank of A
is the dimension of its range.
The transpose of a matrix A, denoted A> , is the matrix whose (i, j) entry
equals the (j, i) entry of A. We say that A is symmetric if A = A> .


C.2  Eigenvalues and Eigenvectors

Let A ∈ R^{d,d} be a matrix. A non-zero vector u is an eigenvector of A with a
corresponding eigenvalue λ if

  Au = λu.

theorem C.1  (Spectral Decomposition)  If A ∈ R^{d,d} is a symmetric matrix of
rank k, then there exists an orthonormal basis of R^d, u_1, . . . , u_d, such that each
u_i is an eigenvector of A. Furthermore, A can be written as A = Σ_{i=1}^d λ_i u_i u_iᵀ,
where each λ_i is the eigenvalue corresponding to the eigenvector u_i. This can
be written equivalently as A = U D Uᵀ, where the columns of U are the vectors
u_1, . . . , u_d, and D is a diagonal matrix with D_{i,i} = λ_i and for i ≠ j, D_{i,j} = 0.
Finally, the number of λ_i which are nonzero is the rank of the matrix, the
eigenvectors which correspond to the nonzero eigenvalues span the range of A,
and the eigenvectors which correspond to zero eigenvalues span the null space of
A.
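The spectral decomposition can be verified numerically. The sketch below (illustrative Python with NumPy, not from the text) builds a random symmetric matrix, computes its eigendecomposition and checks that A = U D Uᵀ with orthonormal eigenvectors.

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.normal(size=(4, 4))
    A = (B + B.T) / 2                          # a symmetric matrix
    vals, U = np.linalg.eigh(A)                # eigh: orthonormal eigenvectors for symmetric A
    D = np.diag(vals)
    print(np.allclose(A, U @ D @ U.T))         # True: A = U D U^T
    print(np.allclose(U.T @ U, np.eye(4)))     # True: the eigenvectors are orthonormal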

C.3  Positive definite matrices

A symmetric matrix A ∈ R^{d,d} is positive definite if all its eigenvalues are positive.
A is positive semidefinite if all its eigenvalues are nonnegative.

theorem C.2  Let A ∈ R^{d,d} be a symmetric matrix. Then, the following are
equivalent definitions of positive semidefiniteness of A:

• All the eigenvalues of A are nonnegative.
• For every vector u, ⟨u, Au⟩ ≥ 0.
• There exists a matrix B such that A = B Bᵀ.

C.4  Singular Value Decomposition (SVD)

Let A ∈ R^{m,n} be a matrix of rank r. When m ≠ n, the eigenvalue decomposition
given in Theorem C.1 cannot be applied. We will describe another decomposition
of A, which is called Singular Value Decomposition, or SVD for short.
Unit vectors v ∈ R^n and u ∈ R^m are called right and left singular vectors of
A with corresponding singular value σ > 0 if

  Av = σu   and   Aᵀu = σv.

We first show that if we can find r orthonormal singular vectors with positive
singular values, then we can decompose A = U D Vᵀ, with the columns of U and
V containing the left and right singular vectors, and D being a diagonal r × r
matrix with the singular values on its diagonal.

lemma C.3 Let $A \in \mathbb{R}^{m,n}$ be a matrix of rank $r$. Assume that $v_1, \ldots, v_r$ is an
orthonormal set of right singular vectors of $A$, $u_1, \ldots, u_r$ is an orthonormal set
of corresponding left singular vectors of $A$, and $\sigma_1, \ldots, \sigma_r$ are the corresponding
singular values. Then,

$$A = \sum_{i=1}^{r} \sigma_i u_i v_i^\top .$$

It follows that if $U$ is a matrix whose columns are the $u_i$'s, $V$ is a matrix whose
columns are the $v_i$'s, and $D$ is a diagonal matrix with $D_{i,i} = \sigma_i$, then

$$A = U D V^\top .$$
Proof Any right singular vector of $A$ must be in the range of $A^\top$ (otherwise,
the corresponding singular value would have to be zero). Therefore, $v_1, \ldots, v_r$ is an orthonormal
basis of the range of $A^\top$. Let us complete it to an orthonormal basis of $\mathbb{R}^n$ by
adding the vectors $v_{r+1}, \ldots, v_n$. Define $B = \sum_{i=1}^{r} \sigma_i u_i v_i^\top$. It suffices to prove
that for all $i$, $A v_i = B v_i$. Clearly, if $i > r$ then $A v_i = 0$ and $B v_i = 0$ as well.
For $i \le r$ we have

$$B v_i = \sum_{j=1}^{r} \sigma_j u_j v_j^\top v_i = \sigma_i u_i = A v_i ,$$

where the last equality follows from the definition of the singular vectors.
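The decomposition in Lemma C.3 can be checked directly with `numpy.linalg.svd`. The sketch below (illustrative, with an arbitrary assumed rank-2 matrix) rebuilds $A$ as a sum of rank-one terms $\sigma_i u_i v_i^\top$.

```python
import numpy as np

# Arbitrary rank-2 example: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0],
              [1.0, 1.0, 2.0]])
r = np.linalg.matrix_rank(A)          # r = 2

# full_matrices=False gives the "thin" SVD: U is m x min(m,n), Vt is min(m,n) x n.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The sum of the r rank-one terms sigma_i * u_i * v_i^T recovers A.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
assert np.allclose(A, A_rebuilt)
```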


The next lemma relates the singular values of $A$ to the eigenvalues of $A^\top A$
and $A A^\top$.

lemma C.4 $v, u$ are right and left singular vectors of $A$ with singular value $\sigma$
iff $v$ is an eigenvector of $A^\top A$ with corresponding eigenvalue $\sigma^2$ and $u = \sigma^{-1} A v$
is an eigenvector of $A A^\top$ with corresponding eigenvalue $\sigma^2$.

Proof Suppose that $\sigma$ is a singular value of $A$ with $v \in \mathbb{R}^n$ being the corresponding
right singular vector. Then,

$$A^\top A v = \sigma A^\top u = \sigma^2 v .$$

Similarly,

$$A A^\top u = \sigma A v = \sigma^2 u .$$

For the other direction, if $\lambda \ne 0$ is an eigenvalue of $A^\top A$, with $v$ being the
corresponding eigenvector, then $\lambda > 0$ because $A^\top A$ is positive semidefinite. Let
$\sigma = \sqrt{\lambda}$ and $u = \frac{1}{\sigma} A v$. Then,

$$\sigma u = \sqrt{\lambda}\,\frac{A v}{\sqrt{\lambda}} = A v ,$$

and

$$A^\top u = \frac{1}{\sigma} A^\top A v = \frac{\lambda}{\sigma}\, v = \sigma v .$$
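Lemma C.4 is easy to verify numerically. In the illustrative sketch below (arbitrary assumed matrix), the squared singular values returned by `numpy.linalg.svd` match the eigenvalues of $A^\top A$, and each pair $(u_i, v_i)$ satisfies $Av_i = \sigma_i u_i$ and $A^\top u_i = \sigma_i v_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))       # arbitrary full-rank example

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Squared singular values = eigenvalues of A^T A (both sorted in decreasing order).
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s**2, eig_AtA)

# Each singular triple satisfies A v = sigma u and A^T u = sigma v.
for i in range(len(s)):
    u, v, sigma = U[:, i], Vt[i, :], s[i]
    assert np.allclose(A @ v, sigma * u)
    assert np.allclose(A.T @ u, sigma * v)
```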

Finally, we show that if $A$ has rank $r$ then it has $r$ orthonormal singular vectors.

lemma C.5 Let $A \in \mathbb{R}^{m,n}$ with rank $r$. Define the following vectors:

$$
\begin{aligned}
v_1 &= \operatorname*{argmax}_{v \in \mathbb{R}^n : \|v\| = 1} \|Av\| \\
v_2 &= \operatorname*{argmax}_{\substack{v \in \mathbb{R}^n : \|v\| = 1 \\ \langle v, v_1 \rangle = 0}} \|Av\| \\
&\;\;\vdots \\
v_r &= \operatorname*{argmax}_{\substack{v \in \mathbb{R}^n : \|v\| = 1 \\ \forall i < r,\ \langle v, v_i \rangle = 0}} \|Av\|
\end{aligned}
$$

Then, $v_1, \ldots, v_r$ is an orthonormal set of right singular vectors of $A$.

Proof First note that since the rank of $A$ is $r$, the range of $A$ is a subspace of
dimension $r$, and therefore it is easy to verify that for all $i = 1, \ldots, r$, $\|A v_i\| > 0$.
Let $W \in \mathbb{R}^{n,n}$ be an orthonormal matrix obtained by the eigenvalue decomposition
of $A^\top A$, namely, $A^\top A = W D W^\top$, with $D$ being a diagonal matrix with
$D_{1,1} \ge D_{2,2} \ge \cdots \ge 0$. We will show that $v_1, \ldots, v_r$ are eigenvectors of $A^\top A$
that correspond to nonzero eigenvalues, and, hence, using Lemma C.4 it follows
that these are also right singular vectors of $A$. The proof is by induction. For the
basis of the induction, note that any unit vector $v$ can be written as $v = W x$,
for $x = W^\top v$, and note that $\|x\| = 1$. Therefore,

$$\|Av\|^2 = \langle Av, Av \rangle = v^\top A^\top A v = x^\top W^\top W D W^\top W x = x^\top D x = \sum_{i=1}^{n} D_{i,i}\, x_i^2 .$$

Therefore,

$$\max_{v : \|v\| = 1} \|Av\|^2 = \max_{x : \|x\| = 1} \sum_{i=1}^{n} D_{i,i}\, x_i^2 .$$

The solution of the right-hand side is to set $x = (1, 0, \ldots, 0)$, which implies that
$v_1$ is the first eigenvector of $A^\top A$. Since $\|A v_1\| > 0$ it follows that $D_{1,1} > 0$ as
required. For the induction step, assume that the claim holds for some $1 \le t \le r - 1$.
Then, any $v$ which is orthogonal to $v_1, \ldots, v_t$ can be written as $v = W x$
with all of the first $t$ elements of $x$ being zero. It follows that

$$\max_{v : \|v\| = 1,\ \forall i \le t,\ v^\top v_i = 0} \|Av\|^2 = \max_{x : \|x\| = 1} \sum_{i=t+1}^{n} D_{i,i}\, x_i^2 .$$

The solution of the right-hand side is the all-zeros vector except $x_{t+1} = 1$. This
implies that $v_{t+1}$ is the $(t+1)$th column of $W$. Finally, since $\|A v_{t+1}\| > 0$ it
follows that $D_{t+1,t+1} > 0$ as required. This concludes our proof.
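The variational characterization in Lemma C.5 says that the top right singular vector maximizes $\|Av\|$ over unit vectors, the next one maximizes it over the orthogonal complement, and so on. The sketch below (illustrative, arbitrary assumed matrix) checks the first step: the top eigenvector of $A^\top A$ attains $\max_{\|v\|=1}\|Av\|$, which equals the largest singular value, and no random unit vector does better.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))       # arbitrary assumed example

# Top right singular vector via the eigendecomposition of A^T A (cf. Lemma C.4).
lam, W = np.linalg.eigh(A.T @ A)      # eigh returns eigenvalues in ascending order
v1 = W[:, -1]                         # eigenvector of the largest eigenvalue

sigma_max = np.linalg.svd(A, compute_uv=False)[0]

# v1 attains the maximum of ||Av|| over unit vectors ...
assert np.isclose(np.linalg.norm(A @ v1), sigma_max)

# ... and no random unit vector exceeds it (up to numerical tolerance).
for _ in range(1000):
    v = rng.standard_normal(4)
    v /= np.linalg.norm(v)
    assert np.linalg.norm(A @ v) <= sigma_max + 1e-10
```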

corollary C.6 (The SVD theorem) Let $A \in \mathbb{R}^{m,n}$ with rank $r$. Then $A =
U D V^\top$ where $D$ is an $r \times r$ diagonal matrix with the nonzero singular values of $A$
on its diagonal and the columns of $U, V$ are orthonormal left and right singular
vectors of $A$. Furthermore, for all $i$, $D_{i,i}^2$ is an eigenvalue of $A^\top A$, the $i$th
column of $V$ is the corresponding eigenvector of $A^\top A$, and the $i$th column of $U$
is the corresponding eigenvector of $A A^\top$.
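Putting the pieces together, the sketch below (illustrative; the matrix is an arbitrary assumed rank-2 example) checks the full statement of the SVD theorem: the reduced factors reconstruct $A$, and the columns of $V$ and $U$ are eigenvectors of $A^\top A$ and $A A^\top$ with eigenvalues $D_{i,i}^2$.

```python
import numpy as np

# Arbitrary rank-2 example: the last row is the sum of the first two.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0],
              [1.0, 3.0, 1.0, 3.0]])
r = np.linalg.matrix_rank(A)

U_full, s, Vt_full = np.linalg.svd(A, full_matrices=False)
U, D, Vt = U_full[:, :r], np.diag(s[:r]), Vt_full[:r, :]

# A = U D V^T with an r x r diagonal D.
assert np.allclose(A, U @ D @ Vt)

# Rows of Vt (columns of V) are eigenvectors of A^T A with eigenvalue D_ii^2,
# and columns of U are eigenvectors of A A^T with the same eigenvalue.
for i in range(r):
    assert np.allclose(A.T @ A @ Vt[i, :], s[i] ** 2 * Vt[i, :])
    assert np.allclose(A @ A.T @ U[:, i], s[i] ** 2 * U[:, i])
```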